From dwf at cs.toronto.edu Tue Apr 1 04:41:27 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Tue, 1 Apr 2008 04:41:27 -0400
Subject: [SciPy-user] kmeans2 random initialization
In-Reply-To: <3d375d730803312017kb600709n8231dd9669f52414@mail.gmail.com>
References: <20080331141951.3pkufn1h8gswgcsg@webmail.seas.upenn.edu> <3d375d730803312017kb600709n8231dd9669f52414@mail.gmail.com>
Message-ID: <16098159-624A-4046-8D46-7E0E9A9BB7AB@cs.toronto.edu>

On 31-Mar-08, at 11:17 PM, Robert Kern wrote:

> The relevant function is scipy/cluster/vq.py:_krandinit(). It is
> finding the covariance matrix and manually doing a multivariate normal
> sampling. Your data is most likely degenerate and not of full rank.
> It's arguable whether or not this should fail, but
> numpy.random.multivariate_normal() uses the SVD instead of a Cholesky
> decomposition to find the matrix square root, so it sort of ignores
> non-positive definiteness.

This might not be relevant, depending on how the covariance is computed, but one 'gotcha' I've seen with numerical algorithms that assume positive-definiteness is that occasionally floating-point oddities will induce (very slight) non-symmetry of the input matrix, and the algorithm will then choke; it's easily solved by averaging the matrix with its transpose (though there are probably more efficient ways).

David
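In NumPy, the averaging trick David describes is a one-liner. A minimal sketch, assuming cov holds a nearly-symmetric covariance estimate:

import numpy as np

cov = np.cov(np.random.randn(5, 100))  # some empirical covariance estimate
cov = 0.5 * (cov + cov.T)              # average with the transpose to force exact symmetry
assert np.allclose(cov, cov.T)         # now safe for symmetry-assuming routines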
From humufr at yahoo.fr Tue Apr 1 10:41:02 2008
From: humufr at yahoo.fr (humufr at yahoo.fr)
Date: Tue, 1 Apr 2008 10:41:02 -0400
Subject: [SciPy-user] FITS images with header-supplied axes?
Message-ID: <200804011041.02702.humufr@yahoo.fr>

Hello,

I just want to discuss a problem that we Python astronomers have, and Perry's answer is a very good example of it. There are too many projects doing exactly the same thing, essentially because nothing is centralized or, worse, as in the case of pywcs, not advertised. For example, Perry told us about pywcs, developed by STScI (it was the first time I had seen any reference to this project), but Adam spoke about astLib (http://astlib.sourceforge.net/), a different package with exactly the same goal, and I wrote something similar myself (even if it was quick and dirty).

I think it's time to identify the most important tasks that astronomers need and to centralize all the effort in one place. The astropy mailing list is probably a good start, as is the astropy.scipy.org website. Perhaps we can begin by identifying the needs and desiderata. We will have too many, but the most important or urgent ones must be singled out. For example, we clearly needed pywcs, and thanks to STScI we now have it. We also need a package to plot our data, images, etc. Matplotlib is very good but has one major problem (at least from my point of view): it is slow, very slow, for big arrays, and it cannot even produce the image if the image is too big. (If I remember a previous discussion correctly, the problem is mainly due to Agg.)

After that we need to know, if possible, for each project whether someone is already working on it, who, and how to contact him/her, so that interested people can help. It seems that STScI is doing most of the work, but I'm pretty sure that other people are willing to help extend Python into the ideal tool for astronomers.

Just my 2 cents,

Nicolas

On Sunday 30 March 2008 10:54:04, Perry Greenfield wrote:
> On Mar 28, 2008, at 8:28 PM, Keflavich wrote:
> > Is there any plotting routine in scipy / matplotlib that can plot a
> > fits image with correct WCS coordinates on the axes? I know pyfits
> > can load fits files, astLib has routines to interpret header
> > coordinates, and I think you can make the axes different using
> > matplotlib transforms, but is there anything that puts all three
> > together currently available?
> >
> > Thanks,
> > Adam
>
> Well, we (STScI) recently wrapped WCSLIB to obtain a mapping function
> between pixel and sky coordinates for python (you can find it as pywcs in
> astrolib on scipy; that may have been what you were referring to).
>
> But I'm not sure you understand what you are asking for with regard to
> matplotlib. The new transforms stuff should make it much easier to display
> the sky coordinates in the interactive display. The axis labeling is a
> different matter. Suppose your image (let's say it's 1Kx1K for the sake of
> discussion) is rotated 45 degrees with regard to north (either way, it
> doesn't really matter). What would you expect to see for axis labels? I
> don't think it is at all obvious how people would want labeling to be done
> along the edges of the image. I can imagine someone wanting axes or grids
> superimposed on the image itself, but that's not quite the same thing. Do
> you want the image rotated so that it is resampled on to RA and Dec and
> displayed that way?
>
> In any event, no we haven't yet done anything to try to integrate all three
> things. Among other things we wanted to make sure that the api for the wcs
> info was suitable before doing a lot with it (and in the meantime, Mike is
> working on rewriting drizzle which is taking a lot of his time).
>
> Perry
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From heytrent at gmail.com Tue Apr 1 12:36:10 2008
From: heytrent at gmail.com (heytrent at gmail.com)
Date: Tue, 1 Apr 2008 09:36:10 -0700
Subject: [SciPy-user] Real-time plotting and data storage format questions
Message-ID: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com>

Greetings,

A small team of us is developing a new simulation package from the ground up. Our legacy approach relied on MATLAB and other proprietary software. A hope of ours is to be able to shed the use of MATLAB for the analysis of our simulation results and instead use Python with scipy/numpy/matplotlib etc. I've successfully installed and compiled optimized numpy/scipy and all the supporting packages (ATLAS, FFTW, etc).

So far so good.

To the point - I have two questions:

1) We would like to have a "scope" to monitor simulation outputs in real time. We're using one tool that can take data over a tcp/ip port, but it is clunky and only works on a single platform. Does such a thing exist within the Python realm for plotting data in real time?

2) Our simulation creates large (1-4 GB) data sets. Since we're writing this simulation ourselves (C++) we can save the data in any format. Does anyone have a suggestion for a specific format or API that's been found to be optimal in terms of memory usage and ability to import into Python for analysis and plotting?

Thank you for any suggestions. We're still new with Python, so I apologize if these questions seem mundane.
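On question 1, matplotlib itself can serve as a crude real-time scope by updating an existing line in place. A minimal sketch, assuming interactive mode and a hypothetical get_sample() that pulls one value off the tcp/ip feed:

import numpy, pylab

pylab.ion()                    # interactive mode: draw() returns without blocking
n = 200
buf = numpy.zeros(n)           # rolling window of the last n samples
line, = pylab.plot(buf)        # keep the line handle for fast in-place updates
pylab.ylim(-1.0, 1.0)

while True:
    buf[:-1] = buf[1:]         # scroll the window left by one sample
    buf[-1] = get_sample()     # hypothetical: fetch one value from the socket
    line.set_ydata(buf)
    pylab.draw()               # redraw the updated line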
From aisaac at american.edu Tue Apr 1 13:01:18 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 1 Apr 2008 13:01:18 -0400
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com>
Message-ID:

1. Will this project produce code that you will be sharing? If so, please post updates to this list!

2. Is the following of interest? Last example at Code at Comments at:

Cheers,
Alan Isaac

From doutriaux1 at llnl.gov Tue Apr 1 13:06:35 2008
From: doutriaux1 at llnl.gov (Charles Doutriaux)
Date: Tue, 01 Apr 2008 10:06:35 -0700
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com>
Message-ID: <47F26B9B.4050809@llnl.gov>

Hi there,

For the data format I strongly recommend NetCDF, and especially NetCDF4 (which allows for compression). In addition to NetCDF4, I strongly recommend you look at the CF conventions, a standard that allows software to learn about the data you're storing; see: http://cf-pcmdi.llnl.gov/

As for an all-in-one Python-based package to store/analyze/plot your data, you can look at CDAT (which happens to redistribute scipy).

Hope this helps,

C.

heytrent at gmail.com wrote:
> Greetings,
>
> A small team of us is developing a new simulation package from the
> ground up. Our legacy approach relied on MATLAB and other proprietary
> software. A hope of ours is to be able to shed the use of MATLAB for
> the analysis of our simulation results and instead use Python with
> scipy/numpy/matplotlib etc. I've successfully installed and compiled
> optimized numpy/scipy and all the supporting packages (ATLAS, FFTW, etc).
>
> So far so good.
>
> To the point - I have two questions:
>
> 1) We would like to have a "scope" to monitor simulation outputs in
> real time. We're using one tool that can take data over a tcp/ip port,
> but it is clunky and only works on a single platform. Does such a thing
> exist within the Python realm for plotting data in real time?
>
> 2) Our simulation creates large (1-4 GB) data sets. Since we're
> writing this simulation ourselves (C++) we can save the data in any
> format. Does anyone have a suggestion for a specific format or API
> that's been found to be optimal in terms of memory usage and ability
> to import into Python for analysis and plotting?
>
> Thank you for any suggestions. We're still new with Python, so I
> apologize if these questions seem mundane.
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
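To make the NetCDF recommendation concrete, a short sketch using the netCDF4-python bindings (the file and variable names here are made up):

import numpy
from netCDF4 import Dataset

nc = Dataset('results.nc', 'w')                    # hypothetical output file
nc.createDimension('time', None)                   # unlimited dimension: grows on append
temp = nc.createVariable('temp', 'f4', ('time',))
temp.units = 'K'                                   # CF-style metadata attribute
temp[:] = numpy.random.randn(1000).astype('f4')    # dummy data
nc.close()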
From gael.varoquaux at normalesup.org Tue Apr 1 13:09:49 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Tue, 1 Apr 2008 19:09:49 +0200
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <47F26B9B.4050809@llnl.gov>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov>
Message-ID: <20080401170949.GK1301@phare.normalesup.org>

On Tue, Apr 01, 2008 at 10:06:35AM -0700, Charles Doutriaux wrote:
> For the data format I strongly recommend NetCDF and especially NetCDF4
> (allows for compression)

Actually, why not hdf5, which seems to be used by NetCDF4, but is also largely used across many scientific communities, and very well supported under Python (pytables)?

My 2 cents,

Gaël

From ryanlists at gmail.com Tue Apr 1 14:16:28 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Tue, 1 Apr 2008 13:16:28 -0500
Subject: [SciPy-user] Impulse/Step response troubles
In-Reply-To: <20080331165002.GA1241@debian.akumar.iitm.ac.in>
References: <20080331032257.GA7083@debian.akumar.iitm.ac.in> <20080331165002.GA1241@debian.akumar.iitm.ac.in>
Message-ID:

I have seen the problem with repeated roots. I thought I reported it.

Your second case with a ramp response could be better handled with signal.lsim using your original transfer function ([K],[1.0,8.0,K]) and u = t.

On Mon, Mar 31, 2008 at 11:50 AM, Kumar Appaiah wrote:
> Reply below quote.
>
> > On Mon, Mar 31, 2008 at 08:52:57AM +0530, Kumar Appaiah wrote:
> > The question is to determine (plot) the step and ramp responses of
> > K / (s^2 + 8s + K) for K = 7, 16 and 80
> >
> > So, the following code is used, and I'll keep changing a line:
> >
> > from scipy import *
> > from pylab import *
> >
> > K = 16.0  # should be redone with 7.0, 16.0 and 80.0
> >
> > r = signal.impulse(([K],[1.0,8.0,K]), T=r_[0:5:0.001])
> > plot(r[0], r[1], linewidth=2)
> > show()
> >
> > The above code plots the impulse response of the given function. Now,
> > on to the action:
> >
> > 1. Running the above code with K = 7.0 and K = 80.0 gives expected
> > results. However, running the code with K = 16.0 doesn't work; if K =
> > 16.0, the response should be 16 * t * exp(-4t) * u(t), which it
> > isn't. Changing K to 16 + or - .00000000001 fixes it. There surely is
> > some problem when the value of K is such that a double root is hit.
>
> Well, I've zeroed in on the issue. Running the code of
> signal.lti.impulse, we get here:
>
> s,v = linalg.eig(sys.A)
> vi = linalg.inv(v)
>
> Now, v is a matrix which holds the eigenvectors, which are EQUAL in
> this case:
>
> print v
> [[-0.9701425  -0.9701425 ]
>  [ 0.24253563  0.24253563]]
>
> which is singular. However, the code goes on to invert this happily,
> resulting in a bad matrix and horrendous values. I would be surprised
> if nobody has encountered this yet (didn't find this on the Tickets).
>
> What would be the best way to handle repeated roots in the transfer
> function's denominator?
>
> Thanks.
>
> Kumar
> --
> Kumar Appaiah,
> 458, Jamuna Hostel,
> Indian Institute of Technology Madras,
> Chennai - 600 036
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
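Ryan's signal.lsim suggestion for the ramp response, written out as a short sketch (the K = 16 case; lsim returns the time vector, the output, and the state trajectory):

from scipy import signal, r_

K = 16.0
t = r_[0:5:0.001]
u = t                                         # ramp input instead of an impulse
tout, y, x = signal.lsim(([K], [1.0, 8.0, K]), u, t)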
From akumar at iitm.ac.in Tue Apr 1 14:30:43 2008
From: akumar at iitm.ac.in (Kumar Appaiah)
Date: Wed, 2 Apr 2008 00:00:43 +0530
Subject: [SciPy-user] Impulse/Step response troubles
In-Reply-To:
References: <20080331032257.GA7083@debian.akumar.iitm.ac.in> <20080331165002.GA1241@debian.akumar.iitm.ac.in>
Message-ID: <20080401183043.GB4603@debian.akumar.iitm.ac.in>

On Tue, Apr 01, 2008 at 01:16:28PM -0500, Ryan Krauss wrote:
> I have seen the problem with repeated roots. I thought I reported it.

Oh, I'll look again. In any case, I'd ask that someone please fix this (I am still looking for the right way to alter the code to handle it).

> Your second case with a ramp response could be better handled with
> signal.lsim using your original transfer function ([K],[1.0,8.0,K])
> and u = t.

Many thanks. I shall try this.

Kumar

--
Kumar Appaiah,
458, Jamuna Hostel,
Indian Institute of Technology Madras,
Chennai - 600 036

From vanforeest at gmail.com Tue Apr 1 14:30:13 2008
From: vanforeest at gmail.com (nicky van foreest)
Date: Tue, 1 Apr 2008 18:30:13 +0000
Subject: [SciPy-user] scipy and blitz
In-Reply-To:
References: <47EFDBE0.7090106@gentoo.org> <47F0C1D6.5020601@gentoo.org>
Message-ID:

Dear all,

After running laplace.py (part of the scipy performance tutorial) I get the following result:

chuck~/tmp/perfpy> python laplace.py
Doing 100 iterations on a 500x500 grid
numeric took 3.35 seconds
blitz took 8.61 seconds
inline took 7.84 seconds
fastinline took 0.75 seconds
fortran took 0.42 seconds
pyrex took 0.73 seconds

whereas the results on http://www.scipy.org/PerformancePython give this:

Numeric 29.3
Blitz 9.5
Inline 4.3
Fast Inline 2.3
Python/Fortran 2.9
Pyrex 2.5

Does anybody have a clue about why the blitz code runs slower on my machine than the numpy version? I searched the web for what might be the reason and came across a mail by Travis Oliphant. He also experienced relatively slow behavior of blitz, but after recompiling Python with debugging info blitz worked well again. Thus, I checked my installation of Python, but it is compiled without debugging info. As a further check I contacted the scipy maintainer of Gentoo (see cc). He gets the same relative performance as I do.

Thanks for any info.

bye
Nicky

From robert.kern at gmail.com Tue Apr 1 14:41:32 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Apr 2008 13:41:32 -0500
Subject: [SciPy-user] kmeans2 random initialization
In-Reply-To: <16098159-624A-4046-8D46-7E0E9A9BB7AB@cs.toronto.edu>
References: <20080331141951.3pkufn1h8gswgcsg@webmail.seas.upenn.edu> <3d375d730803312017kb600709n8231dd9669f52414@mail.gmail.com> <16098159-624A-4046-8D46-7E0E9A9BB7AB@cs.toronto.edu>
Message-ID: <3d375d730804011141x2054cc1dr3c84a4099903038b@mail.gmail.com>

On Tue, Apr 1, 2008 at 3:41 AM, David Warde-Farley wrote:
> This might not be relevant, depending on how the covariance is
> computed, but one 'gotcha' I've seen with numerical algorithms that
> assume positive-definiteness is that occasionally floating-point
> oddities will induce (very slight) non-symmetry of the input matrix,
> and thus the algorithm will choke; it's easily solved by averaging the
> matrix with its transpose (though there are probably more efficient
> ways).
The covariance is ultimately computed like so:

dot(X, X.T.conj())

It is then up to the implementation of dot() whether or not it will ensure exact symmetry. At least on my MacBook Pro with the Accelerate.framework providing the BLAS (an ATLAS derivative), I could not generate a matrix that was not exactly symmetrical over many random inputs. I think that less-than-full-rank data is the more likely problem.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com Tue Apr 1 14:51:05 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Apr 2008 13:51:05 -0500
Subject: [SciPy-user] scipy and blitz
In-Reply-To:
References: <47EFDBE0.7090106@gentoo.org> <47F0C1D6.5020601@gentoo.org>
Message-ID: <3d375d730804011151p7111fc9r741e74eff4a737c4@mail.gmail.com>

On Tue, Apr 1, 2008 at 1:30 PM, nicky van foreest wrote:
> Dear all,
>
> After running laplace.py (part of the scipy performance tutorial) I
> get the following result:
>
> chuck~/tmp/perfpy> python laplace.py
> Doing 100 iterations on a 500x500 grid
> numeric took 3.35 seconds
> blitz took 8.61 seconds
> inline took 7.84 seconds
> fastinline took 0.75 seconds
> fortran took 0.42 seconds
> pyrex took 0.73 seconds
>
> whereas the results on http://www.scipy.org/PerformancePython give this:
>
> Numeric 29.3
> Blitz 9.5
> Inline 4.3
> Fast Inline 2.3
> Python/Fortran 2.9
> Pyrex 2.5
>
> Does anybody have a clue about why the blitz code runs slower on my
> machine than the numpy version?

The first time the weave.blitz and weave.inline codes are called, they have to compile, link, and load a C++ Python extension module. Run the benchmark a second time.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From rwagner at physics.ucsd.edu Tue Apr 1 14:51:39 2008
From: rwagner at physics.ucsd.edu (Rick Wagner)
Date: Tue, 1 Apr 2008 11:51:39 -0700
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <20080401170949.GK1301@phare.normalesup.org>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov> <20080401170949.GK1301@phare.normalesup.org>
Message-ID:

>> For the data format I strongly recommend NetCDF and especially NetCDF4
>> (allows for compression)
>
> Actually, why not hdf5, which seems to be used by NetCDF4, but is also
> largely used across many scientific communities, and very well supported
> under python (pytables)?
>
> My 2 cents,
>
> Gaël

Seconded. We use HDF5 [1] in a parallel simulation code written in C++ and Fortran, and find it to be incredibly useful. I personally use PyTables to access the data for analysis and plotting. If you have any questions, feel free to email me.

--Rick

[1] - http://hdfgroup.org/

-------------------------------------------------------------------------
Rick Wagner, Graduate Student Researcher
UCSD Physics
9500 Gilman Drive
La Jolla, CA 92093-0424
Email: rwagner at physics.ucsd.edu
WWW: http://lca.ucsd.edu/projects/rpwagner
(858) 822-4784 Phone
-------------------------------------------------------------------------
Measuring programming progress by lines of code is like measuring aircraft building progress by weight.
--Bill Gates
-------------------------------------------------------------------------

From vanforeest at gmail.com Tue Apr 1 15:38:54 2008
From: vanforeest at gmail.com (nicky van foreest)
Date: Tue, 1 Apr 2008 19:38:54 +0000
Subject: [SciPy-user] scipy and blitz
In-Reply-To: <3d375d730804011151p7111fc9r741e74eff4a737c4@mail.gmail.com>
References: <47EFDBE0.7090106@gentoo.org> <47F0C1D6.5020601@gentoo.org> <3d375d730804011151p7111fc9r741e74eff4a737c4@mail.gmail.com>
Message-ID:

Hi,

Thanks for your answer.

> The first time the weave.blitz and weave.inline codes are called, they
> have to compile, link, and load a C++ Python extension module. Run the
> benchmark a second time.

Sure. I did this a couple of times, but each time I get more or less the same result. I also do not see any new files appearing after running laplace.py. Does blitz store any of its compiled files in the directory (or subdirectories) that contains laplace.py?

bye
Nicky

From robert.kern at gmail.com Tue Apr 1 15:54:43 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Apr 2008 14:54:43 -0500
Subject: [SciPy-user] scipy and blitz
In-Reply-To:
References: <47EFDBE0.7090106@gentoo.org> <47F0C1D6.5020601@gentoo.org> <3d375d730804011151p7111fc9r741e74eff4a737c4@mail.gmail.com>
Message-ID: <3d375d730804011254n2ac6f25bvaa251f7ecd26aa44@mail.gmail.com>

On Tue, Apr 1, 2008 at 2:38 PM, nicky van foreest wrote:
> Hi,
>
> Thanks for your answer.
>
> > The first time the weave.blitz and weave.inline codes are called, they
> > have to compile, link, and load a C++ Python extension module. Run the
> > benchmark a second time.
>
> Sure. I did this a couple of times, but each time I get more or less
> the same result.

Hmm, okay. Then I'm not sure what's up here.

> I also do not see any new files appearing after
> running laplace.py. Does blitz store any of its compiled files in the
> directory (or subdirectories) that contains laplace.py?

No, they go into ~/.python2x_compiled where x is the minor version number of your Python interpreter.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From vanforeest at gmail.com Tue Apr 1 16:10:08 2008
From: vanforeest at gmail.com (nicky van foreest)
Date: Tue, 1 Apr 2008 20:10:08 +0000
Subject: [SciPy-user] scipy and blitz
In-Reply-To: <3d375d730804011254n2ac6f25bvaa251f7ecd26aa44@mail.gmail.com>
References: <47EFDBE0.7090106@gentoo.org> <47F0C1D6.5020601@gentoo.org> <3d375d730804011151p7111fc9r741e74eff4a737c4@mail.gmail.com> <3d375d730804011254n2ac6f25bvaa251f7ecd26aa44@mail.gmail.com>
Message-ID:

> > I also do not see any new files appearing after
> > running laplace.py. Does blitz store any of its compiled files in the
> > directory (or subdirectories) that contains laplace.py?
>
> No, they go into ~/.python2x_compiled where x is the minor version
> number of your Python interpreter.

Ok, I found them. Their presence does not seem to help, though.

BTW, I like your signature.

bye
Nicky

> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From marcotuckner at public-files.de Tue Apr 1 17:29:07 2008
From: marcotuckner at public-files.de (Marco Tuckner)
Date: Tue, 1 Apr 2008 21:29:07 +0000 (UTC)
Subject: [SciPy-user] removing certain dates from a time series
Message-ID:

Hello,

as I mentioned in my last thread, I am looking to create a kind of design data set with a year of hourly data. Such design years consist of aggregated (average, sum, etc.) long-term data and are often constructed without the effect of leap years, resulting in a total of 8760 values (hourly frequency). Most such data sets do not include February 29th, because the average calculated for that day would be based on only a few data points compared to the other days of the year. Think of a 10-year data set where February 29th may only have data points for two years (e.g. 2004 & 2008).

How do I remove February 29th from my time series without affecting the dates after it? Since the time series are constructed from a start date and the length of the data, it would affect the construction of the time series if I removed the 29th beforehand.

Thanks in advance for your help.

Kind regards,
Marco

From pgmdevlist at gmail.com Tue Apr 1 17:51:17 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 1 Apr 2008 17:51:17 -0400
Subject: [SciPy-user] removing certain dates from a time series
In-Reply-To:
References:
Message-ID: <200804011751.18086.pgmdevlist@gmail.com>

Marco,

> How do I remove February 29th from my time series without affecting the
> dates after it?

Don't remove the dates, just mask the corresponding data, as illustrated in the following:

>>> import numpy
>>> from numpy.ma import masked
>>> import scikits.timeseries as ts
>>> series = ts.time_series(numpy.arange(3650), start_date=ts.now('D'))
>>> series[(series.days==29) & (series.months==2)] = masked

As long as your analysis functions deal with masked values in a nice way, you're set (for example, mean/sum/var all work well with masked values).

HTH
P.

From zane at ideotrope.org Tue Apr 1 19:37:31 2008
From: zane at ideotrope.org (Zane Selvans)
Date: Tue, 01 Apr 2008 16:37:31 -0700
Subject: [SciPy-user] paths relative to a python package
Message-ID: <47F2C73B.30907@ideotrope.org>

I know this isn't scipy specific but...

If you have a data file that's distributed with a python package, how do you refer to it relative to the location that the package ultimately gets installed? Is there a package_root variable or something?

e.g. in my package directory let's say I have:

__init__.py
MyModule.py
datadir

and within datadir:

data1.dat
data2.dat
...

How do I, within MyModule.py, tell the program to open one of the data files? If I just do open('datadir/data1.dat') it'll try and open that dir/file relative to wherever the python program that imported MyModule is running, right? Which isn't what I want.

Thanks for any insight,
Zane

--
Zane Selvans
Amateur Human
zane at ideotrope.org
303/815-6866
PGP Key: 55E0815F
From robert.kern at gmail.com Tue Apr 1 19:40:23 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Apr 2008 18:40:23 -0500
Subject: [SciPy-user] paths relative to a python package
In-Reply-To: <47F2C73B.30907@ideotrope.org>
References: <47F2C73B.30907@ideotrope.org>
Message-ID: <3d375d730804011640oa0497dx90b9888c163f3e8d@mail.gmail.com>

On Tue, Apr 1, 2008 at 6:37 PM, Zane Selvans wrote:
> I know this isn't scipy specific but...
>
> If you have a data file that's distributed with a python package, how do
> you refer to it relative to the location that the package ultimately
> gets installed? Is there a package_root variable or something?
>
> e.g. in my package directory let's say I have:
>
> __init__.py
> MyModule.py
> datadir
>
> and within datadir:
>
> data1.dat
> data2.dat
> ...
>
> How do I, within MyModule.py, tell the program to open one of the data files?

import os

dirname = os.path.dirname(os.path.abspath(__file__))
datadir = os.path.join(dirname, 'datadir')

f = open(os.path.join(datadir, 'data1.dat'))

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From zane at ideotrope.org Tue Apr 1 20:19:17 2008
From: zane at ideotrope.org (Zane Selvans)
Date: Tue, 01 Apr 2008 17:19:17 -0700
Subject: [SciPy-user] paths relative to a python package
In-Reply-To: <3d375d730804011640oa0497dx90b9888c163f3e8d@mail.gmail.com>
References: <47F2C73B.30907@ideotrope.org> <3d375d730804011640oa0497dx90b9888c163f3e8d@mail.gmail.com>
Message-ID: <47F2D105.6050507@ideotrope.org>

Perfect! Thanks!

>> How do I, within MyModule.py, tell the program to open one of the data files?
>
> import os
>
> dirname = os.path.dirname(os.path.abspath(__file__))
> datadir = os.path.join(dirname, 'datadir')
>
> f = open(os.path.join(datadir, 'data1.dat'))

--
Zane Selvans
Amateur Human
zane at ideotrope.org
303/815-6866
PGP Key: 55E0815F

From cygnusx1 at mac.com Tue Apr 1 21:47:26 2008
From: cygnusx1 at mac.com (Tom Bridgman)
Date: Tue, 1 Apr 2008 21:47:26 -0400
Subject: [SciPy-user] Astronomical Python Projects (was Re: [AstroPy] FITS images with header-supplied axes?)
In-Reply-To:
References: <200804011041.02702.humufr@yahoo.fr> <848B8309-7988-4CC7-9DB2-C6C851C273FD@nasa.gov>
Message-ID:

Since I'm at home now, I'll describe my 'recreational' Python projects. I'm probably the really weird one on the list. ;^)

My major use is in developing simple models/simulations & data analysis exercises which can be used in debunking crank astronomy & physics (see my .sig below). I'm using scipy/numpy/pyfits/PIL/PyX for generating graphics from simple models. Here are some recent examples:

- electron-proton-neutron gas equation of state (neutron stars can't exist claims)
- power spectra of simple cosmological models (redshift quantization claims)
- simple galactic supernova distribution models (not enough SNR claims)
- simple radiation transport simulations
- simple E&M field configuration simulations (trying out FiPy for this)
- simulated 'observations' based on various real & crackpot cosmological models
- nuclear decay & nuclear reaction networks.

For this project, I don't have a strong need for precision tools.
I don't use PyRAF or the more complex tools available. I want most of my projects to be easily reproducible by students and instructors at various levels. FV & DS9 have sufficed for viewing everything from images generated by image-based models to unprocessed Hubble imagery.

The frameworks for some of these might be of use to others, but I'm not sure of the best way to package them. I've looked at some of the data-table readers promoted on the list, but have generally found them overkill for my needs (hence I re-invent the wheel for much of this). More recently, I've been working with the pyPOV wrapper to generate nice visualizations from models using the POVray ray-tracer.

I do have an interest in a good implementation of WCS in case I want to overlay a model with actual data. I have a simple UTC time class but it is *not* a precision object (unsuitable for, say, pulsar or x-ray timing observations).

I'll post the work list tomorrow.

Thanks,
Tom

On Apr 1, 2008, at 4:32 PM, Russell E Owen wrote:

> At 1:51 PM -0400 2008-04-01, Bridgman, William T. wrote:
>>
>> Would there be any interest in members of the list publishing a short
>> description of what types of modules they are designing in their own
>> work? It might be worthwhile for coordination & possible
>> collaborations. My requirements for work projects are quite
>> different from my recreational & educational python projects.
>
> Good idea.
>
> I'm part of the team working on data processing for the LSST. This
> number crunching is done in C++ and the high level operations are
> done in python. This work will probably be primarily of interest to
> those processing a lot of data because it is pipeline-oriented.
>
> The final product will include:
> - basic types for images, masks and masked images
> - image linearization (bias subtraction, flat fielding, etc.)
> - wcs determination
> - source detection
> - image subtraction
>
> We intend it to be usable on a variety of data sources (not just
> LSST) since that's the only way to test it. Nonetheless it will
> probably take a bit of work to massage the data headers.
>
> As far as wcs goes: right now we use wcslib and have a limited python
> interface on it. I suspect if anything better came along we'd be
> happy to switch. (at least at the C++ level; I'm not sure about the
> Python level).
>
> -- Russell
> _______________________________________________
> AstroPy mailing list
> AstroPy at scipy.org
> http://lists.astropy.scipy.org/mailman/listinfo/astropy

--
Dealing with Creationism in Astronomy
http://homepage.mac.com/cygnusx1
cygnusx1 at mac.com

"They're trained to believe, not to know. Belief can be manipulated. Only knowledge is dangerous." --Frank Herbert, "Dune Messiah"

From beckers at orn.mpg.de Wed Apr 2 04:52:07 2008
From: beckers at orn.mpg.de (Gabriel J.L. Beckers)
Date: Wed, 02 Apr 2008 10:52:07 +0200
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <20080401170949.GK1301@phare.normalesup.org>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov> <20080401170949.GK1301@phare.normalesup.org>
Message-ID: <1207126327.6658.12.camel@gabriel-desktop>

I too recommend hdf5 as a data format. hdf5 has become my standard data format for pretty much all my work with scientific data, because pytables makes it very easy to use and yet offers the possibility of using advanced features at the same time. Have a look at their website http://www.pytables.org .

Best, Gabriel

On Tue, 2008-04-01 at 19:09 +0200, Gael Varoquaux wrote:
> On Tue, Apr 01, 2008 at 10:06:35AM -0700, Charles Doutriaux wrote:
> > For the data format I strongly recommend NetCDF and especially NetCDF4
> > (allows for compression)
>
> Actually, why not hdf5, which seems to be used by NetCDF4, but is also
> largely used across many scientific communities, and very well supported
> under python (pytables)?
>
> My 2 cents,
>
> Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
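A minimal PyTables sketch of the write/read round trip (file and node names are illustrative):

import numpy
import tables

h5 = tables.openFile('results.h5', mode='w')
h5.createArray(h5.root, 'field', numpy.random.randn(64, 64))  # write one array node
h5.close()

h5 = tables.openFile('results.h5', mode='r')
field = h5.root.field.read()                                  # comes back as a numpy array
h5.close()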
From doreen at aims.ac.za Wed Apr 2 05:22:34 2008
From: doreen at aims.ac.za (Doreen Mbabazi)
Date: Wed, 2 Apr 2008 11:22:34 +0200 (SAST)
Subject: [SciPy-user] Estimation of parameters while fitting data
In-Reply-To:
References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za>
Message-ID: <40576.192.168.42.175.1207128154.squirrel@webmail.aims.ac.za>

Thanks,

What you proposed has worked, and I have been able to get the values that I want to use to obtain the residuals. However, when it comes to using leastsq, I get an error which I have tried to google, but it's giving me no useful results. Below is the code and the error.

initial_y = [10,0,10e-6]

def f(y,t,p):
    y_dot = [0.,0.,0.]
    y_dot[0] = p[0] - p[1]*y[0] - p[2]*y[0]*y[2]
    y_dot[1] = p[2]*y[0]*y[2] - p[3]*y[1]
    y_dot[2] = p[4]*y[1] - p[5]*y[2]
    return y_dot

filename=('essay1.dat')  # data file is stored
data = scipy.io.array_import.read_array(filename)  # data file is read

t = data[:,0]
V = data[:,1]

lamda_0 = 0.16; d_0 = 0.016; k_0 = 0.40e-3; delta_0 = 0.35; pi_0 = 900; c_0 = 3
#p0 = array([lamda_0 , d_0, k_0, delta_0,pi_0,c_0])
p = array([lamda_0 , d_0, k_0, delta_0,pi_0,c_0])

def S(t, p):
    y_list = []
    ys = odeint(f, initial_y, t, args =(p,))
    for i in range(len(t)):
        y_V = ys[i][2]
        y_list.append(y_V)
    return y_list

y = odeint(f,initial_y,t,args=(p,))
#print S(t,p)

def residuals(p,y,t):
    return [V - S(t,p) for i in xrange(len(t))]

pbest = leastsq(residuals, p, args=(V,t))

The error:

ValueError: setting an array element with a sequence.
Traceback (most recent call last):
  File "essay6.py", line 44, in
    pbest = leastsq(residuals, p, args=(V,t))
  File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", line 266, in leastsq
    retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag)
minpack.error: Result from function call is not a proper array of floats.

Doreen.

Anne Archibald wrote:

> On 31/03/2008, Doreen Mbabazi wrote:
>
>> Thanks, I tried to do that (by taking err = V-f(y,t,p)[2]) while
>> defining the function residuals, but the trouble is that f(y,t,p)
>> actually calculates the value of y at t0, so it cannot help me. What I
>> want are the third values from y (y[i][2]). Below I have tried to do
>> that, but that gives particular values of y, so my parameters are not
>> optimized.
>> def residuals(p, V, t):
>>     """The function is used to calculate the residuals
>>     """
>>     for i in range(len(t)):
>>         err = V-y[i][2]
>>     return err
>>
>> #Function defined with y[0]=T, y[1]=T*, y[2]=V, lamda=p[0], d=p[1],
>> #k=p[2], delta=p[3], pi=p[4], c=p[5]
>> initial_y = [10,0,10e-6]  # initial conditions T(0)=10 cells, T*(0)=0, V(0)=10e-6
>>
>> p is the list of parameters that are being estimated
>> (lamda,d,k,delta,pi,c)
>>
>> def f(y,t,p):
>>     y_dot = [0,0,0]
>>     y_dot[0] = p[0] - p[1]*y[0] - p[2]*y[0]*y[2]
>>     y_dot[1] = p[2]*y[0]*y[2] - p[3]*y[1]
>>     y_dot[2] = p[4]*y[1] - p[5]*y[2]
>>     return y_dot
>>
>> y = odeint(f,initial_y,t,args=(p,))

> First of all, I'm not totally sure I have correctly understood your
> problem. Let me state it as I understand it:
>
> You have a collection of points, (x[i], y[i]). You also have a model y
> = S(x, P[j]) giving y as a function of x and some parameters P. This
> function is not given explicitly; instead you know that it is the
> solution to a certain differential equation, with initial values and
> parameters given by P. You want to find the values of P that minimize
> sum((y[i] - S(x[i],P))**2).
>
> Is that about right? (The differential equation is actually expressed
> as three coupled ODEs, but that's not really a problem.)
>
> The easiest-to-understand way to solve the problem is probably to
> start by writing a python function S that behaves like S does. Of
> course, it has to be computed by solving the ODE, which means we're
> going to have to solve the ODE a zillion times, but that's okay,
> that's what computers are for.
>
> def S(x, P):
>     ys = odeint(f, initial_y, [0, x], args=(P,))
>     return ys[1,0]
>
> Now check that this function looks vaguely right (perhaps by plotting
> it, or checking that the values that come out are sensible).
>
> Now you can do quite ordinary least-squares fitting:
>
> def residuals(P,x,y):
>     return [y[i] - S(x[i],P) for i in xrange(len(x))]
>
> Pbest = scipy.optimize.leastsq(residuals, Pguess, args=(x,y))
>
> This should work, and be understandable. But it is not very efficient,
> since for every set of parameters, we solve the ODE len(x) times. We
> can improve things by using the fact that odeint can return the
> integrated function evaluated at a list of places. So we'd modify S to
> accept a list of xs, and return the S values at all those places. This
> would even simplify our residuals function:
>
> def S(xs, P):
>     ys = odeint(f, initial_y, numpy.concatenate(([0], xs)), args=(P,))
>     return ys[1:,0]
>
> def residuals(P,xs,ys):
>     return ys - S(xs, P)
>
> Is this the problem you were trying to solve? I suggest first getting
> used to how odeint and leastsq work, then combining them. Their
> arguments can be weird, in particular the way odeint treats the
> initial x like the xs you want your ODE integrated to.
>
> Good luck,
> Anne
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From rob.clewley at gmail.com Wed Apr 2 10:30:19 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 2 Apr 2008 10:30:19 -0400
Subject: [SciPy-user] Estimation of parameters while fitting data
In-Reply-To: <40576.192.168.42.175.1207128154.squirrel@webmail.aims.ac.za>
References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> <40576.192.168.42.175.1207128154.squirrel@webmail.aims.ac.za>
Message-ID:

Hi Doreen,

> def residuals(p,y,t):
>     return [V - S(t,p) for i in xrange(len(t))]

Minpack expects an array type to be returned. You're returning a Python list. So try this:

def residuals(p,y,t):
    return array([V - S(t_val,p) for t_val in t])

Here I've also removed the unnecessary i index variable -- Python lets you iterate directly over the contents of the list t.

-Rob

--
Robert H. Clewley, Ph. D.
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA

tel: 404-413-6420 fax: 404-651-2246
http://www.mathstat.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/

From shao at msg.ucsf.edu Wed Apr 2 14:00:41 2008
From: shao at msg.ucsf.edu (Lin Shao)
Date: Wed, 2 Apr 2008 11:00:41 -0700
Subject: [SciPy-user] kmeans2 bug for 1D data?
Message-ID:

I understand that kmeans2 is usually used for multidimensional vector spaces, but it is sometimes useful for 1D clustering, such as clustering the pixels of an image based solely on pixel intensities. And kmeans2 does theoretically support 1D, as stated in its function doc:

def kmeans2(data, k, iter = 10, thresh = 1e-5, minit = 'random', missing = 'warn'):
    """
    ......
    data : ndarray
        Expect a rank 1 or 2 array. Rank 1 are assumed to describe one
        dimensional data, rank 2 multidimensional data, in which case one
        row is one observation.
    ........

But the truth is that if the data is 1D and minit is 'random', there's an error when vq(data, code) is called in _kmeans(), because code is apparently a rank-2 array. The cause is in _krandinit(), where the return value, x, is rank 2 no matter whether the input data is rank 1 or 2.

I "fixed" it by replacing "return x" with "return x.squeeze()". I'm not sure if that's the right way.

--
The magic of the microscope is not that it makes little creatures larger, but that it makes a large one smaller.
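For reference, this is the kind of rank-1 call that exercises the failing path; with the squeeze fix applied it runs, while without it vq() chokes on the rank-2 code book (a sketch):

import numpy
from scipy.cluster.vq import kmeans2

intensities = numpy.random.rand(1000)   # rank-1 data: bare pixel intensities
centroids, labels = kmeans2(intensities, 3, minit='random')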
From jturner at gemini.edu Wed Apr 2 14:35:08 2008
From: jturner at gemini.edu (James Turner)
Date: Wed, 02 Apr 2008 14:35:08 -0400
Subject: [SciPy-user] [AstroPy] FITS images with header-supplied axes?
In-Reply-To: <04DF97AA-4E62-408C-81BD-EEC155DBC7F7@nasa.gov>
References: <200804011041.02702.humufr@yahoo.fr> <848B8309-7988-4CC7-9DB2-C6C851C273FD@nasa.gov> <04DF97AA-4E62-408C-81BD-EEC155DBC7F7@nasa.gov>
Message-ID: <47F3D1DC.5080008@gemini.edu>

Hi everyone,

I agree that it would be good for the community to get into the habit of exposing work on astronomical Python modules via SciPy (for example). It seems that the obvious place for this is "AstroLib", which has been part of the SciPy site for some time but doesn't seem to have gathered a lot of momentum yet:

http://scipy.org/AstroLib

I'm also interested in WCS, as critical functionality that hasn't quite crystallized yet. Wrapping WCSLib did seem like an obvious direction when I first looked into this and talked to Perry, but there seemed to be licensing issues to investigate, with it being GPL (which I personally like, but I'm not sure about internal policy on relicensing everything).

At Gemini, one thing we have been working on is a data access class that takes care of logistics like figuring out the "type" of data (meaning instrument, for example, not numerical type), accessing standard header information in an instrument-agnostic way, iterating over things like nod sets and FITS extensions, and so on. In the future, the class may also support array operations with automatic propagation of things like variance and header info. This is still really alpha code at the moment, though, and I'm not announcing it officially. Later in the year, we will be working on some generic spectroscopic data reduction tasks. These things are planned to operate both in a pipeline context and interactively. I also have a little script for mosaicing IFU data that might be of interest to someone, but it still needs converting to NumPy. That will probably work its way into the official Gemini DR package at some point.

By the way, those of us at AURA sites will be meeting in June to discuss collaboration on new DR infrastructure (in general).

Cheers,

James.

--
James E.H. Turner
Gemini Observatory Southern Operations Centre,
Casilla 603, La Serena, Chile.
Tel. (+56) 51 205609
Fax. (+56) 51 205650

From keflavich at gmail.com Wed Apr 2 14:46:10 2008
From: keflavich at gmail.com (Keflavich)
Date: Wed, 2 Apr 2008 11:46:10 -0700 (PDT)
Subject: [SciPy-user] FITS images with header-supplied axes?
In-Reply-To: <9AA7B346-74BF-4A77-AC67-29CE8A576695@stsci.edu>
References: <20080330105404.CRP80397@comet.stsci.edu> <1e83f451-f01d-4413-95ee-8af77a78d42c@s19g2000prg.googlegroups.com> <9AA7B346-74BF-4A77-AC67-29CE8A576695@stsci.edu>
Message-ID: <5f9d9d68-6ce0-4f89-b389-d5e28453a934@y21g2000hsf.googlegroups.com>

On Mar 31, 7:59 pm, Perry Greenfield wrote:
> On Mar 30, 2008, at 12:38 PM, Keflavich wrote:
>
> > I was thinking no resampling, just put an RA/DEC grid and fit the
> > image into it as well as it can be. I don't know if it's possible to
> > display rotated pixels, but that would be the most useful behavior in
> > this case.
>
> Just to clarify, do you mean nearest neighbor regridding? It does
> sound like you mean that you would like to see the image rotated to
> have north up. Right?

That's correct.

> > I'm not sure I understand how the transforms would make it easier to
> > display sky coordinates without labeling axes, though. Are you saying
> > that if the image is already sampled in RA/DEC (or whatever coordinate
> > system) space, then it should be easy to display the RA/DEC
> > coordinates on the axes?
>
> The transforms machinery would allow displaying the image in its
> original orientation and then when the cursor was moved over the
> image, the displayed x, y coordinates could display RA, DEC instead
> of pixel coordinates. But if you desire to rotate the image and have
> it aligned with north, that capability isn't really important. Doing
> the rotated image needs some tool to do the rotation (quickly
> presumably rather than a precise mapping) and then display that as
> part of a larger tool. That would be much more straightforward. But
> we haven't made such a tool yet. If someone does it first, it sure
> would be nice to add, at least as part of an astronomy toolkit.
OK, I didn't realize that the transforms work as you describe; I'll have to test that. If I can get something like the tool you described running, I'll post my results here, but I'm likely to make slow progress, if any. I'm not clear on how I would go about making usable paper figures in Python without this capability, though, so what does everyone else do? Just use other tools for figure generation?

Re: Nicolas and Tom

Is the astropy mailing list active? I've attempted to subscribe but haven't received confirmation or any list e-mails. Otherwise, perhaps just listing things on the wiki is enough, or keeping a list via this group?

Adam

From perry at stsci.edu Wed Apr 2 16:00:33 2008
From: perry at stsci.edu (Perry Greenfield)
Date: Wed, 2 Apr 2008 16:00:33 -0400
Subject: [SciPy-user] [AstroPy] FITS images with header-supplied axes?
In-Reply-To: <47F3D1DC.5080008@gemini.edu>
References: <200804011041.02702.humufr@yahoo.fr> <848B8309-7988-4CC7-9DB2-C6C851C273FD@nasa.gov> <04DF97AA-4E62-408C-81BD-EEC155DBC7F7@nasa.gov> <47F3D1DC.5080008@gemini.edu>
Message-ID: <3A8958DB-498E-4134-9FFB-E3E13413F649@stsci.edu>

On Apr 2, 2008, at 2:35 PM, James Turner wrote:

> Hi everyone,
>
> I agree that it would be good for the community to get into the
> habit of exposing work on astronomical Python modules via SciPy
> (for example). It seems that the obvious place for this is "AstroLib",
> which has been part of the SciPy site for some time but doesn't seem
> to have gathered a lot of momentum yet:
>
> http://scipy.org/AstroLib
>
> I'm also interested in WCS, as critical functionality that hasn't
> quite crystallized yet. Wrapping WCSLib did seem like an obvious
> direction when I first looked into this and talked to Perry, but
> there seemed to be licensing issues to investigate, with it being
> GPL (which I personally like, but I'm not sure about internal
> policy on relicensing everything).

Turns out the license for WCSLIB was changed to LGPL, so it's no longer a problem. So the pywcs that is on astrolib is still BSD (though I haven't checked to see if it has a license along with it; I'll have that done).

Perry

From pav at iki.fi Wed Apr 2 17:31:28 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 2 Apr 2008 21:31:28 +0000 (UTC)
Subject: [SciPy-user] Estimation of parameters while fitting data
References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> <40576.192.168.42.175.1207128154.squirrel@webmail.aims.ac.za>
Message-ID:

Wed, 02 Apr 2008 11:22:34 +0200, Doreen Mbabazi wrote:

> What you proposed has worked, and I have been able to get the values
> that I want to use to obtain the residuals. However, when it comes to
> using leastsq, I get an error which I have tried to google, but it's
> giving me no useful results. Below is the code and the error.

I think you should try to construct your program piece by piece, testing each part one by one (e.g. printing/displaying some intermediate results to see whether they are correct), i.e.:

1. Does the function f(y,t,p) work as expected?
2. Does the function S(t, p) work as expected?
3. Does the function residuals(p, y, t) work as expected?
4. Does the whole shebang work?

I think in this case you would have noticed problems in phases 2 and 3 even before 4.
[clip]
> def S(t, p):
>     y_list = []
>     ys = odeint(f, initial_y, t, args =(p,))
>     for i in range(len(t)):
>         y_V = ys[i][2]
>         y_list.append(y_V)
>     return y_list

I'm not sure whether the indentation was messed up in the mail, but if the code reads like this, it probably won't work (y_list will always contain only a single value). Anyway, you'll likely be better off using indexing instead of constructing a list:

def S(t, p):
    ys = odeint(f, initial_y, t, args =(p,))
    return ys[:,2]

> y = odeint(f,initial_y,t,args=(p,))
> #print S(t,p)
>
> def residuals(p,y,t):
>     return [V - S(t,p) for i in xrange(len(t))]

Also this looks a bit funny: you create a list of len(t) elements that are all the same. Also, each element is a 1-d array containing the values V[:] - S(t[0],p). Anyway, you probably meant (no need to use the loop; subtraction is defined also for arrays):

def residuals(p, y, t):
    return V - S(t,p)

In general, if you find yourself writing xrange(len(z)) in numpy code, this often indicates that you should think a second time about what you are trying to do: there may be an easier and more efficient way.

> pbest = leastsq(residuals, p, args=(V,t))
>
> The error
> ValueError: setting an array element with a sequence.
> Traceback (most recent call last):
>   File "essay6.py", line 44, in
>     pbest = leastsq(residuals, p, args=(V,t))
>   File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", line 266, in leastsq
>     retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag)
> minpack.error: Result from function call is not a proper array of floats.

I think this says that the 'residuals' function didn't return an array (1-d?) of floats, which it indeed doesn't appear to do.

--
Pauli Virtanen

From ac1201 at gmail.com Wed Apr 2 19:45:18 2008
From: ac1201 at gmail.com (Andrew Charles)
Date: Thu, 3 Apr 2008 10:45:18 +1100
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <1207126327.6658.12.camel@gabriel-desktop>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov> <20080401170949.GK1301@phare.normalesup.org> <1207126327.6658.12.camel@gabriel-desktop>
Message-ID:

netCDF is mostly used for geophysical, in particular atmospheric, data, although I also use it for more general physical model results. There is a wealth of command-line tools for inspecting and operating on netCDF files. I found the AMBER convention (an MD package: http://amber.scripps.edu/netcdf/nctraj.html) useful in working out how to lay out my particle data in a netCDF file. I've never used HDF, so I can't give a comparison.

On Wed, Apr 2, 2008 at 7:52 PM, Gabriel J.L. Beckers wrote:
> I too recommend hdf5 as a data format. hdf5 has become my standard data
> format for pretty much all my work with scientific data, because pytables
> makes it very easy to use and yet offers the possibility of using
> advanced features at the same time. Have a look at their website
> http://www.pytables.org .
>
> Best, Gabriel
>
> On Tue, 2008-04-01 at 19:09 +0200, Gael Varoquaux wrote:
> > On Tue, Apr 01, 2008 at 10:06:35AM -0700, Charles Doutriaux wrote:
> > > For the data format I strongly recommend NetCDF and especially NetCDF4
> > > (allows for compression)
> >
> > Actually, why not hdf5, which seems to be used by NetCDF4, but is also
> > largely used across many scientific communities, and very well supported
> > under python (pytables)?
> >
> > My 2 cents,
> >
> > Gaël
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
-------------------------
Andrew Charles

From lilyphysik at gmail.com Wed Apr 2 22:22:15 2008
From: lilyphysik at gmail.com (chengbo duan)
Date: Thu, 3 Apr 2008 10:22:15 +0800
Subject: [SciPy-user] TypeError: can't convert complex to float; use abs(z)
Message-ID: <554c886f0804021922n45c49f66hf1a0917c84f5206b@mail.gmail.com>

Hi, guys

I encountered this error message elsewhere, so I simplified the code; it is pasted below along with the output. The output of the variable "c" confirms that "c" is complex, and "b" is also complex, so there should be no need to convert complex to float. I'm confused by this message. Any suggestions?

Thanks in advance
Abo

#!/usr/bin/env python
#coding=utf-8
from scipy import *

def gij(vi,vj,mat):
    i=size(vi,0)*size(vi,1)
    st= dot(reshape(conjugate(vi),(1,i)),dot(mat,reshape(vj,(i,1))))
    return st

a=zeros((2,2),dtype=complex128)
b=zeros((2,2),dtype=complex128)
z=1.0j
sign=-1.0
vi=zeros((2,1),dtype=complex128)
vj=zeros((2,1),dtype=complex128)
vi=[[1.0+0.5j],[1.0+0.5j]]
vj=copy(vi)
a=1.0+1.0j
c=gij(vi,vj,a)
print "c=",c
b[0,0]=c

> "C:\Python25\python.exe" -u "C:\work\complex.py"
c= [[ 2.5+2.5j]]
Traceback (most recent call last):
  File "C:\work\complex.py", line 19, in
    b[0,0]=c
TypeError: can't convert complex to float; use abs(z)

From strawman at astraw.com Thu Apr 3 03:32:56 2008
From: strawman at astraw.com (Andrew Straw)
Date: Thu, 03 Apr 2008 00:32:56 -0700
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <20080401170949.GK1301@phare.normalesup.org>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov> <20080401170949.GK1301@phare.normalesup.org>
Message-ID: <47F48828.5080703@astraw.com>

I second the pytables recommendation, but if your simulations are in C and you want to avoid additional dependencies, you might simply want to write binary data directly to disk in your own format. Numpy's memmap function can then use the files directly if your data are in same-length rows. Note that the term "rows" does not hint at what is actually possible. For example, an n-dimensional array can be one "column" of each such "row".
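For instance, a sketch of that pattern (the record layout and file name are assumptions):

import numpy

# one record per simulation step: a timestamp plus a 64x64 field
rec = numpy.dtype([('t', '<f8'), ('field', '<f4', (64, 64))])
data = numpy.memmap('run001.bin', dtype=rec, mode='r')  # hypothetical file from the C++ side
print data['t'][:10]         # timestamps of the first ten records
frame = data['field'][42]    # one (64, 64) snapshot, read lazily from disk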
-Andrew

Gael Varoquaux wrote:
> On Tue, Apr 01, 2008 at 10:06:35AM -0700, Charles Doutriaux wrote:
>> For the data format I strongly recommend NetCDF and especially NetCDF4
>> (allows for compression)
>
> Actually, why not hdf5, which seems to be used by NetCDF4, but is also
> largely used across many scientific communities, and very well supported
> under python (pytables)?
>
> My 2 cents,
>
> Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From stefan at sun.ac.za Thu Apr 3 04:08:57 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Thu, 3 Apr 2008 10:08:57 +0200
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <47F48828.5080703@astraw.com>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov> <20080401170949.GK1301@phare.normalesup.org> <47F48828.5080703@astraw.com>
Message-ID: <9457e7c80804030108oc0fe496qcfe543427efc58e9@mail.gmail.com>

On 03/04/2008, Andrew Straw wrote:
> I second the pytables recommendation, but if your simulations are in C
> and you want to avoid additional dependencies, you might simply want to
> write binary data directly to disk in your own format. Numpy's memmap
> function can then use the files directly if your data are in same-length
> rows. Note that the term "rows" does not hint at what is actually
> possible. For example, an n-dimensional array can be one "column" of
> each such "row".

If you follow this route, you may want to take a look at the new NumPy .npy format.

Regards
Stéfan

From rwagner at physics.ucsd.edu Thu Apr 3 04:19:13 2008
From: rwagner at physics.ucsd.edu (Rick Wagner)
Date: Thu, 3 Apr 2008 01:19:13 -0700
Subject: [SciPy-user] Real-time plotting and data storage format questions
In-Reply-To: <47F48828.5080703@astraw.com>
References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov> <20080401170949.GK1301@phare.normalesup.org> <47F48828.5080703@astraw.com>
Message-ID: <43B6FBFA-B02C-478C-BBAD-CAE1927C1A19@physics.ucsd.edu>

> I second the pytables recommendation, but if your simulations are in C
> and you want to avoid additional dependencies, you might simply want to
> write binary data directly to disk in your own format. Numpy's memmap
> function can then use the files directly if your data are in same-length
> rows. Note that the term "rows" does not hint at what is actually
> possible. For example, an n-dimensional array can be one "column" of
> each such "row".
>
> -Andrew

I have to disagree with using raw binary data if data portability is at all a concern. I use binary formats often, but only as a quick and dirty way to get data out of C. If you go with binary data, you will soon find yourself defining a custom format that includes array sizes, endianness, precision, etc., which is why formats like HDF, NetCDF and FITS were invented. And HDF5's simple API makes it very convenient to write datasets from C. Plus, you get the HDF5 command-line tools, which allow you to inspect your data without needing to write your own custom tools.

Also, to clarify the PyTables aspect: I think HDF5 is useful, and PyTables provides a convenient Python API to HDF5 data. However, I would not suggest using PyTables for storing the data unless you were writing everything in Python.

OK, that was longer and more opinionated than I intended.
There must be a bad experience with raw binary data somewhere in my past (or several). --Rick > > Gael Varoquaux wrote: >> On Tue, Apr 01, 2008 at 10:06:35AM -0700, Charles Doutriaux wrote: >> >>> For the data format I strongly recommend NetCDF and especially >>> NetCDF4 >>> (allows for compression) >>> >> >> Actually, why not hdf5, which seems to be used by NetCDF4, but is >> also >> largely used across many scientific comunities, and very well >> supported >> under python (pytables)? >> >> My 2 cents, >> >> Ga?l >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > ------------------------------------------------------------------------ - Rick Wagner, Graduate Student Researcher UCSD Physics 9500 Gilman Drive La Jolla, CA 92093-0424 Email: rwagner at physics.ucsd.edu WWW: http://lca.ucsd.edu/projects/rpwagner (858) 822-4784 Phone ------------------------------------------------------------------------ - Measuring programming progress by lines of code is like measuring aircraft building progress by weight. --Bill Gates ------------------------------------------------------------------------ - -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Thu Apr 3 04:24:41 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 3 Apr 2008 10:24:41 +0200 Subject: [SciPy-user] Real-time plotting and data storage format questions In-Reply-To: <43B6FBFA-B02C-478C-BBAD-CAE1927C1A19@physics.ucsd.edu> References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov> <20080401170949.GK1301@phare.normalesup.org> <47F48828.5080703@astraw.com> <43B6FBFA-B02C-478C-BBAD-CAE1927C1A19@physics.ucsd.edu> Message-ID: <20080403082441.GD14710@phare.normalesup.org> On Thu, Apr 03, 2008 at 01:19:13AM -0700, Rick Wagner wrote: > Also, to clarify the PyTables aspect, I think HDF5 is useful, and PyTables > provides a convenient Python API to HDF5 data. However, I would not > suggest using PyTables for storing the data, unless you were writing > everything in Python. +1. Use HDF5, but do not use anything pytables-specific (though in my experience, this is quite easy to avoid, it is even what you do naturaly, if you are not trying to make fancy stuff). My 2 cents, Ga?l From lbolla at gmail.com Thu Apr 3 05:25:49 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Thu, 3 Apr 2008 11:25:49 +0200 Subject: [SciPy-user] TypeError: can't convert complex to float; use abs(z) In-Reply-To: <554c886f0804021922n45c49f66hf1a0917c84f5206b@mail.gmail.com> References: <554c886f0804021922n45c49f66hf1a0917c84f5206b@mail.gmail.com> Message-ID: <80c99e790804030225y57628a8dqe0aed02b92bdc254@mail.gmail.com> your function gij returns a 2d array (a matrix) with just 1 element. you cannot assign to the element b[0,0], the value of the matrix c, but you can assign it the value of the _only_ element of c, i.e. c.item(). your last line become: b[0,0] = c.item() hth, L. On Thu, Apr 3, 2008 at 4:22 AM, chengbo duan wrote: > Hi,guys > > I encountered this error message in other place,then I simplify the > code and paste it and the output messages in the end of the letter . 
> The output of the variable "c" confirm that the kind of "c" is > complex,the kind of "b" is also complex,there is no need to convert > complex to float.So I 'm confused with this message,Any suggestion? > > Thanks in advance > Abo > > #!/usr/bin/env python > #coding=utf-8 > from scipy import * > def gij(vi,vj,mat): > i=size(vi,0)*size(vi,1) > st= dot(reshape(conjugate(vi),(1,i)),dot(mat,reshape(vj,(i,1)))) > return st > a=zeros((2,2),dtype=complex128) > b=zeros((2,2),dtype=complex128) > z=1.0j > sign=-1.0 > vi=zeros((2,1),dtype=complex128) > vj=zeros((2,1),dtype=complex128) > vi=[[1.0+0.5j],[1.0+0.5j]] > vj=copy(vi) > a=1.0+1.0j > c=gij(vi,vj,a) > print "c=",c > b[0,0]=c > > > "C:\Python25\python.exe" -u "C:\work\complex.py" > c= [[ 2.5+2.5j]] > Traceback (most recent call last): > File "C:\work\complex.py", line 19, in > b[0,0]=c > TypeError: can't convert complex to float; use abs(z) > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Apr 3 05:56:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 3 Apr 2008 04:56:18 -0500 Subject: [SciPy-user] TypeError: can't convert complex to float; use abs(z) In-Reply-To: <80c99e790804030225y57628a8dqe0aed02b92bdc254@mail.gmail.com> References: <554c886f0804021922n45c49f66hf1a0917c84f5206b@mail.gmail.com> <80c99e790804030225y57628a8dqe0aed02b92bdc254@mail.gmail.com> Message-ID: <3d375d730804030256m8f5836csb5d11be13d0034d9@mail.gmail.com> On Thu, Apr 3, 2008 at 4:25 AM, lorenzo bolla wrote: > your function gij returns a 2d array (a matrix) with just 1 element. > you cannot assign to the element b[0,0], the value of the matrix c, but you > can assign it the value of the _only_ element of c, i.e. c.item(). > > your last line become: > b[0,0] = c.item() Correct. However, it's still a weird error message that may be indicative of incorrect code somewhere in numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gary.pajer at gmail.com Thu Apr 3 07:22:23 2008 From: gary.pajer at gmail.com (Gary Pajer) Date: Thu, 3 Apr 2008 07:22:23 -0400 Subject: [SciPy-user] Real-time plotting and data storage format questions In-Reply-To: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> Message-ID: <88fe22a0804030422ie5b5b97m43a52746d260b9a2@mail.gmail.com> On Tue, Apr 1, 2008 at 12:36 PM, heytrent at gmail.com wrote: > Greetings, > > A small team of us are developing a new simulation package from the ground > up. Our legacy approach relied on MATLAB and other proprietary software. A > hope pf ours is to be able to shed the use of MATLAB for the analysis of our > simulation results and instead use python with scipy/numpy/matplotlib etc. > I've successfully installed and compiled optimized numpy/scipy and all the > supporting packages (ATLAS, FFTW, etc). > > So far so good. > > To the point - I have two questions: > > 1) We would like to have a "scope" to monitor simulation outputs in real > time. We're using one tool that can take data over a tcp/ip port, but is > clunky and only works on a single platform. 
Does such a thing exists within > the python realm for plotting data in real time? Not sure of your detailed needs, but I can point out some options. One typical problem in python 'scope applications is speed. Most current graphics solutions weren't designed with realtime use in mind. Nonetheless, the ones that are being actively developed (e.g. matplotlib) are better than they used to be. Pmw-Blt. This is pretty fast, but development has all but stopped. But not entirely stopped. Too bad. It's the fastest solution that I found in a not-exhaustive search. Look out: the Linux version of the current release has a library bug, and it's necessary to set an environment variable to get it to work. This is not documented, AFAIK I'm currently writing a realtime data acquisition/display application, but speed is not critical. I'm using Enthought Tool Suite (ETS) and Traits, and Chaco for data display. IMHO, I think that in five years ETS/Traits will be the most commonly used framework for scientific applications. > > 2) Our simulation creates large (1-4 GB) data sets. Since we're writing this > simulation ourselves (C++) we can save the data in any format. Does anyone > have a suggestion for a specific format or API that's been found to be > optimal in terms of memory usage and ability to import into python for > analysis and plotting? > > Thank you for any suggestions. We're still new with Python, so I apologize > if these questions seem mundane. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From marcotuckner at public-files.de Thu Apr 3 07:24:30 2008 From: marcotuckner at public-files.de (Marco Tuckner) Date: Thu, 3 Apr 2008 11:24:30 +0000 (UTC) Subject: [SciPy-user] removing certain dates from a time series References: <200804011751.18086.pgmdevlist@gmail.com> Message-ID: Hello Pierre, thanks for your response. > Don't remove the dates, just mask the corresponding data, as illustrated in > the following Thanks for your demonstration code. But in this case I really want to remove the values and dates. Let's suppose I have physically invalid/not plausible values on other dates (e.g. 12-Jan, 15-Feb). I would certaily mask these values. Maybe then estimate them later with correlation or interpolation. But I don't want to use the data of 29-Feb at all neither masked nor interpoled. If I mask 29-Feb and continue my timeseries operation I would for instance get different monthly or annual averages. Therefore I would like to exclude that date completely. Do you understand my point? Is there any suggestion you could give? Thanks and kind regards, Marco From stefan at sun.ac.za Thu Apr 3 08:05:13 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 3 Apr 2008 14:05:13 +0200 Subject: [SciPy-user] TypeError: can't convert complex to float; use abs(z) In-Reply-To: <80c99e790804030225y57628a8dqe0aed02b92bdc254@mail.gmail.com> References: <554c886f0804021922n45c49f66hf1a0917c84f5206b@mail.gmail.com> <80c99e790804030225y57628a8dqe0aed02b92bdc254@mail.gmail.com> Message-ID: <9457e7c80804030505j16628c08j5cf8e7fa9b20bc6e@mail.gmail.com> On 03/04/2008, lorenzo bolla wrote: > your function gij returns a 2d array (a matrix) with just 1 element. > you cannot assign to the element b[0,0], the value of the matrix c, but you > can assign it the value of the _only_ element of c, i.e. c.item(). 
> > your last line become: > b[0,0] = c.item() And yet: In [3]: x = np.zeros((2,1)) In [4]: x Out[4]: array([[ 0.], [ 0.]]) In [5]: y = np.array([[1.5]]) In [6]: x[0,0] = y In [7]: x Out[7]: array([[ 1.5], [ 0. ]]) Looks like a bug to me? Cheers St?fan From pgmdevlist at gmail.com Thu Apr 3 11:38:04 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 3 Apr 2008 11:38:04 -0400 Subject: [SciPy-user] removing certain dates from a time series In-Reply-To: References: <200804011751.18086.pgmdevlist@gmail.com> Message-ID: <200804031138.04824.pgmdevlist@gmail.com> Marco On Thursday 03 April 2008 07:24:30 Marco Tuckner wrote: > But I don't want to use the data of 29-Feb at all neither masked nor > interpoled. If I mask 29-Feb and continue my timeseries operation I would > for instance get different monthly or annual averages. Please give me an example where masking Feb 29 wouldn't give you the result you expect. > Therefore I would like to exclude that date completely. The problem with deleting these dates is that you won't be able to use the convert function afterwards, as it requires a regularly-spaced time series to work. From wizzard028wise at gmail.com Thu Apr 3 11:46:37 2008 From: wizzard028wise at gmail.com (Dorian) Date: Thu, 3 Apr 2008 17:46:37 +0200 Subject: [SciPy-user] Gaussian copula Message-ID: <674a602a0804030846v5c801230m6941e32247b06335@mail.gmail.com> Hi all, Is there any way or link , which could help me to plot a Gaussian Copula. Thanks in advance, Dorian -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Thu Apr 3 13:39:51 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 03 Apr 2008 19:39:51 +0200 Subject: [SciPy-user] Real-time plotting and data storage format questions In-Reply-To: <88fe22a0804030422ie5b5b97m43a52746d260b9a2@mail.gmail.com> References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <88fe22a0804030422ie5b5b97m43a52746d260b9a2@mail.gmail.com> Message-ID: <47F51667.4030708@ru.nl> hi Gary, Gary Pajer wrote: > On Tue, Apr 1, 2008 at 12:36 PM, heytrent at gmail.com wrote: > >> Greetings, >> >> A small team of us are developing a new simulation package from the ground >> up. Our legacy approach relied on MATLAB and other proprietary software. A >> hope pf ours is to be able to shed the use of MATLAB for the analysis of our >> simulation results and instead use python with scipy/numpy/matplotlib etc. >> I've successfully installed and compiled optimized numpy/scipy and all the >> supporting packages (ATLAS, FFTW, etc). >> >> So far so good. >> >> To the point - I have two questions: >> >> 1) We would like to have a "scope" to monitor simulation outputs in real >> time. We're using one tool that can take data over a tcp/ip port, but is >> clunky and only works on a single platform. Does such a thing exists within >> the python realm for plotting data in real time? >> > > Not sure of your detailed needs, but I can point out some options. > One typical problem in python 'scope applications is speed. Most > current graphics solutions weren't designed with realtime use in mind. > Nonetheless, the ones that are being actively developed (e.g. > matplotlib) are better than they used to be. > > Pmw-Blt. Where can I find information about Pwm-Blt ? > This is pretty fast, but development has all but stopped. > But not entirely stopped. Too bad. It's the fastest solution that I > found in a not-exhaustive search. 
Look out: the Linux version of the > current release has a library bug, and it's necessary to set an > environment variable to get it to work. This is not documented, AFAIK. > > I'm currently writing a realtime data acquisition/display application, > but speed is not critical. I'm using Enthought Tool Suite (ETS) and > Traits, and Chaco for data display. IMHO, I think that in five years > ETS/Traits will be the most commonly used framework for scientific > applications. > I too am writing one/two scope-like units (a rewrite of PyPlot), because MatPlotLib was too slow ;-) I'm aiming at 16 channels at 1 kHz per channel. cheers, Stef > > >> 2) Our simulation creates large (1-4 GB) data sets. Since we're writing this >> simulation ourselves (C++) we can save the data in any format. Does anyone >> have a suggestion for a specific format or API that's been found to be >> optimal in terms of memory usage and ability to import into python for >> analysis and plotting? >> >> Thank you for any suggestions. We're still new with Python, so I apologize >> if these questions seem mundane. >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From david.huard at gmail.com Thu Apr 3 14:20:14 2008 From: david.huard at gmail.com (David Huard) Date: Thu, 3 Apr 2008 14:20:14 -0400 Subject: [SciPy-user] Gaussian copula In-Reply-To: <674a602a0804030846v5c801230m6941e32247b06335@mail.gmail.com> References: <674a602a0804030846v5c801230m6941e32247b06335@mail.gmail.com> Message-ID: <91cf711d0804031120p6eac0a73i2731bb56c574212f@mail.gmail.com> I am not aware of any copula package for python. There is one in R though, that you could link to using rpy. See http://cran.r-project.org/web/packages/ David 2008/4/3, Dorian : > > Hi all, > > Is there any way or link which could help me to plot a Gaussian copula? > > Thanks in advance, > > Dorian > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From fredmfp at gmail.com Thu Apr 3 15:20:04 2008 From: fredmfp at gmail.com (fred) Date: Thu, 03 Apr 2008 21:20:04 +0200 Subject: [SciPy-user] conforming to Python GIL... Message-ID: <47F52DE4.0@gmail.com> Hi, I use a lot of ConVeX OPTimisation (cvxopt) and fortran (via f2py) routines in my Traits app. As I want to compute the data and display them at the same time, I use threads. The issue I get is that the displayed data (using Chaco2) are not updated (the app is frozen) while the input data are being computed. From D. Morrill's answer (the Traits guru ;-)), it appears that cvxopt (and solve() from scipy, in fact) and fortran modules do not release the Python GIL (Global Interpreter Lock). This is very bad to hear, as the display is not updated (after a window popup, left blank, one can't handle the displayed data's properties). Can someone give a helping hand on this issue? TIA. Cheers, -- http://scipy.org/FredericPetit From dwf at cs.toronto.edu Thu Apr 3 15:21:27 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 3 Apr 2008 15:21:27 -0400 Subject: [SciPy-user] optimize.fmin_bfgs only returning a scalar for xopt?
Message-ID: <0EFFDD53-C67A-4AC8-A698-042495855D68@cs.toronto.edu> Hi folks, I just checked out a fresh copy of scipy this morning and I'm having an odd problem with optimize.fmin_bfgs (and actually fmin_cg too). Namely, the function I give it (and the initial guess) use 1d-array arguments of length > 1, but I only get a scalar back (specifically a numpy.float64) in the first tuple position, which, if I'm reading the docs right, should contain the value of x that minimizes the function you provided. This happens whether or not I supply a gradient function. Furthermore, my objective function actually raises an exception if it's called with a scalar arg instead of an array of the right length. Is this a bug? Can anyone confirm this behaviour in r4076? Or am I doing it wrong? Thanks, David From nwagner at iam.uni-stuttgart.de Thu Apr 3 15:30:17 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Apr 2008 21:30:17 +0200 Subject: [SciPy-user] optimize.fmin_bfgs only returning a scalar for xopt? In-Reply-To: <0EFFDD53-C67A-4AC8-A698-042495855D68@cs.toronto.edu> References: <0EFFDD53-C67A-4AC8-A698-042495855D68@cs.toronto.edu> Message-ID: On Thu, 3 Apr 2008 15:21:27 -0400 David Warde-Farley wrote: > Hi folks, > > I just checked out a fresh copy of scipy this morning >and I'm having > an odd problem with optimize.fmin_bfgs (and actually >fmin_cg too). > > Namely, the function I give it (and the initial guess) >use 1d-array > arguments of length > 1, but I only get a scalar back >(specifically a > numpy.float64) in the first tuple position, which, if >I'm reading the > docs right, should contain the value of x that minimizes >the function > you provided. > > This happens whether or not I supply a gradient >function. >Furthermore, my objective function actually raises an >exception if > it's called with a scalar arg instead of an array of the >right length. > > Is this a bug? Can anyone confirm this behaviour in >r4076? Or am I > doing it wrong? > > Thanks, > > David > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Can you provide us with your example code ? Nils From marcotuckner at public-files.de Thu Apr 3 15:41:22 2008 From: marcotuckner at public-files.de (Marco Tuckner) Date: Thu, 3 Apr 2008 19:41:22 +0000 (UTC) Subject: [SciPy-user] removing certain dates from a time series References: <200804011751.18086.pgmdevlist@gmail.com> <200804031138.04824.pgmdevlist@gmail.com> Message-ID: Hello, > > But I don't want to use the data of 29-Feb at all neither masked nor > > interpoled. If I mask 29-Feb and continue my timeseries operation I would > > for instance get different monthly or annual averages. > > Please give me an example where masking Feb 29 wouldn't give you the result > you expect. As you are that confident, I will need to investigate further... > > Therefore I would like to exclude that date completely. > The problem with deleting these dates is that you won't be able to use the > convert function afterwards, as it requires a regularly-spaced time series to > work. Ah, OKI I forgot about this. But then I just would like to bend this date (29-Feb) out when writing the report [timseries.Report(series)]. How would you do that? Or would you recommend to reopen the csv-file again afterwards to delete the records? Kind regards, Marco P.S. I think that many problems I have using timeseries arise from not knowing how maskedarray works. 
If only the new maskedarray were as well documented as the time series package [1], I would find my way around much more easily! Is there any documentation for maskedarray like: [1] http://scipy.org/scipy/scikits/wiki/TimeSeries From oliphant at enthought.com Thu Apr 3 15:44:43 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Thu, 03 Apr 2008 14:44:43 -0500 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <47F52DE4.0@gmail.com> References: <47F52DE4.0@gmail.com> Message-ID: <47F533AB.6080807@enthought.com> fred wrote: > Hi, > > I use a lot of ConVeX OPTimisation (cvxopt) and fortran (via f2py) routines in my > Traits app. > > As I want to compute the data and display them at the same time, I use threads. > > The issue I get is that the displayed data (using Chaco2) are not updated > (the app is frozen) while the input data are being computed. > > From D. Morrill's answer (the Traits guru ;-)), it appears that cvxopt > (and solve() from scipy, in fact) and fortran modules do not release > the Python GIL (Global Interpreter Lock). > This requires a bit of effort to solve. We need to, in multiple places: 1) release the GIL, and 2) put semaphores into code that is not thread-safe. f2py should be modified to handle this. Other code should be modified to handle it as well. I suspect NumPy should grow an API to handle the semaphore part easily (I think Python already has one), so these may be able to be wrapped into the MACROS already available for releasing the GIL. -Travis O. From pgmdevlist at gmail.com Thu Apr 3 16:03:13 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 3 Apr 2008 16:03:13 -0400 Subject: [SciPy-user] removing certain dates from a time series In-Reply-To: References: <200804031138.04824.pgmdevlist@gmail.com> Message-ID: <200804031603.13587.pgmdevlist@gmail.com> On Thursday 03 April 2008 15:41:22 Marco Tuckner wrote: > But then I just would like to leave this date (29-Feb) out when writing the > report [timeseries.Report(series)]. How would you do that? > Or would you recommend to reopen the csv-file again afterwards to delete > the records? Ah, I'll forward your email to Matt Knox, who wrote the report module. I never used it myself... But if you need to get rid of masked dates/data for one reason or another, you can just use the .compressed() method:

>>> import numpy
>>> import numpy.ma as ma
>>> import scikits.timeseries as ts
>>> series = ts.time_series(numpy.arange(1,51),
...                         start_date=ts.Date('D',string='2000-02-01'))
>>> series[(series.months==2)&(series.days==29)] = ma.masked
>>> print series.compressed()
[ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50]

> P.S. I think that many problems I have using timeseries arise from not > knowing how maskedarray works. If only the new maskedarray were > as well documented as the time series package [1], I would find my way around > much more easily! Is there any documentation for maskedarray like: > [1] http://scipy.org/scipy/scikits/wiki/TimeSeries Nope, and I take full blame for it: I never really had the time nor the gumption to write one. I'd be quite grateful if you or any volunteer could start one, so that we could all append it as needed. From dwf at cs.toronto.edu Thu Apr 3 16:18:28 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 3 Apr 2008 16:18:28 -0400 Subject: [SciPy-user] optimize.fmin_bfgs only returning a scalar for xopt?
In-Reply-To: References: <0EFFDD53-C67A-4AC8-A698-042495855D68@cs.toronto.edu> Message-ID: On 3-Apr-08, at 3:30 PM, Nils Wagner wrote: > Can you provide us with your example code ? Sure. Here I'm just attempting to fit a logistic regression model.

def lr_probs(coeffs, data, target):
    bias = coeffs[0]
    coeffs = coeffs[1:]
    eta = bias + N.sum(coeffs * data, axis=1)
    probs = 1 / (1 + N.exp(-eta))
    return probs

def lr_nloglik(coeffs, data, target, decay=0):
    probs = lr_probs(coeffs, data, target)
    bias = coeffs[0]
    coeffs = coeffs[1:]
    L = N.sum(target * probs)
    if decay:
        L = L - decay * (bias**2 + N.sum(coeffs**2))
    return -L

def lr_fit(data, target, decay=None):
    #initial = S.randn(data.shape[1] + 1) * 0.01
    initial = N.zeros(data.shape[1] + 1)
    beta, lik, fcalls, gcalls = opt.fmin_bfgs(lr_nloglik, initial, None,
                                              args=(data,target,decay))

From dwf at cs.toronto.edu Thu Apr 3 16:25:20 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 3 Apr 2008 16:25:20 -0400 Subject: [SciPy-user] optimize.fmin_bfgs only returning a scalar for xopt? In-Reply-To: References: <0EFFDD53-C67A-4AC8-A698-042495855D68@cs.toronto.edu> Message-ID: On 3-Apr-08, at 4:18 PM, David Warde-Farley wrote: > On 3-Apr-08, at 3:30 PM, Nils Wagner wrote: > >> Can you provide us with your example code ? > > Sure. Here I'm just attempting to fit a logistic regression model. ... Okay, I figured out the problem. The documentation for those methods is slightly misleading; I didn't see the mention of the full_output parameter. I just happened to be using a function that takes a 4d array as its argument, so it was unpacking correctly. :P Neeeeevermind, but thanks anyway. David From marcotuckner at public-files.de Thu Apr 3 18:38:43 2008 From: marcotuckner at public-files.de (Marco Tuckner) Date: Thu, 3 Apr 2008 22:38:43 +0000 (UTC) Subject: [SciPy-user] removing certain dates from a time series References: <200804031138.04824.pgmdevlist@gmail.com> <200804031603.13587.pgmdevlist@gmail.com> Message-ID: > can just use the .compressed() method: ... I'll test that and come back to you when I've finished. It's late here ;-) >> as well documented as the time series package [1], I would find my way around >> much more easily! Is there any documentation for maskedarray like: >> [1] http://scipy.org/scipy/scikits/wiki/TimeSeries > > Nope, and I take full blame for it: I never really had the time nor the > gumption to write one. I'd be quite grateful if you or any volunteer could > start one, so that we could all append it as needed. OKI, I will see what I can do. This will be put onto the scipy wiki, right? Thanks for your patience. Marco From marcotuckner at public-files.de Thu Apr 3 18:52:26 2008 From: marcotuckner at public-files.de (Marco Tuckner) Date: Thu, 3 Apr 2008 22:52:26 +0000 (UTC) Subject: [SciPy-user] fill a timeseries with masked data by correlation from another series Message-ID: Hello, on the timeseries Scikit wiki it is shown how to perform simple operations on two or more time series of the same frequency, such as adding them [1]. Is it also possible to perform mathematical and statistical operations on two or more timeseries with the same frequency? I would like to correlate two (or more) timeseries to estimate invalid and masked values in one series based on the values of another, complete series using the correlation coefficient. How can I do that?
I tried this without success:
### CODE ###
In [22]: scipy.stats.corrcoef(series,modseries)
/usr/lib/python2.5/site-packages/maskedarray/core.py:1520: UserWarning: Warning: converting a masked element to nan.
  warnings.warn("Warning: converting a masked element to nan.")
Out[22]:
array([[ nan,  nan],
       [ nan,  nan]])
######
Furthermore I would like to calculate other tests like the t-test, chi-square test and Kolmogorov-Smirnov test on the two sets, and get distribution parameters (RMSE, MBE, STD) for each series. How can this be done? Thank you in advance for your help. Regards, Marco [1]: Operations on TimeSeries - http://scipy.org/scipy/scikits/wiki/TimeSeries#OperationsonTimeSeries From marcotuckner at public-files.de Thu Apr 3 19:24:49 2008 From: marcotuckner at public-files.de (Marco Tuckner) Date: Thu, 3 Apr 2008 23:24:49 +0000 (UTC) Subject: [SciPy-user] creating timeseries for non convertional custom frequencies Message-ID: Hello, thanks to the help of the time series developers I am stepping gradually into time series processing with Python. I have another issue creating timeseries objects: How do I create a timeseries object with custom or irregular frequencies? Here is a more verbose explanation of what I want: I have data that has been recorded from a data logger. Due to memory constraints, the logger has been set to only save the observations on a 5-minute basis (1 data point every 5 minutes). How do I create an hourly data set / timeseries from such data? To give two more examples: * Another data set I have has only one data point every 6 hours (NCAR reanalysis data). How do I convert such data into normal frequencies such as daily or monthly? * A restaurant records statistics about its guests. The place is closed every Monday, so there will not be any attendance numbers for Mondays. If I use the daily frequency the timeseries will get messed up: it should not count the Mondays as "empty". Basically, I am looking for a way to create my time series object with data that is incomplete by design, irregular, or of a non-conventional frequency. Something like the business day frequency, with the difference that the gaps are different. Thanks in advance for your help! Kind regards, Marco From pgmdevlist at gmail.com Thu Apr 3 19:21:13 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 3 Apr 2008 19:21:13 -0400 Subject: [SciPy-user] fill a timeseries with masked data by correlation from another series In-Reply-To: References: Message-ID: <200804031921.13651.pgmdevlist@gmail.com> On Thursday 03 April 2008 18:52:26 Marco Tuckner wrote: > Is it also possible to perform mathematical and statistical operations on > two or more timeseries with the same frequency? Mmh, that'll depend on what you want to do. More specific questions would be better answered... As long as a function works with MaskedArrays, it should work with TimeSeries. > I would like to correlate two (or more) timeseries to estimate invalid and > masked values in one series based on the values of another, complete series > using the correlation coefficient. > How can I do that? > > I tried this without success: > ### CODE ### > In [22]: scipy.stats.corrcoef(series,modseries) scipy.stats.corrcoef doesn't accept masked arrays. More exactly, it transforms a masked array into a regular ndarray, therefore losing the mask. Most of the functions of scipy.stats work that way. I'm currently rewriting (most of) them to support masked arrays, hopefully I'll be able to post something in the next few days. Where should I post them ? In scipy.stats.mstats ? In numpy.ma ?
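In the meantime, the usual workaround is to correlate only the dates where both series carry data. A minimal sketch (the helper name is made up):

import numpy
import numpy.ma as ma

def masked_corrcoef(x, y):
    # Keep only the positions that are unmasked in both series.
    common = ma.mask_or(ma.getmask(x), ma.getmask(y))
    xc = ma.array(x, mask=common).compressed()
    yc = ma.array(y, mask=common).compressed()
    return numpy.corrcoef(xc, yc)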
> Furthermore I would like to calculate other tests like the t-test, chi-square test
> and Kolmogorov-Smirnov test on the two sets, and get distribution parameters (RMSE,
> MBE, STD) for each series.
> How can this be done?
For the basic statistical tests in scipy.stats, see above.
For STD, you can use the std method.
For RMSE, you could try to sum the squares of the anomalies.
For MBE, you're losing me: Member of the British Empire?
From fredmfp at gmail.com Thu Apr 3 20:08:03 2008 From: fredmfp at gmail.com (fred) Date: Fri, 04 Apr 2008 02:08:03 +0200 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <47F533AB.6080807@enthought.com> References: <47F52DE4.0@gmail.com> <47F533AB.6080807@enthought.com> Message-ID: <47F57163.7090605@gmail.com> Travis E. Oliphant a écrit : > This requires a bit of effort to solve. We need to, in multiple places... Hum, I do understand, but... It's very hard to believe that I am the only guy in the entire universe who encounters this kind of issue, eh? ;-))) IOW... how do scipy users manage??? Cheers, -- http://scipy.org/FredericPetit From marcotuckner at public-files.de Thu Apr 3 20:10:00 2008 From: marcotuckner at public-files.de (Marco Tuckner) Date: Fri, 4 Apr 2008 00:10:00 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?fill_a_timeseries_with_masked_data_by_corr?= =?utf-8?q?elation=09from_another_series?= References: <200804031921.13651.pgmdevlist@gmail.com> Message-ID: > Mmh, that'll depend on what you want to do. More specific questions would be > better answered... That was meant as an introduction to the questions that followed ;-) >> I would like to correlate two (or more) timeseries to estimate invalid and >> masked values in one series based on the values of another, complete series >> using the correlation coefficient. >> How can I do that? >> >> I tried this without success: >> ### CODE ### >> In [22]: scipy.stats.corrcoef(series,modseries) > > scipy.stats.corrcoef doesn't accept masked arrays. More exactly, it transforms > a masked array into a regular ndarray, therefore losing the mask. Most of the > functions of scipy.stats work that way. I'm currently rewriting (most of) > them to support masked arrays, Good news! > hopefully I'll be able to post something in the > next few days. Where should I post them ? In scipy.stats.mstats ? In > numpy.ma ? I don't know because I am not a developer. But I would use scipy.stats.mstats. >> Furthermore I would like to calculate other tests like the t-test, chi-square test >> and Kolmogorov-Smirnov test on the two sets and get distribution parameters (RMSE, >> MBE, STD) for each series. >> How can this be done? Actually, I was able to compute the Kolmogorov-Smirnov test (KSI). But correlation wasn't possible. > For the basic statistical tests in scipy.stats, see above. > For STD, you can use the std method. > For RMSE, you could try to sum the squares of the anomalies. > For MBE, you're losing me: Member of the British Empire? Some publications call it Mean Bias Error: the mean of the signed differences between model and observations (related to, but not the same as, the http://en.wikipedia.org/wiki/Mean_squared_error). Thanks a lot for your efforts!
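For reference, a minimal masked-array sketch of the two quantities discussed above (the helper names are made up, and MBE is taken to be the mean signed error):

import numpy
import numpy.ma as ma

def rmse(obs, model):
    # Root-mean-square error over the unmasked pairs only.
    d = ma.asarray(model) - ma.asarray(obs)
    return numpy.sqrt((d ** 2).mean())

def mbe(obs, model):
    # Mean bias error: the average signed deviation.
    return (ma.asarray(model) - ma.asarray(obs)).mean()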
From pgmdevlist at gmail.com Thu Apr 3 20:17:47 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 3 Apr 2008 20:17:47 -0400 Subject: [SciPy-user] creating timeseries for non convertional custom frequencies In-Reply-To: References: Message-ID: <200804032017.48109.pgmdevlist@gmail.com> Marco, Letting the user define their own frequency is on our to-do list, but not very high: the main problem we have is that it would require some significant C hacking, which I'm not prepared to do (and I know that Matt is swamped as well). However, in most cases, you can work around this by using the fact that your data don't have to be regularly spaced in time. In this example:

>>> import numpy
>>> import scikits.timeseries as ts, numpy.ma as ma
>>> series = ts.time_series(numpy.arange(12), start_date=ts.now('M'))
>>> newseries = series[::2]

newseries has a monthly frequency, but gaps. If you have your two series of dates and corresponding data, you can create your series with time_series(data, dates=dates), regardless of the potential gaps. If you don't have the dates but only the starting point, you can try to create a temporary list of dates, regularly spaced, and then take only the dates you want. For example, we could have created the newseries of the previous example with:

>>> ts.time_series(numpy.arange(12),
...                dates=ts.date_array(freq='M', start_date=ts.now('M'),
...                                    length=24)[::2])

> I have data that has been recorded from a data logger. Due to memory > constraints, the logger has been set to only save the observations > on a 5-minute basis (1 data point every 5 minutes). > How do I create an hourly data set / timeseries from such data? Import your data at a minute frequency, fill it (with fill_missing_dates), and convert it to hourly: you'll have to decide what to do with your 5-min data: sum per hour ? Average per hour ? > To give two more examples: > * Another data set I have has only one data point every 6 hours (NCAR > reanalysis data). > How do I convert such data into normal frequencies such as daily or > monthly? Same thing: import it with an hourly frequency, fill it and convert it to the new frequency. > * A restaurant records statistics about its guests. The place is > closed every Monday, so there will not be any attendance numbers for > Mondays. If I use the daily frequency the timeseries will get messed up: > it should not count the Mondays as "empty". Mmh, could you use a daily frequency and mask every Monday ?

series[(series.weekday==0)] = masked

> Basically, I am looking for a way to create my time series object with data > that is incomplete by design, irregular, or of a non-conventional > frequency. Something like the business day frequency, with the difference > that the gaps are different. Once again, because TimeSeries objects are MaskedArrays with an extra array attached, you should be able to achieve what you want by masking the data you want. From pgmdevlist at gmail.com Thu Apr 3 20:21:10 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 3 Apr 2008 20:21:10 -0400 Subject: [SciPy-user] fill a timeseries with masked data by correlation from another series In-Reply-To: References: Message-ID: <200804032021.10982.pgmdevlist@gmail.com> On Thursday 03 April 2008 18:52:26 Marco Tuckner wrote: > I would like to correlate two (or more) timeseries to estimate invalid and > masked values in one series based on the values of another, complete series > using the correlation coefficient. > How can I do that? Thinking about it, there's a simple solution if you're in a hurry: don't use masked data.
Of course, you have to make sure to be consistent...

- Make sure that your two series are aligned (that they start at the same date and have the same frequency and length).
- Create a common mask with something like commonmask = ma.mask_or(ma.getmask(series_1), ma.getmask(series_2)).
- Apply the mask to your two series: series_1.mask = series_2.mask = commonmask.
- Compress them to get rid of any missing values with the 'compressed' function.
- Perform the computation you want on the compressed data.

You may have to copy some data when needed, but you have the gist of it. Let me know how it goes. From gael.varoquaux at normalesup.org Thu Apr 3 20:42:26 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 4 Apr 2008 02:42:26 +0200 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <47F57163.7090605@gmail.com> References: <47F52DE4.0@gmail.com> <47F533AB.6080807@enthought.com> <47F57163.7090605@gmail.com> Message-ID: <20080404004226.GI22774@phare.normalesup.org> On Fri, Apr 04, 2008 at 02:08:03AM +0200, fred wrote: > Travis E. Oliphant a écrit : > > This requires a bit of effort to solve. We need to, in multiple places... > Hum, I do understand, but... > It's very hard to believe that I am the only guy in the entire universe > who encounters this kind of issue, eh? ;-))) > IOW... how do scipy users manage??? Sorry, Fred, to be bringing you the bad news. You must have been hiding in a hole for a few years if you weren't aware of it: Python is not terribly good at parallel computing in a shared-memory model. To achieve good parallel computing, you need to use several processes, as the processing module that Robert mentioned does. This issue has been mentioned on the scipy-user mailing list more than once. Cheers, Gaël From robert.kern at gmail.com Thu Apr 3 20:56:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 3 Apr 2008 19:56:30 -0500 Subject: [SciPy-user] fill a timeseries with masked data by correlation from another series In-Reply-To: References: Message-ID: <3d375d730804031756o1fa4d438h490f3a98ef7a2520@mail.gmail.com> On Thu, Apr 3, 2008 at 5:52 PM, Marco Tuckner wrote: > I would like to correlate two (or more) timeseries to estimate invalid and > masked values in one series based on the values of another, complete series using > the correlation coefficient. > How can I do that? With difficulty. If your data is close enough to multivariate normal, then one can use an Expectation-Maximization (EM) method to jointly estimate the common mean and covariance along with the missing data. This is fairly common in financial circles for measuring risk. I don't have an Internet reference on hand, but the book _Computational Statistics_ by Givens and Hoeting has a chapter on this. http://www.amazon.com/Computational-Statistics-Wiley-Probability/dp/0471461245 In my experience, it is slow and may not converge. The approach that Pierre suggests (finding a common mask where you have data for all time series) is good for 2 series (in fact, it's probably optimal), but with increasing numbers of series, you will most likely lose too many days to make a reasonable estimate. Yet another approach is to find the correlations between each pair of series using the common mask for each pair. This will almost certainly give you an invalid correlation matrix (all eigenvalues must be >= 0), but from that you can find the closest valid correlation matrix.
There are a couple of ways you could implement this; the one I've used successfully is called Alternating Projections. http://citeseer.ist.psu.edu/higham02computing.html To impute the missing data, you essentially apply the "Expectation" step of the EM method. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mattknox_ca at hotmail.com Thu Apr 3 20:59:40 2008 From: mattknox_ca at hotmail.com (Matt Knox) Date: Fri, 4 Apr 2008 00:59:40 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?creating_timeseries_for_non_convertional_c?= =?utf-8?q?ustom=09frequencies?= References: Message-ID: > How do I create a timeseries object with custom or irregular frequencies? > > Here is a more verbose explanation of what I want: > I have data that has been recorded from a data logger. Due to memory > constraints, the logger has been set to only save the observations > on a 5-minute basis (1 data point every 5 minutes). > How do I create an hourly data set / timeseries from such data? Custom frequencies per se are not currently supported. However, strictly speaking a TimeSeries object does not have to have continuous dates. A 'minutely' frequency series can have 1 data point every five minutes if you want. For example...

=============================
import scikits.timeseries as ts

dates = []
for x in range(5):
    dates.append(ts.Date(freq='minutely', year=2008, month=1, day=1,
                         hour=1, minute=(x+1)*5))
series = ts.time_series(range(5), dates=dates, freq='minutely')
=============================

The above code generates a minutely frequency series with irregularly spaced dates. You can even convert this series to other frequencies; however, it will be "stretched out" to a regularly spaced TimeSeries first (with masked values inserted) before doing so. So you could do... daily_series = series.convert('daily') which would result in a 2d series with a lot of masked values, but if you are just interested in things like averages, standard deviations, etc. you can use the relevant functions from the numpy.ma module and it will work fine. All that being said... in a perfect world there would be a way to define custom frequencies like "once every 5 minutes", which wouldn't involve a bunch of extra masked values when converting to other frequencies. But this simply isn't implemented yet. It is something that has been contemplated before, but it will be non-trivial to implement, and Pierre and I have no pressing need for it as far as I know, so we will have to wait for a motivated volunteer to come along who is in desperate need of this functionality for it to be implemented. - Matt From lilyphysik at gmail.com Thu Apr 3 21:06:45 2008 From: lilyphysik at gmail.com (=?UTF-8?B?5q615Lie5Y2a?=) Date: Fri, 04 Apr 2008 09:06:45 +0800 Subject: [SciPy-user] TypeError: can't convert complex to float; use abs(z) In-Reply-To: References: Message-ID: <47F57F25.9020705@gmail.com> Yes, I noticed that gij returns a 2d array; if it returned a complex number instead of an array, the result would be right. I re-examined the function "gij" and found the error.
>> you cannot assign to the element b[0,0], the value of the matrix c, but you >> can assign it the value of the _only_ element of c, i.e. c.item(). >> >> your last line become: >> b[0,0] = c.item() >> > > Correct. However, it's still a weird error message that may be > indicative of incorrect code somewhere in numpy. > > From peridot.faceted at gmail.com Thu Apr 3 21:20:18 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 3 Apr 2008 21:20:18 -0400 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <47F57163.7090605@gmail.com> References: <47F52DE4.0@gmail.com> <47F533AB.6080807@enthought.com> <47F57163.7090605@gmail.com> Message-ID: On 03/04/2008, fred wrote: > Travis E. Oliphant a ?crit : > > > > This requires a bit of effort to solve. We need to in multiple places... > > Hum, I do understand, but... > > It's very hard to believe that I am the only guy in the entire universe > who encounters this kind of issue, uh ? ;-))) > > IOW... how do scipy users do ??? You're not; it's in the FAQ, and there's a page devoted to it: http://www.scipy.org/ParallelProgramming I am surprised by the *particular* problem you have, that is, that the GIL is not released *at all* for long periods by scipy; I think most large pieces of compiled code in numpy/scipy operate outside the GIL (and interpreted code releases the GIL frequently). This makes it not just a parallel processing issue. Anne From oliphant at enthought.com Thu Apr 3 21:58:18 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Thu, 03 Apr 2008 20:58:18 -0500 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: References: <47F52DE4.0@gmail.com> <47F533AB.6080807@enthought.com> <47F57163.7090605@gmail.com> Message-ID: <47F58B3A.1070100@enthought.com> Anne Archibald wrote: > On 03/04/2008, fred wrote: > >> Travis E. Oliphant a ?crit : >> >> >> > This requires a bit of effort to solve. We need to in multiple places... >> >> Hum, I do understand, but... >> >> It's very hard to believe that I am the only guy in the entire universe >> who encounters this kind of issue, uh ? ;-))) >> >> IOW... how do scipy users do ??? >> > > You're not; it's in the FAQ, and there's a page devoted to it: > http://www.scipy.org/ParallelProgramming > > I am surprised by the *particular* problem you have, that is, that the > GIL is not released *at all* for long periods by scipy; I think most > large pieces of compiled code in numpy/scipy operate outside the GIL > (and interpreted code releases the GIL frequently). This makes it not > just a parallel processing issue. > I'm surprised that f2py is not helping us release the GIL more often. On the other-hand, releasing the GIL automatically is dangerous if the code that could possibly execute in the same thread is not re-entrant. I had thought that f2py was releasing the GIL more often than it actually is. This is a problem and should be fixed -- perhaps by the addition of a default semaphore for code that may not be re-entrant. -Travis O. From peridot.faceted at gmail.com Thu Apr 3 22:19:08 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 3 Apr 2008 22:19:08 -0400 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <47F58B3A.1070100@enthought.com> References: <47F52DE4.0@gmail.com> <47F533AB.6080807@enthought.com> <47F57163.7090605@gmail.com> <47F58B3A.1070100@enthought.com> Message-ID: On 03/04/2008, Travis E. Oliphant wrote: > Anne Archibald wrote: > > On 03/04/2008, fred wrote: > > > >> Travis E. 
Oliphant a ?crit : > >> > >> > >> > This requires a bit of effort to solve. We need to in multiple places... > >> > >> Hum, I do understand, but... > >> > >> It's very hard to believe that I am the only guy in the entire universe > >> who encounters this kind of issue, uh ? ;-))) > >> > >> IOW... how do scipy users do ??? > >> > > > > You're not; it's in the FAQ, and there's a page devoted to it: > > http://www.scipy.org/ParallelProgramming > > > > I am surprised by the *particular* problem you have, that is, that the > > GIL is not released *at all* for long periods by scipy; I think most > > large pieces of compiled code in numpy/scipy operate outside the GIL > > (and interpreted code releases the GIL frequently). This makes it not > > just a parallel processing issue. > > > > I'm surprised that f2py is not helping us release the GIL more often. > On the other-hand, releasing the GIL automatically is dangerous if the > code that could possibly execute in the same thread is not re-entrant. > > I had thought that f2py was releasing the GIL more often than it > actually is. This is a problem and should be fixed -- perhaps by the > addition of a default semaphore for code that may not be re-entrant. What are the ways in which FORTRAN code can fail to be reentrant? My impression is that this is mostly the result of using statically-allocated storage (which may be built into the code's calling convention). Can one safely assume (default to) that FORTRAN code does not manipulate python interpreter internals, so that FORTRAN code without special annotations could go under its own global semaphore (but not under the GIL)? Can one safely assume that FORTRAN code in separate compilation units can't step on each other's toes? After all f2py knows rather a lot about what is linked to what during compilation... And of course letting a user easily create and use named locks would make it fairly easy to specify which code needed to avoid being run concurrently. Anne From pearu at cens.ioc.ee Fri Apr 4 02:04:28 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 4 Apr 2008 09:04:28 +0300 (EEST) Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <47F52DE4.0@gmail.com> References: <47F52DE4.0@gmail.com> Message-ID: <52711.88.89.195.179.1207289068.squirrel@cens.ioc.ee> On Thu, April 3, 2008 10:20 pm, fred wrote: > Hi, > > I use a lot of ConVeX OPTimsation and fortran (via f2py) routines in my > Traits app. > > As I want to compute the data and want to display them, I use threads. > > The issue I get is that data displayed (using Chaco2) are not updated > (app is frozen) while computing the input data. > > From D. Morrill's answer (the Traits guru ;-)), it appears that cvxopt > (and solve() from scipy, in fact) and fortran modules does not release > the "Python GIL (Global Intepreter Lock)". > > This is very bad to hear as the display is not updated (after a window > popup, left blank, or one can't handle properties display data). > > > Someone can gives an helping hand on this issue ? Have you tried to use f2py `threadsafe` statement when wrapping fortran code? With the `threadsafe` statement the fortran call will be surrounded with Py_BEGIN_ALLOW_THREADS-Py_END_ALLOW_THREADS block. 
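In a signature file this looks something like the following sketch (the module and routine names are made up):

python module _mysolver ! hypothetical wrapper name
  interface
    subroutine iterate(x, n)
      threadsafe ! f2py brackets the call with Py_BEGIN/END_ALLOW_THREADS
      double precision dimension(n), intent(inout) :: x
      integer, intent(hide), depend(x) :: n = len(x)
    end subroutine iterate
  end interface
end python module _mysolver

With that statement in place the Fortran routine runs with the GIL released, so a GUI thread can keep redrawing while it computes.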
HTH, Pearu From marcotuckner at public-files.de Fri Apr 4 04:07:00 2008 From: marcotuckner at public-files.de (Marco Tuckner) Date: Fri, 4 Apr 2008 08:07:00 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?creating_timeseries_for_non_convertional_c?= =?utf-8?q?ustom=09frequencies?= References: Message-ID: Hello Matt, hello Pierre. Thank you both for your comments and suggestions. I will try and check if your ideas work for me. I will report back. Either here on list or by puting the solution on the wiki (needs time, patience, please). As I mentioned, I still have to get better skills in maskedarray. Kind regards, Marco From marcotuckner at public-files.de Fri Apr 4 05:19:42 2008 From: marcotuckner at public-files.de (Marco Tuckner) Date: Fri, 4 Apr 2008 09:19:42 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?fill_a_timeseries_with_masked_data_by_corr?= =?utf-8?q?elation=09from_another_series?= References: <3d375d730804031756o1fa4d438h490f3a98ef7a2520@mail.gmail.com> Message-ID: Hello. > This is fairly common in financial circles for measuring risk. I don't > have an Internet reference on-hand, but the book _Computational > Statistics_ by Givens and Hoeting has a chapter on this. > > http://www.amazon.com/Computational-Statistics-Wiley-Probability/dp/0471461245 When talking about literature: I think we can look at Time Series Analysis and Its Applications: With R Examples http://www.stat.pitt.edu/stoffer/tsa2/index.html Maybe it offers some help for the timeseries scikit. > The approach that Pierre suggests (finding a common mask where you > have data for all time series) is good for 2 series (in fact, it's > probably optimal), but with increasing numbers of series, you will > most likely lose too many days to make a reasonable estimate. Again, I will just test in a quiet minute what you all suggested and give a feedback. Kind regards, Marco From fredmfp at gmail.com Fri Apr 4 06:52:48 2008 From: fredmfp at gmail.com (fred) Date: Fri, 04 Apr 2008 12:52:48 +0200 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <52711.88.89.195.179.1207289068.squirrel@cens.ioc.ee> References: <47F52DE4.0@gmail.com> <52711.88.89.195.179.1207289068.squirrel@cens.ioc.ee> Message-ID: <47F60880.4030100@gmail.com> Pearu Peterson a ?crit : > Have you tried to use f2py `threadsafe` statement when wrapping > fortran code? With the `threadsafe` statement the fortran call > will be surrounded with Py_BEGIN_ALLOW_THREADS-Py_END_ALLOW_THREADS block. Well... I heard so bad news yesterday, and today, so good... :-)))) You rock, Pearu ! :-))) Tons of thanks. Cheers, -- Fred, _very_ happy. From oliphant at enthought.com Fri Apr 4 10:06:00 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 04 Apr 2008 09:06:00 -0500 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <52711.88.89.195.179.1207289068.squirrel@cens.ioc.ee> References: <47F52DE4.0@gmail.com> <52711.88.89.195.179.1207289068.squirrel@cens.ioc.ee> Message-ID: <47F635C8.7080001@enthought.com> Pearu Peterson wrote: > On Thu, April 3, 2008 10:20 pm, fred wrote: > >> Hi, >> >> I use a lot of ConVeX OPTimsation and fortran (via f2py) routines in my >> Traits app. >> >> As I want to compute the data and want to display them, I use threads. >> >> The issue I get is that data displayed (using Chaco2) are not updated >> (app is frozen) while computing the input data. >> >> From D. 
Morrill's answer (the Traits guru ;-)), it appears that cvxopt >> (and solve() from scipy, in fact) and fortran modules do not release >> the Python GIL (Global Interpreter Lock). >> >> This is very bad to hear, as the display is not updated (after a window >> popup, left blank, one can't handle the displayed data's properties). >> >> >> Can someone give a helping hand on this issue? >> > > Have you tried to use the f2py `threadsafe` statement when wrapping > fortran code? With the `threadsafe` statement the fortran call > will be surrounded with a Py_BEGIN_ALLOW_THREADS-Py_END_ALLOW_THREADS block. > I thought there was something like that. Apparently, the scipy interfaces do not use this statement, correct? Perhaps all code used to release the GIL, but then the threadsafe statement was added and now none of the scipy-f2py code does. -Travis O. From pearu at cens.ioc.ee Fri Apr 4 11:49:34 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 4 Apr 2008 18:49:34 +0300 (EEST) Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <47F635C8.7080001@enthought.com> References: <47F52DE4.0@gmail.com> <52711.88.89.195.179.1207289068.squirrel@cens.ioc.ee> <47F635C8.7080001@enthought.com> Message-ID: <53983.88.89.195.89.1207324174.squirrel@cens.ioc.ee> On Fri, April 4, 2008 5:06 pm, Travis E. Oliphant wrote: > > I thought there was something like that. Apparently, the scipy > interfaces do not use this statement, correct? Yes (except the lapack wrappers). This was decided some years ago, when most of us had single-CPU computers, and the idea then was that using `threadsafe` would add unnecessary overhead. Nowadays, I guess, all f2py wrappers could have threadsafe enabled. > Perhaps all code used to > release the GIL, but then the threadsafe statement was added and now > none of the scipy-f2py code does. What do you mean? Currently only the lapack wrappers in scipy use threadsafe, and there have been no attempts to use it elsewhere in scipy, afaik. I don't understand whether you consider that a bad or a good thing. Regards, Pearu From doreen at aims.ac.za Fri Apr 4 13:01:49 2008 From: doreen at aims.ac.za (Doreen Mbabazi) Date: Fri, 4 Apr 2008 19:01:49 +0200 (SAST) Subject: [SciPy-user] Getting the set of parameters from a leastsq function at each iteration. In-Reply-To: References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> <40576.192.168.42.175.1207128154.squirrel@webmail.aims.ac.za> Message-ID: <49312.192.168.42.175.1207328509.squirrel@webmail.aims.ac.za> Hi, I am still working on my code and I have not yet been able to get optimal parameters. The problem is that my fitting function, which is a system of ODEs, is evaluated at only one set of parameters (the initial one), so the final result I get is the initial set of parameters. I need to find a way to integrate the system of ODEs at each iteration, as the set of parameters changes. In other words: is there a way to get the parameters at each iteration while using the leastsq function? If that is possible, how can I integrate my system of ODEs using these sets of parameters at each iteration? Some help please. Code is below.
Regards, Pearu From doreen at aims.ac.za Fri Apr 4 13:01:49 2008 From: doreen at aims.ac.za (Doreen Mbabazi) Date: Fri, 4 Apr 2008 19:01:49 +0200 (SAST) Subject: [SciPy-user] Getting the set of parameters from a leastsq function at each iteration. In-Reply-To: References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> <40576.192.168.42.175.1207128154.squirrel@webmail.aims.ac.za> Message-ID: <49312.192.168.42.175.1207328509.squirrel@webmail.aims.ac.za> Hi, I am still working on my code and I have not yet been able to get optimal parameters. The problem is that my fitting function, which integrates a system of ODEs, is evaluated at only one set of parameters (the initial one), so the final result I get is just the initial set of parameters. I need a way to integrate the system of ODEs at each iteration, as the set of parameters changes. In other words: is there a way to get the parameters at each iteration while using the leastsq function? If that is possible, how can I integrate my system of ODEs with those parameters at each iteration? Some help please. Code is below.

from scipy import *
from scipy.integrate import odeint
from scipy.optimize import leastsq
import scipy.io.array_import
import Gnuplot, Gnuplot.funcutils

gp = Gnuplot.Gnuplot(persist=1)  # , debug=1

def residuals(p, V, t):
    err = V - S(t, p)
    return err

# y[0]=T, y[1]=T*, y[2]=V, lamda=p[0], d=p[1], k=p[2], delta=p[3], pi=p[4], c=p[5]
initial_y = [10, 0, 10e-6]  # initial conditions T(0)=10 cells, T*(0)=0, V(0)=10e-6

filename = 'essay1.dat'  # file where the data is stored
data = scipy.io.array_import.read_array(filename)  # the data file is read
t = data[:,0]
V = data[:,1]

pname = ['lamda', 'd', 'k', 'delta', 'pi', 'c']
lamda_0 = 0.1; d_0 = 0.01; k_0 = 0.60e-3; delta_0 = 0.35; pi_0 = 800; c_0 = 3.0
p = array([lamda_0, d_0, k_0, delta_0, pi_0, c_0])

def f(y, t, p):
    y_dot = [0., 0., 0.]
    y_dot[0] = p[0] - p[1]*y[0] - p[2]*y[0]*y[2]
    y_dot[1] = p[2]*y[0]*y[2] - p[3]*y[1]
    y_dot[2] = p[4]*y[1] - p[5]*y[2]
    return y_dot

y = odeint(f, initial_y, t, args=(p,))

def S(t, p):
    v = y[:,2]
    return v

pbest = leastsq(residuals, p, args=(V, t), maxfev=2000)
print pbest[0]

Doreen. Rob Clewley wrote: > Hi Doreen, >
>> def residuals(p,y,t):
>>     return [V - S(t,p) for i in xrange(len(t))]
>
> Minpack expects an array type to be returned. You're returning a > python list. So try this: >
> def residuals(p,y,t):
>     return array([V - S(t_val,p) for t_val in t])
>
> Here I've also removed the unnecessary i index variable -- Python lets > you iterate directly over the contents of the list t. > > -Rob > > -- > Robert H. Clewley, Ph. D. > Assistant Professor > Department of Mathematics and Statistics > Georgia State University > 720 COE, 30 Pryor St > Atlanta, GA 30303, USA > > tel: 404-413-6420 fax: 404-651-2246 > http://www.mathstat.gsu.edu/~matrhc > http://brainsbehavior.gsu.edu/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From oliphant at enthought.com Fri Apr 4 13:09:03 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 04 Apr 2008 12:09:03 -0500 Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <53983.88.89.195.89.1207324174.squirrel@cens.ioc.ee> References: <47F52DE4.0@gmail.com> <52711.88.89.195.179.1207289068.squirrel@cens.ioc.ee> <47F635C8.7080001@enthought.com> <53983.88.89.195.89.1207324174.squirrel@cens.ioc.ee> Message-ID: <47F660AF.5020005@enthought.com> > > What do you mean? Currently only the lapack wrappers in scipy use threadsafe > and there have been no attempts to use it elsewhere in scipy, AFAIK. > I don't understand whether you consider that a bad or a good thing. > Really, the lapack wrappers have threadsafe? I'll have to check again. Yesterday, when I looked at the f2py-generated code for gesv, I did not see any evidence of the GIL being released. I suspect that the GIL should be released for most of the lapack code as well. Is there much overhead that way? -Travis O. From heytrent at gmail.com Fri Apr 4 13:35:18 2008 From: heytrent at gmail.com (Trent) Date: Fri, 4 Apr 2008 17:35:18 +0000 (UTC) Subject: [SciPy-user] Real-time plotting and data storage format questions References: <31488cc20804010936q2ab49167qa43a660b1f28e51b@mail.gmail.com> <47F26B9B.4050809@llnl.gov> <20080401170949.GK1301@phare.normalesup.org> <47F48828.5080703@astraw.com> <43B6FBFA-B02C-478C-BBAD-CAE1927C1A19@physics.ucsd.edu> Message-ID: Thanks to everyone for the recommendations.
I investigated HDF5, and given everyone's thoughts and the information I've read, it seems to be the best route for us to take. I'm a little overwhelmed with it right now (Rick, I may take you up on your offer for Q&A!) but excited about its prospects. Between the platform independence and the fact that it can be read by python (pytables), matlab and others, I can't think of a better choice. I just hope we can execute it properly! Thanks for everyone's suggestions. From peridot.faceted at gmail.com Fri Apr 4 13:47:01 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 4 Apr 2008 19:47:01 +0200 Subject: [SciPy-user] Getting the set of parameters from a leastsq function at each iteration. In-Reply-To: <49312.192.168.42.175.1207328509.squirrel@webmail.aims.ac.za> References: <59052.192.168.42.175.1206989619.squirrel@webmail.aims.ac.za> <20080331190712.GA6796@encolpuis> <42271.192.168.42.175.1206994141.squirrel@webmail.aims.ac.za> <40576.192.168.42.175.1207128154.squirrel@webmail.aims.ac.za> <49312.192.168.42.175.1207328509.squirrel@webmail.aims.ac.za> Message-ID: On 04/04/2008, Doreen Mbabazi wrote:
> y = odeint(f,initial_y,t,args=(p,))
>
> def S(t,p):
>     v = y[:,2]
>     return v
Here you are running odeint only once. You must run it every time S is evaluated.
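In outline, something like this (untested, reusing the names from your script):

def S(t, p):
    # re-integrate the ODE system with the current parameter values
    y = odeint(f, initial_y, t, args=(p,))
    return y[:,2]

def residuals(p, V, t):
    return V - S(t, p)

pbest = leastsq(residuals, p, args=(V, t), maxfev=2000)

That way leastsq sees the effect of every new parameter vector, at the cost of one integration per residual evaluation.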
Anne From pearu at cens.ioc.ee Fri Apr 4 13:53:17 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 4 Apr 2008 20:53:17 +0300 (EEST) Subject: [SciPy-user] conforming to Python GIL... In-Reply-To: <47F660AF.5020005@enthought.com> References: <47F52DE4.0@gmail.com> <52711.88.89.195.179.1207289068.squirrel@cens.ioc.ee> <47F635C8.7080001@enthought.com> <53983.88.89.195.89.1207324174.squirrel@cens.ioc.ee> <47F660AF.5020005@enthought.com> Message-ID: <62862.88.89.195.89.1207331597.squirrel@cens.ioc.ee> On Fri, April 4, 2008 8:09 pm, Travis E. Oliphant wrote: > >> >> What do you mean? Currently only the lapack wrappers in scipy use threadsafe >> and there have been no attempts to use it elsewhere in scipy, AFAIK. >> I don't understand whether you consider that a bad or a good thing. >> > Really, the lapack wrappers have threadsafe? I'll have to check again. > Yesterday, when I looked at the f2py-generated code for gesv, I did not > see any evidence of the GIL being released. Apparently not all lapack wrappers have it. I just did `grep threadsafe ..` and it showed at least 3 matches. > I suspect that the GIL should be released for most of the lapack code as > well. Is there much overhead that way? I haven't tested, but it would indeed be interesting to know the answer. Pearu From tjhnson at gmail.com Fri Apr 4 14:54:58 2008 From: tjhnson at gmail.com (Tom Johnson) Date: Fri, 4 Apr 2008 11:54:58 -0700 Subject: [SciPy-user] Faster allclose, comparing arrays In-Reply-To: References: Message-ID: On Sat, Feb 2, 2008 at 12:36 PM ... > For a more subtle example, suppose you want to compare a vector and a > result obtained by Fourier transforming. If your vector is something > like [1,2,3,4] allclose() will do pretty much what you want. But if > your vector is something like [1e40,0,0,0], you might have a problem: > the Fourier transform can be expected to introduce numerical errors in > all the components of size about machine epsilon times the *largest > component*. Since allclose() does an element-wise comparison, if you > get [1e40+1,1,1], allclose returns False when the answer is true to > numerical accuracy. On the other hand, sometimes the different > elements of a vector have wildly differing sizes by design, so > normalizing by the largest vector isn't what you want. This is good to know. Are there similar statements that can be made about matrix multiplication in scipy? If one multiplies numerous matrices together, this could be relevant. From discerptor at gmail.com Sat Apr 5 13:49:01 2008 From: discerptor at gmail.com (Joshua Lippai) Date: Sat, 5 Apr 2008 10:49:01 -0700 Subject: [SciPy-user] Build warning in scipy svn r4083 Message-ID: <9911419a0804051049q782cc502h4dc266962f9efade@mail.gmail.com> I am building and using Scipy successfully on Mac OS X 10.5 (no errors in the build dialog), but every time I build it, this warning shows up, and I'm wondering if it might be responsible for some of the errors in the scipy.test results that are there in the current svn revision:

customize UnixCCompiler using build_ext
library 'mach' defined more than once, overwriting build_info
{'sources': ['scipy/integrate/mach/d1mach.f', 'scipy/integrate/mach/i1mach.f', 'scipy/integrate/mach/r1mach.f', 'scipy/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/integrate/setup.pyc', 1)}, 'source_languages': ['f77']}...
with
{'sources': ['scipy/special/mach/d1mach.f', 'scipy/special/mach/i1mach.f', 'scipy/special/mach/r1mach.f', 'scipy/special/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/special/setup.pyc', 1)}, 'source_languages': ['f77']}...

Everything else seems to check out fine in the build dialog, but there's this. Does anyone know if this could cause a problem that would show in the nose tests? Josh From jeevan.baretto at gmail.com Sun Apr 6 06:13:42 2008 From: jeevan.baretto at gmail.com (Jeevan Baretto) Date: Sun, 6 Apr 2008 10:13:42 +0000 Subject: [SciPy-user] Fsolve- giving different answer while in loop Message-ID: <46f941590804060313u4c00a0adr96f634ecca457a03@mail.gmail.com> Hi, I was using the scipy.optimize.fsolve module, and I think I found a bug in it. I attach the code below. When I solve the equation separately, with single input values of x and y, it gives me 201 as the result. But when I input the arrays x and y, which hold those same values, I get a different answer.
To make things clear, I have x as [298.0, 571.3, 580.3, 585.8, 589.9, 593.0, 595.6502, 598.18101, 600.34314, 602.21445, 603.86372, 605.42646, 606.91392, 608.28402, 609.56121, 610.77252, 611.92214, 613.01943, 614.13321, 615.16644, 616.16372, 617.1371, 618.08782, 619.01824, 619.93076, 620.79804, 621.73607, 622.58504, 623.45065, 624.3112, 625.16966, 626.0283, 626.89099, 627.76315, 628.64615, 629.54137, 630.45473, 631.341, 632.28681, 633.267, 634.28634, 635.3525, 636.55254, 637.69639, 639.05644, 640.55579, 642.27081, 644.54286, 647.81437, 657.77397] and y as [298.0, 576.51903, 585.51341, 591.20359, 595.332, 598.55687, 601.28444, 603.73461, 605.84835, 607.79922, 609.48345, 611.32751, 612.84132, 614.23754, 615.54028, 616.77623, 617.95329, 619.07467, 620.20759, 621.26395, 622.28363, 623.27789, 624.25009, 625.20078, 626.13659, 627.02366, 627.98247, 628.8532, 629.74054, 630.61663, 631.4766, 632.36279, 633.24062, 634.14565, 635.05868, 635.9553, 636.92742, 637.77739, 638.78596, 639.74769, 640.77499, 641.88289, 643.07579, 644.24899, 645.57804, 647.20167, 648.98315, 651.25973, 654.57302, 662.9283] And my function is:

def root(T0=298, r=0.008314, B1=20, B2=30):
    Ei = []
    def T1(i):
        return x[i]
    def T2(i):
        return y[i]
    print T1(38), T2(38)
    for i in range(1):
        i = 38
        O = fsolve(lambda Ei: (((T0*exp(-Ei/r/T0)
            - Ei/r*(exp(-Ei/r/T0)/Ei/r/T0*((Ei/r/T0)**2+4.03648*(Ei/r/T0)+1.15198)/((Ei/r/T0)**2+5.03637*Ei/r/T0+4.1969))
            - T1(i)*exp(-Ei/r/T1(i))
            + Ei/r*(exp(-Ei/r/T1(i))/Ei/r/T1(i)*(((Ei/r/T1(i))**2+4.03648*Ei/r/T1(i)+1.15198)/((Ei/r/T1(i))**2+5.03637*Ei/r/T1(i)+4.1969))))/B1)
            - ((exp(-Ei/r/T0)*T0
            - Ei/r*(exp(-Ei/(r*T0))/Ei/(r*T0)*(((Ei/(r*T0))**2+4.0364*Ei/(r*T0)+1.15198)/((Ei/(r*T0))**2+5.03637*Ei/(r*T0)+4.1969)))
            - exp(-Ei/r/T2(i))*T2(i)
            + (exp(-Ei/(r*T2(i)))/Ei/(r*T2(i))*(((Ei/(r*T2(i)))**2+4.0364*Ei/(r*T2(i))+1.15198)/((Ei/(r*T2(i)))**2+5.03637*Ei/(r*T2(i))+4.1969))))/B2))**2,
            150, maxfev=10000)
        Ei.append(O)
    return Ei

Now when I evaluate the function for Ei separately on the console using fsolve with x[38] and y[38], I get 201 as the answer. But when I use it in the loop, as above, I get a different answer for x[38] and y[38], i.e. Ei[38] is not 201 !! I hope this makes clear what I mean. Please let me know if you can fix this bug. Thanks in advance, Jeevan -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Sun Apr 6 19:17:13 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 07 Apr 2008 01:17:13 +0200 Subject: [SciPy-user] NumArray + Pygame give problems in Py2Exe ... Message-ID: <47F959F9.8000808@ru.nl> hello, I'm using the original Enthought suite 2.4.3 and added Pygame to it. Now everything works fine, until ... I try to build a windows executable with Py2Exe. All kinds of numarray modules are not detected by Py2Exe. Adding the missing numarray modules (which ones? Enthought contains several) manually to the distribution later on didn't help. Does anyone recognize this problem, and maybe even have a solution ? thanks, Stef Mientki From cedwards at ucsc.edu Mon Apr 7 02:37:46 2008 From: cedwards at ucsc.edu (Chris Edwards) Date: Sun, 06 Apr 2008 23:37:46 -0700 Subject: [SciPy-user] dfftf1.f:0: error: CPU you selected does not support x86-64 instruction set Message-ID: <47F9C13A.6050607@ucsc.edu> Hi. I'm trying to install scipy, and have run into what is probably a simple problem. I think I've successfully built numpy (as well as blas, lapack, fftw, etc.), but I run into the following error after running python setup.py build: . . .
Fortran f77 compiler: /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer
creating build/temp.linux-x86_64-2.4
creating build/temp.linux-x86_64-2.4/scipy
creating build/temp.linux-x86_64-2.4/scipy/fftpack
creating build/temp.linux-x86_64-2.4/scipy/fftpack/dfftpack
compile options: '-c'
g77:f77: scipy/fftpack/dfftpack/dfftf1.f
scipy/fftpack/dfftpack/dfftf1.f:0: error: CPU you selected does not support x86-64 instruction set
scipy/fftpack/dfftpack/dfftf1.f:0: error: CPU you selected does not support x86-64 instruction set
scipy/fftpack/dfftpack/dfftf1.f:0: error: CPU you selected does not support x86-64 instruction set
scipy/fftpack/dfftpack/dfftf1.f:0: error: CPU you selected does not support x86-64 instruction set
error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer -c -c scipy/fftpack/dfftpack/dfftf1.f -o build/temp.linux-x86_64-2.4/scipy/fftpack/dfftpack/dfftf1.o" failed with exit status 1

Looking through past posts to this group, this error seems similar to a previously posted issue where the Core2 processor wasn't being properly identified. I'm running CentOS 5 on a system with two Xeon 5400-series chips. Just in case this issue has been fixed recently, I downloaded the latest scipy from svn today, but had the same problem. In case it's helpful, here is my /proc/cpuinfo:

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5440 @ 2.83GHz
stepping : 6
cpu MHz : 2833.504
cache size : 6144 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
bogomips : 5669.15
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:

Then similar information is repeated 7 more times, with processor going from 1 to 7. Thanks for any help. Chris Edwards From pearu at cens.ioc.ee Mon Apr 7 03:30:26 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 07 Apr 2008 09:30:26 +0200 Subject: [SciPy-user] dfftf1.f:0: error: CPU you selected does not support x86-64 instruction set In-Reply-To: <47F9C13A.6050607@ucsc.edu> References: <47F9C13A.6050607@ucsc.edu> Message-ID: <47F9CD92.8010702@cens.ioc.ee> Chris Edwards wrote: > Just in case this is helpful, here is my /proc/cpuinfo: ... Could you also send the output of

$ python numpy/distutils/cpuinfo.py

Thanks, Pearu From pieter.cogghe at gmail.com Mon Apr 7 09:18:04 2008 From: pieter.cogghe at gmail.com (Pieter) Date: Mon, 7 Apr 2008 15:18:04 +0200 Subject: [SciPy-user] execute function on an array elementwise Message-ID: <5c0bbcb30804070618y6f72c1c3pf6456b0fc3a9ce0@mail.gmail.com> Hi all, I guess this is an easy one, but I can't seem to find it. Suppose I have this function:

def myFunction(x):
    result = None
    if x > 3:
        result = "some value"
    else:
        result = "another value"
    return result

And I want to run it on an array a: b = myFunction(a), which then returns an array with "some value" and "another value". I could loop over the array, but I guess there's a better way to do this? (Something like arrayMagic from Matlab, if I'm not mistaken.)
Thanks a lot, Pieter -- Pieter Cogghe Ganzendries 186 9000 Gent 0487 10 14 21 From matthieu.brucher at gmail.com Mon Apr 7 09:26:09 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 7 Apr 2008 15:26:09 +0200 Subject: [SciPy-user] execute function on an array elementwise In-Reply-To: <5c0bbcb30804070618y6f72c1c3pf6456b0fc3a9ce0@mail.gmail.com> References: <5c0bbcb30804070618y6f72c1c3pf6456b0fc3a9ce0@mail.gmail.com> Message-ID: Hi, I think you are looking for numpy.vectorize(). It will not be faster than your own loop, but it will respect the shape of your array.
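For the function above, a sketch of its use (untested; the otypes='O' argument is my assumption here, so the result can hold Python strings of different lengths):

from numpy import vectorize

vFunction = vectorize(myFunction, otypes='O')
b = vFunction(a)   # applies myFunction elementwise, keeping a's shape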
Matthieu 2008/4/7, Pieter : > Hi all, > > I guess this is an easy one, but I can't seem to find it. Suppose I have > this function: >
> def myFunction(x):
>     result = None
>     if x > 3:
>         result = "some value"
>     else:
>         result = "another value"
>     return result
>
> And I want to run it on an array a: b = myFunction(a), which then returns an array with "some value" and "another value". I could loop over the array, but I guess there's a better way to do this? (Something like arrayMagic from Matlab, if I'm not mistaken.) > > thanks a lot, > > Pieter > > -- > Pieter Cogghe > Ganzendries 186 > 9000 Gent > 0487 10 14 21 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-pascal.mercier at inrialpes.fr Mon Apr 7 09:33:17 2008 From: jean-pascal.mercier at inrialpes.fr (J-Pascal Mercier) Date: Mon, 7 Apr 2008 15:33:17 +0200 Subject: [SciPy-user] execute function on an array elementwise In-Reply-To: <5c0bbcb30804070618y6f72c1c3pf6456b0fc3a9ce0@mail.gmail.com> References: <5c0bbcb30804070618y6f72c1c3pf6456b0fc3a9ce0@mail.gmail.com> Message-ID: <8C0947EB-6B3A-4B1B-B0D9-CACFB1936F94@inrialpes.fr> Hi Pieter, You could use something like:

def myFunction(X):
    res = empty(X.shape, dtype=object)
    res[where(X > 3)] = "some value"
    res[where(X <= 3)] = "another value"
    return res

cheers, J-Pascal Projet PRIMA - Laboratoire LIG INRIA Grenoble Rhone-Alpes Research Centre 655, Avenue de l'Europe 38330 Montbonnot, France On 7-Apr-08, at 3:18 PM, Pieter wrote: > Hi all, > > I guess this is an easy one, but I can't seem to find it. Suppose I have > this function: >
> def myFunction(x):
>     result = None
>     if x > 3:
>         result = "some value"
>     else:
>         result = "another value"
>     return result
>
> And I want to run it on an array a: b = myFunction(a), which then returns an array with "some value" and "another value". I could loop over the array, but I guess there's a better way to do this? (Something like arrayMagic from Matlab, if I'm not mistaken.) > > thanks a lot, > > Pieter > > -- > Pieter Cogghe > Ganzendries 186 > 9000 Gent > 0487 10 14 21 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From lbolla at gmail.com Mon Apr 7 09:44:59 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Mon, 7 Apr 2008 15:44:59 +0200 Subject: [SciPy-user] execute function on an array elementwise In-Reply-To: <8C0947EB-6B3A-4B1B-B0D9-CACFB1936F94@inrialpes.fr> References: <5c0bbcb30804070618y6f72c1c3pf6456b0fc3a9ce0@mail.gmail.com> <8C0947EB-6B3A-4B1B-B0D9-CACFB1936F94@inrialpes.fr> Message-ID: <80c99e790804070644s1319eb95pe02a61ad20ed0fae@mail.gmail.com> or simply:

def myFunction(X):
    return where(X > 3, "some value", "other value")

hth, L. On Mon, Apr 7, 2008 at 3:33 PM, J-Pascal Mercier < jean-pascal.mercier at inrialpes.fr> wrote: > Hi Pieter, > > You could use something like: >
> def myFunction(X):
>     res = empty(X.shape, dtype=object)
>     res[where(X > 3)] = "some value"
>     res[where(X <= 3)] = "another value"
>     return res
>
> cheers, > > J-Pascal > > Projet PRIMA - Laboratoire LIG > INRIA Grenoble Rhone-Alpes Research Centre > 655, Avenue de l'Europe > 38330 Montbonnot, France > > On 7-Apr-08, at 3:18 PM, Pieter wrote: > > Hi all, > > I guess this is an easy one, but I can't seem to find it. Suppose I have > > this function: > > def myFunction(x): ... > > And I want to run it on an array a: b = myFunction(a) ... > > (Something like arrayMagic from Matlab, if I'm not mistaken.) > > thanks a lot, > > Pieter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Mon Apr 7 09:48:24 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 7 Apr 2008 15:48:24 +0200 Subject: [SciPy-user] Manifold Learning Technology Preview Message-ID: Hi, For those who want to use manifold learning tools, I'm happy to announce that scikits.learn now has an implementation of the usual techniques. They may not all work at the moment (I'm in the process of testing them and fixing the porting issues), but they will in the near future. What's inside?
- compression is where the usual techniques are located (PCA by Zachary Pincus, Isomap, LLE, Laplacian Eigenmaps, Hessian Eigenmaps, Diffusion maps, CCA and my own technique). Only the dimensionality reduction is done here, that is, original space to a reduced space.
- regression is a set of multidimensional regression tools that will generate a model from the reduced space to the original space.
Here is a linear model (called PCA, because it is generally used in conjunction with PCA) and a piecewise linear model.
- projection will enable projecting a new point onto the manifold with the help of the model.
No Nyström extension at the moment, but perhaps someone will create a regression model based on it. Some techniques create a reduced space and a model at the same time (with a fixed number of linear models, like Brandt's); I did not implement those, but they could benefit from the projection module. I will add a tutorial on the scikits trac when I have some time, with details on the interfaces that can be used and reused. Here is a small test for people who want to try it right now. Suppose you have an array with 1000 points in a 3D space (so a 1000x3 array):

>>> from scikits.learn.machine.manifold_learning import compression
>>> coords = compression.isomap(test, 2, neighbors=9)

Here the Isomap algorithm was used, the test array was reduced from 3D to 2D, and the number of neighbors used to create the neighbors graph was 9 (in fact |point + number of neighbors| = 9; this may need some fixes). The TP does not need an additional scikit, only numpy and scipy (trunk) and optionally scikits.openopt (trunk) for CCA, my reduction technique and the projections (if needed). Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Mon Apr 7 10:27:41 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Mon, 7 Apr 2008 16:27:41 +0200 Subject: [SciPy-user] optimization with genetic algorithms Message-ID: <80c99e790804070727t58121c4bu38ba200e25de48cc@mail.gmail.com> hi all! are there any genetic algorithms for optimization in scipy (or numpy)? if not, can anyone suggest where I can find some good, python-friendly ones? thank you in advance, lorenzo bolla -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Mon Apr 7 10:31:35 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 7 Apr 2008 16:31:35 +0200 Subject: [SciPy-user] optimization with genetic algorithms In-Reply-To: <80c99e790804070727t58121c4bu38ba200e25de48cc@mail.gmail.com> References: <80c99e790804070727t58121c4bu38ba200e25de48cc@mail.gmail.com> Message-ID: Hi, There were genetic algorithms in the sandbox; they are now in scikits.learn.machine.ga. I don't know how to use them, but they are available. Matthieu 2008/4/7, lorenzo bolla : > > hi all! > > are there any genetic algorithms for optimization in scipy (or numpy)? > if not, can anyone suggest where I can find some good, python-friendly > ones? > > thank you in advance, > lorenzo bolla > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dmitrey.kroshko at scipy.org Mon Apr 7 11:23:50 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 07 Apr 2008 18:23:50 +0300 Subject: [SciPy-user] optimization with genetic algorithms In-Reply-To: <80c99e790804070727t58121c4bu38ba200e25de48cc@mail.gmail.com> References: <80c99e790804070727t58121c4bu38ba200e25de48cc@mail.gmail.com> Message-ID: <47FA3C86.5070004@scipy.org> lorenzo bolla wrote: > hi all! > > are there any genetic algorithms for optimization in scipy (or numpy)? BTW, there is the galileo solver written in Python (GPL), also available in scikits.openopt. Regards, D. From rob.clewley at gmail.com Mon Apr 7 11:28:58 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 7 Apr 2008 11:28:58 -0400 Subject: [SciPy-user] Manifold Learning Technology Preview In-Reply-To: References: Message-ID: Matthieu, I look forward to this! Are these going to be pure python implementations only? I know that Isomap in Matlab came with a DLL for faster processing of the networks -- is there any plan to do this in yours? Best, Rob On Mon, Apr 7, 2008 at 9:48 AM, Matthieu Brucher wrote: > Hi, > > For those who want to use manifold learning tools, I'm happy to announce > that scikits.learn now has an implementation of the usual techniques. They > may not all work at the moment (I'm in the process of testing them and > fixing the porting issues), but they will in the near future. > > What's inside? > - compression is where the usual techniques are located (PCA by Zachary > Pincus, Isomap, LLE, Laplacian Eigenmaps, Hessian Eigenmaps, Diffusion maps, > CCA and my own technique). Only the dimensionality reduction is done here, > that is, original space to a reduced space. > - regression is a set of multidimensional regression tools that will > generate a model from the reduced space to the original space. Here is a > linear model (called PCA, because it is generally used in conjunction with > PCA) and a piecewise linear model. > - projection will enable projecting a new point onto the manifold > with the help of the model. > > No Nyström extension at the moment, but perhaps someone will create a > regression model based on it. > Some techniques create a reduced space and a model at the same time (with a > fixed number of linear models, like Brandt's); I did not implement those, > but they could benefit from the projection module. > > I will add a tutorial on the scikits trac when I have some time, with > details on the interfaces that can be used and reused. > > Here is a small test for people who want to try it right now. Suppose you > have an array with 1000 points in a 3D space (so a 1000x3 array): > > >>> from scikits.learn.machine.manifold_learning import compression > >>> coords = compression.isomap(test, 2, neighbors=9) > > Here the Isomap algorithm was used, the test array was reduced from 3D to > 2D, and the number of neighbors used to create the neighbors graph was 9 (in > fact |point + number of neighbors| = 9; this may need some fixes). > > The TP does not need an additional scikit, only numpy and scipy (trunk) and > optionally scikits.openopt (trunk) for CCA, my reduction technique and the > projections (if needed).
> > Matthieu > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Robert H. Clewley, Ph. D. Assistant Professor Department of Mathematics and Statistics Georgia State University 720 COE, 30 Pryor St Atlanta, GA 30303, USA tel: 404-413-6420 fax: 404-651-2246 http://www.mathstat.gsu.edu/~matrhc http://brainsbehavior.gsu.edu/ From matthieu.brucher at gmail.com Mon Apr 7 11:38:28 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 7 Apr 2008 17:38:28 +0200 Subject: [SciPy-user] Manifold Learning Technology Preview In-Reply-To: References: Message-ID: 2008/4/7, Rob Clewley : > > Matthieu, > > I look forward to this! Are these going to be pure python > implementations only? I know that Isomap in Matlab came with a DLL for > faster processing of the networks -- is there any plan to do this > in yours? > > Best, > Rob For the moment, there are 3 additional libraries:
- one for my dimensionality reduction technique (which could be rewritten in Python)
- one for a neighborhood search (which could rely on ANN?), used in one of the regression functions
- one for solving the correlation clustering problem
The last one is not feasible in Python (too slow), and even this implementation needs tweaking to use a memory-aware clustering approach. I didn't implement every flavour of Isomap, but those can easily be added after a first version is released. Once this part of the machine learning scikit is no longer a TP, and if David gives his approval, I will build eggs so that everyone can use it. Thanks for the feedback ;) Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.wheeler2 at gmail.com Mon Apr 7 14:23:17 2008 From: daniel.wheeler2 at gmail.com (Daniel Wheeler) Date: Mon, 7 Apr 2008 14:23:17 -0400 Subject: [SciPy-user] openmp and weave? In-Reply-To: <827183970711071455m29f4869bo8c8312e0eca79a6e@mail.gmail.com> References: <827183970711071455m29f4869bo8c8312e0eca79a6e@mail.gmail.com> Message-ID: <80b160a0804071123g2c375f01p33898980aa86e277@mail.gmail.com> On Wed, Nov 7, 2007 at 6:55 PM, william ratcliff wrote: > Has anyone had any luck using weave with openMP? Yes. I have been trying a test case and getting reasonable speedups. You'll need gcc version 4.3; 4.2 doesn't seem to work. I have recorded some of the results on the pages below. Be warned, I am inexperienced in parallel computing, so there is a chance of gross errors in my thinking and coding. The following is the test code for the above results. Note that there is no need to recompile everything with 4.3, only weave. Getting 4.3 built was a little involved; the links above provide some rudimentary instructions. The above results are for one machine with 2 nodes and another with 8. I have also been evaluating openmp on a 64-node Altix machine.
There are a number of issues with building scipy and numpy, mostly concerning SGI's scientific library, that have been resolved, but I am currently having trouble getting weave to compile on the Altix. This seems to be due to having all the python stuff compiled with gcc while using the Intel compiler for weave. Cheers > If so, what did you > have to do? I've started by updating my compiler to MinGW: > gcc-4.2.1-dw-2-2 (and similarly for g++), but am running into problems > with code written in weave that doesn't use any of openmp: > > Here is the code: >
> import numpy as N
> import weave
> from weave import converters
>
> def blitz_interpolate(x, y):
>     code = """
>     int pts = Ny[0];
>     //#pragma omp parallel for
>     for (int i=0; i < pts-1; i++){
>         y(i) = sin( exp( cos( - exp( sin(x(i)) ) ) ) );
>     }
>     return_val = 3;
>     """
>     extra = ["-fopenmp -Lc:/python25/ -lPthreadGC2"]
>     extra = []
>     z = weave.inline(code, ['x','y'], type_converters=converters.blitz, compiler='gcc')
>     print z
>     return
>
> if __name__ == "__main__":
>     x = N.arange(1000)
>     y = N.zeros(x.shape, 'd')
>     blitz_interpolate(x, y)
>     print x[35], y[35], N.sin(N.exp(N.cos(-N.exp(N.sin(x[35])))))
>
> This works fine with version 3.4.2 of gcc, g++ > > Thanks, > William > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Daniel Wheeler From robert.kern at gmail.com Mon Apr 7 14:39:43 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 7 Apr 2008 11:39:43 -0700 Subject: [SciPy-user] Build warning in scipy svn r4083 In-Reply-To: <9911419a0804051049q782cc502h4dc266962f9efade@mail.gmail.com> References: <9911419a0804051049q782cc502h4dc266962f9efade@mail.gmail.com> Message-ID: <3d375d730804071139o240ce837m811706ec253c3bc6@mail.gmail.com> On Sat, Apr 5, 2008 at 10:49 AM, Joshua Lippai wrote: > I am building and using Scipy successfully on Mac OS X 10.5 (no errors > in the build dialog), but every time I build it, this warning shows up > and I'm wondering if it might be responsible for some of the errors in > the scipy.test results that are there in the current svn revision: >
> customize UnixCCompiler using build_ext
> library 'mach' defined more than once, overwriting build_info
> {'sources': ['scipy/integrate/mach/d1mach.f', 'scipy/integrate/mach/i1mach.f', 'scipy/integrate/mach/r1mach.f', 'scipy/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/integrate/setup.pyc', 1)}, 'source_languages': ['f77']}...
> with
> {'sources': ['scipy/special/mach/d1mach.f', 'scipy/special/mach/i1mach.f', 'scipy/special/mach/r1mach.f', 'scipy/special/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/special/setup.pyc', 1)}, 'source_languages': ['f77']}...
>
> Everything else seems to check out fine in the build dialog, but > there's this. Does anyone know if this could cause a problem that > would show in the nose tests? It shouldn't be a problem. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From wesmckinn at gmail.com Mon Apr 7 14:42:24 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 7 Apr 2008 14:42:24 -0400 Subject: [SciPy-user] Getting scipy.stats.models to work on Windows Message-ID: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> I'm trying to distance myself from using R for my research work, and unfortunately I need to use Windows. I have been able to get the svn version of scipy to work on my OS X machine, but on Windows the scipy.stats.models regression models simply hang when I try to use them, e.g., from test_rlm.py:

>>> from numpy.random import standard_normal as W
>>> X = W((40,10))
>>> import scipy.stats.models.rlm as rlm
>>> rlm(design=X)

--> Python freezes

I tried both the stable releases (Numpy 1.0.4, scipy 0.6.0) and full latest SVN builds, to no avail. Anyone have any luck with this? Thanks, Wes -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Mon Apr 7 14:47:16 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 07 Apr 2008 20:47:16 +0200 Subject: [SciPy-user] Getting scipy.stats.models to work on Windows In-Reply-To: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> References: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> Message-ID: On Mon, 7 Apr 2008 14:42:24 -0400 "Wes McKinney" wrote: > I'm trying to distance myself from using R for my research work, and > unfortunately I need to use Windows. [...] > Anyone have any luck with this? > Thanks, > Wes Works for me on linux.

>>> import scipy.stats.models.rlm as rlm
>>> rlm(design=X)
>>> import numpy
>>> numpy.__version__
'1.0.5.dev4972'
>>> import scipy
>>> scipy.__version__
'0.7.0.dev4083'

Nils From robert.kern at gmail.com Mon Apr 7 14:54:09 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 7 Apr 2008 11:54:09 -0700 Subject: [SciPy-user] Getting scipy.stats.models to work on Windows In-Reply-To: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> References: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> Message-ID: <3d375d730804071154h171a9f0eq819c1c94aa220b0d@mail.gmail.com> On Mon, Apr 7, 2008 at 11:42 AM, Wes McKinney wrote: > I'm trying to distance myself from using R for my research work, and > unfortunately I need to use Windows. [...] > Anyone have any luck with this? Is your CPU capable of SSE2? Unfortunately, the official numpy binary has been built with an ATLAS library that was compiled for a CPU capable of SSE2. This means that the SSE2 instructions will cause older CPUs to crash.
The rlm() constructor does call some linear algebra functions, so I suspect this is the problem. There is a binary that was built without SSE2. When I dredge it up, I will let you know. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wesmckinn at gmail.com Mon Apr 7 14:57:49 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 7 Apr 2008 14:57:49 -0400 Subject: [SciPy-user] Getting scipy.stats.models to work on Windows In-Reply-To: <3d375d730804071154h171a9f0eq819c1c94aa220b0d@mail.gmail.com> References: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> <3d375d730804071154h171a9f0eq819c1c94aa220b0d@mail.gmail.com> Message-ID: <6c476c8a0804071157l75807361p526dfde3d7dca73e@mail.gmail.com> On Mon, Apr 7, 2008 at 2:54 PM, Robert Kern wrote: > Is your CPU capable of SSE2? Unfortunately, the official numpy binary > has been built with an ATLAS library that was compiled for a CPU > capable of SSE2. This means that the SSE2 instructions will cause > older CPUs to crash. The rlm() constructor does call some linear > algebra functions, so I suspect this is the problem. > > There is a binary that was built without SSE2. When I dredge it up, I > will let you know. I'm running a fairly new machine, a dual-core Pentium 4, so that seems unlikely. The machine passes the official NumPy and SciPy test suites. - Wes -------------- next part -------------- An HTML attachment was scrubbed... URL: From wesmckinn at gmail.com Mon Apr 7 14:53:46 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 7 Apr 2008 18:53:46 +0000 (UTC) Subject: [SciPy-user] Getting scipy.stats.models to work on Windows References: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> Message-ID: Nils Wagner <nwagner at iam.uni-stuttgart.de> writes: > > Works for me on linux. > It works for me on OS X as well, but unfortunately not on Windows! From aisaac at american.edu Mon Apr 7 15:04:22 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 7 Apr 2008 15:04:22 -0400 Subject: [SciPy-user] Getting scipy.stats.models to work on Windows In-Reply-To: <3d375d730804071154h171a9f0eq819c1c94aa220b0d@mail.gmail.com> References: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> <3d375d730804071154h171a9f0eq819c1c94aa220b0d@mail.gmail.com> Message-ID: On Mon, 7 Apr 2008, Robert Kern apparently wrote: > There is a binary that was built without SSE2. http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103 http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531&release_id=540981 Look for the binaries with "p3" in the name.
hth, Alan Isaac From wesmckinn at gmail.com Mon Apr 7 15:16:27 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 7 Apr 2008 15:16:27 -0400 Subject: [SciPy-user] Getting scipy.stats.models to work on Windows In-Reply-To: References: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> <3d375d730804071154h171a9f0eq819c1c94aa220b0d@mail.gmail.com> Message-ID: <6c476c8a0804071216s322e2234i244431dfd808743@mail.gmail.com> On Mon, Apr 7, 2008 at 3:04 PM, Alan G Isaac wrote: > On Mon, 7 Apr 2008, Robert Kern apparently wrote: > > There is a binary that was built without SSE2. > > http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103 > > http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531&release_id=540981 > > Look for the binaries with "p3" in the name. > > hth, > Alan Isaac > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user The problem apparently lies with numpy.linalg.pinv running on Windows, at least XP SP2 with Python 2.5.1; a quick search found that this has been a problem for more than just me. Anyone know anything about this?

>>> from numpy.random import standard_normal as W
>>> X = W((40,10))
>>> import numpy.linalg as L
>>> L.pinv(X)

frozen -------------- next part -------------- An HTML attachment was scrubbed... URL: From wesmckinn at gmail.com Mon Apr 7 15:49:17 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 7 Apr 2008 15:49:17 -0400 Subject: [SciPy-user] Getting scipy.stats.models to work on Windows In-Reply-To: <6c476c8a0804071216s322e2234i244431dfd808743@mail.gmail.com> References: <6c476c8a0804071142uf1cf54drae0b9b6788019a7c@mail.gmail.com> <3d375d730804071154h171a9f0eq819c1c94aa220b0d@mail.gmail.com> <6c476c8a0804071216s322e2234i244431dfd808743@mail.gmail.com> Message-ID: <6c476c8a0804071249s4860cc39o5a4451f2d2e134b1@mail.gmail.com> > The problem apparently lies with numpy.linalg.pinv running on Windows, at > least XP SP2 with Python 2.5.1; a quick search found that this has been a > problem for more than just me. Anyone know anything about this? >
> >>> from numpy.random import standard_normal as W
> >>> X = W((40,10))
> >>> import numpy.linalg as L
> >>> L.pinv(X)
>
> frozen
This page was helpful; for anyone who runs into this problem and finds this thread: http://www.scipy.org/scipy/numpy/ticket/627 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cedwards at ucsc.edu Mon Apr 7 16:48:30 2008 From: cedwards at ucsc.edu (Chris Edwards) Date: Mon, 07 Apr 2008 13:48:30 -0700 Subject: [SciPy-user] dfftf1.f:0: error: CPU you selected does not support x86-64 instruction set In-Reply-To: <47F9CD92.8010702@cens.ioc.ee> References: <47F9C13A.6050607@ucsc.edu> <47F9CD92.8010702@cens.ioc.ee> Message-ID: <47FA889E.1060200@ucsc.edu> Hi. Thanks!

$ python numpy/distutils/cpuinfo.py
CPU information: getNCPUs=8 has_mmx has_sse has_sse2 is_64bit is_Intel is_XEON is_Xeon is_i686

Chris Pearu Peterson wrote: > Chris Edwards wrote: > >> Just in case this is helpful, here is my /proc/cpuinfo: > ... > Could you also send the output of > $ python numpy/distutils/cpuinfo.py > > Thanks, > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ---------------------------------------------------------------------------- Christopher A.
Edwards cedwards at ucsc.edu Ocean Sciences Department phone: (831) 459-3734 University of California fax: (831) 459-4882 Santa Cruz, CA 95064 ---------------------------------------------------------------------------- From pearu at cens.ioc.ee Mon Apr 7 17:13:00 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 8 Apr 2008 00:13:00 +0300 (EEST) Subject: [SciPy-user] dfftf1.f:0: error: CPU you selected does not support x86-64 instruction set In-Reply-To: <47F9C13A.6050607@ucsc.edu> References: <47F9C13A.6050607@ucsc.edu> Message-ID: <58440.88.89.195.89.1207602780.squirrel@cens.ioc.ee> On Mon, April 7, 2008 9:37 am, Chris Edwards wrote: > Fortran f77 compiler: /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC > -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer Thanks for sending the cpuinfo.py output. Comparing its output with the numpy/distutils/fcompiler/gnu.py source, it seems that you are using some older version of numpy. The -march value should be 'nocona' in your case. So, try upgrading numpy from svn (or just copy the gnu.py file from svn into the numpy installation directory). HTH, Pearu From amcmorl at gmail.com Mon Apr 7 23:54:03 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Mon, 7 Apr 2008 23:54:03 -0400 Subject: [SciPy-user] matlab's regress Message-ID: Hi all, I've been in need of an equivalent of matlab's regress function, which performs multilinear regression. After a bit of google searching, I found this old code from ancient history: http://osdir.com/ml/python.scientific.user/2004-04/msg00029.html However, after a quick spruce-up to current scipy and numpy notation (which I could post here if it's useful), it seems, from a quick test, to perform as advertised. Here begin my questions. I have looked through the scipy documentation and can't see any other routines that do the same task, apart perhaps from the odr module, or using routines from the lapack or blas libraries. These latter options, however, I don't know anything about, and there aren't readily applicable examples floating around to base my effort on. (1) Have I missed some multilinear regression routine directly implemented in scipy? If yes, how can we improve the documentation so the next person can find it more easily? (2) If there isn't an equivalent routine, would it be useful to include this one? It could perhaps go in scipy.linalg. Thanks for your thoughts, Angus. -- AJC McMorland, PhD candidate Physiology, University of Auckland (Nearly) post-doctoral research fellow Neurobiology, University of Pittsburgh From wesmckinn at gmail.com Tue Apr 8 00:12:09 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Tue, 8 Apr 2008 00:12:09 -0400 Subject: [SciPy-user] matlab's regress In-Reply-To: References: Message-ID: <95654B89-1FD2-46F3-A6E2-366AC4195BF5@gmail.com> On Apr 7, 2008, at 11:54 PM, Angus McMorland wrote: > > > (1) Have I missed some multilinear regression routine directly > implemented in scipy? If yes, how can we improve the documentation so > the next person can find it more easily? scipy.stats.models in the current SVN branch has a bunch of modelling tools for least-squares estimation, robust estimation, and some other statistical methods.
It's more object-oriented than the matlab equivalent, but you can do simple multivariate regressions without too much work:

>>> import scipy.stats.models.regression as R
>>> from numpy.random import standard_normal as W
>>> X = W((40,10))
>>> Y = W((40,))
>>> model = R.OLSModel(design=X)
>>> result = model.fit(Y)
>>> result.beta
array([-0.2296546 , -0.15835343, -0.07127199,  0.02934717,  0.15778939,
        0.14087653,  0.09279021, -0.03412604, -0.28726236,  0.03078167])

From aisaac at american.edu Tue Apr 8 00:18:11 2008 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 8 Apr 2008 00:18:11 -0400 Subject: [SciPy-user] matlab's regress In-Reply-To: References: Message-ID: numpy.linalg.lstsq Also look at: http://svn.scipy.org/svn/scipy/trunk/scipy/stats/models/regression.py
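For the plain least-squares route, a minimal sketch (untested; the column of ones is just the usual way to get an intercept term):

import numpy as np
# X: (n, k) design matrix, Y: (n,) responses
Xa = np.column_stack([X, np.ones(len(X))])   # append an intercept column
beta, resid, rank, sv = np.linalg.lstsq(Xa, Y)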
I see that ndimage does have some segmentation algorithms implemented, but these committs are very new. Will they be in scipy at some point? Any suggestions for trying some "off-the-shelf" segmentation algorithms with scipy? Thanks! -- Psst! Geheimtipp: Online Games kostenlos spielen bei den GMX Free Games! http://games.entertainment.gmx.net/de/entertainment/games/free From flyhyena at yahoo.com.cn Tue Apr 8 08:52:51 2008 From: flyhyena at yahoo.com.cn (sun) Date: Tue, 8 Apr 2008 14:52:51 +0200 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? Message-ID: as topic From silva at lma.cnrs-mrs.fr Tue Apr 8 09:40:11 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Tue, 08 Apr 2008 15:40:11 +0200 Subject: [SciPy-user] linear (polynomial) fit with error bars In-Reply-To: <47FB5051.4050907@unibo.it> References: <47FB5051.4050907@unibo.it> Message-ID: <1207662011.2763.1.camel@localhost> Le mardi 08 avril 2008 ? 13:00 +0200, massimo sandal a ?crit : > Hi, > > Is there a scipy implementation of linear fit that keeps into account > the existence and width of error bars for each point? I googled but I > can't find that. What about computing least squares approximation using weightening coefficients inversely proportional to value incertainty (i.e. length of each error bars) ? -- Fabrice Silva LMA UPR CNRS 7051 From strawman at astraw.com Tue Apr 8 11:17:13 2008 From: strawman at astraw.com (Andrew Straw) Date: Tue, 08 Apr 2008 08:17:13 -0700 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: References: Message-ID: <47FB8C79.2030808@astraw.com> See http://scipy.org/scipy/scikits/wiki/BayesNet . There are plans, the BNT license is scipy-compatible, and there are already 5 people interesting in developing. Speaking for myself, however, this has taken a lower priority than I initially expected due to the ever-changing energy landscape of a research program -- I'm simply too busy on other stuff right now to contribute meaningfully. I suspect the situation may be similar for the others listed. Nonetheless, we would welcome effort in this direction. -Andrew From massimo.sandal at unibo.it Tue Apr 8 12:13:04 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Tue, 08 Apr 2008 18:13:04 +0200 Subject: [SciPy-user] linear (polynomial) fit with error bars In-Reply-To: <1207662011.2763.1.camel@localhost> References: <47FB5051.4050907@unibo.it> <1207662011.2763.1.camel@localhost> Message-ID: <47FB9990.8080009@unibo.it> Fabrice Silva ha scritto: > Le mardi 08 avril 2008 ? 13:00 +0200, massimo sandal a ?crit : >> Hi, >> >> Is there a scipy implementation of linear fit that keeps into account >> the existence and width of error bars for each point? I googled but I >> can't find that. > > What about computing least squares approximation using weightening > coefficients inversely proportional to value incertainty (i.e. length of > each error bars) ? Yes, that's what I was trying to ask. But I can't find the function that does it (if it exists). m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 From flyhyena at yahoo.com.cn Tue Apr 8 12:02:38 2008 From: flyhyena at yahoo.com.cn (sun) Date: Tue, 8 Apr 2008 18:02:38 +0200 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? References: <47FB8C79.2030808@astraw.com> Message-ID: "Andrew Straw" wrote in message news:47FB8C79.2030808 at astraw.com... > See http://scipy.org/scipy/scikits/wiki/BayesNet . There are plans, the > BNT license is scipy-compatible, and there are already 5 people > interested in developing. Speaking for myself, however, this has taken > a lower priority than I initially expected, due to the ever-changing > energy landscape of a research program -- I'm simply too busy with other > stuff right now to contribute meaningfully. I suspect the situation may > be similar for the others listed. > > Nonetheless, we would welcome effort in this direction. > > -Andrew Thanks for the link, and thanks also to those who started to import it. Looking forward to seeing this project bloom soon. From ed at lamedomain.net Tue Apr 8 12:05:40 2008 From: ed at lamedomain.net (Ed Rahn) Date: Tue, 08 Apr 2008 09:05:40 -0700 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: References: Message-ID: <47FB97D4.9090608@lamedomain.net> http://scipy.org/scipy/scikits/ticket/52 contains a patch that gets openbayes to run on numpy. Openbayes is inspired by BNT. It works nicely for discrete nodes; continuous ones don't work in all situations. I don't have the proper education in math at this time to do much with the core algorithms, but I would be more than happy to help others with python programming and design. - Ed sun wrote: > as topic > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From mhearne at usgs.gov Tue Apr 8 12:22:36 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 8 Apr 2008 10:22:36 -0600 Subject: [SciPy-user] overriding __getitem__ Message-ID: <45AD037F-10DC-4F2B-8AD0-368E186EAA6D@usgs.gov> In numpy, it is possible to use the square bracket operators [] in various ways:

x = numpy.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
x[0,0]
x[0:2,0:2]
x[0:4:2,0:4:2]

I presume that under the hood, the numpy array object is overriding the __getitem__ method. Based on that assumption, I created the following test code that I _thought_ should work:

--------------------------------------------------------------------------------------------
import numpy

class Sequence:
    def __init__(self):
        self.data = numpy.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])

    def __getitem__(self, key1, key2):
        if isinstance(key1, int) and isinstance(key2, int):
            return self.data[key1, key2]

if __name__ == "__main__":
    s = Sequence()
    s[2,2]
--------------------------------------------------------------------------------------------

However, I get the error: "TypeError: __getitem__() takes exactly 3 arguments (2 given)" Isn't self an implied argument since __getitem__ is a class method? Is this the wrong way to attain the interface I want? Thanks, Mike ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc.
------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL:

From matthieu.brucher at gmail.com Tue Apr 8 12:39:36 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 8 Apr 2008 18:39:36 +0200 Subject: [SciPy-user] overriding __getitem__ In-Reply-To: <45AD037F-10DC-4F2B-8AD0-368E186EAA6D@usgs.gov> References: <45AD037F-10DC-4F2B-8AD0-368E186EAA6D@usgs.gov> Message-ID: Hi, In fact, __getitem__() takes only two parameters: self and the key, where the key is the tuple of all the indices you put between the brackets. Matthieu 2008/4/8, Michael Hearne : > In numpy, it is possible to use the square bracket operators [] in various > ways: > x = numpy.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) > x[0,0] > x[0:2,0:2] > x[0:4:2,0:4:2] > > I presume that under the hood, the numpy array object is overriding the > __getitem__ method. > > Based on that assumption, I created the following test code that I > _thought_ should work: > -------------------------------------------------------------------------------------------- > import numpy > class Sequence: > def __init__(self): self.data = > numpy.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) > def __getitem__(self,key1,key2): if isinstance(key1,int) and > isinstance(key2,int): return(self.data[key1,key2]) > if __name__ == "__main__": s = Sequence() > s[2,2] > -------------------------------------------------------------------------------------------- > However, I get the error: "TypeError: __getitem__() takes exactly 3 arguments (2 > given)" > Isn't self an implied argument since __getitem__ is a class method? > Is this the wrong way to attain the interface I want? > Thanks, > Mike > > ------------------------------------------------------ > Michael Hearne > mhearne at usgs.gov > (303) 273-8620 > USGS National Earthquake Information Center > 1711 Illinois St. Golden CO 80401 > Senior Software Engineer > Synergetics, Inc. > ------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL:

From seefeld at sympatico.ca Tue Apr 8 12:41:43 2008 From: seefeld at sympatico.ca (Stefan Seefeld) Date: Tue, 08 Apr 2008 12:41:43 -0400 Subject: [SciPy-user] overriding __getitem__ In-Reply-To: <45AD037F-10DC-4F2B-8AD0-368E186EAA6D@usgs.gov> References: <45AD037F-10DC-4F2B-8AD0-368E186EAA6D@usgs.gov> Message-ID: <47FBA047.9030205@sympatico.ca> Michael Hearne wrote: > -------------------------------------------------------------------------------------------- > import numpy > class Sequence: > def __init__(self): self.data = > numpy.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) > def __getitem__(self,key1,key2): if isinstance(key1,int) and > isinstance(key2,int): return(self.data[key1,key2]) > if __name__ == "__main__": s = Sequence() > s[2,2] > -------------------------------------------------------------------------------------------- > However, I get the error: "TypeError: __getitem__() takes exactly 3 arguments (2 > given)" > Isn't self an implied argument since __getitem__ is a class method?
> Is this the wrong way to attain the interface I want? I did the following little experiment:

class Sequence:
    def __getitem__(self, *args):
        print 'getitem', args

s = Sequence()
s[1]
s[1,2]
s[1:2]
...

The above shows that 's[1,2]' will pass a single argument: a '(1,2)' tuple, and 's[1:2,1:2]' likewise passes a single argument: a tuple of two slice objects. This is with python 2.5. A quick search didn't reveal any documentation of this behavior. This suggests that you may rewrite your definition of __getitem__ to be more flexible in its expectations as to the number and type of arguments. (I'm not sure what makes you think __getitem__ is a class method. It's an ordinary attribute, expecting the first argument to be the object reference, just what you pass above.) HTH, Stefan -- ...ich hab' noch einen Koffer in Berlin...

From mhearne at usgs.gov Tue Apr 8 12:51:47 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Tue, 8 Apr 2008 10:51:47 -0600 Subject: [SciPy-user] overriding __getitem__ In-Reply-To: <47FBA047.9030205@sympatico.ca> References: <45AD037F-10DC-4F2B-8AD0-368E186EAA6D@usgs.gov> <47FBA047.9030205@sympatico.ca> Message-ID: Stefan and Matthieu - Thanks for the responses. I will work on my code as you suggest. Stefan - I think I used "class method" inappropriately - I think the appropriate term is "instance method". --Mike On Apr 8, 2008, at 10:41 AM, Stefan Seefeld wrote: > Michael Hearne wrote: > >> import numpy >> class Sequence: >> def __init__(self): self.data = >> numpy.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) >> def __getitem__(self,key1,key2): if isinstance(key1,int) and >> isinstance(key2,int): return(self.data[key1,key2]) >> if __name__ == "__main__": s = Sequence() >> s[2,2] >> However, I get the error: "TypeError: __getitem__() takes exactly 3 arguments (2 given)" >> Isn't self an implied argument since __getitem__ is a class method? >> Is this the wrong way to attain the interface I want? > > I did the following little experiment: > > class Sequence: > def __getitem__(self, *args): print 'getitem', args > > s = Sequence() > s[1] > s[1,2] > s[1:2] > ... > > The above shows that 's[1,2]' will pass a single argument: a '(1,2)' > tuple, and 's[1:2,1:2]' likewise passes a single argument: a tuple of > two slice objects. This is with python 2.5. A quick search didn't > reveal any documentation of this behavior. > > This suggests that you may rewrite your definition of __getitem__ to be > more flexible in its expectations as to the number and type of arguments. > > (I'm not sure what makes you think __getitem__ is a class method. It's > an ordinary attribute, expecting the first argument to be the object > reference, just what you pass above.) > > HTH, > > Stefan > > -- > > ...ich hab' noch einen Koffer in Berlin... > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL:
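Putting Matthieu's and Stefan's advice together, a corrected Sequence class might look like the following minimal sketch (an illustration, not Michael's final code; it assumes that forwarding the key to the underlying numpy array is all that is wanted, since numpy arrays already accept tuples of indices and slices):

import numpy

class Sequence:
    def __init__(self):
        self.data = numpy.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]])
    def __getitem__(self, key):
        # s[2,2] arrives here as the single tuple key (2, 2);
        # single indices and slices arrive unchanged, so the key
        # can simply be forwarded to the underlying array.
        return self.data[key]

if __name__ == "__main__":
    s = Sequence()
    print s[2, 2]        # element access -> 11
    print s[0:2, 0:2]    # slice access -> 2x2 subarray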
From Karl.Young at ucsf.edu Tue Apr 8 13:37:51 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 08 Apr 2008 10:37:51 -0700 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: <47FB8C79.2030808@astraw.com> References: <47FB8C79.2030808@astraw.com> Message-ID: <47FBAD6F.3010903@ucsf.edu> Just confirming Andrew's assessment of my situation (the description of his situation fits mine as well). I'm certainly willing to put some time into this project but don't have the time to organize it; while not an expert in either Bayes nets or python hacking I've done a fair amount of work with both (as well as Matlab in the ancient past, which might be of some use with the port). >See http://scipy.org/scipy/scikits/wiki/BayesNet . There are plans, the >BNT license is scipy-compatible, and there are already 5 people >interested in developing it. Speaking for myself, however, this has taken >a lower priority than I initially expected due to the ever-changing >energy landscape of a research program -- I'm simply too busy on other >stuff right now to contribute meaningfully. I suspect the situation may >be similar for the others listed. > >Nonetheless, we would welcome effort in this direction. > >-Andrew >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu

From gav451 at gmail.com Tue Apr 8 14:15:05 2008 From: gav451 at gmail.com (Gerard Vermeulen) Date: Tue, 8 Apr 2008 20:15:05 +0200 Subject: [SciPy-user] linear (polynomial) fit with error bars In-Reply-To: <47FB9990.8080009@unibo.it> References: <47FB5051.4050907@unibo.it> <1207662011.2763.1.camel@localhost> <47FB9990.8080009@unibo.it> Message-ID: <20080408201505.47f549e3@jupiter.rozan.fr> On Tue, 08 Apr 2008 18:13:04 +0200 massimo sandal wrote: > Fabrice Silva ha scritto: > > Le mardi 08 avril 2008 à 13:00 +0200, massimo sandal a écrit : > >> Hi, > >> > >> Is there a scipy implementation of linear fit that takes into > >> account the existence and width of error bars for each point? I > >> googled but I can't find that. > > > > What about computing a least squares approximation using weighting > > coefficients inversely proportional to the value uncertainty (i.e. > > the length of each error bar)? > > Yes, that's what I was trying to ask. > But I can't find the function that does it (if it exists). > > m. > You are looking for scipy.odr (requires scipy-0.6 or later). It is much more useful for fitting experimental data than leastsq. Gerard
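To make Gerard's pointer concrete, here is a minimal sketch of a weighted straight-line fit with scipy.odr; the data arrays and error bars below are invented for illustration:

import numpy
from scipy import odr

def linear(beta, x):
    # straight line: beta[0]*x + beta[1]
    return beta[0]*x + beta[1]

x = numpy.array([1.0, 2.0, 3.0, 4.0])
y = numpy.array([2.1, 3.9, 6.2, 7.8])
yerr = numpy.array([0.2, 0.1, 0.3, 0.2])   # one-sigma error bars

model = odr.Model(linear)
data = odr.RealData(x, y, sy=yerr)   # sy weights each point by 1/sy**2
fit = odr.ODR(data, model, beta0=[1.0, 0.0])
fit.set_job(fit_type=2)              # ordinary (weighted) least squares, x assumed exact
output = fit.run()
print output.beta      # fitted slope and intercept
print output.sd_beta   # standard errors of the parameters

By default ODR treats x as uncertain too (that is the "orthogonal" part); set_job(fit_type=2) restricts it to ordinary weighted least squares when only the y error bars matter.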
> > I don't have the proper education in math at this time to do much with > the core algorithms. But would be more than happy to help others with > python programming and design. > > - Ed From w.richert at gmx.net Wed Apr 9 02:45:17 2008 From: w.richert at gmx.net (Willi Richert) Date: Wed, 9 Apr 2008 08:45:17 +0200 Subject: [SciPy-user] Incremental Nearest Neighbor Message-ID: <200804090845.17817.w.richert@gmx.net> Hi, I'm looking for a nearest neighbor lib (accessible from Python) which increments on-line adding and removing of training data. The best kNN lib I've come across so far is approximate nearest neighbor library (http://www.cs.umd.edu/~mount/ANN/) together with the Python wrappers created by Barry Wark (posted at scipy-user some time ago). Howerver, that lib only supports batch mode. I need an approach with which I can add and remove training samples at run-time, while always being able to classify/test arriving data according to the actual realization of the kNN. Thanks for any help, wr From zhangchipr at gmail.com Wed Apr 9 04:42:36 2008 From: zhangchipr at gmail.com (zhang chi) Date: Wed, 9 Apr 2008 16:42:36 +0800 Subject: [SciPy-user] Is there a modified bessel function of the second kind in scipy? Message-ID: <90c482ab0804090142y1dbdb9cfg976e9ac8d813a6c9@mail.gmail.com> I only find the following functions, and don't find a modified bessel function of the second kind. thank you * jn -- Bessel function of integer order and real argument. * jv -- Bessel function of real-valued order and complex argument. * jve -- Exponentially scaled Bessel function. * yn -- Bessel function of second kind (integer order). * yv -- Bessel function of the second kind (real-valued order). * yve -- Exponentially scaled Bessel function of the second kind. * kn -- Modified Bessel function of the third kind (integer order). * kv -- Modified Bessel function of the third kind (real order). * kve -- Exponentially scaled modified Bessel function of the third kind. * iv -- Modified Bessel function. * ive -- Exponentially scaled modified Bessel function. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Wed Apr 9 04:48:15 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 9 Apr 2008 10:48:15 +0200 Subject: [SciPy-user] NetworkX Message-ID: <20080409084815.GD1761@phare.normalesup.org> Hi, I am doing a review of major scientific Python packages. I would like to know if people have used networkX as an important package in their stack, and what their research was on. I know it is an important package, I am just trying to figure out the usecases to find out why it is important, and not in terms of algorithmes, but in terms of applications, because I can explain this to non technical users. Cheers, Ga?l From lbolla at gmail.com Wed Apr 9 05:43:48 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 9 Apr 2008 11:43:48 +0200 Subject: [SciPy-user] Is there a modified bessel function of the second kind in scipy? In-Reply-To: <90c482ab0804090142y1dbdb9cfg976e9ac8d813a6c9@mail.gmail.com> References: <90c482ab0804090142y1dbdb9cfg976e9ac8d813a6c9@mail.gmail.com> Message-ID: <80c99e790804090243w5b452f33ia154fc76b68d6cb1@mail.gmail.com> Aren't the "modified bessel functions of the second kind" also known as "modified bessel functions of the third kind"? http://mathworld.wolfram.com/ModifiedBesselFunctionoftheSecondKind.html http://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions_:_I.CE.B1.2CK.CE.B1 L. 
On Wed, Apr 9, 2008 at 10:42 AM, zhang chi wrote: > I only find the following functions, and don't find a modified bessel > function of the second kind. > > thank you > > * jn -- Bessel function of integer order and real argument. > * jv -- Bessel function of real-valued order and complex argument. > * jve -- Exponentially scaled Bessel function. > * yn -- Bessel function of second kind (integer order). > * yv -- Bessel function of the second kind (real-valued order). > * yve -- Exponentially scaled Bessel function of the second kind. > * kn -- Modified Bessel function of the third kind (integer order). > * kv -- Modified Bessel function of the third kind (real order). > * kve -- Exponentially scaled modified Bessel function of the third kind. > * iv -- Modified Bessel function. > * ive -- Exponentially scaled modified Bessel function. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ggellner at uoguelph.ca Wed Apr 9 08:04:28 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 9 Apr 2008 08:04:28 -0400 Subject: [SciPy-user] NetworkX In-Reply-To: <20080409084815.GD1761@phare.normalesup.org> References: <20080409084815.GD1761@phare.normalesup.org> Message-ID: <20080409120428.GA8892@basestar> > I am doing a review of major scientific Python packages. I would like to > know if people have used networkX as an important package in their stack, > and what their research was on. I know it is an important package; I am > just trying to figure out the use cases to find out why it is important, > and not in terms of algorithms but in terms of applications, so that I > can explain this to non-technical users. > I use networkX all the time for my research on food webs in mathematical ecology. I find that networkX is nice both for creating algorithms that walk digraphs, so that I can calculate different food web statistics, and for using the many built-in network theory metrics so I can reproduce results from the literature. Is this what you were looking for? I can be more specific . . . Gabriel

From gael.varoquaux at normalesup.org Wed Apr 9 09:18:13 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 9 Apr 2008 15:18:13 +0200 Subject: [SciPy-user] NetworkX In-Reply-To: <20080409120428.GA8892@basestar> References: <20080409084815.GD1761@phare.normalesup.org> <20080409120428.GA8892@basestar> Message-ID: <20080409131813.GG1429@phare.normalesup.org> On Wed, Apr 09, 2008 at 08:04:28AM -0400, Gabriel Gellner wrote: > I use networkX all the time for my research on food webs in mathematical > ecology. I find that networkX is nice both for creating algorithms that walk > digraphs, so that I can calculate different food web statistics, and for > using the many built-in network theory metrics so I can reproduce results > from the literature. > Is this what you were looking for? I can be more specific . . . I would be interested in knowing what your graphs represent, in terms that I can explain to a non-scientist. What are the vertices and the arrows? Cheers, Gaël
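As a concrete illustration of the food-web use Gabriel describes, a toy digraph might look like this (a sketch: the species and the "is eaten by" relation are invented, and only the basic networkx DiGraph API is assumed):

import networkx as nx

# edges point from prey to predator ("is eaten by")
web = nx.DiGraph()
web.add_edge('grass', 'grasshopper')
web.add_edge('grasshopper', 'frog')
web.add_edge('grasshopper', 'bird')
web.add_edge('frog', 'heron')

# a simple food-web statistic: how many kinds of prey each species has
for species in web.nodes():
    print species, 'eats', web.in_degree(species), 'species'

Here the vertices are species and the arrows are feeding relationships; walking the digraph then gives statistics such as the number of prey per consumer.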
From abhinav.sarkar at gmail.com Wed Apr 9 09:17:21 2008 From: abhinav.sarkar at gmail.com (Abhinav Sarkar) Date: Wed, 9 Apr 2008 13:17:21 +0000 (UTC) Subject: [SciPy-user] What happened to ARPACK shift-invert/general eigenproblem routine? References: Message-ID: Neilen Marais sun.ac.za> writes: > > Hi, > > I used to use scipy.sandbox.arpack.speigs.ARPACK_gen_eigs() to use the > shift-invert mode of ARPACK to solve my problems. With the move of arpack > from sandbox to splinalg.arpack I can't seem to find this function. Any hints? > > Thanks > Neilen > It has been moved to scipy.sparse.linalg.eigen.arpack.speigs.ARPACK_gen_eigs(). Abhinav

From lou_boog2000 at yahoo.com Wed Apr 9 13:55:20 2008 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Wed, 9 Apr 2008 10:55:20 -0700 (PDT) Subject: [SciPy-user] Is there a modified bessel function of the second kind in scipy? In-Reply-To: <80c99e790804090243w5b452f33ia154fc76b68d6cb1@mail.gmail.com> Message-ID: <931433.49292.qm@web34406.mail.mud.yahoo.com> Yes, modified of the 3rd kind (SciPy) should be the same as modified 2nd kind. See http://en.wikipedia.org/wiki/Bessel_function --- lorenzo bolla wrote: > Aren't the "modified bessel functions of the second kind" also known as > "modified bessel functions of the third kind"? > > http://mathworld.wolfram.com/ModifiedBesselFunctionoftheSecondKind.html > http://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions_:_I.CE.B1.2CK.CE.B1 > > L. > > On Wed, Apr 9, 2008 at 10:42 AM, zhang chi wrote: > > I only find the following functions, and don't find a modified bessel > > function of the second kind. > > > > thank you > > > > * jn -- Bessel function of integer order and real argument. > > * jv -- Bessel function of real-valued order and complex argument. > > * jve -- Exponentially scaled Bessel function. > > * yn -- Bessel function of second kind (integer order). > > * yv -- Bessel function of the second kind (real-valued order). > > * yve -- Exponentially scaled Bessel function of the second kind. > > * kn -- Modified Bessel function of the third kind (integer order). > > * kv -- Modified Bessel function of the third kind (real order). > > * kve -- Exponentially scaled modified Bessel function of the third kind. > > * iv -- Modified Bessel function. > > * ive -- Exponentially scaled modified Bessel function. > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- > Lorenzo Bolla > lbolla at gmail.com > http://lorenzobolla.emurse.com/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lou Pecora, my views are my own.

From barrywark at gmail.com Wed Apr 9 16:03:59 2008 From: barrywark at gmail.com (Barry Wark) Date: Wed, 9 Apr 2008 13:03:59 -0700 Subject: [SciPy-user] Incremental Nearest Neighbor In-Reply-To: <200804090845.17817.w.richert@gmx.net> References: <200804090845.17817.w.richert@gmx.net> Message-ID: Willi, You're right that libANN doesn't allow you to update the kd-tree after construction. Depending on your use, however, you may find that re-creating the kd-tree with the new set of points is fast enough.
If you really need to dynamically update a kd-tree, you may want to start at http://citeseer.ist.psu.edu/procopiuc02bkdtree.html. I've never used this structure, but the Bkd-tree appears to allow dynamically updating the set of points while retaining the kd-tree's efficient query performance. I don't know of any python implementations or wrappers of the Bkd-tree, however. barry On Tue, Apr 8, 2008 at 11:45 PM, Willi Richert wrote: > Hi, > > I'm looking for a nearest neighbor lib (accessible from Python) which > increments on-line adding and removing of training data. > > The best kNN lib I've come across so far is approximate > nearest neighbor library (http://www.cs.umd.edu/~mount/ANN/) together with the > Python wrappers created by Barry Wark (posted at scipy-user some time ago). > Howerver, that lib only supports batch mode. > > I need an approach with which I can add and remove training samples at > run-time, while always being able to classify/test arriving data according to > the actual realization of the kNN. > > > Thanks for any help, > wr > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cjellison at ucdavis.edu Wed Apr 9 19:27:41 2008 From: cjellison at ucdavis.edu (Christopher Ellison) Date: Wed, 09 Apr 2008 16:27:41 -0700 Subject: [SciPy-user] NetworkX In-Reply-To: <20080409084815.GD1761@phare.normalesup.org> References: <20080409084815.GD1761@phare.normalesup.org> Message-ID: <47FD50ED.9070808@ucdavis.edu> Gael Varoquaux wrote the following on 04/09/2008 01:48 AM: > Hi, > > I am doing a review of major scientific Python packages. I would like to > know if people have used networkX as an important package in their stack, > and what their research was on. I subclass from NetworkX as part of my research with stochastic finite state automata. One could also use this for traditional CS finite state machines as well... Chris From cjellison at ucdavis.edu Wed Apr 9 19:30:30 2008 From: cjellison at ucdavis.edu (Christopher Ellison) Date: Wed, 09 Apr 2008 16:30:30 -0700 Subject: [SciPy-user] Incremental Nearest Neighbor In-Reply-To: References: <200804090845.17817.w.richert@gmx.net> Message-ID: <47FD5196.5010909@ucdavis.edu> Barry Wark wrote the following on 04/09/2008 01:03 PM: > You're right that libANN doesn't allow you to update the kd-tree after > construction. Depending on your use, however, you may find that > re-creating the kd-tree with the new set of points is fast enough. If > you really need to dynamically update a kd-tree, you may want to start > at http://citeseer.ist.psu.edu/procopiuc02bkdtree.html. Wow. This would be quite useful. Anyone know of a Python implementation? From gael.varoquaux at normalesup.org Wed Apr 9 19:42:23 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 10 Apr 2008 01:42:23 +0200 Subject: [SciPy-user] NetworkX In-Reply-To: <47FD50ED.9070808@ucdavis.edu> References: <20080409084815.GD1761@phare.normalesup.org> <47FD50ED.9070808@ucdavis.edu> Message-ID: <20080409234223.GK28763@phare.normalesup.org> On Wed, Apr 09, 2008 at 04:27:41PM -0700, Christopher Ellison wrote: > I subclass from NetworkX as part of my research with stochastic finite > state automata. One could also use this for traditional CS finite state > machines as well... OK. If I understand well this would fall in the field of theoretical computer science, right? 
Cheers, Gaël

From cjellison at ucdavis.edu Wed Apr 9 20:00:49 2008 From: cjellison at ucdavis.edu (Christopher Ellison) Date: Wed, 09 Apr 2008 17:00:49 -0700 Subject: [SciPy-user] NetworkX In-Reply-To: <20080409234223.GK28763@phare.normalesup.org> References: <20080409084815.GD1761@phare.normalesup.org> <47FD50ED.9070808@ucdavis.edu> <20080409234223.GK28763@phare.normalesup.org> Message-ID: <47FD58B1.5090707@ucdavis.edu> Gael Varoquaux wrote the following on 04/09/2008 04:42 PM: > OK. If I understand well this would fall in the field of theoretical > computer science, right? > That is certainly one area, especially with regard to traditional FSM. Depending on the focus, the field can vary: physics, applied math, cs, machine learning, etc.

From strawman at astraw.com Wed Apr 9 22:21:09 2008 From: strawman at astraw.com (Andrew Straw) Date: Wed, 09 Apr 2008 19:21:09 -0700 Subject: [SciPy-user] NetworkX In-Reply-To: <20080409084815.GD1761@phare.normalesup.org> References: <20080409084815.GD1761@phare.normalesup.org> Message-ID: <47FD7995.9060207@astraw.com> I am using it in an image analysis app. Gael Varoquaux wrote: > Hi, > > I am doing a review of major scientific Python packages. I would like to > know if people have used networkX as an important package in their stack, > and what their research was on. I know it is an important package; I am > just trying to figure out the use cases to find out why it is important, > and not in terms of algorithms but in terms of applications, so that I > can explain this to non-technical users. > > Cheers, > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >

From pieter.cogghe at gmail.com Thu Apr 10 04:31:48 2008 From: pieter.cogghe at gmail.com (Pieter) Date: Thu, 10 Apr 2008 10:31:48 +0200 Subject: [SciPy-user] execute function on an array elementwise In-Reply-To: <80c99e790804070644s1319eb95pe02a61ad20ed0fae@mail.gmail.com> References: <5c0bbcb30804070618y6f72c1c3pf6456b0fc3a9ce0@mail.gmail.com> <8C0947EB-6B3A-4B1B-B0D9-CACFB1936F94@inrialpes.fr> <80c99e790804070644s1319eb95pe02a61ad20ed0fae@mail.gmail.com> Message-ID: <5c0bbcb30804100131gefb8325p89a998ee6bb2b25b@mail.gmail.com> Hi, Thanks for the answers, I think vectorize was the one I was looking for. My example function was really bad, I'm sorry for that. I'm actually reading an image and have to convert every pixel value to an 8-bit vector. thanks a lot, Pieter 2008/4/7, lorenzo bolla : > or simply: > > def myFunction(X): > return where(X > 3, "some value", "other value") > > hth, > L. > > On Mon, Apr 7, 2008 at 3:33 PM, J-Pascal Mercier > wrote: > > > Hi Pieter, > > > > You could use something like : > > > > def myFunction(X): > > res = zeros(X.shape) > > res[where(X > 3)] = "some value" > > res[where(X <= 3)] = "another value" > > return res > > > > cheers, > > > > > > J-Pascal > > > > Projet PRIMA - Laboratoire LIG > > INRIA Grenoble Rhone-Alpes Research Centre > > 655, Avenue de l'Europe > > 38330 Montbonnot, France > > > > > > > > On 7-Apr-08, at 3:18 PM, Pieter wrote: > > > > > Hi all, > > > > > > I guess this is an easy one, but can't seem to find it. Suppose I have
Suppose I have > > > this function: > > > > > > def myFunction(x): > > > result = None > > > if x > 3: > > > result = "some value" > > > else: > > > result = "another value" > > > return result > > > > > > And I want to run it on an array a: > > > b = myFunction(a) > > > > > > which then returns an array with "some value" and "another value". I > > > could loop over the array, but I guess there's a better way to do > > > this? (something like arrayMagic from Matlab if I'm not mistaken. > > > > > > thanks a lot, > > > > > > Pieter > > > > > > > > > > > > -- > > > Pieter Cogghe > > > Ganzendries 186 > > > 9000 Gent > > > 0487 10 14 21 > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > -- > Lorenzo Bolla > lbolla at gmail.com > http://lorenzobolla.emurse.com/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Pieter Cogghe Ganzendries 186 9000 Gent 0487 10 14 21 From jelleferinga at gmail.com Thu Apr 10 07:12:08 2008 From: jelleferinga at gmail.com (jelle) Date: Thu, 10 Apr 2008 11:12:08 +0000 (UTC) Subject: [SciPy-user] NetworkX References: <20080409084815.GD1761@phare.normalesup.org> Message-ID: Hi Ga?l, Actually when working with graphs I preferred igraph over networkx: http://cneurocvs.rmki.kfki.hu/igraph/ -jelle From contact at pythonxy.com Thu Apr 10 07:36:43 2008 From: contact at pythonxy.com (Python(x,y) - Python for Scientists) Date: Thu, 10 Apr 2008 13:36:43 +0200 (CEST) Subject: [SciPy-user] Python(x,y) - Python for Scientists Message-ID: <53287.132.165.76.2.1207827403.squirrel@secure.nuxit.net> Dear all, The scientists among you may be interested in Python(x,y), a new scientific-oriented Python distribution. This Python/Eclipse distribution is freely available as a one-click Windows installer (a release for GNU/Linux with similar features will follow soon): http://www.pythonxy.com Please do not hesitate to forward this announcement... (I am very sorry if you have already received this e-mail through "python-list" mailing list) Thanks a lots, PR -- P. Raybaut Python(x,y) http://www.pythonxy.com From massimo.sandal at unibo.it Thu Apr 10 08:09:12 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 10 Apr 2008 14:09:12 +0200 Subject: [SciPy-user] linear (polynomial) fit with error bars In-Reply-To: <1207674245.2763.4.camel@localhost> References: <47FB5051.4050907@unibo.it> <1207662011.2763.1.camel@localhost> <47FB9990.8080009@unibo.it> <1207674245.2763.4.camel@localhost> Message-ID: <47FE0368.8030403@unibo.it> Hi, Fabrice Silva ha scritto: > Using the diag input argument of leastsq : > > from scipy import optimize > def errfunc(a,X,Y): > return Y-(a[0]*X+a[1]) > #b may be the vector containing the error bars sizes. > weigths = 1./b > a, success = optimize.leastsq(errfunc, [0,0],args=(X,Y), diag=weigths) > > You here give more importance to points having small error bars. Thanks for your advice. I am trying however to use your code, but I am stuck upon an error. 
From jelleferinga at gmail.com Thu Apr 10 07:12:08 2008 From: jelleferinga at gmail.com (jelle) Date: Thu, 10 Apr 2008 11:12:08 +0000 (UTC) Subject: [SciPy-user] NetworkX References: <20080409084815.GD1761@phare.normalesup.org> Message-ID: Hi Gaël, Actually when working with graphs I preferred igraph over networkx: http://cneurocvs.rmki.kfki.hu/igraph/ -jelle

From contact at pythonxy.com Thu Apr 10 07:36:43 2008 From: contact at pythonxy.com (Python(x,y) - Python for Scientists) Date: Thu, 10 Apr 2008 13:36:43 +0200 (CEST) Subject: [SciPy-user] Python(x,y) - Python for Scientists Message-ID: <53287.132.165.76.2.1207827403.squirrel@secure.nuxit.net> Dear all, The scientists among you may be interested in Python(x,y), a new scientific-oriented Python distribution. This Python/Eclipse distribution is freely available as a one-click Windows installer (a release for GNU/Linux with similar features will follow soon): http://www.pythonxy.com Please do not hesitate to forward this announcement... (I am very sorry if you have already received this e-mail through the "python-list" mailing list) Thanks a lot, PR -- P. Raybaut Python(x,y) http://www.pythonxy.com

From massimo.sandal at unibo.it Thu Apr 10 08:09:12 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Thu, 10 Apr 2008 14:09:12 +0200 Subject: [SciPy-user] linear (polynomial) fit with error bars In-Reply-To: <1207674245.2763.4.camel@localhost> References: <47FB5051.4050907@unibo.it> <1207662011.2763.1.camel@localhost> <47FB9990.8080009@unibo.it> <1207674245.2763.4.camel@localhost> Message-ID: <47FE0368.8030403@unibo.it> Hi, Fabrice Silva ha scritto: > Using the diag input argument of leastsq : > > from scipy import optimize > def errfunc(a,X,Y): > return Y-(a[0]*X+a[1]) > #b may be the vector containing the error bars sizes. > weigths = 1./b > a, success = optimize.leastsq(errfunc, [0,0],args=(X,Y), diag=weigths) > > You here give more importance to points having small error bars. Thanks for your advice. I am trying to use your code, but I am stuck on an error. Here is the script, which reads a very raw data file (see below):

#!/usr/bin/env python
from scipy import optimize

def errfunc(a,X,Y):
    return Y-(a[0]*X+a[1])

#b may be the vector containing the error bars sizes.
f=open("data.txt", "r")
datf=f.readlines()
numofdatapoints=len(datf)/3
xval=[0.0]*numofdatapoints
yval=[0.0]*numofdatapoints
err=[0.0]*numofdatapoints
# print len(datf)
# print datf
for i in range(numofdatapoints):
    xval[i]=float(datf[i])
    yval[i]=float(datf[i+numofdatapoints])
    err[i]=float(datf[i+2*numofdatapoints])
weigths=[0.0]*numofdatapoints
for i in range(numofdatapoints):
    weigths[i] = 1./err[i]
w=[0.0]*numofdatapoints
success=[0.0]*numofdatapoints
w, success = optimize.leastsq(errfunc, [0,0], args=(xval,yval), diag=weigths)
print valA
print success

-------
data.txt:
118.877580092022
110.450590941286
108.684062758621
109.314800167624
103.090778781767
98.5714370869397
29.1
31.42
33.74
36.06
38.38
40.7
2.76010170015786
3.52143474842509
2.45059986418858
3.21254530326032
2.11363073382134
2.14664809861522
-------

and the script dies with the following error:

massimo at calliope:~/Python/linfit$ python linfit.py
Traceback (most recent call last):
  File "linfit.py", line 31, in <module>
    w, success = optimize.leastsq(errfunc, [0,0], args=(xval,yval), diag=weigths)
  File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", line 262, in leastsq
    m = check_func(func,x0,args,n)[0]
  File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", line 12, in check_func
    res = atleast_1d(apply(thefunc,args))
  File "linfit.py", line 4, in errfunc
    return Y-(a[0]*X+a[1])
ValueError: shape mismatch: objects cannot be broadcast to a single shape

which baffles me. What should I look for to understand what I am doing wrong? m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL:
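Robert Kern's reply further down diagnoses the failure: xval and yval are built as Python lists, so a[0]*X becomes list repetition rather than array arithmetic. A corrected sketch of the loading and fitting part (assuming the same data.txt layout; note that leastsq's diag argument scales the fit parameters rather than the data points, so the error-bar weighting is moved into the residuals here):

import numpy
from scipy import optimize

def errfunc(a, X, Y, err):
    # dividing by err weights each point by 1/sigma, as in a chi-square fit
    return (Y - (a[0]*X + a[1])) / err

datf = [float(line) for line in open("data.txt")]
n = len(datf) // 3
# arrays, not lists, so that errfunc broadcasts correctly
xval = numpy.array(datf[:n])
yval = numpy.array(datf[n:2*n])
err  = numpy.array(datf[2*n:])

w, success = optimize.leastsq(errfunc, [0, 0], args=(xval, yval, err))
print w, success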
From jeevan.baretto at gmail.com Thu Apr 10 12:51:59 2008 From: jeevan.baretto at gmail.com (Jeevan Baretto) Date: Thu, 10 Apr 2008 22:21:59 +0530 Subject: [SciPy-user] Non-negative least squares method Message-ID: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> I was looking for the non-negative least squares method in scipy and couldn't find one. Can anyone help me out with this? Thanks, Jeevan -------------- next part -------------- An HTML attachment was scrubbed... URL:

From dmitrey.kroshko at scipy.org Thu Apr 10 13:24:23 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 10 Apr 2008 20:24:23 +0300 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> Message-ID: <47FE4D47.5080401@scipy.org> AFAIK scipy & numpy don't have one; maybe bvls from scikits.openopt can be helpful. Regards, D. Jeevan Baretto wrote: > I was looking for the non-negative least squares method in scipy and > couldn't find one. Can anyone help me out with this? > > Thanks, > Jeevan > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >

From nwagner at iam.uni-stuttgart.de Thu Apr 10 13:40:46 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 10 Apr 2008 19:40:46 +0200 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> Message-ID: On Thu, 10 Apr 2008 22:21:59 +0530 "Jeevan Baretto" wrote: > I was looking for the non-negative least squares method > in scipy and > couldn't find one. Can anyone help me out with this? > > Thanks, > Jeevan AFAIK, this is on the TODO list in the Openopt framework. See http://scipy.org/scipy/scikits/wiki/OpenOptTODO Dmitrey, correct me if I am missing something. http://lib.stat.cmu.edu/general/bvls Nils

From keflavich at gmail.com Thu Apr 10 13:45:24 2008 From: keflavich at gmail.com (Keflavich) Date: Thu, 10 Apr 2008 10:45:24 -0700 (PDT) Subject: [SciPy-user] ipython memory leak Message-ID: Hi, I'm running a script that loads a lot of fits images into the global namespace; I need at least most of them to be present for debugging purposes. However, every time I re-run the script, it consumes exactly the same amount of memory: nothing is freed even though all of the variables are overwritten.
I've tried playing with the gc module and %clear out and manually deleting the variables, but the memory is never freed. After 2-3 runs, ipython crashes complaining of memory issues. Plotting with matplotlib may be part of the problem, but the memory leak (?) still occurs if I remove the plotting commands. Any tips on cleaning things up? The memory error:

python2.5(751) malloc: *** mmap(size=102875136) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug

Thanks, Adam

From gael.varoquaux at normalesup.org Thu Apr 10 13:56:32 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 10 Apr 2008 19:56:32 +0200 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: <47FE4D47.5080401@scipy.org> References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> <47FE4D47.5080401@scipy.org> Message-ID: <20080410175632.GH29029@phare.normalesup.org> On Thu, Apr 10, 2008 at 08:24:23PM +0300, dmitrey wrote: > AFAIK scipy & numpy don't have one; maybe bvls from scikits.openopt can be > helpful. > Regards, D. Can't this be done with cobyla (constrained optimisation, found in scipy.optimize)? Cheers, Gaël > Jeevan Baretto wrote: > > I was looking for the non-negative least squares method in scipy and > > couldn't find one. Can anyone help me out with this?

From michael.abshoff at googlemail.com Thu Apr 10 13:53:16 2008 From: michael.abshoff at googlemail.com (Michael.Abshoff) Date: Thu, 10 Apr 2008 19:53:16 +0200 Subject: [SciPy-user] ipython memory leak In-Reply-To: References: Message-ID: <47FE540C.9070904@gmail.com> Keflavich wrote: Hi, > Hi, I'm running a script that loads a lot of fits images into the > global namespace; I need at least most of them to be present for > debugging purposes. However, every time I re-run the script, it > consumes exactly the same amount of memory: nothing is freed even > though all of the variables are overwritten. I've tried playing with > the gc module and %clear out and manually deleting the variables, but > the memory is never freed. After 2-3 runs, ipython crashes > complaining of memory issues. Plotting with matplotlib may be part > of the problem, but the memory leak (?) still occurs if I remove the > plotting commands. Any tips on cleaning things up? I have seen similar things with Sage, i.e. if you allocate a matrix and do not delete it via del before exit, it does not get deallocated despite its reference count being one. On the other hand, if you reassign that matrix to a different one, the now no longer referenced matrix object is deallocated. We are using a rather old ipython release, so it might have been fixed upstream. It might also be a bug in our code base, but this sounded eerily familiar. > The memory error: > python2.5(751) malloc: *** mmap(size=102875136) failed (error code=12) > *** error: can't allocate region > *** set a breakpoint in malloc_error_break to debug > > Thanks, > Adam Should this be discussed here, or what is the general policy for discussing ipython-related issues? Cheers, Michael > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >

From bing.jian at gmail.com Thu Apr 10 14:25:03 2008 From: bing.jian at gmail.com (Bing) Date: Thu, 10 Apr 2008 14:25:03 -0400 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> Message-ID: Hi Jeevan, I have a python wrapper of the following NNLS implementation http://www.cs.utexas.edu/~suvrit/work/progs/nnls.html after asking the same question in this mailing list. (google "scipy NNLS") Klaus Schuch also kindly sent me his implementation in numpy. If you would like to try my implementation, please let me know. Bing On Thu, Apr 10, 2008 at 12:51 PM, Jeevan Baretto wrote: > I was looking for the non-negative least squares method in scipy and > couldn't find one. Can anyone help me out with this? > > Thanks, > Jeevan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefan at sun.ac.za Thu Apr 10 14:33:32 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Thu, 10 Apr 2008 20:33:32 +0200 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: <47FBAD6F.3010903@ucsf.edu> References: <47FB8C79.2030808@astraw.com> <47FBAD6F.3010903@ucsf.edu> Message-ID: <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> Hi, We have a master's student who is interested in doing the initial port, and I hope we can form a productive collaboration: he has the time and you guys have a vision. It would be helpful if you could either discuss the idea further here, or expand the wiki page, so that I can give him a concrete plan to start with. Among other things, I'd like to know which parts are most important to port (i.e. which parts are not already provided by other libraries), how we should refactor the interface to best make use of Python's strong object-oriented facilities (maybe this will develop intuitively), and which references to use in docstrings (Chris Bishop's book looks like a good place to start?).
Regards Stéfan On 08/04/2008, Karl Young wrote: > > Just confirming Andrew's assessment of my situation (the description of > his situation fits mine as well). I'm certainly willing to put some time > into this project but don't have the time to organize it; while not an > expert in either Bayes nets or python hacking I've done a fair amount of > work with both (as well as Matlab in the ancient past, which might be of > some use with the port). > > > >See http://scipy.org/scipy/scikits/wiki/BayesNet . There are plans, the > >BNT license is scipy-compatible, and there are already 5 people > >interested in developing it. Speaking for myself, however, this has taken > >a lower priority than I initially expected due to the ever-changing > >energy landscape of a research program -- I'm simply too busy on other > >stuff right now to contribute meaningfully. I suspect the situation may > >be similar for the others listed. > > > >Nonetheless, we would welcome effort in this direction. > > > >-Andrew

From gael.varoquaux at normalesup.org Thu Apr 10 15:33:15 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 10 Apr 2008 21:33:15 +0200 Subject: [SciPy-user] ipython memory leak In-Reply-To: <47FE540C.9070904@gmail.com> References: <47FE540C.9070904@gmail.com> Message-ID: <20080410193314.GB2873@phare.normalesup.org> On Thu, Apr 10, 2008 at 07:53:16PM +0200, Michael.Abshoff wrote: > Should this be discussed here, or what is the general policy for discussing > ipython-related issues? They should be discussed on the ipython-dev (http://projects.scipy.org/mailman/listinfo/ipython-dev ) or ipython-users (http://projects.scipy.org/mailman/listinfo/ipython-user ) mailing lists. They will receive more attention there, especially from some of the ipython developers who do not use scientific tools. Cheers, Gaël

From Karl.Young at ucsf.edu Thu Apr 10 15:26:17 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Thu, 10 Apr 2008 12:26:17 -0700 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> References: <47FB8C79.2030808@astraw.com> <47FBAD6F.3010903@ucsf.edu> <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> Message-ID: <47FE69D9.3070200@ucsf.edu> >We have a master's student who is interested in doing the initial >port, and I hope we can form a productive collaboration: he has the >time and you guys have a vision. It would be helpful if you could >either discuss the idea further here, or expand the wiki page, so that >I can give him a concrete plan to start with. > >Among other things, I'd like to know which parts are most important >to port (i.e. which parts are not already provided by other >libraries), how we should refactor the interface to best make use of >Python's strong object-oriented facilities (maybe this will develop >intuitively), and which references to use in docstrings (Chris >Bishop's book looks like a good place to start?). > >Regards >Stéfan > > Stefan, that sounds great. After talking with Jarrod and David yesterday I'm getting a better feel for how the port might fit into the overall SciPy picture (initially as part of the learn scikit). One of the things I thought was great about the toolkit was Kevin Murphy's overarching view of graphical models, e.g. Hidden Markov Models are a particular case of his general scheme.
But David made the important point yesterday that if you wanted to use HMMs for something like speech processing, using such a general approach would be inefficient, and it would be better to use more specific code (currently existing in SciPy, I think). So one of the first tasks, consistent with what you describe, is to generate a priority list for the port, perhaps looking for overlapping functionality in SciPy and leaving that stuff out of the initial port. I'll start on that (and we can try to reconcile that with what anyone else comes up with) and I guess I can post my thoughts on the priority list on the wiki. I think I should leave decisions about the interface to those more expert in that (I'll be happy to start coding once those decisions are made though). What is the title of Chris Bishop's book? -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu

From dmitrey.kroshko at scipy.org Thu Apr 10 15:39:46 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 10 Apr 2008 22:39:46 +0300 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> Message-ID: <47FE6D02.1020803@scipy.org> Nils Wagner wrote: > On Thu, 10 Apr 2008 22:21:59 +0530 > "Jeevan Baretto" wrote: > >> I was looking for the non-negative least squares method >> in scipy and >> couldn't find one. >> >> Thanks, >> Jeevan >> > > AFAIK, this is on the TODO list in the Openopt framework. > > See > > http://scipy.org/scipy/scikits/wiki/OpenOptTODO > > Dmitrey, correct me if I am missing something. > What do you mean? Connecting bvls to scipy? I had explained that I'm not skilled enough for now in f2py to provide a callback function (to enable OO graphic output), so I'm not working on the task for now. > http://lib.stat.cmu.edu/general/bvls > > Nils Regards, D.

From dmitrey.kroshko at scipy.org Thu Apr 10 15:41:27 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 10 Apr 2008 22:41:27 +0300 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> Message-ID: <47FE6D67.4000403@scipy.org> Bing wrote: > Hi Jeevan, > I have a python wrapper of the following NNLS implementation > http://www.cs.utexas.edu/~suvrit/work/progs/nnls.html > > after asking the same question in this mailing list. (google "scipy > NNLS") > Klaus Schuch also kindly sent me his implementation in numpy. > If you would like to try my implementation, please let me know. > > Bing AFAIK (maybe I'm wrong) NNLS is just a successor of bvls (the latter can handle lb-ub bounds, while the former handles non-negativity only). D.

From dmitrey.kroshko at scipy.org Thu Apr 10 15:42:42 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 10 Apr 2008 22:42:42 +0300 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: <20080410175632.GH29029@phare.normalesup.org> References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> <47FE4D47.5080401@scipy.org> <20080410175632.GH29029@phare.normalesup.org> Message-ID: <47FE6DB2.3090100@scipy.org> Gael Varoquaux wrote: > On Thu, Apr 10, 2008 at 08:24:23PM +0300, dmitrey wrote: > >> AFAIK scipy & numpy don't have one; maybe bvls from scikits.openopt can be >> helpful. >> Regards, D.
>> > > Can't this be done with cobyla (constrained optimisation, found > in scipy.optimize)? > Yes, of course, but cobyla is not a specialized solver for the LLS problem. scipy lbfgsb could be much more appropriate, since it can handle derivatives, which are easy to compute here. However, even that one fails to solve rather large-scale problems (it yields a significantly worse objective function value); I had mentioned the issue here: http://openopt.blogspot.com/2008/03/new-llsp-solver-bvls.html If you have OO installed, you can see /examples/llsp_2.py for a benchmark of scipy_lbfgsb (or any other OO NLP solver; btw I haven't checked ALGENCAN, maybe it works better) vs bvls; try different N = 10, 50, etc. D.

From hoytak at gmail.com Thu Apr 10 16:00:23 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Thu, 10 Apr 2008 13:00:23 -0700 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> References: <47FB8C79.2030808@astraw.com> <47FBAD6F.3010903@ucsf.edu> <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> Message-ID: <4db580fd0804101300p1ce8df5eocec9acc256837208@mail.gmail.com> Very good news; I'll be following the development closely. As for references, one suggestion is Kevin Murphy's PhD thesis on bayes nets. http://www.cs.ubc.ca/~murphyk/Thesis/thesis.html Just a thought. --Hoyt On Thu, Apr 10, 2008 at 11:33 AM, Stéfan van der Walt wrote: > Hi, > > We have a master's student who is interested in doing the initial > port, and I hope we can form a productive collaboration: he has the > time and you guys have a vision. It would be helpful if you could > either discuss the idea further here, or expand the wiki page, so that > I can give him a concrete plan to start with. > > Among other things, I'd like to know which parts are most important > to port (i.e. which parts are not already provided by other > libraries), how we should refactor the interface to best make use of > Python's strong object-oriented facilities (maybe this will develop > intuitively), and which references to use in docstrings (Chris > Bishop's book looks like a good place to start?). > > Regards > Stéfan > > > > On 08/04/2008, Karl Young wrote: > > > > Just confirming Andrew's assessment of my situation (the description of > > his situation fits mine as well). I'm certainly willing to put some time > > into this project but don't have the time to organize it; while not an > > expert in either Bayes nets or python hacking I've done a fair amount of > > work with both (as well as Matlab in the ancient past, which might be of > > some use with the port). > > > > > > >See http://scipy.org/scipy/scikits/wiki/BayesNet . There are plans, the > > >BNT license is scipy-compatible, and there are already 5 people > > >interested in developing it. Speaking for myself, however, this has taken > > >a lower priority than I initially expected due to the ever-changing > > >energy landscape of a research program -- I'm simply too busy on other > > >stuff right now to contribute meaningfully. I suspect the situation may > > >be similar for the others listed. > > > > > >Nonetheless, we would welcome effort in this direction.
> > > > > >-Andrew > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From robert.kern at gmail.com Thu Apr 10 16:08:49 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 10 Apr 2008 15:08:49 -0500 Subject: [SciPy-user] linear (polynomial) fit with error bars In-Reply-To: <47FE0368.8030403@unibo.it> References: <47FB5051.4050907@unibo.it> <1207662011.2763.1.camel@localhost> <47FB9990.8080009@unibo.it> <1207674245.2763.4.camel@localhost> <47FE0368.8030403@unibo.it> Message-ID: <3d375d730804101308k46cc529ei64a3eca9afbcb51@mail.gmail.com> On Thu, Apr 10, 2008 at 7:09 AM, massimo sandal wrote: > massimo at calliope:~/Python/linfit$ python linfit.py > Traceback (most recent call last): > File "linfit.py", line 31, in > w, success = optimize.leastsq(errfunc, [0,0], args=(xval,yval), > diag=weigths) > File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", > line 262, in leastsq > m = check_func(func,x0,args,n)[0] > File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", > line 12, in check_func > res = atleast_1d(apply(thefunc,args)) > File "linfit.py", line 4, in errfunc > return Y-(a[0]*X+a[1]) > ValueError: shape mismatch: objects cannot be broadcast to a single shape > > which baffles me. What should I look for to understand what I am doing > wrong? Print out the various individual parts of that expression to make sure they are compatible (or use a debugger to do the same interactively). E.g. print Y print X print a In this case, the problem is that you are passing in X and Y as lists instead of arrays. Since a[0] is a numpy scalar type that inherits from the builtin int type, a[0]*X uses the list type's multiplication which leaves you with an empty list. Instead of constructing xval and yval as lists, use numpy.empty(). xval=numpy.empty([numofdatapoints]) yval=numpy.empty([numofdatapoints]) err=numpy.empty([numofdatapoints]) Also, you don't need to "declare" the variables "w" and "success". -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ed at lamedomain.net Thu Apr 10 16:09:30 2008 From: ed at lamedomain.net (Ed Rahn) Date: Thu, 10 Apr 2008 13:09:30 -0700 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> References: <47FB8C79.2030808@astraw.com> <47FBAD6F.3010903@ucsf.edu> <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> Message-ID: <47FE73FA.1090607@lamedomain.net> St?fan van der Walt wrote: > and which references to use in docstrings (Chris > Bishop's book looks like a good place to start?). http://www.cs.ubc.ca/~murphyk/Papers/bnt.pdf and listed references would be a good place to start. - Ed From travis at enthought.com Thu Apr 10 16:13:08 2008 From: travis at enthought.com (Travis Vaught) Date: Thu, 10 Apr 2008 15:13:08 -0500 Subject: [SciPy-user] [ANN] EuroSciPy Registration now open Message-ID: <238028CC-F3FA-4716-9027-A0C40EC5083D@enthought.com> Greetings, I'm pleased to announce that the registration for the first-annual EuroSciPy Conference is now open. 
http://scipy.org/EuroSciPy2008 Please take advantage of the early-bird rate and register soon. We'd love to have an early idea of attendance so that we can scale the venue appropriately (the available room is flexible in this regard). The EuroSciPy Conference will be held July 26-27, 2008 in Leipzig, Germany.

About EuroSciPy
---------------
EuroSciPy is designed to complement the popular SciPy Conferences which have been held for the last 7 years at Caltech (the 2008 SciPy Conference in the U.S. will be held the week of August 19-24). Similarly, the EuroSciPy Conference provides a unique opportunity to learn and affect what is happening in the realm of scientific computing with Python. Attendees will have the opportunity to review the available tools and how they apply to specific problems. By providing a forum for developers to share their Python expertise with the wider commercial, academic, and research communities, this conference fosters collaboration and facilitates the sharing of software components, techniques and a vision for high level language use in scientific computing. Typical presentations include general python use in the sciences, as well as NumPy and SciPy usage for general problem solving. Beyond the excellent talks, there are inter-session discussions that prove stimulating and helpful.

Registration
------------
The direct link to the registration site is here: http://www.python-academy.com/euroscipy/index.html The registration fee will be 100.00€ for early registrants and will increase to 150.00€ for late registration (after June 15). Registration will include breakfast, snacks and lunch for Saturday and Sunday.

Call for Participation
----------------------
If you are interested in presenting at the EuroSciPy Conference you may submit an abstract in Plain Text, PDF or MS Word formats to euroabstracts at scipy.org . The deadline for abstract submission is April 30, 2008. Papers and/or presentation slides are acceptable and are due by June 15, 2008. Presentations will be allotted 30 minutes. Please pass this announcement along to any other relevant contacts. Many Thanks, Travis N. Vaught

From gael.varoquaux at normalesup.org Thu Apr 10 16:31:44 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 10 Apr 2008 22:31:44 +0200 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: <47FE6D02.1020803@scipy.org> References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> <47FE6D02.1020803@scipy.org> Message-ID: <20080410203144.GB30722@phare.normalesup.org> On Thu, Apr 10, 2008 at 10:39:46PM +0300, dmitrey wrote: > What do you mean? Connecting bvls to scipy? I had explained that I'm not > skilled enough for now in f2py to provide a callback function (to enable > OO graphic output), so I'm not working on the task for now. What do you mean by graphic output? I don't see what kind of graphic output should go in a numerical algorithm. Cheers, Gaël

From matthieu.brucher at gmail.com Thu Apr 10 16:43:28 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 10 Apr 2008 22:43:28 +0200 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: <20080410203144.GB30722@phare.normalesup.org> References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> <47FE6D02.1020803@scipy.org> <20080410203144.GB30722@phare.normalesup.org> Message-ID: 2008/4/10, Gael Varoquaux : > On Thu, Apr 10, 2008 at 10:39:46PM +0300, dmitrey wrote: > > What do you mean? Connecting bvls to scipy?
I had explained that I'm not > > skilled enough for now in f2py to provide callback function (to enable > > OO graphic output), so I'm not working on the task for now. > > > What do you mean by graphic output? I don't see what kind of graphic > output should go in a numerical algorithm. If people want, they can have graphic output (see the examples, if you want). It can be very useful to track the evolution of the optimization. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Thu Apr 10 16:45:59 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 10 Apr 2008 22:45:59 +0200 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> <47FE6D02.1020803@scipy.org> <20080410203144.GB30722@phare.normalesup.org> Message-ID: <20080410204559.GC30722@phare.normalesup.org> On Thu, Apr 10, 2008 at 10:43:28PM +0200, Matthieu Brucher wrote: > If people want, they can have graphic output (see the examples, if you > want). It can be very useful to track the evolution of the optimization. Yes, I agree, but coupling this in scientific algorithms too heavily seems a very very bad idea, IMHO. I think it should be made possible to get a package with not one line related to graphics, or plotting, and of course no printing on the screen. Now adding the option of passing a few callbacks to do the plotting, but not giving these callbacks in the core package is a different story. Cheers, Gaël From dwf at cs.toronto.edu Thu Apr 10 17:23:08 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 10 Apr 2008 17:23:08 -0400 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: <47FE69D9.3070200@ucsf.edu> References: <47FB8C79.2030808@astraw.com> <47FBAD6F.3010903@ucsf.edu> <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> <47FE69D9.3070200@ucsf.edu> Message-ID: <9BC32CF5-8F7E-4671-9F3A-1311260C2493@cs.toronto.edu> On 10-Apr-08, at 3:26 PM, Karl Young wrote: > Stefan, that sounds great. After talking with Jarrod and David > yesterday > I'm getting a better feel for how the port might fit into the overall > SciPy picture (initially as part of the learn scikit). One of the > things > I thought was great about the toolkit was Kevin Murphy's overarching > view of graphical models, e.g. Hidden Markov Models are a particular > case of his general scheme. But David made the important point > yesterday > that if you wanted to use HMM's for something like speech processing > using such a general approach would be inefficient and it would be > better to use more specific code (currently existing in SciPy I > think). > So one of the first tasks, consistent with what you describe, is to > generate a priority list for the port, perhaps looking for overlapping > functionality in SciPy and leaving that stuff out of the initial port. > I'll start on that (and we can try to reconcile that with what anyone > else comes up with) and I guess I can post my thoughts on the priority > list on the wiki. I think I should leave decisions about the interface > to those more expert in that (I'll be happy to start coding once those > decisions are made though). What is the title of Chris Bishop's book?
I believe he's referring to "Pattern Recognition and Machine Learning", which was published in 2006. It's quickly become a favourite in many circles, and it contains one of the most comprehensive treatments of graphical models that I know of in any textbook. The website for the book: http://research.microsoft.com/~cmbishop/PRML/ Conveniently enough for all interested, the graphical models chapter is available as a free sample chapter! http://research.microsoft.com/~cmbishop/PRML/Bishop-PRML-sample.pdf Stefan, I think that Bishop's chapters on graphical models, Markov Chain Monte Carlo, and variational methods are an excellent place to start. Off the top of my head, these papers are rather illuminating as well: http://www.cs.toronto.edu/~roweis/papers/NC110201.pdf http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=910572 http://www.cs.toronto.edu/~radford/em.abstract.html Once I'm done my coursework for the term and have more time on my hands I'll be contributing whatever I can to the effort. Cheers, David From keflavich at gmail.com Thu Apr 10 17:31:53 2008 From: keflavich at gmail.com (Keflavich) Date: Thu, 10 Apr 2008 14:31:53 -0700 (PDT) Subject: [SciPy-user] ipython memory leak In-Reply-To: <20080410193314.GB2873@phare.normalesup.org> References: <47FE540C.9070904@gmail.com> <20080410193314.GB2873@phare.normalesup.org> Message-ID: <6a6989b1-5892-42d1-934f-1cb9630c1c84@e67g2000hsa.googlegroups.com> OK, thanks, I'll forward my message there. I wasn't sure which package was causing the issue, but if you both think it's ipython I'll go to those groups. Adam On Apr 10, 1:33 pm, Gael Varoquaux wrote: > On Thu, Apr 10, 2008 at 07:53:16PM +0200, Michael.Abshoff wrote: > > Should this be discussed here or what is the general policy to discuss > > ipython related issues? > > They should be discussed on the ipython-dev > (http://projects.scipy.org/mailman/listinfo/ipython-dev) or > ipython-users (http://projects.scipy.org/mailman/listinfo/ipython-user) > mailing lists. They will receive more attention, especially from some of > the ipython developers who do not use scientific tools. > > Cheers, > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From matthieu.brucher at gmail.com Thu Apr 10 17:35:10 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 10 Apr 2008 23:35:10 +0200 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: <20080410204559.GC30722@phare.normalesup.org> References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> <47FE6D02.1020803@scipy.org> <20080410203144.GB30722@phare.normalesup.org> <20080410204559.GC30722@phare.normalesup.org> Message-ID: 2008/4/10, Gael Varoquaux : > > On Thu, Apr 10, 2008 at 10:43:28PM +0200, Matthieu Brucher wrote: > > If people want, they can have graphic output (see the examples, if > you > > want). It can be very useful to track the evolution of the > optimization. > > > Yes, I agree, but coupling this in scientific algorithms too heavily seems > a very very bad idea, IMHO. I think it should be made possible to get a > package with not one line related to graphics, or plotting, and of course > no printing on the screen. > > Now adding the option of passing a few callbacks to do the plotting, but > not giving these callbacks in the core package is a different story. I think that is what dmitrey does.
But as he provides a specific interface to a lot of different solvers, providing a half-baked wrapper to bvls is not what he wants, because people expect the full interface ;) Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Thu Apr 10 17:35:40 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 10 Apr 2008 23:35:40 +0200 Subject: [SciPy-user] Non-negative least squares method In-Reply-To: References: <46f941590804100951u69056646gb539995dd310e9f@mail.gmail.com> <47FE6D02.1020803@scipy.org> <20080410203144.GB30722@phare.normalesup.org> <20080410204559.GC30722@phare.normalesup.org> Message-ID: <20080410213540.GE30722@phare.normalesup.org> On Thu, Apr 10, 2008 at 11:35:10PM +0200, Matthieu Brucher wrote: > I think that is what dmitrey does. But as he provides a specific interface > to a lot of different solvers, providing a half-baked wrapper to bvls is > not what he wants, because people expect the full interface ;) Great. I was asking the question because I was curious to know what was the issue. I had sort of guessed it could be this, but I wasn't too sure, and I thought I might have overlooked something. Thanks, Gaël From stefan at sun.ac.za Thu Apr 10 18:03:48 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Fri, 11 Apr 2008 00:03:48 +0200 Subject: [SciPy-user] ipython memory leak In-Reply-To: <6a6989b1-5892-42d1-934f-1cb9630c1c84@e67g2000hsa.googlegroups.com> References: <47FE540C.9070904@gmail.com> <20080410193314.GB2873@phare.normalesup.org> <6a6989b1-5892-42d1-934f-1cb9630c1c84@e67g2000hsa.googlegroups.com> Message-ID: <9457e7c80804101503t533c7b24r42e7ce3231651400@mail.gmail.com> On 10/04/2008, Keflavich wrote: > OK, thanks, I'll forward my message there. I wasn't sure which > package was causing the issue, but if you both think it's ipython I'll > go to those groups. I have a suspicion they'll tell you that IPython can't leak memory (other than what Python leaks by itself), since it is written in pure Python. Stéfan From ellisonbg.net at gmail.com Thu Apr 10 18:07:09 2008 From: ellisonbg.net at gmail.com (Brian Granger) Date: Thu, 10 Apr 2008 16:07:09 -0600 Subject: [SciPy-user] ipython memory leak In-Reply-To: <9457e7c80804101503t533c7b24r42e7ce3231651400@mail.gmail.com> References: <47FE540C.9070904@gmail.com> <20080410193314.GB2873@phare.normalesup.org> <6a6989b1-5892-42d1-934f-1cb9630c1c84@e67g2000hsa.googlegroups.com> <9457e7c80804101503t533c7b24r42e7ce3231651400@mail.gmail.com> Message-ID: <6ce0ac130804101507s8850772ue8b3ceebe3a84c7c@mail.gmail.com> > I have a suspicion they'll tell you that IPython can't leak memory > (other than what Python leaks by itself), since it is written in pure > Python.
Yep, try to run the code with regular python (not ipython) and I bet you will still see the problem :) > Stéfan > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Thu Apr 10 18:06:56 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 11 Apr 2008 00:06:56 +0200 Subject: [SciPy-user] ipython memory leak In-Reply-To: <9457e7c80804101503t533c7b24r42e7ce3231651400@mail.gmail.com> References: <47FE540C.9070904@gmail.com> <20080410193314.GB2873@phare.normalesup.org> <6a6989b1-5892-42d1-934f-1cb9630c1c84@e67g2000hsa.googlegroups.com> <9457e7c80804101503t533c7b24r42e7ce3231651400@mail.gmail.com> Message-ID: <20080410220656.GH30722@phare.normalesup.org> On Fri, Apr 11, 2008 at 12:03:48AM +0200, Stéfan van der Walt wrote: > On 10/04/2008, Keflavich wrote: > > OK, thanks, I'll forward my message there. I wasn't sure which > > package was causing the issue, but if you both think it's ipython I'll > > go to those groups. > I have a suspicion they'll tell you that IPython can't leak memory > (other than what Python leaks by itself), since it is written in pure > Python. Sure it can: references that do not get destroyed. Actually ipython does leak memory by default: the result of each command you enter is stored in the _%i variable, where %i is the number of the prompt. And if you return huge results without storing them in a variable that will get garbage collected, you leak memory quite badly. That's a feature, not a bug (:-P), but it is easy to work around, simply delete those variables. Cheers, Gaël From robert.kern at gmail.com Thu Apr 10 18:14:09 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 10 Apr 2008 17:14:09 -0500 Subject: [SciPy-user] ipython memory leak In-Reply-To: <20080410220656.GH30722@phare.normalesup.org> References: <47FE540C.9070904@gmail.com> <20080410193314.GB2873@phare.normalesup.org> <6a6989b1-5892-42d1-934f-1cb9630c1c84@e67g2000hsa.googlegroups.com> <9457e7c80804101503t533c7b24r42e7ce3231651400@mail.gmail.com> <20080410220656.GH30722@phare.normalesup.org> Message-ID: <3d375d730804101514v3d8cf1a5h19d366e3cb0da327@mail.gmail.com> On Thu, Apr 10, 2008 at 5:06 PM, Gael Varoquaux wrote: > Sure it can: references that do not get destroyed. > > Actually ipython does leak memory by default: the result of each command > you enter is stored in the _%i variable, where %i is the number of the > prompt. That's not the usual definition of a memory leak. A program leaks memory when it has allocated memory, but it has lost track of it and cannot reach it anymore. Memory leaks are, by definition, bugs. If it's a feature, it's not a memory leak. http://en.wikipedia.org/wiki/Memory_leak -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
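The caching behaviour Gael describes can be sketched in plain Python, with a dict standing in for ipython's output history (how to clear the real history depends on the ipython version, so this only illustrates the reference-keeping):

    import numpy

    output_history = {}                      # stands in for ipython's _%i cache
    output_history[2] = numpy.zeros(10**7)   # a large result the user never
                                             # assigned, yet still reachable,
                                             # so it is never freed
    del output_history[2]                    # dropping the last reference is
                                             # what lets the memory be reclaimed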
From Karl.Young at ucsf.edu Thu Apr 10 18:07:27 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Thu, 10 Apr 2008 15:07:27 -0700 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: <9BC32CF5-8F7E-4671-9F3A-1311260C2493@cs.toronto.edu> References: <47FB8C79.2030808@astraw.com> <47FBAD6F.3010903@ucsf.edu> <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> <47FE69D9.3070200@ucsf.edu> <9BC32CF5-8F7E-4671-9F3A-1311260C2493@cs.toronto.edu> Message-ID: <47FE8F9F.2010802@ucsf.edu> >> What is the title of Chris Bishop's book? > I believe he's referring to "Pattern Recognition and Machine Learning", which was published in 2006. It's quickly become a favourite in many circles, and it contains one of the most comprehensive treatments of graphical models that I know of in any textbook. The website for the book: http://research.microsoft.com/~cmbishop/PRML/ > Conveniently enough for all interested, the graphical models chapter is available as a free sample chapter! > http://research.microsoft.com/~cmbishop/PRML/Bishop-PRML-sample.pdf > Stefan, I think that Bishop's chapters on graphical models, Markov Chain Monte Carlo, and variational methods are an excellent place to start. > Off the top of my head, these papers are rather illuminating as well: > http://www.cs.toronto.edu/~roweis/papers/NC110201.pdf > http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=910572 > http://www.cs.toronto.edu/~radford/em.abstract.html > Once I'm done my coursework for the term and have more time on my hands I'll be contributing whatever I can to the effort Thanks for the references! For me the "machine learning bibles" have been the Weka "Data Mining" book and Hastie, Tibshirani, and Friedman's book ("The Elements of Statistical Learning") but save for a few lines in the Weka book there isn't anything in those on Bayes nets. For Bayes nets I've been trying to get through Spirtes et al.'s "Causation, Prediction and Search" but the chapter from Murphy's book on graphical models looks great; I look forward to reading it and the other references above. -- KY Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From robert.kern at gmail.com Thu Apr 10 18:20:10 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 10 Apr 2008 17:20:10 -0500 Subject: [SciPy-user] ipython memory leak In-Reply-To: References: Message-ID: <3d375d730804101520n4c3fa10fif91d0bf7fa0fd4c7@mail.gmail.com> On Thu, Apr 10, 2008 at 12:45 PM, Keflavich wrote: > Hi, I'm running a script that loads a lot of fits images into the > global namespace; I need at least most of them to be present for > debugging purposes. However, every time I re-run the script, it > consumes exactly the same amount of memory: nothing is freed even > though all of the variables are overwritten. I've tried playing with > the gc module and %clear out and manually deleting the variables, but > the memory is never freed. After 2-3 runs, ipython crashes > complaining of a memory issue. Plotting with matplotlib may be part > of the problem, but the memory leak (?) still occurs if I remove the > plotting commands. Any tips on cleaning things up? > > The memory error: > python2.5(751) malloc: *** mmap(size=102875136) failed (error code=12) > *** error: can't allocate region > *** set a breakpoint in malloc_error_break to debug Can you put a loop at the top of your code so that it runs 3-4 times just as a script without ipython? If you can run your script using gdb with a breakpoint set in malloc_error_break, you could give us a backtrace that might help us pin down the problem. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
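For reference, Robert's gdb suggestion looks roughly like this at a terminal (the breakpoint name comes straight from the malloc error message above; the interpreter and script names are placeholders):

    $ gdb --args python2.5 myscript.py
    (gdb) break malloc_error_break
    (gdb) run
    ... (execution stops when the failing allocation is hit) ...
    (gdb) bt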
From yaroslavvb at gmail.com Thu Apr 10 18:46:52 2008 From: yaroslavvb at gmail.com (Yaroslav Bulatov) Date: Thu, 10 Apr 2008 15:46:52 -0700 Subject: [SciPy-user] Installing Scipy In-Reply-To: <4F98C7F5-CA90-4568-839D-0CD8D9295A4A@suddenlink.net> References: <4F98C7F5-CA90-4568-839D-0CD8D9295A4A@suddenlink.net> Message-ID: I'm having the same problem installing SciPy as the user below. I'm on MacOS 10.5.2, gcc 4.0.1, gfortran 4.2. I tried it with both MacPython and ActiveState 2.5.2 for Macs, using instructions on http://www.scipy.org/Installing_SciPy/Mac_OS_X NumPy works and passes all the tests but when I do "import scipy", I get "ImportError: No module named __config__" Any suggestions? > File "scipy/__init__.py", line 54, in <module> > from __config__ import show as show_config On Sat, Nov 17, 2007 at 10:27 PM, David Arnold wrote: > All, > > Mac OS X 10.4.11 PPC. > > Installed python from this: > > http://www.python.org/ftp/python/2.5.1/python-2.5.1-macosx.dmg > > Installation finishes with "There were errors installing the > software. Please try installing again." Installing again makes no > difference, but Python appears to be running: > > scipy-0.6.0 $ python > Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) > [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> > > > Downloaded scipy from: http://prdownloads.sourceforge.net/scipy/scipy-0.6.0.tar.gz?download > > After unpacking and changing directories, simply did this: > > scipy-0.6.0 $ sudo python setup.py install > > Seems to finish OK. Lots of "Warnings". But testing according to > INSTALL.txt resulted in: > > scipy-0.6.0 $ python > Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) > [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "scipy/__init__.py", line 54, in <module> > from __config__ import show as show_config > ImportError: No module named __config__ > > Any ideas or advice? > > Thanks > > David. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Thu Apr 10 18:50:14 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 10 Apr 2008 17:50:14 -0500 Subject: [SciPy-user] Installing Scipy In-Reply-To: References: <4F98C7F5-CA90-4568-839D-0CD8D9295A4A@suddenlink.net> Message-ID: <3d375d730804101550v38665307p697e705bf32958e6@mail.gmail.com> On Thu, Apr 10, 2008 at 5:46 PM, Yaroslav Bulatov wrote: > I'm having the same problem installing SciPy as the user below. I'm on > MacOS 10.5.2, gcc 4.0.1, gfortran 4.2. > I tried it with both MacPython and ActiveState 2.5.2 for Macs, using > instructions on http://www.scipy.org/Installing_SciPy/Mac_OS_X > > NumPy works and passes all the tests but when I do "import scipy", I get > > "ImportError: No module named __config__" > > Any suggestions?
> > > > > File "scipy/__init__.py", line 54, in <module> > > from __config__ import show as show_config Don't try to import scipy while inside the source tree. Change directories first. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From timmichelsen at gmx-topmail.de Thu Apr 10 18:58:26 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Fri, 11 Apr 2008 00:58:26 +0200 Subject: [SciPy-user] Python(x,y) - Python for Scientists In-Reply-To: <53287.132.165.76.2.1207827403.squirrel@secure.nuxit.net> References: <53287.132.165.76.2.1207827403.squirrel@secure.nuxit.net> Message-ID: Hello! I like your initiative and will test it. It will ease up the installation woes in Windows. > The scientists among you may be interested in Python(x,y), a new > scientific-oriented Python distribution. This Python/Eclipse distribution > is freely available as a one-click Windows installer (a release for > GNU/Linux with similar features will follow soon): > http://www.pythonxy.com Please consider preparing a new release after the forthcoming release of the Numpy 1.0.5. I would also like it if you could include the Scipy-Scikits. Many thanks for your efforts. Kind regards, Timmei From Joris.DeRidder at ster.kuleuven.be Thu Apr 10 21:01:54 2008 From: Joris.DeRidder at ster.kuleuven.be (Joris De Ridder) Date: Fri, 11 Apr 2008 03:01:54 +0200 Subject: [SciPy-user] Installing Scipy In-Reply-To: <3d375d730804101550v38665307p697e705bf32958e6@mail.gmail.com> References: <4F98C7F5-CA90-4568-839D-0CD8D9295A4A@suddenlink.net> <3d375d730804101550v38665307p697e705bf32958e6@mail.gmail.com> Message-ID: <65B3A913-8BB6-496C-9EFA-39DA19E35774@ster.kuleuven.be> On 11 Apr 2008, at 00:50, Robert Kern wrote: > On Thu, Apr 10, 2008 at 5:46 PM, Yaroslav Bulatov > wrote: >> I'm having the same problem installing SciPy as the user below. I'm >> on >> MacOS 10.5.2, gcc 4.0.1, gfortran 4.2. >> I tried it with both MacPython and ActiveState 2.5.2 for Macs, using >> instructions on http://www.scipy.org/Installing_SciPy/Mac_OS_X >> >> NumPy works and passes all the tests but when I do "import scipy", >> I get >> >> "ImportError: No module named __config__" >> >> Any suggestions? >> >>> File "scipy/__init__.py", line 54, in <module> >>> from __config__ import show as show_config > > Don't try to import scipy while inside the source tree. Change > directories first. The question pops up once in a while, so I added this remark to the wiki page mentioned above. J. Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From keflavich at gmail.com Thu Apr 10 22:45:02 2008 From: keflavich at gmail.com (Keflavich) Date: Thu, 10 Apr 2008 19:45:02 -0700 (PDT) Subject: [SciPy-user] ipython memory leak In-Reply-To: <3d375d730804101520n4c3fa10fif91d0bf7fa0fd4c7@mail.gmail.com> References: <3d375d730804101520n4c3fa10fif91d0bf7fa0fd4c7@mail.gmail.com> Message-ID: I'm afraid I don't know what gdb is; is that related to pdb? Anyway, it turns out running it in a loop doesn't cause any problems: it won't crash if the code is looped over, only if it is called in ipython. It also proved difficult to run in pure python, probably because I picked the wrong version of python / wrong startup file (between scipy, scisoft, and macOS I had to strictly define where to search for packages in ipython).
More tidbits: - I don't get a crash unless I'm using matplotlib; however, I still use all of my computer's memory and drive it to a standstill - from that, I think the crash happens when pylab/matplotlib tries to write to memory and gets only virtual memory, but that's a guess So my best guess is that the %run magic is keeping everything in memory for some reason? That's as far as I can speculate. Thanks for the help, Adam On Apr 10, 4:20 pm, "Robert Kern" wrote: > On Thu, Apr 10, 2008 at 12:45 PM, Keflavich wrote: > > Hi, I'm running a script that loads a lot of fits images into the > > global namespace; I need at least most of them to be present for > > debugging purposes. However, every time I re-run the script, it > > consumes exactly the same amount of memory: nothing is freed even > > though all of the variables are overwritten. I've tried playing with > > the gc module and %clear out and manually deleting the variables, but > > the memory is never freed. After 2-3 runs, ipython crashes > > complaining of a memory issue. Plotting with matplotlib may be part > > of the problem, but the memory leak (?) still occurs if I remove the > > plotting commands. Any tips on cleaning things up? > > > The memory error: > > python2.5(751) malloc: *** mmap(size=102875136) failed (error code=12) > > *** error: can't allocate region > > *** set a breakpoint in malloc_error_break to debug > > Can you put a loop at the top of your code so that it runs 3-4 times > just as a script without ipython? If you can run your script using gdb > with a breakpoint set in malloc_error_break, you could give us a > backtrace that might help us pin down the problem. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From zhangchipr at gmail.com Fri Apr 11 08:57:50 2008 From: zhangchipr at gmail.com (zhang chi) Date: Fri, 11 Apr 2008 20:57:50 +0800 Subject: [SciPy-user] How to draw a 3D graphic of a function? Message-ID: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> hi I want to draw a matrix of 100 X 100, its elements are the values of a function. Is there a package to draw the graphic in scipy? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Fri Apr 11 09:02:03 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 11 Apr 2008 15:02:03 +0200 Subject: [SciPy-user] How to draw a 3D graphic of a function? In-Reply-To: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> References: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> Message-ID: <20080411130203.GD1942@phare.normalesup.org> On Fri, Apr 11, 2008 at 08:57:50PM +0800, zhang chi wrote: > I want to draw a matrix of 100 X 100, its elements are the values of a > function. I suppose you want to map the value of your matrix to the altitude of a surface? You can do this with Mayavi2. Have a look at the user guide, https://svn.enthought.com/enthought/attachment/wiki/MayaVi/user_guide.pdf?format=raw section "simple scripting with mlab", or "mlab reference" to see if you find what you want. If you need help installing Mayavi2, give me more details on your platform. Cheers, Gaël
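For zhang chi's concrete case, a rough sketch of what Gael suggests (assuming a Mayavi2 version that provides the mlab API described in that user guide; the import path may differ between releases):

    import numpy
    from enthought.mayavi import mlab   # Mayavi2's mlab, not pylab

    x, y = numpy.mgrid[-3:3:100j, -3:3:100j]
    z = numpy.sin(x * y)   # any 100 x 100 matrix of function values
    mlab.surf(z)           # matrix entries become the surface altitude
    mlab.show()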
From zunzun at zunzun.com Fri Apr 11 09:06:34 2008 From: zunzun at zunzun.com (James Phillips) Date: Fri, 11 Apr 2008 08:06:34 -0500 Subject: [SciPy-user] How to draw a 3D graphic of a function? In-Reply-To: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> References: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> Message-ID: <268756d30804110606t6da55eb3p49a6b14d620b6da2@mail.gmail.com> DISLIN (http://www.mps.mpg.de/dislin/) works well for surface plots, and comes with Python wrappers. Free for non-commercial use. I use it to draw surface plots on my curve and surface fitting web site http://zunzun.com - although I would rather use something else, DISLIN works and I can afford it (free for my use). James On Fri, Apr 11, 2008 at 7:57 AM, zhang chi wrote: > hi > I want to draw a matrix of 100 X 100, its elements are the values of a > function. > > Is there a package to draw the graphic in scipy? > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Fri Apr 11 09:09:44 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Fri, 11 Apr 2008 09:09:44 -0400 Subject: [SciPy-user] How to draw a 3D graphic of a function? In-Reply-To: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> References: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> Message-ID: <846752BA-1A8F-4100-848E-3E869835C22E@yale.edu> Hello, You could use PIL to write the matrix out as an image and open it in an external viewer (the TIFF format supports 32-bit floating point data, and at least ImageJ can open these files fine), or use matplotlib to plot the matrix as an image, or use Gnuplot / gnuplot.py to display a wireframe/surface view of the matrix. This list is of course very incomplete, and none of the packages are in scipy. No matter though -- most are easy to get and install, and are quite useful. It's certainly worth the time to learn matplotlib and/or Gnuplot. Zach On Apr 11, 2008, at 8:57 AM, zhang chi wrote: > hi > I want to draw a matrix of 100 X 100, its elements are the values > of a function. > > Is there a package to draw the graphic in scipy? > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From lists at benair.net Fri Apr 11 09:10:06 2008 From: lists at benair.net (BK) Date: Fri, 11 Apr 2008 15:10:06 +0200 Subject: [SciPy-user] How to draw a 3D graphic of a function? In-Reply-To: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> References: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> Message-ID: <1207919406.26630.5.camel@iagpc71.iag.uni-stuttgart.de> I recommend the PyX package, which is great for 2D plotting but can also handle simple 3D plots like the one you are describing.
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From lbolla at gmail.com Fri Apr 11 09:11:35 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 11 Apr 2008 15:11:35 +0200 Subject: [SciPy-user] BVP Message-ID: <80c99e790804110611q2ea46213j509a127f19952c0d@mail.gmail.com> Hi all! Is there any package to solve Boundary Value Problems with Scipy? I'm thinking to something like Matlab's bvp4c. I found this, by Pauli Virtanen: http://www.elisanet.fi/ptvirtan/software/bvp/index.html, which is a wrapper to COLNEW, but it fails to compile/install. Here is the installation error: ---------------------- $> python setup.py install setup.py:17: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) Traceback (most recent call last): File "setup.py", line 63, in setup(**configuration(top_path='').todict()) File "setup.py", line 24, in configuration info = __import__('bvp/info') ImportError: No module named bvp/info ---------------------- It looks like a problem with bvp/info, so I get rid of that instruction, but I have a compilation problem: ---------------------- $ python setup.py install setup.py:17: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) non-existing path in '.': 'lib/colnew.pyf' non-existing path in '.': 'lib/mus.pyf' Warning: Assuming default configuration (.\bvp\tests/{setup_tests,setup}.py was not found) Appending bvp.tests configuration to bvp Ignoring attempt to set 'name' (from 'bvp' to 'bvp.tests') running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building extension "bvp._colnew" sources target build\src.win32-2.5\lib\_colnewmodule.c does not exist: Assuming _colnewmodule.c was generated with "build_src --inplace" command. error: 'lib\\_colnewmodule.c' missing ---------------------- Any hints? Thank you very much!! Lorenzo -- Lorenzo Bolla, Ph. D. lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Fri Apr 11 09:15:41 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 11 Apr 2008 15:15:41 +0200 Subject: [SciPy-user] How to draw a 3D graphic of a function? In-Reply-To: <1207919406.26630.5.camel@iagpc71.iag.uni-stuttgart.de> References: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com> <1207919406.26630.5.camel@iagpc71.iag.uni-stuttgart.de> Message-ID: <80c99e790804110615g6d7a6eb3q161d3945da594a5@mail.gmail.com> The link should be: http://pyx.sourceforge.net/examples/3dgraphs/index.html L. On Fri, Apr 11, 2008 at 3:10 PM, BK wrote: > I recommend the PyX package, which is great for 2D plotting but can also > handle simple 3D plots as the one you are describing. 
> www.pyx.sourceforge.net/examples/3dgraphs/index.html > > Cheers, > bene > > > Am Freitag, den 11.04.2008, 20:57 +0800 schrieb zhang chi: > > hi > > I want to draw a matrix of 100 X 100, its elements are the values > > of a function. > > > > Is there a package to draw the graphic in scipy? > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From markbak at gmail.com Fri Apr 11 10:06:24 2008 From: markbak at gmail.com (Mark Bakker) Date: Fri, 11 Apr 2008 16:06:24 +0200 Subject: [SciPy-user] Python(x,y) - Python for Scientists Message-ID: <6946b9500804110706u438b81e3tc5659cc7ca34292a@mail.gmail.com> Most excellent. This was highly needed, I think. Any chance you will also add a Mac version. That would be even better, Mark > > Message: 10 > Date: Thu, 10 Apr 2008 13:36:43 +0200 (CEST) > From: "Python(x,y) - Python for Scientists" > Subject: [SciPy-user] Python(x,y) - Python for Scientists > To: scipy-user at scipy.org > Message-ID: <53287.132.165.76.2.1207827403.squirrel at secure.nuxit.net> > Content-Type: text/plain;charset=iso-8859-1 > > Dear all, > > The scientists among you may be interested in Python(x,y), a new > scientific-oriented Python distribution. This Python/Eclipse distribution > is freely available as a one-click Windows installer (a release for > GNU/Linux with similar features will follow soon): > http://www.pythonxy.com > > Please do not hesitate to forward this announcement... > (I am very sorry if you have already received this e-mail through > "python-list" mailing list) > > Thanks a lots, > PR > > -- > P. Raybaut > Python(x,y) > http://www.pythonxy.com > > > > ------------------------------ > > Message: 11 > Date: Thu, 10 Apr 2008 14:09:12 +0200 > From: massimo sandal > Subject: Re: [SciPy-user] linear (polynomial) fit with error bars > To: SciPy Users List > Message-ID: <47FE0368.8030403 at unibo.it> > Content-Type: text/plain; charset="iso-8859-15" > > Hi, > > Fabrice Silva ha scritto: > > Using the diag input argument of leastsq : > > > > from scipy import optimize > > def errfunc(a,X,Y): > > return Y-(a[0]*X+a[1]) > > #b may be the vector containing the error bars sizes. > > weigths = 1./b > > a, success = optimize.leastsq(errfunc, [0,0],args=(X,Y), > diag=weigths) > > > > You here give more importance to points having small error bars. > > Thanks for your advice. I am trying however to use your code, but I am > stuck upon an error. > Here is the script, that reads a very raw data file (see below): > > #!/usr/bin/env python > from scipy import optimize > > def errfunc(a,X,Y): > return Y-(a[0]*X+a[1]) > #b may be the vector containing the error bars sizes. 
> > f=open("data.txt", "r") > datf=f.readlines() > > > numofdatapoints=len(datf)/3 > xval=[0.0]*numofdatapoints > yval=[0.0]*numofdatapoints > err=[0.0]*numofdatapoints > > # print len(datf) > # print datf > > for i in range(numofdatapoints): > xval[i]=float(datf[i]) > yval[i]=float(datf[i+numofdatapoints]) > err[i]=float(datf[i+2*numofdatapoints]) > > weigths=[0.0]*numofdatapoints > for i in range(numofdatapoints): > weigths[i] = 1./err[i] > > w=[0.0]*numofdatapoints > success=[0.0]*numofdatapoints > > w, success = optimize.leastsq(errfunc, [0,0], args=(xval,yval), > diag=weigths) > > print valA > print success > > ------- > data.txt: > 118.877580092022 > 110.450590941286 > 108.684062758621 > 109.314800167624 > 103.090778781767 > 98.5714370869397 > 29.1 > 31.42 > 33.74 > 36.06 > 38.38 > 40.7 > 2.76010170015786 > 3.52143474842509 > 2.45059986418858 > 3.21254530326032 > 2.11363073382134 > 2.14664809861522 > > ------- > and the script dies with the following error: > > massimo at calliope:~/Python/linfit$ python linfit.py > Traceback (most recent call last): > File "linfit.py", line 31, in > w, success = optimize.leastsq(errfunc, [0,0], args=(xval,yval), > diag=weigths) > File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", > line 262, in leastsq > m = check_func(func,x0,args,n)[0] > File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", > line 12, in check_func > res = atleast_1d(apply(thefunc,args)) > File "linfit.py", line 4, in errfunc > return Y-(a[0]*X+a[1]) > ValueError: shape mismatch: objects cannot be broadcast to a single shape > > which baffles me. What should I look for to understand what I am doing > wrong? > > m. > -- > Massimo Sandal > University of Bologna > Department of Biochemistry "G.Moruzzi" > > snail mail: > Via Irnerio 48, 40126 Bologna, Italy > > email: > massimo.sandal at unibo.it > > tel: +39-051-2094388 > fax: +39-051-2094387 > > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: massimo.sandal.vcf > Type: text/x-vcard > Size: 274 bytes > Desc: not available > Url : > http://projects.scipy.org/pipermail/scipy-user/attachments/20080410/62d3445c/attachment.vcf > > ------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > End of SciPy-user Digest, Vol 56, Issue 20 > ****************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emsellem at obs.univ-lyon1.fr Fri Apr 11 10:11:34 2008 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Fri, 11 Apr 2008 16:11:34 +0200 Subject: [SciPy-user] Python(x,y) - Python for Scientists Message-ID: <47FF7196.5000005@obs.univ-lyon1.fr> Hi, very useful! When do you expect to have a Linux version (I am working in openSuse 10.3...:-)) Eric From normandwilliams at gmail.com Fri Apr 11 10:15:57 2008 From: normandwilliams at gmail.com (Normand Williams) Date: Fri, 11 Apr 2008 10:15:57 -0400 Subject: [SciPy-user] Newbie question: Is SciPy supported by Jython Message-ID: <744c355c0804110715r5527ba8xc1b77403490da72e@mail.gmail.com> Hi, I have looked everywhere but cannot seem to get a clear answer to my question therefore I go to the source: Can someone tell me if SciPy is supported by Jython and if not what would be the way to go to have similar sciPy functionalities available for Jython? Thanks a lot for take time to answer this newbie. Normand. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From carlos.s.santos at gmail.com Fri Apr 11 12:20:54 2008 From: carlos.s.santos at gmail.com (Carlos da Silva Santos) Date: Fri, 11 Apr 2008 13:20:54 -0300 Subject: [SciPy-user] Newbie question: Is SciPy supported by Jython In-Reply-To: <744c355c0804110715r5527ba8xc1b77403490da72e@mail.gmail.com> References: <744c355c0804110715r5527ba8xc1b77403490da72e@mail.gmail.com> Message-ID: <1dc6ddb60804110920h14f90f45q2fedbdd603d87278@mail.gmail.com> There is a (very old) port of Numeric to Jython: http://jnumerical.sourceforge.net/ Not exactly what you are asking for, but there are some solutions to use java libraries from python or python modules from java, like: http://jepp.sourceforge.net/ http://sourceforge.net/projects/jpype Maybe you can do the trick with them. I never used those, so I don't know whether they really work. You could also use Javolution or other Java scientific package "to have similar sciPy functionalities" inside Jython. Of course, the API would be different from scipy. HTH, Carlos On 4/11/08, Normand Williams wrote: > Hi, > > I have looked everywhere but cannot seem to get a clear answer to my > question therefore I go to the source: Can someone tell me if SciPy is > supported by Jython and if not what would be the way to go to have similar > sciPy functionalities available for Jython? > > Thanks a lot for take time to answer this newbie. Normand. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From daniel.wheeler2 at gmail.com Fri Apr 11 12:23:03 2008 From: daniel.wheeler2 at gmail.com (Daniel Wheeler) Date: Fri, 11 Apr 2008 12:23:03 -0400 Subject: [SciPy-user] openmp and weave? In-Reply-To: <80b160a0804071123g2c375f01p33898980aa86e277@mail.gmail.com> References: <827183970711071455m29f4869bo8c8312e0eca79a6e@mail.gmail.com> <80b160a0804071051h38901724ndd96654349998c09@mail.gmail.com> <80b160a0804071123g2c375f01p33898980aa86e277@mail.gmail.com> Message-ID: <80b160a0804110923p39871964je467eb85757a9838@mail.gmail.com> Just following up from the email below. I got openmp + weave working on an Altix 64 node machine, but the speedups for a simple matrix multiplication are horrendous . This is using an intel compiler for weave, and using gcc 4.1 for scipy + numpy. I have no idea why the results are so poor. Cheers On Mon, Apr 7, 2008 at 2:23 PM, Daniel Wheeler wrote: > On Wed, Nov 7, 2007 at 6:55 PM, william ratcliff > wrote: > > Has anyone had any luck using weave with openMP? > > Yes. I have been trying a test case and getting reasonable speed ups. > You'll need gcc version 4.3, 4.2 doesn't seem to work. I have recorded > some of the results on the pages below. Be warned, I am inexperienced > in parallel computing so there is a chance of gross errors in my > thinking and coding. > > > > > > The following is the test code for the above results. > > > > Note that there is no need to recompile everything with 4.3, only > weave. Getting 4.3 built was a little involved. The links above > provide some rudimentary instructions. > > The above results are for a machine with 2 and another with 8 nodes. I > have also been evaluating openmp on a 64 node Altix machine. There are > a number of issues with building scipy and numpy that have been > resolved mostly concerning the SGI's scientific library, but I am > currently having issues with getting weave to compile on the Altix. 
> This seems to be due to having all the python stuff compiled with gcc, > > > but using the Intel compiler for weave. > > Cheers > > > > > If so, what did you > > have to do? I've started by updating my compiler to MinGW: > > gcc-4.2.1-dw-2-2 (and similarly for g++), but am running into problems > > with code written in weave that doesn't use any of openmp: > > > > Here is the code: > > > > import numpy as N > > import weave > > from weave import converters > > > > > > > > def blitz_interpolate(x,y): > > > > > > code = """ > > int pts = Ny[0]; > > //#pragma omp parallel for > > for (int i=0; i < pts-1; i++){ > > y(i) = sin( exp( cos( - exp( sin(x(i)) ) ) ) ); > > } > > return_val = 3; > > """ > > extra=["-fopenmp -Lc:/python25/ -lPthreadGC2"] > > extra=[] > > z=weave.inline(code,['x','y'],type_converters=converters.blitz,compiler='gcc') > > print z > > return > > > > > > > > > > > > if __name__=="__main__": > > x=N.arange(1000) > > y=N.zeros(x.shape,'d') > > blitz_interpolate(x,y) > > print x[35], y[35],N.sin(N.exp(N.cos(-N.exp(N.sin(x[35]))))) > > > > > > > > > > > > This works fine with version 3.4.2 of gcc, g++ > > > > Thanks, > > William > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -- > Daniel Wheeler > -- Daniel Wheeler From mforbes at physics.ubc.ca Fri Apr 11 13:03:30 2008 From: mforbes at physics.ubc.ca (Michael McNeil Forbes) Date: Fri, 11 Apr 2008 10:03:30 -0700 Subject: [SciPy-user] BVP In-Reply-To: <80c99e790804110611q2ea46213j509a127f19952c0d@mail.gmail.com> References: <80c99e790804110611q2ea46213j509a127f19952c0d@mail.gmail.com> Message-ID: <19CBCFB1-724E-4203-9EC4-474485DC830D@physics.ubc.ca> Hi Lorenzo, On 11 Apr 2008, at 6:11 AM, lorenzo bolla wrote: > Hi all! > > Is there any package to solve Boundary Value Problems with Scipy? > I'm thinking to something like Matlab's bvp4c. > I found this, by Pauli Virtanen: http://www.elisanet.fi/ptvirtan/ > software/bvp/index.html, which is a wrapper to COLNEW, but it fails > to compile/install. > Here is the installation error: > > ---------------------- > $> python setup.py install > > setup.py:17: UserWarning: > Atlas (http://math-atlas.sourceforge.net/) libraries not found. > Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [atlas]) or by setting > the ATLAS environment variable. > warnings.warn(AtlasNotFoundError.__doc__) > Traceback (most recent call last): > File "setup.py", line 63, in > setup(**configuration(top_path='').todict()) > File "setup.py", line 24, in configuration > info = __import__('bvp/info') > ImportError: No module named bvp/info > > ---------------------- I just fixed this by adding: sys.path.insert(0,'bvp') then changing the line to info = __import__('info') (Does anyone know a better way of allowing the setup file to refer to the info file which is down a path?) After this, running python setup.py build python setup.py install works for me on Mac OS X and Linux. I do not have a windows box. The missing file does get generated (presumably by f2c). What versions of bvp and numpy are you using? (Were you able to build numpy, or did you use a binary?) Michael. > It looks like a problem with bvp/info, so I get rid of that > instruction, but I have a compilation problem: > > ---------------------- > > $ python setup.py install > setup.py:17: UserWarning: > Atlas (http://math-atlas.sourceforge.net/) libraries not found. 
> Directories to search for the libraries can be specified in the > numpy/distutils/site.cfg file (section [atlas]) or by setting > the ATLAS environment variable. > warnings.warn(AtlasNotFoundError.__doc__) > non-existing path in '.': 'lib/colnew.pyf' > non-existing path in '.': 'lib/mus.pyf' > Warning: Assuming default configuration (.\bvp\tests/ > {setup_tests,setup}.py was not found) > Appending bvp.tests configuration to bvp > Ignoring attempt to set 'name' (from 'bvp' to 'bvp.tests') > running install > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands -- > compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands -- > fcompiler options > running build_src > building extension "bvp._colnew" sources > target build\src.win32-2.5\lib\_colnewmodule.c does not exist: > Assuming _colnewmodule.c was generated with "build_src -- > inplace" command. > error: 'lib\\_colnewmodule.c' missing > > ---------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From normandwilliams at gmail.com Fri Apr 11 13:10:38 2008 From: normandwilliams at gmail.com (Normand Williams) Date: Fri, 11 Apr 2008 13:10:38 -0400 Subject: [SciPy-user] Newbie question: Is SciPy supported by Jython In-Reply-To: <1dc6ddb60804110920h14f90f45q2fedbdd603d87278@mail.gmail.com> References: <744c355c0804110715r5527ba8xc1b77403490da72e@mail.gmail.com> <1dc6ddb60804110920h14f90f45q2fedbdd603d87278@mail.gmail.com> Message-ID: <744c355c0804111010p74fdfe14saaea4ffef5b71ac4@mail.gmail.com> Thanks Carlos, this information is very useful. Normand On Fri, Apr 11, 2008 at 12:20 PM, Carlos da Silva Santos < carlos.s.santos at gmail.com> wrote: > There is a (very old) port of Numeric to Jython: > > http://jnumerical.sourceforge.net/ > > Not exactly what you are asking for, but there are some solutions to > use java libraries from python or python modules from java, like: > http://jepp.sourceforge.net/ > http://sourceforge.net/projects/jpype > Maybe you can do the trick with them. I never used those, so I don't > know whether they really work. > > You could also use Javolution or other Java scientific package "to have > similar > sciPy functionalities" inside Jython. Of course, the API would be > different from scipy. > > HTH, > > Carlos > > On 4/11/08, Normand Williams wrote: > > Hi, > > > > I have looked everywhere but cannot seem to get a clear answer to my > > question therefore I go to the source: Can someone tell me if SciPy is > > supported by Jython and if not what would be the way to go to have > similar > > sciPy functionalities available for Jython? > > > > Thanks a lot for take time to answer this newbie. Normand. > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Fri Apr 11 13:23:34 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 11 Apr 2008 12:23:34 -0500 Subject: [SciPy-user] SciPy to be down intermittently today Message-ID: <47FF9E96.4030202@enthought.com> Hey everyone, The scipy.org site will be down intermittently today. 
We are trying to upgrade its memory to improve performance. Thank you, -Travis O. From nwagner at iam.uni-stuttgart.de Fri Apr 11 14:03:24 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 11 Apr 2008 20:03:24 +0200 Subject: [SciPy-user] bvp Message-ID: Hi all, I installed bvp using the mercurial repository hg clone static-http://www.iki.fi/pav/hg/bvp.hg bvp.hg The second example doesn't work for me. Here is the output /usr/bin/python -i ex2.py 1.0 unexpected array size: new_size=1, got array with arr_size=0 Traceback (most recent call last): File "ex2.py", line 61, in ? coarsen_initial_guess_mesh=True) File "/usr/lib/python2.4/site-packages/bvp/colnew.py", line 522, in solve vectorized_guess) _colnew.error: failed in converting 8th argument `fixpnt' of _colnew.colnew to C/Fortran array Cheers, Nils I am using >>> numpy.__version__ '1.0.5.dev5024' >>> scipy.__version__ '0.7.0.dev4125' From pav at iki.fi Fri Apr 11 16:31:27 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 11 Apr 2008 20:31:27 +0000 (UTC) Subject: [SciPy-user] BVP References: <80c99e790804110611q2ea46213j509a127f19952c0d@mail.gmail.com> Message-ID: Hi Lorenzo, Fri, 11 Apr 2008 15:11:35 +0200, lorenzo bolla wrote: > Is there any package to solve Boundary Value Problems with Scipy? I'm > thinking to something like Matlab's bvp4c. I found this, by Pauli > Virtanen: > http://www.elisanet.fi/ptvirtan/software/bvp/index.html, which is a > wrapper to COLNEW, but it fails to compile/install. Here is the > installation error: I released 0.2.2 which should fix these bugs. (If not, I guess it's time for 0.2.3 ...) > $> python setup.py install [clip] > ImportError: No module named bvp/info Fixed. [clip] > building extension "bvp._colnew" > sources > target build\src.win32-2.5\lib\_colnewmodule.c does not > exist: > Assuming _colnewmodule.c was generated with "build_src --inplace" > command. > error: 'lib\\_colnewmodule.c' missing Brown paper bag time. The 0.2.1.tar.gz is missing the .pyf files: somehow they don't get included by distutils when setuptools is imported, even though they are in the sources list. Anyway, this is fixed in 0.2.2. -- Pauli Virtanen From pav at iki.fi Fri Apr 11 17:08:59 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 11 Apr 2008 21:08:59 +0000 (UTC) Subject: [SciPy-user] bvp References: Message-ID: Hi Nils, Fri, 11 Apr 2008 20:03:24 +0200, Nils Wagner wrote: > I installed bvp using the mercurial repository > > hg clone static-http://www.iki.fi/pav/hg/bvp.hg bvp.hg > > The second example doesn't work for me. Here is the output > > /usr/bin/python -i ex2.py > 1.0 > unexpected array size: new_size=1, got array with arr_size=0 > Traceback (most recent call last): > File "ex2.py", line 61, in ? > coarsen_initial_guess_mesh=True) > File "/usr/lib/python2.4/site-packages/bvp/colnew.py", > line 522, in solve > vectorized_guess) > _colnew.error: failed in converting 8th argument `fixpnt' of > _colnew.colnew to C/Fortran array Should also be fixed in 0.2.2 and current bvp.hg. (And yep, it was also caught by automated tests.) The cause is that apperently something changed in f2py between numpy 1.0.4 and 1.0.5: in colnew.pyf I have integer, dimension(11), intent(in) :: ipar double precision, dimension(ipar[10]), intent(in) :: fixpnt However, f2py bugs out with the "failed in converting" if ipar[10] == 0 and fixpnt.size == 0, which I don't think it did in 1.0.3 or 1.0.4. 
If fixed this by making fixpnt a shape = (1,) array even for ipar[10] == 0, and it appears to work on numpy 1.0.2, 1.0.4, 1.0.5, even though I don't know whether it should. -- Pauli Virtanen From mwojc at p.lodz.pl Fri Apr 11 16:56:51 2008 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Fri, 11 Apr 2008 22:56:51 +0200 Subject: [SciPy-user] scipy.io.read_array exception on Windows (bug?) In-Reply-To: References: Message-ID: <200804112256.51392.mwojc@p.lodz.pl> Hi! On Windows, Python 2.5 and scipy-0.6.0 when i call data = read_array( file, **kwargs ) and file argument is the fileobject the following exception occurs: Traceback (most recent call last): ? File "C:\Python25\Lib\site-packages\ffnet\ffnet.py", line 851, in readdata ? ? data = read_array( file, **kwargs ) ? File "C:\Python25\Lib\site-packages\scipy\io\array_import.py", line 364, in read_array ? ? ascii_object = ascii_stream(fileobject, lines=lines, comment=comment, linesep=linesep) ? File "C:\Python25\Lib\site-packages\scipy\io\array_import.py", line 141, in __init__ ? ? self.file = get_open_file(fileobject, mode='r') ? File "C:\Python25\Lib\site-packages\scipy\io\array_import.py", line 97, in get_open_file ? ? fileobject = os.path.expanduser(fileobject) ? File "C:\Python25\lib\ntpath.py", line 350, in expanduser ? ? if path[:1] != '~': TypeError: 'file' object is unsubscriptable However if i use a filename string as the argument everything works fine. Is this a known problem? I suppose both fileobject and filename arguments should work (as for example on my Linux, Python 2.4 and scipy-0.6.0) Greetings -- Marek Wojciechowski From robert.kern at gmail.com Fri Apr 11 17:59:39 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 11 Apr 2008 16:59:39 -0500 Subject: [SciPy-user] scipy.io.read_array exception on Windows (bug?) In-Reply-To: <200804112256.51392.mwojc@p.lodz.pl> References: <200804112256.51392.mwojc@p.lodz.pl> Message-ID: <3d375d730804111459u560f581alce847df85e87610d@mail.gmail.com> On Fri, Apr 11, 2008 at 3:56 PM, Marek Wojciechowski wrote: > Hi! > > On Windows, Python 2.5 and scipy-0.6.0 when i call > data = read_array( file, **kwargs ) > and file argument is the fileobject the following exception occurs: > > Traceback (most recent call last): > File "C:\Python25\Lib\site-packages\ffnet\ffnet.py", line 851, in readdata > data = read_array( file, **kwargs ) > File "C:\Python25\Lib\site-packages\scipy\io\array_import.py", line 364, in > read_array > ascii_object = ascii_stream(fileobject, lines=lines, comment=comment, > linesep=linesep) > File "C:\Python25\Lib\site-packages\scipy\io\array_import.py", line 141, in > __init__ > self.file = get_open_file(fileobject, mode='r') > File "C:\Python25\Lib\site-packages\scipy\io\array_import.py", line 97, in > get_open_file > fileobject = os.path.expanduser(fileobject) > File "C:\Python25\lib\ntpath.py", line 350, in expanduser > if path[:1] != '~': > TypeError: 'file' object is unsubscriptable > > However if i use a filename string as the argument everything works fine. Is > this a known problem? It is now! We do catch AttributeError, but it appears that a TypeError is also possible, at least in some versions of Python. I have fixed this in r4133. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From lbolla at gmail.com Fri Apr 11 18:48:36 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Sat, 12 Apr 2008 00:48:36 +0200
Subject: [SciPy-user] BVP
In-Reply-To:
References: <80c99e790804110611q2ea46213j509a127f19952c0d@mail.gmail.com>
Message-ID: <80c99e790804111548m5fd65d5r4c617b2bb495e45@mail.gmail.com>

Thank you, Pauli! I'll give it a try on Monday!
L.

On Fri, Apr 11, 2008 at 10:31 PM, Pauli Virtanen wrote:
> Hi Lorenzo,
>
> Fri, 11 Apr 2008 15:11:35 +0200, lorenzo bolla wrote:
> > Is there any package to solve Boundary Value Problems with Scipy? I'm
> > thinking to something like Matlab's bvp4c. I found this, by Pauli
> > Virtanen:
> > http://www.elisanet.fi/ptvirtan/software/bvp/index.html, which is a
> > wrapper to COLNEW, but it fails to compile/install. Here is the
> > installation error:
>
> I released 0.2.2 which should fix these bugs. (If not, I guess it's time
> for 0.2.3 ...)
>
> > $> python setup.py install
> [clip]
> > ImportError: No module named bvp/info
>
> Fixed.
>
> [clip]
> > building extension "bvp._colnew"
> > sources
> > target build\src.win32-2.5\lib\_colnewmodule.c does not
> > exist:
> > Assuming _colnewmodule.c was generated with "build_src --inplace"
> > command.
> > error: 'lib\\_colnewmodule.c' missing
>
> Brown paper bag time. The 0.2.1.tar.gz is missing the .pyf files: somehow
> they don't get included by distutils when setuptools is imported, even
> though they are in the sources list. Anyway, this is fixed in 0.2.2.
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-- Lorenzo Bolla
lbolla at gmail.com
http://lorenzobolla.emurse.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From contact at pythonxy.com Sat Apr 12 16:39:20 2008
From: contact at pythonxy.com (Python(x,y))
Date: Sat, 12 Apr 2008 22:39:20 +0200
Subject: [SciPy-user] Python(x,y) - Mailing list
Message-ID: <48011DF8.80201@pythonxy.com>

Hi all,

One more e-mail on Python(x,y), the Python/Eclipse distribution for scientists: if you are interested in this project, you can subscribe to the Python(x,y) users mailing list (http://www.pythonxy.com).

By the way, thank you all for your comments and suggestions regarding Python(x,y). Here are some answers to the most frequently asked questions:
- Linux version: it is really too soon to tell when it will be ready;
- MacOS version: at the moment, no action is taken to implement it (except if we find someone who would want to contribute to Python(x,y) and do it...);
- mailing list? done!
- consider preparing a new release for Numpy 1.0.5: it won't be a problem, because updating Python(x,y) is a matter of a few minutes (it will certainly take us more time to upload it than to update it);
- Mayavi 2: it could be included in the next version of Python(x,y).

Best regards,
PR

-- P. Raybaut
Python(x,y)
http://www.pythonxy.com

From gonzalezmancera+scipy at gmail.com Sat Apr 12 22:00:28 2008
From: gonzalezmancera+scipy at gmail.com (Andres Gonzalez-Mancera)
Date: Sat, 12 Apr 2008 21:00:28 -0500
Subject: [SciPy-user] Building Scipy with ifort in Mac os X 10.5.2
Message-ID:

Hi all,

I'm trying to build Scipy with ifort on a Core 2 Duo iMac running 10.5.2 and the latest version of the developer tools. I'm using Python 2.5.2 from python.org. On the same system I'm able to build and run Scipy with gfortran from the AT&T Research web site without any problems.
I do not have the intel C compiler so I'm using gcc for this purpose. I'm using the 32 bit version of ifort 10.1.014 with 10.0.2.018 version of the intel MKL. I haven't been able to find concrete building instructions so I'm following the instructions in scipy.org except I'm using: python setup.py build_src build_clib --fcompiler=intel build_ext --fcompiler=intel build to build Scipy. The build process fails with the following message: creating build/lib.macosx-10.3-i386-2.5/scipy/fftpack /opt/intel/fc/10.1.014/bin/ifort -shared -shared -nofor_main build/temp.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/drfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zrfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/fortranobject.o -L/usr/local/lib -Lbuild/temp.macosx-10.3-i386-2.5 -ldfftpack -lfftw3 -o build/lib.macosx-10.3-i386-2.5/scipy/fftpack/_fftpack.so ifort: command line warning #10156: ignoring option '-s'; no argument required ifort: command line warning #10156: ignoring option '-s'; no argument required Undefined symbols: "_PyExc_AttributeError", referenced from: _PyExc_AttributeError$non_lazy_ptr in fortranobject.o "_PyObject_Str", referenced from: _array_from_pyobj in fortranobject.o "_PyArg_ParseTupleAndKeywords", referenced from: _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfft_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfftnd_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_drfft_cache in _fftpackmodule.o "_PyExc_ValueError", referenced from: _PyExc_ValueError$non_lazy_ptr in fortranobject.o "_PyExc_TypeError", referenced from: _PyExc_TypeError$non_lazy_ptr in fortranobject.o "_PyDict_GetItemString", referenced from: _fortran_getattr in fortranobject.o "_PyCObject_AsVoidPtr", referenced from: _init_fftpack in _fftpackmodule.o "_Py_BuildValue", referenced from: _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfft_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfftnd_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_drfft_cache in _fftpackmodule.o "_PyComplex_Type", referenced from: _PyComplex_Type$non_lazy_ptr in _fftpackmodule.o "_PyDict_New", referenced from: _PyFortranObject_NewAsAttr in fortranobject.o _fortran_setattr in fortranobject.o _PyFortranObject_New in fortranobject.o _PyFortranObject_New in fortranobject.o "_MAIN__", referenced from: _main in libifcore.a(for_main.o) "_PyDict_SetItemString", referenced from: _init_fftpack in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _F2PyDict_SetItemString in fortranobject.o _fortran_getattr in fortranobject.o _fortran_getattr in fortranobject.o _fortran_setattr in fortranobject.o _PyFortranObject_New in fortranobject.o "_PyType_Type", referenced from: _PyType_Type$non_lazy_ptr in _fftpackmodule.o "__PyObject_New", referenced from: _PyFortranObject_NewAsAttr in fortranobject.o _PyFortranObject_New in fortranobject.o _PyFortranObject_New in fortranobject.o "_PyInt_Type", referenced from: _PyInt_Type$non_lazy_ptr in 
_fftpackmodule.o "_PyString_FromString", referenced from: _init_fftpack in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _fortran_getattr in fortranobject.o _fortran_getattr in fortranobject.o "_PyErr_Occurred", referenced from: _int_from_pyobj in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_zfftnd in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfft_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_zfftnd_cache in _fftpackmodule.o _f2py_rout__fftpack_destroy_drfft_cache in _fftpackmodule.o _init_fftpack in _fftpackmodule.o _F2PyDict_SetItemString in fortranobject.o "_PyErr_NewException", referenced from: _init_fftpack in _fftpackmodule.o "_PyImport_ImportModule", referenced from: _init_fftpack in _fftpackmodule.o "_PyMem_Free", referenced from: _fortran_dealloc in fortranobject.o _fortran_dealloc in fortranobject.o "_PyCObject_Type", referenced from: _PyCObject_Type$non_lazy_ptr in _fftpackmodule.o "_PyExc_ImportError", referenced from: _PyExc_ImportError$non_lazy_ptr in _fftpackmodule.o "_PyErr_Format", referenced from: _init_fftpack in _fftpackmodule.o _fortran_call in fortranobject.o _fortran_call in fortranobject.o "_PyNumber_Int", referenced from: _int_from_pyobj in _fftpackmodule.o "_PyCObject_FromVoidPtr", referenced from: _fortran_getattr in fortranobject.o "_PyObject_GetAttrString", referenced from: _int_from_pyobj in _fftpackmodule.o _init_fftpack in _fftpackmodule.o "_PyErr_Print", referenced from: _init_fftpack in _fftpackmodule.o _F2PyDict_SetItemString in fortranobject.o "_PyString_Type", referenced from: _PyString_Type$non_lazy_ptr in _fftpackmodule.o "__Py_NoneStruct", referenced from: __Py_NoneStruct$non_lazy_ptr in _fftpackmodule.o __Py_NoneStruct$non_lazy_ptr in fortranobject.o "_Py_FindMethod", referenced from: _fortran_getattr in fortranobject.o "_PyString_ConcatAndDel", referenced from: _fortran_getattr in fortranobject.o "_PyErr_Clear", referenced from: _int_from_pyobj in _fftpackmodule.o _F2PyDict_SetItemString in fortranobject.o "_Py_InitModule4", referenced from: _init_fftpack in _fftpackmodule.o "_PyModule_GetDict", referenced from: _init_fftpack in _fftpackmodule.o "_PyExc_RuntimeError", referenced from: _PyExc_RuntimeError$non_lazy_ptr in _fftpackmodule.o _PyExc_RuntimeError$non_lazy_ptr in fortranobject.o "_PyDict_DelItemString", referenced from: _fortran_setattr in fortranobject.o "_PyObject_Type", referenced from: _array_from_pyobj in fortranobject.o "_PySequence_Check", referenced from: _int_from_pyobj in _fftpackmodule.o "_PyString_AsString", referenced from: _array_from_pyobj in fortranobject.o "_PySequence_GetItem", referenced from: _int_from_pyobj in _fftpackmodule.o "_PyErr_SetString", referenced from: _int_from_pyobj in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_zfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_drfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o _f2py_rout__fftpack_zrfft in _fftpackmodule.o 
_f2py_rout__fftpack_zfftnd in _fftpackmodule.o
_f2py_rout__fftpack_zfftnd in _fftpackmodule.o
_f2py_rout__fftpack_zfftnd in _fftpackmodule.o
_f2py_rout__fftpack_zfftnd in _fftpackmodule.o
_f2py_rout__fftpack_zfftnd in _fftpackmodule.o
_init_fftpack in _fftpackmodule.o
_array_from_pyobj in fortranobject.o
_fortran_setattr in fortranobject.o
_fortran_setattr in fortranobject.o
"_PyType_IsSubtype", referenced from:
_int_from_pyobj in _fftpackmodule.o
_int_from_pyobj in _fftpackmodule.o
_int_from_pyobj in _fftpackmodule.o
_array_from_pyobj in fortranobject.o
ld: symbol(s) not found

[the two ifort warnings and the identical undefined-symbol list are printed a second time as distutils reports the failing command]

error: Command "/opt/intel/fc/10.1.014/bin/ifort -shared -shared -nofor_main build/temp.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/scipy/fftpack/_fftpackmodule.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/drfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zrfft.o build/temp.macosx-10.3-i386-2.5/scipy/fftpack/src/zfftnd.o build/temp.macosx-10.3-i386-2.5/build/src.macosx-10.3-i386-2.5/fortranobject.o -L/usr/local/lib -Lbuild/temp.macosx-10.3-i386-2.5 -ldfftpack -lfftw3 -o build/lib.macosx-10.3-i386-2.5/scipy/fftpack/_fftpack.so" failed with exit status 1

I appreciate any advice. Thanks,

Andrés

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jbednar at inf.ed.ac.uk Sun Apr 13 03:22:33 2008
From: jbednar at inf.ed.ac.uk (James A. Bednar)
Date: Sun, 13 Apr 2008 08:22:33 +0100
Subject: [SciPy-user] How to draw a 3D graphic of a function?
In-Reply-To: <20080411130203.GD1942@phare.normalesup.org>
References: <20080411130203.GD1942@phare.normalesup.org>
Message-ID: <18433.46265.689213.106061@lodestar.inf.ed.ac.uk>

| Date: Fri, 11 Apr 2008 15:02:03 +0200
| From: Gael Varoquaux
|
| On Fri, Apr 11, 2008 at 08:57:50PM +0800, zhang chi wrote:
| > I want to draw a matrix of 100 X 100, its elements are the values of a function.
|
| I suppose you want to map the value of your matrix to the altitude of a
| surface?
|
| You can do this with Mayavi2. Have a look at the user guide,

As shown below, you can also do this with matplotlib, which more people will probably have installed. There was a suggestion that I add this to the matplotlib cookbook, but I still haven't gotten a chance to do so...

Jim

| Date: Tue, 02 Oct 2007 04:56:56 -0400
| From: Joe Harrington
|
| Or, you could just do it with matplotlib...
|
| http://physicsmajor.wordpress.com/2007/04/22/3d-surface-with-matplotlib/
|
| This was the first hit on a google search for "matplotlib surface". I
| tested it and it works in 0.90.1.

Interesting! I couldn't find any documentation at all, but after some hacking on that script I was able to make matplotlib 0.90.1 plot a wireframe surface for a 2D numpy array, so I thought it could be useful to include the code (below).

Note that the original example uses plot_surface instead of plot_wireframe, but I've found plot_surface to be quite buggy, with plots disappearing entirely much of the time, while plot_wireframe has been reliable so far. There is also contour3D, though that doesn't look very useful yet. Hopefully these 3D plots will all be polished up a bit and made public in a new matplotlib release soon!

Jim
_______________________________________________________________________________

import pylab
from numpy import outer,arange,cos,sin,ones,zeros,array
from matplotlib import axes3d

def matrixplot3d(mat,title=None):
    fig = pylab.figure()
    ax = axes3d.Axes3D(fig)

    # Construct matrices for r and c values
    rn,cn = mat.shape
    c = outer(ones(rn),arange(cn*1.0))
    r = outer(arange(rn*1.0),ones(cn))

    ax.plot_wireframe(r,c,mat)

    ax.set_xlabel('R')
    ax.set_ylabel('C')
    ax.set_zlabel('Value')

    # Note: 'windowtitle' is a helper from the author's own environment,
    # not part of pylab; omit the title argument if you don't have one.
    if title: windowtitle(title)
    pylab.show()

matrixplot3d(array([[0.1,0.5,0.9],[0.2,0.1,0.0]]))

-- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
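As a quick usage example, here is how the helper above applies to the original 100x100 question -- a sketch assuming the matrixplot3d definition from the previous message:

    from numpy import arange, sin, cos, outer

    x = arange(0, 10, 0.1)       # 100 sample points
    z = outer(sin(x), cos(x))    # 100x100 array of values z[r,c] = sin(x[r])*cos(x[c])
    matrixplot3d(z)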
From spmcinerney at hotmail.com Sun Apr 13 03:54:01 2008
From: spmcinerney at hotmail.com (Stephen McInerney)
Date: Sun, 13 Apr 2008 00:54:01 -0700
Subject: [SciPy-user] Python(x,y) scientific distribution
In-Reply-To:
References:
Message-ID:

Dear PR,

Sounds great.

As to your platform support: is this primarily Linux and MacOS (no Windows?)

This is largely similar to what Enthought do, but primarily for Windows. Have you talked to them?
http://www.enthought.com/products/epd.php
Also your license is EPL and I think theirs is "EPD". I think you are duplicating their effort.

In any case, can you continue to post major release announcements on the scipy-user list?

Best regards,
Stephen

> From: "Python(x,y)"
> Subject: [SciPy-user] Python(x,y) - Mailing list
> To: scipy-user at scipy.org
> Message-ID: <48011DF8.80201 at pythonxy.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi all,
>
> One more e-mail on Python(x,y), the Python/Eclipse distribution for
> scientists: if you are interested in this project, you can subscribe to the
> Python(x,y) users mailing list (http://www.pythonxy.com).
>
> By the way, thank you all for your comments and suggestions regarding
> Python(x,y). Here are some answers to the most frequently asked questions:
> - Linux version: it is really too soon to tell when it will be ready;
> - MacOS version: at the moment, no action is taken to implement it
> (except if we find someone who would want to contribute to Python(x,y)
> and do it...);
> - mailing list? done!
> - consider preparing a new release for Numpy 1.0.5: it won't be a
> problem, because updating Python(x,y) is a matter of a few minutes (it
> will certainly take us more time to upload it than to update it);
> - Mayavi 2: it could be included in the next version of Python(x,y).
>
> Best regards,
> PR
>
> --
> P. Raybaut
> Python(x,y)
> http://www.pythonxy.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From contact at pythonxy.com Sun Apr 13 05:02:00 2008
From: contact at pythonxy.com (Python(x,y))
Date: Sun, 13 Apr 2008 11:02:00 +0200
Subject: [SciPy-user] Python(x,y) scientific distribution
In-Reply-To:
References:
Message-ID: <4801CC08.6030502@pythonxy.com>

Dear Stephen,

Sorry if my last e-mail was confusing, but the only platforms supported by Python(x,y) at the moment are Windows XP/2003/Vista (the Linux version is being developed, and MacOS is not implemented).

I am familiar with the Enthought distribution, which includes a lot of interesting packages. Python(x,y) is quite a different approach (we are not pretending to have a better approach, just a different one):
- it is a Python / Eclipse / Qt distribution: its long-term goal is to provide a free alternative to commercial scientific "languages" (like MATLAB or IDL) with a complete development environment;
- the number of packages distributed has been intentionally kept as low as possible;
- and last but not least: Python(x,y) can be freely downloaded and used by anyone, without any restriction.

And, yes, of course, I can continue to post major release announcements on the scipy-user list. Thanks for your support.

Best regards,
PR

Stephen McInerney wrote:
> Dear PR,
>
> Sounds great.
>
> As to your platform support: is this primarily Linux and MacOS (no
> Windows?)
>
> This is largely similar to what Enthought do, but primarily for Windows.
> Have you talked to them?
> http://www.enthought.com/products/epd.php
> Also your license is EPL and I think theirs is "EPD".
> I think you are duplicating their effort.
>
> In any case, can you continue to post major release announcements on
> the scipy-user list?
>
> Best regards,
> Stephen
>
> > From: "Python(x,y)"
> > Subject: [SciPy-user] Python(x,y) - Mailing list
> > To: scipy-user at scipy.org
> > Message-ID: <48011DF8.80201 at pythonxy.com>
> >
> > Hi all,
> > [clip]
> > --
> > P. Raybaut
> > Python(x,y)
> > http://www.pythonxy.com

From robert.kern at gmail.com Sun Apr 13 05:15:48 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 13 Apr 2008 04:15:48 -0500
Subject: [SciPy-user] Building Scipy with ifort in Mac os X 10.5.2
In-Reply-To:
References:
Message-ID: <3d375d730804130215h5dfa1e80u2ed2d07d58df29ac@mail.gmail.com>

On Sat, Apr 12, 2008 at 9:00 PM, Andres Gonzalez-Mancera wrote:
> Hi all,
>
> I'm trying to build Scipy with ifort on a Core 2 Duo iMac running 10.5.2 and
> the latest version of the developer tools. I'm using Python 2.5.2 from
> python.org. On the same system I'm able to build and run Scipy with gfortran
> from the AT&T Research web site without any problems. I do not have the Intel
> C compiler so I'm using gcc for this purpose.
>
> I'm using the 32 bit version of ifort 10.1.014 with the 10.0.2.018 version of
> the Intel MKL. I haven't been able to find concrete building instructions so
> I'm following the instructions on scipy.org except I'm using:
>
> python setup.py build_src build_clib --fcompiler=intel build_ext
> --fcompiler=intel build
>
> to build Scipy. The build process fails with the following message:

This is a known (to me, at any rate) deficiency. We simply haven't implemented the correct link flags for the Intel Fortran compiler on OS X. I have occasionally taken a stab at the problem, but I haven't found a satisfactory solution. As far as I can tell, the Intel Fortran compiler is only capable of building dynamic libraries (.dylib files), not the bundles (.so files) which are the required format for Python extension modules. If you come up with a set of link flags that will create a Python extension module, please let us know.
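For anyone who wants to experiment: the flags live in numpy.distutils' Intel compiler class, so a local override is the easiest place to try candidates. Something along these lines (an untested sketch -- the whole open question is which flags, if any, make ifort emit a bundle; '-bundle -undefined dynamic_lookup' is what gcc uses for extension modules on OS X):

    # hypothetical local edit to numpy/distutils/fcompiler/intel.py
    def get_flags_linker_so(self):
        # Try to pass the Darwin bundle flags through ifort to the
        # system linker; ifort may still insist on producing a .dylib.
        return ['-nofor_main', '-Wl,-bundle', '-Wl,-undefined,dynamic_lookup']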
-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From tom.denniston at alum.dartmouth.org Sun Apr 13 22:24:41 2008
From: tom.denniston at alum.dartmouth.org (Tom Denniston)
Date: Sun, 13 Apr 2008 21:24:41 -0500
Subject: [SciPy-user] interpolate.interp2d
Message-ID:

Am I misunderstanding the scipy.interpolate.interp2d docs? It says that anything outside the domain returns nan by default. When I try (5.0, 5.0), clearly outside of the domain, it returns the value of the nearest point, which is 4.0.

>>> interpolate.interp2d([1,1,2,2.0], [1,2,1,2.0], [1,2,3,4.0])(5.0,5.0)
array([ 4.])

--Tom

From vincefn at users.sourceforge.net Mon Apr 14 06:30:41 2008
From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin)
Date: Mon, 14 Apr 2008 12:30:41 +0200
Subject: [SciPy-user] [Matplotlib-users] ssh options
In-Reply-To: <529E0C005F46104BA9DB3CB93F397975016162B1@TOKYO.intra.cea.fr>
References: <16646647.post@talk.nabble.com> <529E0C005F46104BA9DB3CB93F397975016162B1@TOKYO.intra.cea.fr>
Message-ID: <200804141230.42405.vincefn@users.sourceforge.net>

On Monday 14 April 2008, sa6113 wrote:
> No one to help me ??

Wrong mailing list, I guess... This is hardly related to matplotlib. Try googling "python-ssh"?

Vincent

From lanceboyle at qwest.net Mon Apr 14 06:33:34 2008
From: lanceboyle at qwest.net (Jerry)
Date: Mon, 14 Apr 2008 03:33:34 -0700
Subject: [SciPy-user] How to draw a 3D graphic of a function?
In-Reply-To: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com>
References: <90c482ab0804110557s246f5fb9gca06d6e7381ab8b7@mail.gmail.com>
Message-ID:

On Apr 11, 2008, at 5:57 AM, zhang chi wrote:
> hi
> I want to draw a matrix of 100 X 100, its elements are the
> values of a function.
>
> Is there a package to draw the graphic in scipy?

The PLplot library http://plplot.sourceforge.net/ is extremely good and easy to use. It is also typically in the 98th or 99th percentile in downloads on SourceForge, and the developers are very active with adding new bindings and features. (Don't be put off by the rather dated-looking example plots.)

Jerry

From discerptor at gmail.com Mon Apr 14 13:27:52 2008
From: discerptor at gmail.com (Joshua Lippai)
Date: Mon, 14 Apr 2008 10:27:52 -0700
Subject: [SciPy-user] How to draw a 3D graphic of a function?
In-Reply-To: <18433.46265.689213.106061@lodestar.inf.ed.ac.uk>
References: <20080411130203.GD1942@phare.normalesup.org> <18433.46265.689213.106061@lodestar.inf.ed.ac.uk>
Message-ID: <9911419a0804141027h2df32fd4u4e22bafd5aff6b11@mail.gmail.com>

I can't import matplotlib.axes3d using a 0.98pre SVN build.
Here's my output:

In [3]: from matplotlib import axes3d
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)

/Users/Josh/ in ()

/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/axes3d.py in ()
     14 from axes import Axes
     15 import cbook
---> 16 from transforms import unit_bbox
     17
     18 import numpy as npy

ImportError: cannot import name unit_bbox

On Sun, Apr 13, 2008 at 12:22 AM, James A. Bednar wrote:
> | Date: Fri, 11 Apr 2008 15:02:03 +0200
> | From: Gael Varoquaux
> |
> | On Fri, Apr 11, 2008 at 08:57:50PM +0800, zhang chi wrote:
> | > I want to draw a matrix of 100 X 100, its elements are the values of a function.
> |
> | I suppose you want to map the value of your matrix to the altitude of a
> | surface?
> |
> | You can do this with Mayavi2. Have a look at the user guide,
>
> As shown below, you can also do this with matplotlib, which more
> people will probably have installed. There was a suggestion that I add
> this to the matplotlib cookbook, but I still haven't gotten a chance
> to do so...
>
> Jim
>
> | Date: Tue, 02 Oct 2007 04:56:56 -0400
> | From: Joe Harrington
> |
> | Or, you could just do it with matplotlib...
> |
> | http://physicsmajor.wordpress.com/2007/04/22/3d-surface-with-matplotlib/
> |
> | This was the first hit on a google search for "matplotlib surface". I
> | tested it and it works in 0.90.1.
>
> Interesting! I couldn't find any documentation at all, but after some
> hacking on that script I was able to make matplotlib 0.90.1 plot a
> wireframe surface for a 2D numpy array, so I thought it could be
> useful to include the code (below).
>
> Note that the original example uses plot_surface instead of
> plot_wireframe, but I've found plot_surface to be quite buggy, with
> plots disappearing entirely much of the time, while plot_wireframe has
> been reliable so far. There is also contour3D, though that doesn't
> look very useful yet. Hopefully these 3D plots will all be polished
> up a bit and made public in a new matplotlib release soon!
>
> Jim
> _______________________________________________________________________________
>
> import pylab
> from numpy import outer,arange,cos,sin,ones,zeros,array
> from matplotlib import axes3d
>
> def matrixplot3d(mat,title=None):
>     fig = pylab.figure()
>     ax = axes3d.Axes3D(fig)
>
>     # Construct matrices for r and c values
>     rn,cn = mat.shape
>     c = outer(ones(rn),arange(cn*1.0))
>     r = outer(arange(rn*1.0),ones(cn))
>
>     ax.plot_wireframe(r,c,mat)
>
>     ax.set_xlabel('R')
>     ax.set_ylabel('C')
>     ax.set_zlabel('Value')
>
>     if title: windowtitle(title)
>     pylab.show()
>
> matrixplot3d(array([[0.1,0.5,0.9],[0.2,0.1,0.0]]))
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From abhinav.sarkar at gmail.com Mon Apr 14 16:20:20 2008
From: abhinav.sarkar at gmail.com (abhinav sarkar)
Date: Tue, 15 Apr 2008 01:50:20 +0530
Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices
In-Reply-To:
References:
Message-ID:

Hi

I am trying to solve a generalized eigenvalue problem for sparse matrices. The problem is of the form

A*x = σ*M*x

where A and M are sparse matrices, x is a vector, and sigma is a scalar whose value is to be found.
For this I am using the Arpack function ARPACK_gen_eigs provided in the module scipy.sparse.linalg.eigen.arpack.speigs. However, the solution which I am getting does not match the solution obtained from the eig function in MATLAB. For example:

A = [1 0 0 -33 16 0; 0 1 0 16 -33 16; 0 0 1 0 16 -33; 1601 -96 256 -1800 0 0; -96 1601 -96 0 -1800 0; 256 -96 1601 0 0 -1800]
M = [0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1; -33 16 0 0 0 0; 16 -33 16 0 0 0; 0 16 -33 0 0 0]

The matrices are written in MATLAB format, spaces separating the row elements and ";" separating the rows. When I solve this problem in MATLAB using the function eig, I get the following eigenvalues:

-155.0345, -56.9898, -45.2209, -31.9376, -28.5367, -9.1611

However, when I solve it using ARPACK_gen_eigs to find the two eigenvalues near 10, I get the following solution:

13.83675153-1.71439075j, 13.83675153+1.71439075j

which is certainly not correct. I am using the following code to solve the problem:

from scipy import sparse
from scipy.sparse.linalg import dsolve
from scipy.sparse.linalg import eigen
from scipy.sparse.linalg.eigen.arpack.speigs import ARPACK_gen_eigs

n = 3
h = 1.0/(n+1)

a = 1
Pr = 7.0
Ra = 1000

A = get_A_mat(n, h, a, Pr, Ra)
M = get_M_mat(n, h, a, Pr, Ra)

sigma = 10.0
B = A - sigma*M
s = dsolve.splu(B)
e, v = ARPACK_gen_eigs(M.matvec, s.solve, 2*n, sigma, 2, 'LR')
print e

which also seems correct to me. Please tell me whether the method I am using is correct, and why I am not getting the correct solutions. Is ARPACK_gen_eigs broken? Or is there a problem in my code?

Regards
-- Abhinav Sarkar
4th year Undergraduate Student
Deptt. of Mechanical Engg.
Indian Institute of Technology, Kharagpur India

Web: http://claimid.com/abhin4v
Twitter: http://twitter.com/abhin4v
---------
The world is a book, those who do not travel read only one page.

From turian at gmail.com Mon Apr 14 17:45:25 2008
From: turian at gmail.com (Joseph Turian)
Date: Mon, 14 Apr 2008 17:45:25 -0400
Subject: [SciPy-user] __eq__ for scipy.sparse not working?
Message-ID: <4dacb2560804141445l5966a597g9f0327a1de220dbc@mail.gmail.com>

I am having trouble determining equality of sparse matrices. Consider this code snippet. Although the sparse matrices appear to be equal, z==w returns False (until I convert the matrices to dense matrices). What is the problem with the equality test here?

Thanks,
Joseph

=================
import scipy.sparse
import numpy

x = scipy.sparse.csc_matrix((500,3))
x[(10, 1)] = 1
x[(20, 2)] = 2

y = numpy.asarray([[1., 2], [3, 4], [2, 1]])
z = x.dot(y)
assert(z.shape == (500,2))

w = scipy.sparse.csc_matrix((500,2))
w[(10, 0)] = 3.
w[(20, 0)] = 4
w[(10, 1)] = 4
w[(20, 1)] = 2

assert(z.shape == w.shape)
assert(type(z) == type(w))
assert(z.dtype == w.dtype)

print z
print w
print "z:", z
print "w:", w
print "type(z) == type(w):", type(z) == type(w)
print "dtype(z) == dtype(w):", z.dtype == w.dtype
print cmp(z, w)
print "Sparse z and w are equal:", z == w
z = z.todense()
w = w.todense()
print "Dense z and w are equal:", (z == w).all() == True

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcotuckner at public-files.de Mon Apr 14 17:56:37 2008
From: marcotuckner at public-files.de (Marco Tuckner)
Date: Mon, 14 Apr 2008 21:56:37 +0000 (UTC)
Subject: [SciPy-user] creating timeseries for non-conventional custom frequencies
References:
Message-ID:

Hello Pierre, Matt and others!

The thing you suggested worked and gave the result that I wanted to achieve.
The crucial thing was -- as Pierre wrote -- the filling of the missing dates:

timeseries.fill_missing_dates(series)

But now I have kind of 'two different' masks:
(1) One mask that I created when importing the data or creating the masked array. This is used to mask all data values that are physically implausible or invalid.
(2) Another mask that I just created with fill_missing_dates to get the missing dates filled.

You'd say that this is fine. I now want to continue masking invalid data with filters (e.g. discard x lower than 5 AND higher than 100), and many more filters in between. In the end I would like to count all the masked data points to get a feeling for the performance of my logging device, or of the measurement process as a whole. When I now count all masked values, the result would include those data points masked in stage (2). This would significantly reduce the accuracy of my data recovery ratio: number of valid data points / number of expected data points. Any suggestion how I can get around this?

BTW, is there a more efficient way to get properties of the masked array, like the number of masked and not-masked values? I tried this:

# return the number of masked values
number_of_masked_values = sum(filled.mask)
# return the number of False (not masked) values in the mask
number_of_valid_values = filled.mask.size - sum(filled.mask)

Greetings,
Marco

From pgmdevlist at gmail.com Mon Apr 14 18:12:58 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 14 Apr 2008 18:12:58 -0400
Subject: [SciPy-user] creating timeseries for non-conventional custom frequencies
In-Reply-To:
References:
Message-ID: <200804141813.00098.pgmdevlist@gmail.com>

Marco,

> (1) One mask that I created when importing the data or creating the masked
> array. This is used to mask all data values that are physically implausible or
> invalid.
> (2) Another mask that I just created with fill_missing_dates to get the
> missing dates filled.

Just count the number of unmasked data with series.count(), and store it into a count_ini variable. Then, keep on applying your filters, counting the number of unmasked data each time. You can then compare these new counts to count_ini (the original one).

> BTW, is there a more efficient way to get properties of the masked array
> like the number of masked and not-masked values?

If you look at the source code for the count method (in numpy.ma), you'll see that the result of count is only the difference between the size along the given axis and the sum of the mask along the same axis:

ma.count(s, axis) = numpy.size(s._data, axis) - numpy.sum(s._mask, axis)

So, the nb of "valid" values is given by series.count(axis), the nb of "invalid" values by series._mask.sum(axis), and the total nb of data by numpy.size(s, axis) or simply series.shape[axis]. If you only have 1D data, that's even faster:

nb of valid: series.count()
nb of invalid: series._mask.sum()
nb of data: series.size

From wnbell at gmail.com Mon Apr 14 19:57:29 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 14 Apr 2008 18:57:29 -0500
Subject: [SciPy-user] __eq__ for scipy.sparse not working?
In-Reply-To: <4dacb2560804141445l5966a597g9f0327a1de220dbc@mail.gmail.com>
References: <4dacb2560804141445l5966a597g9f0327a1de220dbc@mail.gmail.com>
Message-ID:

On Mon, Apr 14, 2008 at 4:45 PM, Joseph Turian wrote:
> I am having trouble determining equality of sparse matrices.
> Consider this code snippet.
> Although the sparse matrices appear to be equal,
> z==w returns False (until I convert the matrices to dense matrices).
> What is the problem with the equality test here?

Sparse matrices don't currently support that functionality. A workaround could be

abs(A-B).nnz == 0

-- Nathan Bell
wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From agile.aspect at gmail.com Tue Apr 15 01:15:01 2008
From: agile.aspect at gmail.com (Agile Aspect)
Date: Mon, 14 Apr 2008 22:15:01 -0700
Subject: [SciPy-user] bvp
In-Reply-To:
References:
Message-ID: <480439D5.5050605@GMail.COM>

Hi - I tried version 0.2.2 using Python 2.4.2 from the command line and it works great!

I have a related question which may be off topic.

When I run this test script from inside Eclipse using Python/Qt4 (the same C-based Python I used to test bvp from the command line) it generates the following error

Traceback (most recent call last):
  File "/home/ken/projects/workspace/cpp/bvp/src/test/ex1.py", line 24, in ?
    solution = bvp.colnew.solve(
AttributeError: 'module' object has no attribute 'colnew'

Any ideas as to what might be causing this problem?

Is this a permission problem? Would it be possible to use reflection to get around it?

I've been using Eclipse/Python/Qt4 for a couple of years and this is the first time I've had this problem.

Any help would be greatly appreciated.

-- Ken

Pauli Virtanen wrote:
> Hi Nils,
>
> Fri, 11 Apr 2008 20:03:24 +0200, Nils Wagner wrote:
>
>> I installed bvp using the mercurial repository
>>
>> hg clone static-http://www.iki.fi/pav/hg/bvp.hg bvp.hg
>>
>> The second example doesn't work for me. Here is the output
>>
>> /usr/bin/python -i ex2.py
>> 1.0
>> unexpected array size: new_size=1, got array with arr_size=0
>> Traceback (most recent call last):
>>   File "ex2.py", line 61, in ?
>>     coarsen_initial_guess_mesh=True)
>>   File "/usr/lib/python2.4/site-packages/bvp/colnew.py",
>> line 522, in solve
>>     vectorized_guess)
>> _colnew.error: failed in converting 8th argument `fixpnt' of
>> _colnew.colnew to C/Fortran array
>
> Should also be fixed in 0.2.2 and current bvp.hg. (And yep, it was also
> caught by automated tests.)
>
> The cause is that apparently something changed in f2py between numpy
> 1.0.4 and 1.0.5: in colnew.pyf I have
>
> integer, dimension(11), intent(in) :: ipar
> double precision, dimension(ipar[10]), intent(in) :: fixpnt
>
> However, f2py bugs out with the "failed in converting" if ipar[10] == 0
> and fixpnt.size == 0, which I don't think it did in 1.0.3 or 1.0.4. I
> fixed this by making fixpnt a shape = (1,) array even for ipar[10] == 0,
> and it appears to work on numpy 1.0.2, 1.0.4, 1.0.5, even though I don't
> know whether it should.

-- Article. VI. Clause 3 of the constitution of the United States states:

"The Senators and Representatives before mentioned, and the Members of the several State Legislatures, and all executive and judicial Officers, both of the United States and of the several States, shall be bound by Oath or Affirmation, to support this Constitution; but no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States."

From abhinav.sarkar at gmail.com Tue Apr 15 02:48:50 2008
From: abhinav.sarkar at gmail.com (Abhinav Sarkar)
Date: Tue, 15 Apr 2008 06:48:50 +0000 (UTC)
Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices
References:
Message-ID:

abhinav sarkar <abhinav.sarkar at gmail.com> writes:

> Hi
> I am trying to solve a generalized eigenvalue problem for sparse matrices.
> The problem is of the form
> A*x = σ*M*x
> where A and M are sparse matrices, x is a vector, and sigma is a
> scalar whose value is to be found.
>
> For this I am using the Arpack function ARPACK_gen_eigs provided in
> the module scipy.sparse.linalg.eigen.arpack.speigs. However, the
> solution which I am getting does not match the solution obtained
> from the eig function in MATLAB. For example:
>
> A = [1 0 0 -33 16 0; 0 1 0 16 -33 16; 0 0 1 0 16 -33; 1601 -96 256
> -1800 0 0; -96 1601 -96 0 -1800 0; 256 -96 1601 0 0 -1800]
> M = [0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1; -33 16 0 0 0 0; 16 -33 16
> 0 0 0; 0 16 -33 0 0 0]
>
> The matrices are written in MATLAB format, spaces separating the row
> elements and ";" separating the rows.
> When I solve this problem in MATLAB using the function eig, I get the
> following eigenvalues:
> -155.0345, -56.9898, -45.2209, -31.9376, -28.5367, -9.1611
>
> However, when I solve it using ARPACK_gen_eigs to find the two
> eigenvalues near 10, I get the following solution:
> 13.83675153-1.71439075j, 13.83675153+1.71439075j
> which is certainly not correct.
> --
> Abhinav Sarkar

I tried the eigenvalue problem solver provided for dense matrices at scipy.linalg.eig and it gives the same result as the MATLAB function eig. Then why is scipy.sparse.linalg.eigen.arpack.speigs.ARPACK_gen_eigs giving a different result? Please help me out.

Regards
---
Abhinav Sarkar

From jbednar at inf.ed.ac.uk Tue Apr 15 02:51:46 2008
From: jbednar at inf.ed.ac.uk (James A. Bednar)
Date: Tue, 15 Apr 2008 07:51:46 +0100
Subject: [SciPy-user] How to draw a 3D graphic of a function?
In-Reply-To: <9911419a0804141027h2df32fd4u4e22bafd5aff6b11@mail.gmail.com>
References: <20080411130203.GD1942@phare.normalesup.org> <18433.46265.689213.106061@lodestar.inf.ed.ac.uk> <9911419a0804141027h2df32fd4u4e22bafd5aff6b11@mail.gmail.com>
Message-ID: <18436.20610.243298.892661@lodestar.inf.ed.ac.uk>

| From: Joshua Lippai
| Date: Apr 14 10:27:52 2008 -0700
|
| I can't import matplotlib.axes3d using a 0.98pre SVN build. Here's my output:
|
| In [3]: from matplotlib import axes3d
| ---------------------------------------------------------------------------
| ImportError                               Traceback (most recent call last)
|
| /Users/Josh/ in ()
|
| /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/axes3d.py
| in ()
|      14 from axes import Axes
|      15 import cbook
| ---> 16 from transforms import unit_bbox
|      17
|      18 import numpy as npy
|
| ImportError: cannot import name unit_bbox

That sounds like something to report to the matplotlib list. I just checked on the latest released version of matplotlib (0.91.2), and it still works on that version.

Jim

| On Sun, Apr 13, 2008 at 12:22 AM, James A. Bednar wrote:
| > | Date: Fri, 11 Apr 2008 15:02:03 +0200
| > | From: Gael Varoquaux
| > |
| > | On Fri, Apr 11, 2008 at 08:57:50PM +0800, zhang chi wrote:
| > | > I want to draw a matrix of 100 X 100, its elements are the values of a function.
| > |
| > | I suppose you want to map the value of your matrix to the altitude of a
| > | surface?
| > |
| > | You can do this with Mayavi2. Have a look at the user guide,
| >
| > As shown below, you can also do this with matplotlib, which more
| > people will probably have installed. There was a suggestion that I add
| > this to the matplotlib cookbook, but I still haven't gotten a chance
| > to do so...
| >
| > Jim
| >
| > | Date: Tue, 02 Oct 2007 04:56:56 -0400
| > | From: Joe Harrington
| > |
| > | Or, you could just do it with matplotlib...
| > | | > | http://physicsmajor.wordpress.com/2007/04/22/3d-surface-with-matplotlib/ | > | | > | This was the first hit on a google search for "matplotlib surface". I | > | tested it and it works in 0.90.1. | > | > Interesting! I couldn't find any documentation at all, but after some | > hacking on that script I was able to make matplotlib 0.90.1 plot a | > wireframe surface for a 2D numpy array, so I thought it could be | > useful to include the code (below). | > | > Note that the original example uses plot_surface instead of | > plot_wireframe, but I've found plot_surface to be quite buggy, with | > plots disappearing entirely much of the time, while plot_wireframe has | > been reliable so far. There is also contour3D, though that doesn't | > look very useful yet. Hopefully these 3D plots will all be polished | > up a bit and made public in a new matplotlib release soon! | > | > Jim | > _______________________________________________________________________________ | > | > import pylab | > from numpy import outer,arange,cos,sin,ones,zeros,array | > from matplotlib import axes3d | > | > def matrixplot3d(mat,title=None): | > fig = pylab.figure() | > ax = axes3d.Axes3D(fig) | > | > # Construct matrices for r and c values | > rn,cn = mat.shape | > c = outer(ones(rn),arange(cn*1.0)) | > r = outer(arange(rn*1.0),ones(cn)) | > | > ax.plot_wireframe(r,c,mat) | > | > ax.set_xlabel('R') | > ax.set_ylabel('C') | > ax.set_zlabel('Value') | > | > if title: windowtitle(title) | > pylab.show() | > | > | > matrixplot3d(array([[0.1,0.5,0.9],[0.2,0.1,0.0]])) | > | > | > -- | > The University of Edinburgh is a charitable body, registered in | > Scotland, with registration number SC005336. | > | > | > | > _______________________________________________ | > SciPy-user mailing list | > SciPy-user at scipy.org | > http://projects.scipy.org/mailman/listinfo/scipy-user | > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From pav at iki.fi Tue Apr 15 03:34:24 2008 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 15 Apr 2008 07:34:24 +0000 (UTC) Subject: [SciPy-user] bvp References: <480439D5.5050605@GMail.COM> Message-ID: Hi, Mon, 14 Apr 2008 22:15:01 -0700, Agile Aspect wrote: > Hi - I tried version 0.2.2 using Python 2.4.2 from the command line and > it works great! > > I have a related question which may be off topic. > > When I run this test script from inside Eclipse using Python/Qt4 (the > same C based Python I used to test bvp from the command line) it > generates the following error > > Traceback (most recent call last): > File > "/home/ken/projects/workspace/cpp/bvp/src/test/ex1.py", line 24, in ? > solution = bvp.colnew.solve( > AttributeError: 'module' object has no attribute > 'colnew' > > Any ideas as to what might be causing this problem? > > Is this permission problem? Would it be possible to use reflection to > get around it? This is strange. Does import bvp.colnew work inside eclipse? Are you running Eclipse in the bvp source directory (will not work, because the bvp directory in the source package doesn't contain the compiled extensions). Is your Python path correctly set in Eclipse? 
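A quick way to check from inside Eclipse what is actually being picked up (plain Python introspection, nothing bvp-specific):

    import bvp
    print bvp.__file__     # shows which bvp/ directory was imported
    import bvp.colnew      # raises ImportError if the compiled extensions are absent

If __file__ points into the unpacked source tree rather than into site-packages, that would explain the missing attribute.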
-- Pauli Virtanen

From lbolla at gmail.com Tue Apr 15 03:40:01 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Tue, 15 Apr 2008 09:40:01 +0200
Subject: [SciPy-user] bvp
In-Reply-To: <480439D5.5050605@GMail.COM>
References: <480439D5.5050605@GMail.COM>
Message-ID: <80c99e790804150040s72bff476g661eca64a473f89e@mail.gmail.com>

Could this simply be because you are running the example file from the bvp install directory?

By the way, I translated a couple of examples from MATLAB's bvp4c tutorial to Python: you can find them here: http://lbolla.wordpress.com/2008/04/14/bvp/

L.

On Tue, Apr 15, 2008 at 7:15 AM, Agile Aspect wrote:
> Hi - I tried version 0.2.2 using Python 2.4.2 from the command
> line and it works great!
>
> I have a related question which may be off topic.
>
> When I run this test script from inside Eclipse using Python/Qt4
> (the same C-based Python I used to test bvp from the command
> line) it generates the following error
>
> Traceback (most recent call last):
>   File "/home/ken/projects/workspace/cpp/bvp/src/test/ex1.py", line 24, in ?
>     solution = bvp.colnew.solve(
> AttributeError: 'module' object has no attribute 'colnew'
>
> Any ideas as to what might be causing this problem?
>
> Is this a permission problem? Would it be possible to use reflection
> to get around it?
>
> I've been using Eclipse/Python/Qt4 for a couple of years and this
> is the first time I've had this problem.
>
> Any help would be greatly appreciated.
>
> -- Ken
>
> Pauli Virtanen wrote:
> > Hi Nils,
> >
> > Fri, 11 Apr 2008 20:03:24 +0200, Nils Wagner wrote:
> >
> >> I installed bvp using the mercurial repository
> >> [clip]
> >> _colnew.error: failed in converting 8th argument `fixpnt' of
> >> _colnew.colnew to C/Fortran array
> >
> > Should also be fixed in 0.2.2 and current bvp.hg. (And yep, it was also
> > caught by automated tests.)
> >
> > The cause is that apparently something changed in f2py between numpy
> > 1.0.4 and 1.0.5: in colnew.pyf I have
> >
> > integer, dimension(11), intent(in) :: ipar
> > double precision, dimension(ipar[10]), intent(in) :: fixpnt
> >
> > However, f2py bugs out with the "failed in converting" if ipar[10] == 0
> > and fixpnt.size == 0, which I don't think it did in 1.0.3 or 1.0.4. I
> > fixed this by making fixpnt a shape = (1,) array even for ipar[10] == 0,
> > and it appears to work on numpy 1.0.2, 1.0.4, 1.0.5, even though I don't
> > know whether it should.
>
> -- Article. VI. Clause 3 of the constitution of the United States states:
>
> "The Senators and Representatives before mentioned, and the Members of
> the several State Legislatures, and all executive and judicial Officers,
> both of the United States and of the several States, shall be bound by
> Oath or Affirmation, to support this Constitution; but no religious Test
> shall ever be required as a Qualification to any Office or public Trust
> under the United States."
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-- Lorenzo Bolla
lbolla at gmail.com
http://lorenzobolla.emurse.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cohen at slac.stanford.edu Tue Apr 15 04:02:29 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Tue, 15 Apr 2008 10:02:29 +0200
Subject: [SciPy-user] How to draw a 3D graphic of a function?
In-Reply-To: <18436.20610.243298.892661@lodestar.inf.ed.ac.uk>
References: <20080411130203.GD1942@phare.normalesup.org> <18433.46265.689213.106061@lodestar.inf.ed.ac.uk> <9911419a0804141027h2df32fd4u4e22bafd5aff6b11@mail.gmail.com> <18436.20610.243298.892661@lodestar.inf.ed.ac.uk>
Message-ID: <48046115.4050307@slac.stanford.edu>

I think that the whole 3D part of matplotlib is now broken. There are discussions, but not a lot of work yet as to how to get this functionality back. Might be via VTK...

best,
J.

James A. Bednar wrote:
> | From: Joshua Lippai
> | Date: Apr 14 10:27:52 2008 -0700
> |
> | I can't import matplotlib.axes3d using a 0.98pre SVN build. Here's my output:
> |
> | In [3]: from matplotlib import axes3d
> | ---------------------------------------------------------------------------
> | ImportError                               Traceback (most recent call last)
> |
> | /Users/Josh/ in ()
> |
> | /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/matplotlib/axes3d.py
> | in ()
> |      14 from axes import Axes
> |      15 import cbook
> | ---> 16 from transforms import unit_bbox
> |      17
> |      18 import numpy as npy
> |
> | ImportError: cannot import name unit_bbox
>
> That sounds like something to report to the matplotlib list. I just
> checked on the latest released version of matplotlib (0.91.2), and it
> still works on that version.
>
> Jim
>
> | On Sun, Apr 13, 2008 at 12:22 AM, James A. Bednar wrote:
> | > | Date: Fri, 11 Apr 2008 15:02:03 +0200
> | > | From: Gael Varoquaux
> | > |
> | > | On Fri, Apr 11, 2008 at 08:57:50PM +0800, zhang chi wrote:
> | > | > I want to draw a matrix of 100 X 100, its elements are the values of a function.
> | > |
> | > | I suppose you want to map the value of your matrix to the altitude of a
> | > | surface?
> | > |
> | > | You can do this with Mayavi2. Have a look at the user guide,
> | >
> | > As shown below, you can also do this with matplotlib, which more
> | > people will probably have installed. There was a suggestion that I add
> | > this to the matplotlib cookbook, but I still haven't gotten a chance
> | > to do so...
> | >
> | > Jim
> | >
> | > | Date: Tue, 02 Oct 2007 04:56:56 -0400
> | > | From: Joe Harrington
> | > |
> | > | Or, you could just do it with matplotlib...
> | > |
> | > | http://physicsmajor.wordpress.com/2007/04/22/3d-surface-with-matplotlib/
> | > |
> | > | This was the first hit on a google search for "matplotlib surface". I
> | > | tested it and it works in 0.90.1.
> | >
> | > Interesting! I couldn't find any documentation at all, but after some
> | > hacking on that script I was able to make matplotlib 0.90.1 plot a
> | > wireframe surface for a 2D numpy array, so I thought it could be
> | > useful to include the code (below).
> | >
> | > Note that the original example uses plot_surface instead of
> | > plot_wireframe, but I've found plot_surface to be quite buggy, with
> | > plots disappearing entirely much of the time, while plot_wireframe has
> | > been reliable so far.
> | > There is also contour3D, though that doesn't look very useful yet.
> | > Hopefully these 3D plots will all be polished up a bit and made
> | > public in a new matplotlib release soon!
> | >
> | > Jim
> | > _______________________________________________________________________________
> | >
> | > import pylab
> | > from numpy import outer,arange,cos,sin,ones,zeros,array
> | > from matplotlib import axes3d
> | >
> | > def matrixplot3d(mat,title=None):
> | >     fig = pylab.figure()
> | >     ax = axes3d.Axes3D(fig)
> | >
> | >     # Construct matrices for r and c values
> | >     rn,cn = mat.shape
> | >     c = outer(ones(rn),arange(cn*1.0))
> | >     r = outer(arange(rn*1.0),ones(cn))
> | >
> | >     ax.plot_wireframe(r,c,mat)
> | >
> | >     ax.set_xlabel('R')
> | >     ax.set_ylabel('C')
> | >     ax.set_zlabel('Value')
> | >
> | >     if title: ax.set_title(title)  # the original called an undefined
> | >                                    # windowtitle(); use the axes title
> | >     pylab.show()
> | >
> | > matrixplot3d(array([[0.1,0.5,0.9],[0.2,0.1,0.0]]))
> | >
> | > --
> | > The University of Edinburgh is a charitable body, registered in
> | > Scotland, with registration number SC005336.
> | >
> | > _______________________________________________
> | > SciPy-user mailing list
> | > SciPy-user at scipy.org
> | > http://projects.scipy.org/mailman/listinfo/scipy-user

From robince at gmail.com Tue Apr 15 04:06:38 2008
From: robince at gmail.com (Robin)
Date: Tue, 15 Apr 2008 09:06:38 +0100
Subject: [SciPy-user] test_add_sub crashes interpreter
Message-ID:

Hi,

Using scipy dev4140 on Windows XP with cygwin, the interpreter crashes
on the following test:

test_add_sub (test_base.TestBSR) ...

If this is likely to be ATLAS related I could try rebuilding the
latest version... (not sure what component of scipy the test is
actually for)

Robin

From robince at gmail.com Tue Apr 15 04:51:09 2008
From: robince at gmail.com (Robin)
Date: Tue, 15 Apr 2008 09:51:09 +0100
Subject: [SciPy-user] test_add_sub crashes interpreter
In-Reply-To:
References:
Message-ID:

After reading the README file in the sparsetools directory I found out
that the cygwin version of swig (1.3.27) was older than required.
However, after building the latest swig (1.3.35) and rebuilding scipy
the problem persists.

Robin

On Tue, Apr 15, 2008 at 9:06 AM, Robin wrote:
> Hi,
>
> Using scipy dev4140 on Windows XP with cygwin, the interpreter crashes
> on the following test:
>
> test_add_sub (test_base.TestBSR) ...
>
> If this is likely to be ATLAS related I could try rebuilding the
> latest version... (not sure what component of scipy the test is
> actually for)
>
> Robin

From ondrej at certik.cz Tue Apr 15 06:12:25 2008
From: ondrej at certik.cz (Ondrej Certik)
Date: Tue, 15 Apr 2008 12:12:25 +0200
Subject: [SciPy-user] How to draw a 3D graphic of a function?
In-Reply-To: <48046115.4050307@slac.stanford.edu>
References: <20080411130203.GD1942@phare.normalesup.org> <18433.46265.689213.106061@lodestar.inf.ed.ac.uk> <9911419a0804141027h2df32fd4u4e22bafd5aff6b11@mail.gmail.com> <18436.20610.243298.892661@lodestar.inf.ed.ac.uk> <48046115.4050307@slac.stanford.edu>
Message-ID: <85b5c3130804150312g36771879ke94f338f916ace80@mail.gmail.com>

On Tue, Apr 15, 2008 at 10:02 AM, Johann Cohen-Tanugi wrote:
> I think that the whole 3D part of matplotlib is now broken. There are
> discussions but not a lot of work yet as to how to get this
> functionality back. Might be via VTK....

There is a GSoC application for SymPy to integrate its nice 3D plotting
capabilities into matplotlib.
In the meantime, you can try it out yourself, as described here:

http://code.google.com/p/sympy/wiki/PlottingModule

Ondrej

From nmarais at sun.ac.za Tue Apr 15 08:54:35 2008
From: nmarais at sun.ac.za (Neilen Marais)
Date: Tue, 15 Apr 2008 12:54:35 +0000 (UTC)
Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices
References:
Message-ID:

Abhinav,

On Tue, 15 Apr 2008 01:50:20 +0530, abhinav sarkar wrote:

> Hi
> n = 3
> h = 1.0/(n+1)
>
> a = 1
> Pr = 7.0
> Ra = 1000
>
> A = get_A_mat(n, h, a, Pr, Ra)
> M = get_M_mat(n, h, a, Pr, Ra)
>
> sigma = 10.0
          ^^^
I think you're looking for -10.

> B = A - sigma*M
> s = dsolve.splu(B)
> e, v = ARPACK_gen_eigs(M.matvec, s.solve, 2*n, sigma, 2, 'LR')
                                                          ^^
You may try futzing around with this parameter.

Anyway, I did play around a bit with your example and never really got a
good answer. But my experience has been that ARPACK doesn't really work
well with small matrices. Try a bigger problem and see if things turn
out better; obviously you don't need a sparse solver to solve a 6x6
matrix system ;). You may also find that if you force Matlab to do this
problem using a sparse solver you'll run into the same issues.
Unfortunately I don't know enough about the ARPACK routines to give more
insight. They do work quite well on my problems, where the eigenvalues
are always positive.

> which also seems to be correct to me. Please tell me whether the method
> I am using is correct and why I am not getting correct solutions. Is
> ARPACK_gen_eigs broken? Or is there a problem in my code?

Seems OK, but I'm no expert ;)

Regards
Neilen

>
> Regards
> --
> Abhinav Sarkar
> 4th year Undergraduate Student
> Deptt. of Mechanical Engg.
> Indian Institute of Technology, Kharagpur India
>
> Web: http://claimid.com/abhin4v
> Twitter: http://twitter.com/abhin4v
> ---------
> The world is a book, those who do not travel read only one page.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From wnbell at gmail.com Tue Apr 15 10:24:04 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 15 Apr 2008 09:24:04 -0500
Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices
In-Reply-To:
References:
Message-ID:

On Tue, Apr 15, 2008 at 1:48 AM, Abhinav Sarkar wrote:
> I tried the eigenvalue problem solver provided for dense matrices at
> scipy.linalg.eig and it gives the same result as the MATLAB function
> eig. Then why is scipy.sparse.linalg.eigen.arpack.speigs.ARPACK_gen_eigs
> giving a different result? Please help me out.

Have you also tried eigs() in MATLAB? I'm not sure what happens when
you call eig() with a sparse matrix.

As Neilen said, this isn't a good test for ARPACK. Try setting the
'tol' parameter to something smaller.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From matthieu.brucher at gmail.com Tue Apr 15 10:29:58 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 15 Apr 2008 16:29:58 +0200
Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices
In-Reply-To:
References:
Message-ID:

Hi,

I've tested this function with a 2000x2000 array and, like Abhinav, I
didn't manage to find correct eigenvalues. Everything worked as expected
with a dense array with the same elements using numpy.linalg.eig, but not
with ARPACK...
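For reference, the dense cross-check I mean is roughly the following (a
small stand-in matrix with a known spectrum; the names are invented and
my real case is 2000x2000):

import numpy as np
from scipy import sparse

n = 200                                # stand-in size
A = sparse.lil_matrix((n, n))
for i in range(n):
    A[i, i] = i + 1.0                  # diagonal matrix: eigenvalues 1..n
w = np.linalg.eigvals(A.todense())     # dense LAPACK reference
print(np.sort(w.real)[-5:])            # largest eigenvalues, to compare with ARPACK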
Matthieu 2008/4/15, Nathan Bell : > > On Tue, Apr 15, 2008 at 1:48 AM, Abhinav Sarkar > wrote: > > I tried the eigenvalue problem solver provided for the dense matrices > at > > scipy.linalg.eig and it gives the same result as the MATLAB functin > eig. Then > > why is scipy.sparse.linalg.eigen.arpack.speigs.ARPACK_gen_eigs giving > different > > result? Please help me out. > > > Have you also tried eigs() in MATLAB? I'm not sure what happens when > you call eig() with a sparse matrix. > > As Neilen said, this isn't a good test for ARPACK. Try setting the > 'tol' parameter to something smaller. > > > -- > Nathan Bell wnbell at gmail.com > http://graphics.cs.uiuc.edu/~wnbell/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From abhinav.sarkar at gmail.com Tue Apr 15 10:43:38 2008 From: abhinav.sarkar at gmail.com (abhinav sarkar) Date: Tue, 15 Apr 2008 20:13:38 +0530 Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices In-Reply-To: References: Message-ID: On Tue, Apr 15, 2008 at 6:24 PM, Neilen Marais wrote: > Abinav, > > On Tue, 15 Apr 2008 01:50:20 +0530, abhinav sarkar wrote: > > > Hi > > > n = 3 > > h = 1.0/(n+1) > > > > a = 1 > > Pr = 7.0 > > Ra = 1000 > > > > A = get_A_mat(n, h, a, Pr, Ra) > > M = get_M_mat(n, h, a, Pr, Ra) > > > > sigma = 10.0 > ^^^ > I think you're looking for -10. i am looking for the largest eigenvalue > > > > B = A - sigma*M > > s = dsolve.splu(B) > > e, v = ARPACK_gen_eigs(M.matvec, s.solve, 2*n, sigma, 2, 'LR') > ^^ > You may try futzing around with this parameter. > > Anyway, I did play around a bit with your example and never really got a > good answer. But my experience has been that ARPACK doesn't really work > well with small matrices. Try a bigger problem and see if things turn out > better, I mean, obviously you don't need a sparse solver to solve a 6x6 > matrix system ;). You may also find that if you force Matlab to do this > problem using a sparse solver you'll run into the same issues. > Unfortunately I don't know enough about the ARPACK routines to give more > insight. They do work quite well on my problems, where the eigenvalues > are always positive. > > > > which also seems to be correct to me. Please tell if the method I am > > using is correct or not and why am I not getting correct solutions. Is > > the ARPACK_gen_eigs broken? Or is there a problem in my code? > > Seems OK, but I'm no expert ;) > > Regards > Neilen > > > > > > Regards > > -- > > Abhinav Sarkar > > 4th year Undergraduate Student > > Deptt. of Mechanical Engg. > > Indian Institute of Technology, Kharagpur India > > > > Web: http://claimid.com/abhin4v > > Twitter: http://twitter.com/abhin4v > > --------- > > The world is a book, those who do not travel read only one page. 
> > _______________________________________________ SciPy-user mailing list > > > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I have formulated the problem in a slightly different way and now I am able to obtain the correct eigenvalues. Here is the new code: class Solver: def __init__(self, A, M, sigma): self.A = A self.M = M self.sigma = sigma def solve(self, b): #(M-sigma*A)*x=A*b #return x return dsolve.spsolve((self.M-self.sigma*self.A), self.A*b) A = get_A_mat(n, h, a, Pr, Ra) M = get_M_mat(n, h, a, Pr, Ra) I = sparse.eye(2*n,2*n) sigma = 0.0 sigma_solve = Solver(A, M, sigma).solve e = 1.0/ARPACK_gen_eigs(I.matvec, sigma_solve, I.shape[0], sigma, 5, 'LR')[0] Here I have converted the A*x = sigma*M*x to inv(A)*M*x = nu*I*x where nu = 1/sigma. And replaced s.solve by my own Solver class. Now solutions obtained are correct. -- Abhinav Sarkar 4th year Undergraduate Student Deptt. of Mechanical Engg. Indian Institute of Technology, Kharagpur India Mobile:+91-99327-41517 Residence:+91-631-2429887 Web: http://claimid.com/abhin4v Twitter: http://twitter.com/abhin4v --------- The world is a book, those who do not travel read only one page. From ondrej at certik.cz Tue Apr 15 10:47:04 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Tue, 15 Apr 2008 16:47:04 +0200 Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices In-Reply-To: References: Message-ID: <85b5c3130804150747j6f6117b6tf36d0b5cbf0c1862@mail.gmail.com> On Tue, Apr 15, 2008 at 4:29 PM, Matthieu Brucher wrote: > Hi, > > I've tested this function with a 2000x2000 array, and like Abhinav, I didn't > manage to find correct eigenvalues. Everything worked as expected with a > dense array with the same element with numpy.linalg.eig, but not with > Arpack... When I compiled arpack myself and used it myself, I also didn't manage to get correct results. Anyway, I wouldn't recommend to use arpack. Ondrej From matthieu.brucher at gmail.com Tue Apr 15 10:55:23 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 15 Apr 2008 16:55:23 +0200 Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices In-Reply-To: <85b5c3130804150747j6f6117b6tf36d0b5cbf0c1862@mail.gmail.com> References: <85b5c3130804150747j6f6117b6tf36d0b5cbf0c1862@mail.gmail.com> Message-ID: 2008/4/15, Ondrej Certik : > > On Tue, Apr 15, 2008 at 4:29 PM, Matthieu Brucher > wrote: > > Hi, > > > > I've tested this function with a 2000x2000 array, and like Abhinav, I > didn't > > manage to find correct eigenvalues. Everything worked as expected with a > > dense array with the same element with numpy.linalg.eig, but not with > > Arpack... > > > > When I compiled arpack myself and used it myself, I also didn't manage > to get correct results. Anyway, I wouldn't recommend to use arpack. > > > Ondrej > What do you recommend for generalized eigenproblems with sparse arrays ? Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From ondrej at certik.cz Tue Apr 15 11:08:29 2008
From: ondrej at certik.cz (Ondrej Certik)
Date: Tue, 15 Apr 2008 17:08:29 +0200
Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices
In-Reply-To:
References: <85b5c3130804150747j6f6117b6tf36d0b5cbf0c1862@mail.gmail.com>
Message-ID: <85b5c3130804150808l351460b9l24705bf0f4c29cd1@mail.gmail.com>

On Tue, Apr 15, 2008 at 4:55 PM, Matthieu Brucher wrote:
> 2008/4/15, Ondrej Certik :
> > On Tue, Apr 15, 2008 at 4:29 PM, Matthieu Brucher wrote:
> > > Hi,
> > >
> > > I've tested this function with a 2000x2000 array and, like Abhinav, I
> > > didn't manage to find correct eigenvalues. Everything worked as expected
> > > with a dense array with the same elements using numpy.linalg.eig, but not
> > > with ARPACK...
> >
> > When I compiled arpack myself and used it myself, I also didn't manage
> > to get correct results. Anyway, I wouldn't recommend using arpack.
> >
> > Ondrej
>
> What do you recommend for generalized eigenproblems with sparse arrays ?

I have had very good experience with pysparse. That's why I was objecting
to its removal from scipy. But I think I got the OK to put it back in, I
just didn't find time for that.

Plus there are other good solvers like BLZPACK or PRIMME; both should be
able to do generalized eigenvalues, and both are more modern than ARPACK.
I haven't yet tried them extensively, but it's on my todo list to create
a scikit with an easy-to-use interface to all of them. Any help
appreciated. :)

Ondrej

From turian at gmail.com Tue Apr 15 13:37:25 2008
From: turian at gmail.com (Joseph Turian)
Date: Tue, 15 Apr 2008 13:37:25 -0400
Subject: [SciPy-user] __eq__ for scipy.sparse not working?
In-Reply-To:
References: <4dacb2560804141445l5966a597g9f0327a1de220dbc@mail.gmail.com>
Message-ID: <4dacb2560804151037g323c65e9qb232cffce34bbfe@mail.gmail.com>

Is there a reason that sparse matrices don't support this functionality?
What is actually happening when I try equality testing for A == B?
It seems undesirable that equality comparison is permitted, even though it
has unexpected behavior.

On Mon, Apr 14, 2008 at 7:57 PM, Nathan Bell wrote:
> On Mon, Apr 14, 2008 at 4:45 PM, Joseph Turian wrote:
> > I am having trouble determining equality of sparse matrices.
> > Consider this code snippet. Although the sparse matrices appear to be
> > equal, z==y returns false (until I convert the matrices to dense matrices).
> > What is the problem with the equality test here?
>
> Sparse matrices don't currently support that functionality. A
> workaround could be abs(A-B).nnz == 0
>
> --
> Nathan Bell wnbell at gmail.com
> http://graphics.cs.uiuc.edu/~wnbell/
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Academic: http://www-etud.iro.umontreal.ca/~turian/
Business: http://www.metaoptimize.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Achim.Gaedke at physik.tu-darmstadt.de Tue Apr 15 14:15:33 2008
From: Achim.Gaedke at physik.tu-darmstadt.de (Achim Gaedke)
Date: Tue, 15 Apr 2008 20:15:33 +0200
Subject: [SciPy-user] number of function evaluation for leastsq
Message-ID: <4804F0C5.3000901@physik.tu-darmstadt.de>

Hello!

I use scipy.optimize.leastsq to fit the parameters of a model to measured
data. Each evaluation of that model costs 1.5 h of computation time.
Unfortunately I cannot specify a gradient function.
While observing the approximation process I found that the first 3 runs
were always with the same parameters. At first I thought the parameter
variation for the gradient approximation was too tiny to show up in a
simple print statement. Later I found out that these three runs were
independent of the number of fit parameters.

A closer look at the code reveals the reason (svn dir trunk/scipy/optimize):

The 1st call checks with Python code whether the function is valid:

line 265 of minpack.py
m = check_func(func,x0,args,n)[0]

The 2nd call determines the right amount of memory for the parameters:

line 449 of __minpack.h
ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args,
1, minpack_error);

The 3rd call comes from inside the Fortran algorithm (the essential one!).

Unfortunately this behaviour is not documented, and I would strongly urge
that the superfluous calls to the function be avoided.

Yours, Achim

From dmitrey.kroshko at scipy.org Tue Apr 15 14:47:59 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Tue, 15 Apr 2008 21:47:59 +0300
Subject: [SciPy-user] number of function evaluation for leastsq
In-Reply-To: <4804F0C5.3000901@physik.tu-darmstadt.de>
References: <4804F0C5.3000901@physik.tu-darmstadt.de>
Message-ID: <4804F85F.8070404@scipy.org>

You could try using leastsq from scikits.openopt; it has a mechanism that
prevents calling the objective function twice with the same x value.

Regards, D.

Achim Gaedke wrote:
> Hello!
>
> I use scipy.optimize.leastsq to fit the parameters of a model to measured
> data. Each evaluation of that model costs 1.5 h of computation time.
> Unfortunately I cannot specify a gradient function.
>
> While observing the approximation process I found that the first 3 runs
> were always with the same parameters. At first I thought the parameter
> variation for the gradient approximation was too tiny to show up in a
> simple print statement. Later I found out that these three runs were
> independent of the number of fit parameters.
>
> A closer look at the code reveals the reason (svn dir trunk/scipy/optimize):
>
> The 1st call checks with Python code whether the function is valid:
>
> line 265 of minpack.py
> m = check_func(func,x0,args,n)[0]
>
> The 2nd call determines the right amount of memory for the parameters:
>
> line 449 of __minpack.h
> ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args,
> 1, minpack_error);
>
> The 3rd call comes from inside the Fortran algorithm (the essential one!).
>
> Unfortunately this behaviour is not documented, and I would strongly urge
> that the superfluous calls to the function be avoided.
>
> Yours, Achim
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From wnbell at gmail.com Tue Apr 15 15:09:28 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 15 Apr 2008 14:09:28 -0500
Subject: [SciPy-user] __eq__ for scipy.sparse not working?
In-Reply-To: <4dacb2560804151037g323c65e9qb232cffce34bbfe@mail.gmail.com>
References: <4dacb2560804141445l5966a597g9f0327a1de220dbc@mail.gmail.com> <4dacb2560804151037g323c65e9qb232cffce34bbfe@mail.gmail.com>
Message-ID:

On Tue, Apr 15, 2008 at 12:37 PM, Joseph Turian wrote:
> Is there a reason that sparse matrices don't support this functionality?
> What is actually happening when I try equality testing for A == B?
> It seems undesirable that equality comparison is permitted, even though it
> has unexpected behavior.

I agree that __eq__ should either work or raise an exception.
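To spell out the workaround from my earlier message, a quick untested
sketch:

from scipy import sparse

A = sparse.lil_matrix((3, 3)); A[0, 0] = 1.0
B = sparse.lil_matrix((3, 3)); B[0, 0] = 1.0
# equal exactly when the difference has no stored nonzeros
print(abs(A.tocsr() - B.tocsr()).nnz == 0)   # True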
As to why __eq__ isn't supported, I haven't written the necessary code to handle arrays with dtype='bool'. Offhand, I don't know what specifically needs to be changed to make sparse matrices agree with numpy's handling of boolean arrays. Many of the necessary ingredients are already present, but I have not fully explored this matter. I created a ticket in Trac for this issue: http://scipy.org/scipy/scipy/ticket/639 Unfortunately, time is scarce for me at the moment, so I can't say when I'll get around to it. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From wnbell at gmail.com Tue Apr 15 15:12:49 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 15 Apr 2008 14:12:49 -0500 Subject: [SciPy-user] __eq__ for scipy.sparse not working? In-Reply-To: References: <4dacb2560804141445l5966a597g9f0327a1de220dbc@mail.gmail.com> <4dacb2560804151037g323c65e9qb232cffce34bbfe@mail.gmail.com> Message-ID: On Tue, Apr 15, 2008 at 2:09 PM, Nathan Bell wrote: > On Tue, Apr 15, 2008 at 12:37 PM, Joseph Turian wrote: > > Is there a reason that sparse matrices don't support this functionality? > > What is actually happening when I try equality testing for A == B? > > It seems undesirable that equality comparison is permitted, even though it > > has unexpected behavior. > > I agree that __eq__ should either work or raise an exception. As to > why __eq__ isn't supported, I haven't written the necessary code to > handle arrays with dtype='bool'. > I should also add that some operations cannot be supported in a straightforward manner. For instance, (A < 2.0) is not a safe operation on large sparse matrices. It's unclear what should be done in this case. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From zunzun at zunzun.com Tue Apr 15 15:36:52 2008 From: zunzun at zunzun.com (James Phillips) Date: Tue, 15 Apr 2008 14:36:52 -0500 Subject: [SciPy-user] number of function evaluation for leastsq In-Reply-To: <4804F0C5.3000901@physik.tu-darmstadt.de> References: <4804F0C5.3000901@physik.tu-darmstadt.de> Message-ID: <268756d30804151236s597767b6m64115bece4367e5b@mail.gmail.com> If you have multiple CPUs available and your modeling can be adapted to parallelization, you might look at Parallel Python: http://www.parallelpython.com/ James Phillips http://zunzun.com On Tue, Apr 15, 2008 at 1:15 PM, Achim Gaedke < Achim.Gaedke at physik.tu-darmstadt.de> wrote: > > Each evaluation of that model costs 1.5 h of computation time. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Apr 15 16:11:37 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 15 Apr 2008 22:11:37 +0200 Subject: [SciPy-user] number of function evaluation for leastsq In-Reply-To: <4804F0C5.3000901@physik.tu-darmstadt.de> References: <4804F0C5.3000901@physik.tu-darmstadt.de> Message-ID: <9457e7c80804151311v1e7d03bfmee54a14c7e9fbf3f@mail.gmail.com> Hi Achim On 15/04/2008, Achim Gaedke wrote: > Hello! > > I use scipy.optimize.leastsq to adopt paramters of a model to measured > data. Each evaluation of that model costs 1.5 h of computation time. > Unfortunately I can not specify a gradient function. > > While observing the approximation process I found that the first 3 runs > were always with the same parameters. Woops, that should be fixed, if possible. 
In the meantime, you can use the memoize decorator as a workaround:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/466320

Your function calls take so long that you really won't notice the (tiny)
overhead.

Regards
Stéfan

From mforbes at physics.ubc.ca Tue Apr 15 16:12:15 2008
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Tue, 15 Apr 2008 13:12:15 -0700
Subject: [SciPy-user] number of function evaluation for leastsq
In-Reply-To: <4804F85F.8070404@scipy.org>
References: <4804F0C5.3000901@physik.tu-darmstadt.de> <4804F85F.8070404@scipy.org>
Message-ID: <0140E6D1-57D8-430B-B87D-B4D17B889DB6@physics.ubc.ca>

On 15 Apr 2008, at 11:47 AM, dmitrey wrote:
> You could try using leastsq from scikits.openopt; it has a mechanism that
> prevents calling the objective function twice with the same x value.
>
> Regards, D.
>
> Achim Gaedke wrote:
>> Hello!
>>
>> I use scipy.optimize.leastsq to fit the parameters of a model to measured
>> data. Each evaluation of that model costs 1.5 h of computation time.
...
>> Unfortunately this behaviour is not documented, and I would strongly urge
>> that the superfluous calls to the function be avoided.
>>
>> Yours, Achim

In principle, you could also just roll your own memoization to cache the
results (assuming that you can afford to store them, but since it takes
1.5 hours per call, you can't have too many calls!):

import numpy

def memoize(f):
    def memoized_f(x,_cache={}):
        key = tuple(numpy.ravel(x))  # Numpy arrays are not valid keys...
        if not _cache.has_key(key):
            _cache[key] = f(x)
        return _cache[key]
    return memoized_f

Now you can decorate your expensive function, and it will not recompute
values for the same input.

@memoize
def expensive_function(x):
    print "Computing %s**2..."%str(x)
    return x*x

>>> expensive_function(1.01)
Computing 1.01**2...
1.0201
>>> expensive_function(1.01)
1.0201
>>> expensive_function(array([1,2,3]))
Computing [1 2 3]**2...
array([1, 4, 9])
>>> expensive_function(array([1,2,3]))
array([1, 4, 9])

Michael.

From peridot.faceted at gmail.com Tue Apr 15 16:14:33 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 15 Apr 2008 16:14:33 -0400
Subject: [SciPy-user] number of function evaluation for leastsq
In-Reply-To: <4804F0C5.3000901@physik.tu-darmstadt.de>
References: <4804F0C5.3000901@physik.tu-darmstadt.de>
Message-ID:

On 15/04/2008, Achim Gaedke wrote:
> Hello!
>
> I use scipy.optimize.leastsq to fit the parameters of a model to measured
> data. Each evaluation of that model costs 1.5 h of computation time.
> Unfortunately I cannot specify a gradient function.

Yikes. I'm afraid this is going to be a rather painful process.
Unfortunately, while minimizing the number of function evaluations is a
goal of optimization procedures, they are not necessarily tuned carefully
enough for such an expensive function. You may want to look into taking
advantage of any structure your problem has (for example, when I had a
similar problem I found I could modify most of my parameters rather
rapidly, while changing one of them was expensive, so I used nested
optimizers) or, if you have to do this often, coming up with an
interpolating function and optimizing that. You may also want to write a
function that is fast but behaves in a similar fashion and hold a shootout
of all the optimizers available to see which ones require the fewest
evaluations for the accuracy you need.
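For concreteness, such a shootout can be as simple as this rough sketch
(the model here is a fast stand-in, and all of the names are invented):

import numpy as np
from scipy import optimize

x = np.linspace(0.0, 1.0, 50)
data = 2.0 * np.exp(-3.0 * x)       # fake "measurement"

ncalls = [0]                        # mutable counter shared with resid()
def resid(p):
    ncalls[0] += 1
    return p[0] * np.exp(-p[1] * x) - data

p, ier = optimize.leastsq(resid, [1.0, 1.0])
print(p, ncalls[0])                 # fitted parameters, evaluation count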
If you have access to a computing cluster, you may also want to look into
some kind of parallel optimization procedure that can run a number of
function evaluations concurrently.

> While observing the approximation process I found that the first 3 runs
> were always with the same parameters. At first I thought the parameter
> variation for the gradient approximation was too tiny to show up in a
> simple print statement. Later I found out that these three runs were
> independent of the number of fit parameters.
>
> A closer look at the code reveals the reason (svn dir trunk/scipy/optimize):
>
> The 1st call checks with Python code whether the function is valid:
>
> line 265 of minpack.py
> m = check_func(func,x0,args,n)[0]
>
> The 2nd call determines the right amount of memory for the parameters:
>
> line 449 of __minpack.h
> ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args,
> 1, minpack_error);
>
> The 3rd call comes from inside the Fortran algorithm (the essential one!).
>
> Unfortunately this behaviour is not documented, and I would strongly urge
> that the superfluous calls to the function be avoided.

This is annoying, and it should be fixed inside scipy if possible; the
FORTRAN code will make this more difficult, but file a bug on the scipy
Trac and we'll look at it. In the meantime, you can use "memoizing" to
avoid recomputing your function. See
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/498110
for a bells-and-whistles implementation, but the basic idea is just that
you wrap your function in a wrapper that stores a dictionary mapping
inputs to outputs. Then every time you call the function it checks
whether the function has been called before with these values, and if so,
returns the value computed before.

Good luck,
Anne

From pav at iki.fi Tue Apr 15 16:26:23 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 15 Apr 2008 20:26:23 +0000 (UTC)
Subject: [SciPy-user] number of function evaluation for leastsq
References: <4804F0C5.3000901@physik.tu-darmstadt.de>
Message-ID:

Tue, 15 Apr 2008 20:15:33 +0200, Achim Gaedke wrote:
> Hello!
>
> I use scipy.optimize.leastsq to fit the parameters of a model to measured
> data. Each evaluation of that model costs 1.5 h of computation time.
> Unfortunately I cannot specify a gradient function.
[clip]
> Unfortunately this behaviour is not documented, and I would strongly urge
> that the superfluous calls to the function be avoided.

As a workaround, you can memoize the function calls, something like this:

---- clip ----
import scipy as sp

def memoize_single(func):
    """Optimize out repeated function calls with the same arguments"""
    # use a mutable cache: plain variables would be rebound as locals
    # inside wrapper() and raise UnboundLocalError on the first call
    cache = {'z': None, 'f': None}
    def wrapper(z, *a, **kw):
        if cache['f'] is not None and sp.all(z == cache['z']):
            return cache['f'].copy()
        cache['z'] = sp.array(z, copy=True)
        cache['f'] = sp.array(func(z, *a, **kw), copy=True)
        return cache['f'].copy()
    return wrapper

@memoize_single
def target_function(z):
    print "Evaluating..."
    z = sp.asarray(z)
    return sum(z**2)

for k in xrange(10):
    print target_function([1,2,3])
---- clip ----

It should output

---- clip ----
Evaluating...
14
14
14
14
14
14
14
14
14
14
---- clip ----

--
Pauli Virtanen

From peridot.faceted at gmail.com Tue Apr 15 16:28:59 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 15 Apr 2008 16:28:59 -0400
Subject: [SciPy-user] __eq__ for scipy.sparse not working?
In-Reply-To:
References: <4dacb2560804141445l5966a597g9f0327a1de220dbc@mail.gmail.com> <4dacb2560804151037g323c65e9qb232cffce34bbfe@mail.gmail.com>
Message-ID:

On 15/04/2008, Nathan Bell wrote:
> On Tue, Apr 15, 2008 at 12:37 PM, Joseph Turian wrote:
> > Is there a reason that sparse matrices don't support this functionality?
> > What is actually happening when I try equality testing for A == B?
> > It seems undesirable that equality comparison is permitted, even though it
> > has unexpected behavior.
>
> I agree that __eq__ should either work or raise an exception. As to
> why __eq__ isn't supported, I haven't written the necessary code to
> handle arrays with dtype='bool'.
>
> Offhand, I don't know what specifically needs to be changed to make
> sparse matrices agree with numpy's handling of boolean arrays. Many
> of the necessary ingredients are already present, but I have not fully
> explored this matter.
>
> I created a ticket in Trac for this issue:
> http://scipy.org/scipy/scipy/ticket/639
>
> Unfortunately, time is scarce for me at the moment, so I can't say
> when I'll get around to it.

This is actually tricky: you definitely want "not" to be a reasonable
operation on sparse boolean arrays, which means you can't just store
the "True" values as nonzero entries. It's still doable, with some
sort of flag in the sparse object indicating whether the array as a
whole has been negated, but it's going to be a pain.

Anne

From ckkart at hoc.net Tue Apr 15 17:12:34 2008
From: ckkart at hoc.net (Christian K.)
Date: Tue, 15 Apr 2008 23:12:34 +0200
Subject: [SciPy-user] number of function evaluation for leastsq
In-Reply-To:
References: <4804F0C5.3000901@physik.tu-darmstadt.de>
Message-ID:

Pauli Virtanen wrote:
>
> @memoize_single
> def target_function(z):
>     print "Evaluating..."
>     z = sp.asarray(z)
>     return sum(z**2)

Sorry for this off-topic question, but can somebody explain to me what
that @... syntax means? I searched the Python manuals several times
without luck.

Thanks, Christian

From matthieu.brucher at gmail.com Tue Apr 15 17:19:09 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 15 Apr 2008 23:19:09 +0200
Subject: [SciPy-user] number of function evaluation for leastsq
In-Reply-To:
References: <4804F0C5.3000901@physik.tu-darmstadt.de>
Message-ID:

It's a decorator (introduced in Python 2.4). It is simply equivalent to:

target_function = memoize_single(target_function)

after the function definition.

Matthieu

2008/4/15, Christian K. :
>
> Pauli Virtanen wrote:
> >
> > @memoize_single
> > def target_function(z):
> >     print "Evaluating..."
> >     z = sp.asarray(z)
> >     return sum(z**2)
>
> Sorry for this off-topic question, but can somebody explain to me what
> that @... syntax means? I searched the Python manuals several times
> without luck.
>
> Thanks, Christian
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mforbes at physics.ubc.ca Tue Apr 15 17:21:24 2008
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Tue, 15 Apr 2008 14:21:24 -0700
Subject: [SciPy-user] number of function evaluation for leastsq
In-Reply-To:
References: <4804F0C5.3000901@physik.tu-darmstadt.de>
Message-ID: <79A9C06B-5691-4F27-9EC9-7755276E9586@physics.ubc.ca>

On 15 Apr 2008, at 2:12 PM, Christian K. wrote:
> Sorry for this off-topic question, but can somebody explain to me what
> that @... syntax means? I searched the Python manuals several times
> without luck.

Function decorator syntax.

http://www.python.org/dev/peps/pep-0318/
http://docs.python.org/ref/function.html#l2h-629

@memoize
def f(x)...

is equivalent to

def f(x)...
f = memoize(f)

Michael.

From wesmckinn at gmail.com Tue Apr 15 17:23:21 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Tue, 15 Apr 2008 17:23:21 -0400
Subject: [SciPy-user] SciPy equivalent of R's acf
Message-ID: <6c476c8a0804151423v1bd60597wc152cf4ba14b4a4c@mail.gmail.com>

I was looking for an alternative to using R's built-in acf to calculate
sample autocovariance matrices, since it can be quite cumbersome to call
through Python. Any suggestions on where to look? Also, has anyone
successfully used acf through RPy? I'm having some trouble with it.

Thanks,
Wes
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gael.varoquaux at normalesup.org Tue Apr 15 17:30:39 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Tue, 15 Apr 2008 23:30:39 +0200
Subject: [SciPy-user] SciPy equivalent of R's acf
In-Reply-To: <6c476c8a0804151423v1bd60597wc152cf4ba14b4a4c@mail.gmail.com>
References: <6c476c8a0804151423v1bd60597wc152cf4ba14b4a4c@mail.gmail.com>
Message-ID: <20080415213039.GA425@phare.normalesup.org>

On Tue, Apr 15, 2008 at 05:23:21PM -0400, Wes McKinney wrote:
> Also, has anyone
> successfully used acf through RPy? I'm having some trouble with it.

What kind of trouble? I have been successful with it.

Gaël

From wesmckinn at gmail.com Tue Apr 15 17:40:41 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Tue, 15 Apr 2008 17:40:41 -0400
Subject: [SciPy-user] SciPy equivalent of R's acf
In-Reply-To: <20080415213039.GA425@phare.normalesup.org>
References: <6c476c8a0804151423v1bd60597wc152cf4ba14b4a4c@mail.gmail.com> <20080415213039.GA425@phare.normalesup.org>
Message-ID: <6c476c8a0804151440v7fa397d1y7da449d3fc51d72c@mail.gmail.com>

I got it to work -- didn't realize you had to package your arguments in a
dict and provide **kwds in the R call, so it works now.

On Tue, Apr 15, 2008 at 5:30 PM, Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:

> On Tue, Apr 15, 2008 at 05:23:21PM -0400, Wes McKinney wrote:
> > Also, has anyone
> > successfully used acf through RPy? I'm having some trouble with it.
>
> What kind of trouble? I have been successful with it.
>
> Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gael.varoquaux at normalesup.org Tue Apr 15 17:40:08 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 15 Apr 2008 23:40:08 +0200 Subject: [SciPy-user] SciPy equivalent of R's acf In-Reply-To: <6c476c8a0804151440v7fa397d1y7da449d3fc51d72c@mail.gmail.com> References: <6c476c8a0804151423v1bd60597wc152cf4ba14b4a4c@mail.gmail.com> <20080415213039.GA425@phare.normalesup.org> <6c476c8a0804151440v7fa397d1y7da449d3fc51d72c@mail.gmail.com> Message-ID: <20080415214008.GB425@phare.normalesup.org> On Tue, Apr 15, 2008 at 05:40:41PM -0400, Wes McKinney wrote: > I got it to work-- didn't realize you had to package your arguments in a > dict and provide **kwds in the R call, so it works now. You can also pass them as keyword argument, IIRC. Ga?l From haase at msg.ucsf.edu Wed Apr 16 04:23:39 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 16 Apr 2008 10:23:39 +0200 Subject: [SciPy-user] ndimage - why is output_type deprecated ? Message-ID: Hi, I have an 3d - image in uint8 pixel format and would like to use ndimage.gaussian_filter(). The doctring says: """ .... The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision. """ Since I don't have an output array allocated a-priory, it would be convenient if I could specify that I want the output to be of dtype np.float32. So I thought about adding a "dtype" argument to the function definition in filters.py. However: I found that this function in turn calls _ni_support._get_output(output, input) and _ni_support._get_output(output, input) already had an optional `output_type` argument, which however is "deprecated" -- really it causes even a "raise RuntimeError" (so it's actually already beyond deprecated ....) Does anyone here know why that was taken out !? I would like to put it back (just calling it "dtype=None" -- None meaning "same as input", and ignored [maybe one should raise an exception !?] if output is specified) Thanks, Sebastian Haase From nmarais at sun.ac.za Wed Apr 16 09:04:50 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Wed, 16 Apr 2008 13:04:50 +0000 (UTC) Subject: [SciPy-user] generalized eigenvalue problem for sparse matrices References: Message-ID: Abihnav, On Tue, 15 Apr 2008 20:13:38 +0530, abhinav sarkar wrote: > Here I have converted the A*x = sigma*M*x to inv(A)*M*x = nu*I*x where > nu = 1/sigma. And replaced s.solve by my own Solver class. Now solutions > obtained are correct. I'm glad you found a method to make it work. Not sure why the obvious solution didn't work, but at least anyone else who needs to solve a similar problem can use your method for now :) Regards Neilen From Achim.Gaedke at physik.tu-darmstadt.de Wed Apr 16 09:32:32 2008 From: Achim.Gaedke at physik.tu-darmstadt.de (Achim Gaedke) Date: Wed, 16 Apr 2008 15:32:32 +0200 Subject: [SciPy-user] number of function evaluation for leastsq In-Reply-To: References: <4804F0C5.3000901@physik.tu-darmstadt.de> Message-ID: <4805FFF0.5070703@physik.tu-darmstadt.de> Anne Archibald wrote: > On 15/04/2008, Achim Gaedke wrote: > >> While observing the approximation process I found that the first 3 runs >> were always with the same parameters. First I thought, the parameter >> variation for gradient approximation is too tiny for a simple print >> command. Later I found out, that these three runs were independent of >> the number of fit parameters. 
>> >> A closer look to the code reveals the reason (svn dir trunk/scipy/optimize): >> >> 1st call is to check with python code wether the function is valid >> >> line 265 of minpack.py >> m = check_func(func,x0,args,n)[0] >> >> 2nd call is to get the right amount of memory for paramters. >> >> line 449 of __minpack.h >> ap_fvec = (PyArrayObject *)call_python_function(fcn, n, x, extra_args, >> 1, minpack_error); >> >> 3rd call is from inside the fortran algorithm (the essential one!) >> >> Unfortunately that behaviour is not described and I would eagerly demand >> to avoid the superficial calls to the function. >> > > This is annoying, and it should be fixed inside scipy if possible; the > FORTRAN code will make this more difficult, but file a bug on the > scipy Trac and we'll look at it. In the meantime, you can use > "memoizing" to avoid recomputing your function. See > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/498110 > for a bells-and-whistles implementation, but the basic idea is just > that you wrap your function in a wrapper that stores a dictionary > mapping inputs to outputs. Then every time you call the function it > checks whether the function has been called before with these values, > and if so, returns the value computed before. > I am testing different physics models whether they comply with the measurement. So I can not optimize each model before testing in detail, I could not even simplify the model, because I want to investigate the contribution of different effects. I am not really terrified by 1.5h for each data point. But I have the opportunity to compare the logs of each run. I've already written a stub to avoid unneceassary evaluation. So my intention was to notify people that the behaviour is not as expected. There are parameters for function evaluation limits. They might be wrong, because they count only fortran calls. Thanks for all the answers, Achim From will.woods at ynic.york.ac.uk Wed Apr 16 09:39:42 2008 From: will.woods at ynic.york.ac.uk (Will Woods) Date: Wed, 16 Apr 2008 14:39:42 +0100 Subject: [SciPy-user] impz Message-ID: <4806019E.4050109@ynic.york.ac.uk> Hi, Is there a direct equivalent of matlab's impz() function in scipy? In matlab I can do: [b,a] = butter(4,[1.5/678,15.0/678],'bandpass'); [h,t] = impz(b,a); plot(t,h) The scipy.signal.impulse function is the closest I can find, but b,a = scipy.signal.butter(4,[1.5/678,15.0/678],'bandpass') T,h = scipy.signal.impulse((b,a)) plot(T,h) doesn't give the same answer. Will From silva at lma.cnrs-mrs.fr Wed Apr 16 10:08:29 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 16 Apr 2008 16:08:29 +0200 Subject: [SciPy-user] number of function evaluation for leastsq In-Reply-To: <4805FFF0.5070703@physik.tu-darmstadt.de> References: <4804F0C5.3000901@physik.tu-darmstadt.de> <4805FFF0.5070703@physik.tu-darmstadt.de> Message-ID: <1208354910.3093.6.camel@Portable-s2m.cnrs-mrs.fr> Le mercredi 16 avril 2008 ? 15:32 +0200, Achim Gaedke a ?crit : > I am testing different physics models whether they comply with the > measurement. So I can not optimize each model before testing in detail, > I could not even simplify the model, because I want to investigate the > contribution of different effects. > I am not really terrified by 1.5h for each data point. But I have the > opportunity to compare the logs of each run. Which language do you use to compute each 1h30 evaluation ? python, C, fortran ? If python, it may be worth writing it in fortran and use f2py for example... 
-- Fabrice Silva From oliphant at enthought.com Wed Apr 16 10:21:58 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 16 Apr 2008 09:21:58 -0500 Subject: [SciPy-user] impz In-Reply-To: <4806019E.4050109@ynic.york.ac.uk> References: <4806019E.4050109@ynic.york.ac.uk> Message-ID: <48060B86.6070001@enthought.com> Will Woods wrote: > Hi, > > Is there a direct equivalent of matlab's impz() function in scipy? > In matlab I can do: > > [b,a] = butter(4,[1.5/678,15.0/678],'bandpass'); > [h,t] = impz(b,a); > plot(t,h) > > The scipy.signal.impulse function is the closest I can find, but > > b,a = scipy.signal.butter(4,[1.5/678,15.0/678],'bandpass') > T,h = scipy.signal.impulse((b,a)) > plot(T,h) > > doesn't give the same answer. > Right, the latter gives samples of the continuous-time impulse response. You are looking for the discrete-time impulse response. The function is not directly available. However, you can get it (minus the auto-compute N part), using N = 100 x = scipy.zeros(N) x[0] = 1 h = scipy.signal.lfilter(b,a, x) -Travis From bsouthey at gmail.com Wed Apr 16 10:40:15 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 16 Apr 2008 09:40:15 -0500 Subject: [SciPy-user] number of function evaluation for leastsq In-Reply-To: <1208354910.3093.6.camel@Portable-s2m.cnrs-mrs.fr> References: <4804F0C5.3000901@physik.tu-darmstadt.de> <4805FFF0.5070703@physik.tu-darmstadt.de> <1208354910.3093.6.camel@Portable-s2m.cnrs-mrs.fr> Message-ID: <48060FCF.3090103@gmail.com> Fabrice Silva wrote: > Le mercredi 16 avril 2008 ? 15:32 +0200, Achim Gaedke a ?crit : > >> I am testing different physics models whether they comply with the >> measurement. So I can not optimize each model before testing in detail, >> I could not even simplify the model, because I want to investigate the >> contribution of different effects. >> I am not really terrified by 1.5h for each data point. But I have the >> opportunity to compare the logs of each run. >> > > Which language do you use to compute each 1h30 evaluation ? python, C, > fortran ? If python, it may be worth writing it in fortran and use f2py > for example... > Hi, I would also add it would be clearer to know what you want to achieve. In part, one reason is that there may be a better way to achieve your goal as the model is extremely complex or the data used is inappropriate (very badly conditioned). If you are just changing parameters and assuming the model is reasonably well behaved, then you should use starting values close to the base model. If it is the data then you should do something to the data like normalize. Bruce From zachary.pincus at yale.edu Wed Apr 16 11:28:49 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 16 Apr 2008 11:28:49 -0400 Subject: [SciPy-user] ndimage - why is output_type deprecated ? In-Reply-To: References: Message-ID: <06E48D0A-39B3-482C-B244-1F032083D1F0@yale.edu> Hello, You want the 'output' parameter, which either takes a pre-allocated array, OR a dtype. This is a bit confusing, overloaded-API-wise, and there's also a bug in (at least) ndimage.zoom with pre-allocated arrays(*). But the functionality is there. Zach * http://scipy.org/scipy/scipy/ticket/643 On Apr 16, 2008, at 4:23 AM, Sebastian Haase wrote: > Hi, > > I have an 3d - image in uint8 pixel format and would like to use > ndimage.gaussian_filter(). > The doctring says: > """ .... The intermediate arrays are > stored in the same data type as the output. 
Therefore, for output > types with a limited precision, the results may be imprecise > because intermediate results may be stored with insufficient > precision. > """ > > Since I don't have an output array allocated a-priory, it would be > convenient if I could specify that I want the output to be of dtype > np.float32. > So I thought about adding a "dtype" argument to the function > definition in filters.py. > > However: I found that this function in turn calls > _ni_support._get_output(output, input) > and _ni_support._get_output(output, input) already had an optional > `output_type` argument, > which however is "deprecated" -- really it causes even a "raise > RuntimeError" (so it's actually already beyond deprecated ....) > > Does anyone here know why that was taken out !? > > I would like to put it back (just calling it "dtype=None" -- None > meaning "same as input", and ignored [maybe one should raise an > exception !?] if output is specified) > > Thanks, > Sebastian Haase > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From haase at msg.ucsf.edu Wed Apr 16 11:36:43 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 16 Apr 2008 17:36:43 +0200 Subject: [SciPy-user] ndimage - why is output_type deprecated ? In-Reply-To: <06E48D0A-39B3-482C-B244-1F032083D1F0@yale.edu> References: <06E48D0A-39B3-482C-B244-1F032083D1F0@yale.edu> Message-ID: On Wed, Apr 16, 2008 at 5:28 PM, Zachary Pincus wrote: > Hello, > > You want the 'output' parameter, which either takes a pre-allocated > array, OR a dtype. > > This is a bit confusing, overloaded-API-wise, and there's also a bug > in (at least) ndimage.zoom with pre-allocated arrays(*). But the > functionality is there. > > Zach > Hi Zach, thanks for the info. But I have to say, that this is a really non-intuitive overloading. I would suggest, to use a separate keyword argument like "dtype" instead. "dtype" is used for this already in many other functions like np.array, np.zeros, np.empty, ..... I would have never guessed -- the doc strings didn't say either .... Thanks again, Sebastian Haase From zachary.pincus at yale.edu Wed Apr 16 13:58:09 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 16 Apr 2008 13:58:09 -0400 Subject: [SciPy-user] ndimage - why is output_type deprecated ? In-Reply-To: References: <06E48D0A-39B3-482C-B244-1F032083D1F0@yale.edu> Message-ID: > thanks for the info. > But I have to say, that this is a really non-intuitive overloading. > > I would suggest, to use a separate keyword argument like > "dtype" instead. > "dtype" is used for this already in many other functions like > np.array, np.zeros, np.empty, ..... > I would have never guessed -- the doc strings didn't say either .... A ton about ndimage is under-documented, confusing, or slightly broken. (Nevertheless, it remains really useful.) Ndimage could really use a good API and documentation cleaning, as well as some attention from someone who understands the spline-interpolation code internal to the interpolators (and can thus fix some of the open bugs on that). Unfortunately, that person isn't me... Maybe we can start by filing various bugs on it, and then organize folks who use ndimage to have a doc-day sort of thing. As to adding a 'dtype' parameter, I guess that the output_type parameter has been deprecated long enough that it could be removed, and 'dtype' put in its place, and then the overloaded use of 'output' be itself deprecated. 
Hopefully someone more familiar with the API- breakage/deprecation policy could chime in here. Zach From contact at pythonxy.com Wed Apr 16 15:26:06 2008 From: contact at pythonxy.com (Python(x,y)) Date: Wed, 16 Apr 2008 21:26:06 +0200 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 Message-ID: <480652CE.20800@pythonxy.com> Hi all, Python(x,y) 1.1.0 is now available on http://www.pythonxy.com. Python(x,y) is a free Python/Eclipse/Qt distribution providing a complete scientific development environment. Main new features (take a look at the new screenshots on the website) : 3D visualization (with Mayavi2), Enthought Tool Suite and new Pydev version. Changes history : * Added: o Pydev 1.3.15 - New interactive console ! (code completion, history management, auto-import, send selected code to console, ...) o Enthought Tool Suite 2.7.0 (including MayaVi 2, the powerful 2D and 3D scientific visualization tool) Special thanks to Ga?l Varoquaux for helping us integrating ETS in /Python(x,y)/ and testing Mayavi 2 o VTK 5.0 o Cython 0.9.6.13.1 - Cython is a language that makes writing C extensions for the Python language as easy as Python itself o GDAL 1.5.0 - Geospatial Data Abstraction Library o Windows installer now supports the .egg packages o SetupTools 0.6c8 * Corrected: o Uninstall: PyParallel and PySerial were not removed * Updated: o Python(x,y) documentation -- P. Raybaut Python(x,y) http://www.pythonxy.com From warren.weckesser at gmail.com Wed Apr 16 16:35:56 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 16 Apr 2008 16:35:56 -0400 Subject: [SciPy-user] Testing build before installing (numpy and scipy) Message-ID: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> I am have a Mac running OSX 10.4. My current installation of numpy and scipy is broken, so I want to clean up and start from scratch. I just installed Python 2.5.2 using a binary from the python web site. I have several questions: 1. numpy's setup.py doesn't have an "uninstall" command. Is that normal? What is the standard way to remove something that I installed using setup.py? I want to clean out my Python2.4 broken installations. 2. Having just installed python 2.5.2, I ran "setup.py build" in the numpy-1.0.4 directory. It generated lots of output, but I don't know if everything built correctly. There are some error messages about _configtest.c having errors. Can I test the build without first installing it? 3. The instructions at http://www.scipy.org/Installing_SciPy/Mac_OS_Xsuggest that the command "export MACOSX_DEPLOYMENT_TARGET=10.4" be given before building scipy for OSX 10.4. It is not clear from those instructions if that macro is also used when building numpy. I ran the build command twice, once before defining the variable and once after defining it, and it does change how numpy is built--well, it changes the names of some directories, anyway. Is this macro also supposed to be defined when building numpy? Thanks, Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From turian at gmail.com Wed Apr 16 17:02:12 2008 From: turian at gmail.com (Joseph Turian) Date: Wed, 16 Apr 2008 17:01:12 -0401 Subject: [SciPy-user] __eq__ for scipy.sparse not working? 
In-Reply-To: References: <4dacb2560804141445l5966a597g9f0327a1de220dbc@mail.gmail.com> <4dacb2560804151037g323c65e9qb232cffce34bbfe@mail.gmail.com> Message-ID: <4dacb2560804161402u474e4585s58cf02900ece0c9b@mail.gmail.com> My preference is that if something does not work as expected, then throw an exception. This is better than silently doing the wrong thing, plus it doesn't take much time to code :^) Best, Joseph On Tue, Apr 15, 2008 at 4:27 PM, Anne Archibald wrote: > On 15/04/2008, Nathan Bell wrote: > > On Tue, Apr 15, 2008 at 12:37 PM, Joseph Turian > wrote: > > > Is there a reason that sparse matrices don't support this > functionality? > > > What is actually happening when I try equality testing for A == B? > > > It seems undesirable that equality comparison is permitted, even > though it > > > has unexpected behavior. > > > > I agree that __eq__ should either work or raise an exception. As to > > why __eq__ isn't supported, I haven't written the necessary code to > > handle arrays with dtype='bool'. > > > > Offhand, I don't know what specifically needs to be changed to make > > sparse matrices agree with numpy's handling of boolean arrays. Many > > of the necessary ingredients are already present, but I have not fully > > explored this matter. > > > > I created a ticket in Trac for this issue: > > http://scipy.org/scipy/scipy/ticket/639 > > > > Unfortunately, time is scarce for me at the moment, so I can't say > > when I'll get around to it. > > This is actually tricky: you definitely want "not" to be a reasonable > operation on sparse boolean arrays, which means you can't just store > the "True" values as nonzero entries. It's still doable, with some > sort of flag in the sparse object indicating whether the array as a > whole has been negated, but it's going to be a pain. > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Academic: http://www-etud.iro.umontreal.ca/~turian/ Business: http://www.metaoptimize.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Wed Apr 16 17:07:48 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 16 Apr 2008 17:07:48 -0400 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 References: <480652CE.20800@pythonxy.com> Message-ID: Python(x,y) is restricted to only use for non-profit? From ondrej at certik.cz Wed Apr 16 17:24:10 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Wed, 16 Apr 2008 23:24:10 +0200 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: References: <480652CE.20800@pythonxy.com> Message-ID: <85b5c3130804161424h141d454xd9b39bfd8f8033a6@mail.gmail.com> On Wed, Apr 16, 2008 at 11:07 PM, Neal Becker wrote: > Python(x,y) is restricted to only use for non-profit? Seems like that according to: http://pythonxy.com/license.php Just a couple days ago there used to be an open source license. Well, that reduces me interest a lot, as I am not really motivated to contribute to a non-opensource solution. 
From stefan at sun.ac.za Wed Apr 16 17:27:24 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 16 Apr 2008 23:27:24 +0200 Subject: [SciPy-user] Testing build before installing (numpy and scipy) In-Reply-To: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> References: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> Message-ID: <9457e7c80804161427q5ab8805cj4f7ac27ebe0c213f@mail.gmail.com> Hi Warren On 16/04/2008, Warren Weckesser wrote: > 1. numpy's setup.py doesn't have an "uninstall" command. Is that normal? > What is the standard way to remove something that I installed using > setup.py? I want to clean out my Python2.4 broken installations. Not as far as I know -- I always erase the directories by hand. > 2. Having just installed python 2.5.2, I ran "setup.py build" in the > numpy-1.0.4 directory. It generated lots of output, but I don't know if > everything built correctly. There are some error messages about > _configtest.c having errors. Can I test the build without first installing > it? python setup.py install --prefix=${HOME}/test_install Then export PYTHONPATH=${HOME}/test_install/lib/python2.5/site-packages:${PYTHONPATH} Now you can run Python, import numpy, and execute the test suite. > 3. The instructions at > http://www.scipy.org/Installing_SciPy/Mac_OS_X suggest that > the command "export MACOSX_DEPLOYMENT_TARGET=10.4" be given before building > scipy for OSX 10.4. It is not clear from those instructions if that macro > is also used when building numpy. I ran the build command twice, once > before defining the variable and once after defining it, and it does change > how numpy is built--well, it changes the names of some directories, anyway. > Is this macro also supposed to be defined when building numpy? I don't export that variable on my system, but I have very little experience with building on OSX. Cheers Stéfan
From matthieu.brucher at gmail.com Wed Apr 16 17:29:42 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 16 Apr 2008 23:29:42 +0200 Subject: [SciPy-user] Testing build before installing (numpy and scipy) In-Reply-To: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> References: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> Message-ID: 2008/4/16, Warren Weckesser : > > I am have a Mac running OSX 10.4. My current installation of numpy and > scipy is broken, so I want to clean up and start from scratch. I just > installed Python 2.5.2 using a binary from the python web site. I have > several questions: > > 1. numpy's setup.py doesn't have an "uninstall" command. Is that > normal? What is the standard way to remove something that I installed using > setup.py? I want to clean out my Python2.4 broken installations. > Hi, This is not available with distutils or setuptools, unfortunately. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL:
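[Editor's note: a quick check that the prefix install described by Stéfan above is the copy actually being imported, before trusting the test results. A sketch; the prefix path is the one used in his instructions.]

    import numpy
    print(numpy.__version__)
    print(numpy.__file__)   # should point under ~/test_install, not the system site-packages
    numpy.test()            # exercise the freshly built package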
From dwf at cs.toronto.edu Wed Apr 16 17:55:02 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Wed, 16 Apr 2008 17:55:02 -0400 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <85b5c3130804161424h141d454xd9b39bfd8f8033a6@mail.gmail.com> References: <480652CE.20800@pythonxy.com> <85b5c3130804161424h141d454xd9b39bfd8f8033a6@mail.gmail.com> Message-ID: <1850A7AF-236A-400B-B12B-2C7E61EDF6C9@cs.toronto.edu> On 16-Apr-08, at 5:24 PM, Ondrej Certik wrote: > On Wed, Apr 16, 2008 at 11:07 PM, Neal Becker > wrote: >> Python(x,y) is restricted to only use for non-profit? > > Seems like that according to: > > http://pythonxy.com/license.php > > Just a couple days ago there used to be an open source license. Well, > that reduces me interest a lot, as I am not really motivated to > contribute to a non-opensource solution. Hmm... I read that license, quite puzzling. IANAL, but each of the included packages in Python(x,y) has its own license, and most of the major ones are licensed under the BSD license or something similar, which permits commercial use. I'm not sure that a license on a software collection can regulate use of its more liberally licensed components. At the very least it would be hard to enforce. So, maybe what's restricted is the use of the Python(x,y) installer by a commercial entity? I really have no idea. Also, I think there may be some murky water concerning distributing GPL'd packages under this license (I notice PyQt is distributed under the GPL). Again, no law degree here, but it seems like it might be a problem. David
From robert.kern at gmail.com Wed Apr 16 18:02:16 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Apr 2008 17:02:16 -0500 Subject: [SciPy-user] Testing build before installing (numpy and scipy) In-Reply-To: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> References: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> Message-ID: <3d375d730804161502i4073b9b5g2d4208ccafbda35@mail.gmail.com> On Wed, Apr 16, 2008 at 3:35 PM, Warren Weckesser wrote: > I am have a Mac running OSX 10.4. My current installation of numpy and > scipy is broken, so I want to clean up and start from scratch. I just > installed Python 2.5.2 using a binary from the python web site. I have > several questions: > > 1. numpy's setup.py doesn't have an "uninstall" command. Is that normal? Yes. > What is the standard way to remove something that I installed using > setup.py? I want to clean out my Python2.4 broken installations. Delete the numpy/ directory in site-packages/ and delete the f2py script wherever it got installed to. It is probably in /Library/Frameworks/Python.framework/Versions/2.4/bin/. > 2. Having just installed python 2.5.2, I ran "setup.py build" in the > numpy-1.0.4 directory. It generated lots of output, but I don't know if > everything built correctly. There are some error messages about > _configtest.c having errors. Don't worry about these. In order to figure out if your system supports certain features, we try to compile and execute a number of small C programs. If the compilation fails, then your system doesn't support that feature; that's fine, we just make the appropriate configuration setting. > 3. The instructions at http://www.scipy.org/Installing_SciPy/Mac_OS_X > suggest that the command "export MACOSX_DEPLOYMENT_TARGET=10.4" be given > before building scipy for OSX 10.4. It is not clear from those instructions > if that macro is also used when building numpy.
I ran the build command > twice, once before defining the variable and once after defining it, and it > does change how numpy is built--well, it changes the names of some > directories, anyway. Is this macro also supposed to be defined when > building numpy? Don't bother for either numpy or scipy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
From warren.weckesser at gmail.com Wed Apr 16 18:11:51 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 16 Apr 2008 18:11:51 -0400 Subject: [SciPy-user] Testing build before installing (numpy and scipy) In-Reply-To: <9457e7c80804161427q5ab8805cj4f7ac27ebe0c213f@mail.gmail.com> References: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> <9457e7c80804161427q5ab8805cj4f7ac27ebe0c213f@mail.gmail.com> Message-ID: <114880320804161511h62f4f0f2q47e12849fb043551@mail.gmail.com> On Wed, Apr 16, 2008 at 5:27 PM, Stéfan van der Walt wrote: > Hi Warren > > On 16/04/2008, Warren Weckesser wrote: > > 1. numpy's setup.py doesn't have an "uninstall" command. Is that > normal? > > What is the standard way to remove something that I installed using > > setup.py? I want to clean out my Python2.4 broken installations. > > Not as far as I know -- I always erase the directories by hand. > Seems like a significant feature to be missing. > > 2. Having just installed python 2.5.2, I ran "setup.py build" in the > > numpy-1.0.4 directory. It generated lots of output, but I don't know if > > everything built correctly. There are some error messages about > > _configtest.c having errors. Can I test the build without first > installing > > it? > > python setup.py install --prefix=${HOME}/test_install > > Then > > export > PYTHONPATH=${HOME}/test_install/lib/python2.5/site/packages:${PYTHONPATH} > > Now you can run Python, import numpy, and execute the test suite. > Thanks! Works great. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL:
From robert.kern at gmail.com Wed Apr 16 18:14:28 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Apr 2008 17:14:28 -0500 Subject: [SciPy-user] Testing build before installing (numpy and scipy) In-Reply-To: <114880320804161511h62f4f0f2q47e12849fb043551@mail.gmail.com> References: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> <9457e7c80804161427q5ab8805cj4f7ac27ebe0c213f@mail.gmail.com> <114880320804161511h62f4f0f2q47e12849fb043551@mail.gmail.com> Message-ID: <3d375d730804161514q19145855jfb4bc2f776c025dc@mail.gmail.com> On Wed, Apr 16, 2008 at 5:11 PM, Warren Weckesser wrote: > > On Wed, Apr 16, 2008 at 5:27 PM, Stéfan van der Walt > wrote: > > Hi Warren > > > > > > On 16/04/2008, Warren Weckesser wrote: > > > 1. numpy's setup.py doesn't have an "uninstall" command. Is that > normal? > > > What is the standard way to remove something that I installed using > > > setup.py? I want to clean out my Python2.4 broken installations. > > > > Not as far as I know -- I always erase the directories by hand. > > Seems like a significant feature to be missing. True, but it's not really up to us, but rather distutils. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
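[Editor's note: since there is no "setup.py uninstall", here is a small helper for the erase-by-hand approach suggested above; it prints where each package actually lives so the right directory gets deleted. A sketch only.]

    import os

    for name in ('numpy', 'scipy'):
        try:
            mod = __import__(name)
            print('%s -> %s' % (name, os.path.dirname(mod.__file__)))
        except ImportError:
            print('%s is not importable' % name)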
From michael.abshoff at googlemail.com Wed Apr 16 17:53:21 2008 From: michael.abshoff at googlemail.com (Michael.Abshoff) Date: Wed, 16 Apr 2008 23:53:21 +0200 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <1850A7AF-236A-400B-B12B-2C7E61EDF6C9@cs.toronto.edu> References: <480652CE.20800@pythonxy.com> <85b5c3130804161424h141d454xd9b39bfd8f8033a6@mail.gmail.com> <1850A7AF-236A-400B-B12B-2C7E61EDF6C9@cs.toronto.edu> Message-ID: <48067551.1040503@gmail.com> David Warde-Farley wrote: Hi, > On 16-Apr-08, at 5:24 PM, Ondrej Certik wrote: >> On Wed, Apr 16, 2008 at 11:07 PM, Neal Becker >> wrote: >>> Python(x,y) is restricted to only use for non-profit? >> Seems like that according to: >> >> http://pythonxy.com/license.php >> >> Just a couple days ago there used to be an open source license. Well, >> that reduces me interest a lot, as I am not really motivated to >> contribute to a non-opensource solution. > > Hmm... I read that license, quite puzzling. That puts it nicely. The announcement email states that Python(x,y) 1.1.0 is now available on http://www.pythonxy.com. Python(x,y) is a free Python/Eclipse/Qt distribution providing a complete scientific development environment. and around here free is generally meant in either the BSD or GPL way. The "Non-profit OSL 3.0" license might be a free software license by definition, but as Ondrej stated it reduces my interest to zero once I saw the license. > IANAL, but each of the included packages in Python(x,y) have their > own licenses, and most of the major ones are licensed under the BSD > license or something similar, which permit commercial use. I'm not > sure that a license on a software collection can regulate use of its > more liberally licensed components. At the very least it would be > hard to enforce. So, maybe what's restricted is the use of the Python > (x,y) installer by a commercial entity? I really have no idea. > > Also, I think there may be some murky water concerning distributing > GPL'd packages under this license (I notice PyQt is distributed under > the GPL). Again, no law degree here, but it seems like it might be a > problem. IANAL either, but the license states Python(x,y) software collection is licensed under the terms of the following license: and I read that as saying that the license covers all its components. It isn't only PyQt, but also components of MinGW like the compiler and so on that are under the GPL. So this seems more than fishy. > David Cheers, Michael > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >
From robert.kern at gmail.com Wed Apr 16 18:58:05 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Apr 2008 17:58:05 -0500 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <1850A7AF-236A-400B-B12B-2C7E61EDF6C9@cs.toronto.edu> References: <480652CE.20800@pythonxy.com> <85b5c3130804161424h141d454xd9b39bfd8f8033a6@mail.gmail.com> <1850A7AF-236A-400B-B12B-2C7E61EDF6C9@cs.toronto.edu> Message-ID: <3d375d730804161558g222e6d1ewcabb14049b28a0d0@mail.gmail.com> On Wed, Apr 16, 2008 at 4:55 PM, David Warde-Farley wrote: > On 16-Apr-08, at 5:24 PM, Ondrej Certik wrote: > > On Wed, Apr 16, 2008 at 11:07 PM, Neal Becker > > wrote: > >> Python(x,y) is restricted to only use for non-profit? > > > > Seems like that according to: > > > > http://pythonxy.com/license.php > > > > Just a couple days ago there used to be an open source license.
Well, > > that reduces me interest a lot, as I am not really motivated to > > contribute to a non-opensource solution. > > Hmm... I read that license, quite puzzling. > > IANAL, but each of the included packages in Python(x,y) have their > own licenses, and most of the major ones are licensed under the BSD > license or something similar, which permit commercial use. I'm not > sure that a license on a software collection can regulate use of its > more liberally licensed components. At the very least it would be > hard to enforce. So, maybe what's restricted is the use of the Python > (x,y) installer by a commercial entity? I really have no idea. > > Also, I think there may be some murky water concerning distributing > GPL'd packages under this license (I notice PyQt is distributed under > the GPL). Again, no law degree here, but it seems like it might be a > problem. IANAL. TINLA. At least in the US, collections of other copyrighted works can have their own copyright and licensing terms. Each of the individual works also has an independent copyright and potentially a license. The application of the Non-Profit OSL to the collection does not imply that each of the components falls under that license, just the collection. A clarification on the Python(x,y) page would be in order. The Python(x,y) license cannot place additional restrictions on the redistribution of the particular GPLed code. It can place restrictions on the collected work, but if one were to extract the GPLed code from the collection, one should be able to only deal with the GPL and not the license of the collection. Or rather, if the license of the collection attempts to forbid that, then Python(x,y) cannot legally include the GPLed components as part of the collection. For example, the official OpenBSD CDs are copyrighted and have restricted distribution. However, the CDs include GPLed packages like gcc. http://www.openbsd.org/faq/faq3.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From warren.weckesser at gmail.com Wed Apr 16 19:44:07 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 16 Apr 2008 19:44:07 -0400 Subject: [SciPy-user] Testing build before installing (numpy and scipy) In-Reply-To: <3d375d730804161502i4073b9b5g2d4208ccafbda35@mail.gmail.com> References: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> <3d375d730804161502i4073b9b5g2d4208ccafbda35@mail.gmail.com> Message-ID: <114880320804161644n56ef7240j5be0323d4778263c@mail.gmail.com> On Wed, Apr 16, 2008 at 6:02 PM, Robert Kern wrote: > On Wed, Apr 16, 2008 at 3:35 PM, Warren Weckesser > wrote: > > I am have a Mac running OSX 10.4. > > > > 3. The instructions at http://www.scipy.org/Installing_SciPy/Mac_OS_X > > suggest that the command "export MACOSX_DEPLOYMENT_TARGET=10.4" be > given > > before building scipy for OSX 10.4. It is not clear from those > instructions > > if that macro is also used when building numpy. I ran the build command > > twice, once before defining the variable and once after defining it, and > it > > does change how numpy is built--well, it changes the names of some > > directories, anyway. Is this macro also supposed to be defined when > > building numpy? > > Don't bother for either numpy or scipy. > This really doesn't matter? Changing the variable changes the names of the directories that are created in the build subdirectory. 
Does it also affect how the build process finds Frameworks? I have Frameworks for 10.3.9 and 10.4 (this computer was upgraded from 10.3.9 to 10.4). I built numpy with MACOSX_DEPLOYMENT_TARGET=10.4 and it passed all the tests. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Apr 16 19:34:16 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 17 Apr 2008 08:34:16 +0900 Subject: [SciPy-user] Testing build before installing (numpy and scipy) In-Reply-To: <9457e7c80804161427q5ab8805cj4f7ac27ebe0c213f@mail.gmail.com> References: <114880320804161335k2983eb69u23eac7cbae897f2a@mail.gmail.com> <9457e7c80804161427q5ab8805cj4f7ac27ebe0c213f@mail.gmail.com> Message-ID: <48068CF8.2090102@ar.media.kyoto-u.ac.jp> St?fan van der Walt wrote: > > I don't export that variable on my system, but I have very little > experience with building on OSX. > Distutils is supposed to take care of it in recent pythons, I believe. In numscons, I had problems because this was not defined. cheers, David From warren.weckesser at gmail.com Wed Apr 16 20:10:58 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 16 Apr 2008 20:10:58 -0400 Subject: [SciPy-user] Warning about UMFPACK while build scipy-0.6.0 on Mac OSX PPC Message-ID: <114880320804161710p603f5150l7de9c38de7671f5d@mail.gmail.com> Hello again, I am trying to build scipy-0.6.0 on a Mac running OSX 10.4 (PPC). I have python 2.5.2 installed; gcc is Apple's 4.0.1, and gfortran is version 4.2.1. I am following the instructions here: http://www.scipy.org/Installing_SciPy/Mac_OS_X When I run this command: python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build > build.out the following message is printed to stderr: /Users/wweckesser/test_install/lib/python2.5/site-packages/numpy/distutils/system_info.py:414: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) I thought I had numpy installed correctly :( numpy.test() reported no errors. Am I supposed to install UMFPACK before numpy? Do I need UMFPACK? I don't know what to make of that warning. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Apr 16 20:07:26 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 17 Apr 2008 09:07:26 +0900 Subject: [SciPy-user] Warning about UMFPACK while build scipy-0.6.0 on Mac OSX PPC In-Reply-To: <114880320804161710p603f5150l7de9c38de7671f5d@mail.gmail.com> References: <114880320804161710p603f5150l7de9c38de7671f5d@mail.gmail.com> Message-ID: <480694BE.9070805@ar.media.kyoto-u.ac.jp> Warren Weckesser wrote: > > /Users/wweckesser/test_install/lib/python2.5/site-packages/numpy/distutils/system_info.py:414: > UserWarning: > UMFPACK sparse solver > (http://www.cise.ufl.edu/research/sparse/umfpack/) > not found. Directories to search for the libraries can be > specified in the > numpy/distutils/site.cfg file (section [umfpack]) or by setting > the UMFPACK environment variable. > warnings.warn(self.notfounderror.__doc__) > > > I thought I had numpy installed correctly :( > numpy.test() reported no errors. > > Am I supposed to install UMFPACK before numpy? No, you don't need umfpack, it is optional. 
cheers, David
From warren.weckesser at gmail.com Wed Apr 16 20:25:59 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 16 Apr 2008 20:25:59 -0400 Subject: [SciPy-user] Warning about UMFPACK while build scipy-0.6.0 on Mac OSX PPC In-Reply-To: <480694BE.9070805@ar.media.kyoto-u.ac.jp> References: <114880320804161710p603f5150l7de9c38de7671f5d@mail.gmail.com> <480694BE.9070805@ar.media.kyoto-u.ac.jp> Message-ID: <114880320804161725p12b5a9e2qdda5e6f022defac2@mail.gmail.com> On Wed, Apr 16, 2008 at 8:07 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > > > Am I supposed to install UMFPACK before numpy? > > No, you don't need umfpack, it is optional. > > OK, I'll ignore it. Thanks. WW -------------- next part -------------- An HTML attachment was scrubbed... URL:
From intel.g33k at gmail.com Wed Apr 16 20:44:53 2008 From: intel.g33k at gmail.com (Kacey A.) Date: Wed, 16 Apr 2008 20:44:53 -0400 Subject: [SciPy-user] Issues with TeX symbols and font changes Message-ID: <76346d4d0804161744w22af198dn5b2a5ba55d627350@mail.gmail.com> Hello all, I've spent more time than I care to share trying to remedy this, but it just doesn't seem to be working... So long story short, I'm trying to insert a special symbol in my axis label, so I type the following: [1] xlabel = (r"Wavelength ($\AA$)") ...in order to receive the symbol for angstroms (an "A" with a circle above it). Problem is... whereas the rest of my plot is in whatever the default Scipy plot font is (Tahoma, perhaps?), the stubborn angstroms symbol is in Times New Roman-esque font, which is bothering me to no end. I've attempted just changing the font of the entire plot by using [2] font = {"fontname":"Times New Roman"} ... [3] xlabel = (ur"Wavelength ($\AA$)", **font) ...but unfortunately that does nothing to affect the font of the axis tickmark labels (i.e. the numbers along the axes) -- so while my axis labels (excluding the angstrom symbol) and plot text (i.e. text(x,y,string)) might be in the font set via command [2], the axis numbering will *still* be in the default (i.e. Tahoma). For what it's worth, I'm running OS X (10.4.11) and Python 2.5.1, although I'm not 100% certain of what version of Scipy and Numpy I have installed... (I *think* I'm running Scipy 0.3.2). I'm also using TextMate to type and run my scripts. Thanks so much for any help in advance! I really love SciPy, but if there's really no feasible workaround for this... I might just have to use a different package altogether. Thanks again. -------------- next part -------------- An HTML attachment was scrubbed... URL:
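[Editor's note: one workaround for the font mismatch described above is to avoid mathtext entirely, put the literal unicode character in the label, and set the font family once through rc so the tick labels follow suit. A hedged sketch against the matplotlib of that era; it assumes the configured font actually contains the Å glyph.]

    import matplotlib
    matplotlib.rc('font', family='serif')   # also applies to the tick labels

    import pylab
    pylab.plot(range(10))
    pylab.xlabel(u'Wavelength (\u00c5)')    # u'\u00c5' is the Angstrom sign
    pylab.show()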
From david at ar.media.kyoto-u.ac.jp Wed Apr 16 20:41:53 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 17 Apr 2008 09:41:53 +0900 Subject: [SciPy-user] Issues with TeX symbols and font changes In-Reply-To: <76346d4d0804161744w22af198dn5b2a5ba55d627350@mail.gmail.com> References: <76346d4d0804161744w22af198dn5b2a5ba55d627350@mail.gmail.com> Message-ID: <48069CD1.2080706@ar.media.kyoto-u.ac.jp> Kacey A. wrote: > Hello all, > > I've spent more time than I care to share trying to remedy this, but > it just doesn't seem to be working... So long story short, I'm trying > to insert a special symbol in my axis label, so I type the following: > > [1] xlabel = (r"Wavelength ($\AA$)") > > ...in order to receive the symbol for angstroms (an "A" with a circle > above it). Problem is... whereas the rest of my plot is in whatever > the default Scipy plot font is (Tahoma, perhaps?), the stubborn > angstroms symbol is in Times New Roman-esque font, which is bothering > me to no end. I've attempted just changing the font of the entire font > by using > > [2] font = {"fontname":"Times New Roman"} > ... > [3] xlabel = (ur"Wavelength ($\AA$)", **font) > > ...but unfortunately that does nothing to affect the font of the axis > tickmark labels (i.e. the numbers along the axes) -- so while my axis > labels (excluding the angstrom symbol) and plot text (i.e. > text(x,y,string)) might be in the font set via command [2], the axis > numbering will *still* be in the default (i.e. Tahoma). I am not sure which package you are using for plotting, but in any case, this does not seem to be linked to scipy. Maybe you will have more luck on the matplotlib mailing list. > > For what help it's worth, I'm running OS X (10.4.11) and Python 2.5.1, > although I'm not 100% certain of what version of Scipy and Numpy I > have installed... (I *think* I'm running Scipy 0.3.2). I'm also using > TextMate to type and run my scripts. python -c "import scipy; print scipy.version.version; import numpy; print numpy.version.version" will give you this information, BTW. cheers, David
From warren.weckesser at gmail.com Wed Apr 16 21:24:42 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 16 Apr 2008 21:24:42 -0400 Subject: [SciPy-user] scipy.test() failure Message-ID: <114880320804161824t62016e89x8f79a0c9478368ef@mail.gmail.com> Hello once again, A reminder of the facts: Mac OSX 10.4 PPC, gcc 4.0.1 (Apple), gfortran 4.2.1, numpy-1.0.4 installed. I am installing scipy-0.6.0. I built scipy-0.6.0, and I just ran scipy.test(). I get many warnings and two failures. A transcript of running scipy.test(1,10) is attached. Here are the failures: ====================================================================== FAIL: check loadmat case sparse ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 80, in _check_case self._check_level(k_label, expected, matdict[k]) File "/Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 63, in _check_level decimal = 5) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal test sparse; file /Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/io/tests/./data/testsparse_6.5.1_GLNX86.mat, variable testsparse (mismatch 46.6666666667%) x: array([[ 3.03865194e-319, 3.16202013e-322, 1.04346664e-320, 2.05531309e-320, 2.56123631e-320], [ 3.16202013e-322, 0.00000000e+000, 0.00000000e+000,...
y: array([[ 1., 2., 3., 4., 5.], [ 2., 0., 0., 0., 0.], [ 3., 0., 0., 0., 0.]]) ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9987702369689941+4.6526445226967791e-37j) DESIRED: (-9+2j) ---------------------------------------------------------------------- I found this: http://projects.scipy.org/scipy/scipy/ticket/238 The second failure that I get looks like the same one in that ticket. Any help is appreciated. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test.out.gz Type: application/x-gzip Size: 13183 bytes Desc: not available URL: From intel.g33k at gmail.com Wed Apr 16 21:57:07 2008 From: intel.g33k at gmail.com (Kacey A.) Date: Wed, 16 Apr 2008 21:57:07 -0400 Subject: [SciPy-user] Issues with TeX symbols and font changes Message-ID: <76346d4d0804161857ocf03309rf5f4da5e9b576185@mail.gmail.com> On Wed, Apr 16, 2008 at 9:24 PM, wrote: > Message: 5 > Date: Thu, 17 Apr 2008 09:41:53 +0900 > From: David Cournapeau > Subject: Re: [SciPy-user] Issues with TeX symbols and font changes > To: SciPy Users List > Message-ID: <48069CD1.2080706 at ar.media.kyoto-u.ac.jp> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > Kacey A. wrote: > > Hello all, > > > > I've spent more time than I care to share trying to remedy this, but > > it just doesn't seem to be working... So long story short, I'm trying > > to insert a special symbol in my axis label, so I type the following: > > > > [1] xlabel = (r"Wavelength ($\AA$)") > > > > ...in order to receive the symbol for angstroms (an "A" with a circle > > above it). Problem is... whereas the rest of my plot is in whatever > > the default Scipy plot font is (Tahoma, perhaps?), the stubborn > > angstroms symbol is in Times New Roman-esque font, which is bothering > > me to no end. I've attempted just changing the font of the entire font > > by using > > > > [2] font = {"fontname":"Times New Roman"} > > ... > > [3] xlabel = (ur"Wavelength ($\AA$)", **font) > > > > ...but unfortunately that does nothing to affect the font of the axis > > tickmark labels (i.e. the numbers along the axes) -- so while my axis > > labels (excluding the angstrom symbol) and plot text (i.e. > > text(x,y,string)) might be in the font set via command [2], the axis > > numbering will *still* be in the default (i.e. Tahoma). > > I am not sure which package you are using for plotting, but in any case, > this does not seem to be linked to scipy. Maybe you will have more luck > on the maplotlib mailing list. > > > > For what help it's worth, I'm running OS X (10.4.11) and Python 2.5.1, > > although I'm not 100% certain of what version of Scipy and Numpy I > > have installed... (I *think* I'm running Scipy 0.3.2). I'm also using > > TextMate to type and run my scripts. 
> > python -c "import scipy; scipy.version.version; import numpy; > numpy.version.version" > > will give you this information, BTW. > > cheers, > > David > Thanks so much for those tips. I'll definitely ask the matplotlib mailing list for tips. My apologies for spamming the wrong list -- I assumed it was linked with SciPy but clearly I have much to learn about which package is doing what :) Also, thanks for that line on how to find version information, that'll definitely be useful in the future. Cheers, Kacey -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Apr 16 22:19:11 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Apr 2008 21:19:11 -0500 Subject: [SciPy-user] scipy.test() failure In-Reply-To: <114880320804161824t62016e89x8f79a0c9478368ef@mail.gmail.com> References: <114880320804161824t62016e89x8f79a0c9478368ef@mail.gmail.com> Message-ID: <3d375d730804161919h1646f613g20580648d5706c44@mail.gmail.com> On Wed, Apr 16, 2008 at 8:24 PM, Warren Weckesser wrote: > Hello once again, > > A reminder of the facts: Mac OSX 10.4 PPC, gcc 4.0.1 (Apple), gfortran > 4.2.1, > numpy-1.0.4 installed. I am installing scipy-0.6.0. > > I built scipy-0.6.0, and I just ran scipy.test(). I get many warnings and > two failures. A transcript of running scipy.test(1,10) is attached. > > Here are the failures: > > > ====================================================================== > FAIL: check loadmat case sparse > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 85, in cc > self._check_case(name, files, expected) > File > "/Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 80, in _check_case > self._check_level(k_label, expected, matdict[k]) > File > "/Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", > line 63, in _check_level > decimal = 5) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 232, in assert_array_almost_ > equal > header='Arrays are not almost equal') > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 217, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not almost equal > test sparse; file > /Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/io/tests/./data/testsparse_6.5.1_GLNX86.mat, > variable testspa > rse > (mismatch 46.6666666667%) > x: array([[ 3.03865194e-319, 3.16202013e-322, 1.04346664e-320, > 2.05531309e-320, 2.56123631e-320], > [ 3.16202013e-322, 0.00000000e+000, 0.00000000e+000,... > y: array([[ 1., 2., 3., 4., 5.], > [ 2., 0., 0., 0., 0.], > [ 3., 0., 0., 0., 0.]]) This looks like a new endianness problem in the matfile reader. 
> ====================================================================== > FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Users/wweckesser/test_install/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", > line 76, in check_dot > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > File > "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", > line 158, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > ACTUAL: (-1.9987702369689941+4.6526445226967791e-37j) > DESIRED: (-9+2j) > > ---------------------------------------------------------------------- > > > I found this: http://projects.scipy.org/scipy/scipy/ticket/238 > The second failure that I get looks like the same one in that ticket. Yes, it is fixed. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From warren.weckesser at gmail.com Thu Apr 17 00:06:03 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 17 Apr 2008 00:06:03 -0400 Subject: [SciPy-user] scipy.test() failure In-Reply-To: <3d375d730804161919h1646f613g20580648d5706c44@mail.gmail.com> References: <114880320804161824t62016e89x8f79a0c9478368ef@mail.gmail.com> <3d375d730804161919h1646f613g20580648d5706c44@mail.gmail.com> Message-ID: <114880320804162106w4670bf84m6b5a84940a4450b4@mail.gmail.com> On Wed, Apr 16, 2008 at 10:19 PM, Robert Kern wrote: > > > > I found this: http://projects.scipy.org/scipy/scipy/ticket/238 > > The second failure that I get looks like the same one in that ticket. > > Yes, it is fixed. > > How do I get the fix? (I think the answer will be "Apply the patch"; if so, explicit instructions for how to do that would be great.) Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Apr 17 00:10:34 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Apr 2008 23:10:34 -0500 Subject: [SciPy-user] scipy.test() failure In-Reply-To: <114880320804162106w4670bf84m6b5a84940a4450b4@mail.gmail.com> References: <114880320804161824t62016e89x8f79a0c9478368ef@mail.gmail.com> <3d375d730804161919h1646f613g20580648d5706c44@mail.gmail.com> <114880320804162106w4670bf84m6b5a84940a4450b4@mail.gmail.com> Message-ID: <3d375d730804162110i31c260fy547ba93a15f8670@mail.gmail.com> On Wed, Apr 16, 2008 at 11:06 PM, Warren Weckesser wrote: > On Wed, Apr 16, 2008 at 10:19 PM, Robert Kern wrote: > > > I found this: http://projects.scipy.org/scipy/scipy/ticket/238 > > > The second failure that I get looks like the same one in that ticket. > > > > Yes, it is fixed. > > How do I get the fix? (I think the answer will be "Apply the patch"; if so, > explicit instructions for how to do that would be great.) Check out the latest code using SVN: svn co http://svn.scipy.org/svn/scipy/trunk scipy -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From warren.weckesser at gmail.com Thu Apr 17 00:56:59 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 17 Apr 2008 00:56:59 -0400 Subject: [SciPy-user] scipy.test() failure In-Reply-To: <3d375d730804162110i31c260fy547ba93a15f8670@mail.gmail.com> References: <114880320804161824t62016e89x8f79a0c9478368ef@mail.gmail.com> <3d375d730804161919h1646f613g20580648d5706c44@mail.gmail.com> <114880320804162106w4670bf84m6b5a84940a4450b4@mail.gmail.com> <3d375d730804162110i31c260fy547ba93a15f8670@mail.gmail.com> Message-ID: <114880320804162156s255f6971nd8bddd183e4f0bad@mail.gmail.com> On Thu, Apr 17, 2008 at 12:10 AM, Robert Kern wrote: > > > Check out the latest code using SVN: > > svn co http://svn.scipy.org/svn/scipy/trunk scipy > Does the svn version depend on the svn version of numpy, or can I stick with numpy 1.0.4? The comment at http://projects.scipy.org/scipy/scipy/roadmap makes me think I'll need a new numpy, but I'll try 1.0.4 first. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Apr 17 01:01:37 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Apr 2008 00:01:37 -0500 Subject: [SciPy-user] scipy.test() failure In-Reply-To: <114880320804162156s255f6971nd8bddd183e4f0bad@mail.gmail.com> References: <114880320804161824t62016e89x8f79a0c9478368ef@mail.gmail.com> <3d375d730804161919h1646f613g20580648d5706c44@mail.gmail.com> <114880320804162106w4670bf84m6b5a84940a4450b4@mail.gmail.com> <3d375d730804162110i31c260fy547ba93a15f8670@mail.gmail.com> <114880320804162156s255f6971nd8bddd183e4f0bad@mail.gmail.com> Message-ID: <3d375d730804162201w1cfa35f7g33b95a5ea7ea579d@mail.gmail.com> On Wed, Apr 16, 2008 at 11:56 PM, Warren Weckesser wrote: > Does the svn version depend on the svn version of numpy, or can I stick > with numpy 1.0.4? The comment at > http://projects.scipy.org/scipy/scipy/roadmap makes me think I'll need a new > numpy, but I'll try 1.0.4 first. It would be best to use SVN numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rob.clewley at gmail.com Thu Apr 17 01:23:29 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 17 Apr 2008 01:23:29 -0400 Subject: [SciPy-user] PyDSTool and SciPy (general) Message-ID: Dear colleagues, We would like to begin a discussion that should help us set the stage for maintaining effective utilities for ODE and difference equation models through scipy, in both the short and long term. This is the first of two emails we are posting to the scipy list on this topic; the second one will focus more specifically on integrating ODEs. As some of you know, we put together the PyDSTool package a couple of years ago while at Cornell University; our intent was to develop a flexible platform for dynamical systems analysis and simulation, replacing and greatly improving upon the dstool (C/TclTk-based) and ODETools (Matlab-based) programs that had been developed under the direction of John Guckenheimer at Cornell. As it stands now, PyDSTool is a fully usable, reliable dynamical systems package, and we have used it extensively in our scientific work. 
In terms of what we had originally envisioned for PyDSTool when we began program design and initial coding, our current product is a proof of concept of various ideas about supporting dynamical systems analysis and simulation using Python. We are actively seeking your help in planning out and executing the integration of some of our code base into scipy. We believe we have several exciting features to offer the scipy community in general, and we think that the effort of others in helping us to improve, maintain and document our code over the longer term will be the best way to move our project forward and benefit the most people. Features that we believe are of interest for potential relocation as add-ons to scipy include: * Fast C- and Fortran-based ODE integrators for stiff and non-stiff systems that we have interfaced in a more powerful fashion than the existing lsoda and vode integrators in scipy (utilizing C-based vector field callback functions rather than slower Python callback functions). * Event detection and autonomous external input features at the C level for these integrators. * Index-free trajectory classes for manipulating multi-dimensional time series in a mathematically natural way. * Support for "hybrid" discrete and smooth dynamics, e.g. as used in control theory applications. * Bifurcation analysis tools (PyCont, and its interface to AUTO). * Model manipulation classes built over symbolic expression processing, which is particularly useful for model inference algorithms or building large, structured models from smaller components. * Prototype classes associated with numerical phase-plane analysis (fixed points, nullclines, periodic orbits, manifolds, etc.). * Some data analysis algorithms (esp. our pointwise dimension estimation code recently published in IEEE TBME). (More details about PyDSTool's features can be found on our wiki pages: pydstool.sourceforge.net) We also have noticed recent developments in the scientific python community of relevance to our project goals, including advances in CAS symbolic processing with SymPy, optimization with OpenOpt and other potentially interesting tools like Spyke for python-to-C compilation. While some of PyDSTool's design and its high level classes are focused specifically towards facilitating data-driven modeling for dynamical systems, and symbolic manipulations, it seems worthwhile to explore moving our support for the underlying machinery to these other projects. We have reached a stage in both our code development and our professional situations where we need to make decisions regarding PyDSTool's future carefully. PyDSTool is (uniquely) useful to us on a daily basis, and it has a small base of active users, but it would require substantial refactoring and redesign to reach its potential as a dynamical systems package that is also broadly accessible and useful to a wider audience. At the same time, having moved on to faculty/post-doc positions, we have less time and energy available for coding work on the core parts of PyDSTool. We do have increasing opportunities for supporting such work through grant funding, and have already obtained some funding to help pay for small technical developments for PyDSTool by third parties. We recognize that we will need to put in effort ourselves in order to advocate, educate, and refactor our designs and code for use in scipy. In the past, we have heard people's interest in some of our ideas, and would like to hear from those people again.
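[Editor's note: a rough pure-Python emulation of the event-detection feature listed above, for readers who want the flavor without PyDSTool, which does this at the C level during integration. The model, names, and tolerances here are invented for the example.]

    import numpy as np
    from scipy.integrate import odeint
    from scipy.interpolate import interp1d
    from scipy.optimize import brentq

    def rhs(y, t):
        return [y[1], -9.81]          # free fall: height and velocity

    t = np.linspace(0.0, 2.0, 201)
    y = odeint(rhs, [1.0, 0.0], t)

    height = interp1d(t, y[:, 0])     # event function: height crosses zero
    # bracket the first sign change, then refine the crossing time
    idx = np.where(np.diff(np.sign(y[:, 0])))[0][0]
    t_event = brentq(height, t[idx], t[idx + 1])
    print('event (ground hit) at t = %.4f s' % t_event)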
We'd like the community's suggestions as to how we might proceed with choosing features to cross into a scikit (for instance), and whether anyone might be interested in helping us do that (we have not yet studied existing scikits or followed debate on the mailing lists about setting them up). We are inspired by the apparent success of taking the early sketches that were SymPy (a year or more ago) to the masses and turning it into a very promising and strongly featured package. We would like to use our time, energy and (perhaps) money wisely over the coming year or two to make the most of what we've already achieved with PyDSTool. Regards, Rob Clewley Erik Sherwood -- Robert H. Clewley, Ph. D. Assistant Professor Department of Mathematics and Statistics Georgia State University 720 COE, 30 Pryor St Atlanta, GA 30303, USA tel: 404-413-6420 fax: 404-651-2246 http://www.mathstat.gsu.edu/~matrhc http://brainsbehavior.gsu.edu/ From rob.clewley at gmail.com Thu Apr 17 01:24:39 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 17 Apr 2008 01:24:39 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) Message-ID: Dear colleagues, Picking up on our previous email: One of the features of PyDSTool that seems most broadly useful to the scipy community is the facility for fast integration of ODEs using C and Fortran based codes from E. Hairer (dopri853 and radau5). Our implementation dynamically creates C-based right hand sides which are compiled and linked against the dopri/radau .o files to create a DLL file, which is then loaded as a python module. Integration of vector fields is very fast, as everything is handled at the C-level, rather than through callbacks to python functions as in scipy.odeint. (This is a *very* important distinction regarding performance, vs. merely having a C-based integrator but retaining the user's vector field function at the Python level). Though we have not done rigorous testing, we have seen speed-ups of 10-50+ times over scipy.odeint. In addition, we are able to generate 'event' functions and 'auxiliary' functions that are calculated alongside and 'online' with the integration of the ODEs. These (nearly) arbitrary functions of the phase space variables and parameters may be used to detect, e.g. when the computed trajectory crosses a boundary, achieves a local maximum, or when some function of the phase space variables and parameters reaches a zero-crossing. Event functions may be used to halt integration when a particular condition is achieved (critical for use in hybrid systems, for example), while auxiliary functions may be used to calculate quantities of interest that are complementary to the ODE calculations (e.g. ionic currents across membranes in neuronal model ODEs that describe changes in potential as ion channels open and close). We are also able to use autonomous external inputs as part of the ODE vector fields. For instance, we can use real data from experimentally obtained voltage traces from a neuron as input to an ODE model of neuronal membrane voltage dynamics. One catch to our system is in the generation of C-based right hand sides, which must be compiled. We use distutils to accomplish this in a simple call that is naturally platform independent. This is a great benefit to us and saves us relying on a third party library or application, but this solution has some frustrating quirks (we very much understand that distutils was not *designed* to be used this way!). 
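[Editor's note: the distutils trick described in the preceding paragraph, reduced to a toy: write a trivial C source, then invoke setup() programmatically instead of from a shell. A sketch using the Python 2 C API of the time; the file and module names are invented, and a working C compiler is assumed.]

    from distutils.core import setup, Extension

    src = open('toy_rhs.c', 'w')
    src.write('''
    #include <Python.h>
    static PyMethodDef methods[] = {{NULL, NULL, 0, NULL}};
    PyMODINIT_FUNC inittoy_rhs(void) { Py_InitModule("toy_rhs", methods); }
    ''')
    src.close()

    # equivalent to "python setup.py build_ext --inplace", no shell involved
    setup(name='toy_rhs',
          ext_modules=[Extension('toy_rhs', ['toy_rhs.c'])],
          script_args=['build_ext', '--inplace'])

    import toy_rhs   # the freshly compiled extension is now importable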
Also, we have extensive machinery to generate appropriate C vector field descriptions from Python vector field descriptions, and our event/auxiliary function mechanisms currently depend on this. Model description, compilation, and integration are rather intertwined at present. As it now stands, a model (ODE right-hand side, event/auxiliary functions, inputs) is defined as a set of strings, which are converted to a representation as Variable, Parameter, and Expression classes (or the string step is skipped and the model is just given in terms of classes). The class representation is checked for consistency and is used to generate a text file containing C code, including event functions, etc., which is then compiled and linked against the integrator and event detection code. SWIG is used to generate an interface to python; the resulting module is then loaded into the (interactive) python session. When the vector field is integrated, the results (trajectories, auxiliary function values, event values) are returned as numpy arrays, packaged as Trajectory classes (which have added functionality over numpy arrays). If our integration facilities were to be incorporated into scipy, the underlying code for our integrators could be adapted to use python callbacks, like odeint, though this would negate much of the speed advantage. If the C compilation route were maintained, then we would need to figure out a better way to use distutils, or find an alternative compilation mechanism. In either case, we would have to determine if and how to keep the event/auxiliary function and autonomous external input facilities, which are closely bound to the model building routines. The results of integration could be returned as plain numpy arrays, but even doing this requires quite some disentanglement. Since integration is a very common task, and our integrators perform well, this appears to us to be the best place to start moving PyDSTool capabilities to scipy, either directly or in scikit form. However, such a move would require significant effort on our part. We welcome comments/advice/suggestions regarding community interest in and the planning/design of such an effort. In particular, the issues of model building, C compilation vs. python callbacks, and the use of distutils are points where we are looking for input. We have already begun seeking help from a consultant on fixing some of the quirks of using distutils this way, in case we can continue to viably use it. But, for instance, we'd like to get an update from Scipy developers about the intended/expected future of distutils in scipy vs. the regular python version (we use scipy's for Fortran compatibility). Maybe it would be reasonable to consider a modification of distutils (at least in its API) so that it could be viewed as a platform-independent way to compile DLLs on the fly in the way we need. Maybe you can see a much better solution to solve our problem, given the ever-changing python technology available (Spyke looks like it might be relevant, for instance). Regards, Erik Sherwood Rob Clewley -- Robert H. Clewley, Ph. D. Assistant Professor Department of Mathematics and Statistics Georgia State University 720 COE, 30 Pryor St Atlanta, GA 30303, USA tel: 404-413-6420 fax: 404-651-2246 http://www.mathstat.gsu.edu/~matrhc http://brainsbehavior.gsu.edu/
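[Editor's note: a tiny timing sketch of the Python-callback overhead the authors describe; every right-hand-side evaluation crosses the C/Python boundary. The 10-50x figure quoted in the post is theirs, and absolute numbers here will vary by machine.]

    import time
    import numpy as np
    from scipy.integrate import odeint

    def rhs(y, t):
        return -y                     # a trivial linear vector field

    t = np.linspace(0.0, 10.0, 100001)
    start = time.time()
    y = odeint(rhs, np.ones(3), t)
    print('odeint with a Python-level RHS: %.2f s' % (time.time() - start))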
From contact at pythonxy.com Thu Apr 17 01:37:06 2008 From: contact at pythonxy.com (Python(x,y)) Date: Thu, 17 Apr 2008 07:37:06 +0200 Subject: [SciPy-user] Python(x,y) license Message-ID: <4806E202.1060803@pythonxy.com> Hi all, Ok, the new license was supposed to be very protective of the licensor. But the side effect is that a lot of people misunderstood what is licensed under it. In fact, this is only the software collection itself. Each package is distributed under its own license/copyright. So, yes, it is possible to use it for commercial purposes, but the limitation is that you can't sell the software collection itself... so I don't see why it is bothering you. However, I am ready to change the license of the software collection if this message is not clear enough. The purpose of this license was to protect the licensor, not to prevent you from using it!! So, I am waiting for your feedback. And eventually, I will release 1.1.1 under OSL 3.0. Best regards, -- P. Raybaut Python(x,y) http://www.pythonxy.com
From contact at pythonxy.com Thu Apr 17 03:03:23 2008 From: contact at pythonxy.com (Python(x,y)) Date: Thu, 17 Apr 2008 09:03:23 +0200 (CEST) Subject: [SciPy-user] Python(x,y) - New release 1.1.0 Message-ID: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> > On Wed, Apr 16, 2008 at 11:07 PM, Neal Becker wrote: > > Python(x,y) is restricted to only use for non-profit? > > Seems like that according to: > > http://pythonxy.com/license.php > > Just a couple days ago there used to be an open source license. Well, > that reduces me interest a lot, as I am not really motivated to > contribute to a non-opensource solution. > > Ondrej I am very sorry for all this misunderstanding.
First of all, the Non-profit OSL is an open-source license. And obviously it is the software collection which is licensed under it, not the individual packages, which all remain under their own license/copyright (so the only restriction is that any "derived work" would be under the Non-profit OSL: but software created using Python(x,y) is absolutely NOT a "derived work" - on the contrary, other software based on Python(x,y) would be a "derived work"). Anyway, I try to keep an open mind, and I have no interest in explaining or defending this license in particular. So, as I posted earlier, I can switch to OSL v3 if this remains ambiguous to a lot of people. PR
From robert.kern at gmail.com Thu Apr 17 03:13:44 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Apr 2008 02:13:44 -0500 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> References: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> Message-ID: <3d375d730804170013k30b0ebf3sf999f6630b9027af@mail.gmail.com> On Thu, Apr 17, 2008 at 2:03 AM, Python(x,y) wrote: > I am very sorry for all this misunderstanding. > First of all, the Non-profit OSL is an open-source license. I think you need to be more careful with that language. The Non-profit OSL very explicitly fails to meet the first clause of the Open Source Definition: http://opensource.org/docs/osd """ 1. Free Redistribution The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale. """ > And obviously > it is the software collection which is licensed under it, not the > individual packages which all remain under their own license/copyright It is not exactly obvious. Since there is no mention of the individual licenses, one may easily think that you are trying to claim that this license covers each package, not just the collection. A paragraph of clarification on the license page would help a lot. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From contact at pythonxy.com Thu Apr 17 03:27:31 2008 From: contact at pythonxy.com (Python(x,y)) Date: Thu, 17 Apr 2008 09:27:31 +0200 (CEST) Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <3d375d730804170013k30b0ebf3sf999f6630b9027af@mail.gmail.com> References: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> <3d375d730804170013k30b0ebf3sf999f6630b9027af@mail.gmail.com> Message-ID: <57909.132.165.76.2.1208417251.squirrel@secure.nuxit.net> >> And obviously >> it is the software collection which is licensed under it, not the >> individual packages which all remain under their own license/copyright > > It is not exactly obvious. Since there is no mention of the individual > licenses, one may easily think that you are trying to claim that this > license covers each package, not just the collection. A paragraph of > clarification on the license page would help a lot. You are absolutely right: if it was so obvious, maybe it would not have scared so many people! On the previous Python(x,y) license, I did mention something like "all packages included in Python(x,y) are distributed under their own copyright/license". But when I updated the website, I tried to simplify the license page... and it seems that I cut too much! So, I will mention it again. But I think that it will not be enough to reassure people. The terms "Non-profit" seem to be a real problem in people's minds. So, I think I will switch to OSL (even if I am convinced that it is not a real issue) which has been approved by OSI. Thanks for your comments. PR
From pearu at cens.ioc.ee Thu Apr 17 03:36:52 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 17 Apr 2008 10:36:52 +0300 (EEST) Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> References: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> Message-ID: <59348.129.240.222.155.1208417812.squirrel@cens.ioc.ee> On Thu, April 17, 2008 10:03 am, Python(x,y) wrote: >> On Wed, Apr 16, 2008 at 11:07 PM, Neal Becker >> wrote: >> > Python(x,y) is restricted to only use for non-profit? >> >> Seems like that according to: >> >> http://pythonxy.com/license.php >> >> Just a couple days ago there used to be an open source license.
Well, >> that reduces my interest a lot, as I am not really motivated to >> contribute to a non-opensource solution. >> >> Ondrej > > I am very sorry for all this misunderstanding. > First of all, the Non-profit OSL is an open-source license. And obviously > it is the software collection which is licensed under it, not the > individual packages which all remain under their own license/copyright (so > the only restriction is that any "derived work" would be under Non-profit > OSL : but software created using Python(x,y) is absolutely NOT a "derived > work" - on the contrary, another software based on Python(x,y) would be a > "derived work"). > > Anyway, I try to keep an open mind, and I have no interest in explaining > or defending this license in particular. So, as I posted earlier, I can > switch to OSL v3 if this remains ambiguous to a lot of people. I think everyone has the right to choose whatever license s/he will use for its software and not feel bad about it. PERIOD. In many cases the BSD license is preferred because it allows the software to be used most freely, both in open source as well as commercial products. In that sense BSD seems to be the most suitable for library-type software, provided that the author wants the product to be used as widely as possible - that is basically the aim of creating library-type software anyway. On the other hand, the BSD license may not be suitable for end products, as anyone could take the code and start selling it without having obligations to the original author. Here it is appropriate to think of protecting the work by choosing a license other than BSD. (Note that selling library software can be very difficult, and hence the need to protect such libraries from being sold as products has a higher threshold.) It seems that Python(x,y) falls more into the category of end products than library-like software, and choosing a more restrictive license seems appropriate. I think it is great that such software is released as open source at all. Regards, Pearu From gael.varoquaux at normalesup.org Thu Apr 17 03:45:37 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 17 Apr 2008 09:45:37 +0200 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <57909.132.165.76.2.1208417251.squirrel@secure.nuxit.net> References: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> <3d375d730804170013k30b0ebf3sf999f6630b9027af@mail.gmail.com> <57909.132.165.76.2.1208417251.squirrel@secure.nuxit.net> Message-ID: <20080417074537.GB1301@phare.normalesup.org> On Thu, Apr 17, 2008 at 09:27:31AM +0200, Python(x,y) wrote: > But I think that it will not be enough to reassure people. The term > "Non-profit" seems to be a real problem in people's minds. So, I think I > will switch to OSL (even if I am convinced that it is not a real issue) > which has been approved by OSI. It is a real issue. People in e.g. company labs are going to want to use Python(x,y). They cannot if it's under non-profit. The Python scientific community is not really one that is centered on use of the tools for hobby. If you are going to change the license, I think it is best you choose a major and well-known license, e.g. BSD if you don't want copyleft, or GPL if you do.
My 2 cents, Gaël From david at ar.media.kyoto-u.ac.jp Thu Apr 17 03:46:19 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 17 Apr 2008 16:46:19 +0900 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <59348.129.240.222.155.1208417812.squirrel@cens.ioc.ee> References: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> <59348.129.240.222.155.1208417812.squirrel@cens.ioc.ee> Message-ID: <4807004B.9030903@ar.media.kyoto-u.ac.jp> Pearu Peterson wrote: > > I think everyone has the right to choose whatever license s/he will > use for its software and not feel bad about it. PERIOD. I don't think anyone said or implied the contrary. The project was announced as being open source, while being released under a license which was not obviously open source; some people felt this was a bit strange at first, but it just looks like a misunderstanding. Personally, I think that as a rule, if you *want* to release something under an open source license, you are better off choosing the BSD or the GPL by default, because they are well known, massively used, and everybody more or less understands what they imply. Most of the time, the license is dictated by the project you are contributing to anyway (I personally much prefer the GPL to BSD, but I guess that now, 95 % of my open source code is under the BSD :) ). cheers, David From markbak at gmail.com Thu Apr 17 04:01:55 2008 From: markbak at gmail.com (Mark Bakker) Date: Thu, 17 Apr 2008 10:01:55 +0200 Subject: [SciPy-user] Python(x, y) - New release 1.1.0 --> Hooray for Pyhon(x, y) Message-ID: <6946b9500804170101u7e942e1fvda35d4e56d7a0d4d@mail.gmail.com> I want to pitch in my fifty cents. I think the Python(x,y) effort is very important in moving forward the use of Python for scientific applications. I used to use the Enthought distribution for teaching class, which was great. The only downside was that they didn't release often enough. The new Enthought distribution with eggs just isn't convenient for teaching a class. Users want a one-click install. And sorting through Eggs is undoable for the beginner. The two major problems for Python(x,y) are (IMHO): 1. What packages to include. It started out slim, but is now slowly moving to big, then probably on to sumo. Not sure what a good alternative is. As long as everything works, and it doesn't slow down releases of Python(x,y) I see no problem. I REALLY would like a Mac version though. 2. What license to put on it. Not sure what the developers are thinking here. I read the confusion/discussion on the list, and when I clicked on the license on the pythonxy.com website it turns out to be gone. So maybe they are rethinking it. I want to stress though that we should do as much as we can to support this effort. So let's not bash them for picking a license we don't like. Let's talk to them and convince them it needs something else, or better yet, nothing else. All packages already have their own license. So I say HOORAY for Python(x,y). Let's make it the best thing since, well, sliced matrices? Mark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Thu Apr 17 04:07:24 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Apr 2008 03:07:24 -0500 Subject: [SciPy-user] Python(x, y) - New release 1.1.0 --> Hooray for Pyhon(x, y) In-Reply-To: <6946b9500804170101u7e942e1fvda35d4e56d7a0d4d@mail.gmail.com> References: <6946b9500804170101u7e942e1fvda35d4e56d7a0d4d@mail.gmail.com> Message-ID: <3d375d730804170107w437c889bxe0768976bae8056a@mail.gmail.com> On Thu, Apr 17, 2008 at 3:01 AM, Mark Bakker wrote: > The new Enthought distribution with eggs just isn't convenient for teaching > a class. Users want a one-click install. And sorting through Eggs is > undoable for the beginner. While the new EPD does have an egg repository and individual packages can be continuously updated from it, we also build an MSI for Windows for initial installation. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Apr 17 04:08:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Apr 2008 03:08:30 -0500 Subject: [SciPy-user] Python(x, y) - New release 1.1.0 --> Hooray for Pyhon(x, y) In-Reply-To: <3d375d730804170107w437c889bxe0768976bae8056a@mail.gmail.com> References: <6946b9500804170101u7e942e1fvda35d4e56d7a0d4d@mail.gmail.com> <3d375d730804170107w437c889bxe0768976bae8056a@mail.gmail.com> Message-ID: <3d375d730804170108h7c7eb2fbofb0a622d7d769865@mail.gmail.com> On Thu, Apr 17, 2008 at 3:07 AM, Robert Kern wrote: > On Thu, Apr 17, 2008 at 3:01 AM, Mark Bakker wrote: > > > The new Enthought distribution with eggs just isn't convenient for teaching > > a class. Users want a one-click install. And sorting through Eggs is > > undoable for the beginner. > > While the new EPD does have an egg repository and individual packages > can be continuously updated from it, we also build an MSI for Windows > for initial installation. And the link: http://www.enthought.com/products/epd.php -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From contact at pythonxy.com Thu Apr 17 05:00:04 2008 From: contact at pythonxy.com (Python(x,y)) Date: Thu, 17 Apr 2008 11:00:04 +0200 (CEST) Subject: [SciPy-user] Python(x,y) - New release 1.1.0 --> Hooray Message-ID: <58733.132.165.76.2.1208422804.squirrel@secure.nuxit.net> Pearu Peterson wrote: > > I think everyone has the right to choose whatever license s/he will > use for its software and not feel bad about it. PERIOD. Thank you for your support, Pearu! However, the purpose of this project is to help scientists to use Python: licensing should be clear enough so everyone can download and use it without any doubt. > 1. What packages to include. It started out slim, but is now slowly moving > to big, then probably on to sumo. Not sure what a good alternative is. As > long as everything works, and it doesn't slow down releases of Python(x,y) I > see no problem. I REALLY would like a Mac version though. New version is indeed bigger (~280Mb instead of ~230Mb) due to the integration of ETS (mostly), but the overall size will be quite the same for future versions (we always try to limit the number of packages). No, it will not slow down releases. In fact, fast updating is one of the top priorities of this project. 
Mac version: We would like a Mac version too, but there is no Mac user involved in this project. Of course, we would be pleased to help any new contributor to develop a Mac version. > 2. What license to put on it. Not sure what the developers are thinking > here. I read the confusion/discussion on the list, and when I clicked on the > license on the pythonxy.com website it turns out to be gone. So maybe they > are rethinking it. Yes, we are indeed rethinking the Python(x,y) license. Regarding the website, the download and license pages will remain unavailable until this licensing issue is cleared up. (hopefully around 6 P.M. GMT) PR From lorenzo.isella at gmail.com Thu Apr 17 05:33:01 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Thu, 17 Apr 2008 11:33:01 +0200 Subject: [SciPy-user] How to save image from a window Message-ID: <4807194D.3030808@gmail.com> Dear All, I have already posted this to the visual python mailing list, but with no success. The problem is the following: I want to make a movie using Python tools. The short script I paste below (it needs visual python) plots a chain of spheres in a window. How can I automatically save the window into e.g. a jpeg file? I need automation because I aim at generating O(100) jpeg files for my movie. It was suggested that I use the PIL (python image library) and the ImageGrab module, but this seems to be available for Windows only. Any suggestion? Here is the code:

#! /usr/bin/env python
import scipy as s
import numpy as n
import pylab as p
import visual as v

# a line of 10 sphere positions along x, at y = z = 1
x_list = s.arange(10)
y_list = s.zeros(10)
y_list[:] = 1.
z_list = y_list
my_rad = 1.
# draw one blue sphere at each position
particles = [v.sphere(pos=loc, radius=my_rad, color=v.color.blue)
             for loc in zip(x_list, y_list, z_list)]

Now you should see a window with a line of spheres. How would you save that to a file (without e.g. using Gimp and clicking here and there)? In case it matters, I am running Debian testing on my box. Many thanks Lorenzo From jasperstolte at gmail.com Thu Apr 17 05:34:37 2008 From: jasperstolte at gmail.com (Jasper Stolte) Date: Thu, 17 Apr 2008 11:34:37 +0200 Subject: [SciPy-user] Python(x, y) - New release 1.1.0 --> Hooray for Pyhon(x, y) In-Reply-To: <3d375d730804170108h7c7eb2fbofb0a622d7d769865@mail.gmail.com> References: <6946b9500804170101u7e942e1fvda35d4e56d7a0d4d@mail.gmail.com> <3d375d730804170107w437c889bxe0768976bae8056a@mail.gmail.com> <3d375d730804170108h7c7eb2fbofb0a622d7d769865@mail.gmail.com> Message-ID: <89198da10804170234u5f1379aarb0afae892e72b562@mail.gmail.com> Hi Robert, Do you guys realize the MSI installer link is broken? Just thought I'd mail you personally, instead of spamming the thread with offtopic information. Maybe I should mail Enthought about it, but you appear to be talking about them in the 'we' form, and I figure you'd like to know anyway. Jasper On Thu, Apr 17, 2008 at 10:08 AM, Robert Kern wrote: > On Thu, Apr 17, 2008 at 3:07 AM, Robert Kern > wrote: > > On Thu, Apr 17, 2008 at 3:01 AM, Mark Bakker wrote: > > > The new Enthought distribution with eggs just isn't convenient for teaching > > > a class. Users want a one-click install. And sorting through Eggs is > > > undoable for the beginner. > > > > While the new EPD does have an egg repository and individual packages > > can be continuously updated from it, we also build an MSI for Windows > > for initial installation.
> > And the link: > > http://www.enthought.com/products/epd.php > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Apr 17 05:38:36 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Apr 2008 04:38:36 -0500 Subject: [SciPy-user] How to save image from a window In-Reply-To: <4807194D.3030808@gmail.com> References: <4807194D.3030808@gmail.com> Message-ID: <3d375d730804170238x15984cb8j64cee3feec12bdb1@mail.gmail.com> On Thu, Apr 17, 2008 at 4:33 AM, Lorenzo Isella wrote: > Dear All, > I have already posted this to the visual python mailing list, but with > no success. I'm sorry, but I don't think you'll get a better response here. Whatever method works will depend heavily on how Visual Python is implemented. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Apr 17 05:49:29 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Apr 2008 04:49:29 -0500 Subject: [SciPy-user] Python(x, y) - New release 1.1.0 --> Hooray for Pyhon(x, y) In-Reply-To: <89198da10804170234u5f1379aarb0afae892e72b562@mail.gmail.com> References: <6946b9500804170101u7e942e1fvda35d4e56d7a0d4d@mail.gmail.com> <3d375d730804170107w437c889bxe0768976bae8056a@mail.gmail.com> <3d375d730804170108h7c7eb2fbofb0a622d7d769865@mail.gmail.com> <89198da10804170234u5f1379aarb0afae892e72b562@mail.gmail.com> Message-ID: <3d375d730804170249p447a4a6r55d77ac262873a82@mail.gmail.com> On Thu, Apr 17, 2008 at 4:34 AM, Jasper Stolte wrote: > Hi Robert, > > Do you guys realize the MSI installer link is broken? Uh, I do now. We'll fix the link shortly. In the meantime, the MSI can be located here: http://code.enthought.com/epd/installs/epd-2.5.2001-windows_x86.msi Thank you for bringing this to my attention. > Just thought I'd mail you personally, instead of spamming the thread with > offtopic information. Too late! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Thu Apr 17 05:45:45 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 17 Apr 2008 18:45:45 +0900 Subject: [SciPy-user] Segfault in scipy.sparse module Message-ID: <48071C49.80003@ar.media.kyoto-u.ac.jp> Hi, I got a segfault with scipy.svn on the sparse module. I think I saw a similar report on a Mailing List, but cannot find it. My configuration: - winxp - atlas 3.8.0 compiled by myself with cygwin (I also tried with pure blas/lapack from netlib) - mingw32 compilers - both numpy and scipy are today's checkouts. The segfault always happens at the add_sub_test (TestBSR). This seems at least os/compiler specific, since it does not happen on linux with gcc. Maybe some problem between g++ and the msvc runtime ?
cheers, David From stefan at sun.ac.za Thu Apr 17 07:42:08 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 17 Apr 2008 13:42:08 +0200 Subject: [SciPy-user] How to save image from a window In-Reply-To: <4807194D.3030808@gmail.com> References: <4807194D.3030808@gmail.com> Message-ID: <9457e7c80804170442kb01739fo7843575588a0ba6d@mail.gmail.com> Hi Lorenzo On 17/04/2008, Lorenzo Isella wrote: > The problem is the following: I want to make a movie using Python tools. > The short script I paste below (it needs visual python) plots a chain of > spheres in a window. > How can I automatically save the window into e.g. a jpeg file? > I need automation because I aim at generating O(100) jpeg files for my > movie. > It was suggested that I use the PIL (python image library) and the ImageGrab > module, but this seems to be available for Windows only. > Any suggestion? You can use the povray plugin (you'll find it via google), which renders .pov files of every scene. Povray can then be used to convert each scene to a jpg. Hope that helps, Stéfan From stefan at sun.ac.za Thu Apr 17 07:43:12 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 17 Apr 2008 13:43:12 +0200 Subject: [SciPy-user] Segfault in scipy.sparse module In-Reply-To: <48071C49.80003@ar.media.kyoto-u.ac.jp> References: <48071C49.80003@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80804170443y6c41f55do6ad036d5f76098bd@mail.gmail.com> On 17/04/2008, David Cournapeau wrote: > I got a segfault with scipy.svn on the sparse module. I think I saw > a similar report on a Mailing List, but cannot find it. My configuration: > > - winxp > - atlas 3.8.0 compiled by myself with cygwin (I also tried with pure > blas/lapack from netlib) > - mingw32 compilers > - both numpy and scipy are today's checkouts. > > The segfault always happens at the add_sub_test (TestBSR). This seems at > least os/compiler specific, since it does not happen on linux with gcc. > Maybe some problem between g++ and the msvc runtime ? IIRC, Tom Waite filed a ticket for the same issue? Regards Stéfan From david at ar.media.kyoto-u.ac.jp Thu Apr 17 07:37:46 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 17 Apr 2008 20:37:46 +0900 Subject: [SciPy-user] Segfault in scipy.sparse module In-Reply-To: <9457e7c80804170443y6c41f55do6ad036d5f76098bd@mail.gmail.com> References: <48071C49.80003@ar.media.kyoto-u.ac.jp> <9457e7c80804170443y6c41f55do6ad036d5f76098bd@mail.gmail.com> Message-ID: <4807368A.1090305@ar.media.kyoto-u.ac.jp> Stéfan van der Walt wrote: > > IIRC, Tom Waite filed a ticket for the same issue? > Ah, thanks, I found the ticket now. This seems nasty (see my email on numpy about distutils and c++): if it is what I think it is, it will be hard to fix. cheers, David From s.mientki at ru.nl Thu Apr 17 08:54:30 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 17 Apr 2008 14:54:30 +0200 Subject: [SciPy-user] Python(x, y) - New release 1.1.0 --> Hooray for Pyhon(x, y) In-Reply-To: <3d375d730804170249p447a4a6r55d77ac262873a82@mail.gmail.com> References: <6946b9500804170101u7e942e1fvda35d4e56d7a0d4d@mail.gmail.com> <3d375d730804170107w437c889bxe0768976bae8056a@mail.gmail.com> <3d375d730804170108h7c7eb2fbofb0a622d7d769865@mail.gmail.com> <89198da10804170234u5f1379aarb0afae892e72b562@mail.gmail.com> <3d375d730804170249p447a4a6r55d77ac262873a82@mail.gmail.com> Message-ID: <48074886.5040800@ru.nl> >> Just thought I'd mail you personally, instead of spamming the thread with >> offtopic information.
>> > > Too late! > > By replying to the list, you prevent dozens of others from spamming Robert's mail box! cheers, Stef From prabhu at aero.iitb.ac.in Thu Apr 17 08:53:17 2008 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 17 Apr 2008 18:23:17 +0530 Subject: [SciPy-user] How to save image from a window In-Reply-To: <4807194D.3030808@gmail.com> References: <4807194D.3030808@gmail.com> Message-ID: <4807483D.5090902@aero.iitb.ac.in> Lorenzo Isella wrote: > Dear All, > I have already posted this to the visual python mailing list, but with > no success. > The problem is the following: I want to make a movie using Python tools. > The short script I paste below (it needs visual python) plots a chain of > spheres in a window. If you don't expect good interactive performance you can use tvtk's visual to do this easily. Attached is a script. It requires that TVTK be installed, ETS-2.7.1 will do fine. cheers, prabhu -------------- next part -------------- A non-text attachment was scrubbed... Name: t.py Type: text/x-python Size: 399 bytes Desc: not available URL: From ggellner at uoguelph.ca Thu Apr 17 09:17:15 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Thu, 17 Apr 2008 09:17:15 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: Message-ID: <20080417131715.GA27233@encolpuis> If it is of any use to get started, I have wrapped dopri5 and dopri853 using cython; it allows for both C and Python callbacks, and I am actively trying to get your event detection setup working. I would be happy to talk with the PyDSTool people if you guys are interested in a Cython approach. Sadly, as radau5 uses fortran, my method is not yet ready for this system, though I am hoping that I can learn enough f2py to make this happen. My wrapper also supports both range initial conditions and odeint style specific step points (this is similar to how ode45 works, that is, if the user gives two time points, it returns the ode solver's chosen irregular steps, otherwise it returns steps at the given intervals). Gabriel
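For concreteness, a rough sketch of how such a wrapper might be called -- the module and argument names below are invented for illustration, not the actual interface:

import numpy
from dopri import dopri5   # hypothetical module and function names

def f(t, y):               # plain Python callback computing dy/dt
    return [y[1], -y[0]]

# only two time points given: return the solver's own (irregular) steps
t, y = dopri5(f, y0=[1.0, 0.0], tspan=[0.0, 10.0])

# a full grid given: return the solution at exactly these times
t, y = dopri5(f, y0=[1.0, 0.0], tspan=numpy.linspace(0.0, 10.0, 101))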
On Thu, Apr 17, 2008 at 01:24:39AM -0400, Rob Clewley wrote: > Dear colleagues, > > Picking up on our previous email: > > One of the features of PyDSTool that seems most broadly useful to the > scipy community is the facility for fast integration of ODEs using C > and Fortran based codes from E. Hairer (dopri853 and radau5). Our > implementation dynamically creates C-based right hand sides which are > compiled and linked against the dopri/radau .o files to create a DLL > file, which is then loaded as a python module. Integration of vector > fields is very fast, as everything is handled at the C-level, rather > than through callbacks to python functions as in scipy.odeint. (This > is a *very* important distinction regarding performance, vs. merely > having a C-based integrator but retaining the user's vector field > function at the Python level). Though we have not done rigorous > testing, we have seen speed-ups of 10-50+ times over scipy.odeint. > > In addition, we are able to generate 'event' functions and 'auxiliary' > functions that are calculated alongside and 'online' with the > integration of the ODEs. These (nearly) arbitrary functions of the > phase space variables and parameters may be used to detect, e.g. when > the computed trajectory crosses a boundary, achieves a local maximum, > or when some function of the phase space variables and parameters > reaches a zero-crossing. Event functions may be used to halt > integration when a particular condition is achieved (critical for use > in hybrid systems, for example), while auxiliary functions may be used > to calculate quantities of interest that are complementary to the ODE > calculations (e.g. ionic currents across membranes in neuronal model > ODEs that describe changes in potential as ion channels open and > close). > > We are also able to use autonomous external inputs as part of the ODE > vector fields. For instance, we can use real data from experimentally > obtained voltage traces from a neuron as input to an ODE model of > neuronal membrane voltage dynamics. > > One catch to our system is in the generation of C-based right hand > sides, which must be compiled. We use distutils to accomplish this in > a simple call that is naturally platform independent. This is a great > benefit to us and saves us relying on a third party library or > application, but this solution has some frustrating quirks (we very > much understand that distutils was not *designed* to be used this > way!). Also, we have extensive machinery to generate appropriate C > vector field descriptions from Python vector field descriptions, and > our event/auxiliary function mechanisms currently depend on > this. Model description, compilation, and integration are rather > intertwined at present. > > As it now stands, a model (ODE right-hand side, event/auxiliary > functions, inputs) is defined as a set of strings, which are converted > to a representation as Variable, Parameter, and Expression classes (or > the string step is skipped and the model is just given in terms of > classes). The class representation is checked for consistency and is > used to generate a text file containing C code, including event > functions, etc., which is then compiled and linked against the > integrator and event detection code. SWIG is used to generate an > interface to python; the resulting module is then loaded into the > (interactive) python session. > > When the vector field is integrated, the results (trajectories, > auxiliary function values, event values) are returned as numpy arrays, > packaged as Trajectory classes (which have added functionality over > numpy arrays). > > If our integration facilities were to be incorporated into scipy, the > underlying code for our integrators could be adapted to use python > callbacks, like odeint, though this would negate much of the speed > advantage. If the C compilation route were maintained, then we would > need to figure out a better way to use distutils, or find an > alternative compilation mechanism. In either case, we would have to > determine if and how to keep the event/auxiliary function and > autonomous external input facilities, which are closely bound to the > model building routines. The results of integration could be returned > as plain numpy arrays, but even doing this requires quite some > disentanglement. > > Since integration is a very common task, and our integrators perform > well, this appears to us to be the best place to start moving PyDSTool > capabilities to scipy, either directly or in scikit form. However, > such a move would require significant effort on our part. We welcome > comments/advice/suggestions regarding community interest in and the > planning/design of such an effort. In particular, the issues of model > building, C compilation vs. python callbacks, and the use of distutils > are points where we are looking for input.
We have already begun > seeking help from a consultant on fixing some of the quirks of using > distutils this way, in case we can continue to viably use it. But, for > instance, we'd like to get an update from Scipy developers about the > intended/expected future of distutils in scipy vs. the regular python > version (we use scipy's for Fortran compatibility). Maybe it would be > reasonable to consider a modification of distutils (at least in its > API) so that it could be viewed as a platform-independent way to > compile DLLs on the fly in the way we need. Maybe you can see a much > better solution to solve our problem, given the ever-changing python > technology available (Spyke looks like it might be relevant, for > instance). > > > Regards, > > Erik Sherwood > Rob Clewley > > -- > Robert H. Clewley, Ph. D. > Assistant Professor > Department of Mathematics and Statistics > Georgia State University > 720 COE, 30 Pryor St > Atlanta, GA 30303, USA > > tel: 404-413-6420 fax: 404-651-2246 > http://www.mathstat.gsu.edu/~matrhc > http://brainsbehavior.gsu.edu/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From bsouthey at gmail.com Thu Apr 17 09:44:19 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 17 Apr 2008 08:44:19 -0500 Subject: [SciPy-user] Python(x,y) license In-Reply-To: <4806E202.1060803@pythonxy.com> References: <4806E202.1060803@pythonxy.com> Message-ID: <48075433.9060109@gmail.com> Hi, Please read the resources provided by the Software Freedom Law Center (http://www.softwarefreedom.org/resources) especially "Maintaining Permissive-Licensed Files in a GPL-Licensed Project: Guidelines for Developers" (http://www.softwarefreedom.org/resources/2007/gpl-non-gpl-collaboration.html). This mainly resulted from including BSD licensed code in a GPL-licensed project, but many of the points are valid for other projects with non-GPL licenses. While IANAL, you must pay special attention to all clauses of the licenses (especially the GPL V3) because you are redistributing code. Many terms only come into play when redistributing code. Regards Bruce Python(x,y) wrote: > Hi all, > > Ok, the new license was supposed to be very protective of the licensor. > But, the side effect is that a lot of people misunderstood what is > licensed under it. In fact, this is only the software collection itself. > Each package is distributed under its own license/copyright. So, yes it > is possible to use it for commercial purposes, but the limitation is that > you can't sell the software collection itself... so I don't see why it > is bothering you. > > However, I am ready to change the license of the software collection if > this message is not clear enough. The purpose of this license was to > protect the licensor, and not prevent you from using it !! > > So, I am waiting for your feedback. And eventually, I will release 1.1.1 > in OSL 3.0. > > Best regards, > > -- > P.
Raybaut > Python(x,y) > http://www.pythonxy.com > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From aisaac at american.edu Thu Apr 17 10:12:54 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 17 Apr 2008 10:12:54 -0400 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: <59348.129.240.222.155.1208417812.squirrel@cens.ioc.ee> References: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> <59348.129.240.222.155.1208417812.squirrel@cens.ioc.ee> Message-ID: On Thu, 17 Apr 2008, (EEST) Pearu Peterson apparently wrote: > I think everyone has the right to choose whatever license s/he will > use for its software and not feel bad about it. PERIOD. ... > I think it is great that such software are released as > open source at all. I strongly agree. But it is the case that many people pick licenses that are not well suited to their goals. It can be useful to ask them if they have considered the implication of a particular license. It is also the case that picking a license that is not obviously liberal (MIT or BSD) or well-known copyleft (LGPL, GPL) can be a bad idea for a new open source project that is trying to attract a user base. The discussion in this thread illustrates why. Cheers, Alan Isaac From ondrej at certik.cz Thu Apr 17 10:17:26 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 17 Apr 2008 16:17:26 +0200 Subject: [SciPy-user] Python(x,y) - New release 1.1.0 In-Reply-To: References: <54819.132.165.76.2.1208415803.squirrel@secure.nuxit.net> <59348.129.240.222.155.1208417812.squirrel@cens.ioc.ee> Message-ID: <85b5c3130804170717x2a4d1112kd320d0ecb2af9426@mail.gmail.com> On Thu, Apr 17, 2008 at 4:12 PM, Alan G Isaac wrote: > On Thu, 17 Apr 2008, (EEST) Pearu Peterson apparently wrote: > > I think everyone has the right to choose whatever license s/he will > > use for its software and not feel bad about it. PERIOD. > ... > > > I think it is great that such software are released as > > open source at all. > > I strongly agree. > > But it is the case that many people pick licenses that are > not well suited to their goals. It can be useful to ask > them if they have considered the implication of a particular > license. > > It is also the case that picking a license that is not > obviously liberal (MIT or BSD) or well-known copyleft > (LGPL, GPL) can be a bad idea for a new open source project > that is trying to attract a user base. The discussion in > this thread illustrates why. Exactly. One should (imho) just use well known opensource licenses. Ondrej From turian at gmail.com Thu Apr 17 17:43:02 2008 From: turian at gmail.com (Joseph Turian) Date: Thu, 17 Apr 2008 17:43:02 -0400 Subject: [SciPy-user] scipy.sparse -= Message-ID: <4dacb2560804171443q2329f7b7ya2ebe14ac9d42aac@mail.gmail.com> Is there a -= for scipy.sparse matrices? Thanks, Joseph -- Academic: http://www-etud.iro.umontreal.ca/~turian/ Business: http://www.metaoptimize.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Thu Apr 17 19:57:19 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 17 Apr 2008 19:57:19 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: Message-ID: Hi, PyDSTool looks to be full of useful code, and I hope it can become a self-sustaining project. 
The pieces that look most useful to me for inclusion in SciPy are the integrators, as you say. But it's not clear to me why I should prefer dopri853 or radau5 to the Adams integrator built in to scipy. Are they somehow better for all functions? Do they take better advantage of the smoothness of objective functions? Are they more robust in the face of stiff or partially-stiff systems? What I have needed is an integrator that offers more control. The particular problem I ran into was one where I needed to be able to set stopping conditions based on the dependent variables, not least because my model stopped making sense at a certain point. I ended up wrapping LSODAR from ODEPACK, and it made my life vastly easier. If you have integrators that can provide that kind of control, that would be very useful. Similarly something like a trajectory object that knows enough about the integrator to provide a spline with the right derivatives and knot spacing would be handy. Your scheme for producing compiled objective functions appeals less as a component of scipy. It's definitely useful as a part of PyDSTool, and many people working with ODEs will want to take advantage of it, but I foresee problems with incorporating it in scipy. For one thing it requires that the user have a C compiler installed and configured in the right way, and it requires that the system figure out how to call it appropriately at runtime. This seems like it's going to make installation significantly more tricky; already installation is one of the big stumbling blocks for scipy. A second problem is that presumably it places restrictions on the right-hand-side functions that one can use. In some of the cases I have used odeint, the RHS has been quite complicated, passing through methods of several objects, calling various functions from scipy.special, and so on. An integrator that can't deal with this sort of RHS is going to be a specialized tool (and presumably much faster within its domain of applicability). On the other hand, the general problem of making callbacks run faster turns up in a number of places in scipy - the optimization routines, for example, have the same problem. A general approach to solving this problem would be very valuable. In summary, I think that the most useful additions to scipy's differential equation integration would be: * A good model for solving ODEs and representing their results * Ability to set stopping conditions and additional functions to be computed Anne From ggellner at uoguelph.ca Thu Apr 17 20:28:13 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Thu, 17 Apr 2008 20:28:13 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: Message-ID: <20080418002813.GA1752@basestar> > The pieces that look most useful to me for inclusion in SciPy are the > integrators, as you say. But it's not clear to me why I should prefer > dopri853 or radau5 to the Adams integrator built in to scipy. Are they > somehow better for all functions? Do they take better advantage of the > smoothness of objective functions? Are they more robust in the face of stiff or > partially-stiff systems? > Any information on this would be really interesting! I have ordered the Hairer book, but a quick overview of the benefits of the multistep vs runge kutta methods for use in scipy would be greatly appreciated.
My feeling from my wrapper is that the multistep methods seem to be a bit faster when using python callbacks, as they trade off increased memory usage and lack of easy self-starting for fewer function calls (which in this case are slow python calls). But the tradeoff might matter for the next point, i.e. easy ways to stop the integrator . . . again any information from someone who knows better would be greatly appreciated. > What I have needed is an integrator that offers more control. The > particular problem I ran into was one where I needed to be able to set > stopping conditions based on the dependent variables, not least > because my model stopped making sense at a certain point. I ended up > wrapping LSODAR from ODEPACK, and it made my life vastly easier. If > you have integrators that can provide that kind of control, that would > be very useful. Similarly something like a trajectory object that > knows enough about the integrator to provide a spline with the right > derivatives and knot spacing would be handy. > The dopri5 and dop853 codes have a really good interface for this, that is after each successful integration step they call a solution callback that allows for dense interpolation of the solution between stepping points (with error of the same order as the solution), and any kind of stopping or event detection routines. The hard part is thinking of good ways of making default callbacks that are extensible, but it is nice that the base codes were designed with this functionality. Ultimately with my Cython wrapper you could just pass in an optional callback as well (coded in C or Cython as otherwise it would be painfully slow), that would give you any kind of control you might want. Though the way forward is to have good default templates available, I have been looking at PyDSTool's method as well as how matlab (not the code, rather the api) deals with this control. If you know of any other good examples of event detection or stopping rule api's I would love to look at them! Another side benefit is that we can make the codes work identically to ode45 from matlab, as they are using the same algorithm . . . which may not be better, but might help some users that seem to want these codes for compatibility. > Your scheme for producing compiled objective functions appeals less as > a component of scipy. It's definitely useful as a part of PyDSTool, > and many people working with ODEs will want to take advantage of it, > but I foresee problems with incorporating it in scipy. For one thing > it requires that the user have a C compiler installed and configured > in the right way, and it requires that the system figure out how to > call it appropriately at runtime. This seems like it's going to make > installation significantly more tricky; already installation is one of > the big stumbling blocks for scipy. A second problem is that > presumably it places restrictions on the right-hand-side functions > that one can use. In some of the cases I have used odeint, the RHS has > been quite complicated, passing through methods of several objects, > calling various functions from scipy.special, and so on. An integrator > that can't deal with this sort of RHS is going to be a specialized > tool (and presumably much faster within its domain of applicability). > > On the other hand, the general problem of making callbacks run faster > turns up in a number of places in scipy - the optimization routines, > for example, have the same problem. A general approach to solving this > problem would be very valuable.
> Is not the sage approach of using either optional Cython callbacks, or what I often do, using f2py callbacks, a good solution? That way you really don't need to change the api's, and the availability of compilers is only needed on an opt-in basis (kind of like when using mex files in matlab). > In summary, I think that the most useful additions to scipy's > differential equation integration would be: > * A good model for solving ODEs and representing their results > * Ability to set stopping conditions and additional functions to be computed > What do people think about returning an object from the integrators? I have been using R a lot lately and I really like how they use simple data objects to represent the output of solutions. That way you could have an object that would work like an array of the solution points, but would also have the full output available as attributes (like the time vector, event locations, etc). That way, just like in R, the complexity of the solution would be hidden, but it gets rid of the awkward use of 'full_output' booleans, and dealing with the resulting tuples (which seems like a suboptimal attempt to get matlab-like functionality without the syntax that matlab gives for only taking the information you need).
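As a minimal sketch of the kind of object meant here (entirely illustrative -- not an existing scipy or R API):

import numpy

class ODESolution(object):
    def __init__(self, t, y, events=None, info=None):
        self.t = numpy.asarray(t)    # time vector
        self.y = numpy.asarray(y)    # solution values, one row per time point
        self.events = events         # event locations, if any were detected
        self.info = info             # solver statistics and the like

    def __getitem__(self, idx):      # behaves like the plain solution array
        return self.y[idx]

    def __len__(self):
        return len(self.y)

sol = ODESolution([0.0, 0.5, 1.0], [1.0, 0.61, 0.37])
y_end = sol[-1]    # used like an array of solution points...
t_vec = sol.t      # ...with the full output attached as attributes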
Anyway, just my thoughts. I am very excited about progress in the ode tools for python! Gabriel From peridot.faceted at gmail.com Thu Apr 17 20:58:05 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 17 Apr 2008 20:58:05 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: <20080418002813.GA1752@basestar> References: <20080418002813.GA1752@basestar> Message-ID: On 17/04/2008, Gabriel Gellner wrote: > > What I have needed is an integrator that offers more control. The > > particular problem I ran into was one where I needed to be able to set > > stopping conditions based on the dependent variables, not least > > because my model stopped making sense at a certain point. I ended up > > wrapping LSODAR from ODEPACK, and it made my life vastly easier. If > > you have integrators that can provide that kind of control, that would > > be very useful. Similarly something like a trajectory object that > > knows enough about the integrator to provide a spline with the right > > derivatives and knot spacing would be handy. > > The dopri5 and dop853 codes have a really good interface for this, that is > after each successful integration step they call a solution callback that > allows for dense interpolation of the solution between stepping points (with > error of the same order as the solution), and any kind of stopping or event > detection routines. The hard part is thinking of good ways of making default > callbacks that are extensible, but it is nice that the base codes were > designed with this functionality. This sounds like it might have been a problem for me: what was happening was I was integrating an ODE within a domain. Outside the domain my RHS function raised exceptions, so it was not even safe to evaluate it there. I tried hacking up a solution with scipy's one-step-at-a-time integrator modes, but the best I could do was stop "near" the boundary, which failed if the integrator took bigger steps than my notion of "near". > Ultimately with my Cython wrapper you could just pass in an optional callback > as well (coded in C or Cython as otherwise it would be painfully slow), that > would give you any kind of control you might want. > Though the way forward is > to have good default templates available, I have been looking at PyDSTool's > method as well as how matlab (not the code, rather the api) deals with this > control. > > If you know of any other good examples of event detection or stopping rule > api's I would love to look at them! I used LSODAR in ODEPACK and it did just what I wanted: before evaluating the RHS at any point, it called a list of user-provided functions. If any of them returned negative values, that point was outside the domain and the integrator took a smaller step. It assumed the user-provided stopping functions were somewhat smooth; in effect it ended up doing some root-finding so that it could stop right at the boundary. The API I came up with was simple and ugly: I just passed a list of functions, each of which took the same kind of pair of values as the RHS does, and each of which returned a float of appropriate sign.
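Schematically it was just something like this (the wrapper call below is invented for illustration; the point is the shape of the stopping functions):

def rhs(t, y):               # ordinary right-hand side
    return [-y[0]]

def inside_domain(t, y):     # same (t, y) signature as the RHS;
    return 1.0 - abs(y[0])   # negative means outside the domain

# hypothetical wrapper: integration stops where a root function crosses zero
t, y = my_lsodar(rhs, y0=[0.5], t0=0.0, tmax=10.0,
                 root_funcs=[inside_domain])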
> What do people think about returning an object from the integrators? I have > been using R a lot lately and I really like how they use simple data objects > to represent the output of solutions. That way you could have an object that > would work like an array of the solution points, but would also have the full > output available as attributes (like the time vector, event locations, etc). > That way, just like in R, the complexity of the solution would be hidden, but > it gets rid of the awkward use of 'full_output' booleans, and dealing with the > resulting tuples (which seems like a suboptimal attempt to get matlab-like > functionality without the syntax that matlab gives for only taking the > information you need). I think the right return format depends what people want from the solution. I see two major kinds of problem one might use an ODE solver for: * Integrate out to some stopping point and give the values of dependent and independent variables there. * Integrate out to some stopping point and give back a representation of the solution. (The first case can in principle be obtained by evaluating the second at the endpoint, but at the cost of replacing O(1) space by O(n) space.) For the second it is clear that some kind of object to represent the solution is a good idea. What exactly it should contain is open for discussion; I envision keeping estimated function values and derivatives at each step, along with convergence information, and providing an interface to view it as either an array of values or a function akin to our interpolating spline objects. Convergence information and auxiliary function values may also be useful. Even for the first, simple case, I think returning an object is a more manageable way to provide convergence information. Are there other styles of using an ODE solver that people want? Perhaps a "running" mode, where the solver keeps track of whatever state information it has and is asked to advance by a certain amount (say, because more experimental data has become available)? Anne From rob.clewley at gmail.com Thu Apr 17 22:02:56 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 17 Apr 2008 22:02:56 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: <20080418002813.GA1752@basestar> References: <20080418002813.GA1752@basestar> Message-ID: On Thu, Apr 17, 2008 at 8:28 PM, Gabriel Gellner wrote: > > The pieces that look most useful to me for inclusion in SciPy are the > > integrators, as you say. But it's not clear to me why I should prefer > > dopri853 or radau5 to the Adams integrator built in to scipy. Are they > > somehow better for all functions? Do they take better advantage of the > > smoothness of objective functions? Are they more robust in the face of stiff or > > partially-stiff systems? Unfortunately, I'm not an expert in the numerical analysis associated with these methods, but I know that Radau in particular is a very good stiff integrator. Also, Radau lets you solve systems of the form M(x,t) dx/dt = f(x,t) where M is a non-invertible mass matrix (we have actually extended this feature to allow state and time dependence in M). My colleague Erik hopes to add other Hairer-Wanner integrators (e.g. one for delay-differential equations) at some point, using a very similar interface to the one we have already developed. Beyond that, we should both read Hairer and Wanner's books in detail! I haven't managed much of them yet. I finally bought them a few months ago. > > What I have needed is an integrator that offers more control. The > > particular problem I ran into was one where I needed to be able to set > > stopping conditions based on the dependent variables, not least > > because my model stopped making sense at a certain point. I ended up > > wrapping LSODAR from ODEPACK, and it made my life vastly easier. If > > you have integrators that can provide that kind of control, that would > > be very useful. Right, that's what we have. > > Similarly something like a trajectory object that > > knows enough about the integrator to provide a spline with the right > > derivatives and knot spacing would be handy. Well, we created the Trajectory class for just such a purpose at the python level. You call the trajectory object as if it was a continuously-defined function. As a prototype, it only does linear interpolation between points, but the whole idea is that the "density" of points can be implemented with any kind of spline if you replace that modular part (in the Variable class). > The dopri5 and dop853 codes have a really good interface for this, that is > after each successful integration step they call a solution callback that > allows for dense interpolation of the solution between stepping points (with > error of the same order as the solution), and any kind of stopping or event > detection routines. We don't currently pass on information about the dense output to the python level. I think that could be worked out, but it would obviously require a huge amount of extra memory; Erik would know better about this prospect because he wrote the interfaces to PyDSTool. Ultimately, a better solution would be a proper implementation of a Taylor-series based integrator that is extremely accurate! > Ultimately with my Cython wrapper you could just pass in an optional callback > as well (coded in C or Cython as otherwise it would be painfully slow), that > would give you any kind of control you might want. > Though the way forward is > to have good default templates available, I have been looking at PyDSTool's > method as well as how matlab (not the code, rather the api) deals with this > control. I would like to understand using cython in this way -- I'm very unfamiliar with it. Would you maybe share that wrapper with us so that we can look into it as a possible replacement for SWIG? Can you comment on how efficient it is in reducing the number of interfacing calls between python/numpy and C? I heard that for Sage this was a big improvement.
> If you know of any other good examples of event detection or stopping rule > api's I would love to look at them! Our user-defined event detection functions get created at the C-level for extra speed. IIRC, the stopping algorithm does not involve an intelligent look-ahead, rather it just detects a sign change during a step, and then uses a root solver (bisection, I think) on the "dense" solution function.
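In outline, that localisation step looks something like the following (a generic sketch, not the PyDSTool source):

def locate_event(g, dense, t0, t1, tol=1e-10):
    # g(t, y) changed sign between t0 and t1; bisect on the dense
    # (interpolated) solution to pin down the crossing time
    a, b = t0, t1
    ga = g(a, dense(a))
    while (b - a) > tol:
        m = 0.5 * (a + b)
        gm = g(m, dense(m))
        if ga * gm <= 0:
            b = m             # crossing lies in [a, m]
        else:
            a, ga = m, gm     # crossing lies in [m, b]
    return 0.5 * (a + b)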
> Another side benefit is that we can make the codes work identically to ode45 > from matlab, as they are using the same algorithm . . . which may not be > better, but might help some users that seem to want these codes for > compatibility. I can't say I like the Matlab APIs, but if people want to make multiple exposures of the same underlying integrator API to support this, maybe that has its benefits for some. > > Your scheme for producing compiled objective functions appeals less as > > a component of scipy. It's definitely useful as a part of PyDSTool, > > and many people working with ODEs will want to take advantage of it, > > but I foresee problems with incorporating it in scipy. For one thing > > it requires that the user have a C compiler installed and configured > > in the right way, and it requires that the system figure out how to > > call it appropriately at runtime. I don't really see why it would be a problem, given that supporting VODE is already an example of doing this in scipy. There are two aspects to a solution there. The integrators can be ready-compiled just as VODE is right now for use in binary distros, and someone could add python vector field callback functionality to them using cython or whatever (as Gabriel has apparently already done). But there would be an option so that users with a working compiler can use it! > > This seems like it's going to make > > installation significantly more tricky; already installation is one of > > the big stumbling blocks for scipy. I don't see how there'd be a change to the installability of scipy as a result. We (ab-)used distutils to do the compilation precisely because it provides a platform independent way of doing so -- it works right out of the box if you have a compiler installed. Cleaning that up is one of the things we'd like help with. > > A second problem is that > > presumably it places restrictions on the right-hand-side functions > > that one can use. In some of the cases I have used odeint, the RHS has > > been quite complicated, passing through methods of several objects, > > calling various functions from scipy.special, and so on. An integrator > > that can't deal with this sort of RHS is going to be a specialized > > tool (and presumably much faster within its domain of applicability). This is true to some extent, but I already added support for user-defined auxiliary functions of state variables and time (they can also be nested) and for a range of special functions that are easily accessible from their C math library equivalents. As for a v.f. calculation passing through object methods: sure, that's not going to be so easy -- but that would just be another reason to add a python v.f. callback facility as an option. I've never needed to do that in my work. I don't know what you're using it for, but if your code is making non-smooth changes to the vector field (e.g. through conditional statements) we support proper hybrid systems models based on our integrators -- otherwise it's not safe to be changing the v.f. non-smoothly, as you'll confuse the integrator and make it spit out inaccurate points or fail to converge. > Is not the sage approach of using either optional Cython callbacks, or what I > often do, using f2py callbacks, a good solution? That way you really don't need > to change the api's, and the availability of compilers is only needed on an > opt-in basis (kind of like when using mex files in matlab). If this means the user can choose whether to supply a python v.f. function or ask for the system to build C code with more-or-less the same scipy-level interface, then yes, it would seem like the ideal solution. > > In summary, I think that the most useful additions to scipy's > > differential equation integration would be: > > * A good model for solving ODEs and representing their results > > * Ability to set stopping conditions and additional functions to be computed > > > What do people think about returning an object from the integrators? I have > been using R a lot lately and I really like how they use simple data objects > to represent the output of solutions. That way you could have an object that > would work like an array of the solution points, but would also have the full > output available as attributes (like the time vector, event locations, etc). > That way, just like in R, the complexity of the solution would be hidden, but > it gets rid of the awkward use of 'full_output' booleans, and dealing with the > resulting tuples (which seems like a suboptimal attempt to get matlab-like > functionality without the syntax that matlab gives for only taking the > information you need). Our Trajectory (and embedded Variable) objects go some way towards this already, which is why I'm touting not just the integrators from PyDSTool as potentially useful classes for scipy. They could be easily extended to hold more information from the integrator in the way you suggest. The time data is already built into these objects, as is domain restriction/bounds checking. Also, I personally find the index-free nature of these objects and their ability to spit out index-free Point and Pointset objects when called at arbitrary time values (including a vectorized version for calling with arrays of time values) extremely liberating when using the integrators inside other code -- I hate keeping track of indices for different types of vector. The underlying raw output data from the integrator can also be extracted to bypass any interpolation. -Rob
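To make the index-free usage concrete, here is a toy stand-in for the idea (linear interpolation only, not the PyDSTool class itself):

import numpy

class Trajectory(object):
    def __init__(self, t, y):
        self.t = numpy.asarray(t)
        self.y = numpy.asarray(y)

    def __call__(self, tval):
        # linear interpolation between the stored points; a spline
        # could be swapped in here without changing the interface
        return numpy.interp(tval, self.t, self.y)

traj = Trajectory([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
x = traj(1.5)    # call it like a function at an arbitrary time value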
is possibly even more entangled with the specifics of their systems biology package than ours is. > I think the right return format depends what people want from the > solution. I see two major kinds of problem one might use an ODE solver > for: > > * Integrate out to some stopping point and give the values of > dependent and independent variable there. > * Integrate out to some stopping point and give back a representation > of the solution. > > (The first case can in principle be obtained by evaluating the second > at the endpoint, but at the cost of replacing O(1) space by O(n) > space.) I personally do not find the former very useful, as I almost always want to know something about the intermediate points. My understanding is the dopri and radau integrators inherently return the whole solution, so maybe an additional calling option supporting the former usage could just throw everything out but the last point before the whole solution gets copied into a numpy object. > For the second it is clear that some kind of object to represent the > solution is a good idea. What exactly it should contain is open for > discussion; I envision keeping estimated function values and > derivatives at each step, along with convergence information, and > providing an interface to view it as either an array of values or a > function akin to our interpolating spline objects. Convergence > information and auxiliary function values may also be useful. Even for > the first, simple case, I think returning an object is a more > manageable way to provide convergence information. This is definitely something that the community should discuss. IMO, the more that can be stored for re-use, error analysis, etc. after the fact, the better. Memory is (relatively) cheap for these problems. Simple (even default) options can always be provided to turn them off. Your only problem is that all of the integrators we're talking about are *old* inasmuch as their APIs were not designed with this kind of en masse export of information out to the user. Any of them (current scipy integrators and PyDSTool's) would need their interfaces substantially re-written to accommodate this. > Are there other styles of using an ODE solver that people want? > Perhaps a "running" mode, where the solver keeps track of whatever > state information it has and is asked to advance by a certain amount > (say, because more experimental data has become available)? There are two issues that need separating here: 1) the short term refactoring, reuse and adaptation of existing PyDSTool code for scipy 2) the long term design goals for supporting ODEs in scipy Much as I want to keep (2) in mind, I think getting some short-term progress on improving scipy's ODE support based on existing code would be good. I foresee it taking a long time to get ODE support truly "right" for as many users as possible, and in the meantime I think some of our code and other short-term solutions would be an appropriate stopgap. -Rob From ggellner at uoguelph.ca Thu Apr 17 22:35:09 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Thu, 17 Apr 2008 22:35:09 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: <20080418002813.GA1752@basestar> Message-ID: <20080418023509.GB2240@basestar> > > I think the right return format depends what people want from the > > solution.
I see two major kinds of problem one might use an ODE solver > > for: > > > > * Integrate out to some stopping point and give the values of > > dependent and independent variable there. > > * Integrate out to some stopping point and give back a representation > > of the solution. > > > > (The first case can in principle be obtained by evaluating the second > > at the endpoint, but at the cost of replacing O(1) space by O(n) > > space.) > > I personally do not find the former very useful, as I almost always > want to know something about the intermediate points. My understanding > is the dopri and radau integrators inherently return the whole > solution, so maybe an additional calling option supporting the former > usage could just throw everything out but the last point before the > whole solution gets copied into a numpy object. > I don't think this is true; as far as I understand, the dopri codes don't save any more than the Runge-Kutta method needs, which is a single step. It is up to the user-supplied callback to save any information, so I think you could just write a callback that after each point checks if the time is the terminal time and only saves that point. I will try a proof of concept of this. Gabriel From tournesol33 at gmail.com Thu Apr 17 23:14:25 2008 From: tournesol33 at gmail.com (tournesol) Date: Fri, 18 Apr 2008 12:14:25 +0900 Subject: [SciPy-user] Input value from data file to 2D array Message-ID: <48081211.8060403@gmail.com> Hi All. I'm a newbie to Python, scipy and numpy, and am trying to rewrite Fortran 77 code in Python. First, here is my Fortran code. DIMENSION A(10,10), B(10,10) OPEN(21,FILE='aaa.dat') DO 10 K=1,2 READ(21,1602) ((A(I,J),J=1,3),I=1,2) READ(21,1602) ((B(I,J),J=1,3),I=1,2) 10 CONTINUE WRITE(6,1602) ((A(I,J),J=1,3),I=1,2) WRITE(6,1602) ((B(I,J),J=1,3),I=1,2) 1602 FORMAT(3I4) END and the aaa.dat file is 0 1 1 1 1 1 2 1 1 ... 18 1 1 19 1 1 The code reads each value of the data file (aaa.dat) and puts the values into the arrays A and B. According to the "DO 10 K=1,2", which is a Fortran loop, each pass reads the next 2x3 block into A and the one after it into B. The two "WRITE(6,1602)"s mean that the final values of arrays A and B are the 3rd and 4th 2x3 blocks: 4 1 1 5 1 1 6 1 1 7 1 1 Question 1: How do I read values from a data file into a 2D array? Can I write code like the following? a = zeros([2,3], Float) for i in range(1,2): for j in range(1,3): a[i][j] = i,j (If the size of the data file is very big, using read_array() or fromfile() will be very hard.) Question 2: As in "DO 10 K=1,1500", I want to read just a part of the data file (a very big file). How can I write this in Python? Any advice please!
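One possible numpy approach, sketched under the assumption that aaa.dat is whitespace-separated as shown above (np.fromfile with a count only pulls in as many values as requested, so the whole file never has to fit in memory at once):

import numpy as np

K = 2                      # number of passes, as in DO 10 K=1,2
f = open('aaa.dat')
# each pass reads one 2x3 block for A and one for B: 2*K blocks total
flat = np.fromfile(f, dtype=int, count=2 * K * 2 * 3, sep=' ')
f.close()
blocks = flat.reshape(2 * K, 2, 3)   # alternating A, B blocks
A, B = blocks[-2], blocks[-1]        # final values: the 3rd and 4th blocks
print A
print B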
From haase at msg.ucsf.edu Fri Apr 18 04:36:20 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Apr 2008 10:36:20 +0200 Subject: [SciPy-user] The IO library and image file formats -- compare with PIL Message-ID: Hi, Could someone here please summarize the plans for the io library !? I saw that there are parts of it going into numpy already. And more stuff is already in SciPy -- IIRC. What is the status / plans regarding image formats like TIFF. Are you guys planning to duplicate the efforts of the Python Imaging Library ( PIL ) ? Or can you just copy-paste some of the code ? The reason I'm asking now, is that I just submitted a patch to PIL that adds writing capabilities for MultiPage TIFF files. I did not get any response on the PIL list, and I'm actually still not clear on their handling of free vs. commercial. The commercial PIL license sounds like you have to pay $2000 to get access to svn, i.e. the current development version of PIL. Anyone here who could comment on this? Thanks, Sebastian Haase From robert.kern at gmail.com Fri Apr 18 04:54:19 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 Apr 2008 03:54:19 -0500 Subject: [SciPy-user] The IO library and image file formats -- compare with PIL In-Reply-To: References: Message-ID: <3d375d730804180154u7a1883c4mc6e4a8f169338186@mail.gmail.com> On Fri, Apr 18, 2008 at 3:36 AM, Sebastian Haase wrote: > Hi, > > Could someone here please summarize the plans for the io library !? > > I saw that there are parts of it going into numpy already. > And more stuff is already in SciPy -- IIRC. > > What is the status / plans regarding image formats like TIFF. There are no such plans. > Are you guys planning to duplicate the efforts of the Python Imaging > Library ( PIL ) ? No. > Or can you just copy-paste some of the code ? We don't intend to, no. > The reason I'm asking now, is that I just submitted a patch to PIL > that adds writing capabilities for MultiPage TIFF files. > I did not get any response on the PIL list, > and I'm actually still not clear on their handling of free vs. commercial. > The commercial PIL license sounds like you have to pay $2000 to get > access to svn, i.e. the current development version of PIL. > Anyone here who could comment on this? I doubt it. Fredrik Lundh is the person you have to ask about such things. You might try emailing him personally. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Fri Apr 18 05:18:15 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Fri, 18 Apr 2008 11:18:15 +0200 Subject: [SciPy-user] The IO library and image file formats -- compare with PIL In-Reply-To: References: Message-ID: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> On 18/04/2008, Sebastian Haase wrote: > The reason I'm asking now, is that I just submitted a patch to PIL > that adds writing capabilities for MultiPage TIFF files. > I did not get any response on the PIL list, If you *do* figure out how to get their attention, please let me know (maybe I should try the route Robert suggested). Zachary Pincus (IIRC) and myself both have patches we'd like to have applied. We mailed them to the Image SIG, without any response. Regards Stéfan From contact at pythonxy.com Fri Apr 18 06:22:30 2008 From: contact at pythonxy.com (contact at pythonxy.com) Date: Fri, 18 Apr 2008 12:22:30 +0200 Subject: [SciPy-user] Fwd: [ Python(x,y) ] - New release 1.1.1 In-Reply-To: <48083318.6050303@pythonxy.com> References: <48083318.6050303@pythonxy.com> Message-ID: <629b08a40804180322h4909645g4bcabd12900e8f63@mail.gmail.com> ---------- Forwarded message ---------- From: "Python(x,y)" Date: Fri, 18 Apr 2008 07:35:20 +0200 Subject: [ Python(x,y) ] - New release 1.1.1 To: users at pythonxy.com Hi all, Python(x,y) 1.1.1 is now available on http://www.pythonxy.com. Changes history 04-17-2008 - Version 1.1.1 : * Added: o Enthought Tool Suite installation is now optional o Interactive consoles with matplotlib, Qt4 or wxPython threading support * Corrected: o Python(x,y) licensing has been clarified. 04-14-2008 - Version 1.1.0 : * Added: o Pydev 1.3.15 - New interactive console!
(code completion, history management, auto-import, send selected code to console, ...) o Enthought Tool Suite 2.7.0 (including MayaVi 2, the powerful 2D and 3D scientific visualization tool) Special thanks to Gaël Varoquaux for helping us integrate ETS in Python(x,y) and testing Mayavi 2 o VTK 5.0 o Cython 0.9.6.13.1 - Cython is a language that makes writing C extensions for the Python language as easy as Python itself o GDAL 1.5.0 - Geospatial Data Abstraction Library o Windows installer now supports the .egg packages o SetupTools 0.6c8 * Corrected: o Uninstall: PyParallel and PySerial were not removed * Updated: o Python(x,y) documentation -- P. Raybaut Python(x,y) http://www.pythonxy.com From dikshie at gmail.com Fri Apr 18 06:39:03 2008 From: dikshie at gmail.com (dikshie) Date: Fri, 18 Apr 2008 19:39:03 +0900 Subject: [SciPy-user] Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' Message-ID: <910e60e80804180339k156c8644y6e77c599e9e24337@mail.gmail.com> Hi, I have a strange situation here. When I installed numpy and scipy on a new PC, I didn't have any problems. BUT when I install, uninstall, and re-install, I always get: --------------------------------------------------------------------------- swig: Lib/linsolve/umfpack/umfpack.i swig -python -o build/src.freebsd-7.0-STABLE-amd64-2.5/Lib/linsolve/umfpack/_umfpack_wrap.c -outdir build/src.freebsd-7.0-STABLE-amd64-2.5/Lib/linsolve/umfpack Lib/linsolve/umfpack/umfpack.i Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' Lib/linsolve/umfpack/umfpack.i:193: Error: Unable to find 'umfpack_solve.h' Lib/linsolve/umfpack/umfpack.i:194: Error: Unable to find 'umfpack_defaults.h' Lib/linsolve/umfpack/umfpack.i:195: Error: Unable to find 'umfpack_triplet_to_col.h' Lib/linsolve/umfpack/umfpack.i:196: Error: Unable to find 'umfpack_col_to_triplet.h' ------------------------------------------------------------------------------ I get the same error message when I try to upgrade scipy. Trying ldconfig -m /usr/local/include does not help. Any clue? with best regards, -dikshie- From haase at msg.ucsf.edu Fri Apr 18 07:42:19 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 18 Apr 2008 13:42:19 +0200 Subject: [SciPy-user] The IO library and image file formats -- compare with PIL In-Reply-To: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> Message-ID: On Fri, Apr 18, 2008 at 11:18 AM, Stéfan van der Walt wrote: > On 18/04/2008, Sebastian Haase wrote: > > The reason I'm asking now, is that I just submitted a patch to PIL > > that adds writing capabilities for MultiPage TIFF files. > > I did not get any response on the PIL list, > > If you *do* figure out how to get their attention, please let me know > (maybe I should try the route Robert suggested). Zachary Pincus > (IIRC) and myself both have patches we'd like to have applied. We > mailed them to the Image SIG, without any response. > > Regards > Stéfan >
So only the file IO would have to get forked out -- to scipy for example ;-) Cheers -Sebastian From david.huard at gmail.com Fri Apr 18 09:41:10 2008 From: david.huard at gmail.com (David Huard) Date: Fri, 18 Apr 2008 09:41:10 -0400 Subject: [SciPy-user] Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' In-Reply-To: <910e60e80804180339k156c8644y6e77c599e9e24337@mail.gmail.com> References: <910e60e80804180339k156c8644y6e77c599e9e24337@mail.gmail.com> Message-ID: <91cf711d0804180641va95e215s91a201c4de394b4d@mail.gmail.com> Hi, Try editing the site.cfg file in the scipy directory with the following: [umfpack] library_dirs=/usr/lib64 include_dirs=/usr/include/suitesparse This is on FC8. You might have to change the include_dirs for your distro ($ locate umfpack.h) David 2008/4/18, dikshie : > > Hi, > i have strange situation here. > when i installed numpy and scipy on new PC, i dont have any problems. > BUT when i try installed, deinstall, and re-install i always get: > > --------------------------------------------------------------------------- > swig: Lib/linsolve/umfpack/umfpack.i > swig -python -o > > build/src.freebsd-7.0-STABLE-amd64-2.5/Lib/linsolve/umfpack/_umfpack_wrap.c > -outdir build/src.freebsd-7.0-STABLE-amd64-2.5/Lib/linsolve/umfpack > Lib/linsolve/umfpack/umfpack.i > Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' > Lib/linsolve/umfpack/umfpack.i:193: Error: Unable to find > 'umfpack_solve.h' > Lib/linsolve/umfpack/umfpack.i:194: Error: Unable to find > 'umfpack_defaults.h' > Lib/linsolve/umfpack/umfpack.i:195: Error: Unable to find > 'umfpack_triplet_to_col.h' > Lib/linsolve/umfpack/umfpack.i:196: Error: Unable to find > 'umfpack_col_to_triplet.h' > > ------------------------------------------------------------------------------ > same error message when i tried to upgrade scipy. > try to ldconfig -m /usr/local/include does not help. > > any clue? > > with best regards, > > > -dikshie- > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Fri Apr 18 10:31:34 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Fri, 18 Apr 2008 10:31:34 -0400 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> Message-ID: > Ultimately we has to consider a fork of PIL. > Do you guys know, if this is allowed -- per the PIL license !? > > Of course this would be super sub optimal, > but then, it's effectively what I have right now -- and you have your > own version ..... > > To summerize: numpy can probably do many things of PIL already better > -- I'm talking about all the transformation stuff of course. > So only the file IO would have to get forked out -- to scipy for > example ;-) I have my own "internal fork" of PIL that I've been calling "PIL- lite". I tore out everything except the file IO, and I fixed that to handle 16-bit files correctly on all endian machines, and to have a more robust array interface. IIRC, PIL is BSD-licensed (or BSD-compatible), so the fork should be OK to re-distribute. 
Now, part of the reason that we may have heard nothing about the PIL patches we've submitted variously is that I understand that they're doing a big re-write of PIL, and in particular its memory handling, that should address these sorts of issues. However, we all know how well "big rewrites" go... If people wanted to make a proper "fork" of PIL into a numpy-compatible image IO layer, I would be all for that. I'd be happy to donate "PIL-lite" as a starting point. Now, the file IO in PIL is a bit circuitous -- files are initially read by pure-Python code that determines the file type, etc. This information is then passed to (brittle and ugly) C code to unpack and swizzle the bits as necessary, and pack them into the PIL structs in memory. I think that basically all of what PIL does, bit-twiddling-wise, could be done with numpy. So really, what's needed is to take the pure-Python "file format reading" functionality from PIL (with my modifications thereof to handle 16-bit files better, and Stéfan and Sebastian's modifications for other functionality, etc), and then attach it to a layer that uses Python and numpy to actually read the bits out of the files and directly into numpy arrays. I've been meaning to do this for a while, but just haven't gotten around to it. I think it will be a surprisingly small amount of code needed around PIL's python file format readers. Zach From lbolla at gmail.com Fri Apr 18 11:03:42 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Fri, 18 Apr 2008 17:03:42 +0200 Subject: [SciPy-user] Enthought installation Message-ID: <80c99e790804180803x3e517862nccda9320728c17d0@mail.gmail.com> Hi all! I'm trying to install the Enthought distribution on my windows box, with the following command: enstaller enthon ets After having downloaded all the packages, the installation aborts with the following error: easy_install> Processing enthought.traits-2.0.4.tar.gz easy_install> Running enthought.traits-2.0.4\setup.py -q bdist_egg --dist-dir c:\docume~1\bollal~1\locals~1\temp\easy_install-7dblsa\enthought.traits-2.0.4\egg-dist-tmp-nxxivf error: Setup script exited with error: Python was built with Visual Studio 2003; extensions must be built with a compiler than can generate compatible binaries. Visual Studio 2003 was not found on this system. If you have Cygwin installed, you can try compiling with MingW32, by passing "-c mingw32" to setup.py. Obviously, I do not have Visual Studio, but I do have cygwin with the mingw32 compiler. I tried the simple command enstaller enthon ets -c mingw32, with no luck. Any hints? Lorenzo -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Fri Apr 18 11:08:03 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 18 Apr 2008 17:08:03 +0200 Subject: [SciPy-user] Enthought installation In-Reply-To: <80c99e790804180803x3e517862nccda9320728c17d0@mail.gmail.com> References: <80c99e790804180803x3e517862nccda9320728c17d0@mail.gmail.com> Message-ID: <20080418150803.GM14390@phare.normalesup.org> On Fri, Apr 18, 2008 at 05:03:42PM +0200, lorenzo bolla wrote: > I'm trying to install the Enthought distribution on my windows box, with > the following command: > enstaller enthon ets > After having downloaded all the packages, the installation aborts with the > following error: For help installing the Enthought distribution, you are better off asking on the enthought-dev mailing list. I could try to help you, but I don't use windows, so my help will be limited.
However, I can see that you are trying to install the source packages, which is making things harder. If you installed the binary packages it would be easier. How to do this with enstaller, I do not know. Cheers, Gaël From dikshie at gmail.com Fri Apr 18 11:14:30 2008 From: dikshie at gmail.com (dikshie) Date: Sat, 19 Apr 2008 00:14:30 +0900 Subject: [SciPy-user] Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' In-Reply-To: <91cf711d0804180641va95e215s91a201c4de394b4d@mail.gmail.com> References: <910e60e80804180339k156c8644y6e77c599e9e24337@mail.gmail.com> <91cf711d0804180641va95e215s91a201c4de394b4d@mail.gmail.com> Message-ID: <910e60e80804180814t1944338cnd9cb78c8b26ae664@mail.gmail.com> On Fri, Apr 18, 2008 at 10:41 PM, David Huard wrote: > Hi, > > Try editing the site.cfg file in the scipy directory with the following: > > [umfpack] > library_dirs=/usr/lib64 > include_dirs=/usr/include/suitesparse > > This is on FC8. You might have to change the include_dirs for your distro ($ > locate umfpack.h) Solved: 1. I removed suitesparse and octave. 2. Installed scipy. 3. Re-installed suitesparse and octave. -dikshie- From wesher at bu.edu Fri Apr 18 11:34:57 2008 From: wesher at bu.edu (Erik Sherwood) Date: Fri, 18 Apr 2008 11:34:57 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: Message-ID: <4D547BCC-B8BC-4822-9909-ABD8560A3C62@bu.edu> Thanks to everyone who has offered their thoughts about PyDSTool. To address some of the points that have been raised in the discussion so far: 1. Best integration method There is no single best integrator or class of integrator to use; optimality is determined by the problem and the computing resources available. Every 10 years or so, people have surveyed the integrator landscape, tested the performance of the available methods on an evolving battery of test problems, and come up with heuristics to guide users who want a black box solution to apply to their ODE/DAE problems. The Adams-type method used by scipy was considered the best all-around solution in the 1970s (Hull, et al., SIAM J. Num. Anal. 1972) if function evaluations were relatively expensive; otherwise Bulirsch and Stoer's extrapolation method was better. Hairer and Wanner's (H&W) Radau methods and Petzold's extensions to Gear's version of a variable-order Adams method basically seem to be the best general purpose methods for stiff systems right now (Cash, Proc. Royal Soc. 2003 & references). The advantage of H&W's Radau method is that it provides significantly higher accuracy for the same choices of tolerances than the DASSL (Petzold, Brennan) codes. The trade-off is that it requires more function calls per step, though it takes fewer steps. If the function calls are expensive, as they would be if done through callbacks to Python functions, this can be a significant disadvantage. For the kinds of problems Rob and I deal with, high accuracy with reasonable speed is essential, and our function evaluations are not excessively expensive, hence our preference for Radau. We have found the HW dopri853 implementation to be fast and robust for non-stiff (and some moderately stiff) systems and the Radau5 implementation to be able to handle almost all stiff problems that stymie dopri. 2. Integrator control I would say the codes provided by H&W provide essentially the same level of control as the codes in ODEPACK, based on my experience with the H&W codes and an inspection of the ODEPACK documentation.
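(For concreteness, the level of control scipy currently exposes through its ODEPACK/VODE wrapper looks roughly like the following sketch; the parameter values here are arbitrary illustrations:)

from scipy.integrate import ode

def rhs(t, y):
    # note: the ode class expects f(t, y), unlike odeint's f(y, t)
    return [y[1], -y[0]]

r = ode(rhs)
r.set_integrator('vode', method='adams', rtol=1e-8, atol=1e-10)
r.set_initial_value([1.0, 0.0], 0.0)
while r.successful() and r.t < 10.0:
    r.integrate(r.t + 0.5)      # advance in fixed increments
    print r.t, r.y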
3. Stopping events, auxiliary functions PyDSTool's event functions, which can be set as 'terminal' (halting integration) or 'non-terminal' (not stopping integration, but saved and returned to the caller), may be any C function which returns a double. The event is considered to have happened when the event function returns 0.0. There is a layer of code between the interface to Python and the interface to the H&W codes where the events are checked. After each step of the integration, the event handling routines check if any event functions have changed sign; if so, the exact point at which the sign change (and hence event) occurred is found using bisection and the dense output function provided by the H&W integrator, to user-specified accuracy, which can be set independently for each event function. Anne: This feature of PyDSTool would seem to be what you wanted, and does not require restarting the integration with a smaller step size. However, it is possible that your problem involves other features that I didn't take into consideration when designing this part of PyDSTool. If you could provide us the example, we would find it useful. At the end of an integration, the intermediate layer of the PyDSTool integration code will compute any function of the phase space variables, parameters, etc. you want on the computed trajectory and return the result. Performing this internally rather than in python makes it much faster. 4. Compilation Using compiled RHSs is essential for fast integration. I prefer to keep this feature in; for many applications, using callbacks to Python is simply too slow. That said, I can see a way to modify what I have written to support (1) an all-Python callback interface to the H&W integrators, eventually with event handling, etc. (2) a version of what PyDSTool has now which allows for a mix of C functions and callbacks to python functions for events, etc. Note that (1) will nullify the speed increases we currently get, particularly for Radau. Also, implementing (1) would likely be much easier than (2), though both would take me some time. 5. Misc I think it is better to return an entire trajectory rather than just the endpoint. Returning an object is what we do in PyDSTool, but it goes against the lightweight advantages of the odeint interface as it stands now. I think our Trajectory class is overkill for many people. Anne: Providing so much information at every trajectory point would require much revision of the current code, for which I see little additional benefit. Could you explain how you would use all of the additional information? Also, could you elaborate on your "running" mode? I do not understand what you are suggesting. I would like to provide an interface for all of the H&W codes that have dense output. This includes other methods for stiff integration, a delay differential equation integrator, and possibly a symplectic integrator. However, these integrators require the systems to be input with particular forms which do not necessarily conform so nicely to the odeint interface, for example. I would like to settle on an API/calling format before moving ahead with this project. I have explored possibilities for adding some automatic differentiation and Taylor series integration functionality to PyDSTool. There is an AD package for python, which I have not used. The applications I had in mind would probably require RHS compilation, along the lines of the Jorba & Zhou package, in order for it to run in reasonable time. At this point, AD is fairly low priority, though it would certainly give you (Anne) all of the information you are looking for along the trajectory. Taylor series integrators do not avoid stiffness problems, of course.
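(Coming back to the auxiliary functions of point 3: the after-the-fact version is easy to picture in numpy, though of course without the speed of doing it at the C level. A sketch, using scipy's existing odeint and a made-up energy-like observable:)

import numpy as np
from scipy import integrate

def vdp(y, t, mu=1.0):
    # van der Pol oscillator, odeint-style signature f(y, t)
    return [y[1], mu * (1.0 - y[0]**2) * y[1] - y[0]]

t = np.linspace(0.0, 20.0, 2001)
traj = integrate.odeint(vdp, [2.0, 0.0], t)

# an auxiliary function of the state, evaluated over the whole
# computed trajectory after the integration (vectorized in numpy)
aux = 0.5 * (traj[:, 0]**2 + traj[:, 1]**2)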
Please keep up the discussion, as it is very useful for us to get lots of input as we make design decisions. It would be nice to hear from people doing dynamical systems work, as well as those who use ODEs from a different perspective. The more examples we have of how people want to use the integrators, the better we can plan interfaces and features. Thanks again, Erik ----- William Erik Sherwood, Ph. D. Research Fellow Center for BioDynamics, Boston University 111 Cummington Street Boston, MA 02215 (617) 358-4684 From wesher at bu.edu Fri Apr 18 11:54:14 2008 From: wesher at bu.edu (Erik Sherwood) Date: Fri, 18 Apr 2008 11:54:14 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) Message-ID: > I don't think this is true; as far as I understand, the dopri codes > don't save any more than the Runge-Kutta method needs, which is a > single step. It is up to the user-supplied callback to save any > information, so I think you could just write a callback that after > each point checks if the time is the terminal time and only saves > that point. > > I will try a proof of concept of this. > > Gabriel Gabriel is correct about this. Erik ----- William Erik Sherwood, Ph. D. Research Fellow Center for BioDynamics, Boston University 111 Cummington Street Boston, MA 02215 (617) 358-4684 From ggellner at uoguelph.ca Fri Apr 18 12:26:51 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Fri, 18 Apr 2008 12:26:51 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: Message-ID: <20080418162651.GA4966@basestar> On Fri, Apr 18, 2008 at 11:54:14AM -0400, Erik Sherwood wrote: > > I don't think this is true; as far as I understand, the dopri codes > > don't save any more than the Runge-Kutta method needs, which is a > > single step. It is up to the user-supplied callback to save any > > information, so I think you could just write a callback that after > > each point checks if the time is the terminal time and only saves > > that point. > > > > I will try a proof of concept of this. > > > > Gabriel > > Gabriel is correct about this. > Erik Yes, I have a proof of concept at: http://www.mudskipper.ca/dopri5.zip Which is the mentioned cython prototype I have been working on. Having just the end output is a single if statement, though like the others I am not sure this is ever what I would want . . . Gabriel > ----- > William Erik Sherwood, Ph. D. > Research Fellow > Center for BioDynamics, Boston University > 111 Cummington Street > Boston, MA 02215 > (617) 358-4684 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From peridot.faceted at gmail.com Fri Apr 18 13:06:01 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 18 Apr 2008 13:06:01 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: <4D547BCC-B8BC-4822-9909-ABD8560A3C62@bu.edu> References: <4D547BCC-B8BC-4822-9909-ABD8560A3C62@bu.edu> Message-ID: On 18/04/2008, Erik Sherwood wrote: > For the kinds of problems Rob and I deal with, high accuracy with > reasonable speed is essential, and our function evaluations are not > excessively expensive, hence our preference for Radau.
We have found > the HW dopri853 implementation to be fast and robust for non-stiff > (and some moderately stiff) systems and the Radau5 implementation to > be able to handle almost all stiff problems that stymie dopri. Just to check: radau and dopri853 require you to know whether your problem is stiff or not? An extremely convenient feature of the ODEPACK integrator is that it can switch between stiff and non-stiff solvers as needed. > 3. Stopping events, auxiliary functions > > PyDSTool's event functions, which can be set as 'terminal' (halting > integration) or 'non-terminal' (not stopping integration, but saved > and returned to the caller), may be any C function which returns a > double. The event is considered to have happened when the event > function returns 0.0. There is a layer of code between the interface > to Python and the interface to the H&W codes where the events are > checked. After each step of the integration, the event handling > routines check if any event functions have changed sign; if so, the > exact point at which the sign change (and hence event) occurred is > found using bisection and the dense output function provided by the > H&W integrator, to user-specified accuracy, which can be set > independently for each event function. > > Anne: This feature of PyDSTool would seem to be what you wanted, and > does not require restarting the integration with a smaller step size. > However, it is possible that your problem involves other features that > I didn't take into consideration when designing this part of PyDSTool. > If you could provide us the example, we would find it useful. This does seem like just what I wanted; the only concern is that it be able to handle situations where the RHS cannot be evaluated outside the domain. (The specific system I was using involved tracing rays around a black hole. Far from the hole I used Schwarzschild coordinates, but these are singular as you cross the event horizon. So I wanted to stop and switch coordinates when I got to, say, R=3M. Unfortunately the integrators kept trying to take steps down past R=2M, where the RHS blew up.) The reason I suggested reducing step sizes in such a situation is because with most algorithms, there simply isn't the RHS information available to take a normal-sized step. > At the end of an integration, the intermediate layer of the PyDSTool > integration code will compute any function of the phase space > variables, parameters, etc. you want on the computed trajectory and > return the result. Performing this internally rather than in python > makes it much faster. Does the integrator take auxiliary functions into account when deciding on step sizes? That is, suppose my auxiliary function has oscillatory behaviour for certain values; will the integrator notice and take smaller steps there? > Anne: Providing so much information at every trajectory point would > require much revision of the current code, for which I see little > additional benefit. Could you explain how you would use all of the > additional information? Also, could you elaborate on your "running" > mode? I do not understand what you are suggesting. Most of the extra information I had in mind was intended to reduce memory usage and improve smoothness of the (interpolated) trajectory. Since the trajectory is the result of solving an ODE, it will normally be at least differentiable, but linear interpolation will produce a non-differentiable trajectory. I can envision many situations where the derivative of the objective function is wanted. This can at worst be obtained by evaluating the RHS, but if the integrator is internally working with something that is locally a polynomial, ideally the trajectory would follow that. Using polynomials, at least to take the derivative into account, should reduce the density with which one must sample the trajectory.
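For instance, keeping the value and first derivative at each step endpoint is already enough for a C^1 interpolant. A minimal sketch of the cubic Hermite piece (standard basis functions, for scalar t within one step [t0, t1]):

def cubic_hermite(t, t0, t1, y0, y1, dy0, dy1):
    # Cubic Hermite interpolant matching the values (y0, y1) and first
    # derivatives (dy0, dy1) at the two ends of the step.
    h = t1 - t0
    s = (t - t0) / h
    h00 = (1.0 + 2.0 * s) * (1.0 - s) ** 2
    h10 = s * (1.0 - s) ** 2
    h01 = s ** 2 * (3.0 - 2.0 * s)
    h11 = s ** 2 * (s - 1.0)
    return h00 * y0 + h * h10 * dy0 + h01 * y1 + h * h11 * dy1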
The other information, on convergence, I suggested mostly so that users could evaluate the trajectory and look for places where the solution risked being inaccurate. The current ODEINT provides a fair amount of such instrumentation, in its messy FORTRAN way. The "running" mode I suggested is based on the way ODEPACK is designed to be called from FORTRAN: one call initializes the integrator, then it maintains internal state. It can be told to advance either one step (of its choice) or a fixed amount (obtained by interpolation based on the solution). The result can then be "walked" forward, keeping the integrator's internal state (derivatives, appropriate step size, knowledge about stiffness) at each step. This kind of calling convention is useful for (say) particle systems, where for every frame of an animation one wants to advance the system by a thirtieth of a second, continuing indefinitely and forgetting the past. > I would like to provide an interface for all of the H&W codes that > have dense output. This includes other methods for stiff integration, > a delay differential equation integrator, and possibly a symplectic > integrator. However, these integrators require the systems to be input > with particular forms which do not necessarily conform so nicely to > the odeint interface, for example. I would like to settle on an API/ > calling format before moving ahead with this project. The odeint interface is, frankly, a disaster. I have had to abuse it every time I needed to solve an ODE. The underlying ODEPACK interface is fairly horrible, but I think it's valuable to look at it because it's seen a lot of use and has been adjusted to be able to do all the various things that users want from it. Anne From wnbell at gmail.com Fri Apr 18 13:40:59 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 18 Apr 2008 12:40:59 -0500 Subject: [SciPy-user] scipy.sparse -= In-Reply-To: <4dacb2560804171443q2329f7b7ya2ebe14ac9d42aac@mail.gmail.com> References: <4dacb2560804171443q2329f7b7ya2ebe14ac9d42aac@mail.gmail.com> Message-ID: On Thu, Apr 17, 2008 at 4:43 PM, Joseph Turian wrote: > Is there a -= for scipy.sparse matrices? Not currently. Unless the sparsity patterns of A and B in A -= B are the same, there's little benefit in supporting in-place arithmetic. If A and B are CSR or CSC matrices with the same index arrays (i.e. A.indptr == B.indptr and A.indices == B.indices) then A -= B is equivalent to A.data -= B.data -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From pwang at enthought.com Fri Apr 18 14:43:19 2008 From: pwang at enthought.com (Peter Wang) Date: Fri, 18 Apr 2008 13:43:19 -0500 Subject: [SciPy-user] Enthought installation In-Reply-To: <80c99e790804180803x3e517862nccda9320728c17d0@mail.gmail.com> References: <80c99e790804180803x3e517862nccda9320728c17d0@mail.gmail.com> Message-ID: <94F8C3D7-398C-4A07-8DE9-BFDB5DBD728A@enthought.com> On Apr 18, 2008, at 10:03 AM, lorenzo bolla wrote: > Hi all!
> I'm trying to install the Enthought distribution on my windows box, > with the following command: > enstaller enthon ets > > After having downloaded all the packages, the installation aborts > with the following error: > Obviously, I do not have Visual Studio, but I do have cygwin with > mingw32 compiler. > I tried the simple command: enstaller enthon ets -c mingw32, with no > luck. You should create a pydistutils.cfg file to let distutils know that you want to use mingw32 to build extensions. Here are the steps: 1. Create a c:\documents and settings\USERNAME\pydistutils.cfg file (where USERNAME is your login) with the following contents: [build] compiler=mingw32 2. Create the new environment variable HOME and set it to the value c:\docume~1\USERNAME where USERNAME is your login name. 3. Start a new instance of cmd.exe. Let me know if the above doesn't work for some reason. Also, as Gael mentioned, questions regarding enthought-specific stuff will get more visibility if you ask them on the enthought-dev mailing list. :) -Peter From ggellner at uoguelph.ca Fri Apr 18 15:22:38 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Fri, 18 Apr 2008 15:22:38 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: <4D547BCC-B8BC-4822-9909-ABD8560A3C62@bu.edu> Message-ID: <20080418192238.GA6937@basestar> On Fri, Apr 18, 2008 at 01:06:01PM -0400, Anne Archibald wrote: > On 18/04/2008, Erik Sherwood wrote: > > > For the kinds of problems Rob and I deal with, high accuracy with > > reasonable speed is essential, and our function evaluations are not > > excessively expensive, hence our preference for Radau. We have found > > the HW dopri853 implementation to be fast and robust for non-stiff > > (and some moderately stiff) systems and the Radau5 implementation to > > be able to handle almost all stiff problems that stymie dopri. > > Just to check: radau and dopri853 require you to know whether your > problem is stiff or not? An extremely convenient feature of the > ODEPACK integrator is that it can switch between stiff and non-stiff > solvers as needed. > True, using the radau and dopri codes is much more like using the Matlab codes: you must explicitly choose the method. > > Anne: Providing so much information at every trajectory point would > > require much revision of the current code, for which I see little > > additional benefit. Could you explain how you would use all of the > > additional information? Also, could you elaborate on your "running" > > mode? I do not understand what you are suggesting. > > Most of the extra information I had in mind was intended to reduce > memory usage and improve smoothness of the (interpolated) trajectory. > Since the trajectory is the result of solving an ODE, it will normally > be at least differentiable, but linear interpolation will produce a > non-differentiable trajectory. I can envision many situations where > the derivative of the objective function is wanted. This can at worst > be obtained by evaluating the RHS, but if the integrator is internally > working with something that is locally a polynomial, ideally the > trajectory would follow that. Using polynomials, at least to take the > derivative into account, should reduce the density with which one must > sample the trajectory. > Does scipy have the required interpolation routines? Could we do just as you say, save the information and then have the option to interpolate as needed?
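For plain cubic splines, something along these lines already works today (a sketch using only splrep/splev from scipy.interpolate; not the value-plus-derivative spline being discussed):

import numpy as np
from scipy import integrate, interpolate

def rhs(y, t):
    return [y[1], -y[0]]          # simple harmonic oscillator

t = np.linspace(0.0, 10.0, 50)
y = integrate.odeint(rhs, [1.0, 0.0], t)

# fit an interpolating (s=0) cubic spline to the first component,
# then evaluate it and its first derivative on a finer grid
tck = interpolate.splrep(t, y[:, 0], s=0)
t_fine = np.linspace(0.0, 10.0, 500)
y_fine = interpolate.splev(t_fine, tck)
dy_fine = interpolate.splev(t_fine, tck, der=1)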
> The other information, on convergence, I suggested mostly so that > users could evaluate the trajectory and look for places where the > solution risked being inaccurate. The current ODEINT provides a fair > amount of such instrumentation, in its messy FORTRAN way. > I think outputting this information is necessary; if the size is an issue we can just have a flag that turns it off, but in general I think having convergence codes available is essential. > The "running" mode I suggested is based on the way ODEPACK is designed > to be called from FORTRAN: one call initializes the integrator, then > it maintains internal state. It can be told to advance either one step > (of its choice) or a fixed amount (obtained by interpolation based on > the solution). The result can then be "walked" forward, keeping the > integrator's internal state (derivatives, appropriate step size, > knowledge about stiffness) at each step. This kind of calling > convention is useful for (say) particle systems, where for every frame > of an animation one wants to advance the system by a thirtieth of a > second, continuing indefinitely and forgetting the past. > The problem with this of course is that it would be really slow in python. But I don't see any reason why we can't do this if the user wants. I imagine this might be a nice use of a generator. > > I would like to provide an interface for all of the H&W codes that > > have dense output. This includes other methods for stiff integration, > > a delay differential equation integrator, and possibly a symplectic > > integrator. However, these integrators require the systems to be input > > with particular forms which do not necessarily conform so nicely to > > the odeint interface, for example. I would like to settle on an API/ > > calling format before moving ahead with this project. > > The odeint interface is, frankly, a disaster. I have had to abuse it > every time I needed to solve an ODE. The underlying ODEPACK interface > is fairly horrible, but I think it's valuable to look at it because > it's seen a lot of use and has been adjusted to be able to do all the > various things that users want from it. > Could you give some examples why it is a disaster? As I would like to avoid repeating said mistakes. (Above and beyond the example of stepping into regions that might be undefined . . .) Gabriel From peridot.faceted at gmail.com Fri Apr 18 15:44:02 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 18 Apr 2008 21:44:02 +0200 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: <20080418192238.GA6937@basestar> References: <4D547BCC-B8BC-4822-9909-ABD8560A3C62@bu.edu> <20080418192238.GA6937@basestar> Message-ID: On 18/04/2008, Gabriel Gellner wrote: > On Fri, Apr 18, 2008 at 01:06:01PM -0400, Anne Archibald wrote: > > Most of the extra information I had in mind was intended to reduce > > memory usage and improve smoothness of the (interpolated) trajectory. > > Since the trajectory is the result of solving an ODE, it will normally > > be at least differentiable, but linear interpolation will produce a > > non-differentiable trajectory. I can envision many situations where > > the derivative of the objective function is wanted. This can at worst > > be obtained by evaluating the RHS, but if the integrator is internally > > working with something that is locally a polynomial, ideally the > > trajectory would follow that.
Using polynomials, at least to take the > > derivative into account, should reduce the density with which one must > > sample the trajectory. > > Does scipy have the required interpolation routines? Could we do just as you > say, save the information and then have the option to interpolate as needed? I think that the correct tool for this is a spline whose values and (first?) derivatives are specified at each control point; higher derivatives would be determined by smoothness and minimum bending energy constraints as usual. I don't think scipy has tools for constructing splines like this, but I would be willing to take a swing at writing them using scipy's banded matrix solver. My idea would be to produce a data structure that could be evaluated using scipy.splev. This gives, for free, C implementations of evaluation, integration, and derivatives. > > The odeint interface is, frankly, a disaster. I have had to abuse it > > every time I needed to solve an ODE. The underlying ODEPACK interface > > is fairly horrible, but I think it's valuable to look at it because > > it's seen a lot of use and has been adjusted to be able to do all the > > various things that users want from it. > > Could you give some examples why it is a disaster? As I would like to avoid > repeating said mistakes. (Above and beyond the example of stepping into > regions that might be undefined . . .) My complaints with it, apart from the mass of impenetrably-named keyword arguments and "infodict" keys, have to do with control of the t values. Since you can't stop at a place based on the y value, I usually end up having to use it in a "running" mode, which it supports rather badly. If I want to evaluate the solution at multiple points, it is usually because I want an approximation to the true solution function, but such an approximation should normally have points distributed in a way that takes into account the behaviour of the function - more where it behaves in a complicated fashion, and fewer where it is simple. On the other hand, for the lightweight uses I've had, it's quite inconvenient to have to prepend the initial time to the list (often only one element long) of times at which I want to evaluate the result. Anne From nmb at wartburg.edu Fri Apr 18 11:53:09 2008 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Fri, 18 Apr 2008 15:53:09 +0000 (UTC) Subject: [SciPy-user] PyDSTool and SciPy (integrators) References: <20080418002813.GA1752@basestar> Message-ID: Rob Clewley <rob.clewley at gmail.com> writes: > > I think the right return format depends what people want from the > > solution. I see two major kinds of problem one might use an ODE solver > > for: > > > > * Integrate out to some stopping point and give the values of > > dependent and independent variable there. > > * Integrate out to some stopping point and give back a representation > > of the solution. > > > > (The first case can in principle be obtained by evaluating the second > > at the endpoint, but at the cost of replacing O(1) space by O(n) > > space.) > > I personally do not find the former very useful, as I almost always > want to know something about the intermediate points. My understanding > is the dopri and radau integrators inherently return the whole > solution, so maybe an additional calling option supporting the former > usage could just throw everything out but the last point before the > whole solution gets copied into a numpy object. I use both of these cases in different situations. The first is useful in contexts where the analysis is in terms of maps (e.g. data assimilation), but the underlying dynamics is continuous in time. I like the ability to wrap up an integration into a function that returns the solution after a specified time interval. I tend to use the second in cases where the dynamics of the solution *is* the problem under investigation. I think that a DS package should support both of these modes of use: integration over a time interval as a simple function and generation of solution objects. It is certainly possible to implement one in terms of the other and probably worth a convenience function to that effect.
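Such a convenience function could be tiny; a sketch on top of odeint as it exists now (the name flow_map is hypothetical, not an existing scipy function):

import numpy as np
from scipy import integrate

def flow_map(rhs, y0, T):
    # Integrate over the fixed interval [0, T] and keep only the
    # endpoint, wrapping the solver as a plain map y0 -> y(T).
    sol = integrate.odeint(rhs, y0, np.array([0.0, T]))
    return sol[-1]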
> > For the second it is clear that some kind of object to represent the > > solution is a good idea. What exactly it should contain is open for > > discussion; I envision keeping estimated function values and > > derivatives at each step, along with convergence information, and > > providing an interface to view it as either an array of values or a > > function akin to our interpolating spline objects. Convergence > > information and auxiliary function values may also be useful. Even for > > the first, simple case, I think returning an object is a more > > manageable way to provide convergence information. > > > Are there other styles of using an ODE solver that people want? > > Perhaps a "running" mode, where the solver keeps track of whatever > > state information it has and is asked to advance by a certain amount > > (say, because more experimental data has become available)? > > There are two issues that need separating here: > > 1) the short term refactoring, reuse and adaptation of existing > PyDSTool code for scipy > 2) the long term design goals for supporting ODEs in scipy > > Much as I want to keep (2) in mind, I think getting some short-term > progress on improving scipy's ODE support based on existing code would > be good. I foresee it taking a long time to get ODE support truly > "right" for as many users as possible, and in the meantime I think > some of our code and other short-term solutions would be an appropriate > stopgap. Re (1): It looks to me as if the Trajectory object would be useful to include in SciPy, particularly as the current functionality can be implemented on top of it. We should continue to provide the current interface to ODE solvers (all of them?), while also providing a more capable object-oriented interface that we will encourage people to use for new code. I also think that adding a variety of integrators is useful, particularly if those integrators are accessible through a uniform interface (similar to what OpenOpt has done for optimizers). It seems that Trajectories, Coordinates and Models provide the necessary abstractions to unify all of the existing solvers. In addition, these objects should be able to abstract whether or not these are constructed through compiling generated C code, or by defining Python objects. -Neil From contact at pythonxy.com Sat Apr 19 04:00:33 2008 From: contact at pythonxy.com (Python(x,y)) Date: Sat, 19 Apr 2008 10:00:33 +0200 Subject: [SciPy-user] [ Python(x,y) ] - Upgrading to future releases Message-ID: <4809A6A1.8030702@pythonxy.com> Hi all, In a few days, we will release the first patch for Python(x,y) 1.1.1. The idea is to allow users to upgrade Python(x,y) without having to download the whole package each time. So, you will be able to download typically the last three patches: "Patch 1.1.2 for Python(x,y) 1.1.1", "Patch 1.1.3 for Python(x,y) 1.1.2", and so on. PR -- P.
Raybaut Python(x,y) http://www.pythonxy.com From lopmart at gmail.com Sat Apr 19 17:00:37 2008 From: lopmart at gmail.com (Jose Lopez) Date: Sat, 19 Apr 2008 14:00:37 -0700 Subject: [SciPy-user] help about solver sparse matrix Message-ID: <4eeef9d40804191400m29928db5m4ec72fdde133f8e2@mail.gmail.com> hi does the method linalg.cg (conjugate gradient) work with sparse matrices? where the sparse matrix is defined with "Matrix=sparse.lil_matrix(((r*c)-1,(r*c)-1),float)", r = # rows and c = # cols and what parameters does the method return? for example, Scilab returns the number of iterations, etc. thanks for your help -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Sat Apr 19 17:04:27 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sat, 19 Apr 2008 17:04:27 -0400 Subject: [SciPy-user] help about solver sparse matrix In-Reply-To: <4eeef9d40804191400m29928db5m4ec72fdde133f8e2@mail.gmail.com> References: <4eeef9d40804191400m29928db5m4ec72fdde133f8e2@mail.gmail.com> Message-ID: <4633D489-5F94-4A16-89FE-5746E0907E5F@cs.toronto.edu> Use scipy.sparse.linalg.cgs, but convert your matrix to csc or csr first. On 19-Apr-08, at 5:00 PM, Jose Lopez wrote: > hi > > does the method linalg.cg (conjugate gradient) work with sparse matrices? > > where the sparse matrix is defined with > "Matrix=sparse.lil_matrix(((r*c)-1,(r*c)-1),float)", r = # rows and > c = # cols > > and what parameters does the method return? for example, Scilab > returns the number of iterations, etc. > > > thanks for your help > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Sat Apr 19 17:27:48 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 19 Apr 2008 17:27:48 -0400 Subject: [SciPy-user] Polynomial interpolation Message-ID: Hi, It appears that scipy does not have a facility for using the Lagrange polynomial to interpolate data. (Or did I miss it?) I am well aware of the numerical difficulties this poses, but it (and its generalization, the Hermite polynomial) does prove useful on occasion. I have written prototype implementations of two algorithms for evaluating this polynomial, and I'd like comments before submitting them for inclusion in scipy.
One implementation is based on Krogh 1970, "Efficient Algorithms for Polynomial Interpolation and Numerical Differentiation"; it allows the construction of Hermite polynomials (where some derivatives at each point may also be specified) and the evaluation of arbitrary derivatives. It is based on a Neville-like algorithm, so that it does O(n^2) work when constructed but only O(n) per point evaluated, or O(n^2) per point for which all derivatives must be evaluated. (n here is the degree of the polynomial.) The other implementation is based on Berrut and Trefethen 2004, "Barycentric Lagrange Interpolation". This implementation uses a rewriting of the Lagrange polynomial as a rational function, and is efficient and numerically stable. It also allows efficient updating of the y values at which interpolation occurs, as well as addition of new x values. It does not support evaluation of derivatives or construction of Hermite polynomials. Finally, I have written a "PiecewisePolynomial" class for constructing splines in which each piece may have arbitrary degree, and for which the function values and some derivatives are specified at each knot. The intent is that this be used to represent solutions obtained from ODE solvers, using one polynomial for each solver step, with the same order as the solver's internal polynomial solution, and with (some) derivatives matching at the ends. Such a Trajectory object would contain additional information about the solution (for example stiffness or error estimates) beyond what is in PiecewisePolynomial. (I tried implementing Trajectory using splines, which are evaluated in compiled code, but their maximum degree is 5 while the solvers will go up to degree 12.) They all need work; in particular, efficiency would be improved by making the y values vectors, error checking needs to be more robust, and documentation is not in reST form. Ultimately, too, the evaluation functions at least should be written in a compiled language (cython?). But I thought I'd solicit comments on the code first - is the object-oriented design cumbersome? Is including the algorithm in the name confusing to users? Is the calling convention for Hermite polynomials too confusing or error-prone? Thanks, Anne -------------- next part -------------- A non-text attachment was scrubbed... Name: polyint.py Type: text/x-python Size: 9798 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_polyint.py Type: text/x-python Size: 2859 bytes Desc: not available URL: From ggellner at uoguelph.ca Sat Apr 19 17:41:42 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Sat, 19 Apr 2008 17:41:42 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: Message-ID: <20080419214142.GA12913@basestar> > They all need work, in particular, efficiency would be improved by > making the y values vectors, error checking needs to be more robust, > and documentation is not in reST form. Ultimately, too, the evaluation > functions at least should be written in a compiled language (cython?). > But I thought I'd solicit comments on the code first - is the > object-oriented design cumbersome? Is including the algorithm in the > name confusing to users? Is the calling convention for Hermite > polynomials too confusing or error-prone? > I really like the OO design. If you continue down this route, would you make the class new-style (inherit from object)? And why not use properties for set_yi? That would make it all the sweeter.
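For concreteness, the second (barycentric) form you mention reduces to a few lines; a rough textbook sketch, not the attached polyint.py:

import numpy as np

def bary_weights(x):
    # w_j = 1 / prod_{k != j} (x_j - x_k), computed once per node set
    x = np.asarray(x, dtype=float)
    w = np.ones(len(x))
    for j in range(len(x)):
        for k in range(len(x)):
            if k != j:
                w[j] /= (x[j] - x[k])
    return w

def bary_eval(t, x, y, w):
    # second ("true") form of the barycentric formula; an exact node
    # hit is returned directly to avoid dividing by zero
    num, den = 0.0, 0.0
    for xj, yj, wj in zip(x, y, w):
        if t == xj:
            return yj
        c = wj / (t - xj)
        num += c * yj
        den += c
    return num / den

# e.g. nodes [0,1,2] with y = x**2 reproduce the parabola:
# bary_eval(0.5, [0,1,2], [0,1,4], bary_weights([0,1,2])) -> 0.25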
Gabriel

From lopmart at gmail.com  Sat Apr 19 19:04:01 2008
From: lopmart at gmail.com (Jose Lopez)
Date: Sat, 19 Apr 2008 16:04:01 -0700
Subject: [SciPy-user] help about solver sparse matrix
In-Reply-To: <4633D489-5F94-4A16-89FE-5746E0907E5F@cs.toronto.edu>
References: <4eeef9d40804191400m29928db5m4ec72fdde133f8e2@mail.gmail.com> <4633D489-5F94-4A16-89FE-5746E0907E5F@cs.toronto.edu>
Message-ID: <4eeef9d40804191604r562b9bc1gcccb32c17713875e@mail.gmail.com>

thanks, but how do I get the number of iterations from the result?

On Sat, Apr 19, 2008 at 2:04 PM, David Warde-Farley wrote:
> Use scipy.sparse.linalg.cgs, but convert your matrix to csc or csr first.
> On 19-Apr-08, at 5:00 PM, Jose Lopez wrote:
>
> hi
>
> Does the linalg.cg method (conjugate gradient) work with a sparse matrix,
> where the sparse matrix is defined with
> "Matrix=sparse.lil_matrix(((r*c)-1,(r*c)-1),float)", r = # rows and c = # cols?
>
> And what parameters does the method return? For example, Scilab returns
> the number of iterations, etc.
>
> thanks for your help
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dwf at cs.toronto.edu  Sat Apr 19 19:49:45 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Sat, 19 Apr 2008 19:49:45 -0400
Subject: [SciPy-user] help about solver sparse matrix
In-Reply-To: <4eeef9d40804191604r562b9bc1gcccb32c17713875e@mail.gmail.com>
References: <4eeef9d40804191400m29928db5m4ec72fdde133f8e2@mail.gmail.com> <4633D489-5F94-4A16-89FE-5746E0907E5F@cs.toronto.edu> <4eeef9d40804191604r562b9bc1gcccb32c17713875e@mail.gmail.com>
Message-ID: <063947F1-4EA7-4061-8003-80ADAFC96C12@cs.toronto.edu>

On 19-Apr-08, at 7:04 PM, Jose Lopez wrote:
> thanks,
>
> but how do I get the number of iterations from the result?

There's no explicit support for this in the cgs method, but it does let you pass in a function that gets called after each iteration. You could therefore use callbacks to keep track of how many iterations your solution took.

Take a look at the following example: I create an object with a method "call" that updates its internal 'cnt' variable every time it gets called. (I commented out the print statement because it'd be kind of long in this e-mail.)

In [153]: from scipy import *

In [154]: class Counter:
   .....:     def __init__(self):
   .....:         self.cnt = 0
   .....:     def call(self, arg):
   .....:         self.cnt += 1
   .....:         # print "Iteration %3d: sqnorm=%5.4e" % (self.cnt, sum(arg**2))
   .....:

In [155]: c = Counter()

In [156]: A = abs(randn(50,50))

In [157]: A = sparse.csr_matrix(A + A.T)

In [158]: y = splinalg.cgs(A, randn(50), callback=c.call)

In [159]: c.cnt
Out[159]: 82

After that, stored inside the Counter object is the quantity you're looking for.
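Incidentally, the same callback pattern works with the plain conjugate gradient solver. A minimal sketch, assuming the scipy.sparse.linalg layout of current SciPy (the module path has moved around between releases), which also shows the lil-to-csr conversion recommended above; the tridiagonal test matrix here is just a stand-in for a symmetric positive definite system:

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import cg

    n = 50
    M = sparse.lil_matrix((n, n))
    for i in range(n):
        M[i, i] = 4.0                  # diagonally dominant, so SPD
        if i + 1 < n:
            M[i, i + 1] = 1.0
            M[i + 1, i] = 1.0
    A = M.tocsr()                      # convert from lil before solving

    count = [0]
    def callback(xk):                  # invoked once per iteration with the iterate
        count[0] += 1

    x, info = cg(A, np.ones(n), callback=callback)
    print count[0], info               # info == 0 signals convergence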
Regards, David

From dominique.orban at gmail.com  Sat Apr 19 20:31:27 2008
From: dominique.orban at gmail.com (Dominique Orban)
Date: Sat, 19 Apr 2008 20:31:27 -0400
Subject: [SciPy-user] help about solver sparse matrix
In-Reply-To: <063947F1-4EA7-4061-8003-80ADAFC96C12@cs.toronto.edu>
References: <4eeef9d40804191400m29928db5m4ec72fdde133f8e2@mail.gmail.com> <4633D489-5F94-4A16-89FE-5746E0907E5F@cs.toronto.edu> <4eeef9d40804191604r562b9bc1gcccb32c17713875e@mail.gmail.com> <063947F1-4EA7-4061-8003-80ADAFC96C12@cs.toronto.edu>
Message-ID: <8793ae6e0804191731x5bbab56ej74e630dbed840c5e@mail.gmail.com>

On Sat, Apr 19, 2008 at 7:49 PM, David Warde-Farley wrote:
> On 19-Apr-08, at 7:04 PM, Jose Lopez wrote:
> > thanks,
> >
> > but how do I get the number of iterations from the result?
>
> There's no explicit support for this in the cgs method, but it does

This is just to point out that Jose asked for the conjugate gradient (cg) method, which is not the same as the conjugate gradient squared (cgs) method. If the coefficient matrix is symmetric and positive definite, you'd want to use cg instead of cgs, as cgs will be more sensitive to ill conditioning.

Dominique

From hoytak at gmail.com  Sat Apr 19 23:00:27 2008
From: hoytak at gmail.com (Hoyt Koepke)
Date: Sat, 19 Apr 2008 20:00:27 -0700
Subject: [SciPy-user] bug in scipy.stats ?
Message-ID: <4db580fd0804192000i6136e2ccp3c2aff08b4998de6@mail.gmail.com>

Been playing around with scipy.stats a little bit and noticed some puzzling behavior with the gamma function. Is this a bug?

In [1]: import scipy.stats as st

In [2]: g = st.gamma(a=1, b=1)

In [3]: g.rvs()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/hoytak/<ipython console> in <module>()
/opt/sysroot/lib/python2.5/site-packages/scipy/stats/distributions.py in rvs(self, size)
    115         kwds = self.kwds
    116         kwds.update({'size':size})
--> 117         return self.dist.rvs(*self.args,**kwds)
    118     def sf(self,x):
    119         return self.dist.sf(x,*self.args,**self.kwds)
/opt/sysroot/lib/python2.5/site-packages/scipy/stats/distributions.py in rvs(self, *args, **kwds)
    444             size = (size,)
    445
--> 446         vals = reshape(self._rvs(*args),size)
    447         if scale == 0:
    448             return loc*ones(size,'d')
TypeError: _rvs() takes exactly 2 arguments (1 given)

In [4]: g.rvs(0.1)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/hoytak/<ipython console> in <module>()
/opt/sysroot/lib/python2.5/site-packages/scipy/stats/distributions.py in rvs(self, size)
    115         kwds = self.kwds
    116         kwds.update({'size':size})
--> 117         return self.dist.rvs(*self.args,**kwds)
    118     def sf(self,x):
    119         return self.dist.sf(x,*self.args,**self.kwds)
/opt/sysroot/lib/python2.5/site-packages/scipy/stats/distributions.py in rvs(self, *args, **kwds)
    444             size = (size,)
    445
--> 446         vals = reshape(self._rvs(*args),size)
    447         if scale == 0:
    448             return loc*ones(size,'d')
TypeError: _rvs() takes exactly 2 arguments (1 given)

In [5]: g.rvs(0.1, 0.1)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/hoytak/<ipython console> in <module>()
TypeError: rvs() takes at most 2 arguments (3 given)

At the very least, this is quite confusing. There's no obvious way to get around either _rvs() or rvs() raising an exception. Am I missing something? Thanks!
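For reference, the replies further down explain that gamma's second parameter is the scale (theta) and is spelled scale, not b; a minimal sketch of the calls that do work:

    import scipy.stats as st

    a, b = 1.0, 1.0
    g = st.gamma(a, scale=b)        # frozen RV; scale plays the role of b = theta
    one = g.rvs()                   # a single draw
    several = g.rvs(3)              # three draws
    # or, without freezing:
    more = st.gamma.rvs(a, scale=b, size=10)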
--Hoyt

--
+++++++++++++++++++++++++++++++++++
Hoyt Koepke
UBC Department of Computer Science
http://www.cs.ubc.ca/~hoytak/
hoytak at gmail.com
+++++++++++++++++++++++++++++++++++

From berthe.loic at gmail.com  Sun Apr 20 04:00:40 2008
From: berthe.loic at gmail.com (LB)
Date: Sun, 20 Apr 2008 01:00:40 -0700 (PDT)
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References:
Message-ID:

> It appears that scipy does not have a facility for using the Lagrange
> polynomial to interpolate data. (Or did I miss it?) I am well aware of
> the numerical difficulties this poses, but it (and its generalization,
> the Hermite polynomial) does prove useful on occasion. I have written
> prototype implementations of two algorithms for evaluating this
> polynomial, and I'd like comments before submitting them for inclusion
> in scipy.

In fact, I've seen that there is a lagrange function in scipy.interpolate:

In [5]: from scipy import interpolate

In [6]: interpolate.lagrange?
Type:           function
Base Class:
String Form:
Namespace:      Interactive
File:           /usr/lib/python2.4/site-packages/scipy/interpolate/interpolate.py
Definition:     interpolate.lagrange(x, w)
Docstring:
    Return the Lagrange interpolating polynomial of the data-points (x,w)

In scipy 0.6, this seems to be broken, because numpy's poly1d is not imported at the top of the interpolate module:

In [16]: scipy.__version__
Out[16]: '0.6.0'

In [17]: x = array([1, 4, 5, 7])

In [18]: y = array([0, 1, 2, 1.5])

In [19]: poly = interpolate.lagrange(x,y)
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
/usr/lib/python2.4/site-packages/scipy/interpolate/interpolate.py in lagrange(x, w)
     28     """
     29     M = len(x)
---> 30     p = poly1d(0.0)
     31     for j in xrange(M):
     32         pt = poly1d(w[j])
NameError: global name 'poly1d' is not defined

This is tracked in ticket #626. I don't know why this function has been put aside; it does not appear in the scipy.interpolate docstring.

--
LB

From stefan at sun.ac.za  Sun Apr 20 13:18:40 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Sun, 20 Apr 2008 19:18:40 +0200
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References:
Message-ID: <9457e7c80804201018i4ce62906g3602f52754f6d114@mail.gmail.com>

Hi Anne

I'm surprised there haven't been more responses to this thread. I would certainly like to see this code included. There's been some other work on the fitpack2 module (we're still waiting for an update on that front from John Travers). We should also think of including Stineman interpolation (http://projects.scipy.org/pipermail/scipy-dev/2006-April/005652.html) and Lanczos/sinc interpolation.

Regards
Stéfan

On 19/04/2008, Anne Archibald wrote:
> Hi,
>
> It appears that scipy does not have a facility for using the Lagrange
> polynomial to interpolate data. (Or did I miss it?) I am well aware of
> the numerical difficulties this poses, but it (and its generalization,
> the Hermite polynomial) does prove useful on occasion. I have written
> prototype implementations of two algorithms for evaluating this
> polynomial, and I'd like comments before submitting them for inclusion
> in scipy.
>
> One implementation is based on Krogh 1970, "Efficient Algorithms for
> Polynomial Interpolation and Numerical Differentiation"; it allows the
> construction of Hermite polynomials (where some derivatives at each
> point may also be specified) and the evaluation of arbitrary
> derivatives.
> It is based on a Neville-like algorithm, so that it does
> O(n^2) work when constructed but only O(n) per point evaluated, or
> O(n^2) per point for which all derivatives must be evaluated. (n here
> is the degree of the polynomial.)
>
> The other implementation is based on Berrut and Trefethen 2004,
> "Barycentric Lagrange Interpolation". This implementation uses a
> rewriting of the Lagrange polynomial as a rational function, and is
> efficient and numerically stable. It also allows efficient updating of
> the y values at which interpolation occurs, as well as addition of new
> x values. It does not support evaluation of derivatives or
> construction of Hermite polynomials.
>
> Finally, I have written a "PiecewisePolynomial" class for constructing
> splines in which each piece may have arbitrary degree, and for which
> the function values and some derivatives are specified at each knot.
> The intent is that this be used to represent solutions obtained from
> ODE solvers, using one polynomial for each solver step, with the same
> order as the solver's internal polynomial solution, and with (some)
> derivatives matching at the ends. Such a Trajectory object would
> contain additional information about the solution (for example
> stiffness or error estimates) beyond what is in PiecewisePolynomial.
> (I tried implementing Trajectory using splines, which are evaluated in
> compiled code, but their maximum degree is 5 while the solvers will go
> up to degree 12.)
>
> They all need work, in particular, efficiency would be improved by
> making the y values vectors, error checking needs to be more robust,
> and documentation is not in reST form. Ultimately, too, the evaluation
> functions at least should be written in a compiled language (cython?).
> But I thought I'd solicit comments on the code first - is the
> object-oriented design cumbersome? Is including the algorithm in the
> name confusing to users? Is the calling convention for Hermite
> polynomials too confusing or error-prone?
>
> Thanks,
>
> Anne
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
>

From stefan at sun.ac.za  Sun Apr 20 13:42:24 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Sun, 20 Apr 2008 19:42:24 +0200
Subject: [SciPy-user] The IO library and image file formats -- compare with PIL
In-Reply-To:
References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com>
Message-ID: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com>

On 18/04/2008, Zachary Pincus wrote:
> I have my own "internal fork" of PIL that I've been calling "PIL-
> lite". I tore out everything except the file IO, and I fixed that to
> handle 16-bit files correctly on all endian machines, and to have a
> more robust array interface.
>
> If people wanted to make a proper "fork" of PIL into a numpy-
> compatible image IO layer, I would be all for that. I'd be happy to
> donate "PIL-lite" as a starting point. Now, the file IO in PIL is a
> bit circuitous -- files are initially read by pure-Python code that
> determines the file type, etc. This information is then passed to
> (brittle and ugly) C code to unpack and swizzle the bits as necessary,
> and pack them into the PIL structs in memory.

I would really try and avoid the forking route, if we could. Each extra dependency (e.g. libpng, libjpeg, etc.) is a potential build problem, and PIL already comes packaged everywhere.
My changes can easily be included in SciPy, rather than in PIL. Could we do the same for yours? Then we could rather build scipy.image (Travis' and Robert's colour-space codes can be incorporated there, as well?) on top of the PIL.

I'm really unhappy about the current state of ndimage. It's written in (Python API) C, so no one wants to touch the code. Much of it can be rewritten in equivalent pure Python, using modern NumPy constructs that weren't available to Peter. What we really need is to get knowledgeable people together for a week and hack on this (ndimage is an extremely useful module!), but I don't know when we're going to have that chance. Who fancies a visit to South Africa? :)

Cheers
Stéfan

From stefan at sun.ac.za  Sun Apr 20 14:21:38 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Sun, 20 Apr 2008 20:21:38 +0200
Subject: [SciPy-user] bug in scipy.stats ?
In-Reply-To: <4db580fd0804192000i6136e2ccp3c2aff08b4998de6@mail.gmail.com>
References: <4db580fd0804192000i6136e2ccp3c2aff08b4998de6@mail.gmail.com>
Message-ID: <9457e7c80804201121w64629b1dk7300d56dc97137d2@mail.gmail.com>

Hi Hoyt

On 20/04/2008, Hoyt Koepke wrote:
> Been playing around with scipy.stats a little bit and noticed some
> puzzling behavior with the gamma function. Is this a bug?
>
> In [1]: import scipy.stats as st
>
> In [2]: g = st.gamma(a=1, b=1)
>
> In [3]: g.rvs()

From the docstring:

    Alternatively, the object may be called (as a function) to fix
    the shape, location, and scale parameters returning a
    "frozen" continuous RV object:

    myrv = gamma(a,loc=0,scale=1)
    - frozen RV object with the same methods but holding the
      given shape, location, and scale fixed

For the gamma function:

b = theta = scale

So you can either do:

g = st.gamma(a, scale=b)
g.rvs(), g.rvs(3) etc.

or

g = st.gamma.rvs(a, scale=b)

Regards
Stéfan

From c-b at asu.edu  Sun Apr 20 16:58:10 2008
From: c-b at asu.edu (Christopher Brown)
Date: Sun, 20 Apr 2008 13:58:10 -0700
Subject: [SciPy-user] Passing numpy array to a C function
Message-ID: <480BAE62.2000004@asu.edu>

Hi List,

I am trying to pass a numpy array of float64's to a C function, and I am not having much luck. I have tried every example I could find, and based on everything I have read, this seems (to me) like it should work. Here's a minimal version of my function:

#include "Python.h"
#include "Numeric/arrayobject.h"

PyObject* my_c_function(PyObject* self, PyObject* args) {
    PyArrayObject *array;
    int i;
    if (!PyArg_ParseTuple(args, "O|i", &array, &i))
        return NULL;
    printf("length is: %5.0f\n",i);
    printf("type_num is: %3.0f\n",array->descr->type_num);
    return 0;
}

Here's what I get in ipython:

In [1]: import mymodule

In [2]: from scikits.audiolab import wavread

In [3]: targ = wavread('/home/code-breaker/ding.wav')

In [4]: buff = targ[0][:,0] # take the left channel of a stereo wavefile

In [5]: type(buff)
Out[5]: <type 'numpy.ndarray'>

In [6]: type(buff[0])
Out[6]: <type 'numpy.float64'>

In [7]: len(buff)
Out[7]: 20191

In [8]: buff
Out[8]: array([ -2.44140625e-04, -3.05175781e-05, 1.52587891e-04, ..., 0.00000000e+00, 0.00000000e+00, 0.00000000e+00])

In [9]: test = mymodule.my_c_function(buff,len(buff))
length is: -0
type_num is: -0

In [10]:

What am I doing wrong?
-- Chris

From python-ml at nn7.de  Sun Apr 20 17:00:51 2008
From: python-ml at nn7.de (Soeren Sonnenburg)
Date: Sun, 20 Apr 2008 23:00:51 +0200
Subject: [SciPy-user] Shogun - A Large Scale Machine Learning Toolbox
Message-ID: <1208725251.23244.11.camel@localhost>

SHOGUN ( http://www.shogun-toolbox.org ) is a large scale machine learning toolbox with a particular focus on Support Vector Machines (SVMs).

* It provides a generic SVM object interfacing to several different SVM implementations, among them the state of the art LibSVM, SVMLight, SVMLin and GPDT. Each of the SVMs can be combined with a variety of kernels. The toolbox not only provides efficient implementations of the most common kernels, like the Linear, Polynomial, Gaussian and Sigmoid Kernel, but also comes with a number of recent string kernels, such as the Locality Improved, Fisher, TOP, Spectrum, and Weighted Degree Kernel (with shifts). For the latter, the efficient LINADD optimizations are implemented.

* SHOGUN also offers the freedom of working with custom pre-computed kernels. One of its key features is the combined kernel, which can be constructed by a weighted linear combination of a number of sub-kernels, each of which need not work on the same domain. An optimal sub-kernel weighting can be learned using Multiple Kernel Learning. Currently SVM 2-class classification and regression problems can be dealt with.

* SHOGUN also implements a number of linear methods like Linear Discriminant Analysis (LDA), Linear Programming Machine (LPM), (Kernel) Perceptrons, and features algorithms to train hidden Markov models.

* The input feature-objects can be dense, sparse or strings and of type int/short/double/char, and can be converted into different feature types. Chains of preprocessors (e.g. subtracting the mean) can be attached to each feature object, allowing for on-the-fly pre-processing.

* SHOGUN is implemented in C++ and interfaces to Matlab(tm), R, Octave and Python, and is proudly released as Machine Learning Open Source Software ( http://mloss.org/ ) under the GPLv3+.

From lou_boog2000 at yahoo.com  Sun Apr 20 17:13:34 2008
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Sun, 20 Apr 2008 14:13:34 -0700 (PDT)
Subject: [SciPy-user] Passing numpy array to a C function
In-Reply-To: <480BAE62.2000004@asu.edu>
Message-ID: <223352.36385.qm@web34402.mail.mud.yahoo.com>

I think you have a typo. See below:

--- Christopher Brown wrote:

> Hi List,
>
> I am trying to pass a numpy array of float64's to a
> C function, and I am
> not having much luck. I have tried every example I
> could find, and based
> on everything I have read, this seems (to me) like
> it should work.
> Here's a minimal version of my function:
>
> #include "Python.h"
> #include "Numeric/arrayobject.h"
>
> PyObject* my_c_function(PyObject* self, PyObject*
> args) {
> PyArrayObject *array;
> int i;
> if (!PyArg_ParseTuple(args, "O|i", &array, &i))
> return NULL;

Shouldn't the format string be "O!i" ?? Not "O|i".

-- Lou Pecora, my views are my own.
From stefan at sun.ac.za  Sun Apr 20 17:19:14 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Sun, 20 Apr 2008 23:19:14 +0200
Subject: [SciPy-user] Passing numpy array to a C function
In-Reply-To: <480BAE62.2000004@asu.edu>
References: <480BAE62.2000004@asu.edu>
Message-ID: <9457e7c80804201419j7abf2c87v31a35f525c3f0afb@mail.gmail.com>

Hi Chris

On 20/04/2008, Christopher Brown wrote:
> I am trying to pass a numpy array of float64's to a C function, and I am
> not having much luck. I have tried every example I could find, and based
> on everything I have read, this seems (to me) like it should work.

Your example works perfectly over here.

> #include "Python.h"
> #include "Numeric/arrayobject.h"

You probably want

#include "numpy/arrayobject.h"

> PyObject* my_c_function(PyObject* self, PyObject* args) {
>     PyArrayObject *array;
>     int i;
>     if (!PyArg_ParseTuple(args, "O|i", &array, &i))
>         return NULL;
>     printf("length is: %5.0f\n",i);
>     printf("type_num is: %3.0f\n",array->descr->type_num);

These should use "%d" instead of "%f".

In [5]: x = np.array([1,2,3])

In [6]: boo.my_c_function(x,len(x))
length is: 3
type_num is: 7

I attach the source code and setup.py file to build it:

python setup.py build_ext -i

Regards
Stéfan
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: boo.c
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: setup.py
Type: application/octet-stream
Size: 231 bytes
Desc: not available
URL:

From c-b at asu.edu  Sun Apr 20 19:48:42 2008
From: c-b at asu.edu (Christopher Brown)
Date: Sun, 20 Apr 2008 16:48:42 -0700
Subject: [SciPy-user] Passing numpy array to a C function
In-Reply-To: <9457e7c80804201419j7abf2c87v31a35f525c3f0afb@mail.gmail.com>
References: <480BAE62.2000004@asu.edu> <9457e7c80804201419j7abf2c87v31a35f525c3f0afb@mail.gmail.com>
Message-ID: <480BD65A.5080301@asu.edu>

Hi Stéfan,

Thanks. That solved one problem. The length is now printing correctly. But the bigger problem seems to be with the float64 type.
I added the following to my function:

if (array->descr->type_num == PyArray_DOUBLE) {
    printf("Double!\n");
} else if (array->descr->type_num == PyArray_LONG) {
    printf("Long!\n");
} else if (array->descr->type_num == PyArray_INT) {
    printf("Int!\n");
} else if (array->descr->type_num == PyArray_FLOAT) {
    printf("Float!\n");
} else if (array->descr->type_num == PyArray_SHORT) {
    printf("Short!\n");
} else if (array->descr->type_num == PyArray_USHORT) {
    printf("UShort!\n");
} else if (array->descr->type_num == PyArray_UBYTE) {
    printf("UByte!\n");
} else if (array->descr->type_num == PyArray_SBYTE) {
    printf("SByte!\n");
} else if (array->descr->type_num == PyArray_UINT) {
    printf("UInt!\n");
} else if (array->descr->type_num == PyArray_CFLOAT) {
    printf("CFloat!\n");
} else if (array->descr->type_num == PyArray_CDOUBLE) {
    printf("CDouble!\n");
} else if (array->descr->type_num == PyArray_OBJECT) {
    printf("Object!\n");
} else if (array->descr->type_num == PyArray_NTYPES) {
    printf("NTypes!\n");
} else if (array->descr->type_num == PyArray_NOTYPE) {
    printf("NoType!\n");
} else printf("Type Unknown!\n");

printf("length is: %5.0d\n",n);
printf("type_num is: %3.0d\n\n",array->descr->type_num);

for (i = 0; i < 10; i++)
    printf("%8.5e\n",array[i]);

And the output looks like:

In [6]: for i in range(0,10):
   ...:     print buff[i]
   ...:
   ...:
-0.000244140625
-3.0517578125e-05
0.000152587890625
0.000152587890625
9.1552734375e-05
3.0517578125e-05
6.103515625e-05
-3.0517578125e-05
-0.0001220703125
-0.000152587890625

In [7]: test = mymodule.my_c_function(buff,len(buff))
Type Unknown!
length is: 20191
type_num is:
-4.96427e-42
2.56761e-312
2.22810e-312
2.12200e-312
8.16968e-312
-4.11026e-41
2.80103e-312
2.55310e-313
0.00000e+00
0.00000e+00

A *completely* uneducated guess: Might it be the case that C doesn't know about float64's, and is treating the elements of the array as float32's?

-- Chris

From robert.kern at gmail.com  Sun Apr 20 20:10:14 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 20 Apr 2008 19:10:14 -0500
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References:
Message-ID: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>

On Sat, Apr 19, 2008 at 4:27 PM, Anne Archibald wrote:

> They all need work, in particular, efficiency would be improved by
> making the y values vectors, error checking needs to be more robust,
> and documentation is not in reST form. Ultimately, too, the evaluation
> functions at least should be written in a compiled language (cython?).
> But I thought I'd solicit comments on the code first - is the
> object-oriented design cumbersome?

It looks fine, although Gabriel's suggestions would be good improvements. Some people may want single-function interfaces (e.g. krogh(xi, yi, x)); they can be written on top of the OO implementation rather more easily than otherwise.

> Is including the algorithm in the
> name confusing to users?

I doubt it.

> Is the calling convention for Hermite
> polynomials too confusing or error-prone?

It looks fine to me. For PiecewisePolynomial, I might transpose the yi input such that yi[0] is the function value, yi[1] is the first derivative, etc.

A nitpick: the code

pos[pos==-1] = 0
pos[pos==self.n-1] = self.n-2

can be replaced with

pos.clip(0, self.n-2, out=pos)

But this all looks good, and I want it in scipy.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From peridot.faceted at gmail.com Sun Apr 20 21:03:05 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 20 Apr 2008 21:03:05 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> Message-ID: On 20/04/2008, Robert Kern wrote: > On Sat, Apr 19, 2008 at 4:27 PM, Anne Archibald > wrote: > > > They all need work, in particular, efficiency would be improved by > > making the y values vectors, error checking needs to be more robust, > > and documentation is not in reST form. Ultimately, too, the evaluation > > functions at least should be written in a compiled language (cython?). > > But I thought I'd solicit comments on the code first - is the > > object-oriented design cumbersome? > > It looks fine although Gabriel's suggestions would be good > improvements. Some people may want single-function interfaces (e.g. > krogh(xi, yi, x)); they can be written on top of the OO implementation > rather more easily than otherwise. Reasonable, though really, is krogh_interpolate(xi, yi, x) much better than KroghInterpolator(xi, yi)(x)? It's also good to emphasize that the construction of the interpolating polynomial is a relatively slow process compared to its evaluation. > > Is the calling convention for Hermite > > polynomials too confusing or error-prone? > > It looks fine to me. For PiecewisePolynomial, I might transpose the yi > input such that yi[0] is the function value, yi[1] is the first > derivative, etc. The problem with this is that it forces the user to specify the same number of derivatives at each point. It may be desirable (for example) to specify the derivative at only one point but the function value at many. The way I did it you just pass in a nested list. If you have them in the form you suggest, a quick np.transpose([y,yp,ypp]) should get the form that I want. > A nitpick: the code > > pos[pos==-1] = 0 > pos[pos==self.n-1] = self.n-2 > > can be replaced with > > pos.clip(0, self.n-2, out=pos) > > But this all looks good, and I want it in scipy. Ah, good. What's the story on including cython code in scipy? Is it an additional build dependency, and so to be avoided? Can it be used in a SWIG-like role to produce files that can be distributed and compiled with a C compiler? For any interpolation scheme, it's obviously essential that it be able to be evaluated rapidly... Anne From peridot.faceted at gmail.com Sun Apr 20 21:21:58 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 20 Apr 2008 21:21:58 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <20080419214142.GA12913@basestar> References: <20080419214142.GA12913@basestar> Message-ID: On 19/04/2008, Gabriel Gellner wrote: > I really like the OO design. If you continue down this route would you make the > class new style (inherit from object), and why not use properties for the > set_yi, would make it all the sweeter. Hmm. I'm lukewarm on using properties. It seems to me that if I provide set_foo functions people will assume that it's not safe to modify any attribute directly; if I override __setattr__ so that the right magic happens, that implicitly encourages people to change any attribute that doesn't raise an exception. Which means that I need to evaluate whether each attribute can be safely modified on the fly, and access those that can't through object.__setattr__(self,"name",value). 
I also need to explicitly document which attributes may be safely written to. Is this about right? Is this the new python standard practice?

More generally, it seems to me that generically, interpolation schemes should produce objects which can be evaluated like functions. They should perhaps have certain other standard features: providing derivatives, if they can, through one of a few standardly-named functions, providing acceptable input ranges (or recommended input ranges) in standardly-named attributes, ... anything else? What about other "function" objects in numpy/scipy that know their derivatives (or integrals), should they provide extra functions to compute these? numpy.arctan.derivative(x)? numpy.arctan2.partial_derivative(x,y,1,2)?

It seems like "knowing your derivatives" comes in at least three forms: being able to compute the value of a particular derivative on demand (e.g. splev(x,tck,der=2)), being able to evaluate all (or some) derivatives on demand (e.g. KroghInterpolator(xi,yi).derivatives(x)), and being able to construct a full-fledged function object representing the derivative (e.g. poly1d([2,1,0]).deriv()). Certainly one can use any of these to fudge the others, but it involves varying degrees of clumsiness and inefficiency. I'm just thinking about how to make duck typing as useful as possible for interpolator objects.

Anne

From ggellner at uoguelph.ca  Sun Apr 20 21:27:20 2008
From: ggellner at uoguelph.ca (Gabriel Gellner)
Date: Sun, 20 Apr 2008 21:27:20 -0400
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
Message-ID: <20080421012720.GA18310@basestar>

> Ah, good.
>
> What's the story on including cython code in scipy? Is it an
> additional build dependency, and so to be avoided? Can it be used in a
> SWIG-like role to produce files that can be distributed and compiled
> with a C compiler? For any interpolation scheme, it's obviously
> essential that it be able to be evaluated rapidly...
>
Yes: as long as you provide the generated C wrappers, the user can just compile them without having cython on their computer. Check out the setup.py example included in numpy svn at:

http://svn.scipy.org/svn/numpy/trunk/numpy/doc/cython/

Gabriel

From zachary.pincus at yale.edu  Sun Apr 20 22:19:49 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Sun, 20 Apr 2008 22:19:49 -0400
Subject: [SciPy-user] The IO library and image file formats -- compare with PIL
In-Reply-To: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com>
References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com>
Message-ID:

On Apr 20, 2008, at 1:42 PM, Stéfan van der Walt wrote:
> On 18/04/2008, Zachary Pincus wrote:
>> I have my own "internal fork" of PIL that I've been calling "PIL-
>> lite". I tore out everything except the file IO, and I fixed that to
>> handle 16-bit files correctly on all endian machines, and to have a
>> more robust array interface.
>>
>> If people wanted to make a proper "fork" of PIL into a numpy-
>> compatible image IO layer, I would be all for that. I'd be happy to
>> donate "PIL-lite" as a starting point. Now, the file IO in PIL is a
>> bit circuitous -- files are initially read by pure-Python code that
>> determines the file type, etc. This information is then passed to
>> (brittle and ugly) C code to unpack and swizzle the bits as necessary,
>> and pack them into the PIL structs in memory.
> I would really try and avoid the forking route, if we could. Each
> extra dependency (e.g. libpng, libjpeg, etc.) is a potential build
> problem, and PIL already comes packaged everywhere. My changes can
> easily be included in SciPy, rather than in PIL. Could we do the same
> for yours? Then we could rather build scipy.image (Travis' and
> Robert's colour-space codes can be incorporated there, as well?) on
> top of the PIL.

Nothing should be built "on top" of PIL, or any other image IO library, IMO. Just build things to work with numpy arrays (or things that have an array interface, so can be converted by numpy), and let the user decide what package is best for getting bits into and out of files on disk. Any explicit PIL dependencies should be really discouraged, because of that library's continued unsuitability for dealing with scientifically-relevant file formats and data types.

As to the problems with PIL that I've addressed (and several others), these are deep-seated issues that won't be fixed without a major overhaul. My thought was thus to take the pure-python file-sniffing part of PIL and marry it to numpy tools for taking in byte sequences and interpreting them as necessary. This would have no library dependencies, and really wouldn't be a "fork" of PIL so much as using a small amount of non-broken PIL file-format-reading code that's there and abandoning the awkward/broken byte-IO and memory-model. I can't promise I have any time to work on this -- but I'll look into it, maybe -- and if anyone else wants to look into it as well, I'm happy to provide some code to start with.

> I'm really unhappy about the current state of ndimage. It's written
> in (Python API) C, so no one wants to touch the code. Much of it can
> be rewritten in equivalent pure Python, using modern NumPy constructs
> that weren't available to Peter. What we really need is to get
> knowledgeable people together for a week and hack on this (ndimage is
> an extremely useful module!), but I don't know when we're going to
> have that chance. Who fancies a visit to South Africa? :)

A major difficulty with ndimage, beyond the hairy C-code, is the spline-interpolation model that nearly everything is built on. While it's technically a nice infrastructure, it's quite dissimilar from what a lot of people (well, especially me) are used to with regard to how image resampling systems are generally constructed. So that makes it a lot harder to hack on or track down and fix bugs. I don't really have a good suggestion for addressing this, though, because the spline model is really quite nice when it works.

Zach

From hoytak at gmail.com  Sun Apr 20 23:42:57 2008
From: hoytak at gmail.com (Hoyt Koepke)
Date: Sun, 20 Apr 2008 20:42:57 -0700
Subject: [SciPy-user] bug in scipy.stats ?
In-Reply-To: <9457e7c80804201121w64629b1dk7300d56dc97137d2@mail.gmail.com>
References: <4db580fd0804192000i6136e2ccp3c2aff08b4998de6@mail.gmail.com> <9457e7c80804201121w64629b1dk7300d56dc97137d2@mail.gmail.com>
Message-ID: <4db580fd0804202042u5c567f64v78f12c8a278fd828@mail.gmail.com>

Okay, thanks, that helps -- though I kinda think some of this could be better organized / documented. I will probably be working a lot with this module, so maybe I will try to improve it incrementally.

--Hoyt

On Sun, Apr 20, 2008 at 11:21 AM, Stéfan van der Walt wrote:
> Hi Hoyt
>
> On 20/04/2008, Hoyt Koepke wrote:
> > Been playing around with scipy.stats a little bit and noticed some
> > puzzling behavior with the gamma function. Is this a bug?
> > > In [1]: import scipy.stats as st
> > >
> > > In [2]: g = st.gamma(a=1, b=1)
> > >
> > > In [3]: g.rvs()
>
> From the docstring:
>
>     Alternatively, the object may be called (as a function) to fix
>     the shape, location, and scale parameters returning a
>     "frozen" continuous RV object:
>
>     myrv = gamma(a,loc=0,scale=1)
>     - frozen RV object with the same methods but holding the
>       given shape, location, and scale fixed
>
> For the gamma function:
>
> b = theta = scale
>
> So you can either do:
>
> g = st.gamma(a, scale=b)
> g.rvs(), g.rvs(3) etc.
>
> or
>
> g = st.gamma.rvs(a, scale=b)
>
> Regards
> Stéfan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
+++++++++++++++++++++++++++++++++++
Hoyt Koepke
UBC Department of Computer Science
http://www.cs.ubc.ca/~hoytak/
hoytak at gmail.com
+++++++++++++++++++++++++++++++++++

From stefan at sun.ac.za  Mon Apr 21 01:49:14 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 21 Apr 2008 07:49:14 +0200
Subject: [SciPy-user] Passing numpy array to a C function
In-Reply-To: <480BD65A.5080301@asu.edu>
References: <480BAE62.2000004@asu.edu> <9457e7c80804201419j7abf2c87v31a35f525c3f0afb@mail.gmail.com> <480BD65A.5080301@asu.edu>
Message-ID: <9457e7c80804202249w2ac29c5dld845b971704df70a@mail.gmail.com>

On 21/04/2008, Christopher Brown wrote:
> Thanks. That solved one problem. The length is now printing correctly.
> But the bigger problem seems to be with the float64 type.
>
> I added the following to my function:
>
> if (array->descr->type_num == PyArray_DOUBLE) {
>     printf("Double!\n");

You're looking for NPY_DOUBLE here. You'll find Travis Oliphant's book (http://www.tramy.us/) tremendously helpful in accessing the NumPy C API. The book is being released for free at SciPy 2008, but at $35 it's a bargain anyway.

Regards
Stéfan

From stefan at sun.ac.za  Mon Apr 21 02:02:37 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 21 Apr 2008 08:02:37 +0200
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References: <20080419214142.GA12913@basestar>
Message-ID: <9457e7c80804202302r23a07026gc056e0ba9fb95fea@mail.gmail.com>

On 21/04/2008, Anne Archibald wrote:
> On 19/04/2008, Gabriel Gellner wrote:
>
> > I really like the OO design. If you continue down this route, would you
> > make the classes new-style (inherit from object)? And why not use
> > properties for set_yi? That would make it all the sweeter.
>
> Hmm. I'm lukewarm on using properties. It seems to me that if I
> provide set_foo functions people will assume that it's not safe to
> modify any attribute directly;

If you don't want people to use a method, start its name with an underscore. In most cases, set_foo isn't necessary. Either use _foo privately, or use foo as a property which users can manipulate.

> More generally, it seems to me that generically, interpolation schemes
> should produce objects which can be evaluated like functions.

Sounds good.

Talking about derivatives, does anyone know whether

http://en.wikipedia.org/wiki/Automatic_differentiation

is of value? It's been on my TODO list for a while, but I haven't gotten round to studying it in detail.
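A toy illustration of the forward-mode idea described in that article: dual numbers that carry a value and a derivative together through arithmetic, so the derivative falls out of ordinary evaluation. This is a from-scratch sketch, not the API of any existing package:

    class Dual(object):
        """A value together with its derivative w.r.t. one input."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def _lift(self, other):
            return other if isinstance(other, Dual) else Dual(other)
        def __add__(self, other):
            other = self._lift(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__
        def __mul__(self, other):
            other = self._lift(other)
            # product rule: (fg)' = f'g + fg'
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    x = Dual(3.0, 1.0)        # seed: dx/dx = 1
    y = x * x + 2 * x + 1     # y.val == 16.0, y.der == 8.0 (= 2x + 2 at x = 3)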
Regards
Stéfan

From stefan at sun.ac.za  Mon Apr 21 03:38:35 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 21 Apr 2008 09:38:35 +0200
Subject: [SciPy-user] The IO library and image file formats -- compare with PIL
In-Reply-To:
References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com>
Message-ID: <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com>

On 21/04/2008, Zachary Pincus wrote:
> > I would really try and avoid the forking route, if we could. Each
> > extra dependency (e.g. libpng, libjpeg, etc.) is a potential build
> > problem, and PIL already comes packaged everywhere. My changes can
> > easily be included in SciPy, rather than in PIL. Could we do the same
> > for yours? Then we could rather build scipy.image (Travis' and
> > Robert's colour-space codes can be incorporated there, as well?) on
> > top of the PIL.
>
> Nothing should be built "on top" of PIL, or any other image IO
> library, IMO. Just build things to work with numpy arrays (or things
> that have an array interface, so can be converted by numpy), and let
> the user decide what package is best for getting bits into and out of
> files on disk. Any explicit PIL dependencies should be really
> discouraged, because of that library's continued unsuitability for
> dealing with scientifically-relevant file formats and data types.

I agree with you, but we still need to provide the user with an easy way to access images on disk (SciPy comes pretty much batteries included).

> My thought was thus to take the pure-python file-sniffing
> part of PIL and marry it to numpy tools for taking in byte sequences
> and interpreting them as necessary. This would have no library
> dependencies, and really wouldn't be a "fork" of PIL so much as using
> a small amount of non-broken PIL file-format-reading code that's there
> and abandoning the awkward/broken byte-IO and memory-model. I can't
> promise I have any time to work on this -- but I'll look into it,
> maybe -- and if anyone else wants to look into it as well, I'm happy
> to provide some code to start with.

How would this code be used in practice? I'm just trying to form a mental image of how the parts fit together.

> A major difficulty with ndimage, beyond the hairy C-code, is the
> spline-interpolation model that nearly everything is built on. While
> it's technically a nice infrastructure, it's quite dissimilar from
> what a lot of people (well, especially me) are used to with regard to
> how image resampling systems are generally constructed. So that makes
> it a lot harder to hack on or track down and fix bugs. I don't really
> have a good suggestion for addressing this, though, because the spline
> model is really quite nice when it works.

I have the article on which this is based (I think?). It is

"Splines: A Perfect Fit for Signal/Image Processing" by Michael Unser (http://citeseer.ist.psu.edu/unser99splines.html)

The irony is that we have something like three or four spline implementations in SciPy! We should re-factor that into a standard location, but as you can imagine it is no small task.
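For readers who want to poke at one of those implementations, the FITPACK wrappers in scipy.interpolate are an easy entry point; a quick sketch:

    import numpy as np
    from scipy import interpolate

    x = np.linspace(0, 2 * np.pi, 20)
    tck = interpolate.splrep(x, np.sin(x), k=3)   # cubic B-spline representation
    xs = np.linspace(0, 2 * np.pi, 200)
    ys = interpolate.splev(xs, tck)               # spline values
    dys = interpolate.splev(xs, tck, der=1)       # first derivative, close to cos(xs)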
Regards
Stéfan

From haase at msg.ucsf.edu  Mon Apr 21 04:19:59 2008
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Mon, 21 Apr 2008 10:19:59 +0200
Subject: [SciPy-user] The IO library and image file formats -- compare with PIL
In-Reply-To: <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com>
References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com>
Message-ID:

On Mon, Apr 21, 2008 at 9:38 AM, Stéfan van der Walt wrote:
> On 21/04/2008, Zachary Pincus wrote:
> > > I would really try and avoid the forking route, if we could. Each
> > > extra dependency (e.g. libpng, libjpeg, etc.) is a potential build
> > > problem, and PIL already comes packaged everywhere. My changes can
> > > easily be included in SciPy, rather than in PIL. Could we do the same
> > > for yours? Then we could rather build scipy.image (Travis' and
> > > Robert's colour-space codes can be incorporated there, as well?) on
> > > top of the PIL.
> >
> > Nothing should be built "on top" of PIL, or any other image IO
> > library, IMO. Just build things to work with numpy arrays (or things
> > that have an array interface, so can be converted by numpy), and let
> > the user decide what package is best for getting bits into and out of
> > files on disk. Any explicit PIL dependencies should be really
> > discouraged, because of that library's continued unsuitability for
> > dealing with scientifically-relevant file formats and data types.
>
> I agree with you, but we still need to provide the user with an easy
> way to access images on disk (SciPy comes pretty much batteries
> included).
>
> > My thought was thus to take the pure-python file-sniffing
> > part of PIL and marry it to numpy tools for taking in byte sequences
> > and interpreting them as necessary. This would have no library
> > dependencies, and really wouldn't be a "fork" of PIL so much as using
> > a small amount of non-broken PIL file-format-reading code that's there
> > and abandoning the awkward/broken byte-IO and memory-model. I can't
> > promise I have any time to work on this -- but I'll look into it,
> > maybe -- and if anyone else wants to look into it as well, I'm happy
> > to provide some code to start with.
>
> How would this code be used in practice? I'm just trying to form a
> mental image of how the parts fit together.
>
> > A major difficulty with ndimage, beyond the hairy C-code, is the
> > spline-interpolation model that nearly everything is built on. While
> > it's technically a nice infrastructure, it's quite dissimilar from
> > what a lot of people (well, especially me) are used to with regard to
> > how image resampling systems are generally constructed. So that makes
> > it a lot harder to hack on or track down and fix bugs. I don't really
> > have a good suggestion for addressing this, though, because the spline
> > model is really quite nice when it works.
>
> I have the article on which this is based (I think?). It is
>
> "Splines: A Perfect Fit for Signal/Image Processing" by Michael Unser
> (http://citeseer.ist.psu.edu/unser99splines.html)
>
> The irony is that we have something like three or four spline
> implementations in SciPy! We should re-factor that into a standard
> location, but as you can imagine it is no small task.
>
> Regards
> Stéfan
>

Hi all,
the "image" in "ndimage" has nothing (!!) to do with jpeg or tiff ---- you might know this.... So, I summarize then from the recent discussion here, that PIL could be divided into consisting of five parts: a) file format handling based on external libs such as libjpg, libpng, (not libtiff, I think, please confirm !!) b) file format handling based on PIL's python code c) image processing, such as contrast change, pixelwise mapping, transformations like rotation, ... d) image drawing, like addiong text into an image e) image display I like some of PIL's "d" features. I don't use "e" at all (I have written my own OpenGL based 2d-section viewer [BSD lic.]) (I think this is minimal tk code, and some "calling to external OS viewer programs") "c" should all be done using numpy ( + here is a connection to ndimage, but don't dwell on it ...) "a" might be harder to build, because of dependencies, but this is also optional, and setup.py exists (???) "b" is mainly what is the "annoying" part where patches seem to get stuck and lie unused..... I hope you guys can agree with my summary -- now I'm waiting for comments .... -Sebastian From Joris.DeRidder at ster.kuleuven.be Mon Apr 21 06:07:15 2008 From: Joris.DeRidder at ster.kuleuven.be (Joris De Ridder) Date: Mon, 21 Apr 2008 12:07:15 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <9457e7c80804202302r23a07026gc056e0ba9fb95fea@mail.gmail.com> References: <20080419214142.GA12913@basestar> <9457e7c80804202302r23a07026gc056e0ba9fb95fea@mail.gmail.com> Message-ID: <71066A28-D1B5-4206-8D8B-17039D3F4639@ster.kuleuven.be> On 21 Apr 2008, at 08:02, St?fan van der Walt wrote: > Talking about derivatives, does anyone know whether > > http://en.wikipedia.org/wiki/Automatic_differentiation > > is of value? It's been on my TODO list for a while, but I haven't > gotten round to studying it in detail. I think the Scientific.Functions.Derivatives subpackage of Konrad Hinsen has such functionality. Also, to come back to the original thread subject, Scientific.Functions.Interpolation is also able to generate Interpolators (don't know the algorithm). However, last time I checked, Scientific had no stable version for NumPy, only for the old Numeric package (which is the reason why I haven't used Scientific anymore). That was quite some time ago, though, so it may have changed by now. Cheers, Joris Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From stefan at sun.ac.za Mon Apr 21 08:11:58 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 21 Apr 2008 14:11:58 +0200 Subject: [SciPy-user] is there any plan to import BNT(bayesian network toolkit) from matlab? In-Reply-To: <47FE69D9.3070200@ucsf.edu> References: <47FB8C79.2030808@astraw.com> <47FBAD6F.3010903@ucsf.edu> <9457e7c80804101133s3043dc91n18f49ff90c5f155c@mail.gmail.com> <47FE69D9.3070200@ucsf.edu> Message-ID: <9457e7c80804210511yfc3286dw414d6e80dd9da053@mail.gmail.com> On 10/04/2008, Karl Young wrote: > Stefan, that sounds great. After talking with Jarrod and David yesterday > I'm getting a better feel for how the port might fit into the overall > SciPy picture (initially as part of the learn scikit). 
I see the guys at NICTA are also working on something similar:

http://elefant.developer.nicta.com.au/about/roadmap

Regards
Stéfan

From zachary.pincus at yale.edu  Mon Apr 21 09:12:48 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 21 Apr 2008 09:12:48 -0400
Subject: [SciPy-user] The IO library and image file formats -- compare with PIL
In-Reply-To:
References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com>
Message-ID: <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu>

Hello all,

> please don't discuss ndimage and image file-IO (a.k.a. PIL) in the
> same thread !!!
> the "image" in "ndimage" has nothing (!!) to do with jpeg or tiff
> ---- you might know this....

Ha ha, agreed. Sorry!

> So, I summarize then from the recent discussion here,
> that PIL can be divided into five parts:
> a) file format handling based on external libs such as libjpg, libpng,
> (not libtiff, I think, please confirm !!)
> b) file format handling based on PIL's python code
> c) image processing, such as contrast change, pixelwise mapping,
> transformations like rotation, ...
> d) image drawing, like adding text into an image
> e) image display

One note: (b) should be divided into:
(b.1) File _format_ interpretation in pure-python code (that is, figuring out where in the file the image pixels are, in what format they are stored, and in what format they need to be unpacked); and
(b.2) Pixel unpacking in C-code.

I never use (a) -- the pure-python code for interpreting files is just fine, and much more flexible. The only thing that (a) doesn't handle, as far as I know, is JPEG files -- those need an external library, I fear. (b.1) is quite good, but there have been a few things I've needed to patch with regard to 16-bit TIFFs and PNGs, and 32-bit TIFFs. (b.2) is not so good, mostly because it's written around the PIL's memory model, which is trickier to work with than numpy, and has much less flexibility with regard to data type.

I'm with you on the interpretation of the rest.

To answer Stéfan's earlier question of how I see things fitting together, I *think* that the pure-python file format interpretation code could be used (either by importing from PIL or using patched copies as needed) to figure out what a given image file type is, and where in the file the pixels are stored. Then the relevant region of the file would be passed through python.zipfile/deflate/etc. if needed to decompress the pixels, and sent to numpy for unpacking the bits from the string.

I think this isn't the perfect solution for everyone: it doesn't use external libs, so it will be a bit slower and won't support all the strange corners of the format specifications (which PIL doesn't either). Also JPEG will be hard to support. But what this would be is a very light-weight python-and-numpy image reader with no external dependencies, which has merits.
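To make the numpy side of that concrete, here is a minimal sketch of the final unpacking step, assuming the pure-Python format code has already located an uncompressed 16-bit little-endian grayscale pixel block and determined its shape (raw_bytes, width and height are placeholders for what that code would supply):

    import numpy as np

    def unpack_gray16(raw_bytes, width, height):
        # interpret the byte string as little-endian unsigned 16-bit pixels
        flat = np.fromstring(raw_bytes, dtype='<u2')
        return flat.reshape((height, width))

    img = unpack_gray16('\x00\x01' * 12, 4, 3)   # tiny fabricated 4x3 example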
Zach

From grs2103 at columbia.edu  Mon Apr 21 09:59:08 2008
From: grs2103 at columbia.edu (Gideon Simpson)
Date: Mon, 21 Apr 2008 09:59:08 -0400
Subject: [SciPy-user] test failures on os x
Message-ID:

I finally got scipy running on os x 10.5.2, with gfortran 4.3, Apple Python 2.5.1, and Apple gcc 4.0.1, but I'm getting the following test errors:

======================================================================
FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 1.1081566907292542e-37j
 DESIRED: (-9+2j)

======================================================================
FAIL: Solve: single precision complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 32, in check_solve_complex_without_umfpack
    assert_array_almost_equal(a*x, b)
  File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 20.0%)
 x: array([ 1.00000000+0.j, 1.99999809+0.j, 3.00000000+0.j, 4.00000048+0.j,
        5.00000000+0.j], dtype=complex64)
 y: array([1, 2, 3, 4, 5])

----------------------------------------------------------------------
Ran 1725 tests in 4.014s

FAILED (failures=2)

From stefan at sun.ac.za  Mon Apr 21 10:04:41 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 21 Apr 2008 16:04:41 +0200
Subject: [SciPy-user] The IO library and image file formats -- compare with PIL
In-Reply-To: <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu>
References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu>
Message-ID: <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com>

On 21/04/2008, Zachary Pincus wrote:
> To answer Stéfan's earlier question of how I see things fitting
> together, I *think* that the pure-python file format interpretation
> code could be used (either by importing from PIL or using patched
> copies as needed) to figure out what a given image file type is, and
> where in the file the pixels are stored. Then the relevant region of
> the file would be passed through python.zipfile/deflate/etc. if needed
> to decompress the pixels, and sent to numpy for unpacking the bits
> from the string.

So this is the bit that I don't understand. Those pixel values are encoded, so which component do you use to take the data chunk and convert it to actual pixel values?
Regards
Stéfan

From warren.weckesser at gmail.com  Mon Apr 21 11:29:34 2008
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Mon, 21 Apr 2008 11:29:34 -0400
Subject: [SciPy-user] test failures on os x
In-Reply-To:
References:
Message-ID: <114880320804210829r7c24dd28n9ba224f928778b4d@mail.gmail.com>

I recently ran into the "check_dot" failure on Mac OS X 10.4. The bug was fixed after 0.6.0 was released, so the "solution" was to get the source code from subversion. (It was also recommended to get the latest numpy to go with it, but I was able to get what I needed working with the stable numpy release.)

Scipy devs: Do you do stable point releases, e.g. 0.6.1?

On Mon, Apr 21, 2008 at 9:59 AM, Gideon Simpson wrote:
> I finally got scipy running on os x 10.5.2, with gfortran 4.3, Apple
> Python 2.5.1, and Apple gcc 4.0.1, but I'm getting the following test
> errors:
>
> ======================================================================
> FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/opt/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
>     assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
>   File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal
>     assert round(abs(desired - actual),decimal) == 0, msg
> AssertionError:
> Items are not equal:
>  ACTUAL: 1.1081566907292542e-37j
>  DESIRED: (-9+2j)
>
> ======================================================================
> FAIL: Solve: single precision complex
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/opt/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 32, in check_solve_complex_without_umfpack
>     assert_array_almost_equal(a*x, b)
>   File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 232, in assert_array_almost_equal
>     header='Arrays are not almost equal')
>   File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 217, in assert_array_compare
>     assert cond, msg
> AssertionError:
> Arrays are not almost equal
>
> (mismatch 20.0%)
>  x: array([ 1.00000000+0.j, 1.99999809+0.j, 3.00000000+0.j, 4.00000048+0.j,
>         5.00000000+0.j], dtype=complex64)
>  y: array([1, 2, 3, 4, 5])
>
> ----------------------------------------------------------------------
> Ran 1725 tests in 4.014s
>
> FAILED (failures=2)
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From zachary.pincus at yale.edu Mon Apr 21 11:31:25 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 21 Apr 2008 11:31:25 -0400 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> Message-ID: <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> > On 21/04/2008, Zachary Pincus wrote: >> To answer St?fan's earlier question of how I see things fitting >> together, I *think* that the pure-python file format interpretation >> code could be used (either by importing from PIL or using patched >> copies as needed) to figure out what a given image file type is, and >> where in the file the pixels are stored. Then the relevant region of >> the file would be passed through python.zipfile/deflate/etc. if >> needed >> to decompress the pixels, and sent to numpy for unpacking the bits >> from the string. > > So this is the bit that I don't understand. Those pixel values are > encoded, so which component do you use to take the data chunk and > convert it to actual pixel values? numpy.fromstring takes a byte sequence and unpacks it into an array of a specified shape and data type. Most image file formats are just different ways of putting byte sequences on disk and specifying how they were compressed, if at all. Most formats have either no compression, or LZW/Deflate/zlib-style compression, for which there are already python libraries. So for example, reading a TIFF file would consist of looking at the header to determine the pixel format, image size, and compression, then rooting around in the file to assemble the relevant bytes, then running that through deflate (most often), and passing the resulting string to numpy.fromstring. Same for PNG, or most anything that's not JPEG. Writing is similar. Again, what I'm imagining wouldn't be a full-featured image IO library, but something lightweight with no dependencies outside of numpy, and potentially (if JPEG decoding isn't desired), no C- extensions. (One could conceivably use numpy to do JPEG encoding and decoding, but I've no interest in doing that...) This is all just an idea, and I'm not convinced whether it's a great idea. But I just wanted to put the suggestion out there... Zach From hectorvd at gmail.com Mon Apr 21 11:36:59 2008 From: hectorvd at gmail.com (Hector Villafuerte) Date: Mon, 21 Apr 2008 09:36:59 -0600 Subject: [SciPy-user] PDE solvers Message-ID: <350f72300804210836o46931c0bwd6047c33939fa66a@mail.gmail.com> Hi, I'm fairly new to Scipy, though I've been using python for a while. I wonder what the current situation in Scipy is for solving partial differential equations. I've heard of Fenics, Simfe (which use Finite Element Analysis), and FiPy (which uses Finite Volume Analysis) but I wonder what the Scipy community has to say about them (in general, I've found documentation to be a problem). 
Thanks in advance for your help, -- Hector From discerptor at gmail.com Mon Apr 21 12:11:57 2008 From: discerptor at gmail.com (Joshua Lippai) Date: Mon, 21 Apr 2008 09:11:57 -0700 Subject: [SciPy-user] test failures on os x In-Reply-To: References: Message-ID: <9911419a0804210911x8518eb8t1dddaa1170161db3@mail.gmail.com> That looks like it was built with MacPorts. You might want to inform them, since from the looks of the test output, it looks like you're using an older version of SciPy (that is, not current SVN; you'd be getting a lot more errors with the current build, unfortunately). I've never had an error show up relevant to the UMFPACK libraries in any version of SciPy, so there's a good chance that the single precision complex test failure might be due to something on the MacPorts side. Josh On Mon, Apr 21, 2008 at 6:59 AM, Gideon Simpson wrote: > finally got scipy running on os x 10.5.2, with gfortran 4.3, Apple > Python 2.5.1, and Apple gcc 4.0.1, but I'm getting the following test > errors: > > ====================================================================== > FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/opt/lib/python2.5/site-packages/scipy/lib/blas/tests/ > test_blas.py", line 76, in check_dot > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", > line 158, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > ACTUAL: 1.1081566907292542e-37j > DESIRED: (-9+2j) > > ====================================================================== > FAIL: Solve: single precision complex > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/opt/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/ > test_umfpack.py", line 32, in check_solve_complex_without_umfpack > assert_array_almost_equal(a*x, b) > File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", > line 232, in assert_array_almost_equal > header='Arrays are not almost equal') > File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", > line 217, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not almost equal > > (mismatch 20.0%) > x: array([ 1.00000000+0.j, 1.99999809+0.j, 3.00000000+0.j, > 4.00000048+0.j, > 5.00000000+0.j], dtype=complex64) > y: array([1, 2, 3, 4, 5]) > > ---------------------------------------------------------------------- > Ran 1725 tests in 4.014s > > FAILED (failures=2) > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From stefan at sun.ac.za Mon Apr 21 13:13:50 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 21 Apr 2008 19:13:50 +0200 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> References: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> Message-ID: <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> On 21/04/2008, Zachary Pincus wrote: > 
Again, what I'm imagining wouldn't be a full-featured image IO > library, but something lightweight with no dependencies outside of > numpy, and potentially (if JPEG decoding isn't desired), no C- > extensions. (One could conceivably use numpy to do JPEG encoding and > decoding, but I've no interest in doing that...) I love it -- let's do it (if everyone agrees, of course). Having a Python reference implementation is the way to go. Should we separate the io and the image processing routines, or put it all in scipy.image? Regards St?fan From robert.kern at gmail.com Mon Apr 21 13:20:05 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 Apr 2008 12:20:05 -0500 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> References: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> Message-ID: <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> On Mon, Apr 21, 2008 at 12:13 PM, St?fan van der Walt wrote: > On 21/04/2008, Zachary Pincus wrote: > > > Again, what I'm imagining wouldn't be a full-featured image IO > > library, but something lightweight with no dependencies outside of > > numpy, and potentially (if JPEG decoding isn't desired), no C- > > extensions. (One could conceivably use numpy to do JPEG encoding and > > decoding, but I've no interest in doing that...) > > I love it -- let's do it (if everyone agrees, of course). Having a > Python reference implementation is the way to go. Should we separate > the io and the image processing routines, or put it all in > scipy.image? If you are thinking about using PIL's code, I would prefer that it not go into scipy. It smells too much like a fork, and I just don't want scipy to get involved. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Mon Apr 21 14:19:46 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 21 Apr 2008 20:19:46 +0200 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> References: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> Message-ID: <9457e7c80804211119i61aca20fq7f25225eca8c650@mail.gmail.com> On 21/04/2008, Robert Kern wrote: > If you are thinking about using PIL's code, I would prefer that it not > go into scipy. It smells too much like a fork, and I just don't want > scipy to get involved. It sounded like Zachary said we could do it without PIL (or that was my understanding, at least). I certainly don't want to be involved in any PILfering. 
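To make the header-then-pixels reading pattern described earlier in the thread concrete, here is a sketch for one deliberately simple uncompressed format, binary PGM (error handling omitted; read_pgm is a hypothetical helper, not an existing scipy or PIL function):

    import numpy as np

    def read_pgm(filename):
        # binary PGM: magic number, dimensions, maxval, then raw bytes
        f = open(filename, 'rb')
        assert f.readline().strip() == 'P5'
        line = f.readline()
        while line.startswith('#'):      # skip comment lines
            line = f.readline()
        width, height = [int(tok) for tok in line.split()]
        maxval = int(f.readline())
        assert maxval < 256              # one byte per pixel
        data = np.fromstring(f.read(width * height), dtype=np.uint8)
        return data.reshape(height, width)

Formats like PNG or compressed TIFF layer deflate-style compression and a chunked layout on top of the same idea, which is where python's zlib would come in.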
Cheers St?fan From aisaac at american.edu Mon Apr 21 14:39:04 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 21 Apr 2008 14:39:04 -0400 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> References: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com><9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com><10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu><9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com><53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu><9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> Message-ID: On Mon, 21 Apr 2008, Robert Kern apparently wrote: > If you are thinking about using PIL's code, I would prefer > that it not go into scipy. 1. Maybe the PIL developers would like to see this *narrowly* targeted development happen no matter where it happens. Possibly even for use in PIL. It never hurts to ask them. 2. Of course, getting a response has been problematic. That is one key driver of this push, I believe. So if they do not respond, how about have this as a SciKit, so that it is outside of SciPy proper. Cheers, Alan Isaac From oliphant at enthought.com Mon Apr 21 14:43:12 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 21 Apr 2008 13:43:12 -0500 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> References: <9457e7c80804180218t32730fcbn1b26ff0e389e6774@mail.gmail.com> <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> Message-ID: <480CE040.9080102@enthought.com> Zachary Pincus wrote: > numpy.fromstring takes a byte sequence and unpacks it into an array of > a specified shape and data type. Most image file formats are just > different ways of putting byte sequences on disk and specifying how > they were compressed, if at all. Most formats have either no > compression, or LZW/Deflate/zlib-style compression, for which there > are already python libraries. > > So for example, reading a TIFF file would consist of looking at the > header to determine the pixel format, image size, and compression, > then rooting around in the file to assemble the relevant bytes, then > running that through deflate (most often), and passing the resulting > string to numpy.fromstring. Same for PNG, or most anything that's not > JPEG. Writing is similar. > > Again, what I'm imagining wouldn't be a full-featured image IO > library, but something lightweight with no dependencies outside of > numpy, and potentially (if JPEG decoding isn't desired), no C- > extensions. (One could conceivably use numpy to do JPEG encoding and > decoding, but I've no interest in doing that...) > > This is all just an idea, and I'm not convinced whether it's a great > idea. But I just wanted to put the suggestion out there... > I've wanted to have native image readers in SciPy for a long time for a lot of reasons (teaching being one of them so I like this approach). I'd rather not have a PIL dependency to do such things. But, that is just my point of view. So, I'm very supportive of this project generally. 
-Travis From dineshbvadhia at hotmail.com Mon Apr 21 15:18:52 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Mon, 21 Apr 2008 12:18:52 -0700 Subject: [SciPy-user] pre-compiled Windows svn Message-ID: Hello! I want to avoid installing a C/C++ compiler to get at the latest Scipy svn trunk for a Windows build. Is there an alternative way to obtain pre-compiled binaries? Probably not but thought I'd ask. We are just desparate for the integer support in the sparse matrix library and it looks like it is going to be a while before the next Scipy release comes out. Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Mon Apr 21 15:44:38 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 21 Apr 2008 14:44:38 -0500 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <480CE040.9080102@enthought.com> References: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <480CE040.9080102@enthought.com> Message-ID: On Mon, Apr 21, 2008 at 1:43 PM, Travis E. Oliphant wrote: > I've wanted to have native image readers in SciPy for a long time for a > lot of reasons (teaching being one of them so I like this approach). > I'd rather not have a PIL dependency to do such things. But, that is > just my point of view. So, I'm very supportive of this project generally. +1. I would also like to see basic image/movie readers in scipy.io that just give you a numpy array and not a special image object. We should avoid depending on PIL for this. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Mon Apr 21 15:46:00 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 21 Apr 2008 14:46:00 -0500 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: References: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <480CE040.9080102@enthought.com> Message-ID: On Mon, Apr 21, 2008 at 2:44 PM, Jarrod Millman wrote: > +1. I would also like to see basic image/movie readers in scipy.io > that just give you a numpy array and not a special image object. We > should avoid depending on PIL for this. I meant to say readers/writers. Sorry. 
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From zachary.pincus at yale.edu Mon Apr 21 16:37:17 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 21 Apr 2008 16:37:17 -0400 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> References: <9457e7c80804201042m3aacf93ey36ad6cb46a93bd5c@mail.gmail.com> <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> Message-ID: <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> On Apr 21, 2008, at 1:20 PM, Robert Kern wrote: > On Mon, Apr 21, 2008 at 12:13 PM, Stéfan van der Walt > wrote: >> On 21/04/2008, Zachary Pincus wrote: >> >>> Again, what I'm imagining wouldn't be a full-featured image IO >>> library, but something lightweight with no dependencies outside of >>> numpy, and potentially (if JPEG decoding isn't desired), no C- >>> extensions. (One could conceivably use numpy to do JPEG encoding and >>> decoding, but I've no interest in doing that...) >> >> I love it -- let's do it (if everyone agrees, of course). Having a >> Python reference implementation is the way to go. Should we separate >> the io and the image processing routines, or put it all in >> scipy.image? > > If you are thinking about using PIL's code, I would prefer that it not > go into scipy. It smells too much like a fork, and I just don't want > scipy to get involved. Understandable. It does seem pretty clear to me that the most expeditious way to proceed with a lightweight library would be to start by modifying (perhaps heavily, as in beyond-recognition, or perhaps lightly to not-at-all) some of the pure-python PIL code that is responsible for reading image file headers. While I wouldn't call this a "fork" so much as the kind of horizontal code-transfer between related projects that is one of the major rationales for open-source, there's no reason to be potentially provocative. Let me look into whether this idea is at all feasible, and if it is we can revisit the issue of whether it belongs anywhere near scipy. (Would getting Fredrik Lundh's OK to use various bits in this way make things easier? He does seem much more responsive to direct queries than patch-submissions.) I'm glad that there's some support for the idea, so let me look at the code that I have and see if it's worth pursuing. Zach From ggellner at uoguelph.ca Mon Apr 21 20:26:25 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Mon, 21 Apr 2008 20:26:25 -0400 Subject: [SciPy-user] Behaviour of odeint Message-ID: <20080422002624.GA9478@giton> I just noticed that when you send odeint an empty array, it returns an empty array (that is, it doesn't raise an error); is this intentional? I noticed similarly that array functions like numpy.sin have the same behaviour.
Can anyone shed light on the logic behind this? I find it confusing, but I am easily convinced ;-) Gabriel From david at ar.media.kyoto-u.ac.jp Mon Apr 21 20:37:52 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 22 Apr 2008 09:37:52 +0900 Subject: [SciPy-user] pre-compiled Windows svn In-Reply-To: References: Message-ID: <480D3360.1090901@ar.media.kyoto-u.ac.jp> Dinesh B Vadhia wrote: > Hello! I want to avoid installing a C/C++ compiler to get at the > latest Scipy svn trunk for a Windows build. Is there an alternative > way to obtain pre-compiled binaries? Probably not but thought I'd ask. Not that I am aware of. The mingw instructions are extremely bad, but installing mingw itself is not really difficult: you just need to run the automated mingw installer found at http://sourceforge.net/project/showfiles.php?group_id=2435 To get a blas/lapack for windows, you can for example use my installer, which will give you optimized blas/lapack from ATLAS: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/blas-lapack-superpack.exe cheers, David From c-b at asu.edu Mon Apr 21 19:59:32 2008 From: c-b at asu.edu (Christopher Brown) Date: Mon, 21 Apr 2008 16:59:32 -0700 Subject: [SciPy-user] Passing numpy array to a c function In-Reply-To: <9457e7c80804202249w2ac29c5dld845b971704df70a@mail.gmail.com> References: <480BAE62.2000004@asu.edu> <9457e7c80804201419j7abf2c87v31a35f525c3f0afb@mail.gmail.com> <480BD65A.5080301@asu.edu> <9457e7c80804202249w2ac29c5dld845b971704df70a@mail.gmail.com> Message-ID: <480D2A64.2000809@asu.edu> Hi Stéfan, > You're looking for NPY_DOUBLE here. Thanks. > You'll find Travis Oliphant's book (http://www.tramy.us/) > tremendously helpful in accessing the NumPy C API. The book is being > released for free at SciPy 2008, but at $35 it's a bargain anyway. And thanks! I will pick up a copy! -- Chris From travis at enthought.com Mon Apr 21 19:59:24 2008 From: travis at enthought.com (Travis Vaught) Date: Mon, 21 Apr 2008 18:59:24 -0500 Subject: [SciPy-user] ANN: EPD - Enthought Python Distribution released Message-ID: Greetings, Enthought is pleased to announce the release of the Enthought Python Distribution (EPD) version 2.5.2001. http://www.enthought.com/epd This release makes available both the RedHat 3.x (amd64) and Windows XP (x86) installers. OS X, Ubuntu and more (modern) RHEL versions are coming soon(!). About EPD --------- The Enthought Python Distribution is a "kitchen-sink-included" distribution of the Python Programming Language as well as over 60 additional tools and libraries. It includes NumPy, SciPy, IPython, 2D and 3D visualization, database adapters and a lot of other tools right out of the box. Enthought is offering access to this bundle as a free service to academic and other non-profit organizations. We also offer an annual fee-based subscription service for Commercial and Governmental users to download and update the software bundle. (Everyone may try it out for free. Please see the License Information below.) Included Software ----------------- A short list includes: Python 2.5.2, NumPy, SciPy, Traits, Mayavi, Chaco, Kiva, Enable, Matplotlib, wxPython and VTK. The complete list of software with version numbers is available here: http://www.enthought.com/products/epdlibraries.php License Information ------------------- EPD is a bundle of software, every piece of which is available separately for free under various open-source licenses.
Not-for-profit, private-sector access to the bundle and its updates is, and will remain, free under the terms of the Subscription Agreement (see http://www.enthought.com/products/epdlicense.php ). Commercial and Governmental users may try the bundle for free for 30 days. After the trial period, users may purchase a one-year subscription to download and update the bundle. Downloaded software obtained under the subscription agreement may be used by the subscriber in perpetuity. This model should sound familiar, as our commercial offering is quite similar to the business model of a certain Linux distributor. More information is also available in the FAQ ( http://www.enthought.com/products/epdfaq.php ). For larger deployments, or those with special build or distribution needs, an Enterprise Subscription is also available. Thanks ------ EPD is compelling because it solves a thorny packaging and distribution problem, but also because of the libraries which it includes. The folks here at Enthought would like to thank the Python developer community and the wider community that authors and contributes to these included libraries. We put these things to work every day and would be much less productive without them. So, thanks! From rob.clewley at gmail.com Mon Apr 21 16:59:10 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 21 Apr 2008 16:59:10 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: <4D547BCC-B8BC-4822-9909-ABD8560A3C62@bu.edu> <20080418192238.GA6937@basestar> Message-ID: Anne, I am very interested in getting spline support to a point where it could be used as a more accurate way of representing solutions to ODEs. I will certainly use it in my Trajectory class if you give it a similar interface to that already used by interp1d. > If I want to evaluate the solution at multiple points, > it is usually because I want an approximation to the true solution > function, but such an approximation should normally have points > distributed in a way that takes into account the behaviour of the > function - more where it behaves in a complicated fashion, and fewer > where it is simple. In case this can help in your problem, our current interface to the H&W codes allows you to specify time points at which the integrator is guaranteed to step on to as it goes along. If someone is willing to help improve the low level interfaces to the H&W codes by exposing internal information like the polynomial coefficients for the dense solution then I would include that and other trajectory-related data in our ODE support. -Rob From robert.kern at gmail.com Mon Apr 21 16:49:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 Apr 2008 15:49:04 -0500 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> References: <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> Message-ID: <3d375d730804211349p4e75ab5cs6603fc13869f519@mail.gmail.com> On Mon, Apr 21, 2008 at 3:37 PM, Zachary Pincus wrote: > Let me look into whether this idea is at all feasible, and if it is we > can revisit the issue of whether it belongs anywhere near scipy.
> (Would getting Fredrik Lundh's OK to use various bits in this way make > things easier? He does seem much more responsive to direct queries > than patch-submissions.) That would alleviate my concerns, yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Mon Apr 21 22:02:11 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 21 Apr 2008 22:02:11 -0400 Subject: [SciPy-user] Behaviour of odeint In-Reply-To: <20080422002624.GA9478@giton> References: <20080422002624.GA9478@giton> Message-ID: On 21/04/2008, Gabriel Gellner wrote: > I just noticed that when you send odeint an empty array, it returns an empty > array (that is it doesn't raise and error), is this intentional? > > I noticed similarly with array functions like numpy.sin, have the same > behaviour. Can anyone shed light on the logic behind this, I find in > confusing, but I am easily convince ;-) For numpy.sin and friends (ufuncs) the idea is that these functions act elementwise on arrays; you can get the same effect in pure python with, for example, map(math.sin, list) or [sin(x) for x in list] In both cases it's clear that the empty list means "do nothing". For odeint I'm not so convinced; since you normally need to supply it with the starting time, plus any times you want it to integrate to, it seems like calling it with an empty list is always an error. Still, I suppose I can imagine some application for odeint working on an empty list. I don't know that it's worth putting much effort into replicating odeint's interface. Anne From peridot.faceted at gmail.com Mon Apr 21 22:33:44 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 21 Apr 2008 22:33:44 -0400 Subject: [SciPy-user] PyDSTool and SciPy (integrators) In-Reply-To: References: <4D547BCC-B8BC-4822-9909-ABD8560A3C62@bu.edu> <20080418192238.GA6937@basestar> Message-ID: On 21/04/2008, Rob Clewley wrote: > I am very interested in getting spline support to a point where it > could be used as a more accurate way of representing solutions to > ODEs. I will certainly use it in my Trajectory class if you give it a > similar interface to that already used by interp1d. The interface I've used is similar to that used in interp1d, though a bit more complicated as I rely on having derivative information (if you want a continuous derivative). You can feed my code a list/array of x values and a list the same length of y values and derivatives, but the list of y values is of the form [[y1, y1prime, y1doubleprime], [y2, y2prime], ... ] You can construct a PiecewisePolynomial by giving it x and y values (in the above format), or by appending new points onto the end of an existing PiecewisePolynomial. > In case this can help in your problem, our current interface to the > H&W codes allows you to specify time points at which the integrator is > guaranteed to step on to as it goes along. > > If someone is willing to help improve the low level interfaces to the > H&W codes by exposing internal information like the polynomial > coefficients for the dense solution then I would include that and > other trajectory-related data in our ODE support. When working out this interface design I looked at the FORTRAN interface to LSODA/LSODAR. 
The way I envisioned the interface working was that the inner loop would ask LSODAR to take a single internal step then return; the inner loop would then read off the time that had been reached, the order, and all the available derivatives from arrays the FORTRAN code exposed. This information then allows you to add a (single) new point to the end of the trajectory; the polynomial is constructed to have the correct order and to match enough derivatives at each end to completely determine it. (Thus it will not be *exactly* the polynomial the solver uses, but should be close. The solver's internal polynomial is probably not even continuous from step to step.) Other information related to the solution and its convergence can straightforwardly be added. At the moment all this will be relatively slow, since the PiecewisePolynomial object is implemented in pure python. But for modest orders - 12 is the highest LSODAR uses - the operations are fairly inexpensive, so conversion to cython should produce a Trajectory object that is about as good an approximation to the solution as the solver's internal polynomials, with exactly the same knots as the solution steps. I think it makes a certain amount of sense to use the one-step-at-a-time approach if you want to keep convergence information, since you need to store information per step in any case. Time points that require special care are a valuable feature, I think (certainly the FORTRAN code goes out of its way to support them), but didn't solve my original problem, as I only knew where problems would occur based on the y coordinates rather than the t coordinates. Anne From rob.clewley at gmail.com Mon Apr 21 23:20:45 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Mon, 21 Apr 2008 23:20:45 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <71066A28-D1B5-4206-8D8B-17039D3F4639@ster.kuleuven.be> References: <20080419214142.GA12913@basestar> <9457e7c80804202302r23a07026gc056e0ba9fb95fea@mail.gmail.com> <71066A28-D1B5-4206-8D8B-17039D3F4639@ster.kuleuven.be> Message-ID: On Mon, Apr 21, 2008 at 6:07 AM, Joris De Ridder wrote: > > On 21 Apr 2008, at 08:02, St?fan van der Walt wrote: > > > Talking about derivatives, does anyone know whether > > > > http://en.wikipedia.org/wiki/Automatic_differentiation > > > > is of value? It's been on my TODO list for a while, but I haven't > > gotten round to studying it in detail. It is of great value for several reasons (e.g. see Eric Phipps' thesis at www.math.cornell.edu/~gucken/PDF/phipps.pdf). Our group at Cornell a couple of years ago had been waiting to see if there would be a standard package emerging for us to interface into Python.... > I think the Scientific.Functions.Derivatives subpackage of Konrad > Hinsen has such functionality. For one thing, I strongly dislike the way of interacting with the numeric objects and their derivatives in that package. It's not very Pythonic in its use of classes and duck typing. The other problem is, the situations where it would be of most use need so many computations that it's not an effective approach unless it is done at the level of C code. A couple of years ago we tried interfacing to a new release of ADOL-C through SWIG, but found some extremely strange memory errors that we couldn't sort out. I think those only showed up when we tried to pass in data that had been prepared by SWIG from Python arrays. Anyway, getting a package like that properly interfaced would be the way forward as far as we are concerned. 
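As a point of reference, the operator-overloading flavour of AD being discussed can be sketched in a few lines with dual numbers. This toy class is purely illustrative; it is not the API of Scientific.Functions.Derivatives, ADOL-C, or OpenAD:

    class Dual(object):
        # forward-mode AD: carry a value and its derivative together
        def __init__(self, val, dval=0.0):
            self.val, self.dval = val, dval
        def _coerce(self, other):
            return other if isinstance(other, Dual) else Dual(other)
        def __add__(self, other):
            other = self._coerce(other)
            return Dual(self.val + other.val, self.dval + other.dval)
        __radd__ = __add__
        def __mul__(self, other):
            other = self._coerce(other)
            # product rule
            return Dual(self.val * other.val,
                        self.dval * other.val + self.val * other.dval)
        __rmul__ = __mul__

    x = Dual(3.0, 1.0)       # seed dx/dx = 1
    f = x * x + 2 * x        # f(x) = x**2 + 2*x
    print f.val, f.dval      # 15.0 and f'(3) = 8.0

Every built-in operation needs a rule like the product rule above, which is exactly the "functions knowing their derivatives" idea mentioned below.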
OpenAD looks like another good bet for an open-source library. Then again, the problem of efficiency becomes getting the users functions into C code so that "source code transformation" can be performed. I certainly like the idea of having all in-built functions "knowing" their derivatives, but it's not clear how these python-level representations can be best interfaced to C code, whether the basis for the AD is "source code transformation" or "operator overloading". I think there would need to be a new class that allows "user" functions that know their derivatives but which are defined in a piecewise-fashion, e.g. to include solutions to differential equations (for instance) represented as interpolated polynomials. > Also, to come back to the original > thread subject, Scientific.Functions.Interpolation is also able to > generate Interpolators (don't know the algorithm). It's only linear interpolation, but, on the plus side, does support multi-dimensional meshes, which is a generalization I wholeheartedly endorse. Alas, multivariate polynomials or wavelet-type bases would be needed. If we're going to start thinking "big" for supporting more mathematically natural functionality, I believe we ought to be thinking far enough out to support the basic objects of PDE computations too (or at least a compatible class structure and API), even if it's not fully utilized just yet. Scipy should support scientific computation based around mathematically-oriented fundamental objects and functionality (i.e. to hide the "dumbness" of arrays inside some sugar coating). I think writing and debugging complex mathematical tools will be a lot easier if we raise the bar a little and use somewhat more sophisticated basic objects than arrays (our efforts with Pointsets have helped enormously in writing sub-packages of PyDSTool, in particular for putting PyCont together so quickly). Such an approach will also be a lot easier to introduce to new and less technical scientific users. The field of scientific computation is still weighed down by keeping old Fortran-style APIs up to date for re-use (of course, I'm guilty of this). Python brings a fresh opportunity to break these shackles, at least interpreted in terms of adding an extra level of indirection and duck-typed magic at the heart of scipy. Some imagination is needed here (a la "import antigravity" ;). -Rob From ggellner at uoguelph.ca Mon Apr 21 23:56:00 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Mon, 21 Apr 2008 23:56:00 -0400 Subject: [SciPy-user] Behaviour of odeint In-Reply-To: References: <20080422002624.GA9478@giton> Message-ID: <20080422035600.GA30917@basestar> On Mon, Apr 21, 2008 at 10:02:11PM -0400, Anne Archibald wrote: > On 21/04/2008, Gabriel Gellner wrote: > > I just noticed that when you send odeint an empty array, it returns an empty > > array (that is it doesn't raise and error), is this intentional? > > > > I noticed similarly with array functions like numpy.sin, have the same > > behaviour. Can anyone shed light on the logic behind this, I find in > > confusing, but I am easily convince ;-) > > For numpy.sin and friends (ufuncs) the idea is that these functions > act elementwise on arrays; you can get the same effect in pure python > with, for example, > > map(math.sin, list) > or > [sin(x) for x in list] > > In both cases it's clear that the empty list means "do nothing". > Ahhh, the list comprehension makes it clear to me! Thanks! 
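For the record, the behaviour in question looks like this (output from a numpy of roughly that vintage; the exact dtype printed may vary between versions):

    >>> import math
    >>> import numpy as np
    >>> np.sin(np.array([]))
    array([], dtype=float64)
    >>> [math.sin(x) for x in []]   # the pure-python analogue
    []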
> For odeint I'm not so convinced; since you normally need to supply it > with the starting time, plus any times you want it to integrate to, it > seems like calling it with an empty list is always an error. Still, I > suppose I can imagine some application for odeint working on an empty > list. > I agree; I have decided this is a misfeature. Also, MATLAB raises an error here, yet has the previous (elementwise) behaviour, so at least one other group agrees :-) > I don't know that it's worth putting much effort into replicating > odeint's interface. > Nah, but I don't want to mess things up for no reason. Gabriel From scipy at mspacek.mm.st Mon Apr 21 23:59:56 2008 From: scipy at mspacek.mm.st (Martin Spacek) Date: Mon, 21 Apr 2008 20:59:56 -0700 Subject: [SciPy-user] Measuring local image contrast Message-ID: <480D62BC.4000104@mspacek.mm.st> Given a 2D numpy array of image pixel values, I'd like to get an output array whose entries describe the local contrast around each of the pixels in the input. I'm not sure exactly how this is typically done in the image processing field, but I'm guessing one way would be to take the difference between the center pixel and the mean of the surrounding 8 pixels. Is there a standard way to do this in numpy/scipy? Maybe describe the operation in a kernel somehow, and then convolve it with the image? I've looked at scipy.ndimage, but nothing pops out for me. Same goes for PIL's ImageFilter. My problem is probably lack of familiarity with the nomenclature. Cheers, Martin From stef.mientki at gmail.com Tue Apr 22 02:51:22 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Tue, 22 Apr 2008 08:51:22 +0200 Subject: [SciPy-user] ANN: EPD - Enthought Python Distribution released In-Reply-To: References: Message-ID: <480D8AEA.3020904@gmail.com> Travis Vaught wrote: > Greetings, > > Enthought is pleased to announce the release of the Enthought Python > Distribution (EPD) version 2.5.2001. > > http://www.enthought.com/epd > > Could someone tell me the difference between EPD and ETS? If I look at the summary, I see EPD = ETS + 10 other packages, but 8 of the 10 other packages are already in ETS ??? thanks, Stef Mientki From gael.varoquaux at normalesup.org Tue Apr 22 03:33:58 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 22 Apr 2008 09:33:58 +0200 Subject: [SciPy-user] ANN: EPD - Enthought Python Distribution released In-Reply-To: <480D8AEA.3020904@gmail.com> References: <480D8AEA.3020904@gmail.com> Message-ID: <20080422073358.GC25082@phare.normalesup.org> On Tue, Apr 22, 2008 at 08:51:22AM +0200, Stef Mientki wrote: > Could someone tell me the difference between EPD and ETS? > If I look at the summary, I see > EPD = ETS + 10 other packages, > but 8 of the 10 other packages are already in ETS ??? ETS is a bunch of code, libraries if you want. It is released under the BSD license and is really code, nothing else. EPD is a set of binaries, a distribution. It actually contains much more than ETS, with numpy, scipy, matplotlib, vtk, pytables, ipython, and much more. The two differences are that ETS is source code, whereas EPD is a distribution. ETS is indeed available as some binary eggs, but clearly its value is not there, whereas the value of EPD is in the fact that it is a consistent set of binary packages. The other difference is that EPD contains much more than ETS. I hope this answers your question. Gaël
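Picking up Martin's local-contrast question above: the center-minus-mean-of-8-neighbours measure he describes is a kernel operation, and scipy.ndimage can apply it directly. A sketch (the random image is just a stand-in for real data):

    import numpy as np
    from scipy import ndimage

    # center pixel minus the mean of its 8 neighbours
    kernel = np.array([[-1., -1., -1.],
                       [-1.,  8., -1.],
                       [-1., -1., -1.]]) / 8.0

    image = np.random.rand(64, 64)      # stand-in for the real image
    contrast = ndimage.convolve(image, kernel)

Up to sign and scale this is a Laplacian kernel, which is the filter Carlos points to further down in the thread.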
From robert.kern at gmail.com Tue Apr 22 05:13:28 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 22 Apr 2008 04:13:28 -0500 Subject: [SciPy-user] ANN: EPD - Enthought Python Distribution released In-Reply-To: <480D8AEA.3020904@gmail.com> References: <480D8AEA.3020904@gmail.com> Message-ID: <3d375d730804220213s6f6a5394xdf1e5b9670f4b13f@mail.gmail.com> On Tue, Apr 22, 2008 at 1:51 AM, Stef Mientki wrote: > Travis Vaught wrote: > > Greetings, > > > > Enthought is pleased to announce the release of the Enthought Python > > Distribution (EPD) version 2.5.2001. > > > > http://www.enthought.com/epd > > > > > Could someone tell me the difference between EPD and ETS? > If I look at the summary, I see > EPD = ETS + 10 other packages, > but 8 of the 10 other packages are already in ETS ??? A bit more than that: http://www.enthought.com/products/epdlibraries.php To add to Gaël's statement, most of the Enthought Tool Suite (ETS) was written by Enthought, with notable contributions like Prabhu and Gaël's work on Mayavi (which builds on the rest of the ETS). The Enthought Python Distribution (EPD) is a binary distribution of Python and a large number of third-party packages, most of which we didn't write. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Tue Apr 22 05:13:19 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 22 Apr 2008 18:13:19 +0900 Subject: [SciPy-user] [numscons] 0.6.3 release: building scipy with MS compilers works Message-ID: <480DAC2F.9010602@ar.media.kyoto-u.ac.jp> Hi, Sorry for announcing one more numscons release in such a short time: this release can finally handle the last common platform, Visual Studio on win32. Win32 installers and source tarballs can be found on launchpad, as usual: https://code.launchpad.net/numpy.scons.support/0.6/0.6.3 Python eggs on pypi should follow soon (I cannot upload to pypi from my lab, unfortunately). Apart from the various fixes necessary to make the build work with MS compilers, there were a few improvements: - some more improvements to the f2py scons tool, which simplified some scipy scons scripts a bit - the addition of a silent mode, to get more terse command-line output (not perfect yet), a la kbuild for people familiar with compiling linux (e.g. you get CC foo.c instead of gcc -W -Wall blas blas blas). The goal is to make warnings more obvious (and to fix them at some point, of course :) ). Just use python setupscons.py scons --silent=N with N between 1 and 3 to see the result. As this was the last major platform I wanted to support, I can now move on to polishing the API as well as starting real documentation for package developers. cheers, David From ondrej at certik.cz Tue Apr 22 06:24:40 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Tue, 22 Apr 2008 12:24:40 +0200 Subject: [SciPy-user] PDE solvers In-Reply-To: <350f72300804210836o46931c0bwd6047c33939fa66a@mail.gmail.com> References: <350f72300804210836o46931c0bwd6047c33939fa66a@mail.gmail.com> Message-ID: <85b5c3130804220324sda90b98j61c453ac44c6948f@mail.gmail.com> On Mon, Apr 21, 2008 at 5:36 PM, Hector Villafuerte wrote: > Hi, > I'm fairly new to Scipy, though I've been using python for a while. I > wonder what the current situation in Scipy is for solving partial > differential equations.
I've heard of Fenics, Simfe (which use Finite > Element Analysis), and FiPy (which uses Finite Volume Analysis) but I > wonder what the Scipy community has to say about them (in general, > I've found documentation to be a problem). > Thanks in advance for your help, There is also sfepy: http://code.google.com/p/sfepy/ But yes, documentation is a problem. Ondrej From lbolla at gmail.com Tue Apr 22 11:43:44 2008 From: lbolla at gmail.com (lorenzo bolla) Date: Tue, 22 Apr 2008 17:43:44 +0200 Subject: [SciPy-user] different behaviour in asfarray(None) effects scikits.openopt Message-ID: <80c99e790804220843k35b01295o114d9e720cce6347@mail.gmail.com> numpy.asfarray(None) behaviour from numpy version 1.0.5 to 1.1.0 breaks scikits.openopt: in particular the file scikits/openopt/Kernel/BaseProblem.py a possible patch is as follows: $ diff BaseProblem.py.orig BaseProblem.py 120c120 < self.x0 = None --- > self.x0 = nan L. On Tue, Apr 22, 2008 at 5:29 PM, St?fan van der Walt wrote: > 2008/4/22 lorenzo bolla : > > I noticed a change in the behaviour of numpy.asfarray between numpy > version > > 1.0.5 and 1.1.0: > > > > 1.0.5 > > ==== > > > > In [3]: numpy.asfarray(None) > > Out[3]: array(nan) > > In [4]: numpy.__version__ > > Out[4]: '1.0.5.dev4455' > > > > 1.1.0 > > ==== > > > > In [16]: numpy.asfarray(None) > > > --------------------------------------------------------------------------- > > : float() argument must be a string or a > number > > > > Is this intended? why? > > Yes, 'asfarray' is equivalent to > > array(input).astype(dtype) > > I think it would be wrong to assume that None means NaN. > > Cheers > St?fan > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at scipy.org > http://projects.scipy.org/mailman/listinfo/numpy-discussion > -- Lorenzo Bolla lbolla at gmail.com http://lorenzobolla.emurse.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Tue Apr 22 12:12:29 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 22 Apr 2008 19:12:29 +0300 Subject: [SciPy-user] different behaviour in asfarray(None) effects scikits.openopt In-Reply-To: <80c99e790804220843k35b01295o114d9e720cce6347@mail.gmail.com> References: <80c99e790804220843k35b01295o114d9e720cce6347@mail.gmail.com> Message-ID: <480E0E6D.4000500@scipy.org> lorenzo bolla wrote: > numpy.asfarray(None) behaviour from numpy version 1.0.5 to 1.1.0 > breaks scikits.openopt: in particular the file > scikits/openopt/Kernel/BaseProblem.py > > a possible patch is as follows: > > $ diff BaseProblem.py.orig BaseProblem.py > 120c120 > < self.x0 = None > --- > > self.x0 = nan > > L. > Ok, I'll fix it today. Regards, D. From elmico.filos at gmail.com Tue Apr 22 12:19:24 2008 From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=) Date: Tue, 22 Apr 2008 18:19:24 +0200 Subject: [SciPy-user] Random sparse matrices Message-ID: Dear all, Is there any function similar to octave's 'sprandn', which generates a sparse random matrix with values normally distributed? 
http://octave.sourceforge.net/index/f/sprand.html Thanks a lot in advance From nwagner at iam.uni-stuttgart.de Tue Apr 22 12:24:57 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 Apr 2008 18:24:57 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: Message-ID: On Tue, 22 Apr 2008 18:19:24 +0200 "Mico Filós" wrote: > Dear all, > > Is there any function similar to octave's 'sprandn', >which generates a > sparse random matrix with values normally distributed? > > http://octave.sourceforge.net/index/f/sprand.html > > Thanks a lot in advance > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user AFAIK, there is no function like sprandn >>> dir (scipy.sparse) ['SparseEfficiencyWarning', 'SparseWarning', 'Tester', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__path__', 'base', 'bench', 'bmat', 'bsr', 'bsr_matrix', 'compressed', 'construct', 'coo', 'coo_matrix', 'csc', 'csc_matrix', 'csr', 'csr_matrix', 'data', 'dia', 'dia_matrix', 'dok', 'dok_matrix', 'eye', 'hstack', 'identity', 'issparse', 'isspmatrix', 'isspmatrix_bsr', 'isspmatrix_coo', 'isspmatrix_csc', 'isspmatrix_csr', 'isspmatrix_dia', 'isspmatrix_dok', 'isspmatrix_lil', 'kron', 'kronsum', 'lil', 'lil_diags', 'lil_eye', 'lil_matrix', 'sparsetools', 'spdiags', 'speye', 'spfuncs', 'spidentity', 'spkron', 'spmatrix', 'sputils', 'test', 'vstack'] Nils From c.gillespie at ncl.ac.uk Tue Apr 22 13:00:54 2008 From: c.gillespie at ncl.ac.uk (Colin Gillespie) Date: Tue, 22 Apr 2008 18:00:54 +0100 Subject: [SciPy-user] Coupled ODEs Message-ID: <480E19C6.1070900@ncl.ac.uk> Hi, How do I solve coupled ODEs using scipy? For the ODE dz/dt = z^2 I have from scipy import * def f(t,z): return z**2 t = [0,1] z0 = array([0]) z = integrate.odeint(f, z0, t) print z But how do I handle: dy/dt = y*z dz/dt = z^2+y Thanks Colin -- Dr Colin Gillespie http://www.mas.ncl.ac.uk/~ncsg3/ From lou_boog2000 at yahoo.com Tue Apr 22 13:14:10 2008 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Tue, 22 Apr 2008 10:14:10 -0700 (PDT) Subject: [SciPy-user] Coupled ODEs In-Reply-To: Message-ID: <512371.976.qm@web34404.mail.mud.yahoo.com> You use arrays (2-dimensional in your example) for initial conditions and define f to be a function from R^2 to R^2 (i.e. it returns a 2D array of function values - called the vector field of the ODE - using the input 2D array (y,z)). SciPy.odeint is set up to handle arrays, not just scalars. You should get the tutorial on SciPy by Travis Oliphant at http://www.scipy.org/Wiki/Documentation?action=AttachFile&do=get&target=scipy_tutorial.pdf the above URL should be all on one line. Search for ODEINT and all is explained. -- Lou Pecora --- Colin Gillespie wrote: Hi, How do I solve coupled ODEs using scipy? For the ODE dz/dt = z^2 I have from scipy import * def f(t,z): return z**2 t = [0,1] z0 = array([0]) z = integrate.odeint(f, z0, t) print z But how do I handle: dy/dt = y*z dz/dt = z^2+y Thanks Colin ____________________________________________________________________________________ Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now.
http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ From carlos.s.santos at gmail.com Tue Apr 22 13:14:18 2008 From: carlos.s.santos at gmail.com (Carlos da Silva Santos) Date: Tue, 22 Apr 2008 14:14:18 -0300 Subject: [SciPy-user] Measuring local image contrast In-Reply-To: <480D62BC.4000104@mspacek.mm.st> References: <480D62BC.4000104@mspacek.mm.st> Message-ID: <1dc6ddb60804221014y31649f81rb9acd48783f27e26@mail.gmail.com> It seems you are looking for a laplacian filter. Take a closer look at the implementation in scipy.ndimage. HTH Carlos On Tue, Apr 22, 2008 at 12:59 AM, Martin Spacek wrote: > Given a 2D numpy array of image pixel values, I'd like to get an output array whose entries describe the local contrast around each of the pixels in the input. I'm not sure exactly how this is typically done in the image processing field, but I'm guessing one way would be to take the difference between the center pixel and the mean of the surrounding 8 pixels. Is there a standard way to do this in numpy/scipy? Maybe describe the operation in a kernel somehow, and then convolve it with the image? I've looked at scipy.ndimage, but nothing pops out for me. Same goes for PIL's ImageFilter. My problem is probably lack of familiarity with the nomenclature. > > Cheers, > > Martin > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From elmico.filos at gmail.com Tue Apr 22 13:26:37 2008 From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=) Date: Tue, 22 Apr 2008 19:26:37 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: Message-ID: > AFAIK, there is no function like sprandn Thanks Nils. Could anyone suggest a hint of how to get a random sparse matrix without reinventing the wheel (using numpy/scipy functions whenever possible). Is there any reason why 'sprandn' does not exist? (I mean, perhaps there is a more general method to do that, I don't know). Thanks again From nwagner at iam.uni-stuttgart.de Tue Apr 22 13:29:37 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 Apr 2008 19:29:37 +0200 Subject: [SciPy-user] Coupled ODEs In-Reply-To: <480E19C6.1070900@ncl.ac.uk> References: <480E19C6.1070900@ncl.ac.uk> Message-ID: On Tue, 22 Apr 2008 18:00:54 +0100 Colin Gillespie wrote: > Hi, > > How do I solve coupled ODEs using scipy? > >For the ODE > > dz/dt = z^2 > > I have > > from scipy import * > def f(t,z): > return z**2 > t = [0,1] > z0 = array([0]) > > z = integrate.odeint(f, z0, t) > print z > > But how do I handle: > > dy/dt = y*z > dz/dt = z^2+y > > Thanks > > Colin > > > > -- > Dr Colin Gillespie > http://www.mas.ncl.ac.uk/~ncsg3/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user from scipy import * from pylab import plot, show def func(x,t): tmp = zeros(2,float) tmp[0] = x[0]*x[1] tmp[1] = x[1]**2+x[0] return tmp x0 = zeros(2,float) # Initial conditions x0[0] = 0.1 x0[1] = 0.9 t = linspace(0,1.,100) x = integrate.odeint(func,x0,t) plot(t,x[:,0],t,x[:,1]) show() Nils From nwagner at iam.uni-stuttgart.de Tue Apr 22 13:34:40 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 Apr 2008 19:34:40 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: Message-ID: On Tue, 22 Apr 2008 19:26:37 +0200 "Mico Fil?s" wrote: >> AFAIK, there is no function like sprandn > > Thanks Nils. 
> > Could anyone suggest a hint of how to get a random >sparse matrix > without reinventing the wheel (using numpy/scipy >functions whenever > possible). > If you use help (scipy.sparse) you will find the following example >>> from scipy import sparse, linsolve >>> from numpy import linalg >>> from numpy.random import rand >>> A = sparse.lil_matrix((1000, 1000)) >>> A[0, :100] = rand(100) >>> A[1, 100:200] = A[0, :100] >>> A.setdiag(rand(1000)) > Is there any reason why 'sprandn' does not exist? (I >mean, perhaps > there is a more general method to do that, I don't >know). > > Thanks again > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From wnbell at gmail.com Tue Apr 22 14:08:15 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 22 Apr 2008 13:08:15 -0500 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: Message-ID: On Tue, Apr 22, 2008 at 12:26 PM, Mico Fil?s wrote: > Could anyone suggest a hint of how to get a random sparse matrix > without reinventing the wheel (using numpy/scipy functions whenever > possible). Here's a first stab at it. from numpy.random import random_integers from scipy import rand, randn, ones from scipy.sparse import csr_matrix def _rand_sparse(m, n, density): # check parameters here nnz = max( min( int(m*n*density), m*n), 0) row = random_integers(low=0, high=m-1, size=nnz) col = random_integers(low=0, high=n-1, size=nnz) data = ones(nnz, dtype='int8') # duplicate (i,j) entries will be summed together return csr_matrix( (data,(row,col)), shape=(m,n) ) def sprand(m, n, density): """Document me""" A = _rand_sparse(m, n, density) A.data = rand(A.nnz) return A def sprandn(m, n, density): """Document me""" A = _rand_sparse(m, n, density) A.data = randn(A.nnz) return A if __name__ == '__main__': print sprand(4, 3, 0.5).todense() print sprandn(4, 3, 0.5).todense() > Is there any reason why 'sprandn' does not exist? (I mean, perhaps > there is a more general method to do that, I don't know). Simply a lack of time :) If you (or someone else) would add the appropriate docstrings and error checking then I'd happily add these functions to scipy.sparse. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From elmico.filos at gmail.com Tue Apr 22 14:32:21 2008 From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=) Date: Tue, 22 Apr 2008 20:32:21 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: Message-ID: Well, that's not exactly what I want. Randomness is not only in the values of the non-empty elements, but also in the position (i,j) of these non-empty elements. The idea is to draw randomly a fraction of the M*N possible elements in the matrix (M and N are the number of rows and columns), and assign to each of these elements a normal random number. From elmico.filos at gmail.com Tue Apr 22 14:34:33 2008 From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=) Date: Tue, 22 Apr 2008 20:34:33 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: Message-ID: Thanks Nathan. That's exactly what I needed. I would be happy to write the docstrings, but I cannot do it right now. 
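One possible refinement of Nathan's sketch, for cases where the duplicate (i,j) draws matter: sample the flat positions without replacement instead. sprandn_exact is a hypothetical name, not a scipy function, and the permutation costs O(m*n) memory, so Nathan's approach remains preferable for large, very sparse matrices:

    import numpy as np
    from scipy.sparse import coo_matrix

    def sprandn_exact(m, n, density):
        # exactly nnz distinct nonzero positions, normally distributed values
        nnz = max(min(int(m * n * density), m * n), 0)
        flat = np.random.permutation(m * n)[:nnz]
        row, col = flat // n, flat % n        # row-major unflattening
        return coo_matrix((np.random.randn(nnz), (row, col)), shape=(m, n))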
From wnbell at gmail.com  Tue Apr 22 14:51:43 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 22 Apr 2008 13:51:43 -0500
Subject: [SciPy-user] Random sparse matrices
In-Reply-To: 
References: 
Message-ID: 

On Tue, Apr 22, 2008 at 1:34 PM, Mico Filós wrote:
> I would be happy to write the docstrings, but I cannot do it right now.

Thanks. I would appreciate it if the docstrings conformed to those in:
http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/construct.py#L85

Does anyone know a way to include an example in a docstring that *will
not* be treated as a doctest (for non-deterministic functions like
this)?

-- 
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From xavier.gnata at gmail.com  Tue Apr 22 15:18:38 2008
From: xavier.gnata at gmail.com (Gnata Xavier)
Date: Tue, 22 Apr 2008 21:18:38 +0200
Subject: [SciPy-user] scipy.weave.inline and libs
In-Reply-To: <480E19C6.1070900@ncl.ac.uk>
References: <480E19C6.1070900@ncl.ac.uk>
Message-ID: <480E3A0E.9010906@gmail.com>

Hi,

I'm trying to figure out how scipy.weave.inline works.
It is pretty easy (even if the doc is missing) but I still have one
problem: I would like to be able to link a lib to my C code.
How should I add -lm or -lgsl -lgslcblas -lm or -lwhatever ?

val = scipy.weave.inline(code,
                         ['pri', 'max'],
                         type_converters=converters.blitz,
                         compiler='gcc',
                         extra_compile_args=['-O3', '-fopenmp', '-lm'])

"g++: -lm: linker input file unused because linking not done"

Yep it is true...so is it even possible?

Xavier

From dwf at cs.toronto.edu  Tue Apr 22 15:30:49 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Tue, 22 Apr 2008 15:30:49 -0400
Subject: [SciPy-user] Random sparse matrices
Message-ID: <20080422193102.C495039C46B@new.scipy.org>

I think the basic algorithm you want is to draw Z integers from [0,M)
and another Z from [0,N), use those as the indices, and create a
sparse coo_matrix from that. I believe there's a way to sample
integers with replacement built in to numpy; check help(numpy.random).
This has a slight chance of producing duplicate row/col pairs, so
you'll end up with fewer than Z nonzero elements.

Another strategy would be to sample on [0,M*N) without replacement and
use integer division by M to get cols and modulus by M to get rows.

Cheers,

DWF

-----Original Message-----
From: Mico Filós
Sent: April 22, 2008 2:32 PM
To: SciPy Users List
Subject: Re: [SciPy-user] Random sparse matrices

Well, that's not exactly what I want. Randomness is not only in the
values of the non-empty elements, but also in the position (i,j) of
these non-empty elements. The idea is to draw randomly a fraction of
the M*N possible elements in the matrix (M and N are the number of
rows and columns), and assign to each of these elements a normal
random number.
_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

From hoytak at gmail.com  Tue Apr 22 15:33:41 2008
From: hoytak at gmail.com (Hoyt Koepke)
Date: Tue, 22 Apr 2008 12:33:41 -0700
Subject: [SciPy-user] scipy.weave.inline and libs
In-Reply-To: <480E3A0E.9010906@gmail.com>
References: <480E19C6.1070900@ncl.ac.uk> <480E3A0E.9010906@gmail.com>
Message-ID: <4db580fd0804221233w31ac2451xbc45510ed718bc88@mail.gmail.com>

If you look at the doc string for the inline function, it lists all
the possible keyword options. One of them is libraries, if I recall
right.
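A minimal sketch of that keyword route (illustrative only; it assumes
GSL and its headers are installed where gcc can find them, and uses the
headers/libraries keywords that inline passes through to the build
step):

    from scipy import weave

    x = 1.5
    code = """
           // call into libgsl: Bessel function J0 evaluated at x
           return_val = gsl_sf_bessel_J0(x);
           """
    val = weave.inline(code, ['x'],
                       headers=['<gsl/gsl_sf_bessel.h>'],
                       libraries=['gsl', 'gslcblas', 'm'],
                       compiler='gcc')
    print val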
--Hoyt

On Tue, Apr 22, 2008 at 12:18 PM, Gnata Xavier wrote:
> Hi,
>
> I'm trying to figure out how scipy.weave.inline works.
> It is pretty easy (even if the doc is missing) but I still have one
> problem: I would like to be able to link a lib to my C code.
> How should I add -lm or -lgsl -lgslcblas -lm or -lwhatever ?
>
> val = scipy.weave.inline(code,
>                          ['pri', 'max'],
>                          type_converters=converters.blitz,
>                          compiler='gcc',
>                          extra_compile_args=['-O3', '-fopenmp', '-lm'])
>
> "g++: -lm: linker input file unused because linking not done"
>
> Yep it is true...so is it even possible?
>
> Xavier
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

-- 
+++++++++++++++++++++++++++++++++++
Hoyt Koepke
UBC Department of Computer Science
http://www.cs.ubc.ca/~hoytak/
hoytak at gmail.com
+++++++++++++++++++++++++++++++++++

From lechtlr at yahoo.com  Tue Apr 22 16:19:35 2008
From: lechtlr at yahoo.com (lechtlr)
Date: Tue, 22 Apr 2008 13:19:35 -0700 (PDT)
Subject: [SciPy-user] Constrained Optimization using Simulated Annealing
Message-ID: <163117.70473.qm@web57914.mail.re3.yahoo.com>

Is there a way to introduce constraints for the objective function in
the Simulated Annealing optimization method in scipy?

Thanks,
-Lex

---------------------------------
Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmitrey.kroshko at scipy.org  Tue Apr 22 16:26:28 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Tue, 22 Apr 2008 23:26:28 +0300
Subject: [SciPy-user] Constrained Optimization using Simulated Annealing
In-Reply-To: <163117.70473.qm@web57914.mail.re3.yahoo.com>
References: <163117.70473.qm@web57914.mail.re3.yahoo.com>
Message-ID: <480E49F4.4060306@scipy.org>

lechtlr wrote:
>
> Is there a way to introduce constraints for the objective function in
> the Simulated Annealing optimization method in scipy?
>
> Thanks,
> -Lex

IIRC the scipy SA solver has implicit constraints lb=-100, ub=+100.
There is also the box-bound constrained solver galileo in
scikits.openopt.

Regards, D.

From lbolla at gmail.com  Tue Apr 22 16:26:46 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Tue, 22 Apr 2008 22:26:46 +0200
Subject: [SciPy-user] Constrained Optimization using Simulated Annealing
In-Reply-To: <163117.70473.qm@web57914.mail.re3.yahoo.com>
References: <163117.70473.qm@web57914.mail.re3.yahoo.com>
Message-ID: <80c99e790804221326h2df9e259ua4176aad38b810c8@mail.gmail.com>

I had the same problem some time ago, and I concluded that the easiest
way is to introduce a map on the objective function's parameters to
enforce the constraints. For example, if I have a function f(x) and I
want to force x to stay in (-1, 1), I can introduce a map like:

x = t / (1 + |t|)

which maps t in (-inf, inf) to x in (-1, 1). Then use the
unconstrained optimizer over t.

hth,
L.

On Tue, Apr 22, 2008 at 10:19 PM, lechtlr wrote:
>
> Is there a way to introduce constraints for the objective function in the
> Simulated Annealing optimization method in scipy?
>
> Thanks,
> -Lex
>
>
> ------------------------------
> Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it
> now.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

-- 
Lorenzo Bolla
lbolla at gmail.com
http://lorenzobolla.emurse.com/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lopmart at gmail.com  Tue Apr 22 16:38:54 2008
From: lopmart at gmail.com (Jose Lopez)
Date: Tue, 22 Apr 2008 13:38:54 -0700
Subject: [SciPy-user] help about a function to solve no linear equations
Message-ID: <4eeef9d40804221338g99aea92q55563ded3ff382dc@mail.gmail.com>

Hi,

does anybody know of a function in scipy for solving systems of
nonlinear equations?

Thanks,
JL
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nwagner at iam.uni-stuttgart.de  Tue Apr 22 16:45:49 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 22 Apr 2008 22:45:49 +0200
Subject: [SciPy-user] help about a function to solve no linear equations
In-Reply-To: <4eeef9d40804221338g99aea92q55563ded3ff382dc@mail.gmail.com>
References: <4eeef9d40804221338g99aea92q55563ded3ff382dc@mail.gmail.com>
Message-ID: 

On Tue, 22 Apr 2008 13:38:54 -0700
 "Jose Lopez" wrote:
> Hi,
>
> does anybody know of a function in scipy for solving systems of
> nonlinear equations?
>
> Thanks,
> JL

scipy.optimize.fsolve

fsolve(func, x0, args=(), fprime=None, full_output=0, col_deriv=0,
       xtol=1.49012e-08, maxfev=0, band=None, epsfcn=0.0, factor=100,
       diag=None, warning=True)

Find the roots of a function.

Description:

  Return the roots of the (non-linear) equations defined by
  func(x)=0 given a starting estimate.

Inputs:

  func -- A Python function or method which takes at least one
          (possibly vector) argument.
  x0 -- The starting estimate for the roots of func(x)=0.
  args -- Any extra arguments to func are placed in this tuple.
  fprime -- A function or method to compute the Jacobian of func with
            derivatives across the rows. If this is None, the
            Jacobian will be estimated.
  full_output -- non-zero to return the optional outputs.
  col_deriv -- non-zero to specify that the Jacobian function
               computes derivatives down the columns (faster, because
               there is no transpose operation).
  warning -- True to print a warning message when the call is
             unsuccessful; False to suppress the warning message.

Outputs: (x, {infodict, ier, mesg})

  x -- the solution (or the result of the last iteration for an
       unsuccessful call).

HTH

Nils

From hoytak at gmail.com  Tue Apr 22 19:10:44 2008
From: hoytak at gmail.com (Hoyt Koepke)
Date: Tue, 22 Apr 2008 16:10:44 -0700
Subject: [SciPy-user] code for weave / blitz function wrapper generation
Message-ID: <4db580fd0804221610j6d5924e8ia413889af13d6c39@mail.gmail.com>

Hello,

I recently wrote a general purpose function for automatically creating
wrappers to C++ functions using weave, and I thought others might find
it useful as well. In particular, I think a good place for it would be
as a method in the weave ext_module class if others agree. I'm also
looking for thoughts, suggestions, and for more people to test it. All
the heavy lifting is done by weave, so hopefully the subtle errors are
minimal.

I've attached the code and an example (with setup.py) for people to
look at. To try it out, run ./setup.py build and then
./test_wrapper.py. The interesting function is in
weave_wrap_functions.py. Here's a brief overview:

To create a function module, you specify the python name, the cpp
function's call name, a list of argument types as strings, and whether
it returns a value.
It supports argument types of (from the doc string):

Scalars:
  "double"    (python float)
  "float"     (python float)
  "int"       (python int)
  "long int"  (python int)

Blitz Arrays (converted from numpy arrays), 1 <= dim <= 11, given as
"Array<T, dim>" strings; the element types T cover the numpy dtypes
float64, float32, int32, int16, uint32 and uint16.

C++ types:
  "string" (python string)

Thus, for example, if you have the following c++ function definition:

string printData(Array<double, 2>& array2d, Array<int, 1>& idxarray,
                 const string& msg, double a_double, int an_int);

You could wrap it by calling

add_function_wrapper(wrappermod, "printData", "example::printData",
                     arglist = ["Array<double, 2>", "Array<int, 1>",
                                "string", "double", "int"],
                     returns_value=True)

This would add a function to the ext_module wrappermod called
printData which simply converts everything and calls the function.

Anyway, I hope this might be useful to someone! I'll welcome any
feedback.

--Hoyt

-- 
+++++++++++++++++++++++++++++++++++
Hoyt Koepke
UBC Department of Computer Science
http://www.cs.ubc.ca/~hoytak/
hoytak at gmail.com
+++++++++++++++++++++++++++++++++++
-------------- next part --------------
A non-text attachment was scrubbed...
Name: weave_wrapper.tar.gz
Type: application/x-gzip
Size: 3688 bytes
Desc: not available
URL: 

From lopmart at gmail.com  Tue Apr 22 19:37:56 2008
From: lopmart at gmail.com (Jose Lopez)
Date: Tue, 22 Apr 2008 16:37:56 -0700
Subject: [SciPy-user] help about a function to solve no linear equations
In-Reply-To: 
References: 
Message-ID: <4eeef9d40804221637m62b3a33amfb8a83faab8336fa@mail.gmail.com>

ok, thank you.

On Tue, Apr 22, 2008 at 1:45 PM, Nils Wagner wrote:
> On Tue, 22 Apr 2008 13:38:54 -0700
>  "Jose Lopez" wrote:
> > Hi,
> >
> > does anybody know of a function in scipy for solving systems of
> > nonlinear equations?
> >
> > Thanks,
> > JL
>
> scipy.optimize.fsolve
>
> fsolve(func, x0, args=(), fprime=None, full_output=0, col_deriv=0,
>        xtol=1.49012e-08, maxfev=0, band=None, epsfcn=0.0, factor=100,
>        diag=None, warning=True)
>
> Find the roots of a function.
>
> Description:
>
>   Return the roots of the (non-linear) equations defined by
>   func(x)=0 given a starting estimate.
>
> Inputs:
>
>   func -- A Python function or method which takes at least one
>           (possibly vector) argument.
>   x0 -- The starting estimate for the roots of func(x)=0.
>   args -- Any extra arguments to func are placed in this tuple.
>   fprime -- A function or method to compute the Jacobian of func with
>             derivatives across the rows. If this is None, the
>             Jacobian will be estimated.
>   full_output -- non-zero to return the optional outputs.
>   col_deriv -- non-zero to specify that the Jacobian function
>                computes derivatives down the columns (faster, because
>                there is no transpose operation).
>   warning -- True to print a warning message when the call is
>              unsuccessful; False to suppress the warning message.
>
> Outputs: (x, {infodict, ier, mesg})
>
>   x -- the solution (or the result of the last iteration for an
>        unsuccessful call).
>
> HTH
>
> Nils
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wangxj.uc at gmail.com  Tue Apr 22 19:56:02 2008
From: wangxj.uc at gmail.com (Xiaojian Wang)
Date: Tue, 22 Apr 2008 16:56:02 -0700
Subject: [SciPy-user] Constrained Optimization using Simulated Annealing
In-Reply-To: <163117.70473.qm@web57914.mail.re3.yahoo.com>
References: <163117.70473.qm@web57914.mail.re3.yahoo.com>
Message-ID: 

Bound constraints on the design variables x0, x1, ... are easy to
handle by generating each x[i] inside its upper and lower bounds (or
by forcing it back into bounds). For other kinds of constraints, say
cj(x0, x1, ..., xn) < 0, j = 1, ..., I like to use penalty function
methods to include them in SA and GA.

xiaojian

On Tue, Apr 22, 2008 at 1:19 PM, lechtlr wrote:
>
> Is there a way to introduce constraints for the objective function in the
> Simulated Annealing optimization method in scipy?
>
> Thanks,
> -Lex
>
>
> ------------------------------
> Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it
> now.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za  Wed Apr 23 05:22:14 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Wed, 23 Apr 2008 11:22:14 +0200
Subject: [SciPy-user] Random sparse matrices
In-Reply-To: 
References: 
Message-ID: <9457e7c80804230222x5a9bc315h3862f388fd03f02d@mail.gmail.com>

2008/4/22 Nathan Bell :
> On Tue, Apr 22, 2008 at 1:34 PM, Mico Filós wrote:
> > I would be happy to write the docstrings, but I cannot do it right now.
>
> Thanks. I would appreciate it if the docstrings conformed to those in:
> http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/construct.py#L85
>
> Does anyone know a way to include an example in a docstring that *will
> not* be treated as a doctest (for non-deterministic functions like
> this)?

For each line you wish to skip, use

>>> some_func(foo)     # doctest: +SKIP

Regards
Stéfan

From fredmfp at gmail.com  Wed Apr 23 10:08:31 2008
From: fredmfp at gmail.com (fred)
Date: Wed, 23 Apr 2008 16:08:31 +0200
Subject: [SciPy-user] remove duplicate points...
Message-ID: <480F42DF.50604@gmail.com>

Hi,

I have an array of 3D points:

[x0, y0, z0, v0]
[x1, y1, z1, v1]
...
[xn, yn, zn, vn]

This array has duplicate elements (points), by construction.

How could I remove these duplicate elements?

numpy.unique seems not to fit my needs...

TIA.

Cheers,

-- 
Fred

From Dharhas.Pothina at twdb.state.tx.us  Wed Apr 23 10:31:06 2008
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Wed, 23 Apr 2008 09:31:06 -0500
Subject: [SciPy-user] loadtxt error
Message-ID: <480F01DA0200009B00011C01@GWWEB.twdb.state.tx.us>

Hi,

I'm trying to transition from matlab to scipy/numpy and am having
problems with things that should be fairly straightforward.

I have an ascii text file with the format:

year month sumprcp(in)
1976 01 2.07
1976 02 0.76
1976 03 2.53
1976 04 2.25
...
When trying to read it with the function loadtxt() I am getting an
error. In IPython I type:

from numpy import *
year, month, data = loadtxt('portarthurcity_monthlyprecip.txt', skiprows=1, unpack=True)

I get the error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/dharhas/projects/sabine-neches/keithlake/analysis/hydrology/<ipython console> in <module>()

/usr/lib64/python2.5/site-packages/numpy/core/numeric.py in loadtxt(fname, dtype, comments, delimiter, converters, skiprows, usecols, unpack)
    723         X.append(row)
    724
--> 725     X = array(X, dtype)
    726     r,c = X.shape
    727     if r==1 or c==1:

ValueError: setting an array element with a sequence.

Am I doing something basic wrong? Variations of the function call with
a file without header lines etc give the same error.

thanks.

- dharhas

From nwagner at iam.uni-stuttgart.de  Wed Apr 23 10:44:16 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 23 Apr 2008 16:44:16 +0200
Subject: [SciPy-user] loadtxt error
In-Reply-To: <480F01DA0200009B00011C01@GWWEB.twdb.state.tx.us>
References: <480F01DA0200009B00011C01@GWWEB.twdb.state.tx.us>
Message-ID: 

On Wed, 23 Apr 2008 09:31:06 -0500
 "Dharhas Pothina" wrote:
> Hi,
>
> I'm trying to transition from matlab to scipy/numpy and am having
> problems with things that should be fairly straightforward.
>
> I have an ascii text file with the format:
>
> year month sumprcp(in)
> 1976 01 2.07
> 1976 02 0.76
> 1976 03 2.53
> 1976 04 2.25
> ...
>
> When trying to read it with the function loadtxt() I am getting an
> error. In IPython I type:
>
> from numpy import *
> year, month, data = loadtxt('portarthurcity_monthlyprecip.txt', skiprows=1, unpack=True)
>
> I get the error:
>
> ---------------------------------------------------------------------------
> ValueError                                Traceback (most recent call last)
>
> /home/dharhas/projects/sabine-neches/keithlake/analysis/hydrology/<ipython console> in <module>()
>
> /usr/lib64/python2.5/site-packages/numpy/core/numeric.py in loadtxt(fname, dtype, comments, delimiter, converters, skiprows, usecols, unpack)
>     723         X.append(row)
>     724
> --> 725     X = array(X, dtype)
>     726     r,c = X.shape
>     727     if r==1 or c==1:
>
> ValueError: setting an array element with a sequence.
>
> Am I doing something basic wrong? Variations of the function call with
> a file without header lines etc give the same error.
>
> thanks.
>
> - dharhas
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

No problem here.

In [1]: run test_load.py

In [2]: year
Out[2]: array([ 1976.,  1976.,  1976.,  1976.])

In [3]: month
Out[3]: array([ 1.,  2.,  3.,  4.])

In [4]: data
Out[4]: array([ 2.07,  0.76,  2.53,  2.25])

In [5]: import numpy

In [6]: numpy.__version__
Out[6]: '1.1.0.dev5070'

Nils
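If some rows turn out to have fewer columns than the header promises,
loadtxt builds a ragged list and fails with exactly this ValueError.
One workaround is to pad short rows before parsing (a sketch; the
helper name load_padded is made up for illustration):

    import numpy as np
    from StringIO import StringIO

    def load_padded(fname, ncols=3, fill=0.0, skiprows=1):
        # Pad rows that have fewer than ncols fields, then let
        # numpy.loadtxt parse the cleaned-up text.
        lines = open(fname).readlines()[skiprows:]
        fixed = []
        for line in lines:
            fields = line.split()
            fields += [str(fill)] * (ncols - len(fields))
            fixed.append(' '.join(fields))
        return np.loadtxt(StringIO('\n'.join(fixed)), unpack=True)

    year, month, data = load_padded('portarthurcity_monthlyprecip.txt')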
>> >> When trying to read it with the function loadtxt() I am >>getting an error : >> in iPython I type : >> >> from numpy import * >> year,month,data = >>loadtxt('portarthurcity_monthlyprecip.txt',skiprows=1,unpack=True) >> >> I get the error : No problem here either, with numpy 1.0.4. Is there garbage later on in the file? -- Pauli Virtanen From Dharhas.Pothina at twdb.state.tx.us Wed Apr 23 10:54:54 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Wed, 23 Apr 2008 09:54:54 -0500 Subject: [SciPy-user] loadtxt error In-Reply-To: References: <480F01DA0200009B00011C01@GWWEB.twdb.state.tx.us> Message-ID: <480F076D.63BA.009B.0@twdb.state.tx.us> Ok it looks like I had a couple of rows with values missing in the third column. I didn't notice before since matlab and octave assumes zeros in those cases. Is there a function in numpy/scipy that can read files like that or should I fix the file before reading it in? thanks for your help. - dharhas >>> Pauli Virtanen 4/23/2008 9:50 AM >>> Wed, 23 Apr 2008 16:44:16 +0200, Nils Wagner wrote: > On Wed, 23 Apr 2008 09:31:06 -0500 > "Dharhas Pothina" > wrote: [clip] >> I have an ascii text file with the format : >> >> year month sumprcp(in) >> 1976 01 2.07 >> 1976 02 0.76 >> 1976 03 2.53 >> 1976 04 2.25 >> ... >> >> When trying to read it with the function loadtxt() I am >>getting an error : >> in iPython I type : >> >> from numpy import * >> year,month,data = >>loadtxt('portarthurcity_monthlyprecip.txt',skiprows=1,unpack=True) >> >> I get the error : No problem here either, with numpy 1.0.4. Is there garbage later on in the file? -- Pauli Virtanen _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From amcmorl at gmail.com Wed Apr 23 11:09:07 2008 From: amcmorl at gmail.com (Angus McMorland) Date: Wed, 23 Apr 2008 11:09:07 -0400 Subject: [SciPy-user] remove duplicate points... In-Reply-To: <480F42DF.50604@gmail.com> References: <480F42DF.50604@gmail.com> Message-ID: Hi Fred et al, On 23/04/2008, fred wrote: > Hi, > > I have array of 3D points: > > [x0, y0, z0, v0] > [x1, y1, z1, v1] > ... > [xn, yn, zn, vn] > This array have duplicate elements (points), by construction. > > How could I remove these duplicate elements ? I had to do this prior to Delaunay triangulation at one point, and wrote a quick routine to remove the duplicates, which works on the assumption that unique pts will have a unique product (x * y * z). I doubt this is very efficient, but usually when I post these sorts of answers it stimulates the gurus to post better solutions, which is always good. import numpy as np def remove_dups(ptsar, atol=1e-8): '''removes duplicate entries from pts array''' prods = ptsar.prod(axis=1) sorted = ptsar[prods.argsort()] prodsorted = prods[prods.argsort()] diffs = np.greater(np.absolute(np.diff(prodsorted)), atol) diffs = np.hstack(((True,), diffs)) return sorted[diffs] I hope that helps, A. -- AJC McMorland, PhD candidate Physiology, University of Auckland (Nearly) post-doctoral research fellow Neurobiology, University of Pittsburgh From stefan at sun.ac.za Wed Apr 23 11:32:51 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 23 Apr 2008 17:32:51 +0200 Subject: [SciPy-user] remove duplicate points... 
In-Reply-To: 
References: <480F42DF.50604@gmail.com>
Message-ID: <9457e7c80804230832n4c52bfcejc3ca20f69c298ac3@mail.gmail.com>

2008/4/23 Angus McMorland :
> > I have an array of 3D points:
> >
> > [x0, y0, z0, v0]
> > [x1, y1, z1, v1]
> > ...
> > [xn, yn, zn, vn]
> >
> > This array has duplicate elements (points), by construction.
> >
> > How could I remove these duplicate elements?
>
> I had to do this prior to Delaunay triangulation at one point, and
> wrote a quick routine to remove the duplicates, which works on the
> assumption that unique pts will have a unique product (x * y * z).

That doesn't sound like a valid assumption? Something like this
should work, although it will shuffle the data:

import numpy as np

x = np.array([[1,2,3],[4,5,6],[1,3,4],
              [1,2,3],[4,5,6],[7,8,9],
              [1,2,3],[5,6,7],[1,3,4]])

N = len(x)
for i in xrange(N):
    if i < len(x):
        x = np.vstack((x[i], x[~np.all(x[i] == x, axis=1)]))
    else:
        break

print x

Regards
Stéfan

From pav at iki.fi  Wed Apr 23 11:47:27 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 23 Apr 2008 15:47:27 +0000 (UTC)
Subject: [SciPy-user] remove duplicate points...
References: <480F42DF.50604@gmail.com>
Message-ID: 

Wed, 23 Apr 2008 11:09:07 -0400, Angus McMorland wrote:
[clip]
> On 23/04/2008, fred wrote:
> > I have an array of 3D points:
> >
> > [x0, y0, z0, v0]
> > [x1, y1, z1, v1]
> > ...
> > [xn, yn, zn, vn]
> >
> > This array has duplicate elements (points), by construction.
> > How could I remove these duplicate elements?
>
> I had to do this prior to Delaunay triangulation at one point, and wrote
[clip]

Something like this works, although it's probably not optimal:

import numpy as np

z = np.array([[1,2,3,4],
              [4,5,1,3],
              [1,2,5,3],
              [4,5,1,3],
              [7,7,4,1],
              [4,5,1,3],
              [4,5,1,3],
              [6,4,5,5]])

# Find the indices of the unique entries
j_sorted = np.lexsort(z.T)
v = None
for q in z.T:
    q = q[j_sorted]
    w = (q[1:] != q[:-1])
    if v is None:
        v = w
    else:
        v |= w
unique_mask = np.hstack([True, v])
j_unique = j_sorted[unique_mask]
j_unique.sort()

# Done:
z_unique = z[j_unique]
print z_unique

From amcmorl at gmail.com  Wed Apr 23 11:50:29 2008
From: amcmorl at gmail.com (Angus McMorland)
Date: Wed, 23 Apr 2008 11:50:29 -0400
Subject: [SciPy-user] remove duplicate points...
In-Reply-To: <9457e7c80804230832n4c52bfcejc3ca20f69c298ac3@mail.gmail.com>
References: <480F42DF.50604@gmail.com> <9457e7c80804230832n4c52bfcejc3ca20f69c298ac3@mail.gmail.com>
Message-ID: 

On 23/04/2008, Stéfan van der Walt wrote:
> 2008/4/23 Angus McMorland :
> >
> > I had to do this prior to Delaunay triangulation at one point, and
> > wrote a quick routine to remove the duplicates, which works on the
> > assumption that unique pts will have a unique product (x * y * z).
>
> That doesn't sound like a valid assumption?

At least in my scenario, where points exist in a large floating point
space, the probability of two unique points having the same product
is, I think, very small. It doesn't seem to have arisen in any of my
testing, but I'd be interested to hear if I've made an error of
judgement here.

As a quick test:

sz = 1e7
sz - np.unique(np.random.normal(size=(sz,3)).prod(axis=1)).size

returns 0 - i.e. no common products unless the points are deliberately
made the same. Am I mis-thinking this?

Thanks,

Angus.
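A quick check of the failure mode raised in the next message: points
that contain a zero coordinate all share the product 0, so the product
heuristic merges them even though they are distinct (a small made-up
example):

    import numpy as np

    a = np.array([[0., 1., 2.],
                  [0., 3., 4.]])   # two distinct points
    print a.prod(axis=1)           # [ 0.  0.] -- identical products,
                                   # so remove_dups would drop one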
-- 
AJC McMorland, PhD candidate
Physiology, University of Auckland
(Nearly) post-doctoral research fellow
Neurobiology, University of Pittsburgh

From pgmdevlist at gmail.com  Wed Apr 23 12:04:43 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 23 Apr 2008 12:04:43 -0400
Subject: [SciPy-user] remove duplicate points...
In-Reply-To: 
References: <480F42DF.50604@gmail.com>
Message-ID: <200804231204.43851.pgmdevlist@gmail.com>

Er, that may sound silly, but why not use numpy.unique?

1. Transform your nx4 array into an n record array
2. use numpy.unique on the record array
3. revert to a nx4 array.

For example:

z = np.array([[1,1],[1,2],[1,2],[1,3]])
zr = z.view([('a',int),('b',int)])
zs = numpy.unique(zr).view((int,2))

Am I missing something?

From wnbell at gmail.com  Wed Apr 23 12:10:27 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Wed, 23 Apr 2008 11:10:27 -0500
Subject: [SciPy-user] remove duplicate points...
In-Reply-To: 
References: <480F42DF.50604@gmail.com> <9457e7c80804230832n4c52bfcejc3ca20f69c298ac3@mail.gmail.com>
Message-ID: 

On Wed, Apr 23, 2008 at 10:50 AM, Angus McMorland wrote:
>
> As a quick test:
>
> sz = 1e7
> sz - np.unique(np.random.normal(size=(sz,3)).prod(axis=1)).size
>
> returns 0 - i.e. no common products unless the points are deliberately
> made the same. Am I mis-thinking this?

I think so. Your example shows that 2^53 is a much bigger number than
10^7. OTOH the likelihood that two different points have a zero
coordinate is higher.

-- 
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From emanuele at relativita.com  Wed Apr 23 12:32:48 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Wed, 23 Apr 2008 18:32:48 +0200
Subject: [SciPy-user] remove duplicate points...
In-Reply-To: <200804231204.43851.pgmdevlist@gmail.com>
References: <480F42DF.50604@gmail.com> <200804231204.43851.pgmdevlist@gmail.com>
Message-ID: <480F64B0.1090107@relativita.com>

Pierre GM wrote:
> Er, that may sound silly, but why not use numpy.unique?
> 1. Transform your nx4 array into an n record array
> 2. use numpy.unique on the record array
> 3. revert to a nx4 array.
>
> For example:
> z = np.array([[1,1],[1,2],[1,2],[1,3]])
> zr = z.view([('a',int),('b',int)])
> zs = numpy.unique(zr).view((int,2))
>
> Am I missing something?
>

Very interesting solution. Could you explain line 2:

""" zr = z.view([('a',int),('b',int)]) """

in more detail? I understand you pack the 2 values into a single
object, but I don't understand the syntax.

Thanks,

Emanuele

From mnandris at btinternet.com  Wed Apr 23 12:33:36 2008
From: mnandris at btinternet.com (Michael)
Date: Wed, 23 Apr 2008 17:33:36 +0100
Subject: [SciPy-user] remove duplicate points...
In-Reply-To: 
References: <480F42DF.50604@gmail.com>
Message-ID: <1208968416.6361.30.camel@mik>

Hello list,

I don't know a 'numponic' way of doing this, but have found it useful
in the past to tuplise the data, then eliminate duplicates by keying
them into a dictionary. Maybe this routine could be adapted using
built-in numpy routines. Most plotting packages accept tuples no
problem (pylab does).

d = {}
with_duplicates = [ [xn, yn, zn, vn], ... ]

for k, v in enumerate( with_duplicates ):
    d[tuple(v)] = k   # lists aren't hashable; their tuples are

without_duplicates = d.keys()

Possibly the only known use of tuples that is actually useful!
Incidentally, what _are_ python tuples for (and why aren't they called
'alleles'? :) - they're immutable, geddit?)

I approve of the existence of tuples, on philosophical grounds alone;
I just don't know what they're for; I never encountered a burning need
for them, apart from the above.
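A runnable version of that idea (a sketch; it assumes exact equality
is the right test and keeps the first occurrence of each point):

    import numpy as np

    pts = np.array([[1., 2., 3., 4.],
                    [4., 5., 1., 3.],
                    [1., 2., 3., 4.]])   # last row duplicates the first

    seen = {}
    for k, v in enumerate(pts):
        seen.setdefault(tuple(v), k)     # key on the tuplised point

    without_duplicates = pts[sorted(seen.values())]
    print without_duplicates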
I approve of the existence of tuples, on philosophical grounds alone; i just don't know what they're for; i never encountered a burning need for them, apart from the above. On Wed, 2008-04-23 at 11:09 -0400, Angus McMorland wrote: > Hi Fred et al, > > On 23/04/2008, fred wrote: > > Hi, > > > > I have array of 3D points: > > > > [x0, y0, z0, v0] > > [x1, y1, z1, v1] > > ... > > [xn, yn, zn, vn] > > > This array have duplicate elements (points), by construction. > > > > How could I remove these duplicate elements ? > > I had to do this prior to Delaunay triangulation at one point, and > wrote a quick routine to remove the duplicates, which works on the > assumption that unique pts will have a unique product (x * y * z). I > doubt this is very efficient, but usually when I post these sorts of > answers it stimulates the gurus to post better solutions, which is > always good. > > import numpy as np > > def remove_dups(ptsar, atol=1e-8): > '''removes duplicate entries from pts array''' > prods = ptsar.prod(axis=1) > sorted = ptsar[prods.argsort()] > prodsorted = prods[prods.argsort()] > diffs = np.greater(np.absolute(np.diff(prodsorted)), atol) > diffs = np.hstack(((True,), diffs)) > return sorted[diffs] > > I hope that helps, > > A. From stefan at sun.ac.za Wed Apr 23 12:43:35 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 23 Apr 2008 18:43:35 +0200 Subject: [SciPy-user] remove duplicate points... In-Reply-To: <200804231204.43851.pgmdevlist@gmail.com> References: <480F42DF.50604@gmail.com> <200804231204.43851.pgmdevlist@gmail.com> Message-ID: <9457e7c80804230943k502fcbc7lc62bfe2fcd36babf@mail.gmail.com> 2008/4/23 Pierre GM : > 1. Transform your nx4 array into a n record array > 2. use numpy.unique on the record array > 3. revert to a nx4 array. > > For example: > z=np.array([[1,1],[1,2],[1,2],[1,3]]) > zr=z.view([('a',int),('b',int)]) > zs = numpy.unique(zr).view((int,2)) > > Am I missing something ? Beautiful! I didn't even think of that; should be much faster, too. Thanks St?fan From pgmdevlist at gmail.com Wed Apr 23 12:44:20 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 23 Apr 2008 12:44:20 -0400 Subject: [SciPy-user] remove duplicate points... In-Reply-To: <480F64B0.1090107@relativita.com> References: <480F42DF.50604@gmail.com> <200804231204.43851.pgmdevlist@gmail.com> <480F64B0.1090107@relativita.com> Message-ID: <200804231244.20541.pgmdevlist@gmail.com> On Wednesday 23 April 2008 12:32:48 Emanuele Olivetti wrote: > Pierre GM wrote: > Very interesting solution. Could you explain line 2: > > """ zr=z.view([('a',int),('b',int)]) """ > > in more detail? I understand you shrink 2 values in 1 single object > but I don't understand the syntax. With that line, you get a view of the ndarray z as a record array, each record consisting of two integer fields named a and b. http://www.scipy.org/RecordArrays The names don't matter, actually, they're just placeholders in our case. As you have a nx4 array, each record (viz, set of coordinates) should have 4 fields (x,y,z,value, for example), so use somethng like [('a',int),('b',int),('c',int),('d',int)] From stefan at sun.ac.za Wed Apr 23 13:02:59 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 23 Apr 2008 19:02:59 +0200 Subject: [SciPy-user] remove duplicate points... 
In-Reply-To: <200804231244.20541.pgmdevlist@gmail.com>
References: <480F42DF.50604@gmail.com> <200804231204.43851.pgmdevlist@gmail.com> <480F64B0.1090107@relativita.com> <200804231244.20541.pgmdevlist@gmail.com>
Message-ID: <9457e7c80804231002h5347693cl7de048abd1d1f28@mail.gmail.com>

2008/4/23 Pierre GM :
> The names don't matter, actually, they're just placeholders in our case.

You can leave them out if you want:

In [5]: np.dtype([('',int),('',int)])
Out[5]: dtype([('f0', '<i4'), ('f1', '<i4')])

Hello,

I was running SVN builds of SciPy pretty well until just recently (I
got a few failures and some errors, but apparently this is normal
right now, or so I've been told). But at some recent revision, my
number of errors jumped to a whopping 44, and failures to 5 from
scipy.test(). Here is my output:

======================================================================
ERROR: Tests pdist(X, 'chebychev') on the Iris data set.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 538, in test_pdist_chebychev_iris
    Y_test1 = pdist(X, 'chebychev')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'test_chebychev') [the non-C implementation] on the Iris data set.
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 527, in test_pdist_chebychev_random_nonC Y_test2 = pdist(X, 'test_chebychev') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'cityblock') on the Iris data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 304, in test_pdist_cityblock_iris Y_test1 = pdist(X, 'cityblock') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_cityblock') [the non-C implementation] on the Iris data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 314, in test_pdist_cityblock_iris_nonC Y_test2 = pdist(X, 'test_cityblock') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'cityblock') on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 284, in test_pdist_cityblock_random Y_test1 = pdist(X, 'cityblock') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_cityblock') [the non-C implementation] on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 294, in test_pdist_cityblock_random_nonC Y_test2 = pdist(X, 'test_cityblock') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'correlation') on the Iris data set. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 345, in test_pdist_correlation_iris Y_test1 = pdist(X, 'correlation') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_correlation') [the non-C implementation] on the Iris data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 355, in test_pdist_correlation_iris_nonC Y_test2 = pdist(X, 'test_correlation') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'correlation') on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 325, in test_pdist_correlation_random Y_test1 = pdist(X, 'correlation') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_correlation') [the non-C implementation] on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 335, in test_pdist_correlation_random_nonC Y_test2 = pdist(X, 'test_correlation') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'cosine') on the Iris data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 263, in test_pdist_cosine_iris Y_test1 = pdist(X, 'cosine') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_cosine') [the non-C implementation] on the Iris data set. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 273, in test_pdist_cosine_iris_nonC Y_test2 = pdist(X, 'test_cosine') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'cosine') on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 244, in test_pdist_cosine_random Y_test1 = pdist(X, 'cosine') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_cosine') [the non-C implementation] on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 253, in test_pdist_cosine_random_nonC Y_test2 = pdist(X, 'test_cosine') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'hamming') on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 451, in test_pdist_dhamming_random Y_test1 = pdist(X, 'hamming') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_hamming') [the non-C implementation] on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 461, in test_pdist_dhamming_random_nonC Y_test2 = pdist(X, 'test_hamming') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'jaccard') on random data. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 495, in test_pdist_djaccard_random Y_test1 = pdist(X, 'jaccard') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_jaccard') [the non-C implementation] on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 505, in test_pdist_djaccard_random_nonC Y_test2 = pdist(X, 'test_jaccard') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'euclidean') on the Iris data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 185, in test_pdist_euclidean_iris Y_test1 = pdist(X, 'euclidean') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_euclidean') [the non-C implementation] on the Iris data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 194, in test_pdist_euclidean_iris_nonC Y_test2 = pdist(X, 'test_euclidean') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'euclidean') on random data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 166, in test_pdist_euclidean_random Y_test1 = pdist(X, 'euclidean') File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist if X.dtype == np.float32 or X.dtype == np.float96: AttributeError: 'module' object has no attribute 'float96' ====================================================================== ERROR: Tests pdist(X, 'test_euclidean') [the non-C implementation] on random data. 
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 175, in test_pdist_euclidean_random_nonC
    Y_test2 = pdist(X, 'test_euclidean')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'hamming') on random data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 429, in test_pdist_hamming_random
    Y_test1 = pdist(X, 'hamming')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'test_hamming') [the non-C implementation] on random data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 439, in test_pdist_hamming_random_nonC
    Y_test2 = pdist(X, 'test_hamming')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Testing whether passing a float96 variance matrix generates an exception.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 149, in test_pdist_ivar_raises_type_error_float96
    VI = numpy.zeros((10, 10), dtype=numpy.float96)
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'jaccard') on random data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 473, in test_pdist_jaccard_random
    Y_test1 = pdist(X, 'jaccard')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'test_jaccard') [the non-C implementation] on random data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 483, in test_pdist_jaccard_random_nonC
    Y_test2 = pdist(X, 'test_jaccard')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'minkowski') on iris data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 408, in test_pdist_minkowski_iris
    Y_test1 = pdist(X, 'minkowski', 5.8)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'test_minkowski') [the non-C implementation] on iris data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 418, in test_pdist_minkowski_iris_nonC
    Y_test2 = pdist(X, 'test_minkowski', 5.8)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'minkowski') on random data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 368, in test_pdist_minkowski_random
    Y_test1 = pdist(X, 'minkowski', 3.2)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'test_minkowski') [the non-C implementation] on random data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 378, in test_pdist_minkowski_random_nonC
    Y_test2 = pdist(X, 'test_minkowski', 3.2)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Testing whether passing a float96 observation array generates an exception.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 104, in test_pdist_raises_type_error_float96
    X = numpy.zeros((10, 10), dtype=numpy.float96)
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'seuclidean') on the Iris data set.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 224, in test_pdist_seuclidean_iris
    Y_test1 = pdist(X, 'seuclidean')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'test_seuclidean') [the non-C implementation] on the Iris data set.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 233, in test_pdist_seuclidean_iris_nonC
    Y_test2 = pdist(X, 'test_sqeuclidean')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'seuclidean') on random data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 205, in test_pdist_seuclidean_random
    Y_test1 = pdist(X, 'seuclidean')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Tests pdist(X, 'test_sqeuclidean') [the non-C implementation] on random data.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 214, in test_pdist_seuclidean_random_nonC
    Y_test2 = pdist(X, 'test_sqeuclidean')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/hierarchy.py", line 1307, in pdist
    if X.dtype == np.float32 or X.dtype == np.float96:
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Testing whether passing a float96 variance matrix generates an exception.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 126, in test_pdist_var_raises_type_error_float96
    V = numpy.zeros((10, 10), dtype=numpy.float96)
AttributeError: 'module' object has no attribute 'float96'

======================================================================
ERROR: Failure: ImportError (cannot import name _bspline)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/loader.py", line 364, in loadTestsFromName
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/importer.py", line 39, in importFromPath
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/nose-0.10.1-py2.5.egg/nose/importer.py", line 84, in importFromDir
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_bspline.py", line 9, in <module>
    import scipy.stats.models.bspline as B
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/bspline.py", line 23, in <module>
    from scipy.stats.models import _bspline
ImportError: cannot import name _bspline

======================================================================
ERROR: test_factor3 (test_formula.TestFormula)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_formula.py", line 231, in test_factor3
    m = fac.main_effect(reference=1)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/formula.py", line 273, in main_effect
    reference = names.index(reference)
ValueError: list.index(x): x not in list

======================================================================
ERROR: test_factor4 (test_formula.TestFormula)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_formula.py", line 239, in test_factor4
    m = fac.main_effect(reference=2)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/formula.py", line 273, in main_effect
    reference = names.index(reference)
ValueError: list.index(x): x not in list

======================================================================
ERROR: test_huber (test_scale.TestScale)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_scale.py", line 35, in test_huber
    m = scale.huber(X)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/robust/scale.py", line 82, in __call__
    for donothing in self:
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/robust/scale.py", line 102, in next
    scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. - subset, axis=self.axis) * Huber.c**2)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 930, in sum
    return sum(axis, dtype, out)
TypeError: only length-1 arrays can be converted to Python scalars

======================================================================
ERROR: test_huberaxes (test_scale.TestScale)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_scale.py", line 40, in test_huberaxes
    m = scale.huber(X, axis=0)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/robust/scale.py", line 82, in __call__
    for donothing in self:
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/robust/scale.py", line 102, in next
    scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. - subset, axis=self.axis) * Huber.c**2)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 930, in sum
    return sum(axis, dtype, out)
TypeError: only length-1 arrays can be converted to Python scalars

======================================================================
FAIL: Testing whether passing a float32 variance matrix generates an exception.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 144, in test_pdist_ivar_raises_type_error_float32
    self.fail("float32 matrices should generate an error in pdist('mahalanobis').")
AssertionError: float32 matrices should generate an error in pdist('mahalanobis').

======================================================================
FAIL: Testing whether passing a float32 variance matrix generates an exception.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 121, in test_pdist_var_raises_type_error_float32
    self.fail("float32 V matrices should generate an error in pdist('seuclidean').")
AssertionError: float32 V matrices should generate an error in pdist('seuclidean').

======================================================================
FAIL: test_texture2 (test_segment.TestSegment)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests/test_segment.py", line 152, in test_texture2
    assert_array_almost_equal(tem0, truth_tem0, decimal=6)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 255, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 240, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 66.6666666667%)
 x: array([  0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
         0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
         1.91816598e-01,   1.02515288e-01,   9.30087343e-02,...
 y: array([ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.13306101,  0.08511007,  0.05084148,  0.07550675,
        0.4334695 ,  0.03715914,  0.00289055,  0.02755581,  0.48142046,
        0.03137803,  0.00671277,  0.51568902,  0.01795249,  0.49102375,
        1.        ], dtype=float32)

======================================================================
FAIL: test_gammaincinv (test_basic.TestGamma)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 1056, in test_gammaincinv
    assert_almost_equal(0.05, x, decimal=10)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 0.050000000000000003
 DESIRED: 0.0

======================================================================
FAIL: test_namespace (test_formula.TestFormula)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/models/tests/test_formula.py", line 119, in test_namespace
    self.assertEqual(xx.namespace, Y.namespace)
AssertionError: {} != {'Y': array([ 0,  2,  4,  6,  8, 10, 12, 14, 16, 18, 20, 22, 24, 26,
       28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60,
       62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94,
       96, 98]), 'X': array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10,
       11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
       28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
       45, 46, 47, 48, 49])}

======================================================================
SKIP: Getting factors of complex matrix
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Getting factors of real matrix
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Getting factors of complex matrix
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Getting factors of real matrix
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Prefactorize (with UMFPACK) matrix for solving with multiple rhs
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Prefactorize matrix for solving with multiple rhs
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Solve with UMFPACK: double precision complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Solve: single precision complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Solve with UMFPACK: double precision, sparse rhs
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Solve with UMFPACK: double precision
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Solve: single precision
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

----------------------------------------------------------------------
Ran 2116 tests in 23.236s

FAILED (failures=5, errors=44)

Is this some kind of issue with a recent revision to the scipy tests, or is it all me?

Josh

From nwagner at iam.uni-stuttgart.de Wed Apr 23 13:54:00 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 23 Apr 2008 19:54:00 +0200
Subject: [SciPy-user] Scipy 0.7.0dev4167 ... 44 errors?
In-Reply-To: <9911419a0804231026q33a466f8pf1936dc38ba853b5@mail.gmail.com>
References: <9911419a0804231026q33a466f8pf1936dc38ba853b5@mail.gmail.com>
Message-ID:

On Wed, 23 Apr 2008 10:26:26 -0700 "Joshua Lippai" wrote:
> Hello,
>
> I was running SVN builds of SciPy pretty well until just recently (I
> got a few failures and some errors, but apparently this is normal right
> now, or so I've been told). But at some recent revision, my number of
> errors jumped to a whopping 44, and failures to 5, from scipy.test().
> Here is my output:
> Ran 2116 tests in 23.236s
>
> FAILED (failures=5, errors=44)
>
> Is this some kind of issue with a recent revision to the scipy tests,
> or is it all me?
>
> Josh

More or less the same output here.

----------------------------------------------------------------------
Ran 2117 tests in 27.520s

FAILED (failures=6, errors=45)

>>> scipy.__version__
'0.7.0.dev4167'

Anyway, I have filed tickets for these errors/failures.

Nils

From david.huard at gmail.com Wed Apr 23 15:35:35 2008
From: david.huard at gmail.com (David Huard)
Date: Wed, 23 Apr 2008 15:35:35 -0400
Subject: [SciPy-user] loadtxt error
In-Reply-To: <480F076D.63BA.009B.0@twdb.state.tx.us>
References: <480F01DA0200009B00011C01@GWWEB.twdb.state.tx.us> <480F076D.63BA.009B.0@twdb.state.tx.us>
Message-ID: <91cf711d0804231235j5c612beajfda410ceac7fe1ea@mail.gmail.com>

You can use loadtxt's converters. From numpy SVN, here is the docstring:

converters : {}
    A dictionary mapping column number to a function that will convert that
    column to a float. Eg, if column 0 is a date string:
    converters={0:datestr2num}. Converters can also be used to provide a
    default value for missing data: converters={3: lambda s: float(s or 0)}.

David

2008/4/23, Dharhas Pothina:
>
> Ok, it looks like I had a couple of rows with values missing in the third
> column. I didn't notice before, since matlab and octave assume zeros in
> those cases. Is there a function in numpy/scipy that can read files like
> that, or should I fix the file before reading it in?
>
> thanks for your help.
>
> - dharhas
>
> >>> Pauli Virtanen 4/23/2008 9:50 AM >>>
>
> Wed, 23 Apr 2008 16:44:16 +0200, Nils Wagner wrote:
>
> > On Wed, 23 Apr 2008 09:31:06 -0500
> > "Dharhas Pothina"
> > wrote:
> [clip]
> >> I have an ascii text file with the format :
> >>
> >> year month sumprcp(in)
> >> 1976 01 2.07
> >> 1976 02 0.76
> >> 1976 03 2.53
> >> 1976 04 2.25
> >> ...
> >>
> >> When trying to read it with the function loadtxt() I am
> >> getting an error :
> >> in iPython I type :
> >>
> >> from numpy import *
> >> year,month,data =
> >> loadtxt('portarthurcity_monthlyprecip.txt',skiprows=1,unpack=True)
> >>
> >> I get the error :
>
> No problem here either, with numpy 1.0.4. Is there garbage later on in
> the file?
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
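A minimal sketch of the converter approach David describes, reusing the filename and column layout from the quoted session. One caveat: a converter only helps when the missing value is present as an empty field (e.g. in a tab-delimited file); if the field is absent entirely, the row has too few columns and loadtxt will still refuse it:

from numpy import loadtxt

# Column indices are 0-based, so the sumprcp(in) column is 2.
# An empty string is falsy, so `s or 0` substitutes 0 for missing data.
year, month, data = loadtxt('portarthurcity_monthlyprecip.txt',
                            skiprows=1, unpack=True,
                            converters={2: lambda s: float(s or 0)})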
From mhearne at usgs.gov Wed Apr 23 17:47:12 2008
From: mhearne at usgs.gov (Michael Hearne)
Date: Wed, 23 Apr 2008 15:47:12 -0600
Subject: [SciPy-user] failure to build scipy on RHEL5
Message-ID: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov>

Just tried to compile SciPy on a RedHat Enterprise 5 system, following the instructions on http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 and got the errors below:

error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblasmodule.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/fortranobject.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblaswrap.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblas-f2pywrappers.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.5 -lfblas -lg2c -o build/lib.linux-x86_64-2.5/scipy/lib/blas/fblas.so" failed with exit status 1

NOTE: I did not build numpy myself - I used easy_install for that. I'm trying to determine the path of least resistance for building a fully functional numpy/scipy suite, and since numpy seems to install just fine from PyPI, I went with that.

Any tips?

--Mike

------------------------------------------------------
Michael Hearne
mhearne at usgs.gov
(303) 273-8620
USGS National Earthquake Information Center
1711 Illinois St. Golden CO 80401
Senior Software Engineer
Synergetics, Inc.
------------------------------------------------------

From robert.kern at gmail.com Wed Apr 23 17:49:57 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 23 Apr 2008 16:49:57 -0500
Subject: [SciPy-user] failure to build scipy on RHEL5
In-Reply-To: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov>
References: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov>
Message-ID: <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com>

On Wed, Apr 23, 2008 at 4:47 PM, Michael Hearne wrote:
> Just tried to compile SciPy on a RedHat Enterprise 5 system, following
> the instructions on http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97
> and got the errors below:
>
> error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblasmodule.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/fortranobject.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblaswrap.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblas-f2pywrappers.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.5 -lfblas -lg2c -o build/lib.linux-x86_64-2.5/scipy/lib/blas/fblas.so" failed with exit status 1

The actual error message from the linker is just above that. Please post it.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From mhearne at usgs.gov Wed Apr 23 17:53:56 2008
From: mhearne at usgs.gov (Michael Hearne)
Date: Wed, 23 Apr 2008 15:53:56 -0600
Subject: [SciPy-user] failure to build scipy on RHEL5
In-Reply-To: <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com>
References: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov> <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com>
Message-ID: <48FB4760-6D8E-47E2-AE9B-E514C2172298@usgs.gov>

Wasn't sure how far back to go, so here's several more lines' worth:

compile options: '-DNO_ATLAS_INFO=1 -Ibuild/src.linux-x86_64-2.5 -I/usr/local/lib/python2.5/site-packages/numpy-1.0.4-py2.5-linux-x86_64.egg/numpy/core/include -I/usr/local/include/python2.5 -c'
gcc: build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblasmodule.c
compiling Fortran sources
Fortran f77 compiler: /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=nocona -mmmx -msse2 -msse -fomit-frame-pointer
creating build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib
creating build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas
compile options: '-DNO_ATLAS_INFO=1 -Ibuild/src.linux-x86_64-2.5 -I/usr/local/lib/python2.5/site-packages/numpy-1.0.4-py2.5-linux-x86_64.egg/numpy/core/include -I/usr/local/include/python2.5 -c'
g77:f77: build/src.linux-x86_64-2.5/scipy/lib/blas/fblaswrap.f
g77:f77: build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblas-f2pywrappers.f
/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblasmodule.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/fortranobject.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblaswrap.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblas-f2pywrappers.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.5 -lfblas -lg2c -o build/lib.linux-x86_64-2.5/scipy/lib/blas/fblas.so
/usr/bin/ld: /usr/local/lib/libfblas.a(cgemm.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libfblas.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
/usr/bin/ld: /usr/local/lib/libfblas.a(cgemm.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libfblas.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblasmodule.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/fortranobject.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblaswrap.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblas-f2pywrappers.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.5 -lfblas -lg2c -o build/lib.linux-x86_64-2.5/scipy/lib/blas/fblas.so" failed with exit status 1

On Apr 23, 2008, at 3:49 PM, Robert Kern wrote:
> On Wed, Apr 23, 2008 at 4:47 PM, Michael Hearne wrote:
>> Just tried to compile SciPy on a RedHat Enterprise 5 system, following
>> the instructions on http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97
>> and got the errors below:
>>
>> error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblasmodule.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/fortranobject.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblaswrap.o build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/lib/blas/fblas-f2pywrappers.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.5 -lfblas -lg2c -o build/lib.linux-x86_64-2.5/scipy/lib/blas/fblas.so" failed with exit status 1
>
> The actual error message from the linker is just above that. Please
> post it.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

------------------------------------------------------
Michael Hearne
mhearne at usgs.gov
(303) 273-8620
USGS National Earthquake Information Center
1711 Illinois St. Golden CO 80401
Senior Software Engineer
Synergetics, Inc.
------------------------------------------------------

From robert.kern at gmail.com Wed Apr 23 17:59:28 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 23 Apr 2008 16:59:28 -0500
Subject: [SciPy-user] failure to build scipy on RHEL5
In-Reply-To: <48FB4760-6D8E-47E2-AE9B-E514C2172298@usgs.gov>
References: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov> <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com> <48FB4760-6D8E-47E2-AE9B-E514C2172298@usgs.gov>
Message-ID: <3d375d730804231459r7842973fje9f2c650bd6180a@mail.gmail.com>

On Wed, Apr 23, 2008 at 4:53 PM, Michael Hearne wrote:
> Wasn't sure how far back to go, so here's several more lines' worth:

Here's the important one:

/usr/bin/ld: /usr/local/lib/libfblas.a(cgemm.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libfblas.a: could not read symbols: Bad value

You need to compile BLAS to be relocatable using the -fPIC flag to the compilation flags. If you are following the directions on that wiki strictly, replace this line:

g77 -fno-second-underscore -O2 -c *.f

with this line:

g77 -fPIC -fno-second-underscore -O2 -c *.f

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From pebarrett at gmail.com Wed Apr 23 18:17:18 2008
From: pebarrett at gmail.com (Paul Barrett)
Date: Wed, 23 Apr 2008 18:17:18 -0400
Subject: [SciPy-user] failure to build scipy on RHEL5
In-Reply-To: <3d375d730804231459r7842973fje9f2c650bd6180a@mail.gmail.com>
References: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov> <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com> <48FB4760-6D8E-47E2-AE9B-E514C2172298@usgs.gov> <3d375d730804231459r7842973fje9f2c650bd6180a@mail.gmail.com>
Message-ID: <40e64fa20804231517g4e13f4c8w2d972de1a1555799@mail.gmail.com>

Mike,

I have numpy, scipy, and matplotlib RPMs for EL5 x86_64, if you want them. I also think that I have the i386 ones too, but the SRPMs are of course available.

I've been meaning to offer them to the EPEL repository, but I haven't made the time.
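Whichever route is taken, a quick smoke test once scipy is reinstalled is to import the very module whose link step failed above. This is just a sketch: the module path comes from the failing command in this thread and may sit elsewhere in other scipy versions:

# fblas.so is the extension that failed to link in the log above;
# the import itself is the real test of the -fPIC rebuild.
from scipy.lib.blas import fblas
print fblas.dgemm   # one of the wrapped BLAS routines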
-- Paul

On Wed, Apr 23, 2008 at 5:59 PM, Robert Kern wrote:
> On Wed, Apr 23, 2008 at 4:53 PM, Michael Hearne wrote:
> > Wasn't sure how far back to go, so here's several more lines' worth:
>
> Here's the important one:
>
> /usr/bin/ld: /usr/local/lib/libfblas.a(cgemm.o): relocation
> R_X86_64_32 against `a local symbol' can not be used when making a
> shared object; recompile with -fPIC
> /usr/local/lib/libfblas.a: could not read symbols: Bad value
>
> You need to compile BLAS to be relocatable using the -fPIC flag to the
> compilation flags. If you are following the directions on that wiki
> strictly, replace this line:
>
> g77 -fno-second-underscore -O2 -c *.f
>
> with this line:
>
> g77 -fPIC -fno-second-underscore -O2 -c *.f
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From s.collis at bom.gov.au Wed Apr 23 19:43:48 2008
From: s.collis at bom.gov.au (Scott Collis)
Date: Thu, 24 Apr 2008 09:43:48 +1000
Subject: [SciPy-user] Help with Scientific's NetCDF[SEC=UNCLASSIFIED]
Message-ID: <1208994228.7559.10.camel@mendenhall>

Ok, I know this is not strictly SciPy, being from the Scientific branch... but I was hoping someone might know...

I have been able to get the Scientific.IO.NetCDF library working on other machines, but trying to install it as a user (no root access...) on a cluster machine here, I get the following error:

>>> import Scientific.IO.NetCDF
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/flurry/home/scollis/pylibs/lib64/python2.4/site-packages/Scientific/IO/NetCDF.py", line 165, in ?
    from Scientific_netcdf import *
ImportError: libnetcdf.so.4: cannot open shared object file: No such file or directory

Now I have found where libnetcdf.so.4 is located and I have added that to sys.path {sys.path.append("/usr/local/netcdf-3.6.2a/lib/")}. I have also tried adding said directory to PYTHONPATH (export PYTHONPATH="/usr/local/netcdf-3.6.2a/lib/"). It just does not seem to want to find the NetCDF library...

I have not really tried the scipy NetCDF library... I am used to the way the Scientific one works....

I have had a look through some of the source (Scientific_netcdf.c) and cannot obviously see where it specifically points to the libs...

Any help appreciated...

Scott

From robert.kern at gmail.com Wed Apr 23 19:52:07 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 23 Apr 2008 18:52:07 -0500
Subject: [SciPy-user] Help with Scientific's NetCDF[SEC=UNCLASSIFIED]
In-Reply-To: <1208994228.7559.10.camel@mendenhall>
References: <1208994228.7559.10.camel@mendenhall>
Message-ID: <3d375d730804231652i767ef26cw906115d85946d316@mail.gmail.com>

On Wed, Apr 23, 2008 at 6:43 PM, Scott Collis wrote:
> Ok, I know this is not strictly SciPy, being from the Scientific
> branch... but I was hoping someone might know...
>
> I have been able to get the Scientific.IO.NetCDF library working on
> other machines, but trying to install it as a user (no root access...)
> on a cluster machine here, I get the following error:
>
> >>> import Scientific.IO.NetCDF
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "/flurry/home/scollis/pylibs/lib64/python2.4/site-packages/Scientific/IO/NetCDF.py", line 165, in ?
>     from Scientific_netcdf import *
> ImportError: libnetcdf.so.4: cannot open shared object file: No such
> file or directory
>
> Now I have found where libnetcdf.so.4 is located and I have added that
> to sys.path {sys.path.append("/usr/local/netcdf-3.6.2a/lib/")}. I have
> also tried adding said directory to PYTHONPATH (export
> PYTHONPATH="/usr/local/netcdf-3.6.2a/lib/")

Both of those are for loading Python modules, not linking non-python shared libraries. Instead, you need to set the environment variable LD_LIBRARY_PATH. Note that you must set this variable before the python executable even starts; you cannot add it inside your code with os.putenv().

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From s.collis at bom.gov.au Wed Apr 23 19:57:51 2008
From: s.collis at bom.gov.au (Scott Collis)
Date: Thu, 24 Apr 2008 09:57:51 +1000
Subject: [SciPy-user] Help with Scientific's NetCDF[SEC=UNCLASSIFIED]
In-Reply-To: <3d375d730804231652i767ef26cw906115d85946d316@mail.gmail.com>
References: <1208994228.7559.10.camel@mendenhall> <3d375d730804231652i767ef26cw906115d85946d316@mail.gmail.com>
Message-ID: <1208995071.7559.13.camel@mendenhall>

Absolutely wonderful. Worked a treat!

Thanks!

On Wed, 2008-04-23 at 18:52 -0500, Robert Kern wrote:
> On Wed, Apr 23, 2008 at 6:43 PM, Scott Collis wrote:
> > Ok, I know this is not strictly SciPy, being from the Scientific
> > branch... but I was hoping someone might know...
> >
> > I have been able to get the Scientific.IO.NetCDF library working on
> > other machines, but trying to install it as a user (no root access...)
> > on a cluster machine here, I get the following error:
> > >>> import Scientific.IO.NetCDF
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in ?
> >   File "/flurry/home/scollis/pylibs/lib64/python2.4/site-packages/Scientific/IO/NetCDF.py", line 165, in ?
> >     from Scientific_netcdf import *
> > ImportError: libnetcdf.so.4: cannot open shared object file: No such
> > file or directory
> >
> > Now I have found where libnetcdf.so.4 is located and I have added that
> > to sys.path {sys.path.append("/usr/local/netcdf-3.6.2a/lib/")}. I have
> > also tried adding said directory to PYTHONPATH (export
> > PYTHONPATH="/usr/local/netcdf-3.6.2a/lib/")
>
> Both of those are for loading Python modules, not linking non-python
> shared libraries. Instead, you need to set the environment variable
> LD_LIBRARY_PATH. Note that you must set this variable before the
> python executable even starts; you cannot add it inside your code with
> os.putenv().

From warren.weckesser at gmail.com Wed Apr 23 21:37:47 2008
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Wed, 23 Apr 2008 21:37:47 -0400
Subject: [SciPy-user] remove duplicate points...
In-Reply-To: <9457e7c80804231002h5347693cl7de048abd1d1f28@mail.gmail.com>
References: <480F42DF.50604@gmail.com> <200804231204.43851.pgmdevlist@gmail.com> <480F64B0.1090107@relativita.com> <200804231244.20541.pgmdevlist@gmail.com> <9457e7c80804231002h5347693cl7de048abd1d1f28@mail.gmail.com>
Message-ID: <114880320804231837n372b4587ufea2a47911a1e0ff@mail.gmail.com>

For kicks, here's a method that converts the array to a set, then back to an array.
>>> z = numpy.array([[1,1],[1,2],[1,2],[1,3],[3,4],[5,0],[6,0],[5,0]])
>>> z
array([[1, 1],
       [1, 2],
       [1, 2],
       [1, 3],
       [3, 4],
       [5, 0],
       [6, 0],
       [5, 0]])
>>> uz = numpy.array([t for t in set([tuple(p) for p in z])])
>>> uz
array([[1, 2],
       [1, 3],
       [6, 0],
       [5, 0],
       [3, 4],
       [1, 1]])
>>>

On Wed, Apr 23, 2008 at 1:02 PM, Stéfan van der Walt wrote:
> 2008/4/23 Pierre GM:
> > The names don't matter, actually, they're just placeholders in our case.
>
> You can leave them out if you want:
>
> In [5]: np.dtype([('',int),('',int)])
> Out[5]: dtype([('f0', '
>
> Cheers
> Stéfan
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From peridot.faceted at gmail.com Wed Apr 23 21:51:34 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 23 Apr 2008 21:51:34 -0400
Subject: [SciPy-user] remove duplicate points...
In-Reply-To: <1208968416.6361.30.camel@mik>
References: <480F42DF.50604@gmail.com> <1208968416.6361.30.camel@mik>
Message-ID:

On 23/04/2008, Michael wrote:
> I approve of the existence of tuples, on philosophical grounds alone; I
> just don't know what they're for; I never encountered a burning need for
> them, apart from the above.

Well, that's basically it. They're immutable and can therefore be used as keys in dictionaries and sets. Sets have a "frozenset" incarnation for the same reason. It's kind of a clumsy solution to the problem, but all the others are clumsy too (hey, MAPLE has a sequence type that unboxes itself when you put it in another sequence, to your total astonishment). And there are occasional times when you want something to be immutable in other contexts.
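To make the hashability point concrete, here is a small sketch using the array from Warren's example: tuples can sit in a set (lists and arrays cannot), which also makes it easy to drop duplicate rows while preserving first-seen order, something the plain set() round-trip above does not do:

import numpy

z = numpy.array([[1,1],[1,2],[1,2],[1,3],[3,4],[5,0],[6,0],[5,0]])

seen = set()
unique_rows = []
for row in z:
    key = tuple(row)      # immutable, hence hashable, hence usable in a set
    if key not in seen:
        seen.add(key)
        unique_rows.append(row)

uz = numpy.array(unique_rows)
# rows appear in original order: [1,1], [1,2], [1,3], [3,4], [5,0], [6,0]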
Anne

From cohen at slac.stanford.edu Thu Apr 24 08:28:00 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Thu, 24 Apr 2008 14:28:00 +0200
Subject: [SciPy-user] problem with UMFPACK in scipy.test
Message-ID: <48107CD0.6020401@slac.stanford.edu>

hello,
I have compiled and built UMFPACK, following the wiki, and when configuring scipy I read:

umfpack_info:
  amd_info:
    libraries amd not found in /usr/local/lib
    FOUND:
      libraries = ['amd']
      library_dirs = ['/usr/lib']

  FOUND:
    libraries = ['umfpack', 'amd']
    library_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Lib/', '/usr/lib']
    swig_opts = ['-I/data1/sources/MATHSTUFF/UMFPACK/Include']
    define_macros = [('SCIPY_UMFPACK_H', None)]
    include_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Include']

I would infer from that that UMFPACK was correctly found, but after building scipy from svn, I get:

In [4]: scipy.test(verbose=2)

======================================================================
SKIP: Getting factors of complex matrix
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

======================================================================
SKIP: Solve: single precision
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
    raise nose.SkipTest, msg
SkipTest: UMFPACK appears not to be compiled

----------------------------------------------------------------------
Ran 1524 tests in 39.269s

FAILED (failures=5, errors=12)

What does that mean? I noticed that when building UMFPACK I put -fPIC in the UFConfig.mk, but it does not seem to be honored. For instance:

gcc -O3 -fexceptions -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -I../Include -I../Source -I../../AMD/Include -I../../UFconfig -DDINT -c ../Source/umfpack_qsymbolic.c -o umfpack_di_qsymbolic.o

Is that related?
thanks in advance,
Johann

From robince at gmail.com Thu Apr 24 08:47:59 2008
From: robince at gmail.com (Robin)
Date: Thu, 24 Apr 2008 13:47:59 +0100
Subject: [SciPy-user] maximum size of sparse array
Message-ID:

Hi,

I was wondering what is the maximum size of a sparse array? I am assuming indices must be less than 2**32 on a 32-bit system, but is it 2**64 on 64-bit systems?

Obviously the matrix in question would be very sparse...

Thanks,

Robin

From fredmfp at gmail.com Thu Apr 24 09:03:42 2008
From: fredmfp at gmail.com (fred)
Date: Thu, 24 Apr 2008 15:03:42 +0200
Subject: [SciPy-user] [trait] Array issue...
Message-ID: <4810852E.4020409@gmail.com>

Hi,

As the enthought-dev ml seems to be "dead" for the moment, I post here...

What's wrong with the following Complete Minimal Example?
Traceback (most recent call last):
  File "./cme2.py", line 6, in <module>
    class Foo(HasTraits):
  File "./cme2.py", line 8, in Foo
    a = Array((3, 3), Int)
  File "/usr/local/lib/python2.5/site-packages/enthought.traits-2.0.4.dev_r18169-py2.5-linux-i686.egg/enthought/traits/traits.py", line 329, in __call__
    return self.maker_function( *args, **metadata )
  File "/usr/local/lib/python2.5/site-packages/enthought.traits-2.0.4.dev_r18169-py2.5-linux-i686.egg/enthought/traits/traits.py", line 805, in Array
    return _Array( dtype, shape, value, coerce = False, **metadata )
  File "/usr/local/lib/python2.5/site-packages/enthought.traits-2.0.4.dev_r18169-py2.5-linux-i686.egg/enthought/traits/traits.py", line 879, in _Array
    raise TraitError( "could not convert %r to a numpy dtype" % dtype)
TypeError: not all arguments converted during string formatting

If somebody has any clue...

TIA.

Cheers,

-- Fred

-------------- next part --------------
A non-text attachment was scrubbed...
Name: cme2.py
Type: text/x-python
Size: 210 bytes
Desc: not available

From cohen at slac.stanford.edu Thu Apr 24 09:29:45 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Thu, 24 Apr 2008 15:29:45 +0200
Subject: [SciPy-user] cephes : gfortran
Message-ID: <48108B49.2050600@slac.stanford.edu>

hello again,
I also have a problem between cephes and gfortran:

In [4]: from scipy import *
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)

/home/cohen/ in ()

/usr/lib/python2.5/site-packages/scipy/special/__init__.py in ()
      6 #from special_version import special_version as __version__
      7
----> 8 from basic import *
      9 import specfun
     10 import orthogonal

/usr/lib/python2.5/site-packages/scipy/special/basic.py in ()
      6
      7 from numpy import *
----> 8 from _cephes import *
      9 import types
     10 import specfun

ImportError: /usr/lib/python2.5/site-packages/scipy/special/_cephes.so: undefined symbol: _gfortran_st_write_done

I rebuilt with --fcompiler=gfortran to make sure, but I still get this....

thanks,
Johann

From wnbell at gmail.com Thu Apr 24 10:17:48 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 24 Apr 2008 09:17:48 -0500
Subject: [SciPy-user] maximum size of sparse array
In-Reply-To:
References:
Message-ID:

On Thu, Apr 24, 2008 at 7:47 AM, Robin wrote:
> I was wondering what is the maximum size of a sparse array? I am
> assuming indices must be less than 2**32 on a 32-bit system, but is it
> 2**64 on 64-bit systems?
>
> Obviously the matrix in question would be very sparse...

Currently the CSR/CSC and COO matrices are limited to signed 32-bit indices. I don't know whether the other formats (LIL/DOK) have any such limitation, since they are implemented in pure Python.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From mhearne at usgs.gov Thu Apr 24 12:52:37 2008
From: mhearne at usgs.gov (Michael Hearne)
Date: Thu, 24 Apr 2008 10:52:37 -0600
Subject: [SciPy-user] failure to build scipy on RHEL5
In-Reply-To: <40e64fa20804231517g4e13f4c8w2d972de1a1555799@mail.gmail.com>
References: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov> <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com> <48FB4760-6D8E-47E2-AE9B-E514C2172298@usgs.gov> <3d375d730804231459r7842973fje9f2c650bd6180a@mail.gmail.com> <40e64fa20804231517g4e13f4c8w2d972de1a1555799@mail.gmail.com>
Message-ID:

Paul - I apologize if you get two responses - it's not clear if my email client actually delivered another email I just sent on this topic.
In answer to your offer - yes, that would be great if I could get the RPMs, as I am completely unable to compile scipy on my own. I'm using f77, and getting complaints about not having a Fortran 90 compiler, and not having compiled LAPACK with -fPIC (which I certainly _tried_ to do).

Anyway, if you have time to put them on EPEL, that would help me and hopefully other frustrated RHEL users out there. Otherwise, let me know the best way for me to get the files.

Thanks,

Mike

On Apr 23, 2008, at 4:17 PM, Paul Barrett wrote:
> Mike,
>
> I have numpy, scipy, and matplotlib RPMs for EL5 x86_64, if you want
> them. I also think that I have the i386 ones too, but the SRPMs are
> of course available.
>
> I've been meaning to offer them to the EPEL repository, but I haven't
> made the time.
>
> -- Paul
>
> On Wed, Apr 23, 2008 at 5:59 PM, Robert Kern wrote:
>> On Wed, Apr 23, 2008 at 4:53 PM, Michael Hearne wrote:
>>> Wasn't sure how far back to go, so here's several more lines' worth:
>>
>> Here's the important one:
>>
>> /usr/bin/ld: /usr/local/lib/libfblas.a(cgemm.o): relocation
>> R_X86_64_32 against `a local symbol' can not be used when making a
>> shared object; recompile with -fPIC
>> /usr/local/lib/libfblas.a: could not read symbols: Bad value
>>
>> You need to compile BLAS to be relocatable using the -fPIC flag to the
>> compilation flags. If you are following the directions on that wiki
>> strictly, replace this line:
>>
>> g77 -fno-second-underscore -O2 -c *.f
>>
>> with this line:
>>
>> g77 -fPIC -fno-second-underscore -O2 -c *.f
>>
>> --
>> Robert Kern
>>
>> "I have come to believe that the whole world is an enigma, a harmless
>> enigma that is made terrible by our own mad attempt to interpret it as
>> though it had an underlying truth."
>> -- Umberto Eco
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

------------------------------------------------------
Michael Hearne
mhearne at usgs.gov
(303) 273-8620
USGS National Earthquake Information Center
1711 Illinois St. Golden CO 80401
Senior Software Engineer
Synergetics, Inc.
------------------------------------------------------
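Since the numpy underneath came from easy_install here, it is also worth checking what that numpy was built against before compiling scipy on top of it. numpy ships a config dump for exactly this; the output sections vary from build to build:

>>> import numpy
>>> numpy.show_config()   # prints the BLAS/LAPACK/ATLAS sections numpy was built with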
From robert.kern at gmail.com Thu Apr 24 14:56:44 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 24 Apr 2008 13:56:44 -0500
Subject: [SciPy-user] cephes : gfortran
In-Reply-To: <48108B49.2050600@slac.stanford.edu>
References: <48108B49.2050600@slac.stanford.edu>
Message-ID: <3d375d730804241156l467db5das5cb4a629c1ff043c@mail.gmail.com>

On Thu, Apr 24, 2008 at 8:29 AM, Johann Cohen-Tanugi wrote:
> hello again,
> I also have a problem between cephes and gfortran:
>
> In [4]: from scipy import *
> ---------------------------------------------------------------------------
> ImportError                               Traceback (most recent call last)
>
> /home/cohen/ in ()
>
> /usr/lib/python2.5/site-packages/scipy/special/__init__.py in ()
>       6 #from special_version import special_version as __version__
>       7
> ----> 8 from basic import *
>       9 import specfun
>      10 import orthogonal
>
> /usr/lib/python2.5/site-packages/scipy/special/basic.py in ()
>       6
>       7 from numpy import *
> ----> 8 from _cephes import *
>       9 import types
>      10 import specfun
>
> ImportError: /usr/lib/python2.5/site-packages/scipy/special/_cephes.so:
> undefined symbol: _gfortran_st_write_done
>
> I rebuilt with --fcompiler=gfortran to make sure, but I still get this....

It's --fcompiler=gnu95 . Presumably, the Fortran runtime libraries didn't get linked in. Please post the full build log.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From robert.kern at gmail.com Thu Apr 24 14:58:18 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 24 Apr 2008 13:58:18 -0500
Subject: [SciPy-user] problem with UMFPACK in scipy.test
In-Reply-To: <48107CD0.6020401@slac.stanford.edu>
References: <48107CD0.6020401@slac.stanford.edu>
Message-ID: <3d375d730804241158v38aedc9dsc93386bb6191f065@mail.gmail.com>

On Thu, Apr 24, 2008 at 7:28 AM, Johann Cohen-Tanugi wrote:
> hello,
> I have compiled and built UMFPACK, following the wiki, and when
> configuring scipy I read:
>
> umfpack_info:
>   amd_info:
>     libraries amd not found in /usr/local/lib
>     FOUND:
>       libraries = ['amd']
>       library_dirs = ['/usr/lib']
>
>   FOUND:
>     libraries = ['umfpack', 'amd']
>     library_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Lib/', '/usr/lib']
>     swig_opts = ['-I/data1/sources/MATHSTUFF/UMFPACK/Include']
>     define_macros = [('SCIPY_UMFPACK_H', None)]
>     include_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Include']
>
> I would infer from that that UMFPACK was correctly found, but after
> building scipy from svn, I get:

You should actually look later in the build log to see if the .so files got linked correctly.
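One way to make that check from Python after the fact is to dlopen the built extension directly with ctypes. This is only a sketch, using the _cephes.so path from the other thread as the example (any freshly built .so can be substituted): a missing Fortran runtime then surfaces immediately as the same undefined-symbol error the import produced.

import ctypes

# Bypasses the scipy import machinery; dlopen() reports unresolved
# symbols such as _gfortran_st_write_done directly.
ctypes.CDLL('/usr/lib/python2.5/site-packages/scipy/special/_cephes.so')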
> In [4]: scipy.test(verbose=2)
>
> ======================================================================
> SKIP: Getting factors of complex matrix
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
>     raise nose.SkipTest, msg
> SkipTest: UMFPACK appears not to be compiled
>
> ======================================================================
> SKIP: Solve: single precision
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/lib/python2.5/site-packages/scipy/testing/decorators.py", line 81, in skipper
>     raise nose.SkipTest, msg
> SkipTest: UMFPACK appears not to be compiled
>
> ----------------------------------------------------------------------
> Ran 1524 tests in 39.269s
>
> FAILED (failures=5, errors=12)
>
> What does that mean? I noticed that when building UMFPACK I put -fPIC in
> the UFConfig.mk but it does not seem to be honored. For instance:
>
> gcc -O3 -fexceptions -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE
> -I../Include -I../Source -I../../AMD/Include -I../../UFconfig -DDINT -c
> ../Source/umfpack_qsymbolic.c -o umfpack_di_qsymbolic.o
>
> Is that related?

Quite possibly, yes.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From shao at msg.ucsf.edu Thu Apr 24 15:24:35 2008
From: shao at msg.ucsf.edu (Lin Shao)
Date: Thu, 24 Apr 2008 12:24:35 -0700
Subject: [SciPy-user] EPD PIL _imaging C module
Message-ID:

Running scipy.test() in the latest EPD windows distribution, I got the following exception:

======================================================================
ERROR: test_imresize (scipy.misc.tests.test_pilutil.test_pilutil)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "E:\Python25\lib\site-packages\scipy-0.6.0.0003-py2.5-win32.egg\scipy\misc\tests\test_pilutil.py", line 17, in test_imresize
    im1 = pilutil.imresize(im,T(1.1))
  File "E:\Python25\lib\site-packages\scipy-0.6.0.0003-py2.5-win32.egg\scipy\misc\pilutil.py", line 225, in imresize
    im = toimage(arr)
  File "E:\Python25\lib\site-packages\scipy-0.6.0.0003-py2.5-win32.egg\scipy\misc\pilutil.py", line 104, in toimage
    image = Image.fromstring('L',shape,bytedata.tostring())
  File "E:\Python25\lib\site-packages\PIL-1.1.6.0003_s-py2.5-win32.egg\PIL\Image.py", line 1743, in fromstring
    im = new(mode, size)
  File "E:\Python25\lib\site-packages\PIL-1.1.6.0003_s-py2.5-win32.egg\PIL\Image.py", line 1710, in new
    return Image()._new(core.fill(mode, size, color))
  File "E:\Python25\lib\site-packages\PIL-1.1.6.0003_s-py2.5-win32.egg\PIL\Image.py", line 36, in __getattr__
    raise ImportError("The _imaging C module is not installed")
ImportError: The _imaging C module is not installed

It's a bit strange, because I do see _imaging.pyd in the PIL folder. I'm using XP 32-bit.

-lin

From fredmfp at gmail.com Thu Apr 24 16:34:45 2008
From: fredmfp at gmail.com (fred)
Date: Thu, 24 Apr 2008 22:34:45 +0200
Subject: [SciPy-user] remove duplicate points...
In-Reply-To: <200804231244.20541.pgmdevlist@gmail.com>
References: <480F42DF.50604@gmail.com> <200804231204.43851.pgmdevlist@gmail.com> <480F64B0.1090107@relativita.com> <200804231244.20541.pgmdevlist@gmail.com>
Message-ID: <4810EEE5.7000508@gmail.com>

Pierre GM a écrit :
> With that line, you get a view of the ndarray z as a record array, each record
> consisting of two integer fields named a and b.
> http://www.scipy.org/RecordArrays
> The names don't matter, actually, they're just placeholders in our case. As
> you have a nx4 array, each record (viz, set of coordinates) should have 4
> fields (x,y,z,value, for example), so use something like
> [('a',int),('b',int),('c',int),('d',int)]

I do like this one :-)

Many thanks to all of you!

Cheers,

-- Fred

From pijus at virketis.com Thu Apr 24 16:51:04 2008
From: pijus at virketis.com (Pijus Virketis)
Date: Thu, 24 Apr 2008 16:51:04 -0400 (EDT)
Subject: [SciPy-user] __setstate__ of TimeSeries subclass
Message-ID: <16163.216.223.55.126.1209070264.squirrel@216.223.55.126>

Hi,

I am trying to get the __setstate__ method working on a subclass of scikits.TimeSeries.

The setup:

from scikits.timeseries import *

class Test(TimeSeries):
    def __new__(cls, *args, **kwargs):
        return(TimeSeries.__new__(cls, [], date_array([])))

test = Test()
# works fine
print(repr(test))

The problem:

# one-observation TimeSeries instance
ts = time_series([1], date_array([Date(string="2000-01-01", freq="U")]))
# would like to transfer state from ts to test
test.__setstate__(ts.__getstate__())
# hm, seems to have gone well?
print(repr(test))

# ---------------------
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\python25\lib\site-packages\scikits\timeseries\tseries.py", line 527, in __repr__
    timestr = str(self.dates)
  File "C:\Python25\lib\site-packages\numpy-1.0.5.dev4749.0013-py2.5-win32.egg\numpy\core\numeric.py", line 499, in array_str
    return array2string(a, max_line_width, precision, suppress_small, ' ', "", str)
  File "C:\Python25\lib\site-packages\numpy-1.0.5.dev4749.0013-py2.5-win32.egg\numpy\core\arrayprint.py", line 240, in array2string
    separator, prefix)
  File "C:\Python25\lib\site-packages\numpy-1.0.5.dev4749.0013-py2.5-win32.egg\numpy\core\arrayprint.py", line 177, in _array2string
    _summaryEdgeItems, summary_insert)[:-1]
  File "C:\Python25\lib\site-packages\numpy-1.0.5.dev4749.0013-py2.5-win32.egg\numpy\core\arrayprint.py", line 286, in _formatArray
    word = format_function(a[-1])
  File "c:\python25\lib\site-packages\scikits\timeseries\tdates.py", line 219, in __getitem__
    unsorted = self._unsorted[indx]
IndexError: index out of bounds
# ---------------------

Tracing through with pdb showed that while the _data attribute is set nicely, something seems to go wrong with _dates, which are turned into None. I am continuing up the stack with DateArray.__setstate__, but I figure this is a reasonable time to ask for help. I am using an approximately month-old version of numpy 1.0.5 and today's version of scikits.TimeSeries.

Thanks for taking a look!

Pijus

From cohen at slac.stanford.edu Thu Apr 24 17:07:23 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Thu, 24 Apr 2008 23:07:23 +0200
Subject: [SciPy-user] cephes : gfortran
In-Reply-To: <3d375d730804241156l467db5das5cb4a629c1ff043c@mail.gmail.com>
References: <48108B49.2050600@slac.stanford.edu> <3d375d730804241156l467db5das5cb4a629c1ff043c@mail.gmail.com>
Message-ID: <4810F68B.2090901@slac.stanford.edu>

hi Robert, thanks for your help!
Here is the full build log (I cant find anything wrong related to cephes, but there are many things which look weird) : [cohen at jarrett scipy-svn]$ python setup.py build --fcompiler=gnu95 Warning: No configuration returned, assuming unavailable. mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /usr/local/lib FOUND: libraries = ['fftw3'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] djbfft_info: NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas', 'lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] language = c include_dirs = ['/usr/local/atlas/include'] customize GnuFCompiler Found executable /usr/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/local/atlas/lib -lptf77blas -lptcblas -latlas -llapack -lptf77blas -lptcblas -latlas -o _configtest ATLAS version 3.8.0 built by cohen on Sun Mar 30 22:47:43 CEST 2008: UNAME : Linux jarrett 2.6.24.3-50.fc8 #1 SMP Thu Mar 20 14:47:10 EDT 2008 i686 i686 i386 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Core2Duo -DATL_CPUMHZ=1801 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_GAS_x8632 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 2097152 F77 : gfortran, version GNU Fortran (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) F77FLAGS : -O -fPIC -m32 SMC : gcc, version gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC -m32 SKC : gcc, version gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC -m32 success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas', 'lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"3.8.0\\""')] include_dirs = ['/usr/local/atlas/include'] ATLAS version 3.8.0 lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in /usr/local/atlas/lib numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas', 'lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] language = f77 include_dirs = ['/usr/local/atlas/include'] customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas -lptcblas -latlas -llapack -lptf77blas -lptcblas -latlas -o _configtest ATLAS version 3.8.0 built by cohen on Sun Mar 30 22:47:43 CEST 2008: UNAME : Linux jarrett 2.6.24.3-50.fc8 #1 SMP Thu Mar 20 14:47:10 EDT 2008 i686 i686 i386 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Core2Duo -DATL_CPUMHZ=1801 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_GAS_x8632 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 2097152 F77 : gfortran, version GNU Fortran (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) F77FLAGS : -O -fPIC -m32 SMC : gcc, version gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC -m32 SKC : gcc, version gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fPIC -m32 success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas', 'lapack', 'ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/usr/local/atlas/lib'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.8.0\\""')] include_dirs = ['/usr/local/atlas/include'] ATLAS version 3.8.0 ATLAS version 3.8.0 umfpack_info: amd_info: libraries amd not found in /usr/local/lib FOUND: libraries = ['amd'] library_dirs = ['/usr/lib'] FOUND: libraries = ['umfpack', 'amd'] library_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Lib/', '/usr/lib'] swig_opts = ['-I/data1/sources/MATHSTUFF/UMFPACK/Include'] define_macros = [('SCIPY_UMFPACK_H', None)] include_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Include'] running build running scons customize UnixCCompiler Found executable /usr/lib/ccache/gcc customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize UnixCCompiler customize UnixCCompiler using scons running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. adding 'build/src.linux-i686-2.5/scipy/interpolate/dfitpack-f2pywrappers.f' to sources. building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. adding 'build/src.linux-i686-2.5/build/src.linux-i686-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'scipy/lib/blas/cblas.pyf.src' to sources. 
f2py options: ['skip:', ':'] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'scipy/lib/lapack/clapack.pyf.src' to sources. f2py options: ['skip:', ':'] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.linux-i686-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. adding 'build/src.linux-i686-2.5/build/src.linux-i686-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.linux-i686-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.linux-i686-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. adding 'build/src.linux-i686-2.5/build/src.linux-i686-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.linux-i686-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. 
building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.linalg.isolve._iterative" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.sparse.linalg.dsolve._zsuperlu" sources building extension "scipy.sparse.linalg.dsolve._dsuperlu" sources building extension "scipy.sparse.linalg.dsolve._csuperlu" sources building extension "scipy.sparse.linalg.dsolve._ssuperlu" sources building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources adding 'scipy/sparse/linalg/dsolve/umfpack/umfpack.i' to sources. building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. adding 'build/src.linux-i686-2.5/build/src.linux-i686-2.5/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources. building extension "scipy.sparse.sparsetools._csr" sources building extension "scipy.sparse.sparsetools._csc" sources building extension "scipy.sparse.sparsetools._coo" sources building extension "scipy.sparse.sparsetools._bsr" sources building extension "scipy.sparse.sparsetools._dia" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.linux-i686-2.5/fortranobject.c' to sources. adding 'build/src.linux-i686-2.5' to include_dirs. adding 'build/src.linux-i686-2.5/scipy/stats/mvn-f2pywrappers.f' to sources. building extension "scipy.ndimage._nd_image" sources building extension "scipy.ndimage._segment" sources building extension "scipy.ndimage._register" sources building extension "scipy.stsci.convolve._correlate" sources building extension "scipy.stsci.convolve._lineshape" sources building extension "scipy.stsci.image._combine" sources building data_files sources running build_py copying scipy/__svn_version__.py -> build/lib.linux-i686-2.5/scipy copying build/src.linux-i686-2.5/scipy/__config__.py -> build/lib.linux-i686-2.5/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize Gnu95FCompiler Found executable /usr/bin/gfortran customize Gnu95FCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext library 'mach' defined more than once, overwriting build_info {'sources': ['scipy/integrate/mach/r1mach.f', 'scipy/integrate/mach/d1mach.f', 'scipy/integrate/mach/xerror.f', 'scipy/integrate/mach/i1mach.f'], 'config_fc': {'noopt': ('scipy/integrate/setup.pyc', 1)}, 'source_languages': ['f77']}... with {'sources': ['scipy/special/mach/r1mach.f', 'scipy/special/mach/d1mach.f', 'scipy/special/mach/xerror.f', 'scipy/special/mach/i1mach.f'], 'config_fc': {'noopt': ('scipy/special/setup.pyc', 1)}, 'source_languages': ['f77']}... 
resetting extension 'scipy.integrate._odepack' language from 'c' to 'f77'. resetting extension 'scipy.integrate.vode' language from 'c' to 'f77'. resetting extension 'scipy.lib.blas.fblas' language from 'c' to 'f77'. resetting extension 'scipy.odr.__odrpack' language from 'c' to 'f77'. extending extension 'scipy.sparse.linalg.dsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.sparse.linalg.dsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler customize Gnu95FCompiler using build_ext best, Johann Robert Kern wrote: > On Thu, Apr 24, 2008 at 8:29 AM, Johann Cohen-Tanugi > wrote: > >> hello again, >> I also have a problem between cephes and gfortran : >> In [4]: from scipy import * >> --------------------------------------------------------------------------- >> ImportError Traceback (most recent call last) >> >> /home/cohen/ in () >> >> /usr/lib/python2.5/site-packages/scipy/special/__init__.py in () >> 6 #from special_version import special_version as __version__ >> 7 >> ----> 8 from basic import * >> 9 import specfun >> 10 import orthogonal >> >> /usr/lib/python2.5/site-packages/scipy/special/basic.py in () >> 6 >> 7 from numpy import * >> ----> 8 from _cephes import * >> 9 import types >> 10 import specfun >> >> ImportError: /usr/lib/python2.5/site-packages/scipy/special/_cephes.so: >> undefined symbol: _gfortran_st_write_done >> >> I rebuilt with --fcompiler=gfortran to make sure, but I still get this.... >> > > It's --fcompiler=gnu95 . Presumably, the Fortran runtime libraries > didn't get linked in. Please post the full build log. > > From robert.kern at gmail.com Thu Apr 24 17:28:17 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Apr 2008 16:28:17 -0500 Subject: [SciPy-user] cephes : gfortran In-Reply-To: <4810F68B.2090901@slac.stanford.edu> References: <48108B49.2050600@slac.stanford.edu> <3d375d730804241156l467db5das5cb4a629c1ff043c@mail.gmail.com> <4810F68B.2090901@slac.stanford.edu> Message-ID: <3d375d730804241428m495b0fcfg7997ce98d524b5d2@mail.gmail.com> On Thu, Apr 24, 2008 at 4:07 PM, Johann Cohen-Tanugi wrote: > hi Robert, > thanks for your help! > Here is the full build log (I cant find anything wrong related to > cephes, but there are many things which look weird) : > [cohen at jarrett scipy-svn]$ python setup.py build --fcompiler=gnu95 Delete the build directory so that everything gets rebuilt. Otherwise, nothing gets linked for us to see what went wrong. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Thu Apr 24 17:58:04 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 24 Apr 2008 17:58:04 -0400 Subject: [SciPy-user] __setstate__ of TimeSeries subclass In-Reply-To: <16163.216.223.55.126.1209070264.squirrel@216.223.55.126> References: <16163.216.223.55.126.1209070264.squirrel@216.223.55.126> Message-ID: <200804241758.07620.pgmdevlist@gmail.com> On Thursday 24 April 2008 16:51:04 Pijus Virketis wrote: > Hi, > > I am trying to get the __setstate__ method working on a subclass of > scikits.TimeSeries. 
Pijus, I'm going to investigate, but I'm not sure I understand what you're trying to do. Basically, __setstate__ and __getstate__ are not to be used by themselves, but by the _tsreconstruct function for pickling/unpickling. What's your goal? From dineshbvadhia at hotmail.com Thu Apr 24 18:18:03 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Thu, 24 Apr 2008 15:18:03 -0700 Subject: [SciPy-user] Sparse matrix todense Message-ID: Hi! The following code worked until I installed the latest svn. I want to select a particular row (e.g. row=1818) from the sparse matrix A: > import numpy > import scipy > from scipy import sparse > A = sparse.csr_matrix(A) > q[0,:] = sparse.csr_matrix.todense(sparse.csr_matrix.getrow(A, 1818)) The Traceback is: Traceback (most recent call last): File "C:\...\.py", line 157, in <module> q[0,:] = sparse.csr_matrix.todense(sparse.csr_matrix.getrow(A, 1818)) File "C:\Python25\Lib\site-packages\scipy\sparse\base.py", line 357, in getrow a = csr_matrix((1, m), dtype=self.dtype) NameError: global name 'csr_matrix' is not defined Thanks! Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Thu Apr 24 18:20:19 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Fri, 25 Apr 2008 00:20:19 +0200 Subject: [SciPy-user] cephes : gfortran In-Reply-To: <3d375d730804241428m495b0fcfg7997ce98d524b5d2@mail.gmail.com> References: <48108B49.2050600@slac.stanford.edu> <3d375d730804241156l467db5das5cb4a629c1ff043c@mail.gmail.com> <4810F68B.2090901@slac.stanford.edu> <3d375d730804241428m495b0fcfg7997ce98d524b5d2@mail.gmail.com> Message-ID: <481107A3.9020707@slac.stanford.edu> hi again, it must indeed have been a stale library compiled with the wrong fortran compiler.... Erasing the whole build dir and starting from scratch, I do not see this error again. thanks! Johann Robert Kern wrote: > On Thu, Apr 24, 2008 at 4:07 PM, Johann Cohen-Tanugi > wrote: > >> hi Robert, >> thanks for your help! >> Here is the full build log (I cant find anything wrong related to >> cephes, but there are many things which look weird) : >> [cohen at jarrett scipy-svn]$ python setup.py build --fcompiler=gnu95 >> > > Delete the build directory so that everything gets rebuilt. Otherwise, > nothing gets linked for us to see what went wrong. > > From wnbell at gmail.com Thu Apr 24 18:33:24 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 24 Apr 2008 17:33:24 -0500 Subject: [SciPy-user] Sparse matrix todense In-Reply-To: References: Message-ID: On Thu, Apr 24, 2008 at 5:18 PM, Dinesh B Vadhia wrote: > > > Hi! The following code worked until I installed the latest svn. I want to > select a particular row (e.g. row=1818) from the sparse matrix A: > > > q[0,:] = sparse.csr_matrix.todense(sparse.csr_matrix.getrow(A, 1818)) That's definitely a bug in getrow(). I'll fix it soon. Note that with the latest SVN you can do q[0,:] = A[1818,:].todense(). In other words, sparse slicing works correctly for CSR and CSC matrices now.
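For instance, the whole getrow/todense dance reduces to something like this (a rough, untested sketch with a small made-up matrix; your real A and q are of course much larger):

import numpy
from scipy import sparse

# small made-up matrix just to show the idiom
A = sparse.csr_matrix(numpy.matrix([[0, 1, 0], [2, 0, 3], [0, 0, 4]], dtype='d'))

q = numpy.asmatrix(numpy.zeros((1, 3)))
q[0,:] = A[1,:].todense()   # slice out row 1 directly, then densify
print q                     # [[ 2.  0.  3.]]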
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From cohen at slac.stanford.edu Thu Apr 24 18:29:11 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Fri, 25 Apr 2008 00:29:11 +0200 Subject: [SciPy-user] problem with UMFPACK in scipy.test In-Reply-To: <3d375d730804241158v38aedc9dsc93386bb6191f065@mail.gmail.com> References: <48107CD0.6020401@slac.stanford.edu> <3d375d730804241158v38aedc9dsc93386bb6191f065@mail.gmail.com> Message-ID: <481109B7.3090605@slac.stanford.edu> hi again, ok there was an uncommented CFLAG line at the bottom of the UFConfig.mk, my bad.... Commenting that line does bring the -fPIC arg back. As far as I can tell, there is nothing wrong occurring during the build : [cohen at jarrett scipy-svn]$ grep -i umfpack build.log umfpack_info: libraries = ['umfpack', 'amd'] library_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Lib/', '/usr/lib'] swig_opts = ['-I/data1/sources/MATHSTUFF/UMFPACK/Include'] define_macros = [('SCIPY_UMFPACK_H', None)] include_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Include'] building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources creating build/src.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack adding 'scipy/sparse/linalg/dsolve/umfpack/umfpack.i' to sources. swig: scipy/sparse/linalg/dsolve/umfpack/umfpack.i swig -python -I/data1/sources/MATHSTUFF/UMFPACK/Include -I/usr/local/atlas/include -o build/src.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c -outdir build/src.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack scipy/sparse/linalg/dsolve/umfpack/umfpack.i creating build/lib.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/info.py -> build/lib.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/umfpack.py -> build/lib.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/setup.py -> build/lib.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/setupscons.py -> build/lib.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack copying scipy/sparse/linalg/dsolve/umfpack/__init__.py -> build/lib.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack copying build/src.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack/_umfpack.py -> build/lib.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack building 'scipy.sparse.linalg.dsolve.umfpack.__umfpack' extension creating build/temp.linux-i686-2.5/build/src.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack compile options: '-DSCIPY_UMFPACK_H -DATLAS_INFO="\"3.8.0\"" -I/data1/sources/MATHSTUFF/UMFPACK/Include -I/usr/local/atlas/include -I/usr/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c' gcc: build/src.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c gcc -pthread -shared build/temp.linux-i686-2.5/build/src.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.o -L/data1/sources/MATHSTUFF/UMFPACK/Lib/ -L/usr/lib -L/usr/local/atlas/lib -L/usr/lib -Lbuild/temp.linux-i686-2.5 -lumfpack -lamd -lptf77blas -lptcblas -latlas -llapack -lptf77blas -lptcblas -latlas -lpython2.5 -o build/lib.linux-i686-2.5/scipy/sparse/linalg/dsolve/umfpack/__umfpack.so and the test now reaches : Prefactorize (with UMFPACK) matrix for solving with multiple rhs ... ok Prefactorize matrix for solving with multiple rhs ... ok Solve with UMFPACK: double precision complex ... ok Solve: single precision complex ... 
ok Solve with UMFPACK: double precision, sparse rhs ... ok Solve with UMFPACK: double precision ... ok and then continue until it seems to hang at: test_gammaincinv (test_basic.TestGamma) ... I guess that is for another thread! and another day.... thanks a lot for the hints, Johann Robert Kern wrote: > On Thu, Apr 24, 2008 at 7:28 AM, Johann Cohen-Tanugi > wrote: > >> hello, >> I have compiled and built UMFPACK, following the wiki, and when >> configuring scipy I read : >> umfpack_info: >> amd_info: >> libraries amd not found in /usr/local/lib >> FOUND: >> libraries = ['amd'] >> library_dirs = ['/usr/lib'] >> >> FOUND: >> libraries = ['umfpack', 'amd'] >> library_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Lib/', '/usr/lib'] >> swig_opts = ['-I/data1/sources/MATHSTUFF/UMFPACK/Include'] >> define_macros = [('SCIPY_UMFPACK_H', None)] >> include_dirs = ['/data1/sources/MATHSTUFF/UMFPACK/Include'] >> >> I would infer from that that UMFPACK was correctly found, but after >> building scipy from svn, I get : >> > > You should actually look later in the build log to see if the .so > files got linked correctly. > > >> In [4]: scipy.test(verbose=2) >> >> ====================================================================== >> SKIP: Getting factors of complex matrix >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/usr/lib/python2.5/site-packages/scipy/testing/decorators.py", >> line 81, in skipper >> raise nose.SkipTest, msg >> SkipTest: UMFPACK appears not to be compiled >> >> >> >> ====================================================================== >> SKIP: Solve: single precision >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/usr/lib/python2.5/site-packages/scipy/testing/decorators.py", >> line 81, in skipper >> raise nose.SkipTest, msg >> SkipTest: UMFPACK appears not to be compiled >> >> ---------------------------------------------------------------------- >> Ran 1524 tests in 39.269s >> >> FAILED (failures=5, errors=12) >> >> >> What does that mean? I noticed that when building UMFPACK I put -fPIC in >> the UFConfig.mk but it does not seem to be honored. For instance : >> gcc -O3 -fexceptions -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE >> -I../Include -I../Source -I../../AMD/Include -I../../UFconfig -DDINT -c >> ../Source/umfpack_qsymbolic.c -o umfpack_di_qsymbolic.o >> >> Is that related? >> > > Quite possibly, yes. > > From pijus at virketis.com Thu Apr 24 18:36:29 2008 From: pijus at virketis.com (Pijus Virketis) Date: Thu, 24 Apr 2008 18:36:29 -0400 (EDT) Subject: [SciPy-user] __setstate__ of TimeSeries subclass In-Reply-To: <200804241758.07620.pgmdevlist@gmail.com> References: <16163.216.223.55.126.1209070264.squirrel@216.223.55.126> <200804241758.07620.pgmdevlist@gmail.com> Message-ID: <42494.216.223.55.126.1209076589.squirrel@216.223.55.126> Pierre, I suspected that would be the first question. What's really going on is that I am building my subclass via multiple inheritance. As is the case with most vanilla Python, the other superclass is instantiated only with a non-trivial __init__, and I would like to be able to do the same "downstream" as well. I am trying to put all of the heavy lifting into __init__, and have a __new__ that is just a step above "pass". It won't come as a big surprise to me if I learn here that this is anywhere between ill-advised and totally dumb. 
;) class PlainSuperclass(object): def __init__(self, x): self.x = x class Test(TimeSeries, PlainSuperclass): def __new__(cls, *args, **kwargs): return(TimeSeries.__new__(cls, [], date_array([]))) def __init__(self, data, dates, x): # set the actual state of the time series ts = time_series(data, dates) self.__setstate__(ts.__getstate__()) # state of PlainSuperclass is added naturally PlainSuperclass.__init__(self, x) # again, the real goal is to be able to subclass without knowing that we had # to deal with a serious __new__ method upstream at all # hypothetical class PlainSubclass(Test): def __init__(self, data, dates, x, foo): Test.__init__(self, data, dates, x) self.foo = foo Thanks again, -P Pierre GM said: > On Thursday 24 April 2008 16:51:04 Pijus Virketis wrote: >> Hi, >> >> I am trying to get the __setstate__ method working on a subclass of >> scikits.TimeSeries. > > Pijus, > I'm going to investigate, but I'm not sure I understand what you're trying to > do. > Basically, __setstate__ and __getstate__ are not to be used by themselves, > but by the _tsreconstruct function for pickling/unpickling. > What's your goal? > From aisaac at american.edu Thu Apr 24 19:12:50 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 24 Apr 2008 19:12:50 -0400 Subject: [SciPy-user] remove duplicate points... In-Reply-To: <4810EEE5.7000508@gmail.com> References: <480F42DF.50604@gmail.com> <200804231204.43851.pgmdevlist@gmail.com> <480F64B0.1090107@relativita.com> <200804231244.20541.pgmdevlist@gmail.com> <4810EEE5.7000508@gmail.com> Message-ID: Does this thread raise the following question: should unique take an axis argument? Cheers, Alan Isaac From pgmdevlist at gmail.com Thu Apr 24 20:21:17 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 24 Apr 2008 20:21:17 -0400 Subject: [SciPy-user] remove duplicate points... In-Reply-To: References: <480F42DF.50604@gmail.com> <4810EEE5.7000508@gmail.com> Message-ID: <200804242021.18562.pgmdevlist@gmail.com> On Thursday 24 April 2008 19:12:50 Alan G Isaac wrote: > Does this thread raise the following question: > should unique take an axis argument? Mmh, I'd prefer unique to stay as it is (on 1D arrays), and use apply_along_axis when needed. I'm OK either way... From pgmdevlist at gmail.com Thu Apr 24 21:01:29 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 24 Apr 2008 21:01:29 -0400 Subject: [SciPy-user] __setstate__ of TimeSeries subclass In-Reply-To: <42494.216.223.55.126.1209076589.squirrel@216.223.55.126> References: <16163.216.223.55.126.1209070264.squirrel@216.223.55.126> <200804241758.07620.pgmdevlist@gmail.com> <42494.216.223.55.126.1209076589.squirrel@216.223.55.126> Message-ID: <200804242101.29674.pgmdevlist@gmail.com> On Thursday 24 April 2008 18:36:29 Pijus Virketis wrote: > Pierre, > > I suspected that would be the first question. Eh... > What's really going on is > that I am building my subclass via multiple inheritance. As is the case > with most vanilla Python, the other superclass is instantiated only with a > non-trivial __init__, and I would like to be able to do the same > "downstream" as well. I am trying to put all of the heavy lifting into > __init__, and have a __new__ that is just a step above "pass". It won't > come as a big surprise to me if I learn here that this is anywhere between > ill-advised and totally dumb. ;) The problem with this approach (and the example you give) is that __init__ is never called for ndarrays and their subclasses: instead, you need __new__ and __array_finalize__.
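A quick way to see it (minimal sketch; the prints are just instrumentation):

import numpy as np

class Demo(np.ndarray):
    def __init__(self, *args, **kwargs):
        print "__init__ called"            # never reached in the lines below
    def __array_finalize__(self, obj):
        print "__array_finalize__ called"

a = np.arange(4).view(Demo)   # prints "__array_finalize__ called" only
b = a[:2]                     # slicing goes through __array_finalize__ too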
You'll find more info here: http://www.scipy.org/Subclasses So, what to do (following the nomenclature of your example)? A possibility would be to reproduce most of the PlainSuperClass.__init__ in your Test.__new__, and make sure that the attributes of an instance of PlainSuperClass are dealt with properly in Test.__array_finalize__, something along those lines: class PlainSuperClass(object): def __init__(self, color): self.color = color def echocolor(self): print str(self.color).upper() class Test(TimeSeries, PlainSuperClass): def __new__(cls, color, *args, **kwargs): data = TimeSeries.__new__(cls, [], date_array([])) data.color = color return data def __array_finalize__(self,obj): TimeSeries.__array_finalize__(self,obj) self.color = getattr(obj,'color',YourDefaultColor) Now, I guess that this won't suit you if your PlainSuperClass.__init__ is relatively complex. Maybe you could try a method similar to the old implementation of MaskedArray, using containers: don't inherit from ndarray, but define a __array__ method that could translate part of your data to a ndarray (in that case, a TimeSeries). You should find some info in numpy.lib.user_array. That solution might be quite clunky, however... That's why we developed TimeSeries as subclasses of ndarrays in the first place... Still: why are you trying to pass the pickling state of a TimeSeries to your Test object? In any case, let me know how it goes, and don't hesitate to contact me off-list if you need more help/info. From elmico.filos at gmail.com Fri Apr 25 10:17:48 2008 From: elmico.filos at gmail.com (Mico Filós) Date: Fri, 25 Apr 2008 16:17:48 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: <9457e7c80804230222x5a9bc315h3862f388fd03f02d@mail.gmail.com> References: <9457e7c80804230222x5a9bc315h3862f388fd03f02d@mail.gmail.com> Message-ID: Dear all, here is my first attempt. I basically use Nathan's suggested functions, and _rand_sparse incorporates the algorithm proposed by David to avoid ending up with fewer nonzero elements than expected. It is the first time I propose an update for scipy code, so be lenient with me :) from numpy.random import random_integers, randint, permutation from scipy import rand, randn, ones, array from scipy.sparse import csr_matrix def _rand_sparse(m, n, density): # check parameters here if density > 1.0 or density < 0.0: raise ValueError('density should be between 0 and 1') # More checks? # Here I use the algorithm suggested by David to avoid ending # up with fewer than m*n*density nonzero elements (with the algorithm # provided by Nathan there is a nonzero probability of having duplicate # row/col pairs). nnz = max( min( int(m*n*density), m*n), 0) rand_seq = permutation(m*n)[:nnz] row = rand_seq / n col = rand_seq % n data = ones(nnz, dtype='int8') # duplicate (i,j) entries will be summed together return csr_matrix( (data,(row,col)), shape=(m,n) ) def sprand(m, n, density): """Build a sparse uniformly distributed random matrix Parameters ---------- m, n : dimensions of the result (rows, columns) density : fraction of nonzero entries. Example ------- >>> from scipy.sparse import sprand >>> print sprand(2, 3, 0.5).todense() matrix([[ 0.5724829 , 0. , 0.92891214], [ 0. , 0.07712993, 0. ]]) """ A = _rand_sparse(m, n, density) A.data = rand(A.nnz) return A def sprandn(m, n, density): """Build a sparse normally distributed random matrix Parameters ---------- m, n : dimensions of the result (rows, columns) density : fraction of nonzero entries.
Example ------- >>> from scipy.sparse import sprandn >>> print sprandn(2, 4, 0.5).todense() matrix([[-0.84041995, 0. , 0. , -0.22398594], [-0.664707 , 0. , 0. , -0.06084135]]) """ A = _rand_sparse(m, n, density) A.data = randn(A.nnz) return A if __name__ == '__main__': print sprand(2, 3, 0.5).todense() print sprandn(2, 5, 0.2).todense() From jdh2358 at gmail.com Fri Apr 25 10:28:20 2008 From: jdh2358 at gmail.com (John Hunter) Date: Fri, 25 Apr 2008 09:28:20 -0500 Subject: [SciPy-user] problem building svn scipy Message-ID: <88e473830804250728o24834ec1o1201fd434f4b0f83@mail.gmail.com> With numpy r5083 and scipy r4176 I am getting the following build error building 'scipy.cluster._hierarchy_wrap' extension compiling C sources C compiler: /opt/app/g++lib6/gcc-3.4/bin/gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include -I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include -I/opt/app/g++lib6/python-2.4/include/python2.4 -c' gcc: scipy/cluster/src/hierarchy_wrap.c scipy/cluster/src/hierarchy_wrap.c: In function `calculate_cluster_sizes_wrap': scipy/cluster/src/hierarchy_wrap.c:117: error: parse error before numeric constant scipy/cluster/src/hierarchy_wrap.c:120: error: invalid lvalue in unary `&' scipy/cluster/src/hierarchy_wrap.c:124: error: invalid type argument of `->' scipy/cluster/src/hierarchy_wrap.c: In function `cluster_maxclust_monocrit_wrap': scipy/cluster/src/hierarchy_wrap.c:251: warning: unused variable `cutoff' scipy/cluster/src/hierarchy_wrap.c: In function `pdist_euclidean_wrap': scipy/cluster/src/hierarchy_wrap.c:387: error: parse error before numeric constant johnh at flag:numpy> uname -a SunOS flag 5.10 Generic_118855-15 i86pc i386 i86pc johnh at flag:numpy> gcc --version gcc (GCC) 3.4.1 Any ideas? Thanks, JDH From mhearne at usgs.gov Fri Apr 25 11:42:54 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Fri, 25 Apr 2008 09:42:54 -0600 Subject: [SciPy-user] failure to build scipy on RHEL5 In-Reply-To: <40e64fa20804241831j4dfaf308vba157dc7b898624c@mail.gmail.com> References: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov> <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com> <48FB4760-6D8E-47E2-AE9B-E514C2172298@usgs.gov> <3d375d730804231459r7842973fje9f2c650bd6180a@mail.gmail.com> <40e64fa20804231517g4e13f4c8w2d972de1a1555799@mail.gmail.com> <40e64fa20804241327j2e7fa87dt3cd14c0823ba143f@mail.gmail.com> <40e64fa20804241831j4dfaf308vba157dc7b898624c@mail.gmail.com> Message-ID: <86B128C9-0EE1-4836-A003-68788CF35C6B@usgs.gov> Paul - Thanks! I took a first crack at installing these, and started with BLAS. When I do "sudo rpm -i blas-3.1.1-2.el5.x86_64.rpm", I get a bunch of conflicts... I then tried "sudo rpm -q blas" and was returned: blas-3.0-37.el5 blas-3.0-37.el5 I'm not terribly familiar with RH, as I use Ubuntu at home, does this mean I have two instances of BLAS 3.0 already installed? If so, is it safe to use rpm to remove them? --Mike On Apr 24, 2008, at 7:31 PM, Paul Barrett wrote: > Mike, > > Attached are the following RPMs: > > atlas, blas, lapack, python-numpy, python-scipy > > You may need some additional packages such as fftw3, etc. which are > readily available in one or more repositories. > > I'll send the devel RPMs in the next email. > > I hope that you get them. > > -- Paul > > On Thu, Apr 24, 2008 at 6:22 PM, Michael Hearne > wrote: >> Yes. 
>> >> >> >> On Apr 24, 2008, at 2:27 PM, Paul Barrett wrote: >> Mike, >> >> Are you on a 64 bit platform? >> >> -- Paul >> >> On Thu, Apr 24, 2008 at 12:52 PM, Michael Hearne >> wrote: >> Paul - I apologize if you get two responses - it's not clear if my >> email >> client actually delivered another email I just sent on this topic. >> >> In answer to your offer - yes, that would be great if I could get >> the RPMs, >> as I am completely unable to compile scipy on my own. I'm using >> f77, and >> getting complaints about not having a Fortran 90 compiler, and not >> having >> compiled with LAPACK with -fPIC (which I certainly _tried_ to do). >> >> Anyway, if you have time to put them on EPEL, that would help me and >> hopefully other frustrated RHEL users out there. Otherwise, let me >> know the >> best way for me to get the files. >> >> Thanks, >> >> Mike >> >> >> >> On Apr 23, 2008, at 4:17 PM, Paul Barrett wrote: >> Mike, >> >> I have numpy, scipy, and matplotlib RPMs for EL5 x86_64, if you want >> them. I also think that I have the i386 ones too, but the SRPMs are >> of course available. >> >> I've been meaning to offer them to the EPEL repository, but I haven't >> made the time. >> >> -- Paul >> >> >> On Wed, Apr 23, 2008 at 5:59 PM, Robert Kern >> wrote: >> On Wed, Apr 23, 2008 at 4:53 PM, Michael Hearne >> wrote: >> >> Wasn't sure how far back to go, so here's several more lines' worth: >> >> Here's the important one: >> >> >> /usr/bin/ld: /usr/local/lib/libfblas.a(cgemm.o): relocation >> R_X86_64_32 against `a local symbol' can not be used when making a >> shared object; recompile with -fPIC >> /usr/local/lib/libfblas.a: could not read symbols: Bad value >> >> You need to compile BLAS to be relocatable using the -fPIC flag to >> the >> compilation flags. If you are following the directions on that wiki >> strictly, replace this line: >> >> g77 -fno-second-underscore -O2 -c *.f >> >> with this line: >> >> g77 -fPIC -fno-second-underscore -O2 -c *.f >> >> >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma that is made terrible by our own mad attempt to interpret it >> as >> though it had an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> ------------------------------------------------------ >> Michael Hearne >> mhearne at usgs.gov >> (303) 273-8620 >> USGS National Earthquake Information Center >> 1711 Illinois St. Golden CO 80401 >> Senior Software Engineer >> Synergetics, Inc. >> ------------------------------------------------------ >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> >> >> ------------------------------------------------------ >> Michael Hearne >> mhearne at usgs.gov >> (303) 273-8620 >> USGS National Earthquake Information Center >> 1711 Illinois St. Golden CO 80401 >> Senior Software Engineer >> Synergetics, Inc. 
>> ------------------------------------------------------ >> >> > <atlas-3.6.0-12.el5.x86_64.rpm> > <numpy-1.0.4-1.el5.x86_64.rpm> ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lev at columbia.edu Fri Apr 25 11:58:54 2008 From: lev at columbia.edu (Lev Givon) Date: Fri, 25 Apr 2008 11:58:54 -0400 Subject: [SciPy-user] failure to build scipy on RHEL5 In-Reply-To: <86B128C9-0EE1-4836-A003-68788CF35C6B@usgs.gov> References: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov> <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com> <48FB4760-6D8E-47E2-AE9B-E514C2172298@usgs.gov> <3d375d730804231459r7842973fje9f2c650bd6180a@mail.gmail.com> <40e64fa20804231517g4e13f4c8w2d972de1a1555799@mail.gmail.com> <40e64fa20804241327j2e7fa87dt3cd14c0823ba143f@mail.gmail.com> <40e64fa20804241831j4dfaf308vba157dc7b898624c@mail.gmail.com> <86B128C9-0EE1-4836-A003-68788CF35C6B@usgs.gov> Message-ID: <20080425155854.GW19981@localhost.cc.columbia.edu> Received from Michael Hearne on Fri, Apr 25, 2008 at 11:42:54AM EDT: > Paul - Thanks! I took a first crack at installing these, and started with > BLAS. > > When I do "sudo rpm -i blas-3.1.1-2.el5.x86_64.rpm", I get a bunch of > conflicts... > > I then tried "sudo rpm -q blas" and was returned: > blas-3.0-37.el5 > blas-3.0-37.el5 > > I'm not terribly familiar with RH, as I use Ubuntu at home, does this mean > I have two instances of BLAS 3.0 already installed? If so, is it safe to > use rpm to remove them? > > --Mike The fact that the blas package is seemingly listed twice implies that you probably have both the i386 and x86_64 versions of the package installed. To see the architecture of the installed packages, try using the following query: sudo rpm -q --queryformat="%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n" blas Removing the package that does not correspond to your machine's architecture should be harmless. L.G.
From robert.kern at gmail.com Fri Apr 25 14:48:07 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Apr 2008 13:48:07 -0500 Subject: [SciPy-user] problem building svn scipy In-Reply-To: <88e473830804250728o24834ec1o1201fd434f4b0f83@mail.gmail.com> References: <88e473830804250728o24834ec1o1201fd434f4b0f83@mail.gmail.com> Message-ID: <3d375d730804251148h349b9619k846759cf03ba3fa4@mail.gmail.com> On Fri, Apr 25, 2008 at 9:28 AM, John Hunter wrote: > With numpy r5083 and scipy r4176 I am getting the following build error > > building 'scipy.cluster._hierarchy_wrap' extension > compiling C sources > C compiler: /opt/app/g++lib6/gcc-3.4/bin/gcc -fno-strict-aliasing > -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC > > compile options: > '-I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include > -I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include > -I/opt/app/g++lib6/python-2.4/include/python2.4 -c' > gcc: scipy/cluster/src/hierarchy_wrap.c > scipy/cluster/src/hierarchy_wrap.c: In function `calculate_cluster_sizes_wrap': > scipy/cluster/src/hierarchy_wrap.c:117: error: parse error before > numeric constant > scipy/cluster/src/hierarchy_wrap.c:120: error: invalid lvalue in unary `&' > scipy/cluster/src/hierarchy_wrap.c:124: error: invalid type argument of `->' It looks like something is #defining CS to be a number. Search through the headers for this. > scipy/cluster/src/hierarchy_wrap.c: In function > `cluster_maxclust_monocrit_wrap': > scipy/cluster/src/hierarchy_wrap.c:251: warning: unused variable `cutoff' > scipy/cluster/src/hierarchy_wrap.c: In function `pdist_euclidean_wrap': > scipy/cluster/src/hierarchy_wrap.c:387: error: parse error before > numeric constant Possibly the same thing with _X or _dm. Are there any more errors following this? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wnbell at gmail.com Fri Apr 25 14:56:21 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 25 Apr 2008 13:56:21 -0500 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: <9457e7c80804230222x5a9bc315h3862f388fd03f02d@mail.gmail.com> Message-ID: On Fri, Apr 25, 2008 at 9:17 AM, Mico Filós wrote: > Dear all, > > here is my first attempt. I basically use Nathan's suggested > functions, and _rand_sparse incorporates the algorithm proposed by > David to avoid ending up with fewer nonzero elements than expected. It > is the first time I propose an update > for scipy code, so be lenient with me :) Thanks for your contribution Mico. Unfortunately, the line rand_seq = permutation(m*n)[:nnz] is a *dense* MxN operation, so we cannot use this approach. MATLAB's sprand() and sprandn() have the same artifact as the code I presented, so I don't know whether it's worth trying to avoid the duplicate entries. If you can figure out an economical way to produce exactly nnz elements in the result, then we would probably use it. I wasn't able to come up with anything better than the MATLAB approach.
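To make the artifact concrete: duplicate (i,j) pairs get summed when the COO matrix is converted, so you silently end up with fewer stored nonzeros than you asked for. A tiny made-up example (sketch):

from numpy import array, ones
from scipy.sparse import coo_matrix

row = array([0, 0, 1])          # two entries hit the same (row, col) pair
col = array([1, 1, 2])
data = ones(3)
A = coo_matrix((data, (row, col)), shape=(2, 3)).tocsr()
print A.nnz                     # 2, not 3: the duplicates were summed
print A.todense()               # [[ 0.  2.  0.] [ 0.  0.  1.]]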
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From peridot.faceted at gmail.com Fri Apr 25 16:04:00 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 25 Apr 2008 22:04:00 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: <9457e7c80804230222x5a9bc315h3862f388fd03f02d@mail.gmail.com> Message-ID: On 25/04/2008, Nathan Bell wrote: > On Fri, Apr 25, 2008 at 9:17 AM, Mico Filós wrote: > > Dear all, > > > > here is my first attempt. I basically use Nathan's suggested > > functions, and _rand_sparse incorporates the algorithm proposed by > > David to avoid ending up with fewer nonzero elements than expected. It > > is the first time I propose an update > > for scipy code, so be lenient with me :) > > > Thanks for your contribution Mico. Unfortunately, the line > > rand_seq = permutation(m*n)[:nnz] > > is a *dense* MxN operation, so we cannot use this approach. > > MATLAB's sprand() and sprandn() have the same artifact as the code I > presented, so I don't know whether it's worth trying to avoid the > duplicate entries. > > If you can figure out an economical way to produce exactly nnz elements > in the result, then we would probably use it. I wasn't able to come > up with anything better than the MATLAB approach. Here's an approach that works. Not ideal, but still only O(nnz): pick nnz distinct integers. Throw out any repeats and pick replacements. Repeat until you have no repeats. Requires an average of just a few iterations unless nnz>(m*n)/2 (say), in which case you can safely just use permutation(). Anne -------------- next part -------------- A non-text attachment was scrubbed... Name: sprand.py Type: text/x-python Size: 730 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Fri Apr 25 16:14:53 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 25 Apr 2008 22:14:53 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: <9457e7c80804230222x5a9bc315h3862f388fd03f02d@mail.gmail.com> Message-ID: On Fri, 25 Apr 2008 22:04:00 +0200 "Anne Archibald" wrote: > On 25/04/2008, Nathan Bell wrote: >> On Fri, Apr 25, 2008 at 9:17 AM, Mico Filós >> wrote: >> > Dear all, >> > >> > here is my first attempt. I basically use Nathan's >> suggested >> > functions, and _rand_sparse incorporates the >> algorithm proposed by >> > David to avoid ending up with fewer nonzero elements >> than expected. It >> > is the first time I propose an update >> > for scipy code, so be lenient with me :) >> >> >> Thanks for your contribution Mico. Unfortunately, the >> line >> >> rand_seq = permutation(m*n)[:nnz] >> >> is a *dense* MxN operation, so we cannot use this >> approach. >> >> MATLAB's sprand() and sprandn() have the same artifact >> as the code I >> presented, so I don't know whether it's worth trying to >> avoid the >> duplicate entries. >> >> If you can figure out an economical way to produce >> exactly nnz elements >> in the result, then we would probably use it. I wasn't >> able to come >> up with anything better than the MATLAB approach. > > Here's an approach that works. Not ideal, but still only > O(nnz): pick > nnz distinct integers. Throw out any repeats and pick > replacements. > Repeat until you have no repeats. Requires an average of > just a few > iterations unless nnz>(m*n)/2 (say), in which case you > can safely just > use permutation(). > > Anne Hi Anne, I ran your script several times. Sometimes I get >>> sprandn(5,5,3).todense() matrix([[ 0. , 0. , 0. , 0.
, 0. ], [ 0. , 0.36548958, 0. , 0. , 0. ], [ 1.51125878, 0. , 0. , 0. , -0.20285678], [ 0. , 0. , 0. , 0. , 0. ], [ 0. , 0. , 0. , 0. , 0. ]]) >>> sprandn(5,5,3).todense() Traceback (most recent call last): File "<stdin>", line 1, in ? File "sprand.py", line 23, in sprandn return scipy.sparse.coo_matrix((np.random.randn(nnz),ij),(m,n)) File "/usr/lib/python2.4/site-packages/scipy/sparse/coo.py", line 180, in __init__ self._check() File "/usr/lib/python2.4/site-packages/scipy/sparse/coo.py", line 213, in _check raise ValueError, "row index exceedes matrix dimensions" ValueError: row index exceedes matrix dimensions Cheers, Nils From dineshbvadhia at hotmail.com Fri Apr 25 16:23:54 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Fri, 25 Apr 2008 13:23:54 -0700 Subject: [SciPy-user] Matrix, Sparse problem with svn 4167 Message-ID: I have a working program with b=Ax, where A is a large sparse matrix. However, I need the int8 support in the sparse library to utilize larger matrices. I managed to get hold of a scipy svn 4167 build and b=Ax now returns garbage results. Nothing was changed in the program except replacing getrow with slicing plus .todense(), and I checked these to make sure the right rows were being picked up. I wish I could point out where exactly the problem lies, but there was no Traceback, just the wrong results; it can safely be assumed it is with the numpy/scipy matrix support. Any ideas? Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhearne at usgs.gov Fri Apr 25 16:27:23 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Fri, 25 Apr 2008 14:27:23 -0600 Subject: [SciPy-user] failure to build scipy on RHEL5 In-Reply-To: <40e64fa20804241831j4dfaf308vba157dc7b898624c@mail.gmail.com> References: <71551062-3FB2-4FA4-B617-D6FE011DEEF3@usgs.gov> <3d375d730804231449h1d48d809ra7806beecd6270db@mail.gmail.com> <48FB4760-6D8E-47E2-AE9B-E514C2172298@usgs.gov> <3d375d730804231459r7842973fje9f2c650bd6180a@mail.gmail.com> <40e64fa20804231517g4e13f4c8w2d972de1a1555799@mail.gmail.com> <40e64fa20804241327j2e7fa87dt3cd14c0823ba143f@mail.gmail.com> <40e64fa20804241831j4dfaf308vba157dc7b898624c@mail.gmail.com> Message-ID: <8690829C-CA8B-4642-A759-92241023A594@usgs.gov> Paul (or anyone) - You're right, there are other dependencies: error: Failed dependencies: libamd.so.2()(64bit) is needed by python-scipy-0.6.0-3.el5.x86_64 libfftw3.so.3()(64bit) is needed by python-scipy-0.6.0-3.el5.x86_64 libumfpack.so.5()(64bit) is needed by python-scipy-0.6.0-3.el5.x86_64 I did some searching around and found that libamd and libumfpack are both included in something called SuiteSparse (http://www.cise.ufl.edu/research/sparse/SuiteSparse/ ). I found links to the RPM for this package here: http://rpm.pbone.net/index.php3/stat/4/idpl/4852092/com/suitesparse-3.0.0-1.el5.i386.rpm.html but unfortunately can't connect to any of the mirrors. Does anyone have any tips or pre-made RPMs for SuiteSparse or the two libraries mentioned above? I haven't even started to look for the FFT library yet... --Mike On Apr 24, 2008, at 7:31 PM, Paul Barrett wrote: > Mike, > > Attached are the following RPMs: > > atlas, blas, lapack, python-numpy, python-scipy > > You may need some additional packages such as fftw3, etc. which are > readily available in one or more repositories. > > I'll send the devel RPMs in the next email. > > I hope that you get them. > > -- Paul > > On Thu, Apr 24, 2008 at 6:22 PM, Michael Hearne > wrote: >> Yes.
>> >> >> >> On Apr 24, 2008, at 2:27 PM, Paul Barrett wrote: >> Mike, >> >> Are you on a 64 bit platform? >> >> -- Paul >> >> On Thu, Apr 24, 2008 at 12:52 PM, Michael Hearne >> wrote: >> Paul - I apologize if you get two responses - it's not clear if my >> email >> client actually delivered another email I just sent on this topic. >> >> In answer to your offer - yes, that would be great if I could get >> the RPMs, >> as I am completely unable to compile scipy on my own. I'm using >> f77, and >> getting complaints about not having a Fortran 90 compiler, and not >> having >> compiled with LAPACK with -fPIC (which I certainly _tried_ to do). >> >> Anyway, if you have time to put them on EPEL, that would help me and >> hopefully other frustrated RHEL users out there. Otherwise, let me >> know the >> best way for me to get the files. >> >> Thanks, >> >> Mike >> >> >> >> On Apr 23, 2008, at 4:17 PM, Paul Barrett wrote: >> Mike, >> >> I have numpy, scipy, and matplotlib RPMs for EL5 x86_64, if you want >> them. I also think that I have the i386 ones too, but the SRPMs are >> of course available. >> >> I've been meaning to offer them to the EPEL repository, but I haven't >> made the time. >> >> -- Paul >> >> >> On Wed, Apr 23, 2008 at 5:59 PM, Robert Kern >> wrote: >> On Wed, Apr 23, 2008 at 4:53 PM, Michael Hearne >> wrote: >> >> Wasn't sure how far back to go, so here's several more lines' worth: >> >> Here's the important one: >> >> >> /usr/bin/ld: /usr/local/lib/libfblas.a(cgemm.o): relocation >> R_X86_64_32 against `a local symbol' can not be used when making a >> shared object; recompile with -fPIC >> /usr/local/lib/libfblas.a: could not read symbols: Bad value >> >> You need to compile BLAS to be relocatable using the -fPIC flag to >> the >> compilation flags. If you are following the directions on that wiki >> strictly, replace this line: >> >> g77 -fno-second-underscore -O2 -c *.f >> >> with this line: >> >> g77 -fPIC -fno-second-underscore -O2 -c *.f >> >> >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma that is made terrible by our own mad attempt to interpret it >> as >> though it had an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> ------------------------------------------------------ >> Michael Hearne >> mhearne at usgs.gov >> (303) 273-8620 >> USGS National Earthquake Information Center >> 1711 Illinois St. Golden CO 80401 >> Senior Software Engineer >> Synergetics, Inc. >> ------------------------------------------------------ >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> >> >> ------------------------------------------------------ >> Michael Hearne >> mhearne at usgs.gov >> (303) 273-8620 >> USGS National Earthquake Information Center >> 1711 Illinois St. Golden CO 80401 >> Senior Software Engineer >> Synergetics, Inc. 
>> ------------------------------------------------------ >> >> > < > atlas > -3.6.0 > -12 > .el5 > .x86_64 > .rpm > > numpy-1.0.4-1.el5.x86_64.rpm> ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ From peridot.faceted at gmail.com Fri Apr 25 16:41:17 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 25 Apr 2008 22:41:17 +0200 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: <9457e7c80804230222x5a9bc315h3862f388fd03f02d@mail.gmail.com> Message-ID: On 25/04/2008, Nils Wagner wrote: > I run your script several times. Sometimes I get > >>> sprandn(5,5,3).todense() > matrix([[ 0. , 0. , 0. , 0. > , 0. ], > [ 0. , 0.36548958, 0. , 0. > , 0. ], > [ 1.51125878, 0. , 0. , 0. > , -0.20285678], > [ 0. , 0. , 0. , 0. > , 0. ], > [ 0. , 0. , 0. , 0. > , 0. ]]) > >>> sprandn(5,5,3).todense() > Traceback (most recent call last): > File "", line 1, in ? > File "sprand.py", line 23, in sprandn > return > scipy.sparse.coo_matrix((np.random.randn(nnz),ij),(m,n)) > File > "/usr/lib/python2.4/site-packages/scipy/sparse/coo.py", > line 180, in __init__ > self._check() > File > "/usr/lib/python2.4/site-packages/scipy/sparse/coo.py", > line 213, in _check > raise ValueError, "row index exceedes matrix > dimensions" > ValueError: row index exceedes matrix dimensions Strange. Works for me, even in a while loop, and with various values of m,n, and nnz. Did numpy.random.random_integer change to including the endpoint or something? A quick assert() ought to turn up whether the random distinct integers really are all less than m*n. Alternatively, perhaps there is some weirdness to do with the "//" operator? Serves me right for not writing proper unit tests. Anne From wnbell at gmail.com Fri Apr 25 16:41:52 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 25 Apr 2008 15:41:52 -0500 Subject: [SciPy-user] Matrix, Sparse problem with svn 4167 In-Reply-To: References: Message-ID: On Fri, Apr 25, 2008 at 3:23 PM, Dinesh B Vadhia wrote: > > > I have a working program with b=Ax, where A is a large sparse matrix. > However, I need the int8 support in the sparse library to utilize larger > matrices. I managed to get hold of a scipy svn 4167 build and b=Ax now > returns garbage results. Nothing was changed in the program except > replacing getrow with .todense() and I checked these to make sure the right > rows were being picked up. > > I wish could point out where exactly the problem lies but there was no > Traceback just the wrong results but it can safely be assumed it is with the > numpy/scipy matrix support. > > Any ideas? You need to provide an example that demonstrates the error. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robert.kern at gmail.com Fri Apr 25 16:48:56 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Apr 2008 15:48:56 -0500 Subject: [SciPy-user] Random sparse matrices In-Reply-To: References: <9457e7c80804230222x5a9bc315h3862f388fd03f02d@mail.gmail.com> Message-ID: <3d375d730804251348p95a4cc8h57988305dd28f50@mail.gmail.com> On Fri, Apr 25, 2008 at 3:41 PM, Anne Archibald wrote: > Strange. Works for me, even in a while loop, and with various values > of m,n, and nnz. Did numpy.random.random_integer change to including > the endpoint or something? 
random_integers() always did include the endpoint. randint() does not. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dineshbvadhia at hotmail.com Fri Apr 25 17:01:16 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Fri, 25 Apr 2008 14:01:16 -0700 Subject: [SciPy-user] Large matrix support Message-ID: I'm (sadly or otherwise) developing using Python w/ Numpy/Scipy on a Windows XP machine. The Python code will be moved onto a Linux server eventually. My matrix sizes can be large ie. > minimum 20,000 rows x 1m columns. When just creating these data sets using Python/Numpy/Scipy, I get pythonw.exe errors and the program aborts. I guess the Microsoft 32K limit has been reached somewhere. I haven't got any Python program to run with these data sets because of 'not enough memory' errors. I'm waiting for the int8 support in the sparse library to hopefully resolve that issue. Assuming, that the int8 support were available, will Python in a Windows environment allow the creation of these large matrices let alone run Python programs against them? Btw, I have sufficient memory on my computer. Btw btw, I'm not setup to create Numpy/Scipy svn builds but if some kind soul(s) were able to provide access to the latest pre-compiled Windows builds I'll happily be a guinea pig to test Numpy/Scipy against these large matrix sizes. Cheers Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.mcintyre at gmail.com Fri Apr 25 19:16:04 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Fri, 25 Apr 2008 19:16:04 -0400 Subject: [SciPy-user] Large matrix support In-Reply-To: References: Message-ID: <1d36917a0804251616q4babd57crf8acc2b47c6b9072@mail.gmail.com> On Fri, Apr 25, 2008 at 5:01 PM, Dinesh B Vadhia wrote: > My matrix sizes can be large ie. > minimum 20,000 rows x 1m columns. > > When just creating these data sets using Python/Numpy/Scipy, I get > pythonw.exe errors and the program aborts. I guess the Microsoft 32K limit > has been reached somewhere. > > I haven't got any Python program to run with these data sets because of 'not > enough memory' errors. I'm waiting for the int8 support in the sparse > library to hopefully resolve that issue. Dinesh, I apologize if this is too basic a question to ask, but are the matrices sparse? Could you post some sample code to show how you're trying to construct these matrices? Thanks, Alan From eads at soe.ucsc.edu Fri Apr 25 20:19:04 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 25 Apr 2008 17:19:04 -0700 Subject: [SciPy-user] problem building svn scipy Message-ID: <481274F8.2060701@soe.ucsc.edu> Hi there, The hierarchy_wrap.c file is code I wrote and Chris Burns checked in. I did not try to compile it with an earlier version of gcc. Sorry about that. $ gcc --version gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) $ I will log-in to a machine with gcc-3.4.1 and try recompiling. I hope to have a fix soon. Robert: interesting catch. I could try renaming CS, _X, and _dm. Let me try reproducing this compiler error on a machine with the gcc version John is using. 
Damian On Fri, Apr 25, 2008 at 9:28 AM, John Hunter wrote: > With numpy r5083 and scipy r4176 I am getting the following build error > > building 'scipy.cluster._hierarchy_wrap' extension > compiling C sources > C compiler: /opt/app/g++lib6/gcc-3.4/bin/gcc -fno-strict-aliasing > -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC > > compile options: > '-I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include > -I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include > -I/opt/app/g++lib6/python-2.4/include/python2.4 -c' > gcc: scipy/cluster/src/hierarchy_wrap.c > scipy/cluster/src/hierarchy_wrap.c: In function `calculate_cluster_sizes_wrap': > scipy/cluster/src/hierarchy_wrap.c:117: error: parse error before > numeric constant > scipy/cluster/src/hierarchy_wrap.c:120: error: invalid lvalue in unary `&' > scipy/cluster/src/hierarchy_wrap.c:124: error: invalid type argument of `->' It looks like something is #defining CS to be a number. Search through the headers for this. > scipy/cluster/src/hierarchy_wrap.c: In function > `cluster_maxclust_monocrit_wrap': > scipy/cluster/src/hierarchy_wrap.c:251: warning: unused variable `cutoff' > scipy/cluster/src/hierarchy_wrap.c: In function `pdist_euclidean_wrap': > scipy/cluster/src/hierarchy_wrap.c:387: error: parse error before > numeric constant Possibly the same thing with _X or _dm. Are there any more errors following this? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From eads at soe.ucsc.edu Fri Apr 25 21:20:17 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 25 Apr 2008 18:20:17 -0700 Subject: [SciPy-user] problem building svn scipy In-Reply-To: <481274F8.2060701@soe.ucsc.edu> References: <481274F8.2060701@soe.ucsc.edu> Message-ID: <48128351.9040105@soe.ucsc.edu> Hi, I learned there is a compatibility package called compat-gcc-34, which I installed with yum. This is the first time I've tried compiling scipy with gcc-34. $ yum install compat-gcc-34 gcc-gcc-34-c++ Then, I tried doing a clean, changing CC, reconfigure, and build. $ python setup.py clean # in the scipy root $ rm -rf build # just to be sure $ export CC=gcc34 $ python setup.py config $ python setup.py build ... C compiler: gcc34 -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC compile options: '-c' gcc34: optimize/Zeros/bisect.c cc1: error: unrecognized command line option "-fstack-protector" cc1: error: invalid parameter `ssp-buffer-size' optimize/Zeros/bisect.c:1: error: bad value (generic) for -mtune= switch cc1: error: unrecognized command line option "-fstack-protector" cc1: error: invalid parameter `ssp-buffer-size' optimize/Zeros/bisect.c:1: error: bad value (generic) for -mtune= switch error: Command "gcc34 -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -c optimize/Zeros/bisect.c -o build/temp.linux-i686-2.5/optimize/Zeros/bisect.o" failed with exit status 1 removed ./__svn_version__.py $ For some reason, the options -fstack-protector, -mtune=generic are being inserted in the gcc command. 
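A quick sanity check on where those flags come from is to ask distutils what was recorded when Python itself was built (stdlib only, nothing scipy-specific; if the same options show up here, they are inherited from the Python build):

from distutils import sysconfig
print sysconfig.get_config_var('CC')       # compiler recorded when Python was built
print sysconfig.get_config_var('CFLAGS')   # flags distutils reuses for extension modules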
I have a hunch this error may be caused by the distutils that comes bundled with Fedora's Python 2.5, which was probably compiled with gcc 4.1.2.

Ideas?

Thanks.

Damian

Damian Eads wrote:
> Hi there,
>
> The hierarchy_wrap.c file is code I wrote and Chris Burns checked in. I
> did not try to compile it with an earlier version of gcc. Sorry about that.
>
> $ gcc --version
> gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33)
> $
>
> I will log-in to a machine with gcc-3.4.1 and try recompiling. I hope to
> have a fix soon.
>
> Robert: interesting catch. I could try renaming CS, _X, and _dm. Let me
> try reproducing this compiler error on a machine with the gcc version
> John is using.
>
> Damian
>
>
> On Fri, Apr 25, 2008 at 9:28 AM, John Hunter wrote:
> > With numpy r5083 and scipy r4176 I am getting the following build error
> >
> > building 'scipy.cluster._hierarchy_wrap' extension
> > compiling C sources
> > C compiler: /opt/app/g++lib6/gcc-3.4/bin/gcc -fno-strict-aliasing
> > -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC
> >
> > compile options:
> > '-I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include
> > -I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include
> > -I/opt/app/g++lib6/python-2.4/include/python2.4 -c'
> > gcc: scipy/cluster/src/hierarchy_wrap.c
> > scipy/cluster/src/hierarchy_wrap.c: In function
> `calculate_cluster_sizes_wrap':
> > scipy/cluster/src/hierarchy_wrap.c:117: error: parse error before
> > numeric constant
> > scipy/cluster/src/hierarchy_wrap.c:120: error: invalid lvalue in
> unary `&'
> > scipy/cluster/src/hierarchy_wrap.c:124: error: invalid type argument
> of `->'
>
> It looks like something is #defining CS to be a number. Search through
> the headers for this.
>
> > scipy/cluster/src/hierarchy_wrap.c: In function
> > `cluster_maxclust_monocrit_wrap':
> > scipy/cluster/src/hierarchy_wrap.c:251: warning: unused variable
> `cutoff'
> > scipy/cluster/src/hierarchy_wrap.c: In function `pdist_euclidean_wrap':
> > scipy/cluster/src/hierarchy_wrap.c:387: error: parse error before
> > numeric constant
>
> Possibly the same thing with _X or _dm. Are there any more errors
> following this?

From gael.varoquaux at normalesup.org Fri Apr 25 21:23:25 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 26 Apr 2008 03:23:25 +0200
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: 
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
Message-ID: <20080426012325.GC7685@phare.normalesup.org>

On Sun, Apr 20, 2008 at 09:03:05PM -0400, Anne Archibald wrote:
> Reasonable, though really, is krogh_interpolate(xi, yi, x) much better
> than KroghInterpolator(xi, yi)(x)?

Yes. Some people don't understand functional/object code. We need to keep scipy accessible for them.

> It's also good to emphasize that
> the construction of the interpolating polynomial is a relatively slow
> process compared to its evaluation.
Sure, than provide also KroghInterpolator My 2 cents, Ga?l From gael.varoquaux at normalesup.org Fri Apr 25 21:26:53 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 26 Apr 2008 03:26:53 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <20080419214142.GA12913@basestar> <9457e7c80804202302r23a07026gc056e0ba9fb95fea@mail.gmail.com> <71066A28-D1B5-4206-8D8B-17039D3F4639@ster.kuleuven.be> Message-ID: <20080426012653.GD7685@phare.normalesup.org> On Mon, Apr 21, 2008 at 11:20:45PM -0400, Rob Clewley wrote: > > If we're going to start thinking "big" for supporting more > mathematically natural functionality, I believe we ought to be > thinking far enough out to support the basic objects of PDE > computations too (or at least a compatible class structure and API), > even if it's not fully utilized just yet. Scipy should support > scientific computation based around mathematically-oriented > fundamental objects and functionality > [...] > Sounds like sympy to me. Ga?l From robert.kern at gmail.com Fri Apr 25 21:34:08 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 Apr 2008 20:34:08 -0500 Subject: [SciPy-user] problem building svn scipy In-Reply-To: <48128351.9040105@soe.ucsc.edu> References: <481274F8.2060701@soe.ucsc.edu> <48128351.9040105@soe.ucsc.edu> Message-ID: <3d375d730804251834x7237e7edme2fa5e65a8e7334e@mail.gmail.com> On Fri, Apr 25, 2008 at 8:20 PM, Damian Eads wrote: > Hi, > > I learned there is a compatibility package called compat-gcc-34, which I > installed with yum. This is the first time I've tried compiling scipy > with gcc-34. > > $ yum install compat-gcc-34 gcc-gcc-34-c++ > > Then, I tried doing a clean, changing CC, reconfigure, and build. > > $ python setup.py clean # in the scipy root > $ rm -rf build # just to be sure > $ export CC=gcc34 > $ python setup.py config > $ python setup.py build > > ... > > C compiler: gcc34 -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic > -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC > > compile options: '-c' > gcc34: optimize/Zeros/bisect.c > cc1: error: unrecognized command line option "-fstack-protector" > cc1: error: invalid parameter `ssp-buffer-size' > optimize/Zeros/bisect.c:1: error: bad value (generic) for -mtune= switch > cc1: error: unrecognized command line option "-fstack-protector" > cc1: error: invalid parameter `ssp-buffer-size' > optimize/Zeros/bisect.c:1: error: bad value (generic) for -mtune= switch > error: Command "gcc34 -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic > -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -c > optimize/Zeros/bisect.c -o > build/temp.linux-i686-2.5/optimize/Zeros/bisect.o" failed with exit status 1 > removed ./__svn_version__.py > $ > > For some reason, the options -fstack-protector, -mtune=generic are being > inserted in the gcc command. I have a hunch this is an error may be > caused by the distutils that comes bundled with Fedora's Python 2.5, > which was probably compiled with gcc 4.1.2. Quite probably, yes. I don't recommend trying to build extensions with gcc 3.4 when your Python is gcc 4.1. 
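You can check what your interpreter was built with from the stdlib:

import platform
print platform.python_compiler()   # the compiler version recorded at Python build time

If that reports GCC 4.1.x, use the same compiler family for your extensions.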
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From eads at soe.ucsc.edu Fri Apr 25 21:40:56 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 25 Apr 2008 18:40:56 -0700 Subject: [SciPy-user] problem building svn scipy In-Reply-To: <3d375d730804251834x7237e7edme2fa5e65a8e7334e@mail.gmail.com> References: <481274F8.2060701@soe.ucsc.edu> <48128351.9040105@soe.ucsc.edu> <3d375d730804251834x7237e7edme2fa5e65a8e7334e@mail.gmail.com> Message-ID: <48128828.7090802@soe.ucsc.edu> Robert Kern wrote: > On Fri, Apr 25, 2008 at 8:20 PM, Damian Eads wrote: >> For some reason, the options -fstack-protector, -mtune=generic are being >> inserted in the gcc command. I have a hunch this is an error may be >> caused by the distutils that comes bundled with Fedora's Python 2.5, >> which was probably compiled with gcc 4.1.2. > > Quite probably, yes. I don't recommend trying to build extensions with > gcc 3.4 when your Python is gcc 4.1. Does this mean I should build python with gcc34? Set my environment variables appropriately and then try compiling numpy and scipy from source? Damian From eads at soe.ucsc.edu Fri Apr 25 21:51:13 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 25 Apr 2008 18:51:13 -0700 Subject: [SciPy-user] Scipy 0.7.0dev4167 ... 44 errors? In-Reply-To: References: <9911419a0804231026q33a466f8pf1936dc38ba853b5@mail.gmail.com> Message-ID: <48128A91.3010409@soe.ucsc.edu> I wrote the hierarchy tests last week. The number of errors in scipy/cluster/hierarchy* may sound overwhelming but most of them seem to be caused by the same problem -- numpy.float96 not being defined. Is it sometimes the case this type code is undefined, maybe because numpy was compiled on a machine not supporting 96-bit floats? Damian Joshua Lippai wrote: > From: *Joshua Lippai* > > Date: Wed, Apr 23, 2008 at 12:26 PM > To: SciPy Users List > > > > Hello, > > I was running SVN builds of SciPy retty well until just recently (I > got a few filures and some errors, but apparently this is normal right > now, or so I've been told). But at some recent revision, my number of > errors jumped to a whopping 44, and failures to 5 from scipy.test(). > Here is my output: From eads at soe.ucsc.edu Fri Apr 25 22:30:24 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 25 Apr 2008 19:30:24 -0700 Subject: [SciPy-user] problem building svn scipy In-Reply-To: <48128828.7090802@soe.ucsc.edu> References: <481274F8.2060701@soe.ucsc.edu> <48128351.9040105@soe.ucsc.edu> <3d375d730804251834x7237e7edme2fa5e65a8e7334e@mail.gmail.com> <48128828.7090802@soe.ucsc.edu> Message-ID: <481293C0.90700@soe.ucsc.edu> Damian Eads wrote: > Robert Kern wrote: >> On Fri, Apr 25, 2008 at 8:20 PM, Damian Eads wrote: >>> For some reason, the options -fstack-protector, -mtune=generic are being >>> inserted in the gcc command. I have a hunch this is an error may be >>> caused by the distutils that comes bundled with Fedora's Python 2.5, >>> which was probably compiled with gcc 4.1.2. >> Quite probably, yes. I don't recommend trying to build extensions with >> gcc 3.4 when your Python is gcc 4.1. > > Does this mean I should build python with gcc34? Set my environment > variables appropriately and then try compiling numpy and scipy from source? > > Damian I've just built python with gcc34. I will now try compiling numpy and scipy. Stay tuned. 
Damian

From dineshbvadhia at hotmail.com Fri Apr 25 22:40:11 2008
From: dineshbvadhia at hotmail.com (Dinesh B Vadhia)
Date: Fri, 25 Apr 2008 19:40:11 -0700
Subject: [SciPy-user] Large matrix support
Message-ID: 

Nathan / Alan

Here is how I'm constructing the sparse matrices:

> import numpy
> import scipy
> import pickle
> from scipy import sparse
> # create matrix A using the scipy.sparse coo_matrix method. A is very sparse <10%
> ij = numpy.zeros((nnz, 2), dtype=numpy.int)
> row = ij[:,0]
> column = ij[:,1]
> n = ij.shape[0]
> data = scipy.ones(n, dtype=numpy.int8)
> A = sparse.coo_matrix((data, (row, column)), shape=(I,J)) # changed from dims to shape for new svn
> A = sparse.csr_matrix(A) # convert the COO matrix to CSR
> f = open('A.pkl', 'wb') # pickle the sparse matrix
> pickle.dump(A, f, 2)
> f.close()
> f = open('A.pkl', 'rb') # unpickle the sparse matrix
> A = pickle.load(f)
> f.close()
> I = A.shape[0]
> J = A.shape[1]
> b = A*x

------------------------------

Message: 4
Date: Fri, 25 Apr 2008 19:16:04 -0400
From: "Alan McIntyre" 
Subject: Re: [SciPy-user] Large matrix support
To: "SciPy Users List" 
Message-ID: <1d36917a0804251616q4babd57crf8acc2b47c6b9072 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On Fri, Apr 25, 2008 at 5:01 PM, Dinesh B Vadhia wrote:
> My matrix sizes can be large ie. minimum 20,000 rows x 1m columns.
>
> When just creating these data sets using Python/Numpy/Scipy, I get
> pythonw.exe errors and the program aborts. I guess the Microsoft 32K limit
> has been reached somewhere.
>
> I haven't got any Python program to run with these data sets because of 'not
> enough memory' errors. I'm waiting for the int8 support in the sparse
> library to hopefully resolve that issue.

Dinesh, I apologize if this is too basic a question to ask, but are the matrices sparse? Could you post some sample code to show how you're trying to construct these matrices?

Thanks,
Alan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rob.clewley at gmail.com Fri Apr 25 22:58:01 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Fri, 25 Apr 2008 22:58:01 -0400
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <20080426012653.GD7685@phare.normalesup.org>
References: <20080419214142.GA12913@basestar> <9457e7c80804202302r23a07026gc056e0ba9fb95fea@mail.gmail.com> <71066A28-D1B5-4206-8D8B-17039D3F4639@ster.kuleuven.be> <20080426012653.GD7685@phare.normalesup.org>
Message-ID: 

On Fri, Apr 25, 2008 at 9:26 PM, Gael Varoquaux wrote:
> On Mon, Apr 21, 2008 at 11:20:45PM -0400, Rob Clewley wrote:
> >
> > If we're going to start thinking "big" for supporting more
> > mathematically natural functionality, I believe we ought to be
> > thinking far enough out to support the basic objects of PDE
> > computations too (or at least a compatible class structure and API),
> > even if it's not fully utilized just yet. Scipy should support
> > scientific computation based around mathematically-oriented
> > fundamental objects and functionality
> > [...]
>
> Sounds like sympy to me.

You sound dismissive, but symbolic expressions are just one part of it, and not at all what this thread has been about. There are plenty of other types of mathematical objects important in scientific computation. This thread has been about geometric objects such as curves, and in particular trajectories as numerical solutions to differential equations.
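To make that concrete, here is a toy sketch of the kind of object I mean (all names are mine, not a proposed scipy API):

import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

class Trajectory(object):
    """A numerical ODE solution wrapped up as a callable curve."""
    def __init__(self, rhs, y0, t):
        self.t = t
        self.y = odeint(rhs, y0, t)            # solve once, up front
        self._interp = interp1d(t, self.y.T)   # linear interpolation between steps
    def __call__(self, tnew):
        return self._interp(tnew)

traj = Trajectory(lambda y, t: -y, [1.0], np.linspace(0, 5, 101))
print traj(2.5)   # evaluate the curve between the stored time steps

The point is that the solution behaves like a function (a geometric object) rather than like a pair of arrays.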
From cburns at berkeley.edu Fri Apr 25 23:43:56 2008 From: cburns at berkeley.edu (Christopher Burns) Date: Fri, 25 Apr 2008 20:43:56 -0700 Subject: [SciPy-user] Scipy 0.7.0dev4167 ... 44 errors? In-Reply-To: References: <9911419a0804231026q33a466f8pf1936dc38ba853b5@mail.gmail.com> Message-ID: <764e38540804252043s52e0671ek4d576ac7ebad040c@mail.gmail.com> This should be fixed thanks to David. I closed the ticket. Chris On Wed, Apr 23, 2008 at 10:54 AM, Nils Wagner wrote: > On Wed, 23 Apr 2008 10:26:26 -0700 > "Joshua Lippai" wrote: > > Hello, > > > > I was running SVN builds of SciPy retty well until just > >recently (I > > got a few filures and some errors, but apparently this > >is normal right > > now, or so I've been told). But at some recent revision, > >my number of > > errors jumped to a whopping 44, and failures to 5 from > >scipy.test(). > > Here is my output: > > Ran 2116 tests in 23.236s > > > >FAILED (failures=5, errors=44) > > > > Is this some kind of issue with a recent revision to the > >scipy tests, > > or is it all me? > > > > Josh > > More or less the same output here. > > ---------------------------------------------------------------------- > Ran 2117 tests in 27.520s > > FAILED (failures=6, errors=45) > >>> scipy.__version__ > '0.7.0.dev4167' > > Anyway, I have filed tickets for these errors/failures. > > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eads at soe.ucsc.edu Sat Apr 26 02:17:38 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 25 Apr 2008 23:17:38 -0700 Subject: [SciPy-user] problem building svn scipy In-Reply-To: <481274F8.2060701@soe.ucsc.edu> References: <481274F8.2060701@soe.ucsc.edu> Message-ID: <4812C902.5060601@soe.ucsc.edu> Hi, I have compiled python 2.5 with gcc 3.4, as verified below. [eads scipy]$ gcc --version gcc34 (GCC) 3.4.6 20060404 (Red Hat 3.4.6-8) Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. [eads scipy]$ python Python 2.5.2 (r252:60911, Apr 25 2008, 19:21:27) [GCC 3.4.6 20060404 (Red Hat 3.4.6-8)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> ^D [eads scipy]$ I checked out numpy and scipy from scratch and they build with gcc 3.4.6 without error. I tried searching through the numpy and scipy headers for CS, _X, and _dm. I could not find any references to them. I then tried building numpy and scipy with gcc34-python 2.4, and they also built without error. Damian Damian Eads wrote: > Hi there, > > The hierarchy_wrap.c file is code I wrote and Chris Burns checked in. I > did not try to compile it with an earlier version of gcc. Sorry about that. > > $ gcc --version > gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33) > $ > > I will log-in to a machine with gcc-3.4.1 and try recompiling. I hope to > have a fix soon. > > Robert: interesting catch. I could try renaming CS, _X, and _dm. Let me > try reproducing this compiler error on a machine with the gcc version > John is using. 
> > Damian > > > On Fri, Apr 25, 2008 at 9:28 AM, John Hunter wrote: > > With numpy r5083 and scipy r4176 I am getting the following build error > > > > building 'scipy.cluster._hierarchy_wrap' extension > > compiling C sources > > C compiler: /opt/app/g++lib6/gcc-3.4/bin/gcc -fno-strict-aliasing > > -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC > > > > compile options: > > '-I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include > > -I/home/titan/johnh/dev/lib/python2.4/site-packages/numpy/core/include > > -I/opt/app/g++lib6/python-2.4/include/python2.4 -c' > > gcc: scipy/cluster/src/hierarchy_wrap.c > > scipy/cluster/src/hierarchy_wrap.c: In function > `calculate_cluster_sizes_wrap': > > scipy/cluster/src/hierarchy_wrap.c:117: error: parse error before > > numeric constant > > scipy/cluster/src/hierarchy_wrap.c:120: error: invalid lvalue in > unary `&' > > scipy/cluster/src/hierarchy_wrap.c:124: error: invalid type argument > of `->' > > It looks like something is #defining CS to be a number. Search through > the headers for this. > > > scipy/cluster/src/hierarchy_wrap.c: In function > > `cluster_maxclust_monocrit_wrap': > > scipy/cluster/src/hierarchy_wrap.c:251: warning: unused variable > `cutoff' > > scipy/cluster/src/hierarchy_wrap.c: In function `pdist_euclidean_wrap': > > scipy/cluster/src/hierarchy_wrap.c:387: error: parse error before > > numeric constant > > Possibly the same thing with _X or _dm. Are there any more errors > following this? From gael.varoquaux at normalesup.org Sat Apr 26 05:06:41 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 26 Apr 2008 11:06:41 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <20080419214142.GA12913@basestar> <9457e7c80804202302r23a07026gc056e0ba9fb95fea@mail.gmail.com> <71066A28-D1B5-4206-8D8B-17039D3F4639@ster.kuleuven.be> <20080426012653.GD7685@phare.normalesup.org> Message-ID: <20080426090641.GA24606@phare.normalesup.org> On Fri, Apr 25, 2008 at 10:58:01PM -0400, Rob Clewley wrote: > > Sounds like sympy to me. > You sound dismissive, but symbolic expressions are just one part of > it, and not at all what this thread has been about. There are plenty > of other types of mathematical objects important in scientific > computation. This thread has been about geometric objects such as > curves, and in particular trajectories as numerical solutions to > differential equations. Yes, you are right. I just have the fealing that a good portion of the work has been done in sympy. Maybe I am wrong. Cheers, Ga?l From cohen at slac.stanford.edu Sat Apr 26 16:17:53 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sat, 26 Apr 2008 22:17:53 +0200 Subject: [SciPy-user] gammaincinv test hangs? Message-ID: <48138DF1.2000509@slac.stanford.edu> hello, I am testing scipy on my FC8 box with gcc 4.1 and current revision seems to hang at the gammaincinv test. Can anyone tell me whether it is normal that this test takes a lot of time, or else what issue I might have? thanks, Johann From millman at berkeley.edu Sat Apr 26 17:27:21 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sat, 26 Apr 2008 16:27:21 -0500 Subject: [SciPy-user] test failures on os x In-Reply-To: <114880320804210829r7c24dd28n9ba224f928778b4d@mail.gmail.com> References: <114880320804210829r7c24dd28n9ba224f928778b4d@mail.gmail.com> Message-ID: On Mon, Apr 21, 2008 at 10:29 AM, Warren Weckesser wrote: > Scipy devs: Do you do stable point releases, e.g. 0.6.1? 
Unfortunately we are a little busy working to get NumPy 1.1.0 out ASAP. As soon as that happens (hopefully within the next week), I will take a look at where we are in terms of whether it is better to release a SciPy 0.6.1 or 0.7.0 in the next month.

Thanks,

-- Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From wnbell at gmail.com Sat Apr 26 17:52:17 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Sat, 26 Apr 2008 16:52:17 -0500
Subject: [SciPy-user] gammaincinv test hangs?
In-Reply-To: <48138DF1.2000509@slac.stanford.edu>
References: <48138DF1.2000509@slac.stanford.edu>
Message-ID: 

On Sat, Apr 26, 2008 at 3:17 PM, Johann Cohen-Tanugi wrote:
> hello,
> I am testing scipy on my FC8 box with gcc 4.1 and current revision seems
> to hang at the gammaincinv test. Can anyone tell me whether it is normal
> that this test takes a lot of time, or else what issue I might have?
> thanks,

This is a recent problem in SciPy svn. It has already been reported.

-- Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From contact at pythonxy.com Sun Apr 27 16:56:09 2008
From: contact at pythonxy.com (Python(x,y))
Date: Sun, 27 Apr 2008 22:56:09 +0200
Subject: [SciPy-user] [ Python(x,y) ] New release : 1.1.3
Message-ID: <4814E869.4030901@pythonxy.com>

Hi all,

Python(x,y) 1.1.3 is now available on http://www.pythonxy.com.

Changes history
04-28-2008 - Version 1.1.3:

* Added:
o "Welcome to Python(x,y)": new GUI to launch useful scripts or consoles, and to find help quickly on some of the installed packages (see screenshot on http://www.pythonxy.com/screenshots.php)
o Eclipse: workspace will now refresh automatically
* Updated:
o SymPy 0.5.14 (see changes on http://code.google.com/p/sympy/)
o Enthought Tool Suite 2.7.1
o Interactive consoles with matplotlib, Qt4 or wxPython threading support: Console 2 configuration has again been updated (appearance, mostly)
* Corrected:
o Eclipse workspace existing settings will no longer be erased during installation

-- P. Raybaut
Python(x,y)
http://www.pythonxy.com

From erik.tollerud at gmail.com Sun Apr 27 19:29:21 2008
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Sun, 27 Apr 2008 16:29:21 -0700
Subject: [SciPy-user] scipy.stats rv objects from data
Message-ID: 

I'm finding the scipy.stats documentation somewhat difficult to follow, so maybe the answer to this question is in there... I can't really find it, though.

What I have is a sequence of numbers X_i. Two things I'd like to be able to do with this:

1. Create a discrete probability distribution (class rv_discrete) from this data so as to use the utility functions that take rv_discrete objects. The rv_discrete documentation suggests this should be easy. I did the following

>>>ddist=rv_discrete(values=(x,[1/len(x) for i in x]),name='test')
>>>ddist.pmf(50)
array(0.0)

Any value I try to get from the pmf seems to be 0. Do I have to explicitly subclass rv_discrete with my data and a _pmf method or something? This seems like a very natural thing to want to do, and hence it seems odd not to have some helper like make_dist(x,name='whatever'). I can take a shot at creating such a function, but I don't want to do so if one exists.

2. Create a continuous probability distribution from something like spline fitting or simple linear interpolation of the data in X_i. Does this require explicit subclassing, or is there a straightforward way to do it that's builtin?
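Roughly what I have in mind for 2 is building an empirical inverse cdf and sampling through it; a sketch only, with my own variable names (x is the data sequence):

import numpy
from scipy.interpolate import interp1d

xs = numpy.sort(x)
ps = numpy.arange(1, len(xs) + 1, dtype=float) / len(xs)   # empirical cdf values
inv_cdf = interp1d(ps, xs)                   # linear interpolation of the inverse cdf
u = numpy.random.uniform(ps[0], 1.0, 1000)   # uniform draws inside the cdf's range
samples = inv_cdf(u)                         # values distributed roughly like x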
I'm not sure if this step is strictly necessary - what I really want to do is be able to draw from the discrete distribution in 1 just by sampling the cdf... maybe this is how it's supposed to work with the discrete distribution, but when I tried to sample it using ddist.rvs, I would always get the input values I specified rather random values sampled from the cdf. I'm on scipy 0.6.0 and numpy 1.0.4 From dineshbvadhia at hotmail.com Sun Apr 27 19:41:13 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sun, 27 Apr 2008 16:41:13 -0700 Subject: [SciPy-user] Sparse csr_matrix and column sum Message-ID: If A is a sparse csr_matrix and you want to calculate the sum of each column then the 'normal' method is: import numpy import scipy from scipy import sparse colSum = scipy.asmatrix(scipy.zeros((1,J), dtype=numpy.float)) colSum = A.mean(0) This isn't working. Do we have to do something else (eg. a todense()) for a sparse matrix? If so, how? Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Sun Apr 27 20:08:23 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 27 Apr 2008 19:08:23 -0500 Subject: [SciPy-user] Sparse csr_matrix and column sum In-Reply-To: References: Message-ID: On Sun, Apr 27, 2008 at 6:41 PM, Dinesh B Vadhia wrote: > > > If A is a sparse csr_matrix and you want to calculate the sum of each column > then the 'normal' method is: > > import numpy > import scipy > from scipy import sparse > > colSum = scipy.asmatrix(scipy.zeros((1,J), dtype=numpy.float)) > colSum = A.mean(0) > > This isn't working. Do we have to do something else (eg. a todense()) for a > sparse matrix? If so, how? What do you mean by "isn't working"? In [1]: from scipy import * In [2]: from scipy.sparse import * In [3]: A = csr_matrix(rand(3,3)) In [4]: A.todense() Out[4]: matrix([[ 0.95297535, 0.81029421, 0.79146232], [ 0.88477059, 0.9025494 , 0.80259054], [ 0.06691343, 0.76691617, 0.68518027]]) In [5]: A.mean(0) Out[5]: matrix([[ 0.63488646, 0.82658659, 0.75974438]]) In [6]: A.mean(1) Out[6]: matrix([[ 0.85157729], [ 0.86330351], [ 0.50633662]]) In [7]: A.todense().mean(0) Out[7]: matrix([[ 0.63488646, 0.82658659, 0.75974438]]) In [8]: A.todense().mean(1) Out[8]: matrix([[ 0.85157729], [ 0.86330351], [ 0.50633662]]) Dinesh, as a courtesy, would you provide specific details when reporting your problems with SciPy? I'd rather not have to speculate on the precise nature of each issue raised. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From dineshbvadhia at hotmail.com Sun Apr 27 22:32:29 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sun, 27 Apr 2008 19:32:29 -0700 Subject: [SciPy-user] Sparse csr_matrix and column sum Message-ID: Thanks Nathan. Sorry, I wasn't being imprecise as A.sum(0) didn't work and still doesn't - I've just tried again. However, A.todense().sum(0) does work - but takes a performance hit. My import statements are: > import numpy > import scipy > from scipy import sparse Which means that I have to qualify each function/operation with a numpy. or scipy. - is the problem that I haven't qualified the statement: > colSum = A.sum(0) correctly? Anyway, A.todense().sum() works for small sized matrices but unfortunately, because of the large matrices being used the A.todense().sum(0) results in a memory error. For I = 20000 and J = 66000, here is the Traceback: Traceback (most recent call last): File "C:\... 
sparseTest.py", line 42 colSum = A.todense().sum(0) # sum of each column of A File "C:\Python25\Lib\site-packages\scipy\sparse\base.py", line 416, in todense return asmatrix(self.toarray()) File "C:\Python25\Lib\site-packages\scipy\sparse\compressed.py", line 627, in toarray M = zeros(self.shape, dtype=self.dtype) MemoryError Is there a way around this? Cheers Dinesh -------------------------------------------------------------------------------- From: Nathan Bell gmail.com> Subject: Re: Sparse csr_matrix and column sum Newsgroups: gmane.comp.python.scientific.user Date: 2008-04-28 00:08:23 GMT (1 hour and 38 minutes ago) On Sun, Apr 27, 2008 at 6:41 PM, Dinesh B Vadhia hotmail.com> wrote: > > > If A is a sparse csr_matrix and you want to calculate the sum of each column > then the 'normal' method is: > > import numpy > import scipy > from scipy import sparse > > colSum = scipy.asmatrix(scipy.zeros((1,J), dtype=numpy.float)) > colSum = A.mean(0) > > This isn't working. Do we have to do something else (eg. a todense()) for a > sparse matrix? If so, how? What do you mean by "isn't working"? In [1]: from scipy import * In [2]: from scipy.sparse import * In [3]: A = csr_matrix(rand(3,3)) In [4]: A.todense() Out[4]: matrix([[ 0.95297535, 0.81029421, 0.79146232], [ 0.88477059, 0.9025494 , 0.80259054], [ 0.06691343, 0.76691617, 0.68518027]]) In [5]: A.mean(0) Out[5]: matrix([[ 0.63488646, 0.82658659, 0.75974438]]) In [6]: A.mean(1) Out[6]: matrix([[ 0.85157729], [ 0.86330351], [ 0.50633662]]) In [7]: A.todense().mean(0) Out[7]: matrix([[ 0.63488646, 0.82658659, 0.75974438]]) In [8]: A.todense().mean(1) Out[8]: matrix([[ 0.85157729], [ 0.86330351], [ 0.50633662]]) Dinesh, as a courtesy, would you provide specific details when reporting your problems with SciPy? I'd rather not have to speculate on the precise nature of each issue raised. -- Nathan Bell wnbell gmail.com http://graphics.cs.uiuc.edu/~wnbell/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Sun Apr 27 22:48:41 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 27 Apr 2008 21:48:41 -0500 Subject: [SciPy-user] Sparse csr_matrix and column sum In-Reply-To: References: Message-ID: On Sun, Apr 27, 2008 at 9:32 PM, Dinesh B Vadhia wrote: > > Sorry, I wasn't being imprecise as A.sum(0) didn't work and still doesn't - > I've just tried again. However, A.todense().sum(0) does work - but takes a > performance hit. My import statements are: You still haven't informed us what exactly "didn't work". Can you provide the error message you get when using A.sum(0) where A is a csr_matrix? Are you using SciPy from SVN or a previous version? -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From alan.mcintyre at gmail.com Sun Apr 27 23:00:34 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Sun, 27 Apr 2008 23:00:34 -0400 Subject: [SciPy-user] Large matrix support In-Reply-To: References: Message-ID: <1d36917a0804272000i17affce0s727bf653c5bc2d67@mail.gmail.com> Dinesh, I don't have a Windows machine to try it on, but on Linux I didn't get any issues until I actually ran out of memory and virtual memory (750MB total). Constructing an array in the same way as your code, 30 million randomly placed entries was enough to push me over 750MB, even though the array itself should only take up about 150M (that's an estimate based on an array with fewer filled slots). 
So even though you might have memory for the array itself, the other stuff that gets instantiated during the construction might be pushing you over the top. But that's just a guess, since I don't know if there's something different going on in Windows (I can't imagine what that might be, though). Can you watch the working set for your process as it's doing the construction? It could also help to know where the actual errors occur for you, if it's possible to tell. Cheers, Alan From dineshbvadhia at hotmail.com Mon Apr 28 00:57:44 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sun, 27 Apr 2008 21:57:44 -0700 Subject: [SciPy-user] [Sparse matrix library] csr_matrix and column sum Message-ID: I'm using the svn. There is no Traceback error but the results are incorrect based on previous results. Here is ouput from print statements that might help: > import numpy > import scipy > from scipy import sparse > colSum = scipy.asmatrix(scipy.zeros((1,J), dtype=numpy.float)) > colSum = A.sum(0) # A is a csr_matrix > for j in xrange(0, 25, 1): > print >> sys.stderr, colSum[0,j], 0 , -44 , 84 , -116 , -121 , -43 , -44 , -116 , -115 , -79 , 70 , -86 , 39 , -17 , -21 , -112 , 29 , -126 , -19 , 33 , 59 , -6 , 24 , 18 , 57 > # This is wrong because the matrix A contains positive integer 1's only. > colSum = A.todense().sum(0) > for j in xrange(0, 25, 1): > print >> sys.stderr, colSum[0,j], 768 , 724 , 1108 , 652 , 1927 , 2005 , 724 , 908 , 1421 , 1457 , 2118 , 1450 , 1575 , 3055 , 2283 , 656 , 1053 , 898 , 1517 , 1569 , 1339 , 762 , 3096 , 530 , 1081 ># This is correct and leads to the correct results but there is a large performance hit because of the .todense() Hope this helps to identify the problem. Dinesh -------------------------------------------------------------------------------- From: Nathan Bell gmail.com> Subject: Re: Sparse csr_matrix and column sum Newsgroups: gmane.comp.python.scientific.user Date: 2008-04-28 02:48:41 GMT (1 hour and 43 minutes ago) On Sun, Apr 27, 2008 at 9:32 PM, Dinesh B Vadhia hotmail.com> wrote: > > Sorry, I wasn't being imprecise as A.sum(0) didn't work and still doesn't - > I've just tried again. However, A.todense().sum(0) does work - but takes a > performance hit. My import statements are: You still haven't informed us what exactly "didn't work". Can you provide the error message you get when using A.sum(0) where A is a csr_matrix? Are you using SciPy from SVN or a previous version? -- Nathan Bell wnbell gmail.com http://graphics.cs.uiuc.edu/~wnbell/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Mon Apr 28 00:56:22 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 28 Apr 2008 13:56:22 +0900 Subject: [SciPy-user] [Sparse matrix library] csr_matrix and column sum In-Reply-To: References: Message-ID: <481558F6.4050505@ar.media.kyoto-u.ac.jp> Dinesh B Vadhia wrote: > I'm using the svn. There is no Traceback error but the results are > incorrect based on previous results. Here is ouput from print > statements that might help: > > > import numpy > > import scipy > > from scipy import sparse > > > colSum = scipy.asmatrix(scipy.zeros((1,J), dtype=numpy.float)) > > colSum = A.sum(0) # A is a csr_matrix I don't know if this is intended, but the first line has no effect (scipy.asmatrix: the call is wasteful because colSum will be assigned to A.sum(0) just after). We need the A matrix. 
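In other words, the two statements are independent; the first allocation is simply thrown away when the name is rebound (reusing the variables from the quoted snippet):

colSum = scipy.asmatrix(scipy.zeros((1, J)))   # a dense 1xJ matrix is allocated here...
colSum = A.sum(0)                              # ...and immediately discarded by this rebinding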
David From wnbell at gmail.com Mon Apr 28 01:29:29 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 28 Apr 2008 00:29:29 -0500 Subject: [SciPy-user] [Sparse matrix library] csr_matrix and column sum In-Reply-To: References: Message-ID: On Sun, Apr 27, 2008 at 11:57 PM, Dinesh B Vadhia wrote: > > 0 , -44 , 84 , -116 , -121 , -43 , -44 , -116 , -115 , -79 , 70 , -86 , 39 , > -17 , -21 , -112 , 29 , -126 , -19 , 33 , 59 , -6 , 24 , 18 , 57 > > > 768 , 724 , 1108 , 652 , 1927 , 2005 , 724 , 908 , 1421 , 1457 , 2118 , 1450 > , 1575 , 3055 , 2283 , 656 , 1053 , 898 , 1517 , 1569 , 1339 , 762 , 3096 , > 530 , 1081 > This is due to the fact that when integer arithmetic overflows (e.g. A + B is too large) the result "wraps around". The solution is to use a data type with a greater range of values (more bits). Replace your int8 data array with an int16 array and you will get the expected results (albeit using one more byte per nonzero) provided that the sums do not exceed 2^15 - 1. To be safe, you might use int32 and not worry about ranges as much. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From stefan at sun.ac.za Mon Apr 28 03:29:20 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 28 Apr 2008 09:29:20 +0200 Subject: [SciPy-user] scipy.stats rv objects from data In-Reply-To: References: Message-ID: <9457e7c80804280029s2e2d3cbcj1569ccb1b38c2efd@mail.gmail.com> Hi Erik 2008/4/28 Erik Tollerud : > I'm finding the scipy.stats documentation somewhat difficult to > follow, so maybe the answer to this question is in there... I can't > really find it, though. > > What I have is a sequence of numbers X_i . Two things I'd like to be > able to do with this: > 1. Create a discrete probability distribution (class rv_discrete) from > this data so as to use the utility functions that take rv_discrete > objects. > The rv_discrete documentation suggests should be easy. I did the following > >>>ddist=rv_discrete(values=(x,[1/len(x) for i in x]),name='test') > >>>ddist.pmf(50) > array(0.0) That should be 1.0/len(x), otherwise all the probabilities are 0. Cheers St?fan From mnandris at btinternet.com Mon Apr 28 05:02:31 2008 From: mnandris at btinternet.com (Michael) Date: Mon, 28 Apr 2008 10:02:31 +0100 Subject: [SciPy-user] scipy.stats rv objects from data In-Reply-To: References: Message-ID: <1209373351.6142.40.camel@mik> rv_discrete: most (if not all) of the scipy.stats functions + numpy.random cannot handle zeros as _inputs_ (don't know whether this is related to getting zero's _out_, but it might be). The zero problem is, I am told, due to the underlying c code, not python. A quick workaround is to substitute any zeros for a small number like 1e-16 On Sun, 2008-04-27 at 16:29 -0700, Erik Tollerud wrote: > I'm finding the scipy.stats documentation somewhat difficult to > follow, so maybe the answer to this question is in there... I can't > really find it, though. > What I have is a sequence of numbers X_i . Two things I'd like to be > able to do with this: > 1. Create a discrete probability distribution (class rv_discrete) from > this data so as to use the utility functions that take rv_discrete > objects. > The rv_discrete documentation suggests should be easy. I did the following > >>>ddist=rv_discrete(values=(x,[1/len(x) for i in x]),name='test') > >>>ddist.pmf(50) > array(0.0) There are at least 2 ways of using rv_discrete e.g. 
2 ways to calculate the next element of a simple Markov Chain with x(n+1)=Norm(0.5 x(n),1) from scipy.stats import rv_discrete from numpy.random import multinomial x = 3 n1 = stats.rv_continuous.rvs( stats.norm, 0.5*x, 1.0 )[0] print n1 n2 = stats.rv_discrete.rvs( stats.rv_discrete( name='sample', values=([0,1,2],[3/10.,5/10.,2/10.])), 0.5*x, 1.0 )[0] print n2 print sample = stats.rv_discrete( name='sample', values=([0,1,2],[3/10.,5/10.,2/10.]) ).rvs( size=10 ) print sample The multinomial distribution from numpy.random is somewhat faster (40 times or so) but has a different idiom: SIZE = 100000 VALUES = [0,1,2,3,4,5,6,7] PROBS = [1/8.,1/8.,1/8.,1/8.,1/8.,1/8.,1/8.,1/8.] The idiom for rv_discrete is rv_discrete( name='sample', values=(VALUES,PROBS) ) The idiom for numpy.multinomial is different; if memory serves, you get frequencies as output instead of the actual values multinomial( SIZE, PROBS ) >>> from numpy.random import multinomial >>> multinomial(100,[ 0.2, 0.4, 0.1, 0.3 ]) array([12, 44, 10, 34]) >>> multinomial( 100, [0.2, 0.0, 0.8, 0.0] ) <-- don't do this ... >>> multinomial( 100, [0.2, 1e-16, 0.8, 1e-16] ) <-- or this >>> multinomial( 100, [0.2-1e-16, 1e-16, 0.8-1e-16, 1e-16] ) <-- ok array([21, 0, 79, 0]) the last one is ok since the probability adds up to 1... painful, but it works > Any value I try to get of the pmf seems to be 0. Do I have to > explicitly subclass rv_discrete with my data and a _pmf method or > something? This seems like a very natural thing to want to do, and > hence it seems odd to not have some helper like > make_dist(x,name='whatever') . I can take a shot at creating such a > function, but I don't want to do so if one exists. > > 2. Create a continuous probability distribution from something like > spline fitting or simple linear interpolation of a the data in X_i. > Does this require explict subclassing, or is there a straightforward > way to do it that's builtin? I'm not sure if this step is strictly > necessary - what I really want to do is be able to draw from the > discrete distribution in 1 just by sampling the cdf... maybe this is > how it's supposed to work with the discrete distribution, but when I > tried to sample it using ddist.rvs, I would always get the input > values I specified rather random values sampled from the cdf. Continuous v's discrete: i found this in ./stats/scstats.py from scipy import stats, r_ from pylab import show, plot import copy # SOURCE: ./stats/scstats.py SPREAD = 10 class cdf( object ): """ Baseclass for the task of determining a sequence of numbers {vi} which is distributed as a random variable X """ def integerDensityFunction( self ): """ Outputs an integer density function: xs (ints) and ys (probabilities) which are the correspondence between the whole numbers on the x axis to the probabilities on the y axis, according to a normal distribution. """ opt = [] for i in r_[-SPREAD:SPREAD:100j]: # 2-tailed test (?) opt.append(( i, stats.norm.cdf(i) )) # ( int, P(int) ) return zip(*opt) # [ (int...), (P...) 
] def display( self ): xs, ys = self.integerDensityFunction() plot( xs, ys ) show() if __name__=='__main__': d = cdf() d.display() Continuous: i can only suggest using rv_continuous stats.rv_continuous.rvs( stats.norm, 0.5*x, 1.0 ).whatever .rvs( shape, loc, scale ) is the random variates .pdf( x, shape, loc, scale ) is the probability density function which, i think, is or should be genuinely continuous > I'm on scipy 0.6.0 and numpy 1.0.4 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From grh at mur.at Mon Apr 28 05:21:53 2008 From: grh at mur.at (Georg Holzmann) Date: Mon, 28 Apr 2008 11:21:53 +0200 Subject: [SciPy-user] Volterra system identification Message-ID: <48159731.8050602@mur.at> Hallo! Are there any libraries for volterra system identification somewhere in the scipy/numpy world ? I did not find something in the web ... Thanks for any hint, LG Georg From mnandris at btinternet.com Mon Apr 28 06:36:18 2008 From: mnandris at btinternet.com (Michael) Date: Mon, 28 Apr 2008 11:36:18 +0100 Subject: [SciPy-user] Volterra system identification In-Reply-To: <48159731.8050602@mur.at> References: <48159731.8050602@mur.at> Message-ID: <1209378978.7809.1.camel@mik> On Mon, 2008-04-28 at 11:21 +0200, Georg Holzmann wrote: > Hallo! > > Are there any libraries for volterra system identification somewhere in > the scipy/numpy world ? > I did not find something in the web ... >From a previous post: import numpy as n import pylab as p import scipy.integrate as integrate """ If you look closely to the second graph, you can see that the trajectory crosses some arrows of the direction field. I had this problem too, before forcing matplotlib to use equal axis. 
""" alpha, delta = 1, .25 beta, gamma = .2, .05 def dr(r, f): return alpha*r - beta*r*f def df(r, f): return gamma*r*f - delta*f def derivs(state, t): """ Map the state variable [rabbits, foxes] to the derivitives [deltar, deltaf] at time t """ #print t, state r, f = state # rabbits and foxes deltar = dr(r, f) # change in rabbits deltaf = df(r, f) # change in foxes return deltar, deltaf # the initial population of rabbits and foxes r0 = 20 f0 = 10 t = n.arange(0.0, 100, 0.1) y0 = [r0, f0] # the initial [rabbits, foxes] state vector y = integrate.odeint(derivs, y0, t) r = y[:,0] # extract the rabbits vector f = y[:,1] # extract the foxes vector p.figure() p.plot(t, r, label='rabbits') p.plot(t, f, label='foxes') p.xlabel('time (years)') p.ylabel('population') p.title('population trajectories') p.grid() p.legend() #p.savefig('lotka_volterra.png', dpi=150) #p.savefig('lotka_volterra.eps') p.figure() p.plot(r, f) p.xlabel('rabbits') p.ylabel('foxes') p.title('phase plane') # make a direction field plot with quiver rmax = 1.1 * r.max() fmax = 1.1 * f.max() R, F = n.meshgrid(n.arange(-1, rmax), n.arange(-1, fmax)) dR = dr(R, F) dF = df(R, F) p.quiver(R, F, dR, dF) R, F = n.meshgrid(n.arange(-1, rmax, .1), n.arange(-1, fmax, .1)) dR = dr(R, F) dF = df(R, F) p.contour(R, F, dR, levels=[0], linewidths=3, colors='black') p.contour(R, F, dF, levels=[0], linewidths=3, colors='black') p.ylabel('foxes') p.title('trajectory, direction field and null clines') #p.savefig('lotka_volterra_pplane.png', dpi=150) #p.savefig('lotka_volterra_pplane.eps') p.show() > Thanks for any hint, > LG > Georg > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From grh at mur.at Mon Apr 28 06:47:58 2008 From: grh at mur.at (Georg Holzmann) Date: Mon, 28 Apr 2008 12:47:58 +0200 Subject: [SciPy-user] Volterra system identification In-Reply-To: <1209378978.7809.1.camel@mik> References: <48159731.8050602@mur.at> <1209378978.7809.1.camel@mik> Message-ID: <4815AB5E.3080300@mur.at> Hallo! Thanks for the answer - but I don't understand it ... ;) I mean volterra series for nonlinear system identification. LG Georg >>From a previous post: > > import numpy as n > import pylab as p > import scipy.integrate as integrate > > """ > If you look closely to the second graph, you can see that the trajectory > crosses some arrows of the direction field. I had this problem too, > before forcing matplotlib to use equal axis. 
> """ > > alpha, delta = 1, .25 > beta, gamma = .2, .05 > > def dr(r, f): return alpha*r - beta*r*f > > def df(r, f): return gamma*r*f - delta*f > > def derivs(state, t): > """ Map the state variable [rabbits, foxes] to the derivitives > [deltar, deltaf] at time t """ > #print t, state > r, f = state # rabbits and foxes > deltar = dr(r, f) # change in rabbits > deltaf = df(r, f) # change in foxes > return deltar, deltaf > > # the initial population of rabbits and foxes > r0 = 20 > f0 = 10 > > t = n.arange(0.0, 100, 0.1) > > y0 = [r0, f0] # the initial [rabbits, foxes] state vector > y = integrate.odeint(derivs, y0, t) > r = y[:,0] # extract the rabbits vector > f = y[:,1] # extract the foxes vector > > p.figure() > p.plot(t, r, label='rabbits') > p.plot(t, f, label='foxes') > p.xlabel('time (years)') > p.ylabel('population') > p.title('population trajectories') > p.grid() > p.legend() > #p.savefig('lotka_volterra.png', dpi=150) > #p.savefig('lotka_volterra.eps') > > p.figure() > p.plot(r, f) > p.xlabel('rabbits') > p.ylabel('foxes') > p.title('phase plane') > > > # make a direction field plot with quiver > rmax = 1.1 * r.max() > fmax = 1.1 * f.max() > R, F = n.meshgrid(n.arange(-1, rmax), n.arange(-1, fmax)) > dR = dr(R, F) > dF = df(R, F) > p.quiver(R, F, dR, dF) > > > R, F = n.meshgrid(n.arange(-1, rmax, .1), n.arange(-1, fmax, .1)) > dR = dr(R, F) > dF = df(R, F) > > p.contour(R, F, dR, levels=[0], linewidths=3, colors='black') > p.contour(R, F, dF, levels=[0], linewidths=3, colors='black') > p.ylabel('foxes') > p.title('trajectory, direction field and null clines') > > #p.savefig('lotka_volterra_pplane.png', dpi=150) > #p.savefig('lotka_volterra_pplane.eps') > > > p.show() > >> Thanks for any hint, >> LG >> Georg >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From andrea.gavana at gmail.com Mon Apr 28 07:41:16 2008 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Mon, 28 Apr 2008 12:41:16 +0100 Subject: [SciPy-user] Linear Interpolation Question Message-ID: Hi All, I have 2 matrices coming from 2 different simulations: the first column of the matrices is a date (time) at which all the other results in the matrix have been reported (simulation step). In these 2 matrices, very often the simulation steps do not coincide, so I just want to interpolate the results in the second matrix using the dates in the first matrix. The problem is, I have close to 13,000 columns in every matrices, and repeating interp1d all over the columns is quite expensive. An example of what I am doing is as follows: # Loop over all the columns for indx in indices: # Set up a linear interpolation with: # x = dates in the second simulation # y = single column in the second matrix simulation function = interp1d(secondaryMatrixDates, secondaryMatrixResults[:, indx], kind='linear') # Interpolate the second matrix results using the first simulation dates interpolationResults = function(mainMatrixDates) # I need the difference between the first simulation and the second newMatrix[:, indx] = mainMatrixResults[:, indx] - interpolationResults This is somehow a costly step, as it's taking up a lot of CPU (increasing at every iteration) and quite a long time (every column has about 350 data). Is there anything I can do to speed up this loop? 
Or can someone suggest a better approach?

Thank you very much for your suggestions.

Andrea.

"Imagination Is The Only Weapon In The War Against Reality."
http://xoomer.alice.it/infinity77/

From andrea.gavana at gmail.com Mon Apr 28 09:32:25 2008
From: andrea.gavana at gmail.com (Andrea Gavana)
Date: Mon, 28 Apr 2008 14:32:25 +0100
Subject: [SciPy-user] Linear Interpolation Question
In-Reply-To:
References:
Message-ID:

Hi All,

On Mon, Apr 28, 2008 at 12:41 PM, Andrea Gavana wrote:
> Hi All,
>
> I have 2 matrices coming from 2 different simulations: [...]
> [my first message quoted in full; snipped]
>
> Thank you very much for your suggestions.

Ok, I have tried to be smart and use interp2d, but interp2d gives me a
strange error message which I can't understand:

D:\MyProjects>Interp2DSample.py
Traceback (most recent call last):
  File "D:\MyProjects\Interp2DSample.py", line 25, in
    function = interp2d(xx, yy, z, kind="linear", copy=False)
  File "C:\Python25\lib\site-packages\scipy\interpolate\interpolate.py",
line 91, in __init__
    self.tck = fitpack.bisplrep(self.x, self.y, self.z, kx=kx, ky=ky, s=0.)
  File "C:\Python25\lib\site-packages\scipy\interpolate\fitpack.py",
line 677, in bisplrep
    tx,ty,nxest,nyest,wrk,lwrk1,lwrk2)
OverflowError: long int too large to convert to int

I am able to get this error message using this simple script:

import datetime
import numpy
from scipy.interpolate import interp2d

date1, date2 = [], []
numColumns = 13000

for year in xrange(2007, 2038):
    for month in xrange(1, 13):
        date1.append(datetime.date(year, month, 1).toordinal())
        date2.append(datetime.date(year, month, 5).toordinal())

timeSteps = len(date2)

x = [date1[0] for i in xrange(numColumns)]
y = date1
z = numpy.random.rand(timeSteps, numColumns)

xx, yy = numpy.meshgrid(x, y)

newX = [date2[0] for i in xrange(numColumns)]
newY = date2

function = interp2d(xx, yy, z, kind="linear", copy=False)
newZ = function(newX, newY)

Does anyone know what I am doing wrong? I am on Windows XP, Python 2.5,
scipy 0.5.2.1, numpy 1.0.3.1.

Thank you very much for your suggestions.

Andrea.

"Imagination Is The Only Weapon In The War Against Reality."
http://xoomer.alice.it/infinity77/

From silva at lma.cnrs-mrs.fr Mon Apr 28 09:58:59 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Mon, 28 Apr 2008 15:58:59 +0200
Subject: [SciPy-user] Matlab file reading and writing
Message-ID: <1209391139.3051.14.camel@Portable-s2m.cnrs-mrs.fr>

Hi here!

Are you aware of the libmatio library? It aims to read and write MAT
files:
http://sourceforge.net/projects/matio

It provides a C library and a limited Fortran interface. Are there any
comparisons with scipy's Python-written mio functions?

From nathaniel at aims.ac.za Mon Apr 28 10:43:41 2008
From: nathaniel at aims.ac.za (Nathaniel Egwu)
Date: Mon, 28 Apr 2008 16:43:41 +0200 (SAST)
Subject: [SciPy-user] Linear Interpolation Question
In-Reply-To:
References:
Message-ID: <48506.192.168.42.87.1209393821.squirrel@webmail.aims.ac.za>

Dear All,

I have a little problem with positioning widgets in the Pmw package. I
want to design a GUI made up of many entry and label widgets together
with a graph area, all on the same window. I have been used to making
use of the grid method found in Tkinter for positioning. Is there any
method equivalent to this in the Pmw package that one can use to
control the appearance of these widgets?

I will be happy if anyone can help.

sincerely,
Nate

From matthew.brett at gmail.com Mon Apr 28 11:16:08 2008
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 28 Apr 2008 16:16:08 +0100
Subject: [SciPy-user] Matlab file reading and writing
In-Reply-To: <1209391139.3051.14.camel@Portable-s2m.cnrs-mrs.fr>
References: <1209391139.3051.14.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <1e2af89e0804280816g1c88350bp5b356b517fa200b8@mail.gmail.com>

Hi,

> Are you aware of the libmatio library? It aims to read and write MAT
> files:
> http://sourceforge.net/projects/matio
>
> It provides a C library and a limited Fortran interface. Are there any
> comparisons with scipy's Python-written mio functions?

Thanks for the link. I wasn't aware of it. I guess it may well be faster
than the scipy code, but harder to maintain - the scipy code is pure
python. We also don't - at the moment - have full Matlab 5 writing
capability. As it stands we could not incorporate libmatio because it is
LGPL - but maybe it is a wrappable option for a scikit...

Best,

Matthew

From j.anderson at hull.ac.uk Mon Apr 28 11:24:35 2008
From: j.anderson at hull.ac.uk (Joseph Anderson)
Date: Mon, 28 Apr 2008 16:24:35 +0100
Subject: [SciPy-user] Time varying lfilter?
In-Reply-To: <200804242101.29674.pgmdevlist@gmail.com>
Message-ID:

Hello All,

I'm wondering if anyone has any advice on realizing time-varying
difference-equation filtering in SciPy? In other words, I'd like to
filter a signal with difference equation coefficients (b's and a's)
varying in time.

While it is possible to call signal.lfilter repeatedly in a for loop for
each member of an input array (saving state between calls, and assigning
new difference equation coefficients), this isn't particularly fast.

Any advice? Thanks in advance.
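For reference, the slow loop I have in mind looks roughly like this
(only a sketch: the per-sample coefficient arrays b_of_t and a_of_t are
made-up placeholders, one coefficient set per output sample, all of the
same order; carrying the state across a coefficient change is itself
only approximately meaningful):

import numpy as np
from scipy import signal

def time_varying_filter(x, b_of_t, a_of_t):
    # filter one sample at a time, swapping in the current
    # coefficients and carrying the filter state zi along
    y = np.zeros(len(x))
    zi = np.zeros(max(len(b_of_t[0]), len(a_of_t[0])) - 1)
    for n in range(len(x)):
        y[n:n+1], zi = signal.lfilter(b_of_t[n], a_of_t[n],
                                      x[n:n+1], zi=zi)
    return y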
My regards,

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr Joseph Anderson
Lecturer in Music

School of Arts and New Media
University of Hull, Scarborough Campus,
Scarborough, North Yorkshire, YO11 3AZ, UK

T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

*****************************************************************************************
To view the terms under which this email is distributed, please go to
http://www.hull.ac.uk/legal/email_disclaimer.html
*****************************************************************************************

From chris at percious.com Mon Apr 28 11:39:17 2008
From: chris at percious.com (Chris Perkins)
Date: Mon, 28 Apr 2008 09:39:17 -0600
Subject: [SciPy-user] Python Student Internship - National Renewable Energy Laboratory
Message-ID: <3f37c7c10804280839w1163695fp66fb61a9e64e5e22@mail.gmail.com>

Student Intern - Scientific Computing Group
5900 5900-7259

A student internship is available in the National Renewable Energy
Laboratory's (NREL) Scientific Computing Group. NREL is the nation's
primary laboratory for research, development and deployment of renewable
energy and energy efficiency technologies. The intern will be supporting
work concerning management of scientific and technical data. Our data
group is cutting-edge with respect to capturing rapidly changing
scientific metadata and allowing the scientists to relate different
kinds of data in a meaningful way.

We have an immediate opening for a summer student internship, with
possible extension to one year, in our Golden, Colorado office. The
position would be part-time (15 - 25 hours per week) during the school
year and/or full-time during the summer.

DUTIES: Will include working with researchers on techniques to enable
the capture and storage of technical data in a scientific setting. Your
role in our development team would be to support data harvesting using
existing software, and to develop new visualization techniques for
existing data sets.

DESIRED QUALIFICATIONS: Undergraduate or graduate student in computer
science or a related field, with demonstrated experience in programming,
databases and software development. Experience using agile techniques
and test-driven development. Demonstrated knowledge of unit testing.
Experience with major dynamic languages like Python, Ruby, or C#.

PREFERRED: Demonstrated good writing skills and computer skills,
specifically including programming in Python and database use.
Experience with systems related to management of scientific data.

Candidate must be a US citizen.

Qualified candidates should e-mail their resume to:

Laura Davis
NREL, Human Resources Office
Reference: Req. #5900-7259
E-Mail: laura_da... @nrel.gov

=========================================

feel free to email me with any questions.

cheers.
-chris

From ed at lamedomain.net Mon Apr 28 11:49:06 2008
From: ed at lamedomain.net (Ed Rahn)
Date: Mon, 28 Apr 2008 08:49:06 -0700
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <20080426012325.GC7685@phare.normalesup.org>
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
        <20080426012325.GC7685@phare.normalesup.org>
Message-ID: <4815F1F2.2000100@lamedomain.net>

Gael Varoquaux wrote:
> On Sun, Apr 20, 2008 at 09:03:05PM -0400, Anne Archibald wrote:
>> Reasonable, though really, is krogh_interpolate(xi, yi, x) much better
>> than KroghInterpolator(xi, yi)(x)?
>
> Yes. Some people don't understand functional/object code. We need to
> keep scipy accessible for them.
>

It's silly not to use core features of the target language because some
people may not have yet learned them.

>> It's also good to emphasize that
>> the construction of the interpolating polynomial is a relatively slow
>> process compared to its evaluation.

This is a perfect reason to use an object.

- Ed

From dineshbvadhia at hotmail.com Mon Apr 28 11:57:55 2008
From: dineshbvadhia at hotmail.com (Dinesh B Vadhia)
Date: Mon, 28 Apr 2008 08:57:55 -0700
Subject: [SciPy-user] [Sparse matrix library] csr_matrix and
Message-ID:

Okay, I see what is going on. Uhmmm! If I replace int8 with int16 or
int32 then this will increase the size of A in memory, which will defeat
the purpose of the original exercise to minimize the size of A (as they
are very large). Unless there is a way to coerce the calculation
A.sum(0) into an int16, it looks like I'll have to evaluate A.sum(0) in
brute-force fashion at the time of A's creation, save it, and read it
into the program at execution.

Dinesh

--------------------------------------------------------------------------------

Message: 2
Date: Mon, 28 Apr 2008 00:29:29 -0500
From: "Nathan Bell"
Subject: Re: [SciPy-user] [Sparse matrix library] csr_matrix and column sum
To: "SciPy Users List"
Message-ID:
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Apr 27, 2008 at 11:57 PM, Dinesh B Vadhia wrote:
>
> 0 , -44 , 84 , -116 , -121 , -43 , -44 , -116 , -115 , -79 , 70 , -86 , 39 ,
> -17 , -21 , -112 , 29 , -126 , -19 , 33 , 59 , -6 , 24 , 18 , 57
>
> 768 , 724 , 1108 , 652 , 1927 , 2005 , 724 , 908 , 1421 , 1457 , 2118 , 1450
> , 1575 , 3055 , 2283 , 656 , 1053 , 898 , 1517 , 1569 , 1339 , 762 , 3096 ,
> 530 , 1081
>

This is due to the fact that when integer arithmetic overflows (e.g. A
+ B is too large) the result "wraps around". The solution is to use a
data type with a greater range of values (more bits).

Replace your int8 data array with an int16 array and you will get the
expected results (albeit using one more byte per nonzero), provided
that the sums do not exceed 2^15 - 1.

To be safe, you might use int32 and not worry about ranges as much.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From gael.varoquaux at normalesup.org Mon Apr 28 11:57:31 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 28 Apr 2008 17:57:31 +0200
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <4815F1F2.2000100@lamedomain.net>
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
        <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
Message-ID: <20080428155731.GD20344@phare.normalesup.org>

On Mon, Apr 28, 2008 at 08:49:06AM -0700, Ed Rahn wrote:
> Gael Varoquaux wrote:
> > On Sun, Apr 20, 2008 at 09:03:05PM -0400, Anne Archibald wrote:
> >> Reasonable, though really, is krogh_interpolate(xi, yi, x) much better
> >> than KroghInterpolator(xi, yi)(x)?

> > Yes. Some people don't understand functional/object code. We need to
> > keep scipy accessible for them.

> It's silly not to use core features of the target language because some
> people may not have yet learned them.

No it is not. I program in an environment, not alone, for myself. I want
my colleagues to understand my code. I have found out that they don't
understand heavily functional programming.
I have the choice between avoiding this style, even though I like it, or
giving up on having my code reused. Well, I know what choice I make.

Don't blame these people, or if you do, then learn all the optics and
electronics they know, and come and run our experiments please (and fix
my photodiode, on which I am getting only 300kHz bandwidth while I need
at least 1MHz). Computing is not their core job. I want scipy to be open
to these users, because they can thus plug in a framework to e.g.
control an experiment, or interface to a database... This framework can
be written in an elaborate way, they will never read it, but I want the
option of simple coding to be available.

One of the things my colleagues like about Matlab is that it doesn't
force them to learn new concepts. What I hate about it is that it
forbids me (who is writing the experiment-control framework) to use
advanced concepts. We need to find a middle ground between the two.

Cheers,

Gaël

From robert.kern at gmail.com Mon Apr 28 12:06:51 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 28 Apr 2008 11:06:51 -0500
Subject: [SciPy-user] [Sparse matrix library] csr_matrix and column sum
In-Reply-To:
References:
Message-ID: <3d375d730804280906n3800148dv49ce1dc7349fa868@mail.gmail.com>

On Mon, Apr 28, 2008 at 12:29 AM, Nathan Bell wrote:
> On Sun, Apr 27, 2008 at 11:57 PM, Dinesh B Vadhia
> wrote:
> >
> > 0 , -44 , 84 , -116 , -121 , -43 , -44 , -116 , -115 , -79 , 70 , -86 , 39 ,
> > -17 , -21 , -112 , 29 , -126 , -19 , 33 , 59 , -6 , 24 , 18 , 57
> >
> > 768 , 724 , 1108 , 652 , 1927 , 2005 , 724 , 908 , 1421 , 1457 , 2118 , 1450
> > , 1575 , 3055 , 2283 , 656 , 1053 , 898 , 1517 , 1569 , 1339 , 762 , 3096 ,
> > 530 , 1081
> >
>
> This is due to the fact that when integer arithmetic overflows (e.g. A
> + B is too large) the result "wraps around". The solution is to use a
> data type with a greater range of values (more bits).
>
> Replace your int8 data array with an int16 array and you will get the
> expected results (albeit using one more byte per nonzero), provided
> that the sums do not exceed 2^15 - 1.
>
> To be safe, you might use int32 and not worry about ranges as much.

ndarray.sum() accepts a dtype= argument to specify the type of the
accumulator. You might consider implementing the same thing for sparse
arrays. Also, ndarray.sum() defaults to int32 (on 32-bit systems, int64
on 64-bit systems) as the accumulator dtype for all smaller integer
types.

In [1]: from numpy import *

In [2]: a = ones(300, dtype=int8)

In [3]: a.sum?
Type:           builtin_function_or_method
Base Class:
Namespace:      Interactive
Docstring:
    a.sum(axis=None, dtype=None) -> Sum of array over given axis.

    Sum the array over the given axis. If the axis is None, sum over
    all dimensions of the array.

    The optional dtype argument is the data type for the returned
    value and intermediate calculations. The default is to upcast
    (promote) smaller integer types to the platform-dependent int.
    For example, on 32-bit platforms:

        a.dtype                     default sum dtype
        ---------------------------------------------
        bool, int8, int16, int32    int32

    Warning: The arithmetic is modular and no error is raised on
    overflow.

    Examples
    --------

    >>> array([0.5, 1.5]).sum()
    2.0
    >>> array([0.5, 1.5]).sum(dtype=int32)
    1
    >>> array([[0, 1], [0, 5]]).sum(axis=0)
    array([0, 6])
    >>> array([[0, 1], [0, 5]]).sum(axis=1)
    array([1, 5])
    >>> ones(128, dtype=int8).sum(dtype=int8)  # overflow!
    -128

In [4]: a.sum(dtype=int16)
Out[4]: 300

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From ed at lamedomain.net Mon Apr 28 12:20:48 2008
From: ed at lamedomain.net (Ed Rahn)
Date: Mon, 28 Apr 2008 09:20:48 -0700
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <20080428155731.GD20344@phare.normalesup.org>
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
        <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
Message-ID: <4815F960.6060008@lamedomain.net>

Gael Varoquaux wrote:
> On Mon, Apr 28, 2008 at 08:49:06AM -0700, Ed Rahn wrote:
>> It's silly not to use core features of the target language because
>> some people may not have yet learned them.
>
> No it is not. I program in an environment, not alone, for myself. I
> want my colleagues to understand my code. I have found out that they
> don't understand heavily functional programming. I have the choice
> between avoiding this style, even though I like it, or giving up on
> having my code reused. Well, I know what choice I make.

The group of people who use scipy is much greater than you and your
colleagues. The people who use scipy do so because it uses python. In
this case a problem can be better solved, and the context better
understood, using objects.

> Don't blame these people, or if you do, then learn all the optics and
> electronics they know, and come and run our experiments please (and fix
> my photodiode, on which I am getting only 300kHz bandwidth while I need
> at least 1MHz). Computing is not their core job. I want scipy to be
> open to these users, because they can thus plug in a framework to e.g.
> control an experiment, or interface to a database... This framework can
> be written in an elaborate way, they will never read it, but I want the
> option of simple coding to be available.

I have no problem blaming them. When I use a tool, it's my
responsibility to understand it, not others' to work around it.

> One of the things my colleagues like about Matlab is that it doesn't
> force them to learn new concepts. What I hate about it is that it
> forbids me (who is writing the experiment-control framework) to use
> advanced concepts. We need to find a middle ground between the two.

Working with so many Matlab people, maybe Octave would be a better fit
for you? The APIs you write and expose to your colleagues can be
independent of the scipy API.

- Ed

From rob.clewley at gmail.com Mon Apr 28 12:24:08 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Mon, 28 Apr 2008 12:24:08 -0400
Subject: [SciPy-user] Volterra system identification
In-Reply-To: <4815AB5E.3080300@mur.at>
References: <48159731.8050602@mur.at> <1209378978.7809.1.camel@mik>
        <4815AB5E.3080300@mur.at>
Message-ID:

Hi,

Yes, I think the responder was confusing Volterra sys ID with the
Lotka-Volterra ODE system. I don't know of any such implementation of
sys ID for Python, sorry.

-Rob

On Mon, Apr 28, 2008 at 6:47 AM, Georg Holzmann wrote:
> Hello!
>
> Thanks for the answer - but I don't understand it ... ;)
> I mean Volterra series for nonlinear system identification.
>
> LG
> Georg

From wnbell at gmail.com Mon Apr 28 12:30:42 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 28 Apr 2008 11:30:42 -0500
Subject: [SciPy-user] [Sparse matrix library] csr_matrix and
In-Reply-To:
References:
Message-ID:

On Mon, Apr 28, 2008 at 10:57 AM, Dinesh B Vadhia wrote:
>
> Okay, I see what is going on. Uhmmm! If I replace int8 with int16 or
> int32 then this will increase the size of A in memory, which will
> defeat the purpose of the original exercise to minimize the size of A
> (as they are very large). Unless there is a way to coerce the
> calculation A.sum(0) into an int16, it looks like I'll have to evaluate
> A.sum(0) in brute-force fashion at the time of A's creation, save it,
> and read it into the program at execution.

It's the difference between roughly 5 bytes per nonzero and 6 bytes per
nonzero. The column index for each nonzero is a 4-byte integer, so
there's little difference in the total memory cost (4+1 vs. 4+2).

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From gael.varoquaux at normalesup.org Mon Apr 28 12:40:24 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 28 Apr 2008 18:40:24 +0200
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <4815F960.6060008@lamedomain.net>
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
        <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
Message-ID: <20080428164024.GF20344@phare.normalesup.org>

On Mon, Apr 28, 2008 at 09:20:48AM -0700, Ed Rahn wrote:
> The group of people who use scipy is much greater than you and your
> colleagues. The people who use scipy do so because it uses python. In
> this case a problem can be better solved, and the context better
> understood, using objects.

Yes, that's all good. Do provide an elaborate interface, but don't kill
the simple one.

> Working with so many Matlab people, maybe Octave would be a better fit
> for you? The APIs you write and expose to your colleagues can be
> independent of the scipy API.

I like to use Python to drive the experiments. Why add another language?
Scipy already fits the job. Why force on people the more elaborate way
of doing things? The great thing about Python is that it allows you to
do things simply if you want, while keeping the option of a more complex
design. Hell, I can write a script with no functions, let alone objects.
If I was doing Java, I'd have to write:

"""
public static void main(String [ ] args) {
...
}
"""

In addition, by multiplying the languages you are putting more stress on
the support people, on the users, who might have to learn the framework
one day, and on the developers, who have to provide the same
functionality in different languages. Keep it simple, use one language,
and keep it open to non-advanced users.

I don't see the cost of keeping a simple API; it doesn't kill the
advanced one. Moreover, it provides a gentle learning curve for people
who can later on move to the more advanced one. And it is very little
code to support.
Gaël

From wnbell at gmail.com Mon Apr 28 12:43:26 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 28 Apr 2008 11:43:26 -0500
Subject: [SciPy-user] [Sparse matrix library] csr_matrix and column sum
In-Reply-To: <3d375d730804280906n3800148dv49ce1dc7349fa868@mail.gmail.com>
References: <3d375d730804280906n3800148dv49ce1dc7349fa868@mail.gmail.com>
Message-ID:

On Mon, Apr 28, 2008 at 11:06 AM, Robert Kern wrote:
>
> ndarray.sum() accepts a dtype= argument to specify the type of the
> accumulator. You might consider implementing the same thing for sparse
> arrays. Also, ndarray.sum() defaults to int32 (on 32-bit systems,
> int64 on 64-bit systems) as the accumulator dtype for all smaller
> integer types.

I wasn't aware of that feature until now. I've tried to keep the
behavior of scipy.sparse as close to that of numpy matrices as possible,
so we should support user-defined accumulator types. I've created a
ticket for this issue:
http://scipy.org/scipy/scipy/ticket/658

It won't require much work to implement, but I can't say when I'll have
the time to carry it out.

I think the "right" way is to support matvec operations y = A*x with
different dtypes for A and x/y. This would allow one to compute A*x for
integer A and floating-point x without upcasting A's data array.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From ed at lamedomain.net Mon Apr 28 13:09:31 2008
From: ed at lamedomain.net (Ed Rahn)
Date: Mon, 28 Apr 2008 10:09:31 -0700
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <20080428164024.GF20344@phare.normalesup.org>
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
        <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
Message-ID: <481604CB.5070009@lamedomain.net>

Gael Varoquaux wrote:
> Yes, that's all good. Do provide an elaborate interface, but don't kill
> the simple one.

The original question was about how to implement a new interface;
nothing is being killed. Simple and elaborate are relative to what one
understands. Objects are a central part of python. To me it seems
elaborate to not use a core feature of the language, and instead create
some new type of information-passing architecture between the
appropriate functions.

> I like to use Python to drive the experiments. Why add another
> language? Scipy already fits the job. Why force on people the more
> elaborate way of doing things?

I think at this point we are going in circles. I didn't suggest another
language, simply a program that your colleagues might find more
appropriate given their current knowledge and the problem at hand.
It seems your argument is that scipy is better than Matlab, but you want
to keep the same semantics. New code in scipy should use python idioms,
and not Matlab or Fortran ones, simply because people have prior
experience and feel comfortable with them.

- Ed

From gael.varoquaux at normalesup.org Mon Apr 28 13:17:26 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 28 Apr 2008 19:17:26 +0200
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <481604CB.5070009@lamedomain.net>
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
        <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
Message-ID: <20080428171726.GH20344@phare.normalesup.org>

On Mon, Apr 28, 2008 at 10:09:31AM -0700, Ed Rahn wrote:
> The original question was about how to implement a new interface;
> nothing is being killed. Simple and elaborate are relative to what one
> understands. Objects are a central part of python. To me it seems
> elaborate to not use a core feature of the language, and instead
> create some new type of information-passing architecture between the
> appropriate functions.

OK, maybe I wasn't clear. I would like scipy to expose, amongst other
things, a simple procedural interface. It should also expose an
object-oriented one. By doing this you are not at all crippling the
language.

> It seems your argument is that scipy is better than Matlab, but you
> want to keep the same semantics. New code in scipy should use python
> idioms, and not Matlab or Fortran ones, simply because people have
> prior experience and feel comfortable with them.

It is not a question of prior experience, it is just that it is
conceptually simpler. I want a language that exposes multiple levels of
complexity, so that different people can use different levels. That way
I can use the advanced features of the language when I want, without
forcing them onto my collaborators. The fact that these idioms are
better is irrelevant to someone who is trying to do simple things. And I
don't buy the argument that simple things should be done in a different
language, keeping scipy for complex problems. It doesn't have to be this
way. I am not suggesting making a bad design choice, or going out of our
way; I just want a two-line helper function.
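Concretely, something like the two-line sketch below, reusing the
krogh_interpolate name mentioned earlier in this thread, would be enough
for me:

def krogh_interpolate(xi, yi, x):
    # build the interpolating polynomial and evaluate it in one shot
    return KroghInterpolator(xi, yi)(x)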
Gaël

From robert.kern at gmail.com Mon Apr 28 14:24:46 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 28 Apr 2008 13:24:46 -0500
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <20080428171726.GH20344@phare.normalesup.org>
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
        <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
Message-ID: <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>

On Mon, Apr 28, 2008 at 12:17 PM, Gael Varoquaux wrote:
> I am not suggesting making a bad design choice, or going out of our
> way; I just want a two-line helper function.

I'm afraid that *is* widely recognized as a bad design choice.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From jdh2358 at gmail.com Mon Apr 28 14:34:24 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Mon, 28 Apr 2008 13:34:24 -0500
Subject: [SciPy-user] constrained optimization
Message-ID: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com>

I need to do an N-dimensional constrained optimization over a weight
vector w, with the constraints:

* w[i] >= 0

* w.sum() == 1.0

Scanning through the scipy.optimize docs, I see a number of examples
where parameters can be bounded by a bracketing interval, but none where
constraints can be placed on combinations of the parameters, e.g. their
sum. One approach I am considering is doing a bracketed [0, 1]
constrained optimization over N-1 weights (assigning the last weight to
be 1 minus the sum of the others) and modifying my cost function to
punish the optimizer when the N-1 input weights sum to more than one.

Is there a better approach?

Thanks,
JDH

From jdh2358 at gmail.com Mon Apr 28 14:54:48 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Mon, 28 Apr 2008 13:54:48 -0500
Subject: [SciPy-user] problem building svn scipy
In-Reply-To: <3d375d730804251148h349b9619k846759cf03ba3fa4@mail.gmail.com>
References: <88e473830804250728o24834ec1o1201fd434f4b0f83@mail.gmail.com>
        <3d375d730804251148h349b9619k846759cf03ba3fa4@mail.gmail.com>
Message-ID: <88e473830804281154v63c6741aw5cb52d84c284966d@mail.gmail.com>

On Fri, Apr 25, 2008 at 1:48 PM, Robert Kern wrote:
> It looks like something is #defining CS to be a number. Search through
> the headers for this.

I am not sure where the _CS symbol is getting defined on my system, but
it appears that this is the case for many of the _-prefixed symbols in
this file. I was able to get the file to compile by replacing all the
leading underscore prefixes for the python array object local
declarations with an _ postfix. Don't know if this is acceptable style
for scipy, but here is the patch.

Thanks,
JDH

-------------- next part --------------
A non-text attachment was scrubbed...
Name: scipy.cluster.diff
Type: text/x-diff
Size: 15979 bytes
Desc: not available

From gael.varoquaux at normalesup.org Mon Apr 28 14:54:32 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 28 Apr 2008 20:54:32 +0200
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com>
        <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
        <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
Message-ID: <20080428185432.GC10916@phare.normalesup.org>

On Mon, Apr 28, 2008 at 01:24:46PM -0500, Robert Kern wrote:
> On Mon, Apr 28, 2008 at 12:17 PM, Gael Varoquaux wrote:
> > I am not suggesting making a bad design choice, or going out of our
> > way; I just want a two-line helper function.

> I'm afraid that *is* widely recognized as a bad design choice.

I don't agree. We are not changing the design of scipy. Users are
welcome not to use this procedural interface if they so choose. The
choice is not ours.

Functional code severely confuses beginners. In a French lab, in
experimental physics, but also in biology and chemistry, bosses don't
code; only PhD students do (I have heard a famous guy working in
computational physics call his students "human-computer interfaces").
These PhD students stay in the lab between 3 and 4 years (yes, a PhD is
short in France; that's because the studies before the PhD are long).
Thus they spend a good fraction of their time being beginners. Some pick
up quickly, others have more difficulties. Some people don't need much
out of a computer. Please, give them interfaces that don't confuse them.

I was once told that Matlab was superior to Python because it did not
have confusing objects like lists, tuples, and arrays: everything was an
array. That's not true, by the way, but it reflects that some people are
happy when they can do everything in a simple way, and not worry about
the right way of doing things.

Gaël

From robert.kern at gmail.com Mon Apr 28 14:57:41 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 28 Apr 2008 13:57:41 -0500
Subject: [SciPy-user] constrained optimization
In-Reply-To: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com>
References: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com>
Message-ID: <3d375d730804281157t74364936r405bd920bc24bbb7@mail.gmail.com>

On Mon, Apr 28, 2008 at 1:34 PM, John Hunter wrote:
> I need to do an N-dimensional constrained optimization over a weight
> vector w, with the constraints:
>
> * w[i] >= 0
>
> * w.sum() == 1.0
>
> Scanning through the scipy.optimize docs, I see a number of examples
> where parameters can be bounded by a bracketing interval, but none
> where constraints can be placed on combinations of the parameters,
> e.g. their sum.
>
> Is there a better approach?

Transform the coordinates to an unconstrained N-1-dimensional space.
One such transformation is the Aitchison (or "additive log-ratio")
transform:

  y = log(x[:-1] / x[-1])

And to go back:

  tmp = hstack([exp(y), 1.0])
  x = tmp / tmp.sum()

Searching for "compositional data analysis" should yield similar
transformations, but this one should be sufficient for maintaining the
constraints. For doing statistics, the others have better properties.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From ondrej at certik.cz Mon Apr 28 16:03:25 2008
From: ondrej at certik.cz (Ondrej Certik)
Date: Mon, 28 Apr 2008 22:03:25 +0200
Subject: [SciPy-user] constrained optimization
In-Reply-To: <3d375d730804281157t74364936r405bd920bc24bbb7@mail.gmail.com>
References: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com>
        <3d375d730804281157t74364936r405bd920bc24bbb7@mail.gmail.com>
Message-ID: <85b5c3130804281303q651f39b3nce055b5194ad809c@mail.gmail.com>

On Mon, Apr 28, 2008 at 8:57 PM, Robert Kern wrote:
> Transform the coordinates to an unconstrained N-1-dimensional space.
> One such transformation is the Aitchison (or "additive log-ratio")
> transform:
>
>   y = log(x[:-1] / x[-1])
>
> And to go back:
>
>   tmp = hstack([exp(y), 1.0])
>   x = tmp / tmp.sum()
>
> Searching for "compositional data analysis" should yield similar
> transformations, but this one should be sufficient for maintaining the
> constraints. For doing statistics, the others have better properties.

Wow, that is very clever. Just today I was thinking about how to do it,
and it didn't occur to me that I should read scipy-user. :)

The exp/log transform is clear, but I didn't figure out that in order to
maintain the norm, I can maintain it in the last element, so it's enough
to do:

  y = x[:-1]/x[-1]

  tmp = hstack([y, 1.0])
  x = tmp / tmp.sum()

Very cool, thanks. However, the transform is not one-to-one, e.g.
both

  x = [1, 2, 1, 4]

and

  x = [2, 4, 2, 8]

represent the same thing:

  y = [0.25, 0.5, 0.25]

Ondrej

From peridot.faceted at gmail.com Mon Apr 28 16:11:23 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 28 Apr 2008 16:11:23 -0400
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
References: <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
        <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
Message-ID:

2008/4/28 Robert Kern:
> On Mon, Apr 28, 2008 at 12:17 PM, Gael Varoquaux wrote:
> > I am not suggesting making a bad design choice, or going out of our
> > way; I just want a two-line helper function.
>
> I'm afraid that *is* widely recognized as a bad design choice.

Well, I think the way to go is to mimic splrep/splev:

def krogh_rep(xi, yi):
    return KroghInterpolator(xi, yi)

def krogh_ev(kr, x, der=0):
    if der == 0:
        return kr(x)
    else:
        return kr.derivative(x, der=der)

The only danger is that if it ever raises an exception and the users see
how simple it is they will feel like idiots.

Anne

From eads at soe.ucsc.edu Mon Apr 28 16:11:43 2008
From: eads at soe.ucsc.edu (Damian Eads)
Date: Mon, 28 Apr 2008 13:11:43 -0700
Subject: [SciPy-user] problem building svn scipy
In-Reply-To: <88e473830804281154v63c6741aw5cb52d84c284966d@mail.gmail.com>
References: <88e473830804250728o24834ec1o1201fd434f4b0f83@mail.gmail.com>
        <3d375d730804251148h349b9619k846759cf03ba3fa4@mail.gmail.com>
        <88e473830804281154v63c6741aw5cb52d84c284966d@mail.gmail.com>
Message-ID: <48162F7F.5040907@soe.ucsc.edu>

Hi John,

Thanks for the fix. It is possible that an earlier version of gcc 3.4
collapsed underscore prefixes on variable names.

I was not aware there was a SciPy style for C code -- just an unwritten
understanding that the C code be cleanly written. I have structured my
code so that the implementation of an algorithm goes in a function in
one C file (hierarchy.c), its Python-C wrapper goes in another C file
(hierarchy_wrap.c), and argument checking and array allocation happen in
a Python function (hierarchy.py). I have tried to keep my argument names
consistent in each of these files. If an argument is called X in Python,
its corresponding PyArrayObject is called _X in C, and the C-array
buffer is called X. This consistency really helps development and avoids
bugs.

If using postfix underscores makes the code portable on your compiler,
then I will change the convention. I will apply the patch and check in
the fix.

Thanks again,

Damian

John Hunter wrote:
> On Fri, Apr 25, 2008 at 1:48 PM, Robert Kern wrote:
>
>> It looks like something is #defining CS to be a number. Search through
>> the headers for this.
>
> I am not sure where the _CS symbol is getting defined on my system, but
> it appears that this is the case for many of the _-prefixed symbols in
> this file. I was able to get the file to compile by replacing all the
> leading underscore prefixes for the python array object local
> declarations with an _ postfix.
> Don't know if this is acceptable style for scipy, but here is the
> patch.
>
> Thanks,
> JDH

From robert.kern at gmail.com Mon Apr 28 16:19:55 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 28 Apr 2008 15:19:55 -0500
Subject: [SciPy-user] constrained optimization
In-Reply-To: <85b5c3130804281303q651f39b3nce055b5194ad809c@mail.gmail.com>
References: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com>
        <3d375d730804281157t74364936r405bd920bc24bbb7@mail.gmail.com>
        <85b5c3130804281303q651f39b3nce055b5194ad809c@mail.gmail.com>
Message-ID: <3d375d730804281319s2776b8deg8e4f548648b50a4b@mail.gmail.com>

On Mon, Apr 28, 2008 at 3:03 PM, Ondrej Certik wrote:
> [earlier discussion of the additive log-ratio transform snipped]
>
> Very cool, thanks. However, the transform is not one-to-one, e.g. both
>
>   x = [1, 2, 1, 4]
>
> and
>
>   x = [2, 4, 2, 8]
>
> represent the same thing:
>
>   y = [0.25, 0.5, 0.25]

Yes, that is by design. With compositional data, only the ratios between
components matter. They are unique only up to a scaling factor, and
typically you normalize them such that they sum to 1.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From robert.kern at gmail.com Mon Apr 28 16:26:17 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 28 Apr 2008 15:26:17 -0500
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References: <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
        <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
Message-ID: <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com>

On Mon, Apr 28, 2008 at 3:11 PM, Anne Archibald wrote:
> Well, I think the way to go is to mimic splrep/splev:
>
> def krogh_rep(xi, yi):
>     return KroghInterpolator(xi, yi)
>
> def krogh_ev(kr, x, der=0):
>     if der == 0:
>         return kr(x)
>     else:
>         return kr.derivative(x, der=der)
>
> The only danger is that if it ever raises an exception and the users
> see how simple it is they will feel like idiots.

Well, those certainly aren't useful. The only functions I would consider
adding would be "one-shot" functions, e.g.:

def krogh(xi, yi, x):
    return KroghInterpolator(xi, yi)(x)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From ed at lamedomain.net Mon Apr 28 16:23:29 2008
From: ed at lamedomain.net (Ed Rahn)
Date: Mon, 28 Apr 2008 13:23:29 -0700
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References: <20080426012325.GC7685@phare.normalesup.org>
        <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
        <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
Message-ID: <48163241.2030309@lamedomain.net>

If the functionality of krogh_ev is useful, why not put it in a method?

A compatibility module is a very good idea. But one concern with that is
documentation. If I am a new user to both scipy and the splrep/splev
library, I will be confused about which calling convention is best. As a
developer, more external explanation is needed and more code needs to be
commented.

More tests also need to be written to ensure full code coverage.

- Ed

Anne Archibald wrote:
> Well, I think the way to go is to mimic splrep/splev:
>
> def krogh_rep(xi, yi):
>     return KroghInterpolator(xi, yi)
>
> def krogh_ev(kr, x, der=0):
>     if der == 0:
>         return kr(x)
>     else:
>         return kr.derivative(x, der=der)
>
> The only danger is that if it ever raises an exception and the users
> see how simple it is they will feel like idiots.
> Anne

From robert.kern at gmail.com Mon Apr 28 16:28:24 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 28 Apr 2008 15:28:24 -0500
Subject: [SciPy-user] problem building svn scipy
In-Reply-To: <48162F7F.5040907@soe.ucsc.edu>
References: <88e473830804250728o24834ec1o1201fd434f4b0f83@mail.gmail.com>
        <3d375d730804251148h349b9619k846759cf03ba3fa4@mail.gmail.com>
        <88e473830804281154v63c6741aw5cb52d84c284966d@mail.gmail.com>
        <48162F7F.5040907@soe.ucsc.edu>
Message-ID: <3d375d730804281328i1217c603pe773f6e9d7bd56af@mail.gmail.com>

On Mon, Apr 28, 2008 at 3:11 PM, Damian Eads wrote:
> I have tried to keep my argument names consistent in each of these
> files. If an argument is called X in Python, its corresponding
> PyArrayObject is called _X in C, and the C-array buffer is called X.

I use a similar convention, but instead of "_X", I tend to use "pyX".

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From peridot.faceted at gmail.com Mon Apr 28 16:36:44 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 28 Apr 2008 16:36:44 -0400
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com>
References: <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
        <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
        <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com>
Message-ID:

2008/4/28 Robert Kern:
> Well, those certainly aren't useful. The only functions I would
> consider adding would be "one-shot" functions, e.g.:
>
> def krogh(xi, yi, x):
>     return KroghInterpolator(xi, yi)(x)

The problem here is that construction of the splines is an order
degree**2 process, so I want an interface that encourages users to
construct them once and for all. I think such an approach also
discourages people from just

y_interp = krogh(all_my_data_x, all_my_data_y, x_interp)

with hundreds of points, the results of which will be meaningless and
horrible.

That said, python's regex system does provide such an interface, so I
can do it. Perhaps a long and cumbersome name --
evaluate_krogh_interpolation, maybe. What would be painful would be to
expose the internal workings of the interpolator, as splrep/splev do.
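To be clear, the usage pattern I want the interface to encourage is
construct once, evaluate often, along these lines:

interp = KroghInterpolator(xi, yi)            # O(degree**2), done once
y_new = interp(x_interp)                      # evaluation is cheap
dy_new = interp.derivative(x_interp, der=1)   # so are derivatives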
Anne

From peridot.faceted at gmail.com Mon Apr 28 16:42:05 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 28 Apr 2008 16:42:05 -0400
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To: <48163241.2030309@lamedomain.net>
References: <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
        <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
        <48163241.2030309@lamedomain.net>
Message-ID:

2008/4/28 Ed Rahn:
> If the functionality of krogh_ev is useful, why not put it in a method?

Because all it does is expose a method in a non-method-y way.

> A compatibility module is a very good idea. But one concern with that
> is documentation. If I am a new user to both scipy and the
> splrep/splev library, I will be confused about which calling
> convention is best. As a developer, more external explanation is
> needed and more code needs to be commented.
>
> More tests also need to be written to ensure full code coverage.

What would you suggest? Would you care to write a few? I think
KroghInterpolator is fairly thoroughly tested at this point, though I
guess a few checks that it raises sensible errors when fed erroneous
input wouldn't hurt.

More generally, is it worth putting something in the tests for numerical
code about stability? An example with degree thirty, say, and a
reasonable error estimate, to make sure that modifications don't make
the code less stable numerically?

Anne

From robert.kern at gmail.com Mon Apr 28 16:56:50 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 28 Apr 2008 15:56:50 -0500
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References: <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
        <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
        <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com>
Message-ID: <3d375d730804281356l5e975be9y655f98e6893b39b1@mail.gmail.com>

On Mon, Apr 28, 2008 at 3:36 PM, Anne Archibald wrote:
> The problem here is that construction of the splines is an order
> degree**2 process, so I want an interface that encourages users to
> construct them once and for all. I think such an approach also
> discourages people from just
>
> y_interp = krogh(all_my_data_x, all_my_data_y, x_interp)
>
> with hundreds of points, the results of which will be meaningless and
> horrible.

If they repeat that with many x_interp with constant all_my_data, yes.
However, there are a number of cases where they won't have many
x_interp.

I don't like it, either. I would have designed it precisely like you
already have, and left it as a clean, orthogonal API. But since people
do request the alternative APIs, we need to at least consider them.

My objection to krogh_rep() and krogh_ev() is that they don't address
these people's concerns, either. Yes, they're functions instead of
methods, but all they are doing is methody things on opaque objects with
a different syntax.
If people don't understand OO (and don't want to understand OO, and
don't have an actual need for multiple x_interps), these functions don't
help them.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From ed at lamedomain.net Mon Apr 28 17:06:37 2008
From: ed at lamedomain.net (Ed Rahn)
Date: Mon, 28 Apr 2008 14:06:37 -0700
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References: <4815F1F2.2000100@lamedomain.net>
        <20080428155731.GD20344@phare.normalesup.org>
        <4815F960.6060008@lamedomain.net>
        <20080428164024.GF20344@phare.normalesup.org>
        <481604CB.5070009@lamedomain.net>
        <20080428171726.GH20344@phare.normalesup.org>
        <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com>
        <48163241.2030309@lamedomain.net>
Message-ID: <48163C5D.3050004@lamedomain.net>

Anne Archibald wrote:
> 2008/4/28 Ed Rahn:
>> If the functionality of krogh_ev is useful, why not put it in a
>> method?
>
> Because all it does is expose a method in a non-method-y way.

It also does some logic, which seems useful to some.

> What would you suggest? Would you care to write a few? I think
> KroghInterpolator is fairly thoroughly tested at this point, though I
> guess a few checks that it raises sensible errors when fed erroneous
> input wouldn't hurt.

I was specifically referring to the wrapper code. In the case given,
you'd just test that the logic is done right when it is set up and
called properly: once with der equal to 0 and once without. Testing
with erroneous data is a good idea.

> More generally, is it worth putting something in the tests for
> numerical code about stability? An example with degree thirty, say,
> and a reasonable error estimate, to make sure that modifications don't
> make the code less stable numerically?

I haven't looked at the code or tests specifically, nor do I have a good
understanding of the problem being solved. However, as a general rule I
assume calculations based on external modules and libraries to be
correct, and only test internal logic. Tests in the external code should
catch any changes to output caused by their modification.

- Ed

From eike.welk at gmx.net Mon Apr 28 17:21:38 2008
From: eike.welk at gmx.net (Eike Welk)
Date: Mon, 28 Apr 2008 23:21:38 +0200
Subject: [SciPy-user] Polynomial interpolation
In-Reply-To:
References: <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com>
Message-ID: <200804282321.39406.eike.welk@gmx.net>

On Monday 28 April 2008 22:36, Anne Archibald wrote:
> 2008/4/28 Robert Kern:
> > Well, those certainly aren't useful. The only functions I would
> > consider adding would be "one-shot" functions, e.g.:
> >
> > def krogh(xi, yi, x):
> >     return KroghInterpolator(xi, yi)(x)
>
> The problem here is that construction of the splines is an order
> degree**2 process, so I want an interface that encourages users to
> construct them once and for all.
I think such an approach also > discourages people from just > > y_interp = krogh(all_my_data_x, all_my_data_y, x_interp) > > with hundreds of points, the results of which will be meaningless > and horrible. You could store already constructed interpolation objects in a dictionary. (I didn't test it.):

krogh_interpolator_cache = {}

def evaluate_krogh_interpolation(all_my_data_x, all_my_data_y, x_interp):
    global krogh_interpolator_cache
    if (all_my_data_x, all_my_data_y) in krogh_interpolator_cache:
        theInterpolator = krogh_interpolator_cache[(all_my_data_x, all_my_data_y)]
        return theInterpolator(x_interp)
    else:
        newInterpolator = KroghInterpolator(all_my_data_x, all_my_data_y)
        krogh_interpolator_cache[(all_my_data_x, all_my_data_y)] = newInterpolator
        return newInterpolator(x_interp)

Of course you could empty the dictionary when there are more than a certain number of objects in it, to avoid memory leaks. When you have implemented this too, the function doesn't look so empty anymore, and then nobody has to feel like an idiot. :-) Kind regards, Eike. From robert.kern at gmail.com Mon Apr 28 17:24:55 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Apr 2008 16:24:55 -0500 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <200804282321.39406.eike.welk@gmx.net> References: <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com> <200804282321.39406.eike.welk@gmx.net> Message-ID: <3d375d730804281424m8d9f6d2pfbdbca0206e26c01@mail.gmail.com> On Mon, Apr 28, 2008 at 4:21 PM, Eike Welk wrote: > On Monday 28 April 2008 22:36, Anne Archibald wrote: > > 2008/4/28 Robert Kern : > > > Well those certainly aren't useful. The only functions I would > > > consider adding would be "one-shot" functions, e.g.:
> > >
> > > def krogh(xi, yi, x):
> > >     return KroghInterpolator(xi, yi)(x)
> >
> > The problem here is that construction of the splines is an order > > degree**2 process, so I want an interface that encourages users to > > construct them once and for all. I think such an approach also > > discourages people from just > > > > y_interp = krogh(all_my_data_x, all_my_data_y, x_interp) > > > > with hundreds of points, the results of which will be meaningless > > and horrible. > > You could store already constructed interpolation objects in a > dictionary. (I didn't test it.): Arrays are unhashable and cannot be used as dictionary keys. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From eike.welk at gmx.net Mon Apr 28 18:04:16 2008 From: eike.welk at gmx.net (Eike Welk) Date: Tue, 29 Apr 2008 00:04:16 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <3d375d730804281424m8d9f6d2pfbdbca0206e26c01@mail.gmail.com> References: <200804282321.39406.eike.welk@gmx.net> <3d375d730804281424m8d9f6d2pfbdbca0206e26c01@mail.gmail.com> Message-ID: <200804290004.17520.eike.welk@gmx.net> On Monday 28 April 2008 23:24, Robert Kern wrote: > On Mon, Apr 28, 2008 at 4:21 PM, Eike Welk wrote: > > On Monday 28 April 2008 22:36, Anne Archibald wrote: > > > 2008/4/28 Robert Kern : > > > > Well those certainly aren't useful.
The only functions I > > > > would consider adding would be "one-shot" functions, e.g.:
> > > >
> > > > def krogh(xi, yi, x):
> > > >     return KroghInterpolator(xi, yi)(x)
> > >
> > > The problem here is that construction of the splines is an > > > order degree**2 process, so I want an interface that > > > encourages users to construct them once and for all. I think > > > such an approach also discourages people from just > > > > > > y_interp = krogh(all_my_data_x, all_my_data_y, x_interp) > > > > > > with hundreds of points, the results of which will be > > > meaningless and horrible. > > > > You could store already constructed interpolation objects in a > > dictionary. (I didn't test it.): > > Arrays are unhashable and cannot be used as dictionary keys. Ah, yes! One could however store a copy of the last x-data and y-data together with the interpolator. When x-data and y-data are the same at the next call, the interpolator could be reused. Kind regards, Eike. From gael.varoquaux at normalesup.org Mon Apr 28 18:54:54 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 29 Apr 2008 00:54:54 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <3d375d730804281356l5e975be9y655f98e6893b39b1@mail.gmail.com> References: <20080428155731.GD20344@phare.normalesup.org> <4815F960.6060008@lamedomain.net> <20080428164024.GF20344@phare.normalesup.org> <481604CB.5070009@lamedomain.net> <20080428171726.GH20344@phare.normalesup.org> <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com> <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com> <3d375d730804281356l5e975be9y655f98e6893b39b1@mail.gmail.com> Message-ID: <20080428225454.GB25991@phare.normalesup.org> On Mon, Apr 28, 2008 at 03:56:50PM -0500, Robert Kern wrote: > If they repeat that with many x_interp with constant all_my_data, yes. > However, there are a number of cases where they won't have many > x_interp. Robert, you have understood my request very well, and I think you are right, only the function you are proposing should be added, and in addition its docstring should document how it is equivalent to the OOP approach, and why the OOP approach is better. Simple solutions are only for simple problems. If you want the solution to a more complex problem, use OOP. My 2 cents, Gaël From stefan at sun.ac.za Mon Apr 28 20:24:13 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 29 Apr 2008 02:24:13 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com> References: <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <4815F960.6060008@lamedomain.net> <20080428164024.GF20344@phare.normalesup.org> <481604CB.5070009@lamedomain.net> <20080428171726.GH20344@phare.normalesup.org> <3d375d730804281124vd68b980qe6a2220b674ac904@mail.gmail.com> <3d375d730804281326q7f147ff5ta67db4d144f72d98@mail.gmail.com> Message-ID: <9457e7c80804281724q44592b0dtfda039c2101f0bf5@mail.gmail.com> 2008/4/28 Robert Kern : > Well those certainly aren't useful. The only functions I would > consider adding would be "one-shot" functions, e.g.:
>
> def krogh(xi, yi, x):
>     return KroghInterpolator(xi, yi)(x)

I agree. But I also think Anne's original interface was simple enough as well as intuitive, and that we should stick to it. A good docstring should comfortably bridge the gap for users not too familiar with objects.
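[Such a wrapper plus docstring might look like the following sketch; the docstring wording is illustrative, not the final scipy text, and it assumes the KroghInterpolator class from this thread:]

def krogh(xi, yi, x):
    """Convenience wrapper for Krogh polynomial interpolation.

    Equivalent to KroghInterpolator(xi, yi)(x). If you need to evaluate
    the same data at many different x, construct a KroghInterpolator
    once and call it repeatedly instead; construction of the
    interpolator costs O(degree**2).
    """
    return KroghInterpolator(xi, yi)(x)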
Regards Stéfan From emanuele at relativita.com Tue Apr 29 03:58:53 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Tue, 29 Apr 2008 09:58:53 +0200 Subject: [SciPy-user] constrained optimization In-Reply-To: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com> References: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com> Message-ID: <4816D53D.4040302@relativita.com> John Hunter wrote: > I need to do a N dimensional constrained optimization over a weight w > vector with the constraints: > > * w[i] >=0 > > * w.sum() == 1.0 > > .... > > Is there a better approach? > > I guess OpenOpt should easily handle this kind of constraint. Emanuele From dmitrey.kroshko at scipy.org Tue Apr 29 04:04:48 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 29 Apr 2008 11:04:48 +0300 Subject: [SciPy-user] constrained optimization In-Reply-To: <3d375d730804281319s2776b8deg8e4f548648b50a4b@mail.gmail.com> References: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com> <3d375d730804281157t74364936r405bd920bc24bbb7@mail.gmail.com> <85b5c3130804281303q651f39b3nce055b5194ad809c@mail.gmail.com> <3d375d730804281319s2776b8deg8e4f548648b50a4b@mail.gmail.com> Message-ID: <4816D6A0.4070502@scipy.org> Robert Kern wrote: > On Mon, Apr 28, 2008 at 3:03 PM, Ondrej Certik wrote: > >> On Mon, Apr 28, 2008 at 8:57 PM, Robert Kern wrote: >> > On Mon, Apr 28, 2008 at 1:34 PM, John Hunter wrote: >> > > I need to do a N dimensional constrained optimization over a weight w >> > > vector with the constraints: >> > > >> > > * w[i] >=0 >> > > >> > > * w.sum() == 1.0 >> > > >> > > Scanning through the scipy.optimize docs, I see a number of examples >> > > where parameters can be bounded by a bracketing interval, but none >> > > where constraints can be placed on combinations of the parameters, eg >> > > the sum of them. One approach I am considering is doing a bracketed >> > > [0,1] constrained optimization over N-1 weights (assigning the last >> > > weight to be 1-sum others) and modifying my cost function to punish >> > > the optimizer when the N-1 input weights sum to more than one. >> > > >> > > Is there a better approach? >> > >> > Transform the coordinates to an unconstrained N-1-dimensional space. >> > One such transformation is the Aitchison (or "additive log-ratio") >> > transform: >> > >> > y = log(x[:-1] / x[-1]) >> > >> > And to go back: >> > >> > tmp = hstack([exp(y), 1.0]) >> > x = tmp / tmp.sum() >> > >> > Searching for "compositional data analysis" should yield similar >> > transformations, but this one should be sufficient for maintaining >> > constraints. For doing statistics, the others have better properties. >> >> Wow, that is very clever. Just today I was thinking how to do it and >> it didn't occur to me I should read scipy-user. :) >> >> The exp/log transform is clear, but I didn't figure out that in order >> to maintain >> the norm, I can maintain it in the last element, so it's enough to do: >> >> y = x[:-1]/x[-1] >> >> tmp = hstack([y, 1.0]) >> x = tmp / tmp.sum() >> >> Very cool, thanks. However, the transform is not one to one, e.g. both >> >> x = [1, 2, 1, 4] >> x = [2, 4, 2, 8] >> >> represent the same thing: >> >> y = [0.25, 0.5, 0.25] >> > > Yes, that is by design. With compositional data, only the ratios > between components matter. They are unique only up to a scaling > factor, and typically, you normalize them such that they sum to 1.
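[To make the transform pair concrete, here is a minimal NumPy sketch of the forward and inverse maps from Robert's post; the function names are mine, not scipy's:]

import numpy as np

def to_unconstrained(x):
    # additive log-ratio (Aitchison) transform: simplex -> R^(N-1)
    x = np.asarray(x, dtype=float)
    return np.log(x[:-1] / x[-1])

def to_simplex(y):
    # inverse transform: R^(N-1) -> weights with w[i] > 0 and w.sum() == 1
    tmp = np.hstack([np.exp(y), 1.0])
    return tmp / tmp.sum()

[An optimizer can then search over y with no constraints at all: to_simplex(y) always returns strictly positive weights that sum to one, so the original constraints hold by construction.]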
I guess the real problem with using this, and/or almost any other coordinate transformation, is that it could yield an ill-conditioned problem (or increase the ill-conditioning, if the original problem is already ill-conditioned), for example when the coordinate x[-1] is close to zero, either at the optimal point or while moving along the trajectory. This can make optimization solvers work unstably and unpredictably, and maybe stop far from the optimum. Regards, D. From robert.kern at gmail.com Tue Apr 29 04:15:39 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Apr 2008 03:15:39 -0500 Subject: [SciPy-user] constrained optimization In-Reply-To: <4816D6A0.4070502@scipy.org> References: <88e473830804281134x7467b6a1tf02379f5ee98e64e@mail.gmail.com> <3d375d730804281157t74364936r405bd920bc24bbb7@mail.gmail.com> <85b5c3130804281303q651f39b3nce055b5194ad809c@mail.gmail.com> <3d375d730804281319s2776b8deg8e4f548648b50a4b@mail.gmail.com> <4816D6A0.4070502@scipy.org> Message-ID: <3d375d730804290115i6187d977ma514066a8dee895f@mail.gmail.com> On Tue, Apr 29, 2008 at 3:04 AM, dmitrey wrote: > I guess the real problem with using this, and/or almost any other > coordinate transformation, is that it could yield an ill-conditioned > problem (or increase the ill-conditioning, if the original problem is > already ill-conditioned), for example when the coordinate x[-1] is close > to zero, either at the optimal point or while moving along the > trajectory. This can make optimization solvers work unstably and > unpredictably, and maybe stop far from the optimum. True. However, it should be noted that this transform and the related ones have very nice properties (e.g., they have a Euclidean metric). One could equally say that the original formulation is transformed from the more natural Euclidean coordinates and could lead optimization solvers to perform poorly, particularly those which require gradients (and thus, a metric). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dineshbvadhia at hotmail.com Tue Apr 29 10:16:10 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Tue, 29 Apr 2008 07:16:10 -0700 Subject: [SciPy-user] [Sparse matrix library] csr_matrix and datatypes Message-ID: For reference, the column index for each non-zero is a 4 byte integer. An int8 is an additional 1 byte (ie. total 4+1=5 bytes), an int16 is 2 bytes (4+2=6 bytes). And, how much is an int32 - another 2 bytes? -------------------------------------------------------------------------------- Date: Mon, 28 Apr 2008 11:30:42 -0500 From: "Nathan Bell" Subject: Re: [SciPy-user] [Sparse matrix library] csr_matrix and To: "SciPy Users List" Message-ID: Content-Type: text/plain; charset=ISO-8859-1 On Mon, Apr 28, 2008 at 10:57 AM, Dinesh B Vadhia wrote: > > > Okay, I see what is going on. Uhmmm! If I replace int8 with int16 or int32 > then this will increase the size of A in memory which will defeat the > purpose of the original exercise to minimize the size of A (as they are very > large). Unless there is a way to coerce the calculation A.sum(0) into a > int16 then looks like I'll have to evaluate A.sum(0) in brute force fashion > at the time of A's creation, save it and read it into the program at > execution. It's the difference between roughly 5 bytes per nonzero and 6 bytes per nonzero. The column index for each nonzero is a 4 byte integer, so there's little difference in the total memory cost (4+1 vs. 4+2).
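[The arithmetic being traded off here can be written down as a small helper; this is a back-of-envelope sketch (the function name is mine), assuming the 4-byte column indices discussed above and the standard CSR layout of data, indices, and indptr arrays:]

def csr_nbytes(nnz, n_rows, data_itemsize, index_itemsize=4):
    # one data value and one column index per nonzero,
    # plus a row-pointer (indptr) array of n_rows + 1 indices
    return nnz * (data_itemsize + index_itemsize) + (n_rows + 1) * index_itemsize

# Per-nonzero cost for a large matrix (indptr term negligible):
# int8 data  -> 4 + 1 = 5 bytes
# int16 data -> 4 + 2 = 6 bytes
# int32 data -> 4 + 4 = 8 bytes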
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Tue Apr 29 10:50:18 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 29 Apr 2008 09:50:18 -0500 Subject: [SciPy-user] [Sparse matrix library] csr_matrix and datatypes In-Reply-To: References: Message-ID: On Tue, Apr 29, 2008 at 9:16 AM, Dinesh B Vadhia wrote: > > > For reference, the column index for each non-zero is a 4 byte integer. A > int8 is an additional 1 byte (ie. total 4+1=5 bytes), an int16 is 2 bytes > (4+2=6 bytes). And, how much is an int32 - another 2 bytes? > Correct. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From prabhu at aero.iitb.ac.in Tue Apr 29 12:51:28 2008 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Tue, 29 Apr 2008 22:21:28 +0530 Subject: [SciPy-user] code for weave / blitz function wrapper generation In-Reply-To: <4db580fd0804221610j6d5924e8ia413889af13d6c39@mail.gmail.com> References: <4db580fd0804221610j6d5924e8ia413889af13d6c39@mail.gmail.com> Message-ID: <48175210.8090600@aero.iitb.ac.in> Hoyt Koepke wrote: > I recently wrote a general purpose function for automatically creating > wrappers to C++ functions using weave, and I thought others might find > it useful as well. In particular, I think a good place for it would > be as a method in the weave ext_module class if others agree. I'm > also looking for thoughts, suggestions, and for more people to test > it. All the heavy lifting is done by weave, so hopefully the subtle > errors are minimal. I think this is quite useful and could go into weave. cheers, prabhu From eike.welk at gmx.net Tue Apr 29 12:57:59 2008 From: eike.welk at gmx.net (Eike Welk) Date: Tue, 29 Apr 2008 18:57:59 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <200804290004.17520.eike.welk@gmx.net> References: <3d375d730804281424m8d9f6d2pfbdbca0206e26c01@mail.gmail.com> <200804290004.17520.eike.welk@gmx.net> Message-ID: <200804291858.00365.eike.welk@gmx.net> On Tuesday 29 April 2008 00:04, Eike Welk wrote: > On Monday 28 April 2008 23:24, Robert Kern wrote: > > On Mon, Apr 28, 2008 at 4:21 PM, Eike Welk > > wrote: > > > On Monday 28 April 2008 22:36, Anne Archibald wrote: > > > > 2008/4/28 Robert Kern : > > > > > Well those certainly aren't useful. The only functions I > > > > > would consider adding would be "one-shot" functions, e.g.: > > > > > > > > > > def krogh(xi, yi, x): > > > > > return KroghInterpolator(xi,yi)(x) > > > > > > > > The problem here is that construction of the splines is an > > > > order degree**2 process, so I want an interface that > > > > encourages users to construct them once and for all. I think > > > > such an approach also discourages people from just > > > > > > > > y_interp = krogh(all_my_data_x, all_my_data_y, x_interp) > > > > > > > > with hundreds of points, the results of which will be > > > > meaningless and horrible. > > > > > > You could store already constructed interpolation objects in a > > > dictionary. (I didn't test it.): > > > > Arrays are unhashable and cannot be used as dictionary keys. > > Ah, yes! > > One could however store a copy of the last x-data and y-data > together with the interpolator. When x-data and y-data are the same > at the next call, the interpolator could be reused. 
The function could look like this:

krogh_interpolator_cache = []

def evaluate_krogh_interpolation(all_my_data_x, all_my_data_y, x_interp):
    global krogh_interpolator_cache
    maxLen = 3
    # search for already existing interpolators
    for record in krogh_interpolator_cache:
        x, y, interp = record
        if (x == all_my_data_x).all() and (y == all_my_data_y).all():
            return interp(x_interp)
    # limit size of cache
    if len(krogh_interpolator_cache) >= maxLen:
        krogh_interpolator_cache = []
    # create new interpolator
    newInterpolator = KroghInterpolator(all_my_data_x, all_my_data_y)
    krogh_interpolator_cache.append((all_my_data_x.copy(), all_my_data_y.copy(), newInterpolator))
    return newInterpolator(x_interp)

Kind regards, Eike. From dineshbvadhia at hotmail.com Tue Apr 29 13:16:38 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Tue, 29 Apr 2008 10:16:38 -0700 Subject: [SciPy-user] Sparse matrix: division by vector Message-ID: Sparse matrix A is in csr_matrix format and I want to divide each column element j of A by A's column j sum (where colSum is a numpy vector) ie. > A[:,j] = A[:,j]/colSum[j] What is the most efficient way to achieve this apart from brute force ie. > A[i,j] = A[i,j]/colSum[j] ? I can initially create A in a different c**_matrix method if that helps but the final A has to be in csr_matrix form. Cheers! Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Tue Apr 29 14:16:19 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 29 Apr 2008 13:16:19 -0500 Subject: [SciPy-user] Sparse matrix: division by vector In-Reply-To: References: Message-ID: On Tue, Apr 29, 2008 at 12:16 PM, Dinesh B Vadhia wrote: > > > Sparse matrix A is in csr_matrix format and I want to divide each column > element j of A by A's column j sum (where colSum is a numpy vector) ie. > > > A[:,j] = A[:,j]/colSum[j] > > What is the most efficient way to achieve this apart from brute force ie. > > > A[i,j] = A[i,j]/colSum[j] ? > > I can initially create A in a different c**_matrix method if that helps but > the final A has to be in csr_matrix form.
> Assuming A is in CSR and colSum[j] stores the j-th column sum you can do

A.data /= colSum[A.indices]

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robert.kern at gmail.com Tue Apr 29 14:41:56 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Apr 2008 13:41:56 -0500 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <200804291858.00365.eike.welk@gmx.net> References: <3d375d730804281424m8d9f6d2pfbdbca0206e26c01@mail.gmail.com> <200804290004.17520.eike.welk@gmx.net> <200804291858.00365.eike.welk@gmx.net> Message-ID: <3d375d730804291141o9f01fd4xefca6797adb75e59@mail.gmail.com> On Tue, Apr 29, 2008 at 11:57 AM, Eike Welk wrote: > The function could look like this:
>
> krogh_interpolator_cache = []
>
> def evaluate_krogh_interpolation(all_my_data_x, all_my_data_y, x_interp):
>     global krogh_interpolator_cache
>     maxLen = 3
>     # search for already existing interpolators
>     for record in krogh_interpolator_cache:
>         x, y, interp = record
>         if (x == all_my_data_x).all() and (y == all_my_data_y).all():
>             return interp(x_interp)
>     # limit size of cache
>     if len(krogh_interpolator_cache) >= maxLen:
>         krogh_interpolator_cache = []
>     # create new interpolator
>     newInterpolator = KroghInterpolator(all_my_data_x, all_my_data_y)
>     krogh_interpolator_cache.append((all_my_data_x.copy(), all_my_data_y.copy(), newInterpolator))
>     return newInterpolator(x_interp)

Please, don't spend too much time playing with this. We will not implement caching here. Caches are something better left for applications, not general libraries because it requires too much consideration of details only the application writer knows. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gael.varoquaux at normalesup.org Tue Apr 29 15:01:17 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 29 Apr 2008 21:01:17 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <3d375d730804291141o9f01fd4xefca6797adb75e59@mail.gmail.com> References: <3d375d730804281424m8d9f6d2pfbdbca0206e26c01@mail.gmail.com> <200804290004.17520.eike.welk@gmx.net> <200804291858.00365.eike.welk@gmx.net> <3d375d730804291141o9f01fd4xefca6797adb75e59@mail.gmail.com> Message-ID: <20080429190117.GB5130@phare.normalesup.org> On Tue, Apr 29, 2008 at 01:41:56PM -0500, Robert Kern wrote: > Please, don't spend too much time playing with this. We will not > implement caching here. Caches are something better left for > applications, not general libraries because it requires too much > consideration of details only the application writer knows. +1. Gaël From matthieu.brucher at gmail.com Tue Apr 29 15:12:17 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 29 Apr 2008 21:12:17 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <20080428155731.GD20344@phare.normalesup.org> References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> Message-ID: > > One of the things my colleagues like about Matlab is that it doesn't force > them to learn new concepts. What I hate about it is that it forbids me > (who is writing the experiment-control framework) to use advanced > concepts. We need to find a middle ground between the two. > I completely agree with you.
We, as developers, must bring the tools so that as many people as possible use Python. And object-oriented programming is not well known in the scientific community (at least the French one). They are used to Matlab and C/Fortran, nothing else. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Tue Apr 29 15:21:03 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 29 Apr 2008 21:21:03 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> Message-ID: <20080429192103.GD5130@phare.normalesup.org> On Tue, Apr 29, 2008 at 09:12:17PM +0200, Matthieu Brucher wrote: > I completely agree with you. We, as developers, must bring the tools so > that as many people as possible use Python. And object-oriented > programming is not well known in the scientific community (at least the > French one). They are used to Matlab and C/Fortran, nothing else. But we (you and I) want to be freed of these chains! :) Gaël From hetland at tamu.edu Tue Apr 29 16:26:49 2008 From: hetland at tamu.edu (Rob Hetland) Date: Tue, 29 Apr 2008 22:26:49 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <20080429192103.GD5130@phare.normalesup.org> References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> Message-ID: <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> I'm always glad to see more interpolation methods in scipy -- nice job Anne. My 2¢ on the discussion so far: I think the best solution so far suggested is no wrapper function in the package, but described in the docstring (it is only two lines long, after all..). Namespaces get cluttered enough without all that extra stuff, and even if people don't get OO, they can all certainly read a docstring. -Rob ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From gael.varoquaux at normalesup.org Tue Apr 29 16:39:27 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 29 Apr 2008 22:39:27 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: <20080429203927.GB15538@phare.normalesup.org> On Tue, Apr 29, 2008 at 10:26:49PM +0200, Rob Hetland wrote: > Namespaces get cluttered enough without all that extra stuff, and even > if people don't get OO, they can all certainly read a docstring. These people don't read docs. In addition the people I am thinking about don't know what a docstring is. I used to think that it was normal to have to go through a learning process to use a tool, hardware or software.
Nowadays I have learned an incredible amount of tools, and I am faced with new tools on a day-to-day basis. Some of them are forced upon me, some I choose to use, some I try out and drop later on. I am also heavily overworked (it's almost a culture for me). When I want to get some problem solved, I rarely have much time to spend on it. Quite often I need a solution real fast. As a result I try out a tool. If I am not able to produce something fast, I give it up. The tool might be excellent. I might even be convinced the tool is excellent, but I have to move along, and use a tool that I believe is less technically good, because it has a smaller learning curve. I am glad that I learned Unix system administration, C coding, VIM, Make, Blender, and all those powerful tools with steep learning curves when I had time a few years ago. I just love those tools. However, I am so grateful when I can pick up software like Inkscape that makes it really easy for me to draw something quickly. I thank the developers of this software for making it really easy to start with. Usability is making the software match the user's expectations, helping him make the first step, and also helping him move forward to go from his beginner's workflow to a more advanced and efficient one. Python is great in this respect. You can start with scripting, then use functions, and later objects. You don't have to know what objects are to start with it, but if you use Python long enough you will absorb these concepts and you will have learned useful things. Gaël From andrea.gavana at gmail.com Tue Apr 29 16:48:25 2008 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Tue, 29 Apr 2008 21:48:25 +0100 Subject: [SciPy-user] Linear Interpolation 2 Message-ID: Hi All, as I didn't get any answer, I thought I might repost with some more analysis the trouble I am having with interp2d. The attached script produces 3 different results depending on the value of the parameter numColumns, i.e.:

1) numColumns = 20

Traceback (most recent call last):
  File "C:\MyProjects\LinearInterpolation.py", line 27, in
    function = interp2d(xx, yy, z, kind="linear")
  File "C:\Python25\Lib\site-packages\scipy\interpolate\interpolate.py", line 113, in __init__
    self.tck = fitpack.bisplrep(self.x, self.y, self.z, kx=kx, ky=ky, s=0.)
  File "C:\Python25\Lib\site-packages\scipy\interpolate\fitpack.py", line 702, in bisplrep
    tx,ty,nxest,nyest,wrk,lwrk1,lwrk2)
ValueError: Invalid inputs.

2) numColumns = 200

Traceback (most recent call last):
  File "C:\MyProjects\LinearInterpolation.py", line 27, in
    function = interp2d(xx, yy, z, kind="linear")
  File "C:\Python25\Lib\site-packages\scipy\interpolate\interpolate.py", line 113, in __init__
    self.tck = fitpack.bisplrep(self.x, self.y, self.z, kx=kx, ky=ky, s=0.)
  File "C:\Python25\Lib\site-packages\scipy\interpolate\fitpack.py", line 702, in bisplrep
    tx,ty,nxest,nyest,wrk,lwrk1,lwrk2)
MemoryError

(MemoryError? With a 372x200 matrix???)

3) numColumns = 2000

Traceback (most recent call last):
  File "C:\MyProjects\LinearInterpolation.py", line 27, in
    function = interp2d(xx, yy, z, kind="linear")
  File "C:\Python25\Lib\site-packages\scipy\interpolate\interpolate.py", line 113, in __init__
    self.tck = fitpack.bisplrep(self.x, self.y, self.z, kx=kx, ky=ky, s=0.)
  File "C:\Python25\Lib\site-packages\scipy\interpolate\fitpack.py", line 702, in bisplrep
    tx,ty,nxest,nyest,wrk,lwrk1,lwrk2)
OverflowError: long int too large to convert to int

Is there anyone who could tell me, please, what I am doing wrong?
This is on Windows XP 1GB RAM, 3.7 GHz, Python 2.5, scipy 0.6.0, numpy 1.0.4. Thank you for your suggestions. Andrea. "Imagination Is The Only Weapon In The War Against Reality." http://xoomer.alice.it/infinity77/ -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: LinearInterpolation.py URL: From robert.kern at gmail.com Tue Apr 29 16:48:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Apr 2008 15:48:04 -0500 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <20080429203927.GB15538@phare.normalesup.org> References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> <20080429203927.GB15538@phare.normalesup.org> Message-ID: <3d375d730804291348u42e068f8rf1824bbb5a77fbc0@mail.gmail.com> On Tue, Apr 29, 2008 at 3:39 PM, Gael Varoquaux wrote: > On Tue, Apr 29, 2008 at 10:26:49PM +0200, Rob Hetland wrote: > > Namespaces get cluttered enough without all that extra stuff, and even > > if people don't get OO, they can all certainly read a docstring. > > These people don't read docs. In addition the people I am thinking about > don't know what a docstring is. In that case, I see no reason to cater to them. I will bend over backwards to help someone who will meet me even a tenth of the way, but I just cannot care about someone who will not put forth the de minimis effort of using help(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Tue Apr 29 16:54:10 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 29 Apr 2008 22:54:10 +0200 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: On 29/04/2008, Rob Hetland wrote: > I'm always glad to see more interpolation methods in scipy -- nice job > Anne. > > My 2¢ on the discussion so far: I think the best solution so far > suggested is no wrapper function in the package, but described in the > docstring (it is only two lines long, after all..). Namespaces get > cluttered enough without all that extra stuff, and even if people > don't get OO, they can all certainly read a docstring. Well, actually, this brings up a concern I have. Suppose I want to use the class scipy.interpolate.interp1d, but I don't know how.

In [18]: scipy.interpolate.interp1d?
Type:           type
Base Class:
String Form:
Namespace:      Interactive
File:           /usr/lib/python2.4/site-packages/scipy/interpolate/interpolate.py
Docstring:
    Interpolate a 1D function.

    See Also
    --------
    splrep, splev - spline interpolation based on FITPACK
    UnivariateSpline - a more recent wrapper of the FITPACK routines

Uh, great, but how do I actually *use* it?

In [19]: scipy.interpolate.interp1d.__init__?
String Form:
Namespace:      Interactive
File:           /usr/lib/python2.4/site-packages/scipy/interpolate/interpolate.py
Definition:     scipy.interpolate.interp1d.__init__(self, x, y, kind='linear', axis=-1, copy=True, bounds_error=True, fill_value=nan)
Docstring:
    Initialize a 1D linear interpolation class.

    Description
    -----------
    x and y are arrays of values used to approximate some function f:
    y = f(x)
    This class returns a function whose call method uses linear
    interpolation to find the value of new points.

    Parameters
    ----------
    x : array

How are users supposed to find .__init__? This is what they need to use to actually create an instance of the class, but the information is not presented when they look at the class' docstring. Even a moderately experienced user might have no idea there was a method called __init__ whose docstring would have alleviated their suffering. Should the class' docstring suggest users look at the docstring of .__init__? Should it *include* that docstring? There must be a general python solution to this... Anne From robert.kern at gmail.com Tue Apr 29 17:08:52 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Apr 2008 16:08:52 -0500 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: <3d375d730804291408vcbc7d4ep7edcd3ff760c407f@mail.gmail.com> On Tue, Apr 29, 2008 at 3:54 PM, Anne Archibald wrote: > Suppose I want to use the class scipy.interpolate.interp1d, but I > don't know how.
>
> In [18]: scipy.interpolate.interp1d?
> Type:           type
> Base Class:
> String Form:
> Namespace:      Interactive
> File:           /usr/lib/python2.4/site-packages/scipy/interpolate/interpolate.py
> Docstring:
>     Interpolate a 1D function.
>
>     See Also
>     --------
>     splrep, splev - spline interpolation based on FITPACK
>     UnivariateSpline - a more recent wrapper of the FITPACK routines
>
> Uh, great, but how do I actually *use* it?

Well, the version of IPython I have does include the constructor's information when you do "interp1d?". So does help(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rob.clewley at gmail.com Tue Apr 29 17:11:01 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Tue, 29 Apr 2008 17:11:01 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <3d375d730804201710m737de47bw6b91bbbf0f874ab3@mail.gmail.com> <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: I would have thought the general expectation (independent of ipython specifics) is that users should try help(interp1d) which indeed gives the docstring for __init__ (and more besides).
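[One convention that addresses Anne's concern directly is to document the constructor's parameters in the class docstring itself, since both help(cls) and IPython's "cls?" display it; a hypothetical sketch, not the actual scipy class:]

class Interpolator(object):
    """Interpolate a 1D function.

    Constructor parameters
    ----------------------
    x : array
        Abscissae of the data points.
    y : array
        Ordinates of the data points.
    """
    def __init__(self, x, y):
        self.x, self.y = x, y

help(Interpolator)   # prints the class docstring, then the method docstrings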
-Rob From peridot.faceted at gmail.com Tue Apr 29 19:05:54 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 30 Apr 2008 01:05:54 +0200 Subject: [SciPy-user] Sparse matrix: division by vector In-Reply-To: References: Message-ID: On 29/04/2008, Nathan Bell wrote: > On Tue, Apr 29, 2008 at 12:16 PM, Dinesh B Vadhia > wrote: > > > > > > Sparse matrix A is in csr_matrix format and I want to divide each column > > element j of A by A's column j sum (where colSum is a numpy vector) ie. > > > > > A[:,j] = A[:,j]/colSum[j] > > > > What is the most efficient way to achieve this apart from brute force ie. > > > > > A[i,j] = A[i,j]/colSum[j] ? > > > > I can initially create A in a different c**_matrix method if that helps but > > the final A has to be in csr_matrix form. > > > > > > Assuing A is in CSR and colSum[j] stores the j-th column sum you can do > > A.data /= colSum[A.indices] Is there any way to make the OP's approach work? Or even A /= colsum[newaxis,:]? This seems like a basic expectation from sparse matrices... Is there a document describing which basic numpy operations work on sparse matrices and which don't? Anne From wnbell at gmail.com Tue Apr 29 19:41:07 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 29 Apr 2008 18:41:07 -0500 Subject: [SciPy-user] Sparse matrix: division by vector In-Reply-To: References: Message-ID: On Tue, Apr 29, 2008 at 6:05 PM, Anne Archibald wrote: > Is there any way to make the OP's approach work? Or even A /= > colsum[newaxis,:]? There is backend support for this operation in sparsetools for CSR,CSC and BSR matrices. I occasionally use it directly, but I haven't exposed it through SciPy yet. Ideally sparse would expose this in a manner consistent with numpy matrices. A safer approach is to use a diagonal matrix (e.g. D = spdiags(....)) to rescale rows or columns. With this approach, you don't need to worry whether the matrix is dense or sparse (or in which format it is stored). > This seems like a basic expectation from sparse > matrices... Is there a document describing which basic numpy > operations work on sparse matrices and which don't? Not currently. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From dineshbvadhia at hotmail.com Tue Apr 29 22:04:03 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Tue, 29 Apr 2008 19:04:03 -0700 Subject: [SciPy-user] Sparse matrix: division by vector Message-ID: Does this work for row sums? For example, assuming A is in CSR and rowSum[i] stores the i-th row sum can you do > A.data /= rowSum[A.indices] Dinesh -------------------------------------------------------------------------------- From: Nathan Bell gmail.com> Subject: Re: Sparse matrix: division by vector Newsgroups: gmane.comp.python.scientific.user Date: 2008-04-29 18:16:19 GMT (7 hours and 44 minutes ago) On Tue, Apr 29, 2008 at 12:16 PM, Dinesh B Vadhia hotmail.com> wrote: > > > Sparse matrix A is in csr_matrix format and I want to divide each column > element j of A by A's column j sum (where colSum is a numpy vector) ie. > > > A[:,j] = A[:,j]/colSum[j] > > What is the most efficient way to achieve this apart from brute force ie. > > > A[i,j] = A[i,j]/colSum[j] ? > > I can initially create A in a different c**_matrix method if that helps but > the final A has to be in csr_matrix form. 
> Assuming A is in CSR and colSum[j] stores the j-th column sum you can do

A.data /= colSum[A.indices]

-- Nathan Bell wnbell gmail.com http://graphics.cs.uiuc.edu/~wnbell/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Wed Apr 30 02:18:35 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 30 Apr 2008 02:18:35 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: Could someone please make the new interpolation classes into new-style classes? And I don't know if it's considered a big deal, but for future compatibility maybe the exception raising should be done in the functional style: ValueError("message") rather than ValueError, message ? Maybe it's time I asked for SVN access! From peridot.faceted at gmail.com Wed Apr 30 03:03:13 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 30 Apr 2008 03:03:13 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: 2008/4/30 Rob Clewley : > Could someone please make the new interpolation classes into new-style > classes? And I don't know if it's considered a big deal, but for > future compatibility maybe the exception raising should be done in the > functional style: ValueError("message") rather than ValueError, > message ?
An example of what I am doing is as follows: > > # Loop over all the columns > for indx in indices: > > # Set up a linear interpolation with: > # x = dates in the second simulation > # y = single column in the second matrix simulation > function = interp1d(secondaryMatrixDates, > secondaryMatrixResults[:, indx], kind='linear') > > # Interpolate the second matrix results using the first simulation dates > interpolationResults = function(mainMatrixDates) > > # I need the difference between the first simulation and the second > newMatrix[:, indx] = mainMatrixResults[:, indx] - interpolationResults > > This is somehow a costly step, as it's taking up a lot of CPU > (increasing at every iteration) and quite a long time (every column > has about 350 data). Is there anything I can do to speed up this loop? > Or may someone suggest a better approach? You have run into an unfortunate limitation of interp1d; it only handles scalar-valued data. That python loop, through all those interp1d objects, is pretty wasteful. Since you have only several hundred values to interpolate to, and thirteen thousand columns, I would write a vectorized linear interpolation by hand. That is, for each date in the main matrix, use searchsorted() to find it in the secondary matrix dates, then do something like for j in num_dates: date = main_matrix[j] i = searchsorted(date,secondary_date) # check the docstring t = (date-secondary_date[i])/(secondary_date[i+1]-secondary_date[i]) new_matrix[j,:] = t*secondary_matrix[i,:]+(1-t)*secondary_matrix[i+1,:] Good luck, Anne With 13000 columns, the overhead From robert.kern at gmail.com Wed Apr 30 03:33:45 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Apr 2008 02:33:45 -0500 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: <3d375d730804300033n2c5fc21bh5195c858badd317a@mail.gmail.com> On Wed, Apr 30, 2008 at 2:03 AM, Anne Archibald wrote: > 2008/4/30 Rob Clewley : > > > Could someone please make the new interpolation classes into new-style > > classes? And I don't know if it's considered a big deal, but for > > future compatibility maybe the exception raising should be done in the > > functional style: ValueError("message") rather than ValueError, > > message ? > > I can clean those up. But I'm not sure how to set things up as > "properties" so that the right things happen when users try to > manipulate the attributes. If it doesn't feel right to you, don't do it. Your point about properies giving a false sense of safety in this case is quite valid. If it's not obvious how to apply properties nicely here, that might be a good sign that properties aren't appropriate. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From peridot.faceted at gmail.com Wed Apr 30 04:03:59 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 30 Apr 2008 04:03:59 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: <3d375d730804300033n2c5fc21bh5195c858badd317a@mail.gmail.com> References: <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> <3d375d730804300033n2c5fc21bh5195c858badd317a@mail.gmail.com> Message-ID: 2008/4/30 Robert Kern : > On Wed, Apr 30, 2008 at 2:03 AM, Anne Archibald > > wrote: > > > 2008/4/30 Rob Clewley : > > > > > Could someone please make the new interpolation classes into new-style > > > classes? And I don't know if it's considered a big deal, but for > > > future compatibility maybe the exception raising should be done in the > > > functional style: ValueError("message") rather than ValueError, > > > message ? > > > > I can clean those up. But I'm not sure how to set things up as > > "properties" so that the right things happen when users try to > > manipulate the attributes. > > If it doesn't feel right to you, don't do it. Your point about > properies giving a false sense of safety in this case is quite valid. > If it's not obvious how to apply properties nicely here, that might be > a good sign that properties aren't appropriate. Well, actually what I meant was I've never used properties at all and didn't find any useful documentation. I had in mind a fairly draconian configuration that made nearly everything readonly, though I suppose that would become cumbersome within my own methods. And anyway it doesn't really work for numpy arrays, since someone can always do b = A.unwritable_array; b[i,j]=3. But set_yi is a bit ugly. Anne From robert.kern at gmail.com Wed Apr 30 04:40:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Apr 2008 03:40:38 -0500 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> <3d375d730804300033n2c5fc21bh5195c858badd317a@mail.gmail.com> Message-ID: <3d375d730804300140i38b8329ftd65f094137fb4cf3@mail.gmail.com> On Wed, Apr 30, 2008 at 3:03 AM, Anne Archibald wrote: > 2008/4/30 Robert Kern : > > > On Wed, Apr 30, 2008 at 2:03 AM, Anne Archibald > > > > wrote: > > > > > I can clean those up. But I'm not sure how to set things up as > > > "properties" so that the right things happen when users try to > > > manipulate the attributes. > > > > If it doesn't feel right to you, don't do it. Your point about > > properies giving a false sense of safety in this case is quite valid. > > If it's not obvious how to apply properties nicely here, that might be > > a good sign that properties aren't appropriate. > > Well, actually what I meant was I've never used properties at all and > didn't find any useful documentation. I had in mind a fairly draconian > configuration that made nearly everything readonly, though I suppose > that would become cumbersome within my own methods. Properties Lesson #1: Don't do that. :-) Properties are useful to add functionality, or to expose functionality with attribute syntax, which may be appropriate. *Removing* functionality is not a good use of properties. But anyways, the general solution for the internal cumbersomeness is to not use the properties internally. Instead, the property should map to a _private attribute, and internally, you just manipulate those. 
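[Concretely, the pattern Robert describes, a property mapped to a _private attribute that internal code manipulates directly, might look like this minimal sketch in a new-style class; the class and attribute names are illustrative, and the classic property(fget, fset) form is used since the thread predates the @x.setter decorator syntax:]

import numpy as np

class Interp(object):
    def __init__(self, yi):
        self._yi = np.asarray(yi)        # internal methods use _yi directly

    def _get_yi(self):
        return self._yi

    def _set_yi(self, value):
        # attribute syntax for users, with room to recompute derived state
        self._yi = np.asarray(value)

    yi = property(_get_yi, _set_yi)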
> And anyway it > doesn't really work for numpy arrays, since someone can always do b = > A.unwritable_array; b[i,j]=3. But set_yi is a bit ugly. All things considered, it's fine. Call it update_yi() if you feel it's more appropriate. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dave.hirschfeld at gmail.com Wed Apr 30 08:44:20 2008 From: dave.hirschfeld at gmail.com (Dave) Date: Wed, 30 Apr 2008 12:44:20 +0000 (UTC) Subject: [SciPy-user] numpy.fastCopyAndTranspose Segfaults Message-ID: I tried posting to the numpy mailing list but it didn't seem to make it through so I'm re-posting in case it's of interest to the scipy community. As the subject says the below code will crash the Python interpreter. I'm running 32-bit Win XP on an x64 xeon cpu.

Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.

from datetime import datetime, timedelta
from numpy import asarray, fastCopyAndTranspose
import numpy

numpy.__version__   # '1.0.5.dev5008'

timestamps = [datetime(2007,1,1)]
while timestamps[-1] < datetime(2007,12,31):
    timestamps.append(timestamps[-1] + timedelta(1))

x = [asarray(timestamps), asarray(range(len(timestamps)))]
fastCopyAndTranspose(x)

-Dave From travis at enthought.com Wed Apr 30 10:47:30 2008 From: travis at enthought.com (Travis Vaught) Date: Wed, 30 Apr 2008 09:47:30 -0500 Subject: [SciPy-user] [ANN] EuroSciPy Abstracts Deadline Reminder Message-ID: Greetings, Just a reminder: the abstracts for the EuroSciPy Conference in Leipzig are due by midnight tonight (CST, US [UTC -6]) April 30. If you'd like to present, please submit your abstract as a PDF, MS Word or plain text file to euroabstracts at scipy.org. For more information on the EuroSciPy Conference, please see: http://www.scipy.org/EuroSciPy2008 From ggellner at uoguelph.ca Wed Apr 30 10:51:50 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 30 Apr 2008 10:51:50 -0400 Subject: [SciPy-user] Polynomial interpolation In-Reply-To: References: <20080426012325.GC7685@phare.normalesup.org> <4815F1F2.2000100@lamedomain.net> <20080428155731.GD20344@phare.normalesup.org> <20080429192103.GD5130@phare.normalesup.org> <66A33FD3-37B2-47DE-BFD8-C59F462ABB40@tamu.edu> Message-ID: <20080430145150.GA6594@giton> On Wed, Apr 30, 2008 at 03:03:13AM -0400, Anne Archibald wrote: > 2008/4/30 Rob Clewley : > > Could someone please make the new interpolation classes into new-style > > classes? And I don't know if it's considered a big deal, but for > > future compatibility maybe the exception raising should be done in the > > functional style: ValueError("message") rather than ValueError, > > message ? > Also the 'ValueError, message' format is going away in python 3000: http://www.python.org/dev/peps/pep-3109/#compatibility-issues > I can clean those up. But I'm not sure how to set things up as > "properties" so that the right things happen when users try to > manipulate the attributes. I'll also go in and add simple > single-function-call interfaces at the same time. It may be a few days > though. > > Could somebody point me at a link on making proper use of properties? > Not to disagree with the argument that if you find properties unnatural don't use them, but to give a link that I like . . .
This link goes over descriptors in general, which I found useful to understand the magic of properties: http://users.rcn.com/python/download/Descriptor.htm If this doesn't cover what you want to do I can drum up some more links. Gabriel From oliphant at enthought.com Wed Apr 30 11:10:04 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 30 Apr 2008 10:10:04 -0500 Subject: [SciPy-user] numpy.fastCopyAndTranspose Segfaults In-Reply-To: References: Message-ID: <48188BCC.2070007@enthought.com> Dave wrote: > I tried posting to the numpy mailing list but it didn't seem to make it through > so I'm re-posting in case it's of interest to the scipy community. > > As the subject says the below code will crash the Python interpreter. I'm > running 32-bit Win XP on an x64 xeon cpu. > > Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on > win32 > Type "help", "copyright", "credits" or "license" for more information. > > from datetime import datetime, timedelta > from numpy import asarray, fastCopyAndTranspose > import numpy > numpy.__version__ > '1.0.5.dev5008' > timestamps = [datetime(2007,1,1)] > while timestamps[-1] < datetime(2007,12,31): > timestamps.append(timestamps[-1] + timedelta(1)) > > x = [asarray(timestamps),asarray(range(len(timestamps)))] > fastCopyAndTranspose(x) > > > It looks like the segfault occurs in the printing of the result. If you set y = fastCopyAndTranspose(x) you don't get a segfault until you try to print it (actually __repr__ it). This should be filed as a ticket. -Travis From dave.hirschfeld at gmail.com Wed Apr 30 11:48:35 2008 From: dave.hirschfeld at gmail.com (Dave) Date: Wed, 30 Apr 2008 15:48:35 +0000 (UTC) Subject: [SciPy-user] numpy.fastCopyAndTranspose Segfaults References: <48188BCC.2070007@enthought.com> Message-ID: Travis E. Oliphant enthought.com> writes: > > It looks like the segfault occurs in the printing of the result. If > you set y = fastCopyAndTranspose(x) you don't get a segfault until you > try to print it (actually __repr__ it). > > This should be filed as a ticket. > > -Travis > Ticket #766 created. My first ticket so I hope it's ok! -Dave From alan at ajackson.org Wed Apr 30 22:11:52 2008 From: alan at ajackson.org (Alan Jackson) Date: Wed, 30 Apr 2008 21:11:52 -0500 Subject: [SciPy-user] problem referencing single element arrays Message-ID: <20080430211152.5164af7c@ajackson.org> Having a problem with numpy that has me stumped. I have a large set of arrays. I want to set up cross reference lists of elements within those arrays, with the lists just being pointers to the objects (floating point numbers) stored in the arrays. The problem is that some arrays have only a single element, so when I try to store a[0] for a single element array, I get an error, "0-d arrays can't be indexed". How do I get a pointer to the object stored and not the array for single element arrays? Or maybe I don't want to be doing this? -- ----------------------------------------------------------------------- | Alan K. Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. 
- Blake | ----------------------------------------------------------------------- From robert.kern at gmail.com Wed Apr 30 22:39:53 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Apr 2008 21:39:53 -0500 Subject: [SciPy-user] problem referencing single element arrays In-Reply-To: <20080430211152.5164af7c@ajackson.org> References: <20080430211152.5164af7c@ajackson.org> Message-ID: <3d375d730804301939vcae770aj3c11dc6e27384d29@mail.gmail.com> On Wed, Apr 30, 2008 at 9:11 PM, Alan Jackson wrote: > Having a problem with numpy that has me stumped. > > I have a large set of arrays. I want to set up cross reference lists of > elements within those arrays, with the lists just being pointers to the objects > (floating point numbers) stored in the arrays. > > The problem is that some arrays have only a single element, so when I try to > store a[0] for a single element array, I get an error, "0-d arrays can't be > indexed". How do I get a pointer to the object stored and not the array for > single element arrays? a[()] > Or maybe I don't want to be doing this? Quite possibly. Can you expand on your use case? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Wed Apr 30 22:41:40 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 30 Apr 2008 22:41:40 -0400 Subject: [SciPy-user] problem referencing single element arrays In-Reply-To: <20080430211152.5164af7c@ajackson.org> References: <20080430211152.5164af7c@ajackson.org> Message-ID: 2008/4/30 Alan Jackson : > Having a problem with numpy that has me stumped. > > I have a large set of arrays. I want to set up cross reference lists of > elements within those arrays, with the lists just being pointers to the objects > (floating point numbers) stored in the arrays. > > The problem is that some arrays have only a single element, so when I try to > store a[0] for a single element array, I get an error, "0-d arrays can't be > indexed". How do I get a pointer to the object stored and not the array for > single element arrays? > > Or maybe I don't want to be doing this? Well, I'm not totally sure I know what you mean, but I think what you want can be done in a reasonable way. Numpy has two subtly different kinds of object, scalars and rank zero arrays; I think rank zero arrays can be made to do what you want, more or less, but they're kind of a neglected corner case. But a rank-1 array that happens to have only one element should behave in a perfectly reasonable fashion, and indexing it with [0] should work fine. The only trick is constructing one the right way: if A is a rank-1 array and we want to make a view that lets us get at element j, we can do e = A[j:j+1] and then, for example, e[0] = newvalue Anne From alan at ajackson.org Wed Apr 30 23:03:41 2008 From: alan at ajackson.org (Alan Jackson) Date: Wed, 30 Apr 2008 22:03:41 -0500 Subject: [SciPy-user] problem referencing single element arrays In-Reply-To: <3d375d730804301939vcae770aj3c11dc6e27384d29@mail.gmail.com> References: <20080430211152.5164af7c@ajackson.org> <3d375d730804301939vcae770aj3c11dc6e27384d29@mail.gmail.com> Message-ID: <20080430220341.503b0fa6@ajackson.org> On Wed, 30 Apr 2008 21:39:53 -0500 "Robert Kern" wrote: > On Wed, Apr 30, 2008 at 9:11 PM, Alan Jackson wrote: > > Having a problem with numpy that has me stumped. > > > > I have a large set of arrays. 
I want to set up cross reference lists of > > elements within those arrays, with the lists just being pointers to the objects > > (floating point numbers) stored in the arrays. > > > > The problem is that some arrays have only a single element, so when I try to > > store a[0] for a single element array, I get an error, "0-d arrays can't be > > indexed". How do I get a pointer to the object stored and not the array for > > single element arrays? > > a[()] > > > Or maybe I don't want to be doing this? > > Quite possibly. Can you expand on your use case? > Imagine a 3D coordinate system. I have connected, non-recumbent sheets of various sizes in this 3D space. I set up a dictionary of arrays where a set of 3 arrays contains the X, Y, Z-values of the points on a sheet, the dictionary so I can name each sheet. I'd also like to access the data points vertically, at a particular X,Y what are all the sheets and points? That's the purpose of my cross reference. It's actually a bit more complicated than this - I have multiple attributes attached to each point on each sheet, as well as sheet-level attributes. I did this in perl a few years ago - I think it will be much better with numpy. I like that bit of magic you showed. -- ----------------------------------------------------------------------- | Alan K. Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. - Blake | -----------------------------------------------------------------------
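[A minimal sketch of the slice-view approach Anne suggested, applied to this kind of cross-reference; the sheet names and data values here are made up for illustration:]

import numpy as np

# two "sheets"; one has only a single point
sheets = {'top': np.array([1.5]), 'base': np.array([2.0, 2.5, 3.1])}

# cross-reference entries are length-1 slice views, not scalar copies,
# so they stay in sync with the sheet arrays
xref = [sheets['top'][0:1], sheets['base'][1:2]]

sheets['top'][0] = 9.9
assert xref[0][0] == 9.9   # the view sees the updated value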