From stefan at sun.ac.za Fri Aug 1 03:04:12 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Fri, 1 Aug 2008 09:04:12 +0200
Subject: [SciPy-user] scipy.io.numpyio fwrite - appending or updating an array
In-Reply-To: <48927807.3080603@visualreservoir.com>
References: <489238C1.7040902@visualreservoir.com> <48927807.3080603@visualreservoir.com>
Message-ID: <9457e7c80808010004r1b094151r8159442150c0e541@mail.gmail.com>

Hi Brennan

2008/8/1 Brennan Williams <brennan.williams at visualreservoir.com>:
> I've tried replacing numpyio with both fopen and now also npfile but I'm
> getting the same problem, i.e. if I write a numpy array to the file,
> everything else before that position in the file is now zero. It is as
> if it is a new file, not an existing one.

Have you looked at SciPy's memmap class?

Regards
Stéfan

From sgarcia at olfac.univ-lyon1.fr Fri Aug 1 04:47:13 2008
From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA)
Date: Fri, 01 Aug 2008 10:47:13 +0200
Subject: [SciPy-user] Numpy : in place moving data strange behaviour
Message-ID: <4892CD91.8050609@olfac.univ-lyon1.fr>

Hi list,
I get strange behaviour from numpy if I try to move a part of my array
in place to another place in the same array. The behaviour is not the
same for 1D and for 2D. The solution is of course to copy the part of
the array that I move. See the code:

from scipy import *

# for 1D
a = arange(10)
print a
# a = [0 1 2 3 4 5 6 7 8 9]
a[0:7] = a[3:10]
print a
# OK : a = [3 4 5 6 7 8 9 7 8 9]

a = arange(10)
print a
a[3:10] = a[0:7]
print a
# OK a = [0 1 2 0 1 2 3 4 5 6]

# for 2D
a = concatenate( (arange(10)[newaxis,:] , arange(10)[newaxis,:]) )
print a
# [[0 1 2 3 4 5 6 7 8 9]
#  [0 1 2 3 4 5 6 7 8 9]]
a[:,0:7] = a[:,3:10]
print a
# OK a =
# [[3 4 5 6 7 8 9 7 8 9]
#  [3 4 5 6 7 8 9 7 8 9]]

a = concatenate( (arange(10)[newaxis,:] , arange(10)[newaxis,:]) )
print a
a[:,3:10] = a[:,0:7]
print a
# Not expected a =
# [[0 1 2 0 1 2 0 1 2 0]
#  [0 1 2 0 1 2 0 1 2 0]]

# solution: copy the moved part of the array
a = concatenate( (arange(10)[newaxis,:] , arange(10)[newaxis,:]) )
a[:,3:10] = a[:,0:7].copy()
print a
# OK a =
# [[0 1 2 0 1 2 3 4 5 6]
#  [0 1 2 0 1 2 3 4 5 6]]

Any explanation?

Thanks

Samuel

-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Samuel Garcia
Laboratoire de Neurosciences Sensorielles, Comportement, Cognition.
CNRS - UMR5020 - Universite Claude Bernard LYON 1
Equipe logistique et technique
50, avenue Tony Garnier
69366 LYON Cedex 07
FRANCE
Tél : 04 37 28 74 64
Fax : 04 37 28 76 01
http://olfac.univ-lyon1.fr/unite/equipe-07/
http://neuralensemble.org/trac/OpenElectrophy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From robert.kern at gmail.com Fri Aug 1 05:06:14 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 1 Aug 2008 04:06:14 -0500
Subject: [SciPy-user] Numpy : in place moving data strange behaviour
In-Reply-To: <4892CD91.8050609@olfac.univ-lyon1.fr>
References: <4892CD91.8050609@olfac.univ-lyon1.fr>
Message-ID: <3d375d730808010206g7cb51948g61618da43fae090a@mail.gmail.com>

On Fri, Aug 1, 2008 at 03:47, Samuel GARCIA wrote:
> Hi list,
> I get strange behaviour from numpy if I try to move a part of my array
> in place to another place in the same array. The behaviour is not the
> same for 1D and for 2D.

Basically, you're treading into areas where we make no guarantees. The
exact order we iterate over the arrays should be considered an
implementation detail, and you should not rely on it. This should only
affect your results in cases where you are modifying an array in place
and the source is an overlapping view of that very array.

Without diving into the source, I think what's happening is that the
1D version happens to deal with contiguous slices on both the LHS and
the RHS, so memcpy() is used or something similar which handles
overlaps specially. In the 2D case, neither side is contiguous, so we
iterate manually in the naive way. For example, we can construct a 1D
non-contiguous case which demonstrates the same behavior:

>>> from numpy import *
>>> a = arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[2:9:2] = a[0:7:2]
>>> a
array([0, 1, 0, 3, 0, 5, 0, 7, 0, 9])

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
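A minimal sketch of the workaround that follows from this (the values
are illustrative only): materialize the overlapping source with .copy()
so the assignment can never read elements it has already overwritten,
whatever iteration order numpy happens to use internally.

import numpy as np

a = np.arange(10)
a[2:9:2] = a[0:7:2]          # overlapping, non-contiguous: order-dependent

b = np.arange(10)
b[2:9:2] = b[0:7:2].copy()   # safe: the RHS is materialized before writing
print b                      # [0 1 0 3 2 5 4 7 6 9]

The same .copy() trick applies unchanged to the 2D case above.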
From brennan.williams at visualreservoir.com Fri Aug 1 05:14:09 2008
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Fri, 01 Aug 2008 21:14:09 +1200
Subject: [SciPy-user] scipy.io.numpyio fwrite - appending or updating an array
In-Reply-To: <48927807.3080603@visualreservoir.com>
References: <489238C1.7040902@visualreservoir.com> <48927807.3080603@visualreservoir.com>
Message-ID: <4892D3E1.40209@visualreservoir.com>

I'll look into memmap as Stefan suggested. If Robert Kern's out there,
do you have any comments about what I might be doing wrong? I don't
know memmap at all yet - basically each file will have multiple numpy
arrays written, read, appended and updated as required.

Brennan

Brennan Williams wrote:
> I've tried replacing numpyio with both fopen and now also npfile but I'm
> getting the same problem, i.e. if I write a numpy array to the file,
> everything else before that position in the file is now zero. It is as
> if it is a new file, not an existing one.
>
> Brennan Williams wrote:
>
>> I have an existing binary file containing numpy array data. It has been
>> created using open, fwrite & close and I can read the data using fread.
>>
>> I want to be able to either append a new array to the end of the file or
>> update an existing array within the file.
>>
>> I've tried opening the file with a mode of either 'ab+' or 'wb+' and
>> then writing the data using something like....
>>
>> fd = open(vfname, 'ab+')
>> if fd:
>>     filepos = (self.id-1)*self.yarray.size*4
>>     fd.seek(filepos)
>>     fwrite(fd, self.yarray.size, self.yarray, 'f')
>>     fd.close()
>>
>> When I use a mode of 'ab+' it looks like the data has been written to
>> the file ok (no errors reported) but when I read it back I get my
>> original data.
>>
>> When I use 'wb+' then my updated data gets written and read back ok. But
>> when I reload the file, everything apart from my updated data (i.e.
>> everything before it in the file) is now zero.
>>
>> The '+' in the mode seems to make no difference.
>>
>> What am I doing wrong?
>>
>> Thanks
>>
>> Bren.

From robert.kern at gmail.com Fri Aug 1 05:30:40 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 1 Aug 2008 04:30:40 -0500
Subject: [SciPy-user] scipy.io.numpyio fwrite - appending or updating an array
In-Reply-To: <4892D3E1.40209@visualreservoir.com>
References: <489238C1.7040902@visualreservoir.com> <48927807.3080603@visualreservoir.com> <4892D3E1.40209@visualreservoir.com>
Message-ID: <3d375d730808010230m6f1956e4sab5e5c3b10480f9a@mail.gmail.com>

On Fri, Aug 1, 2008 at 04:14, Brennan Williams wrote:
> I'll look into memmap as Stefan suggested. If Robert Kern's out there,
> do you have any comments about what I might be doing wrong?

I think you want 'rb+'. 'ab+' puts you at the end of the file for
writing (because you asked to append). I believe the '+' in 'wb+' is
simply ignored, so you are getting the truncating behavior of 'wb'.
I've only tested 'rb+' with file.write(), but ultimately, file.write()
and scipy.io.fwrite() both use C's fwrite(3) down at the bottom.

In [40]: f = open('foo.dat', 'wb')

In [41]: f.write('Foo!' * 4)

In [42]: f.close()

In [43]: open('foo.dat', 'rb').read()
Out[43]: 'Foo!Foo!Foo!Foo!'

In [44]: f = open('foo.dat', 'rb+')

In [45]: f.tell()
Out[45]: 0L

In [46]: f.seek(4)

In [47]: f.tell()
Out[47]: 4L

In [48]: f.write('Bar!')

In [49]: f.tell()
Out[49]: 8L

In [50]: f.close()

In [51]: open('foo.dat', 'rb').read()
Out[51]: 'Foo!Bar!Foo!Foo!'

But yeah, look at memmap arrays. Much nicer.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
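For Brennan's case, a minimal sketch of the memmap route (the file name
and layout below are assumptions standing in for his real data):
mode='r+' maps an existing file for in-place updates without truncating
it, playing the same role as 'rb+' does for a plain file.

import numpy as np

vfname = 'vectors.dat'       # hypothetical file written earlier with fwrite
nrecords, veclen = 10, 50    # assumed layout: 10 float32 vectors of length 50

recs = np.memmap(vfname, dtype=np.float32, mode='r+',
                 shape=(nrecords, veclen))
recs[3] = 1.0                # update the fourth stored vector in place
recs.flush()                 # write the change back; the rest is untouched

Appending is the one thing a memmap cannot do directly, since the
mapping is fixed at the file's size when it is opened; grow the file
first (e.g. with 'ab') and then re-open the map.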
From brennan.williams at visualreservoir.com Fri Aug 1 05:46:51 2008
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Fri, 01 Aug 2008 21:46:51 +1200
Subject: [SciPy-user] scipy.io.numpyio fwrite - appending or updating an array
In-Reply-To: <3d375d730808010230m6f1956e4sab5e5c3b10480f9a@mail.gmail.com>
References: <489238C1.7040902@visualreservoir.com> <48927807.3080603@visualreservoir.com> <4892D3E1.40209@visualreservoir.com> <3d375d730808010230m6f1956e4sab5e5c3b10480f9a@mail.gmail.com>
Message-ID: <4892DB8B.9090907@visualreservoir.com>

Robert

Thanks for the help. 'rb+' seems to work. 'ab+' or 'ab' is working ok.
'wb' or 'wb+' doesn't work, i.e. everything preceding the position I'm
writing to in the file becomes zero.

I re-read (slowly this time) the online documentation - in fopen the
permissions are the same as for the built-in open, so I looked at that
and yes, 'w+' will truncate. However, to me, truncation means
truncation "after", but it evidently also means truncation "before",
which just wasn't what I was expecting (probably due to a Fortran
background many years ago). It also seems strange to me to open a file
with 'r+' when I want to write to it. But there you go, it seems to be
working.

I will also look at memmap - as the size of my data files gets bigger
with larger datasets etc. it may well be very useful.

Brennan

Robert Kern wrote:
> On Fri, Aug 1, 2008 at 04:14, Brennan Williams wrote:
>
>> I'll look into memmap as Stefan suggested. If Robert Kern's out there,
>> do you have any comments about what I might be doing wrong?
>
> I think you want 'rb+'. 'ab+' puts you at the end of the file for
> writing (because you asked to append). I believe the '+' in 'wb+' is
> simply ignored, so you are getting the truncating behavior of 'wb'.
> I've only tested 'rb+' with file.write(), but ultimately, file.write()
> and scipy.io.fwrite() both use C's fwrite(3) down at the bottom.
>
> But yeah, look at memmap arrays. Much nicer.

From elmico.filos at gmail.com Fri Aug 1 06:28:03 2008
From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=)
Date: Fri, 1 Aug 2008 12:28:03 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To:
References:
Message-ID:

Hi,

Please, correct me if I am wrong. Is PyDSTool a package which aims at
providing the functionalities of, say, XPP
(http://www.math.pitt.edu/~bard/xpp/xpp.html)? That would be great.
Although it is true that much can already be done with the existing
SciPy functions, it would be very helpful to have some wrapper or some
integrated environment (using ipython, matplotlib, NumPy/SciPy) for
the analysis of nonlinear systems. But perhaps this is not exactly
what the developers of PyDSTool have in mind.

If this is the case, I would appreciate hearing any suggestions about
possible NumPy/SciPy-based packages (or tips) that allow users to play
with nonlinear systems in an interactive and easy way (plotting
trajectories with different initial conditions, changing parameters,
drawing bifurcation diagrams, etc.).

Best,
M.

From david.huard at gmail.com Fri Aug 1 09:23:41 2008
From: david.huard at gmail.com (David Huard)
Date: Fri, 1 Aug 2008 09:23:41 -0400
Subject: [SciPy-user] Is there a collection of useful functions/modules?
In-Reply-To: <267237.208.qm@web32905.mail.mud.yahoo.com>
References: <267237.208.qm@web32905.mail.mud.yahoo.com>
Message-ID: <91cf711d0808010623s6e32a2fboecb11f681a2bfa7a@mail.gmail.com>

Hi Joshua,

There is a scipy cookbook at http://www.scipy.org/Cookbook

It's a great place to put those kinds of functions, but also to write
your thoughts on scipy. Having a newcomer's perspective is always
useful for other folks starting with Python.
Welcome aboard,

David

On Wed, Jul 30, 2008 at 10:42 AM, Joshua wrote:
> I'm very new to Python, as in only a week of programming, and was
> wondering if there is a page with a collection of highly useful
> functions/modules that are not necessarily maintained on a
> release-to-release basis, for things like performing FFTs on data in a file.
>
> I wrote a function to do this using Numpy, and put in options to normalize
> the FFT for me, and take the absolute value. There are many other options I
> should add, but before I add complexity, I wouldn't mind having my code
> scrutinized, and made available for others to use and optimize. That way
> students and researchers don't have to reinvent the wheel, unless they want
> to or are using some odd formatting scheme. Just have the amplitude data in
> the list column-wise.
>
> exempli gratia:
> 0.0
> 0.1
> 0.0
> -0.1
> ... et cetera.
>
> This code makes no particular frequency sampling assumptions (other than
> that the sampling was done correctly), and only deals with simple FFT
> processing.
>
> I've attached the code and welcome scrutiny.
>
> There are a few things that I know I should add/restructure in my code:
> better error handling, printing out the phase, multi-dimensional analysis,
> and |Magnitude| in 20dB.
>
> Thanks for your help
> Joshua

From rob.clewley at gmail.com Fri Aug 1 11:30:59 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Fri, 1 Aug 2008 11:30:59 -0400
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To:
References:
Message-ID:

> Please, correct me if I am wrong. Is PyDSTool a package which aims at
> providing the functionalities of, say, XPP
> (http://www.math.pitt.edu/~bard/xpp/xpp.html)? That would be great.

In part, yes. It's all here:
http://www.cam.cornell.edu/~rclewley/cgi-bin/moin.cgi/ProjectOverview
and in the rest of the documentation on the site.

From nwagner at iam.uni-stuttgart.de Fri Aug 1 12:10:50 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 01 Aug 2008 18:10:50 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To:
References:
Message-ID:

On Fri, 1 Aug 2008 11:30:59 -0400 "Rob Clewley" wrote:
>> Please, correct me if I am wrong. Is PyDSTool a package which aims at
>> providing the functionalities of, say, XPP
>> (http://www.math.pitt.edu/~bard/xpp/xpp.html)? That would be great.
>
> In part, yes. It's all here:
> http://www.cam.cornell.edu/~rclewley/cgi-bin/moin.cgi/ProjectOverview
> and in the rest of the documentation on the site.

BTW, the link to AUTO2000
http://sourceforge.net/projects/auto2000/
seems to be outdated.

Cheers,
Nils

From spmcinerney at hotmail.com Fri Aug 1 17:03:45 2008
From: spmcinerney at hotmail.com (Stephen McInerney)
Date: Fri, 1 Aug 2008 14:03:45 -0700
Subject: [SciPy-user] SciPy Conference: tentative BOF schedule?
Message-ID:

Hi,

Is there any tentative schedule for the BOFs yet?
- what evenings are these BOFs planned for? (Tue/Wed/Thu/Fri/Sat?)
- does anyone get together in the Thu/Fri lunch slots?
- does anything happen on the Tue/Wed (tutorial day) evenings?
All that http://conference.scipy.org/bofs currently says is:
"Proposed BoF sessions:
- Testing
- Documentation
- Low-level code-wrapping and optimisation"

Is there anything on IC design, hardware, animation, visualization etc.?
Can you please update the webpage with tentative dates?

Thanks,
Stephen

From millman at berkeley.edu Sat Aug 2 16:29:15 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 2 Aug 2008 13:29:15 -0700
Subject: [SciPy-user] ANN: NumPy 1.1.1
Message-ID:

I'm pleased to announce the release of NumPy 1.1.1.

NumPy is the fundamental package needed for scientific computing with
Python. It contains:

* a powerful N-dimensional array object
* sophisticated (broadcasting) functions
* basic linear algebra functions
* basic Fourier transforms
* sophisticated random number capabilities
* tools for integrating Fortran code.

Besides its obvious scientific uses, NumPy can also be used as an
efficient multi-dimensional container of generic data. Arbitrary
data-types can be defined. This allows NumPy to seamlessly and speedily
integrate with a wide variety of databases.

Numpy 1.1.1 is a bug fix release featuring major improvements in Python
2.3.x compatibility and masked arrays. For information, please see the
release notes:
http://sourceforge.net/project/shownotes.php?group_id=1369&release_id=617279

Thank you to everybody who contributed to this release.

Enjoy,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From wesmckinn at gmail.com Sat Aug 2 17:21:02 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Sat, 2 Aug 2008 17:21:02 -0400
Subject: [SciPy-user] STARPAC, other time series analysis tools for use in python/scipy
Message-ID: <6c476c8a0808021421v32624540g4ea4d14ce999b42d@mail.gmail.com>

Hello all,

For the past several months I have been using numpy/scipy to do a
decent amount of linear modeling, time series analysis, etc. Nothing
too fancy, but I have been able to put the scipy.stats.models package
to good use. My code is fairly general but, like most things, has been
tailored specifically for my applications (which are primarily
financial/econometric in nature), and I have put together some classes
in a similar vein to the scikits TimeSeries package.

I am starting to think more big-picture about developing a substantial
toolkit in numpy/scipy that would be useful for econometricians and
financial engineers (among others), and in particular which would
replace / complement a lot of the functionality found in R and to some
extent in the toolboxes available for MATLAB (I am decidedly
anti-MATLAB). Toward that end, I am wondering if anyone knows of
anything that exists in Python for these purposes - the kind of stuff
that is done in RATS, gretl, the MATLAB ARMAX/GARCH toolbox or the
financial engineering toolbox. Any suggestions about a good starting
point? I have come across a smattering of code around the internet but
there doesn't seem to be much cohesion in this area of research.

I also saw that there is an existing Fortran library, STARPAC, for
regression / time series analysis, which may suggest some f2py work -
it doesn't look like anyone has wrapped this library yet. Is this a
worthwhile collection of tools? I have no Fortran / f2py experience,
but am interested enough to learn. Wrapping STARPAC would probably be a
pretty substantial project, maybe a worthwhile future addition to
scipy. Any advice here would be appreciated.

Thanks,
Wes McKinney
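For anyone curious what the f2py route involves, here is a minimal
sketch (the routine and module names are invented, not part of
STARPAC): numpy.f2py can compile a fixed-form Fortran source string
into an importable extension module in one call.

import numpy.f2py

fsource = """
      subroutine dmean(x, n, avg)
      integer n, i
      double precision x(n), avg
cf2py intent(out) avg
      avg = 0d0
      do 10 i = 1, n
        avg = avg + x(i)
   10 continue
      avg = avg / n
      end
"""
numpy.f2py.compile(fsource, modulename='demo')

import demo
print demo.dmean([1.0, 2.0, 3.0, 4.0])   # 2.5; n is inferred from len(x)

A real STARPAC wrapper would mostly be signature (.pyf) files
describing each routine's arguments, plus tests; the compile step
itself looks like the above.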
From aisaac at american.edu Sat Aug 2 17:42:10 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Sat, 2 Aug 2008 17:42:10 -0400
Subject: [SciPy-user] STARPAC, other time series analysis tools for use in python/scipy
In-Reply-To: <6c476c8a0808021421v32624540g4ea4d14ce999b42d@mail.gmail.com>
References: <6c476c8a0808021421v32624540g4ea4d14ce999b42d@mail.gmail.com>
Message-ID:

On Sat, 2 Aug 2008, Wes McKinney apparently wrote:
> I am starting to think more big picture about developing
> a substantial toolkit in numpy/scipy that would be useful
> for econometricians and financial engineers (among
> others), and in particular which would replace
> / complement a lot of the functionality found in R and to
> some extent in the toolboxes available for MATLAB (I am
> decidedly anti-matlab).

The pytrix component of econpy is for exactly such things. I started
this awhile back but then largely moved on to other things. You would
be welcome to contribute to econpy/pytrix any econometrics related
code. If it starts to evolve into an appropriate scikit, I will always
be amenable to moving pytrix (or any part) over to the SciPy scikits.
(License is MIT, which is SciPy compatible.)

Cheers,
Alan Isaac

From stefan at sun.ac.za Sun Aug 3 19:05:16 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 4 Aug 2008 01:05:16 +0200
Subject: [SciPy-user] Birds-of-a-feather sessions at SciPy'08
Message-ID: <9457e7c80808031605y5b66d7e7j12524ac6d3191844@mail.gmail.com>

Hi all,

SciPy'08 is just around the corner! We need topics for
birds-of-a-feather sessions. As the name indicates, these informal
meetings are aimed at bringing together people with a common interest,
and are normally held in the evenings after the day's events.

I've opened a page on the scipy wiki; please feel free to add your
idea, or to support someone else's by adding your vote:

http://www.scipy.org/SciPy2008/BoF

We probably won't be able to have more than 4 sessions.

Thanks to Stephen McInerney for the reminder.

Cheers
Stéfan

From spmcinerney at hotmail.com Sun Aug 3 20:07:56 2008
From: spmcinerney at hotmail.com (Stephen McInerney)
Date: Sun, 3 Aug 2008 17:07:56 -0700
Subject: [SciPy-user] Birds-of-a-feather sessions at SciPy'08
In-Reply-To: <9457e7c80808031605y5b66d7e7j12524ac6d3191844@mail.gmail.com>
References: <9457e7c80808031605y5b66d7e7j12524ac6d3191844@mail.gmail.com>
Message-ID:

[Glen Jarvis had inquired about BioPython. He's not on the list I
think, so you might like to cc: him if replying on this.]

> Glen Jarvis wrote:
> I had asked about SciPy last year and was sad I missed it. However,
> I don't see anything on the agenda for BioPython this year =(
>
> Is it just not as popular in the SciPy community? I know BioPerl
> still has more "market share," but I'd like to learn more because:
> I "heart" me python... I don't know BioPython at all, but wanted to
> know more.. I've done some graduate work in Bioinformatics...
>
> Cheers, Glen

> Date: Mon, 4 Aug 2008 01:05:16 +0200
> From: stefan at sun.ac.za
> To: scipy-user at scipy.org
> Subject: Birds-of-a-feather sessions at SciPy'08
>
> Hi all,
>
> SciPy'08 is just around the corner! We need topics for
> birds-of-a-feather sessions. As the name indicates, these informal
> meetings are aimed at bringing together people with a common interest,
> and are normally held in the evenings after the day's events.
> I've opened a page on the scipy wiki; please feel free to add your
> idea, or to support someone else's by adding your vote:
>
> http://www.scipy.org/SciPy2008/BoF
>
> We probably won't be able to have more than 4 sessions.
>
> Thanks to Stephen McInerney for the reminder.
>
> Cheers
> Stéfan

From ebrosh at nana10.co.il Sun Aug 3 22:14:59 2008
From: ebrosh at nana10.co.il (Eli Brosh)
Date: Mon, 4 Aug 2008 05:14:59 +0300
Subject: [SciPy-user] OpenOpt: ralg crash
Message-ID: <957526FB6E347743AAB42B212AB54FDA95BB4F@NANAMAILBACK1.nanamail.co.il>

Hello,

I encountered problems with the ralg solver in OpenOpt. The example
nlp_3.py, supplied with the OpenOpt library, crashed on two computers.
On an Ubuntu Linux machine, the program simply stopped without further
information. On a Windows machine with the Enthought suite, there was a
Windows error message: "pythonw caused a fatal error and it will be
closed". The problem disappeared when I erased ralg from the solvers
list. It did not occur with the lincher and scipy_cobyla solvers. On
both machines, the OpenOpt installation was from the latest tarball (no
more than a week old).

What is the problem?
Can it be corrected?

Another small request concerning OpenOpt: is it possible to provide an
example of the use of scipy_tnc and scipy_lbfgsb from OpenOpt? Can
these solvers work with linear equality constraints?

Thanks
Eli

From dmitrey.kroshko at scipy.org Mon Aug 4 01:59:38 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 04 Aug 2008 08:59:38 +0300
Subject: [SciPy-user] OpenOpt: ralg crash
In-Reply-To: <957526FB6E347743AAB42B212AB54FDA95BB4F@NANAMAILBACK1.nanamail.co.il>
References: <957526FB6E347743AAB42B212AB54FDA95BB4F@NANAMAILBACK1.nanamail.co.il>
Message-ID: <48969ACA.5080700@scipy.org>

Hi Eli,

It seems the bug was due to svn conflict changes (check your
scikits/openopt/solvers/UkrOpt/ralg_oo.py, line 156; svn has put
"<<<<<<< .mine" there). Try the latest tarball now (or download from
subversion, where the bug was absent).

> Another small request concerning OpenOpt: is it possible to provide an
> example of the use of scipy_tnc and scipy_lbfgsb from OpenOpt? Can
> these solvers work with linear equality constraints?

The examples are absolutely the same as nlp1, nlp2, nlp3, nlp_bench_1,
nlp_bench_2 etc. But these solvers can use only lb <= x <= ub
constraints.

Regards, D.
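To make that concrete, a minimal sketch of driving one of the
bound-constrained scipy solvers through OpenOpt's NLP class, in the
style of the shipped nlp examples (the objective and numbers are
invented; only lb/ub box constraints are passed, since scipy_lbfgsb and
scipy_tnc accept nothing else):

from numpy import asfarray
from scikits.openopt import NLP

f = lambda x: ((x - 1.5)**2).sum()       # toy objective
x0 = asfarray([8, 15, 80])               # starting point

p = NLP(f, x0, lb=[0, 0, 0], ub=[10, 10, 10])
r = p.solve('scipy_lbfgsb')              # or 'scipy_tnc'
print r.xf, r.ff                         # minimizer and objective found

A problem with linear equality constraints (Aeq, beq) would have to go
to a solver such as lincher or ralg instead.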
From cyril.giraudon at free.fr Mon Aug 4 05:31:29 2008
From: cyril.giraudon at free.fr (cyril giraudon)
Date: Mon, 04 Aug 2008 11:31:29 +0200
Subject: [SciPy-user] scipy.stats.gamma usage.
Message-ID: <4896CC71.2030602@free.fr>

Hi,

I am not an expert in statistics and need to validate some computations.
I am currently trying to use the scipy.stats.gamma.pdf function.

I don't know how to use the function to obtain the results presented in
Wikipedia: http://en.wikipedia.org/wiki/Gamma_distribution.

Can anybody help me?

Thanks a lot,

Cyril.

From ebrosh1 at gmail.com Mon Aug 4 08:59:48 2008
From: ebrosh1 at gmail.com (Eli Brosh)
Date: Mon, 4 Aug 2008 08:59:48 -0400
Subject: [SciPy-user] OpenOpt: ralg crash
In-Reply-To: <48969ACA.5080700@scipy.org>
References: <957526FB6E347743AAB42B212AB54FDA95BB4F@NANAMAILBACK1.nanamail.co.il> <48969ACA.5080700@scipy.org>
Message-ID: <9dbf42130808040559h60e9a82bx131863210d4f005e@mail.gmail.com>

Thank you Dmitrey,

I downloaded the latest tarball (uploaded 08/04/08 00:54:17) and
installed it. Before installing, I erased the older version from
site-packages. However, ralg still crashes, at least under Windows. I
did not find the bug you mentioned in ralg_oo.py. Could it be that the
bug is caused by my numpy version? (On Windows I use 1.0.4, which came
with the Enthought suite.)

Thanks
Eli

On Mon, Aug 4, 2008 at 1:59 AM, dmitrey wrote:
> Hi Eli,
> It seems the bug was due to svn conflict changes (check your
> scikits/openopt/solvers/UkrOpt/ralg_oo.py, line 156; svn has put
> "<<<<<<< .mine" there). Try the latest tarball now (or download from
> subversion, where the bug was absent).
>
> The examples are absolutely the same as nlp1, nlp2, nlp3, nlp_bench_1,
> nlp_bench_2 etc. But these solvers can use only lb <= x <= ub
> constraints.
>
> Regards, D.

From tgrav at mac.com Mon Aug 4 09:29:39 2008
From: tgrav at mac.com (Tommy Grav)
Date: Mon, 4 Aug 2008 09:29:39 -0400
Subject: [SciPy-user] rev 4595 broken
Message-ID: <56B82C56-1605-49E0-B3F5-9249385DC79A@mac.com>

I tried to install the current svn trunk and ran into this problem:

skathi:~] tgrav% python
ActivePython 2.5.1.1 (ActiveState Software Inc.) based on
Python 2.5.1 (r251:54863, May 1 2007, 17:40:00)
[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'1.1.1'
>>> import scipy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/__init__.py", line 86, in <module>
    from numpy.testing import Tester
ImportError: cannot import name Tester
>>>

Is the current bleeding edge copy broken?

Cheers
Tommy Grav

+------------------------------------------------------------------+
Associate Research Scientist
Dept. of Physics and Astronomy
Johns Hopkins University
Bloomberg 243
3400 N. Charles St.
Baltimore, MD 21218
tgrav at pha.jhu.edu
(410) 516-7683
+------------------------------------------------------------------+

From aisaac at american.edu Mon Aug 4 10:23:07 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 4 Aug 2008 10:23:07 -0400
Subject: [SciPy-user] resend bounced msg
Message-ID:

On Sat, 2 Aug 2008, Wes McKinney apparently wrote:
> I am starting to think more big picture about developing
> a substantial toolkit in numpy/scipy that would be useful
> for econometricians and financial engineers (among
> others), and in particular which would replace
> / complement a lot of the functionality found in R and to
> some extent in the toolboxes available for MATLAB (I am
> decidedly anti-matlab).

The pytrix component of econpy is for exactly such things. I started
this awhile back but then largely moved on to other things. You would
be welcome to contribute to econpy/pytrix any econometrics related
code. If it starts to evolve into an appropriate scikit, I will always
be amenable to moving pytrix (or any part) over to the SciPy scikits.
(License is MIT, which is SciPy compatible.)

Cheers,
Alan Isaac

From rjel at ceh.ac.uk Mon Aug 4 10:11:20 2008
From: rjel at ceh.ac.uk (Richard Ellis)
Date: Mon, 04 Aug 2008 15:11:20 +0100
Subject: [SciPy-user] Sunperf libraries
Message-ID:

Hi,

I am trying to get SciPy to compile on a Solaris box. I managed to get
numpy to work and to tell SciPy where the sunperf libs are, but I
cannot see where I tell SciPy that sunperf is the alias for BLAS,
LAPACK etc.

Your help would be gratefully received.

Cheers,
Rich

****************************************
Richard Ellis.
Centre for Ecology and Hydrology,
Room 63, Mclean Building, Crowmarsh Gifford, Wallingford,
Oxfordshire, OX10 8BB, UK.
Tel: 01491 692571

This message (and any attachments) is for the recipient only. NERC is
subject to the Freedom of Information Act 2000 and the contents of this
email and any reply you make may be disclosed by NERC unless it is
exempt from release under the Act. Any material supplied to NERC may be
stored in an electronic records management system.
From dmitrey.kroshko at scipy.org Mon Aug 4 10:31:32 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 04 Aug 2008 17:31:32 +0300
Subject: [SciPy-user] OpenOpt: ralg crash
In-Reply-To: <9dbf42130808040559h60e9a82bx131863210d4f005e@mail.gmail.com>
References: <957526FB6E347743AAB42B212AB54FDA95BB4F@NANAMAILBACK1.nanamail.co.il> <48969ACA.5080700@scipy.org> <9dbf42130808040559h60e9a82bx131863210d4f005e@mail.gmail.com>
Message-ID: <489712C4.5050809@scipy.org>

Hi Eli,

Please try to run example3.py from a Linux terminal and report what it
outputs. As for numpy, I remember an error with 1.0.5 mentioned here
http://openopt.blogspot.com/2008/06/numpy-related-bug.html
but it was not triggered by ralg.

Regards, D.

Eli Brosh wrote:
> Thank you Dmitrey,
> I downloaded the latest tarball (uploaded 08/04/08 00:54:17) and
> installed it. Before installing, I erased the older version from
> site-packages. However, ralg still crashes, at least under Windows. I
> did not find the bug you mentioned in ralg_oo.py. Could it be that the
> bug is caused by my numpy version? (On Windows I use 1.0.4, which came
> with the Enthought suite.)
>
> Thanks
> Eli

From elcorto at gmx.net Mon Aug 4 11:04:56 2008
From: elcorto at gmx.net (Steve Schmerler)
Date: Mon, 4 Aug 2008 17:04:56 +0200
Subject: [SciPy-user] interactive work with extension modules
Message-ID: <20080804150456.GA3490@ramrod.de>

Hi all

I've read in the scipy and IPython archives that Python cannot reload
extension modules (C and Fortran). So if I change and re-compile my
extension and do

>>> reload(foo)
>>> foo.some_function(args)

in the [i]python shell, there is no change to "foo".
Even

>>> del foo
>>> import foo
>>> foo.some_function(args)

does not change the module foo in the interactive session (why?). So,
I'd like to hear how people develop/test extensions interactively,
then. The only option seems to be a script which imports foo:

import foo
foo.some_function(args)

But here, I can't pass `args` interactively anymore. This would
effectively kill all interactive work. Any thoughts?

steve

From ebrosh1 at gmail.com Mon Aug 4 11:26:03 2008
From: ebrosh1 at gmail.com (Eli Brosh)
Date: Mon, 4 Aug 2008 11:26:03 -0400
Subject: [SciPy-user] OpenOpt: ralg crash
In-Reply-To: <9dbf42130808040559h60e9a82bx131863210d4f005e@mail.gmail.com>
References: <957526FB6E347743AAB42B212AB54FDA95BB4F@NANAMAILBACK1.nanamail.co.il> <48969ACA.5080700@scipy.org> <9dbf42130808040559h60e9a82bx131863210d4f005e@mail.gmail.com>
Message-ID: <9dbf42130808040826l3fc88c36k1189e20c0aa0e288@mail.gmail.com>

Indeed,

The bug disappeared when I installed numpy 1.1.1.

Thanks to Dmitrey.

Eli
From robert.kern at gmail.com Mon Aug 4 11:26:39 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 4 Aug 2008 10:26:39 -0500
Subject: [SciPy-user] interactive work with extension modules
In-Reply-To: <20080804150456.GA3490@ramrod.de>
References: <20080804150456.GA3490@ramrod.de>
Message-ID: <3d375d730808040826k7480a2bfx412516077707dec2@mail.gmail.com>

On Mon, Aug 4, 2008 at 10:04, Steve Schmerler wrote:
> Hi all
>
> I've read in the scipy and IPython archives that Python cannot reload
> extension modules (C and Fortran). So if I change and re-compile my
> extension and do
>
> >>> reload(foo)
> >>> foo.some_function(args)
>
> in the [i]python shell, there is no change to "foo". Even
>
> >>> del foo
> >>> import foo
> >>> foo.some_function(args)
>
> does not change the module foo in the interactive session (why?). So,
> I'd like to hear how people develop/test extensions interactively, then.

By and large, you have to use a new process. Python extension modules
don't really have the capability to be unloaded. Consequently, they
can't really be reloaded, either.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
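One way to keep something close to an interactive workflow despite that
limitation (the module and function names below are placeholders):
drive each test from a small script and launch it in a throwaway
interpreter, so every run imports the freshly compiled .so in a
brand-new process.

# test_foo.py -- hypothetical driver for the compiled extension 'foo';
# after each rebuild run e.g. "python test_foo.py 1.0 2.0" so that a
# new interpreter, not a stale one, loads the extension.
import sys
import foo

args = [float(a) for a in sys.argv[1:]]   # test inputs from the command line
print foo.some_function(*args)

From within IPython this can be as simple as !python test_foo.py 1.0 2.0.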
From robert.kern at gmail.com Mon Aug 4 11:38:05 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 4 Aug 2008 10:38:05 -0500
Subject: [SciPy-user] rev 4595 broken
In-Reply-To: <56B82C56-1605-49E0-B3F5-9249385DC79A@mac.com>
References: <56B82C56-1605-49E0-B3F5-9249385DC79A@mac.com>
Message-ID: <3d375d730808040838o46ca2b6fvf136c09d01b00f37@mail.gmail.com>

On Mon, Aug 4, 2008 at 08:29, Tommy Grav wrote:
> I tried to install the current svn trunk and ran into this problem:
>
> >>> import numpy
> >>> numpy.__version__
> '1.1.1'
> >>> import scipy
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/__init__.py", line 86, in <module>
>     from numpy.testing import Tester
> ImportError: cannot import name Tester
>
> Is the current bleeding edge copy broken?

No. SVN scipy currently requires SVN numpy.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From alexandre.fayolle at logilab.fr Mon Aug 4 11:27:43 2008
From: alexandre.fayolle at logilab.fr (Alexandre Fayolle)
Date: Mon, 4 Aug 2008 17:27:43 +0200
Subject: [SciPy-user] interactive work with extension modules
In-Reply-To: <20080804150456.GA3490@ramrod.de>
References: <20080804150456.GA3490@ramrod.de>
Message-ID: <20080804152743.GA10842@logilab.fr>

On Mon, Aug 04, 2008 at 05:04:56PM +0200, Steve Schmerler wrote:
> Hi all
>
> I've read in the scipy and IPython archives that Python cannot reload
> extension modules (C and Fortran). So if I change and re-compile my
> extension and do
>
> >>> reload(foo)
> >>> foo.some_function(args)
>
> in the [i]python shell, there is no change to "foo".

I suspect that some_function is imported indirectly in foo.py (and not
defined in that module), because I'm able to do the following:

alf at lacapelle:~$ cat toto.py
def foo(a):
    return a

alf at lacapelle:~$ ipython

In [1]: import toto

In [2]: toto.foo(1)

# edit toto.py in another window so that:
# alf at lacapelle:~$ cat toto.py
# def foo(a, b):
#     return a + b

In [3]: reload(toto)
Out[3]: <module 'toto' from 'toto.py'>

In [4]: toto.foo(1,2)
Out[4]: 3

-- 
Alexandre Fayolle                       LOGILAB, Paris (France)
Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations
Développement logiciel sur mesure:      http://www.logilab.fr/services
Informatique scientifique:              http://www.logilab.fr/science

From alexandre.fayolle at logilab.fr Mon Aug 4 11:54:18 2008
From: alexandre.fayolle at logilab.fr (Alexandre Fayolle)
Date: Mon, 4 Aug 2008 17:54:18 +0200
Subject: [SciPy-user] interactive work with extension modules
In-Reply-To: <20080804152743.GA10842@logilab.fr>
References: <20080804150456.GA3490@ramrod.de> <20080804152743.GA10842@logilab.fr>
Message-ID: <20080804155418.GC10842@logilab.fr>

On Mon, Aug 04, 2008 at 05:27:43PM +0200, Alexandre Fayolle wrote:
> I suspect that some_function is imported indirectly in foo.py (and not
> defined in that module), because I'm able to do the following:

My bad, I hadn't noticed the 'extension' in 'extension module'.

-- 
Alexandre Fayolle                       LOGILAB, Paris (France)
Formations Python, Zope, Plone, Debian: http://www.logilab.fr/formations
Développement logiciel sur mesure:      http://www.logilab.fr/services
Informatique scientifique:              http://www.logilab.fr/science

From zachary.pincus at yale.edu Mon Aug 4 11:59:08 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 4 Aug 2008 09:59:08 -0600
Subject: [SciPy-user] scipy.stats.gamma usage.
In-Reply-To: <4896CC71.2030602@free.fr>
References: <4896CC71.2030602@free.fr>
Message-ID: <1B3F7B4B-9C94-43F8-AA0D-CD739A99DCA3@yale.edu>

> I am not an expert in statistics and need to validate some computations.
> I am currently trying to use the scipy.stats.gamma.pdf function.
>
> I don't know how to use the function to obtain the results presented in
> Wikipedia: http://en.wikipedia.org/wiki/Gamma_distribution.

Could you be a bit more specific about your question? Which "results"
from the Wikipedia page are you hoping to replicate?

Here's a thumbnail of the usage of the function, though:

scipy.stats.gamma.pdf([0, 1, 2, 3], 9, scale=0.5)

will give you the probability density at x = 0, 1, 2, and 3 of a gamma
distribution with the shape parameter (k on the Wikipedia page, a in
the scipy.stats.gamma docstring -- did you read the documentation for
scipy.stats.gamma?) set to 9 and the scale parameter (theta in
Wikipedia, 'scale' in scipy) set to 0.5.

Zach

On Aug 4, 2008, at 3:31 AM, cyril giraudon wrote:
> Hi,
>
> I am not an expert in statistics and need to validate some computations.
> I am currently trying to use the scipy.stats.gamma.pdf function.
>
> I don't know how to use the function to obtain the results presented in
> Wikipedia: http://en.wikipedia.org/wiki/Gamma_distribution.
>
> Can anybody help me?
>
> Thanks a lot,
>
> Cyril.
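A short sketch that reproduces the pdf figure from that Wikipedia page
(the shape/scale pairs below are read off the figure; pylab is assumed
for the plotting):

import numpy as np
import pylab as p
from scipy.stats import gamma

x = np.linspace(0, 20, 200)
# (k, theta) pairs as in the Wikipedia pdf plot
for k, theta in [(1, 2.0), (2, 2.0), (3, 2.0), (5, 1.0), (9, 0.5)]:
    p.plot(x, gamma.pdf(x, k, scale=theta),
           label='k=%g, theta=%g' % (k, theta))
p.legend()
p.show()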
From elcorto at gmx.net Mon Aug 4 12:04:35 2008
From: elcorto at gmx.net (Steve Schmerler)
Date: Mon, 4 Aug 2008 18:04:35 +0200
Subject: [SciPy-user] interactive work with extension modules
In-Reply-To: <20080804152743.GA10842@logilab.fr>
References: <20080804150456.GA3490@ramrod.de> <20080804152743.GA10842@logilab.fr>
Message-ID: <20080804160435.GB3490@ramrod.de>

On Aug 04 17:27, Alexandre Fayolle wrote:
> I suspect that some_function is imported indirectly in foo.py (and not
> defined in that module), because I'm able to do the following:

Alexandre, thank you, but I was talking about C and Fortran extensions
compiled into a *.so file.

steve

From simpson at math.toronto.edu Mon Aug 4 14:26:27 2008
From: simpson at math.toronto.edu (Gideon Simpson)
Date: Mon, 4 Aug 2008 14:26:27 -0400
Subject: [SciPy-user] complex integration
Message-ID: <05E7C762-C0F6-40FB-BAD2-D65213887326@math.toronto.edu>

Is there any extension of scipy's quadrature to complex numbers?

-gideon
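scipy.integrate.quad handles real integrands only, so the usual
workaround (a sketch, not an official API) is to integrate the real and
imaginary parts separately:

import numpy as np
from scipy.integrate import quad

def cquad(func, a, b):
    # Integrate a complex-valued func of a real variable over [a, b].
    re, re_err = quad(lambda t: func(t).real, a, b)
    im, im_err = quad(lambda t: func(t).imag, a, b)
    return re + 1j*im, re_err + 1j*im_err

# check: the integral of exp(i*t) over [0, pi] should be 2i
val, err = cquad(lambda t: np.exp(1j*t), 0, np.pi)
print val    # approximately 2j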
From jh at physics.ucf.edu Mon Aug 4 16:42:12 2008
From: jh at physics.ucf.edu (Joe Harrington)
Date: Mon, 04 Aug 2008 16:42:12 -0400
Subject: [SciPy-user] WTFM!
Message-ID:

SciPy Documentation Marathon 2008 Status Report

We are now nearing the end of the summer. We have a ton of great
docstrings, a nice PDF and HTML reference guide, a new package with
pages on general topics like slicing, and a glossary. We had hoped to
have all the numpy docstrings in first-draft form in time for the
pre-fall (1.2) release. The actual number of pages was more than double
our quick goal-setting assessment, so we won't make it. As of this
moment, we have:

status                      %   pages
Needs editing              52     430
Being written / Changed    27     226
Needs review               18     152
Needs review (revised)      0       1
Needs work (reviewed)       0       3
Reviewed (needs proof)      2      19
Proofed                     0       0
Unimportant                      1531

Our current status can always be seen at:
http://sd-2116.dedibox.fr/pydocweb/stats/

Definitions of the categories are also on the wiki, but "being written"
is our first-draft category.

So, we're just shy of halfway there, and since the goal more than
doubled, we can say we have not failed our expectations. But we haven't
succeeded, either, and we certainly haven't finished. So, this being a
marathon, we're not going to stop! Please join us if you haven't
already for the...

PRE-CONFERENCE DOC BLITZ!

We can, quite realistically, get up to 60% for numpy 1.2. We've had
several 8% weeks this summer and we've got several weeks to go. Stefan
will merge docstrings into the beta for 1.2 on 5 August, and will
continue merging from the wiki for the release candidates and final
cut. However, writing has slowed to a crawl in recent weeks. Please
pitch in to help those who are still writing so we can get to 60% by
release 1.2.

Looking further ahead, I hope all the volunteers will continue writing
for the rest of the summer and fall, so that we can put 100% decent
drafts into 1.3, and a 100% reviewed set of docstrings into 1.4. Then
we can turn our attention to scipy.

Enough stick, here's some carrot: this is the design for this year's
documentation prize, a T-shirt in Robert Kern black, designed by Teresa
Jeffcott:

http://physics.ucf.edu/~jh/scipyshirt-2008-2.png

We'll hand these out at SciPy '08 to anyone who has written 1000 words
or more (according to the stats page) or who has made an equivalent
contribution in other ways (reviewing, wiki creation, etc.; Stefan and
I will judge). So far, 11 contributors qualify, but several more could
easily reach that goal in time. In fact, several of our volunteer
writers have produced 1000 words in one week. The offer remains good
through the first-draft phase, though you'll need to act quickly to get
your docs into 1.2 and be recognized at the conference! If you won't be
at SciPy '08 or if you qualify later, we'll mail one to you.

As always, further discussion belongs on the scipy-dev mailing list, as
does your request to be added to the doc wiki editors group, so head
over to the doc wiki main page, establish an account there, and give us
a shout!

WTFM,

--jh--

From stefan at sun.ac.za Mon Aug 4 17:19:18 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 4 Aug 2008 23:19:18 +0200
Subject: [SciPy-user] scipy.stats.gamma usage.
In-Reply-To: <1B3F7B4B-9C94-43F8-AA0D-CD739A99DCA3@yale.edu>
References: <4896CC71.2030602@free.fr> <1B3F7B4B-9C94-43F8-AA0D-CD739A99DCA3@yale.edu>
Message-ID: <9457e7c80808041419m8c026d3p8ab2fd8b91e9a547@mail.gmail.com>

2008/8/4 Zachary Pincus <zachary.pincus at yale.edu>:
> Here's a thumbnail of the usage of the function, though:
>
> scipy.stats.gamma.pdf([0, 1, 2, 3], 9, scale=0.5)
>
> will give you the probability density at x = 0, 1, 2, and 3 of a gamma
> distribution with the shape parameter (k on the Wikipedia page, a in
> the scipy.stats.gamma docstring -- did you read the documentation for
> scipy.stats.gamma?) set to 9 and the scale parameter (theta in
> Wikipedia, 'scale' in scipy) set to 0.5.

The documentation is pretty confusing, and could use some work. Even
just documenting the whole of NumPy proved challenging, but hopefully
we'll get to SciPy too sometime later this year.

Regards
Stéfan

From jdh2358 at gmail.com Mon Aug 4 17:31:13 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Mon, 4 Aug 2008 16:31:13 -0500
Subject: [SciPy-user] scipy.stats.gamma usage.
In-Reply-To: <4896CC71.2030602@free.fr>
References: <4896CC71.2030602@free.fr>
Message-ID: <88e473830808041431n502d213cudd26e02e33b59a6b@mail.gmail.com>

On Mon, Aug 4, 2008 at 4:31 AM, cyril giraudon wrote:
> Hi,
>
> I am not an expert in statistics and need to validate some computations.
> I am currently trying to use the scipy.stats.gamma.pdf function.
>
> I don't know how to use the function to obtain the results presented in
> Wikipedia: http://en.wikipedia.org/wiki/Gamma_distribution.
You may want to look at this example:

http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/py4science/examples/stats_distributions.py?revision=4657&view=markup

which uses the scipy.stats gamma and other distributions to compare
some empirical PDFs w/ the analytic results.

JDH

From lorenzo.isella at gmail.com Tue Aug 5 03:53:23 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Tue, 5 Aug 2008 09:53:23 +0200
Subject: [SciPy-user] Reading Empty Arrays
Message-ID:

Dear All,

I have a code which saves a set of data in files called file001,
file002, and so on. Each file contains a record of events which may or
may not have occurred; this is to say that, e.g., file004 can be empty.
I normally use pylab to read/write files; in my codes I normally use:

import scipy as s
import pylab as p

The problem appears when I try reading one of these arrays, e.g.:

my_array = p.load("file004")

since then I get an error message. These are my points:

(1) I do not know beforehand which files will be empty.
(2) Since I generate a lot of them, which I then post-process
automatically, I cannot select by hand which are empty and which are
not.
(3) Not saving file004 because it is empty would not be a good idea (I
read the output files sequentially when post-processing).

If reading file004 generated a zero-length array like the one you get
if you type

f = s.zeros(0)

then I would probably be fine, since in my post-processing, certain
loops depending on the length of the array would be (rightly) skipped
if the length of the array is zero.

How can I do that? Basically, I may be in trouble if I have to re-write
the code generating the results, whereas it would be ideal for me to be
able to read empty files as zero-length arrays.

Many thanks

Lorenzo Isella

From cimrman3 at ntc.zcu.cz Tue Aug 5 06:53:08 2008
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 05 Aug 2008 12:53:08 +0200
Subject: [SciPy-user] rev 4595 broken
In-Reply-To: <3d375d730808040838o46ca2b6fvf136c09d01b00f37@mail.gmail.com>
References: <56B82C56-1605-49E0-B3F5-9249385DC79A@mac.com> <3d375d730808040838o46ca2b6fvf136c09d01b00f37@mail.gmail.com>
Message-ID: <48983114.1060707@ntc.zcu.cz>

Robert Kern wrote:
> On Mon, Aug 4, 2008 at 08:29, Tommy Grav wrote:
>> I tried to install the current svn trunk and ran into this problem:
>>
>> skathi:~] tgrav% python
>> ActivePython 2.5.1.1 (ActiveState Software Inc.) based on
>> Python 2.5.1 (r251:54863, May 1 2007, 17:40:00)
>> [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
>> Type "help", "copyright", "credits" or "license" for more information.
>>>>> import numpy
>>>>> numpy.__version__
>> '1.1.1'
>>>>> import scipy
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in <module>
>>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/__init__.py", line 86, in <module>
>>     from numpy.testing import Tester
>> ImportError: cannot import name Tester
>>
>> Is the current bleeding edge copy broken?
>
> No. SVN scipy currently requires SVN numpy.

I have the same problem, and I cannot compile SVN numpy, see below. Do
you have an idea where the problem comes from?

r.
building 'numpy.core.multiarray' extension compiling C sources C compiler: i486-pc-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -fPIC creating build/temp.linux-i686-2.4 creating build/temp.linux-i686-2.4/numpy creating build/temp.linux-i686-2.4/numpy/core creating build/temp.linux-i686-2.4/numpy/core/src compile options: '-Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -I/usr/include/python2.4 -c' i486-pc-linux-gnu-gcc: numpy/core/src/multiarraymodule.c numpy/core/src/multiarraymodule.c:130: error: static declaration of 'PyArray_OverflowMultiplyList' follows non-static declaration numpy/core/src/arrayobject.c:10185: error: previous implicit declaration of 'PyArray_OverflowMultiplyList' was here numpy/core/src/multiarraymodule.c:130: error: static declaration of 'PyArray_OverflowMultiplyList' follows non-static declaration numpy/core/src/arrayobject.c:10185: error: previous implicit declaration of 'PyArray_OverflowMultiplyList' was here error: Command "i486-pc-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -fPIC -Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core/include/numpy -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -I/usr/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-i686-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 1 From ryanlists at gmail.com Tue Aug 5 07:22:56 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 5 Aug 2008 06:22:56 -0500 Subject: [SciPy-user] Reading Empty Arrays In-Reply-To: References: Message-ID: Can you test for the emptiness of the file doing something like this: myfile = open("file004", 'r') contents = myfile.readlines() if not contents: #empty case here else: #normal case here On Tue, Aug 5, 2008 at 2:53 AM, Lorenzo Isella wrote: > Dear All, > I have a code which saves in files called file001, file002, and so on > a set of data. > Now, each file contains a record of events which may or may not have occurred. > This is to say that, e.g., file004 can be empty. > I normally use pylab to read/write files; in my codes I normally use: > > import scipy as s > import pylab as p > > The problem appears when I try reading one of these arrays. > E.g.: > my_array=p.load("file004") > > since then I get an error message. > These are my points: > (1) I do not know beforehand which files will be empty > (2) Since I generate a lot of them which I then post-process > automatically, I cannot select by hand which are empty and which are > not. > (3) Not saving file004 since it is empty would not be a good idea (I > read the output files sequentially when postprocessing). > > If reading this file004 generated a zero-length array like the one you > have if you type: > > f=s.zeros(0) > > Then I would be probably fine, since in my postprocessing, certain > loops depending on the length of the array would be (rightly) skipped > if the length of the array is zero. > > How can I do that? Basically, I may be in trouble if I have to > re-write the code generating the results, whereas it would be ideal > for me to be able to read empty files as zero-length arrays. 
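A concrete version of Ryan's suggestion, sketched as a small helper: the function name is invented for illustration, and numpy.loadtxt is assumed to be a workable stand-in for whatever reader pylab.load wraps. Checking the file size up front avoids parsing empty files at all.

import os
import numpy

def load_or_empty(fname):
    # An empty event file becomes a zero-length array, so any later
    # loop over the length of the result is skipped naturally.
    if os.path.getsize(fname) == 0:
        return numpy.zeros(0)
    return numpy.loadtxt(fname)

my_array = load_or_empty("file004")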
> Many thanks > > Lorenzo Isella > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cimrman3 at ntc.zcu.cz Tue Aug 5 08:03:29 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 05 Aug 2008 14:03:29 +0200 Subject: [SciPy-user] rev 4595 broken In-Reply-To: <48983114.1060707@ntc.zcu.cz> References: <56B82C56-1605-49E0-B3F5-9249385DC79A@mac.com> <3d375d730808040838o46ca2b6fvf136c09d01b00f37@mail.gmail.com> <48983114.1060707@ntc.zcu.cz> Message-ID: <48984191.4050708@ntc.zcu.cz> Robert Cimrman wrote: > Robert Kern wrote: >> On Mon, Aug 4, 2008 at 08:29, Tommy Grav wrote: >>> I tried to install the current svn trunk and ran into this problem >>> >>> skathi:~] tgrav% python ActivePython 2.5.1.1 (ActiveState Software >>> Inc.) based on Python 2.5.1 (r251:54863, May 1 2007, 17:40:00) >>> [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type >>> "help", "copyright", "credits" or "license" for more information. >>>>>> import numpy numpy.__version__ >>> '1.1.1' >>>>>> import scipy >>> Traceback (most recent call last): File "", line 1, in >>> File >>> "/Library/Frameworks/Python.framework/Versions/2.5/lib/ >>> python2.5/site-packages/scipy/__init__.py", line 86, in >>> from numpy.testing import Tester ImportError: cannot import name >>> Tester >>> Is the current bleeding edge copy broken? >> No. SVN scipy currently requires SVN numpy. > > I have the same problem, and I cannot compile SVN numpy, see below. Do > you have an idea where does the problem come from? > > r. > > building 'numpy.core.multiarray' extension > compiling C sources > C compiler: i486-pc-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG > -fPIC > > creating build/temp.linux-i686-2.4 > creating build/temp.linux-i686-2.4/numpy > creating build/temp.linux-i686-2.4/numpy/core > creating build/temp.linux-i686-2.4/numpy/core/src > compile options: '-Ibuild/src.linux-i686-2.4/numpy/core/src > -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core/include/numpy > -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 > -I/usr/include/python2.4 -c' > i486-pc-linux-gnu-gcc: numpy/core/src/multiarraymodule.c > numpy/core/src/multiarraymodule.c:130: error: static declaration of > 'PyArray_OverflowMultiplyList' follows non-static declaration > numpy/core/src/arrayobject.c:10185: error: previous implicit declaration > of 'PyArray_OverflowMultiplyList' was here > numpy/core/src/multiarraymodule.c:130: error: static declaration of > 'PyArray_OverflowMultiplyList' follows non-static declaration > numpy/core/src/arrayobject.c:10185: error: previous implicit declaration > of 'PyArray_OverflowMultiplyList' was here > error: Command "i486-pc-linux-gnu-gcc -pthread -fno-strict-aliasing > -DNDEBUG -fPIC -Ibuild/src.linux-i686-2.4/numpy/core/src > -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core/include/numpy > -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 > -I/usr/include/python2.4 -c numpy/core/src/multiarraymodule.c -o > build/temp.linux-i686-2.4/numpy/core/src/multiarraymodule.o" failed with > exit status 1 I have persuaded numpy to compile: In [2]: nm.__version__ Out[2]: '1.2.0.dev5610' $ gcc --version gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.1) I had to do: Index: numpy/core/src/arrayobject.c =================================================================== --- numpy/core/src/arrayobject.c (revision 5610) +++ numpy/core/src/arrayobject.c (working copy) @@ -44,6 +44,10 @@ return 
priority; } +static intp +PyArray_OverflowMultiplyList(register intp *l1, register int n); + + static int _check_object_rec(PyArray_Descr *descr) { From cimrman3 at ntc.zcu.cz Tue Aug 5 08:27:43 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 05 Aug 2008 14:27:43 +0200 Subject: [SciPy-user] wrong compiler selected? Message-ID: <4898473F.4020602@ntc.zcu.cz> Hi, This is a follow-up to "rev 4595 broken" thread. I got numpy to compile, as I explained (but the e-mail has not arrived yet). Now I have another problem: building 'scipy.sparse.sparsetools._csr' extension compiling C++ sources C compiler: i486-pc-linux-gnu-g++ -pthread -fno-strict-aliasing -DNDEBUG -fPIC creating build/temp.linux-i686-2.4/scipy/sparse creating build/temp.linux-i686-2.4/scipy/sparse/sparsetools compile options: '-I/home/share/software/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c' i486-pc-linux-gnu-g++: scipy/sparse/sparsetools/csr_wrap.cxx scipy/sparse/sparsetools/csr_wrap.cxx: In function 'int require_size(PyArrayObject*, npy_intp*, int)': scipy/sparse/sparsetools/csr_wrap.cxx:2902: error: expected `)' before 'PRIdPTR' scipy/sparse/sparsetools/csr_wrap.cxx:2909: error: expected `)' before 'PRIdPTR' It looks like NPY_USE_C99_FORMATS is set to one, causing 'PRIdPTR' instead of 'd' to be used in NPY_INTP_FMT Is this a problem of numpy compiler selection, or else? SciPy compiles fine when I tweaked this manually. r. From ebrosh1 at gmail.com Tue Aug 5 16:09:24 2008 From: ebrosh1 at gmail.com (Eli Brosh) Date: Tue, 5 Aug 2008 16:09:24 -0400 Subject: [SciPy-user] OpenOpt : how to avoid output Message-ID: <9dbf42130808051309t50d8e037l1ca772269c5b0d4e@mail.gmail.com> Hello, When OpenOpt is called without graphical output as: r=p.solve('solver',plot=False) I still get lots of output to the screen in the python interpreter. Is it possible to silence this output ? i.e. to cancel the printing of iterations etc. into the interpreter prompt. I only need to get the final result and to put it in other variables like finalf=r.ff finalx=r.xf Thanks Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Wed Aug 6 01:45:04 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 06 Aug 2008 08:45:04 +0300 Subject: [SciPy-user] OpenOpt : how to avoid output In-Reply-To: <9dbf42130808051309t50d8e037l1ca772269c5b0d4e@mail.gmail.com> References: <9dbf42130808051309t50d8e037l1ca772269c5b0d4e@mail.gmail.com> Message-ID: <48993A60.1040008@scipy.org> hi Eli, rtfm http://scipy.org/scipy/scikits/wiki/OpenOptFAQ (or help(NLP)) Regards, D. Eli Brosh wrote: > Hello, > When OpenOpt is called without graphical output as: > r=p.solve('solver',plot=False) > I still get lots of output to the screen in the python interpreter. > Is it possible to silence this output ? > i.e. to cancel the printing of iterations etc. into the interpreter > prompt. 
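Expanding dmitrey's pointer for the archive: if memory serves, the relevant knob in the OpenOpt FAQ is the iprint setting. A sketch follows, with a toy objective standing in for a real problem; treat the exact field name and its semantics as an assumption to be checked against help(NLP).

from numpy import asarray
from scikits.openopt import NLP

f = lambda x: (asarray(x)**2).sum()   # toy objective, for illustration only
x0 = [1.0, 2.0]

p = NLP(f, x0)
p.iprint = -1        # a negative iprint should suppress the per-iteration text output
r = p.solve('ralg', plot=False)
finalf, finalx = r.ff, r.xf           # final objective value and minimizer, as Eli uses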
> > I only need to get the final result and to put it in other variables like > finalf=r.ff > finalx=r.xf > > Thanks > Eli > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ebrosh1 at gmail.com Wed Aug 6 02:04:13 2008 From: ebrosh1 at gmail.com (Eli Brosh) Date: Wed, 6 Aug 2008 02:04:13 -0400 Subject: [SciPy-user] OpenOpt: ralg crash In-Reply-To: <9dbf42130808040826l3fc88c36k1189e20c0aa0e288@mail.gmail.com> References: <957526FB6E347743AAB42B212AB54FDA95BB4F@NANAMAILBACK1.nanamail.co.il> <48969ACA.5080700@scipy.org> <9dbf42130808040559h60e9a82bx131863210d4f005e@mail.gmail.com> <9dbf42130808040826l3fc88c36k1189e20c0aa0e288@mail.gmail.com> Message-ID: <9dbf42130808052304h762cf205id43948c10670c6d0@mail.gmail.com> I checked again in ubuntu, with numpy 1.03. I installed the latest tarball of OpenOpt but the bug did appear. It looks like ralg somehow depends strongly on the numpy version. Eli On Mon, Aug 4, 2008 at 11:26 AM, Eli Brosh wrote: > Indeed, > The bug disappeared when I installed numpy 1.1.1 > Thanks to Dmitrey. > Eli > > > > > > On Mon, Aug 4, 2008 at 8:59 AM, Eli Brosh wrote: > >> Thank you Dmitrey, >> I downloaded the latest tarball (uploaded 08/04/08 00:54:17) and installed >> it. >> Before installing, I erased the older version from the site-packages. >> However, the ralg still crashes, at least under windows. >> I did not find the bug you mentioned in ralg_oo.py. >> Could it be that the bug is caused by my numpy version ? >> (on windows I use 1.04 that came with the enthought suite) >> >> Thanks >> Eli >> >> >> On Mon, Aug 4, 2008 at 1:59 AM, dmitrey wrote: >> >>> Hi Eli, >>> It seems the bug was due to svn conflict changes >>> (check your scikits/openopt/solvers/UkrOpt/ralg_oo.py, line 156, svn has >>> put "<<<<<<< .mine" there). >>> Try now latest tarball (or use download from subversion, the bug was >>> absent there) >>> >>> > Another small request concerning OpenOpt. >>> > Is it possible to provide an example for use of scipy_tnc and >>> > scipy_lbfgsb from openopt ? >>> > Can these solvers work with linear equality constraints ? >>> > >>> Examples are absolutely same to nlp1, nlp2, nlp3, nlp_bench_1, >>> nlp_bench_2 etc. >>> But the solvers can use only lb<=x<=ub constraints. >>> Regards, D. >>> >>> >>> Eli Brosh wrote: >>> > >>> > Hello, >>> > >>> > I encountered problems with the ralg solver in OpenOpt. >>> > the example nlp_3.py, supplied with the OpenOpt library crashed on two >>> > computers. >>> > On an ubuntu linux machine, the program simply stopped without further >>> > information. >>> > On a windows machine, with entought suite, there was a windows error >>> > message "pythonw caused a fatal error and it will be closed". >>> > The problem disappeared when I erased ralg from the solvers list. It >>> > did not occur with lincher and scipy_cobyla >>> > solvers. >>> > On both machines, the openopt installation was from the latest tarball >>> > (no more than week old). >>> > >>> > What is the problem ? >>> > Can it be corrected ? >>> > >>> >>> > >>> > Thanks >>> > Eli >>> > >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dmitrey.kroshko at scipy.org Wed Aug 6 02:11:04 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 06 Aug 2008 09:11:04 +0300 Subject: [SciPy-user] OpenOpt: ralg crash In-Reply-To: <9dbf42130808052304h762cf205id43948c10670c6d0@mail.gmail.com> References: <957526FB6E347743AAB42B212AB54FDA95BB4F@NANAMAILBACK1.nanamail.co.il> <48969ACA.5080700@scipy.org> <9dbf42130808040559h60e9a82bx131863210d4f005e@mail.gmail.com> <9dbf42130808040826l3fc88c36k1189e20c0aa0e288@mail.gmail.com> <9dbf42130808052304h762cf205id43948c10670c6d0@mail.gmail.com> Message-ID: <48994078.8030505@scipy.org> Eli Brosh wrote: > I checked again in ubuntu, with numpy 1.03. > I installed the latest tarball of OpenOpt but the bug did appear. > It looks like ralg somehow depends strongly on the numpy version. > > Eli I had committed requirements to numpy v >= 1.1.0 to INSTALL.txt and mentioned it in OO Install page. Regards, D. > > On Mon, Aug 4, 2008 at 11:26 AM, Eli Brosh > wrote: > > Indeed, > The bug disappeared when I installed numpy 1.1.1 > Thanks to Dmitrey. > Eli > > > > > > On Mon, Aug 4, 2008 at 8:59 AM, Eli Brosh > wrote: > > Thank you Dmitrey, > I downloaded the latest tarball (uploaded 08/04/08 00:54:17) > and installed it. > Before installing, I erased the older version from the > site-packages. > However, the ralg still crashes, at least under windows. > I did not find the bug you mentioned in ralg_oo.py. > Could it be that the bug is caused by my numpy version ? > (on windows I use 1.04 that came with the enthought suite) > > Thanks > Eli > > > On Mon, Aug 4, 2008 at 1:59 AM, dmitrey > > > wrote: > > Hi Eli, > It seems the bug was due to svn conflict changes > (check your scikits/openopt/solvers/UkrOpt/ralg_oo.py, > line 156, svn has > put "<<<<<<< .mine" there). > Try now latest tarball (or use download from subversion, > the bug was > absent there) > > > Another small request concerning OpenOpt. > > Is it possible to provide an example for use of > scipy_tnc and > > scipy_lbfgsb from openopt ? > > Can these solvers work with linear equality constraints ? > > > Examples are absolutely same to nlp1, nlp2, nlp3, nlp_bench_1, > nlp_bench_2 etc. > But the solvers can use only lb<=x<=ub constraints. > Regards, D. > > > Eli Brosh wrote: > > > > Hello, > > > > I encountered problems with the ralg solver in OpenOpt. > > the example nlp_3.py, supplied with the OpenOpt library > crashed on two > > computers. > > On an ubuntu linux machine, the program simply stopped > without further > > information. > > On a windows machine, with entought suite, there was a > windows error > > message "pythonw caused a fatal error and it will be > closed". > > The problem disappeared when I erased ralg from the > solvers list. It > > did not occur with lincher and scipy_cobyla > > solvers. > > On both machines, the openopt installation was from the > latest tarball > > (no more than week old). > > > > What is the problem ? > > Can it be corrected ? 
> > > > > Thanks > Eli > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From lists at vrbka.net Wed Aug 6 03:16:06 2008 From: lists at vrbka.net (Lubos Vrbka) Date: Wed, 06 Aug 2008 09:16:06 +0200 Subject: [SciPy-user] sine transform prefactor - SOLVED In-Reply-To: <826c64da0807280707m3c340acv1b1d3413b340e0d2@mail.gmail.com> References: <9fddf64a0807260846r68c24f97k97fb964069bacfa4@mail.gmail.com> <488B4FD0.3070804@vrbka.net> <9fddf64a0807261115x426853e6wf888f7801472b539@mail.gmail.com> <826c64da0807280707m3c340acv1b1d3413b340e0d2@mail.gmail.com> Message-ID: <48994FB6.6070706@vrbka.net> hi guys, it seems that the problems i had with the prefactors involved in the sine transformation were more or less caused by my ignorance and a lack of knowledge (as usual) :) just in case somebody stumbles upon this issue in the future, i will summarize my 'findings'. the functions for the forward/inverse discrete sine transformation are themselves normalized to the number of sampling points. therefore, F' = DST(f) iDST(F') = f however, to get the real sine transformation of f, it's necessary to multiply the abovementioned expression by the step size in real space F = DST(f) dr this expression leads to the equivalent of the sine transformation, where the 2/pi normalization is carried out in the inverse transformation. for the equivalent of the unitary sine transform, one further multiplies by sqrt(2/pi) the same then applies for the inverse transformation, so in the end x = 2/pi dr dk iDST(DST(f)) but the dr dk product brings in the number of sampling points. for the sine transformation, dk = pi/(N dr) and, therefore, x = 2/pi pi/N iDST(DST(f)) = 2/N f in order to get the function f back, it's necessary to multiply the inverse FT by N/2. this essentially solves the question posted in the sine transformation weirdness 'thread' as well. -- Lubos _ at _" http://www.lubos.vrbka.net From fperez.net at gmail.com Wed Aug 6 03:48:04 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 6 Aug 2008 00:48:04 -0700 Subject: [SciPy-user] Python tools at the annual SIAM meeting In-Reply-To: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com> References: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com> Message-ID: Howdy, On Mon, Jul 28, 2008 at 4:09 PM, Dominique Orban wrote: > Fernando, > Congratulations on being selected for the highlights of the meeting. For > those of us who were not in San Diego, is there any chance to see the slides > of the talks in the three sessions you co-organized? That would be awesome. Sorry it took a bit long, but here they are (the ones I got from the speakers): https://cirl.berkeley.edu/fperez/py4science/2008_siam/ Cheers, f From david at ar.media.kyoto-u.ac.jp Wed Aug 6 05:47:04 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 06 Aug 2008 18:47:04 +0900 Subject: [SciPy-user] wrong compiler selected?
In-Reply-To: <4898473F.4020602@ntc.zcu.cz> References: <4898473F.4020602@ntc.zcu.cz> Message-ID: <48997318.1090209@ar.media.kyoto-u.ac.jp> Robert Cimrman wrote: > It looks like NPY_USE_C99_FORMATS is set to one, causing 'PRIdPTR' > instead of 'd' to be used in NPY_INTP_FMT > > Is this a problem of numpy compiler selection, or else? > > SciPy compiles fine when I tweaked this manually. > Nothing to do with the compiler. I added this change to numpy and did not think about checking how c++ reacted to this. It should work with a recent svn update of numpy (>=r5614). cheers, David From cimrman3 at ntc.zcu.cz Wed Aug 6 06:56:00 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 06 Aug 2008 12:56:00 +0200 Subject: [SciPy-user] wrong compiler selected? In-Reply-To: <48997318.1090209@ar.media.kyoto-u.ac.jp> References: <4898473F.4020602@ntc.zcu.cz> <48997318.1090209@ar.media.kyoto-u.ac.jp> Message-ID: <48998340.9080604@ntc.zcu.cz> David Cournapeau wrote: > Robert Cimrman wrote: >> It looks like NPY_USE_C99_FORMATS is set to one, causing 'PRIdPTR' >> instead of 'd' to be used in NPY_INTP_FMT >> >> Is this a problem of numpy compiler selection, or else? >> >> SciPy compiles fine when I tweaked this manually. >> > > Nothing to do with the compiler. I added this change to numpy and did > not think about checking how c++ reacted to this. It should work with a > recent svn update of numpy (>=r5614). Thanks David, it works now. r. From cyril.giraudon at free.fr Wed Aug 6 08:09:57 2008 From: cyril.giraudon at free.fr (cyril giraudon) Date: Wed, 06 Aug 2008 14:09:57 +0200 Subject: [SciPy-user] scipy.stats.gamma usage. In-Reply-To: <1B3F7B4B-9C94-43F8-AA0D-CD739A99DCA3@yale.edu> References: <4896CC71.2030602@free.fr> <1B3F7B4B-9C94-43F8-AA0D-CD739A99DCA3@yale.edu> Message-ID: <48999495.90308@free.fr> Zachary Pincus a écrit : >> I am not an expert in statistics and had to validate some >> computations. >> I am currently trying to use the scipy.stats.gamma.pdf function. >> >> I don't know how to use the function and obtain the results >> presented in >> wikipedia http://en.wikipedia.org/wiki/Gamma_distribution. >> > > Could you be a bit more specific about your question? Which "results" > from the wikipedia page are you hoping to replicate? > > Here's a thumbnail of the usage of the function though: > > scipy.stats.gamma.pdf([0, 1, 2, 3], 9, scale=0.5) > > will give you the probability density at x = 0, 1, 2, and 3 of a gamma > function with the shape parameter (k in the wikipedia page, a in the > scipy.stats.gamma docstring -- did you read the documentation for > scipy.stats.gamma?) set to 9 and the scale parameter (theta in > wikipedia, 'scale' in scipy) set to 0.5. > > Zach > > > On Aug 4, 2008, at 3:31 AM, cyril giraudon wrote: > It's OK now. Thanks a lot. Cyril. From fredmfp at gmail.com Wed Aug 6 08:54:54 2008 From: fredmfp at gmail.com (fred) Date: Wed, 06 Aug 2008 14:54:54 +0200 Subject: [SciPy-user] partially reading a file... Message-ID: <48999F1E.3080607@gmail.com> Hi, Let's say I want to read a (binary) file which contains a nx*ny*nz array. Is it possible to read a "sub-array" from this file, ie each block of (nx/4, ny/4, nz/4) for instance, without loading the whole file ? TIA.
Cheers, -- Fred From aisaac at american.edu Wed Aug 6 09:01:49 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 6 Aug 2008 09:01:49 -0400 Subject: [SciPy-user] Python tools at the annual SIAM meeting In-Reply-To: References: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com> Message-ID: On Wed, 6 Aug 2008, Fernando Perez apparently wrote: > https://cirl.berkeley.edu/fperez/py4science/2008_siam/ Great, but several links are broken. Cheers, Alan Isaac From oliphant at enthought.com Wed Aug 6 10:16:56 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 06 Aug 2008 09:16:56 -0500 Subject: [SciPy-user] partially reading a file... In-Reply-To: <48999F1E.3080607@gmail.com> References: <48999F1E.3080607@gmail.com> Message-ID: <4899B258.1000408@enthought.com> fred wrote: > Hi, > > Let's say I want to read a (binary) file which contains a nx*ny*nz array. > > Is it possible to read a "sub-array" from this file, ie each block of > (nx/4, ny/4, nz/4) for instance, without loading the whole file ? > An easy way to do this which forces the operating system to do the work of partial loading is to use a memory mapped file as the source of the array (i.e. a memmap array). Then, selecting out a block is as simple as slicing. -Travis From fredmfp at gmail.com Wed Aug 6 10:30:40 2008 From: fredmfp at gmail.com (fred) Date: Wed, 06 Aug 2008 16:30:40 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: <4899B258.1000408@enthought.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> Message-ID: <4899B590.4080406@gmail.com> Travis E. Oliphant a écrit : > fred wrote: >> Hi, >> >> Let's say I want to read a (binary) file which contains a nx*ny*nz array. >> >> Is it possible to read a "sub-array" from this file, ie each block of >> (nx/4, ny/4, nz/4) for instance, without loading the whole file ? >> > An easy way to do this which forces the operating system to do the work > of partial loading is to use a memory mapped file as the source of the > array (i.e. a memmap array). > > Then, selecting out a block is as simple as slicing. Maybe I should have mentioned this: the aim is to cut a "large" data file, _bigger_ than the total amount of available memory, into several files. Does memmap still apply ? Cheers, -- Fred From oliphant at enthought.com Wed Aug 6 13:43:50 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 06 Aug 2008 12:43:50 -0500 Subject: [SciPy-user] partially reading a file... In-Reply-To: <4899B590.4080406@gmail.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> Message-ID: <4899E2D6.7080209@enthought.com> fred wrote: > Travis E. Oliphant a écrit : > >> fred wrote: >> >>> Hi, >>> >>> Let's say I want to read a (binary) file which contains a nx*ny*nz array. >>> >>> Is it possible to read a "sub-array" from this file, ie each block of >>> (nx/4, ny/4, nz/4) for instance, without loading the whole file ? >>> >>> >> An easy way to do this which forces the operating system to do the work >> of partial loading is to use a memory mapped file as the source of the >> array (i.e. a memmap array). >> >> Then, selecting out a block is as simple as slicing. >> > Maybe I should have mentioned this: the aim is to cut a "large" data file, > _bigger_ than the total amount of available memory, into several files. > Absolutely memory mapping still applies --- it's a perfect application for it. But, you will probably need a 64-bit system.
Memory mapping is how the OS handles "virtual memory" which uses disk space to increase main memory. You are just using that idea directly with a memory mapped file. -Travis From fperez.net at gmail.com Wed Aug 6 13:49:58 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 6 Aug 2008 10:49:58 -0700 Subject: [SciPy-user] Python tools at the annual SIAM meeting In-Reply-To: References: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com> Message-ID: On Wed, Aug 6, 2008 at 6:01 AM, Alan G Isaac wrote: > On Wed, 6 Aug 2008, Fernando Perez apparently wrote: >> https://cirl.berkeley.edu/fperez/py4science/2008_siam/ > > Great, but several links are broken. Oops, sorry about that. I'm still getting the hang of rest2web, which I'm using for the site (fantastic, lightweight little rst-based site generator) and I missed building the file index correctly. Should be OK now, thanks for pointing this out. Cheers, f From fredmfp at gmail.com Wed Aug 6 13:53:12 2008 From: fredmfp at gmail.com (fred) Date: Wed, 06 Aug 2008 19:53:12 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: <4899E2D6.7080209@enthought.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> Message-ID: <4899E508.6050407@gmail.com> Travis E. Oliphant a ?crit : > Absolutely memory mapping still applies --- it's a perfect application > for it. But, you will probably need a 64-bit system. No problem. > Memory mapping is how the OS handles "virtual memory" which uses disk > space to increase main memory. You are just using that idea directly > with a memory mapped file. Ok. Thanks for the hint. Cheers, -- Fred From oliphant at enthought.com Wed Aug 6 14:14:35 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 06 Aug 2008 13:14:35 -0500 Subject: [SciPy-user] partially reading a file... In-Reply-To: <4899E508.6050407@gmail.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> Message-ID: <4899EA0B.4060205@enthought.com> fred wrote: > Travis E. Oliphant a ?crit : > > >> Absolutely memory mapping still applies --- it's a perfect application >> for it. But, you will probably need a 64-bit system. >> > No problem. > > >> Memory mapping is how the OS handles "virtual memory" which uses disk >> space to increase main memory. You are just using that idea directly >> with a memory mapped file. >> > Ok. > Thanks for the hint. > > More directly: Use numpy.memmap --- look at the docstring for example use and help on all the arguments available. But, something like this (untested): a = numpy.memmap(, mode='r', dtype=float, shape=(nx,ny,nz)) b = a[:nx/4,:ny/4,:nz/4] b.tofile() Should work... -Travis From fredmfp at gmail.com Wed Aug 6 14:18:19 2008 From: fredmfp at gmail.com (fred) Date: Wed, 06 Aug 2008 20:18:19 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: <4899EA0B.4060205@enthought.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> Message-ID: <4899EAEB.8090107@gmail.com> Travis E. Oliphant a ?crit : > More directly: > > Use numpy.memmap --- look at the docstring for example use and help on > all the arguments available. 
But, something like this (untested): > > a = numpy.memmap(, mode='r', dtype=float, shape=(nx,ny,nz)) > b = a[:nx/4,:ny/4,:nz/4] > b.tofile() > > Should work... Travis: tons of thanks ! :-)) Cheers, -- Fred From aisaac at american.edu Wed Aug 6 14:19:31 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 6 Aug 2008 14:19:31 -0400 Subject: [SciPy-user] Python tools at the annual SIAM meeting In-Reply-To: References: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com> Message-ID: On Wed, 6 Aug 2008, Fernando Perez apparently wrote: > Should be OK now Thanks! Alan From haase at msg.ucsf.edu Wed Aug 6 15:06:42 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 6 Aug 2008 21:06:42 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: <4899EA0B.4060205@enthought.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> Message-ID: On Wed, Aug 6, 2008 at 8:14 PM, Travis E. Oliphant wrote: > fred wrote: >> Travis E. Oliphant a écrit : >> >> >>> Absolutely memory mapping still applies --- it's a perfect application >>> for it. But, you will probably need a 64-bit system. >>> >> No problem. >> >> >>> Memory mapping is how the OS handles "virtual memory" which uses disk >>> space to increase main memory. You are just using that idea directly >>> with a memory mapped file. >>> >> Ok. >> Thanks for the hint. >> >> > > More directly: > > Use numpy.memmap --- look at the docstring for example use and help on > all the arguments available. But, something like this (untested): > > a = numpy.memmap(, mode='r', dtype=float, shape=(nx,ny,nz)) > b = a[:nx/4,:ny/4,:nz/4] > b.tofile() > Hi, We should already get used to using "//" instead of "/" if we want the result to be integer: So that is: > b = a[:nx//4,:ny//4,:nz//4] ... I'm just trying to advertise the 'future' (py3.0 or today: 'python -Qnew' ) so-called "true division"-feature .... Cheers, Sebastian Haase From fredmfp at gmail.com Wed Aug 6 15:17:50 2008 From: fredmfp at gmail.com (fred) Date: Wed, 06 Aug 2008 21:17:50 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: <4899EA0B.4060205@enthought.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> Message-ID: <4899F8DE.9080807@gmail.com> Travis E. Oliphant a écrit : > Should work... It does ! Travis, as Gaël likes to say, you are my hero :-))) Many many thanks. Cheers, -- Fred From fredmfp at gmail.com Wed Aug 6 15:56:57 2008 From: fredmfp at gmail.com (fred) Date: Wed, 06 Aug 2008 21:56:57 +0200 Subject: [SciPy-user] partially reading a file [corollary] In-Reply-To: <4899EA0B.4060205@enthought.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> Message-ID: <489A0209.4030209@gmail.com> Now, let's say I have scatter data in a big binary file (stored in the form (xi, yi, zi, vi)), like on the snapshot, showing a "small" scatter. How can I cut the scatter efficiently into several files, as in the previous mail ? I can use memmap to "read" the whole file, but after ? It's more of an algorithmic issue from my own point of view. TIA.
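Since the corollary is about chunking, here is a hedged sketch combining Travis's memmap recipe with a record dtype for the (xi, yi, zi, vi) layout. The file names, the four float64 fields, and the number of chunks are all invented for illustration; note the integer division "//" Sebastian advertised above.

import numpy

# one record per node, matching the assumed (xi, yi, zi, vi) layout
rec = numpy.dtype([('x', float), ('y', float), ('z', float), ('v', float)])

a = numpy.memmap('scatter.dat', mode='r', dtype=rec)  # length inferred from file size
nchunks = 8
step = len(a) // nchunks   # integer division; any remainder records are dropped here

for i in range(nchunks):
    # each slice is read lazily from disk and written back out as plain binary
    a[i * step:(i + 1) * step].tofile('scatter_%03d.dat' % i)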
Cheers, -- Fred -------------- next part -------------- A non-text attachment was scrubbed... Name: snapshot.png Type: image/png Size: 39120 bytes Desc: not available URL: From dominique.orban at gmail.com Wed Aug 6 16:53:33 2008 From: dominique.orban at gmail.com (Dominique Orban) Date: Wed, 6 Aug 2008 16:53:33 -0400 Subject: [SciPy-user] Python tools at the annual SIAM meeting In-Reply-To: References: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com> Message-ID: <8793ae6e0808061353p4a7433ebu7f43215f07a08e6e@mail.gmail.com> On Wed, Aug 6, 2008 at 2:19 PM, Alan G Isaac wrote: > On Wed, 6 Aug 2008, Fernando Perez apparently wrote: > > Should be OK now > > Thanks! > Alan > > Thanks, Fernando! This is really useful. Dominique -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dharhas.Pothina at twdb.state.tx.us Thu Aug 7 09:19:02 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Thu, 07 Aug 2008 08:19:02 -0500 Subject: [SciPy-user] Mapping a series of files. In-Reply-To: References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> Message-ID: <489AAFF6.63BA.009B.0@twdb.state.tx.us> Hi, I've been following the thread on 'partially reading a file' with some interest and have a related question. So I have a series of large binary data files (1_data.dat, 2_data.dat, etc) that represent a 3D time series of data. Right now I am cycling through all the files reading the entire dataset to memory and extracting the subset I need. This works but is extremely memory hungry and slow and I'm running out of memory for datasets more than a year long. I could calculate which few files contain the data I need and only read those in but that is a bit cumbersome and also doesn't help if I need a 1d or 2d slice of the whole time period. In the other thread Travis gave an example of using memmap to map a file to memory. Can I do this to with multiple files. ie use memmap to generate an array[x,y,z,t] that I can then use slicing to actually read what I need? Another complication is that each binary file has a header section and then a data section. By reading the first file I can calculate the offset for the data part of the file. thanks, - dharhas From jkoschwanez at cgr.harvard.edu Thu Aug 7 10:07:57 2008 From: jkoschwanez at cgr.harvard.edu (John Koschwanez) Date: Thu, 7 Aug 2008 10:07:57 -0400 Subject: [SciPy-user] "Use minimum ordering" message Message-ID: <831C4727-0B8A-434B-B87B-3ECEBE6589EC@cgr.harvard.edu> Simple problem. I'm relatively new to Python, and I'm writing a diffusion simulator using the sparse and linsolve modules. Each time I solve a matrix, I get the following message: "Use minimum degree ordering on A'+A." which results from a printf() in the get_perm_c.c file when a permutation matrix is set up. Questions: 1. Why is this message printed when (I assume) it is not a warning or an error? Any output like this slows down the solver when I'm solving tens of thousands of matrices. 2. Is there a way in Python to selectively ignore output statements like this? I don't want to comment out the source code and recompile. Thanks, John From fredmfp at gmail.com Thu Aug 7 11:36:21 2008 From: fredmfp at gmail.com (fred) Date: Thu, 07 Aug 2008 17:36:21 +0200 Subject: [SciPy-user] partially reading a file... 
In-Reply-To: <4899EA0B.4060205@enthought.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> Message-ID: <489B1675.2060905@gmail.com> Travis E. Oliphant a ?crit : > > Should work... I have tested the trick on a file which has 2.7 10**9 nodes, ie > 2**31 and I get the following message: File "/usr/local/lib/python2.5/site-packages/numpy/core/memmap.py", line 193, in __new__ mm = mmap.mmap(fid.fileno(), bytes, access=acc) ValueError: mmap length is greater than file size Is there a workaround to consider long integer (if this is the issue) ? TIA. Cheers, -- Fred From nicolas.chopin at bristol.ac.uk Thu Aug 7 11:37:01 2008 From: nicolas.chopin at bristol.ac.uk (Nicolas Chopin) Date: Thu, 07 Aug 2008 17:37:01 +0200 Subject: [SciPy-user] concatenate array with number Message-ID: <489B169D.2090406@bris.ac.uk> Hi list, I want to do this: x = concatenate( (x,x[-1]) ) i.e. append to 1d array x its last element. However, the only way I managed to do this is: x = concatenate( (x,array(x[-1],ndmin=1)) ) which is a bit cryptic. (if you remove ndmin, it does not work.) 1. Is there a better way? 2. Could concatenate accept floating point numbers as arguments for convenience? Thanks in advance, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Thu Aug 7 12:00:46 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 7 Aug 2008 18:00:46 +0200 Subject: [SciPy-user] Mapping a series of files. In-Reply-To: <489AAFF6.63BA.009B.0@twdb.state.tx.us> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489AAFF6.63BA.009B.0@twdb.state.tx.us> Message-ID: On Thu, Aug 7, 2008 at 3:19 PM, Dharhas Pothina wrote: > Hi, > > I've been following the thread on 'partially reading a file' with some interest and have a related question. > > So I have a series of large binary data files (1_data.dat, 2_data.dat, etc) that represent a 3D time series of data. Right now I am cycling through all the files reading the entire dataset to memory and extracting the subset I need. This works but is extremely memory hungry and slow and I'm running out of memory for datasets more than a year long. I could calculate which few files contain the data I need and only read those in but that is a bit cumbersome and also doesn't help if I need a 1d or 2d slice of the whole time period. > > In the other thread Travis gave an example of using memmap to map a file to memory. Can I do this to with multiple files. ie use memmap to generate an array[x,y,z,t] that I can then use slicing to actually read what I need? Another complication is that each binary file has a header section and then a data section. By reading the first file I can calculate the offset for the data part of the file. > Hi dharhas yes, you can do all these things, I'm doing this for 3d and 4d images files. What file format are you interested in ? I use MRC files ... Cheers, Sebastian Haase From haase at msg.ucsf.edu Thu Aug 7 12:08:43 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 7 Aug 2008 18:08:43 +0200 Subject: [SciPy-user] partially reading a file... 
In-Reply-To: <489B1675.2060905@gmail.com> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489B1675.2060905@gmail.com> Message-ID: On Thu, Aug 7, 2008 at 5:36 PM, fred wrote: > Travis E. Oliphant a écrit : >> >> Should work... > I have tested the trick on a file which has 2.7 10**9 nodes, ie > 2**31 > and I get the following message: > > File "/usr/local/lib/python2.5/site-packages/numpy/core/memmap.py", line > 193, in __new__ > mm = mmap.mmap(fid.fileno(), bytes, access=acc) > ValueError: mmap length is greater than file size > > Is there a workaround to consider long integer (if this is the issue) ? > > TIA. > Are you "really" on a 64-bit system ? Is this Linux ? Is your Python the original from the distro - or did you build it yourself ? Do a: >>> import sys;print sys.maxint Did you build numpy yourself or did you download a binary ? HTH, Sebastian From doutriaux1 at llnl.gov Thu Aug 7 12:15:43 2008 From: doutriaux1 at llnl.gov (Charles Doutriaux) Date: Thu, 07 Aug 2008 09:15:43 -0700 Subject: [SciPy-user] Mapping a series of files. In-Reply-To: <489AAFF6.63BA.009B.0@twdb.state.tx.us> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489AAFF6.63BA.009B.0@twdb.state.tx.us> Message-ID: <489B1FAF.1070709@llnl.gov> Dharhas, If your files can be converted to netcdf (or grib), then we have a tool to do exactly what you want. Basically you'd run cdscan -x full.xml *.nc and it would generate an xml file that simulates being a full file. Then, using our cdms2 read module, you would do f=cdms2.open('full.xml') data =f("var",time=('2008-1','2008-7')) It would figure out for you which files to open. You could even be more restrictive by selecting a sub region (latitude=(-20,20)) etc... for more info: http://cdat.sf.net C. Dharhas Pothina wrote: > Hi, > > I've been following the thread on 'partially reading a file' with some interest and have a related question. > > So I have a series of large binary data files (1_data.dat, 2_data.dat, etc) that represent a 3D time series of data. Right now I am cycling through all the files reading the entire dataset to memory and extracting the subset I need. This works but is extremely memory hungry and slow and I'm running out of memory for datasets more than a year long. I could calculate which few files contain the data I need and only read those in but that is a bit cumbersome and also doesn't help if I need a 1d or 2d slice of the whole time period. > > In the other thread Travis gave an example of using memmap to map a file to memory. Can I do this to with multiple files. ie use memmap to generate an array[x,y,z,t] that I can then use slicing to actually read what I need? Another complication is that each binary file has a header section and then a data section. By reading the first file I can calculate the offset for the data part of the file. > > thanks, > > - dharhas > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From Dharhas.Pothina at twdb.state.tx.us Thu Aug 7 12:24:49 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Thu, 07 Aug 2008 11:24:49 -0500 Subject: [SciPy-user] Mapping a series of files.
In-Reply-To: References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489AAFF6.63BA.009B.0@twdb.state.tx.us> Message-ID: <489ADB81.63BA.009B.0@twdb.state.tx.us> It isn't a standardized format. It is the output of a Fortran hydrodynamic circulation model called SELFE. The output files are fortran binaries. I could probably cycle through the files and convert them to netcdf one by one with a python script but it would be quicker and more space efficient if I could directly use the original outputs. thanks, - dharhas >>> "Sebastian Haase" 8/7/2008 11:00 AM >>> On Thu, Aug 7, 2008 at 3:19 PM, Dharhas Pothina wrote: > Hi, > > I've been following the thread on 'partially reading a file' with some interest and have a related question. > > So I have a series of large binary data files (1_data.dat, 2_data.dat, etc) that represent a 3D time series of data. Right now I am cycling through all the files reading the entire dataset to memory and extracting the subset I need. This works but is extremely memory hungry and slow and I'm running out of memory for datasets more than a year long. I could calculate which few files contain the data I need and only read those in but that is a bit cumbersome and also doesn't help if I need a 1d or 2d slice of the whole time period. > > In the other thread Travis gave an example of using memmap to map a file to memory. Can I do this to with multiple files. ie use memmap to generate an array[x,y,z,t] that I can then use slicing to actually read what I need? Another complication is that each binary file has a header section and then a data section. By reading the first file I can calculate the offset for the data part of the file. > Hi dharhas yes, you can do all these things, I'm doing this for 3d and 4d images files. What file format are you interested in ? I use MRC files ... Cheers, Sebastian Haase _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From Dharhas.Pothina at twdb.state.tx.us Thu Aug 7 12:29:31 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Thu, 07 Aug 2008 11:29:31 -0500 Subject: [SciPy-user] Mapping a series of files. In-Reply-To: <489B1FAF.1070709@llnl.gov> References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489AAFF6.63BA.009B.0@twdb.state.tx.us><489AAFF6.63BA.009B.0@twdb.state.tx.us> <489B1FAF.1070709@llnl.gov> Message-ID: <489ADC9B.63BA.009B.0@twdb.state.tx.us> There are some issues with converting to netcdf. Mainly the fact that there is no standard for unstructured grids in netcdf. Most of the tools work for structured grids. There have been a couple of attempts to come up with an unstructured grid netcdf standard but from what I can tell they petered out in 2006. We are struggling with this right now since we have a couple of different hydro models and are trying to define a common format so we can develop our analysis and vis tools. My present idea is to write a module that abstracts the details of each model format and allows me to load the data into python. Will your module work with unstructured grids? 
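Returning to the original multi-file question: a hedged sketch of the one-memmap-per-file approach Sebastian describes, using the offset argument of numpy.memmap to skip each header. Every name and number below (file pattern, dtype, shapes, header size) is a placeholder, since SELFE's real layout isn't spelled out in this thread.

import numpy

nfiles = 12
nx, ny, nz, nt = 50, 40, 20, 100   # per-file grid and time dimensions (assumed)
header_bytes = 1024                # would be computed from the first file in practice

# One read-only map per file; data are only touched when sliced.
maps = [numpy.memmap('%d_data.dat' % (i + 1), mode='r', dtype=numpy.float32,
                     offset=header_bytes, shape=(nt, nz, ny, nx))
        for i in range(nfiles)]

# Example: a 1d time series at a single (z, y, x) point over the whole period,
# reading only those slices from disk.
series = numpy.concatenate([m[:, 5, 10, 20] for m in maps])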
- dharhas >>> Charles Doutriaux 8/7/2008 11:15 AM >>> Dharhas, If your files can be converted to netcdf (or grib), then we have a tool to do exactly what you want. Basically you'd run cdscan -x full.xml *.nc and it would generate an xml file that simulates being a full file. Then, using our cdms2 read module, you would do f=cdms2.open('full.xml') data =f("var",time=('2008-1','2008-7')) It would figure out for you which files to open. You could even be more restrictive by selecting a sub region (latitude=(-20,20)) etc... for more info: http://cdat.sf.net C. Dharhas Pothina wrote: > Hi, > > I've been following the thread on 'partially reading a file' with some interest and have a related question. > > So I have a series of large binary data files (1_data.dat, 2_data.dat, etc) that represent a 3D time series of data. Right now I am cycling through all the files reading the entire dataset to memory and extracting the subset I need. This works but is extremely memory hungry and slow and I'm running out of memory for datasets more than a year long. I could calculate which few files contain the data I need and only read those in but that is a bit cumbersome and also doesn't help if I need a 1d or 2d slice of the whole time period. > > In the other thread Travis gave an example of using memmap to map a file to memory. Can I do this to with multiple files. ie use memmap to generate an array[x,y,z,t] that I can then use slicing to actually read what I need? Another complication is that each binary file has a header section and then a data section. By reading the first file I can calculate the offset for the data part of the file. > > thanks, > > - dharhas > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From bing.jian at gmail.com Thu Aug 7 12:34:05 2008 From: bing.jian at gmail.com (Bing) Date: Thu, 7 Aug 2008 12:34:05 -0400 Subject: [SciPy-user] concatenate array with number In-Reply-To: <489B169D.2090406@bris.ac.uk> References: <489B169D.2090406@bris.ac.uk> Message-ID: try this: numpy.r_[x,x[-1]] On Thu, Aug 7, 2008 at 11:37 AM, Nicolas Chopin < nicolas.chopin at bristol.ac.uk> wrote: > Hi list, > I want to do this: > x = concatenate( (x,x[-1]) ) > i.e. append to 1d array x its last element. > However, the only way I managed to do this is: > x = concatenate( (x,array(x[-1],ndmin=1)) ) > which is a bit cryptic. (if you remove ndmin, it does not work.) > > 1. Is there a better way? > 2. Could concatenate accept floating point numbers as arguments for > convenience? > > Thanks in advance, > Nicolas > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zachary.pincus at yale.edu Thu Aug 7 12:57:47 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 7 Aug 2008 12:57:47 -0400 Subject: [SciPy-user] concatenate array with number In-Reply-To: <489B169D.2090406@bris.ac.uk> References: <489B169D.2090406@bris.ac.uk> Message-ID: <788CEDA9-78D5-443B-8DDB-1FA7F3D3BEB5@yale.edu> One way to do this is to wrap the last element in a list, not an array: numpy.concatenate((x, [x[-1]])) Perhaps simpler and definitely faster is to use a slice to grab the last element as an array: numpy.concatenate((x, x[-1:])) The latter is the fastest of the various options, and the most compact. >>> timeit y = numpy.concatenate((x, [x[-1]])) 100000 loops, best of 3: 12.5 µs per loop >>> timeit y = numpy.concatenate((x, x[-1:])) 100000 loops, best of 3: 2.07 µs per loop >>> timeit y = numpy.concatenate((x, numpy.array(x[-1], ndmin=1))) 100000 loops, best of 3: 4.45 µs per loop On Aug 7, 2008, at 11:37 AM, Nicolas Chopin wrote: > Hi list, > I want to do this: > x = concatenate( (x,x[-1]) ) > i.e. append to 1d array x its last element. > However, the only way I managed to do this is: > x = concatenate( (x,array(x[-1],ndmin=1)) ) > which is a bit cryptic. (if you remove ndmin, it does not work.) > > 1. Is there a better way? > 2. Could concatenate accept floating point numbers as arguments for > convenience? > > Thanks in advance, > Nicolas > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fredmfp at gmail.com Fri Aug 8 04:36:49 2008 From: fredmfp at gmail.com (fred) Date: Fri, 08 Aug 2008 10:36:49 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489B1675.2060905@gmail.com> Message-ID: <489C05A1.5090304@gmail.com> Sebastian Haase a écrit : > Are you "really" on a 64-bit system ? Yes. > Is this Linux ? Yes. > Is your Python the original from the distro - or did you build it yourself ? Built myself. > Do a: >>> import sys;print sys.maxint I get the expected answer: 2**63-1. > Did you build numpy yourself or did you download a binary ? Built myself. What's going on ? Cheers, -- Fred From haase at msg.ucsf.edu Fri Aug 8 05:27:47 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 8 Aug 2008 11:27:47 +0200 Subject: [SciPy-user] partially reading a file...
From fredmfp at gmail.com Fri Aug 8 05:35:44 2008 From: fredmfp at gmail.com (fred) Date: Fri, 08 Aug 2008 11:35:44 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489B1675.2060905@gmail.com> <489C05A1.5090304@gmail.com> Message-ID: <489C1370.50004@gmail.com> Sebastian Haase a ?crit : > Don't know .... what is the size of the file you are trying to open > again - in bytes ? -rw-r--r-- 1 fred users 5529600000 2008-08-07 16:31 input.sep > What file system are you using (don't know if this is of any interest ....) ? ext3 Cheers, -- Fred From fperez.net at gmail.com Fri Aug 8 05:56:58 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 8 Aug 2008 02:56:58 -0700 Subject: [SciPy-user] Python tools at the annual SIAM meeting In-Reply-To: <8793ae6e0808061353p4a7433ebu7f43215f07a08e6e@mail.gmail.com> References: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com> <8793ae6e0808061353p4a7433ebu7f43215f07a08e6e@mail.gmail.com> Message-ID: On Wed, Aug 6, 2008 at 1:53 PM, Dominique Orban wrote: > Thanks, Fernando! This is really useful. My pleasure. I've just added Bill Hart's talk, which was an impromptu replacement on optimization (I didn't have his slides before). Cheers, f From fredmfp at gmail.com Fri Aug 8 06:17:05 2008 From: fredmfp at gmail.com (fred) Date: Fri, 08 Aug 2008 12:17:05 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: References: <48999F1E.3080607@gmail.com> <4899B258.1000408@enthought.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489B1675.2060905@gmail.com> <489C05A1.5090304@gmail.com> Message-ID: <489C1D21.1080808@gmail.com> Sebastian Haase a ?crit : > What file system are you using (don't know if this is of any interest ....) ? Hmmm, forget this thread. A keyboard-to-chair interface problem. Sorry. Cheers, -- Fred From haase at msg.ucsf.edu Fri Aug 8 06:24:43 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 8 Aug 2008 12:24:43 +0200 Subject: [SciPy-user] partially reading a file... In-Reply-To: <489C1D21.1080808@gmail.com> References: <48999F1E.3080607@gmail.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489B1675.2060905@gmail.com> <489C05A1.5090304@gmail.com> <489C1D21.1080808@gmail.com> Message-ID: On Fri, Aug 8, 2008 at 12:17 PM, fred wrote: > Sebastian Haase a ?crit : > >> What file system are you using (don't know if this is of any interest ....) ? > Hmmm, forget this thread. > > A keyboard-to-chair interface problem. > > Sorry. > > That's O.K. >>> 5529600000 / 1024 / 1024 / 1024 5.14984130859 So you are saying you are mem-mapping a 5.2 GB file without problem !? That's pretty neat ;-) - Sebastian From fredmfp at gmail.com Fri Aug 8 08:11:29 2008 From: fredmfp at gmail.com (fred) Date: Fri, 08 Aug 2008 14:11:29 +0200 Subject: [SciPy-user] partially reading a file... 
In-Reply-To: References: <48999F1E.3080607@gmail.com> <4899B590.4080406@gmail.com> <4899E2D6.7080209@enthought.com> <4899E508.6050407@gmail.com> <4899EA0B.4060205@enthought.com> <489B1675.2060905@gmail.com> <489C05A1.5090304@gmail.com> <489C1D21.1080808@gmail.com> Message-ID: <489C37F1.203@gmail.com> Sebastian Haase a ?crit : >>>> 5529600000 / 1024 / 1024 / 1024 > 5.14984130859 > > So you are saying you are mem-mapping a 5.2 GB file without problem !? > > That's pretty neat ;-) Dimensions were wrong in my code, yes. For 1200x1600x720, it works fine ;-) Cheers, -- Fred From silva at lma.cnrs-mrs.fr Sat Aug 9 06:14:45 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Sat, 09 Aug 2008 12:14:45 +0200 Subject: [SciPy-user] Usage of PyDSTool Message-ID: <1218276885.2900.6.camel@localhost.localdomain> Hi, I used to deal with numerical solving of dynamical systems using ODE integrators provided by SciPy as VODE for example. I recently found PyDSTool which seems to be an excellent and almost complete tool for investigating bifurcation and other stuff... The point is that I do not even now how to begin! I do have a description of the dynamical system as y'=f(y,t) (N-dimensional vector y), this function being written in fortran for speed purpose and wrapped to python using f2py. How can I define a Model with such thing ? Is there any tutorial that could help ? I only found the example files which are not very explicit... Thanks -- Fabrice Silva LMA CNRS From nicolas.chopin at bristol.ac.uk Sat Aug 9 06:19:01 2008 From: nicolas.chopin at bristol.ac.uk (Nicolas Chopin) Date: Sat, 09 Aug 2008 12:19:01 +0200 Subject: [SciPy-user] concatenate array with number In-Reply-To: <489B169D.2090406@bris.ac.uk> References: <489B169D.2090406@bris.ac.uk> Message-ID: <489D6F15.2070407@bris.ac.uk> sorry, I realise now I should have posted to the numpy mailing list. Many thanks for the answers. My personal favourite is: numpy.concatenate((x, x[-1:])) because it's easier to remember. Too bad x[-1] is not recognised as an array as well (like in Matlab for instance). Best NC > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elmico.filos at gmail.com Sat Aug 9 07:30:36 2008 From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=) Date: Sat, 9 Aug 2008 13:30:36 +0200 Subject: [SciPy-user] Usage of PyDSTool In-Reply-To: <1218276885.2900.6.camel@localhost.localdomain> References: <1218276885.2900.6.camel@localhost.localdomain> Message-ID: Hi, I also have the same problem. Although some of the aspects of PyDSTool can be grasped from the code and the documentation, I think many potential users would benefit from a couple of simple and illustrative examples, or, better, a hands-on tutorial. How can we help? :) From warren.weckesser at gmail.com Sat Aug 9 10:05:42 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sat, 9 Aug 2008 10:05:42 -0400 Subject: [SciPy-user] Usage of PyDSTool In-Reply-To: References: <1218276885.2900.6.camel@localhost.localdomain> Message-ID: <114880320808090705i5663b9a7g70d646f2e840d818@mail.gmail.com> The pydstool web page of the VFGEN program, http://www.warrenweckesser.net/vfgen/menu_pydstool.html has two examples of using PyDSTool. One example simply plots a solution to the van der Pol equations, and the other uses PyCont to compute a two parameter bifurcation diagram of the Morris-Lecar equations. Even if you aren't interested in VFGEN, you might find the examples helpful. 
Cheers,
Warren

On Sat, Aug 9, 2008 at 7:30 AM, Mico Filós wrote:
> Hi,
>
> I also have the same problem. Although some of the aspects of PyDSTool
> can be grasped from the code and the documentation, I think many
> potential users would benefit from a couple of simple and illustrative
> examples, or, better, a hands-on tutorial. How can we help? :)
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rob.clewley at gmail.com Sat Aug 9 11:58:13 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Sat, 9 Aug 2008 11:58:13 -0400
Subject: [SciPy-user] Usage of PyDSTool
In-Reply-To:
References: <1218276885.2900.6.camel@localhost.localdomain>
Message-ID:

Hi,

> I also have the same problem. Although some of the aspects of PyDSTool
> can be grasped from the code and the documentation, I think many
> potential users would benefit from a couple of simple and illustrative
> examples, or, better, a hands-on tutorial. How can we help? :)

If anyone is willing to write tutorial materials I can give them write access to the wiki, and of course I can help them to organize and write it too. Also, if anyone has example tutorials for other packages in mind that would make a good model, then please tell me.

I don't exactly know what aspects need most attention. There are good introductions to dynamical systems modeling techniques out there, as well as issues about numerical simulation and analysis in general. I don't have the time or inclination to repeat those, at least not to write them myself -- others may feel free to, and put them on the wiki. To guide me better and make best use of my time, I'd like to get detailed input as to what is missing from the comments in the example scripts and some of the modules themselves (e.g., the suite of basic examples at the end of Points.py). But it's much better if you can actually write something instructive yourself, even just an outline that I can use.

Also, there's a tutorial for the somewhat related package XPP:

http://www.math.pitt.edu/~bard/bardware/tut/xpptut.html
http://www.math.pitt.edu/~bard/bardware/tut/xpptut2.html

Are there any things there that could be helpful?

Cheers,
Rob

From elmico.filos at gmail.com Sat Aug 9 15:56:57 2008
From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=)
Date: Sat, 9 Aug 2008 21:56:57 +0200
Subject: [SciPy-user] Usage of PyDSTool
In-Reply-To:
References: <1218276885.2900.6.camel@localhost.localdomain>
Message-ID:

Thanks Rob,

I think that the tutorial of XPP is a good example to look at. In my opinion, what is missing is some documentation oriented to users already familiar with dynamical systems (and, perhaps, with XPP) but not particularly interested in the implementation details of the module. My first stab at a tentative tutorial would include the following points:

- How to specify a dynamical system (the equivalent of "Creating and running an ODE file" in the XPP tutorial)

- How to compute and plot trajectories. How to access the data (illustrating the concept of Point & Pointsets)

- How to change initial conditions ("Changing initial data" in the XPP tutorial, which basically describes the different options in the Initialconds menu). By the way, does PyDSTool include functionality similar to Initialconds->Mouse/Mice (which allows you to specify the initial conditions of a planar system with the mouse)? I am just curious.
- How to find the fixed points of the system and characterize their stability. Also, for 2D systems, how to get and plot the nullclines.

- How to plot simple bifurcation diagrams

etc. I personally prefer a good selection of clear (and commented) examples to technical, "reference manual-like" descriptions.

Best,

From rob.clewley at gmail.com Sat Aug 9 17:10:01 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Sat, 9 Aug 2008 17:10:01 -0400
Subject: [SciPy-user] Usage of PyDSTool
In-Reply-To:
References: <1218276885.2900.6.camel@localhost.localdomain>
Message-ID:

Hi,

On Sat, Aug 9, 2008 at 3:56 PM, Mico Filós wrote:
> Thanks Rob,
>
> I think that the tutorial of XPP is a good example to look at. In my
> opinion, what is missing is some documentation oriented to users
> already familiar with dynamical systems (and, perhaps, with XPP) but
> not particularly interested in the implementation details of the
> module.

OK, your comments are helpful, but could you also address the usefulness of the demonstration scripts provided in the /tests directory? Of course I don't expect anyone to learn how things work from the main modules, but there are many simple examples of all the basic tasks in these scripts. I know it's not a tutorial, but most of them are well commented and include screen output and graphs. I have pointed new users on the wiki to look at these scripts on multiple occasions. Is the problem that the directory is called 'tests' when maybe it should be called 'examples' or 'demos'? In the spirit of my above comment, here's what you should be looking at in lieu of a proper tutorial.

> My first stab at a tentative tutorial would include the
> following points:
>
> - How to specify a dynamical system (the equivalent of "Creating and
> running an ODE file" in the XPP tutorial)

Most of the scripts in /tests show the different ways that a dynamical system can be specified.

> - How to compute and plot trajectories. How to access the data
> (illustrating the concept of Point & Pointsets)

Look at the end of the Points.py module, which is runnable as a script. The final two functions provide a ton of screen output with examples for making and using these classes. The Pointsets page on the wiki explains the general idea, and then directs users to those examples. If you run that and have more questions, I'll gladly add more explanation where necessary.

> - How to change initial conditions ("Changing initial data" in the XPP
> tutorial, which basically describes the different options in the
> Initialconds menu).

If you have an ODE model 'ode' as an instance of a Generator object or of a Model object, you'd call

ode.set(ics=new_ics)

where new_ics is a Point or a dictionary representation of the values for any or all of the variables (partial specifications just update the existing ICs). Setting parameters is the same. Just call

ode.set(pars=new_pars)

> By the way, does PyDSTool include functionality
> similar to Initialconds->Mouse/Mice (which allows you to
> specify the initial conditions of a planar system with the mouse)? I
> am just curious.

Sorry, not unless someone writes a graphical interface! Even if I had time, I have no expertise in doing that. It's command line only.

> - How to find the fixed points of the system and characterize their
> stability. Also, for 2D systems, how to get and plot the nullclines.

This is covered in tests/phaseplane_HHtest.py

> - How to plot simple bifurcation diagrams

PyCont definitely needs better documentation, there's no doubt about that.
In the meantime, copy the format of tests/PyCont_PredPrey.py and PyCont_Brusselator.py, which show simple examples. Any time you want to continue periodic orbits, AUTO is going to be called, so you'd better have your external C/Fortran compilation working for that. I am also willing to answer questions by email once you try to get started on your own. These scripts could also definitely do with better commenting and guidance. Nonetheless, with some effort to look things up on the PyCont wiki page, the meaning of most of the settings is fairly easy to determine if you have a clue about continuation algorithms.

> etc. I personally prefer a good selection of clear (and commented)
> examples to technical, "reference manual-like" descriptions.

Would it be helpful to literally copy the input and output from some of the demo scripts (including the testing commands in Points.py, for instance) onto the wiki so that they can be read there? It would be a starting point, at least. If I don't get to it first (I'll try, but don't hold your breath), maybe you would consider compiling a minimal set of pieces from those scripts that you think would constitute a good tutorial? I could help you clean it up, add more explanation, and adapt it for posting on the wiki.

-Rob

From elmico.filos at gmail.com Sat Aug 9 18:08:48 2008
From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=)
Date: Sun, 10 Aug 2008 00:08:48 +0200
Subject: [SciPy-user] Usage of PyDSTool
In-Reply-To:
References: <1218276885.2900.6.camel@localhost.localdomain>
Message-ID:

Thanks Rob for your quick answer.

> Of course I don't expect anyone to learn how things work
> from the main modules, but there are many simple examples of all the
> basic tasks in these scripts. I know it's not a tutorial, but most of
> them are well commented and include screen output and graphs.

You are absolutely right. Much of the documentation is already there. It is only a matter of making it more visible. I think your idea of copying the input & output of the demos onto the wiki is a good solution.

> Is the problem that the directory is called
> 'tests' when maybe it should be called 'examples' or 'demos'?

It may seem a stupid detail, but I think this would help.

> Most of the scripts in /tests show the different ways that a dynamical system can be specified.

Yes, you are right. The problem is that in /tests there are many different examples with different levels of difficulty, and one may not know where to start first. I will try to think of a possible sequence of examples.

Again, thanks for your help.

From silva at lma.cnrs-mrs.fr Sun Aug 10 10:53:41 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Sun, 10 Aug 2008 16:53:41 +0200
Subject: [SciPy-user] Usage of PyDSTool
In-Reply-To:
References: <1218276885.2900.6.camel@localhost.localdomain>
Message-ID: <1218380021.4099.8.camel@localhost.localdomain>

Le samedi 09 août 2008 à 10:05 -0400, Warren Weckesser a écrit :
> http://www.warrenweckesser.net/vfgen/menu_pydstool.html

Le samedi 09 août 2008 à 11:58 -0400, Rob Clewley a écrit :
> http://www.math.pitt.edu/~bard/bardware/tut/xpptut.html
> http://www.math.pitt.edu/~bard/bardware/tut/xpptut2.html

Hi, thanks for the links.
I wonder whether I have to provide a textual description of the model of the dynamical system, either using PyDSTool.args() with the varspecs, pars and fnspecs attributes, or using an XML file in the VFGEN case.
I do have a Python function available computing y'=f(y,t).
y is a vector of len 2n+2 corresponding to the real and imaginary parts of n complex variables p_n evolving as p_n' = Cn*sum(p_n)+sn*p_n, and two other quantities x and x'. The dynamics also depends on other time-varying quantities that are controlled (I do have an analytical expression for them). All of that is defined and computed in a Fortran file that is compiled with f2py.
Is there a way to use that with PyDSTool, or should I write all that stuff again, with some difficulties as n may vary...

--
Fabrice Silva

From warren.weckesser at gmail.com Sun Aug 10 11:17:18 2008
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Sun, 10 Aug 2008 11:17:18 -0400
Subject: [SciPy-user] Usage of PyDSTool
In-Reply-To: <1218380021.4099.8.camel@localhost.localdomain>
References: <1218276885.2900.6.camel@localhost.localdomain> <1218380021.4099.8.camel@localhost.localdomain>
Message-ID: <114880320808100817n73910ec9xae7a32a1381b42ed@mail.gmail.com>

Hi,

On Sun, Aug 10, 2008 at 10:53 AM, Fabrice Silva wrote:
>
> Hi, thanks for the links.
> I wonder whether I have to provide a textual description of the model of the
> dynamical system, either using PyDSTool.args() with the varspecs, pars and
> fnspecs attributes, or using an XML file in the VFGEN case.
> I do have a Python function available computing y'=f(y,t). y is a vector
> of len 2n+2 corresponding to the real and imaginary parts of n complex
> variables p_n evolving as p_n' = Cn*sum(p_n)+sn*p_n, and two other
> quantities x and x'.

I'm sure Rob can give you pointers for defining your system in PyDSTool. Unfortunately, the current version of VFGEN doesn't handle vector variables (i.e. you can have variables like x, y, z, but not x[1], x[2], x[3]), so it looks like VFGEN is not an option for you. The next generation of VFGEN will handle vectors, but it is still under development.

Cheers,

Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dineshbvadhia at hotmail.com Sun Aug 10 17:29:24 2008
From: dineshbvadhia at hotmail.com (Dinesh B Vadhia)
Date: Sun, 10 Aug 2008 14:29:24 -0700
Subject: [SciPy-user] sparse csr_matrix memeory error
Message-ID:

I'm obtaining a memory error when creating a large sparse csr matrix as follows:

I = 680000
J = 900000
nnz = 72000000
row = numpy.empty(nnz, dtype='intc')
column = numpy.empty(nnz, dtype='intc')
# read (i,j) data into row and column
data = scipy.ones(nnz, dtype='intc')
A = sparse.csr_matrix((data, (row, column)), shape=(I,J))

The traceback is:

Traceback (most recent call last):
  File "C:\... \ijdata.py", line 72, in
    A = sparse.csr_matrix((data, (row, column)), shape=(I,J))
  File "C:\Python25\Lib\site-packages\scipy\sparse\compressed.py", line 55, in __init__
    other = self.__class__( coo_matrix(arg1, shape=shape) )
  File "C:\Python25\Lib\site-packages\scipy\sparse\compressed.py", line 39, in __init__
    arg1 = arg1.asformat(self.format)
  File "C:\Python25\Lib\site-packages\scipy\sparse\base.py", line 211, in asformat
    return getattr(self,'to' + format)()
  File "C:\Python25\Lib\site-packages\scipy\sparse\coo.py", line 278, in tocsr
    indices = empty(self.nnz, dtype=intc)
MemoryError

I'm running the program under Windows XP with over 2gb memory. Any thoughts on what the problem is?

Dinesh
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wnbell at gmail.com Sun Aug 10 18:39:13 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Sun, 10 Aug 2008 18:39:13 -0400
Subject: [SciPy-user] sparse csr_matrix memeory error
In-Reply-To:
References:
Message-ID:

On Sun, Aug 10, 2008 at 5:29 PM, Dinesh B Vadhia wrote:
> I'm obtaining a memory error when creating a large sparse csr matrix as
> follows:
>
> I = 680000
> J = 900000
> nnz = 72000000
> row = numpy.empty(nnz, dtype='intc')
> column = numpy.empty(nnz, dtype='intc')
> # read (i,j) data into row and column
> data = scipy.ones(nnz, dtype='intc')

Together, the arrays for row, column, and data take 864MB of memory. You need approximately 2x that to do the conversion to CSR.

> I'm running the program under Windows XP with over 2gb memory. Any thoughts
> on what the problem is?

Yes, your matrix is simply too large.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From mhearne at usgs.gov Mon Aug 11 13:47:13 2008
From: mhearne at usgs.gov (Michael Hearne)
Date: Mon, 11 Aug 2008 11:47:13 -0600
Subject: [SciPy-user] installing numpy on ubuntu
Message-ID: <48A07B21.9030808@usgs.gov>

I'm trying to install numpy/scipy/matplotlib/basemap on a fresh installation of Ubuntu.

The available versions of each I've found on Ubuntu:
numpy: Version: 1:1.0.4-6ubuntu3
scipy: Version: 0.6.0-8ubuntu1
matplotlib: Version: 0.91.2-0ubuntu1

Basemap (a matplotlib toolkit) does not appear in Ubuntu, it seems.

The problem is that the supported version of Basemap depends on matplotlib versions 0.98 and greater, and for that I need numpy 1.1.

I have two questions:
1) Is there an apt-get repository for the more recent versions of numpy (or any of these other packages)?
2) Failing 1, should the building of numpy on an Ubuntu system be a straightforward "configure;make;make install"? God, I hope so.

Thanks,

Mike

--
------------------------------------------------------
Michael Hearne
mhearne at usgs.gov
(303) 273-8620
USGS National Earthquake Information Center
1711 Illinois St. Golden CO 80401
Senior Software Engineer
Synergetics, Inc.
------------------------------------------------------

From ggellner at uoguelph.ca Mon Aug 11 14:41:33 2008
From: ggellner at uoguelph.ca (Gabriel Gellner)
Date: Mon, 11 Aug 2008 14:41:33 -0400
Subject: [SciPy-user] installing numpy on ubuntu
In-Reply-To: <48A07B21.9030808@usgs.gov>
References: <48A07B21.9030808@usgs.gov>
Message-ID: <20080811184133.GA4112@encolpuis>

On Mon, Aug 11, 2008 at 11:47:13AM -0600, Michael Hearne wrote:
> I'm trying to install numpy/scipy/matplotlib/basemap on a fresh
> installation of Ubuntu.
>
> The available versions of each I've found on Ubuntu:
> numpy: Version: 1:1.0.4-6ubuntu3
> scipy: Version: 0.6.0-8ubuntu1
> matplotlib: Version: 0.91.2-0ubuntu1
>
> Basemap (a matplotlib toolkit) does not appear in Ubuntu, it seems.
>
> The problem is that the supported version of Basemap depends on
> matplotlib versions 0.98 and greater, and for that I need numpy 1.1.
>
> I have two questions:
> 1) Is there an apt-get repository for the more recent versions of numpy
> (or any of these other packages)?

There sure is! Check out the awesome repo maintained by Andrew Straw:

http://debs.astraw.com/hardy/

(Note: I use the hardy versions, but he maintains a lot of versions for older Ubuntus; see the directions at the link above.)

> 2) Failing 1, should the building of numpy on an Ubuntu system be a
> straightforward "configure;make;make install"? God, I hope so.
>
This is also relatively easy, more of a:
1. Make sure you have the development libraries
2. Then do a python setup.py build; python setup.py install

If you choose to go this route instead of using the above debs (which are awesome . . .) I can give you more help if needed.

Good luck!

Gabriel

From timmichelsen at gmx-topmail.de Mon Aug 11 18:53:02 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Tue, 12 Aug 2008 00:53:02 +0200
Subject: [SciPy-user] installing numpy on ubuntu
In-Reply-To: <20080811184133.GA4112@encolpuis>
References: <48A07B21.9030808@usgs.gov> <20080811184133.GA4112@encolpuis>
Message-ID:

>> 2) Failing 1, should the building of numpy on an Ubuntu system be a
>> straightforward "configure;make;make install"? God, I hope so.
Here I described another possibility:
http://thread.gmane.org/gmane.comp.python.numeric.general/23078

From elmico.filos at gmail.com Mon Aug 11 19:14:45 2008
From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=)
Date: Tue, 12 Aug 2008 01:14:45 +0200
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
Message-ID:

Hi,

I have a fixed bivariate gaussian density function, and I want to compute the coordinates (x(l), y(l)) of a given level curve, where l is the parameter of the curve. I can easily plot the level curve with matplotlib, using the function 'contour', but I have no idea how to get its coordinates (something like an (N,2) array specifying the coordinates of N points along the curve). With fsolve I can find one such point, but it is not enough :)

Thanks in advance for your help,

Best

Mico

From zane at ideotrope.org Mon Aug 11 20:52:21 2008
From: zane at ideotrope.org (Zane Selvans)
Date: Mon, 11 Aug 2008 17:52:21 -0700
Subject: [SciPy-user] fit confidence intervals from minpack.leastsq, or odrpack.ODR?
Message-ID:

I have an observation, L, which consists of the shape of a particular one-dimensional feature (a line on the surface of a sphere). I have a model of the process that I think may have generated the feature. Aside from the feature, L, the model has one parameter, B (an angle: 0 < B < pi). For a given feature and parameter value, I can calculate a metric f(B,L) describing how well my observation matches the model.

f(B,L) has the following properties:
- Small values of f(B,L) indicate agreement with my model; large f(B,L) indicate disagreement.
- f(B,L) is periodic (with a wavelength of pi radians)
- f(B,L) may have several local minima
- 0 < f(B,L) < pi/2

I need to somehow quantify three things:
i) At its best, how good is my model at explaining the observation (i.e. is it good enough to be significant?)
ii) For what value of the input parameter does my model do the best job of explaining the observation?
iii) How unique is that best value (i.e. are there many other values that do almost as well?)

Currently, I'm using the global minimum of f(B,L) for i, and the value of B which results in the global minimum for ii. I'm kind of stuck on iii though.

My current idea is to fit f(B,L) to some periodic function (e.g. cosine), and use the width of the 95% confidence interval of that fit as an indication of its uniqueness. If I use a function with the same wavelength (pi) as f(B,L), set its amplitude to the observed amplitude of f(B,L), and its phase such that the minimum of both f(B,L) and the cosine... I'll get a fit of some confidence.
Or alternatively, I could allow the fitting function to determine the phase, and instead of using the global minimum of f(B,L) as the value which determines the best value of B, I could use the minimum of the fit cosine. Or I could just not worry about whether or not they're the same, and use the best-fit confidence interval as the measure of uniqueness.

Does that sound at all like the right way to go about this? If so, which fitting/minimization module is more appropriate/easier to use (if what I want is the confidence interval, ultimately): minpack.leastsq or odrpack.ODR? I see that leastsq returns a covariance matrix, and that it's possible somehow to turn that into a confidence interval... and it looks like you can get a confidence interval (sd_beta) directly from ODR.

Thanks for any insight...

--
Zane Selvans
Amateur Earthling
http://zaneselvans.org
zane at ideotrope.org
303/815-6866
PGP Key: 55E0815F

From bing.jian at gmail.com Tue Aug 12 00:42:16 2008
From: bing.jian at gmail.com (Bing)
Date: Tue, 12 Aug 2008 00:42:16 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To:
References:
Message-ID:

On Mon, Aug 11, 2008 at 7:14 PM, Mico Filós wrote:
> Hi,
>
> I have a fixed bivariate gaussian density function, and I want to
> compute the coordinates (x(l), y(l)) of a given level curve, where l
> is the parameter of the curve. I can easily plot the level curve
> with matplotlib, using the function 'contour', but I have no idea
> how to get its coordinates (something like an (N,2) array specifying
> the coordinates of N points along the curve). With fsolve I can find
> one such point, but it is not enough :)
>

For bivariate normal distributions, these equal-density contours are ellipses, and you can write down the parametric form of (x,y) from the mean and covariance matrix of your bivariate normal distribution.

> Thanks in advance for your help,
>
> Best
>
> Mico
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dineshbvadhia at hotmail.com Tue Aug 12 01:31:17 2008
From: dineshbvadhia at hotmail.com (Dinesh B Vadhia)
Date: Mon, 11 Aug 2008 22:31:17 -0700
Subject: [SciPy-user] sparse csr_matrix memory error
Message-ID:

Hi Nathan

For future reference, how did you arrive at the 864MB?

Dinesh

--------------------------------------------------------------------------------

Message: 2
Date: Sun, 10 Aug 2008 18:39:13 -0400
From: "Nathan Bell"
Subject: Re: [SciPy-user] sparse csr_matrix memeory error
To: "SciPy Users List"
Message-ID:
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Aug 10, 2008 at 5:29 PM, Dinesh B Vadhia wrote:
> I'm obtaining a memory error when creating a large sparse csr matrix as
> follows:
>
> I = 680000
> J = 900000
> nnz = 72000000
> row = numpy.empty(nnz, dtype='intc')
> column = numpy.empty(nnz, dtype='intc')
> # read (i,j) data into row and column
> data = scipy.ones(nnz, dtype='intc')

Together, the arrays for row, column, and data take 864MB of memory. You need approximately 2x that to do the conversion to CSR.

> I'm running the program under Windows XP with over 2gb memory. Any thoughts
> on what the problem is?

Yes, your matrix is simply too large.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefan at sun.ac.za Tue Aug 12 01:54:35 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Tue, 12 Aug 2008 00:54:35 -0500
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To:
References:
Message-ID: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com>

2008/8/11 Bing :
>> I have a fixed bivariate gaussian density function, and I want to
>> compute the coordinates (x(l), y(l)) of a given level curve, where l
>> is the parameter of the curve. I can easily plot the level curve
>> with matplotlib, using the function 'contour', but I have no idea
>> how to get its coordinates (something like an (N,2) array specifying
>> the coordinates of N points along the curve). With fsolve I can find
>> one such point, but it is not enough :)
>
> For bivariate normal distributions, these equal-density contours
> are ellipses, and you can write down the parametric form
> of (x,y) from the mean and covariance matrix of your bivariate normal
> distribution.

For those who are interested in plotting these:

http://mentat.za.net/refer/gaussian_intersection.png

(See also attached code)

Stéfan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gaussian_intersection.py
Type: application/octet-stream
Size: 3815 bytes
Desc: not available
URL:

From elmico.filos at gmail.com Tue Aug 12 03:49:57 2008
From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=)
Date: Tue, 12 Aug 2008 09:49:57 +0200
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com>
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com>
Message-ID:

Thanks for your quick replies.

>> For bivariate normal distributions, these equal-density contours
>> are ellipses, and you can write down the parametric form
>> of (x,y) from the mean and covariance matrix of your bivariate normal
>> distribution.

Yes, you are right. But what if I have a mixture of gaussians, or any other 2D probability density function?

From wnbell at gmail.com Tue Aug 12 09:55:59 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 12 Aug 2008 09:55:59 -0400
Subject: [SciPy-user] sparse csr_matrix memory error
In-Reply-To:
References:
Message-ID:

On Tue, Aug 12, 2008 at 1:31 AM, Dinesh B Vadhia wrote:
>> nnz = 72000000
>> row = numpy.empty(nnz, dtype='intc')
>> column = numpy.empty(nnz, dtype='intc')
>> # read (i,j) data into row and column
>> data = scipy.ones(nnz, dtype='intc')
>
> For future reference, how did you arrive at the 864MB?
>

Each row and column index is 4 bytes (intc = 4 bytes). Likewise for the nonzero values themselves (again intc).

72e6 * (4 + 4 + 4) = 864e6 bytes

The CSR format (data,indices,indptr) is slightly smaller since the row pointer is compressed:

indptr  = 4 * (680000 + 1)  # number of rows + 1
indices = 4 * 72000000      # number of nonzeros
data    = 4 * 72000000      # number of nonzeros
total   = 578,720,004 bytes

So combined, the two matrices require about 1.4 GB of storage.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From rob.clewley at gmail.com Tue Aug 12 10:36:12 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Tue, 12 Aug 2008 10:36:12 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To:
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com>
Message-ID:

> Yes, you are right.
> But what if I have a mixture of gaussians, or any
> other 2D probability density function?

Indeed. Isn't the question about how to extract the data points for the curve from the 'contour' object in matplotlib, in the general case? Unfortunately I don't have the answer to that, but maybe introspection of the object would lead to an answer. From the API doc I see a mysterious attribute called 'level'.

From jdh2358 at gmail.com Tue Aug 12 11:08:02 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Tue, 12 Aug 2008 10:08:02 -0500
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To:
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com>
Message-ID: <88e473830808120808v15eab772gb77371729c87f0a3@mail.gmail.com>

On Tue, Aug 12, 2008 at 9:36 AM, Rob Clewley wrote:
>> Yes, you are right. But what if I have a mixture of gaussians, or any
>> other 2D probability density function?
>
> Indeed. Isn't the question about how to extract the data points for
> the curve from the 'contour' object in matplotlib, in the general
> case? Unfortunately I don't have the answer to that, but maybe
> introspection of the object would lead to an answer. From the API doc
> I see a mysterious attribute called 'level'.

The mpl contour function returns a matplotlib.contour.ContourSet instance which has an attribute "levels", an array of the levels that the contours are drawn on:

In [57]: CS = plt.contour(X, Y, Z)

In [58]: CS.levels
Out[58]: array([-1. , -0.5,  0. ,  0.5,  1. ,  1.5])

It also has an equal-length list of line collections (matplotlib.collections.LineCollection) which you can use to extract the x, y vertices of the contour lines at a given level. For a single level, the line collection may contain one or more independent lines. Here is some example code to get you started:

In [59]: level0 = CS.levels[0]

In [60]: print level0
-1.0

In [61]: c0 = CS.collections[0]

In [62]: paths = c0.get_paths()

In [63]: len(paths)
Out[63]: 1

In [64]: path0 = paths[0]

In [65]: xy = path0.vertices

In [66]: xy.shape
Out[66]: (237, 2)

In [67]: xy[:10,]
Out[67]:
array([[-0.15      , -0.95150169],
       [-0.15877627, -0.95      ],
       [-0.175     , -0.94720234],
       [-0.2       , -0.94221229],
       [-0.225     , -0.93652781],
       [-0.25      , -0.93013814],
       [-0.26810676, -0.925     ],
       [-0.275     , -0.9230207 ],
       [-0.3       , -0.91514134],
       [-0.325     , -0.90651218]])

In [68]:

From zachary.pincus at yale.edu Tue Aug 12 11:15:58 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 12 Aug 2008 11:15:58 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To:
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com>
Message-ID: <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu>

Hi all,

It's straightforward to *estimate* the level curves of a function that has been evaluated on a regularly-spaced grid (e.g. the "marching cubes" algorithm and its 2D antecedent, "marching squares"). I suspect that this is what matplotlib is doing. I can send a reasonably-fast C implementation of this for the 2D case if anyone wants. (GPL or BSD, take your pick.)

Following a level curve from an arbitrary function is a bit harder. If you have the function's gradient, you could in theory just go around in a direction orthogonal to the gradient, but in the real world that wouldn't work with numerical error and finite step sizes. You could probably take steps orthogonal to the gradient, then correct back to the desired level value by stepping along the gradient, and then repeat until you get back near to where you started.
This sounds like far more trouble than it's worth, but if the function is very expensive to evaluate, it might be cheaper and more accurate than evaluating the function on a lattice and then estimating the level curves from that...

Zach

On Aug 12, 2008, at 10:36 AM, Rob Clewley wrote:
>> Yes, you are right. But what if I have a mixture of gaussians, or any
>> other 2D probability density function?
>
> Indeed. Isn't the question about how to extract the data points for
> the curve from the 'contour' object in matplotlib, in the general
> case? Unfortunately I don't have the answer to that, but maybe
> introspection of the object would lead to an answer. From the API doc
> I see a mysterious attribute called 'level'.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From rob.clewley at gmail.com Tue Aug 12 11:59:12 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Tue, 12 Aug 2008 11:59:12 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To: <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu>
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com> <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu>
Message-ID:

Zach,

> Following a level curve from an arbitrary function is a bit harder. If
> you have the function's gradient, you could in theory just go around
> in a direction orthogonal to the gradient, but in the real world that
> wouldn't work with numerical error and finite step sizes. You could
> probably take steps orthogonal to the gradient, then correct back to
> the desired level value by stepping along the gradient, and then
> repeat until you get back near to where you started.

You can do that in many circumstances using Newton's method or variants of it. It's called continuation, and it's what packages like AUTO, PyCont, MatCont, and Content do as their bread and butter. There are several steps to getting it set up properly so that it converges.

-Rob

From zachary.pincus at yale.edu Tue Aug 12 13:49:27 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 12 Aug 2008 13:49:27 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To:
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com> <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu>
Message-ID: <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu>

>> Following a level curve from an arbitrary function is a bit harder. If
>> you have the function's gradient, you could in theory just go around
>> in a direction orthogonal to the gradient, but in the real world that
>> wouldn't work with numerical error and finite step sizes. You could
>> probably take steps orthogonal to the gradient, then correct back to
>> the desired level value by stepping along the gradient, and then
>> repeat until you get back near to where you started.
>
> You can do that in many circumstances using Newton's method or
> variants of it. It's called continuation, and it's what packages like
> AUTO, PyCont, MatCont, and Content do as their bread and butter. There
> are several steps to getting it set up properly so that it converges.

Aah, cool... good to know! (I should have mentioned above that I was speculating off the top of my head. Makes sense that there would be formal numerical methodologies for that.)
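[Editor's note: a minimal sketch of the predictor-corrector idea described above - step along the tangent (orthogonal to the gradient), then take Newton steps along the gradient to land back on the level set. The functions f and grad_f are user-supplied placeholders, not part of matplotlib or PyDSTool; this is an illustration of the naive scheme, not of the robust pseudo-arc length method Rob describes below.]

import numpy as np

def trace_level_curve(f, grad_f, x0, y0, level, step=0.01, n_steps=1000, tol=1e-10):
    """Follow the curve f(x, y) == level starting from a point near it."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n_steps):
        fx, fy = grad_f(x, y)
        norm = np.hypot(fx, fy)
        # predictor: move along the tangent direction (-fy, fx) / |grad f|
        x, y = x + step * (-fy / norm), y + step * (fx / norm)
        # corrector: Newton iterations along the gradient, back onto the level set
        for _ in range(20):
            err = f(x, y) - level
            if abs(err) < tol:
                break
            fx, fy = grad_f(x, y)
            g2 = fx * fx + fy * fy
            x, y = x - err * fx / g2, y - err * fy / g2
        pts.append((x, y))
    return np.array(pts)

[This has no protection against the fold points and step-size pitfalls discussed in the next messages; continuation packages exist precisely to handle those cases.]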
A lot of the PyCont, etc., descriptions seem to be wrapped up in pretty specialized terminology -- what would an example of tracing a level curve look like, given f(x), fprime(x), and some x0?

Zach

From rob.clewley at gmail.com Tue Aug 12 14:57:13 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Tue, 12 Aug 2008 14:57:13 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To: <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu>
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com> <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu> <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu>
Message-ID:

> A lot of the PyCont, etc., descriptions seem to be wrapped up in
> pretty specialized terminology -- what would an example of tracing a
> level curve look like, given f(x), fprime(x), and some x0?

I can't really explain it easily off the top of my head. It's a lot like you described, and there are several ways to do it. A popular method is pseudo-arc length continuation, and the idea for it is graphically shown on the wiki page for "numerical continuation". It takes into account how the curve can bend at "fold points," where a naive method based on parameterization of the curve along one axis would lead to a singularity (f' = 0 leading to a 1/0 in the algorithm). You might also read about predictor-corrector methods.

However, I might be able to help you set up an example of applying PyCont to finding a level curve - it would be a valuable tutorial for the PyCont documentation (I think all our present examples are based on bifurcations in dynamical systems). There are some good book references on the wiki page; maybe the most accessible are [B1], [B5] and [B6], but I don't know a few of them well enough to comment on them all. From the title, [B12] looks like it might be too.

-Rob

From zachary.pincus at yale.edu Tue Aug 12 16:04:26 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 12 Aug 2008 16:04:26 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To:
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com> <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu> <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu>
Message-ID: <10116D7E-68BC-414E-A031-E1CBF19AE65F@yale.edu>

On Aug 12, 2008, at 2:57 PM, Rob Clewley wrote:
>> A lot of the PyCont, etc., descriptions seem to be wrapped up in
>> pretty specialized terminology -- what would an example of tracing a
>> level curve look like, given f(x), fprime(x), and some x0?
>
> I can't really explain it easily off the top of my head. It's a lot
> like you described, and there are several ways to do it. A popular
> method is pseudo-arc length continuation, and the idea for it is
> graphically shown on the wiki page for "numerical continuation". It
> takes into account how the curve can bend at "fold points," where a
> naive method based on parameterization of the curve along one axis
> would lead to a singularity (f' = 0 leading to a 1/0 in the
> algorithm). You might also read about predictor-corrector methods.
> However, I might be able to help you set up an example of applying
> PyCont to finding a level curve - it would be a valuable tutorial for
> the PyCont documentation (I think all our present examples are based
> on bifurcations in dynamical systems). There are some good book
> references on the wiki page; maybe the most accessible are [B1], [B5]
> and [B6], but I don't know a few of them well enough to comment on
> them all. From the title, [B12] looks like it might be too.
Thanks for the background! A basic example on the PyCont page about tracing a level curve given Python functions f(x, y), fprime(x, y) (*) and an initial (x0,y0) coordinate would be very useful indeed. I definitely had not realized that PyDSTool/PyCont could be used for these general purposes -- which will be very useful for me and, I presume, others as well.

Zach

(*) Or perhaps f_fprime(x, y), which returns the value and gradient? I'm not sure how PyCont is set up...

From silva at lma.cnrs-mrs.fr Wed Aug 13 08:37:21 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Wed, 13 Aug 2008 14:37:21 +0200
Subject: [SciPy-user] Usage of PyDSTool
In-Reply-To: <1218380021.4099.8.camel@localhost.localdomain>
References: <1218276885.2900.6.camel@localhost.localdomain> <1218380021.4099.8.camel@localhost.localdomain>
Message-ID: <1218631041.3179.5.camel@Portable-s2m.cnrs-mrs.fr>

Warren has answered about VFGEN: it cannot handle vector variables right now... What about PyDSTool? Do I have to provide a textual description of the dynamical system? Can't symbolic-like calculations be avoided? It should be possible to write a generator for an ODE file like the ones given in the XPP tutorial, but I do not understand why such a description is needed... Can someone using PyDSTool give me some explanations?

--
Fabrice Silva
LMA UPR CNRS 7051 - Équipe S2M

From ryanlists at gmail.com Wed Aug 13 09:32:12 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Wed, 13 Aug 2008 08:32:12 -0500
Subject: [SciPy-user] SciPy 2008 hotel recommendation
Message-ID:

I saw the link on nearby accommodations for the SciPy conference, but I don't know anything about the area. I will not have a car. Can anyone recommend a nearby (probably walking distance) hotel that is decent, reasonably priced, and not in a scary part of town (maybe that is a silly question, but I have never been to Pasadena)?

Thanks,

Ryan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rwagner at physics.ucsd.edu Wed Aug 13 11:06:11 2008
From: rwagner at physics.ucsd.edu (Rick Wagner)
Date: Wed, 13 Aug 2008 08:06:11 -0700
Subject: [SciPy-user] SciPy 2008 hotel recommendation
In-Reply-To:
References:
Message-ID: <727B1F46-5850-409E-A393-1055703F62FA@physics.ucsd.edu>

Good Morning,

> I saw the link on nearby accommodations for the SciPy conference,
> but I don't know anything about the area. I will not have a car.
> Can anyone recommend a nearby (probably walking distance) hotel
> that is decent, reasonably priced, and not in a scary part of town
> (maybe that is a silly question, but I have never been to Pasadena)?

After staying there several times, I think the Saga Motor Hotel is what you're looking for, depending on your "reasonably priced" level.

http://www.thesagamotorhotel.com/

--Rick

From jdh2358 at gmail.com Wed Aug 13 11:30:37 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Wed, 13 Aug 2008 10:30:37 -0500
Subject: [SciPy-user] SciPy 2008 hotel recommendation
In-Reply-To:
References:
Message-ID: <88e473830808130830h77a27654w5d4f41d7da6163a8@mail.gmail.com>

On Wed, Aug 13, 2008 at 8:32 AM, Ryan Krauss wrote:
> I saw the link on nearby accommodations for the SciPy conference, but I
> don't know anything about the area. I will not have a car. Can anyone
> recommend a nearby (probably walking distance) hotel that is decent,
> reasonably priced, and not in a scary part of town (maybe that is a silly
> question, but I have never been to Pasadena)?
Fernando and I have always stayed at the Vagabond -- it's fairly low rent, but they have internet and it is an easy walk to the conference:

http://www.vagabondinn.com

JDH

From eric at deeplycloudy.com Wed Aug 13 11:31:05 2008
From: eric at deeplycloudy.com (Eric Bruning)
Date: Wed, 13 Aug 2008 11:31:05 -0400
Subject: [SciPy-user] SOM in scipy.cluster
Message-ID: <56BD91CA-76FD-4F34-A765-5F82FCB302E9@deeplycloudy.com>

Greetings,

I'm considering using self-organizing maps for mining some lightning data, and info.py in scipy.cluster mentions that an implementation of self-organizing maps is under development. I've found a few examples written in Python around the web, but none that are likely to be efficient enough for my use.

Is there still interest in including SOM in scipy? I'd be happy to coordinate on a contribution if nothing else is under way.

Thanks,
Eric

From rob.clewley at gmail.com Wed Aug 13 12:13:19 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 13 Aug 2008 12:13:19 -0400
Subject: [SciPy-user] Usage of PyDSTool
In-Reply-To: <1218631041.3179.5.camel@Portable-s2m.cnrs-mrs.fr>
References: <1218276885.2900.6.camel@localhost.localdomain> <1218380021.4099.8.camel@localhost.localdomain> <1218631041.3179.5.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID:

>> I wonder whether I have to provide a textual description of the model of the
>> dynamical system, either using PyDSTool.args() with the varspecs, pars and
>> fnspecs attributes, or using an XML file in the VFGEN case.
>>
>> I do have a Python function available computing y'=f(y,t). y is a vector
>> of len 2n+2 corresponding to the real and imaginary parts of n complex
>> variables p_n evolving as p_n' = Cn*sum(p_n)+sn*p_n, and two other
>> quantities x and x'.

Right now PyDSTool cannot handle pre-defined Python functions for RHS definitions. In order to work in a non-index world and know properties of each ODE, certain things must be guaranteed, such as the ordering of variable names etc. This is all handled internally, but the user has to specify everything using the textual description.

However, there is a ModelSpec module which allows you to build representations of RHS functions in a way that depends on n, using symbolic objects. Summing over the p_n is then a matter of using a 'for' constructor that will take all variables of a certain type and add them up. You then "build" the PyDSTool representation from the symbolic one. This would be a good thing for me to write a tutorial on, I suppose -- right now you can only learn from looking at tests/ModelSpec_test.py and some of the factory functions in Toolbox/neuralcomp.py (in particular, the makeSoma function therein). So, that would be harder....

Anyway, even with the string version, you could just write a function that takes n and spits out a dictionary of strings for each p_n'. It would look a bit like this:

def my_rhs(varname, n):
    rhs_dict = {}
    sum_term = " + ".join([varname+'_%i' % i for i in range(n)])
    for i in range(n):
        rhs_dict[varname+'_%i' % i] = 'C_%i * (%s) + s_%i * %s_%i' % (i, sum_term, i, varname, i)
    return rhs_dict

>>> my_rhs('p',3)
{'p_0': 'C_0 * (p_0 + p_1 + p_2) + s_0 * p_0',
 'p_1': 'C_1 * (p_0 + p_1 + p_2) + s_1 * p_1',
 'p_2': 'C_2 * (p_0 + p_1 + p_2) + s_2 * p_2'}

This dict is ready to be used in your PyDSTool ODE definition, and you have to declare your parameters C_i and s_i (which you can also make factory functions for). Feel free to post your attempt if you can't get it to work, and I'll try to fix it.
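[Editor's note: to make the step from the generated dict to a runnable system concrete, the sketch below shows how such a varspecs dictionary would typically be fed to a PyDSTool Generator. The parameter values, initial conditions, and integration settings are illustrative placeholders, not taken from the thread.]

from PyDSTool import args, Generator

n = 3
DSargs = args(name='coupled_modes')
DSargs.varspecs = my_rhs('p', n)   # the dict of RHS strings built above
# declare the C_i and s_i parameters that the RHS strings refer to
DSargs.pars = dict([('C_%i' % i, 1.0) for i in range(n)] +
                   [('s_%i' % i, -0.5) for i in range(n)])
DSargs.ics = dict([('p_%i' % i, 0.1) for i in range(n)])
DSargs.tdomain = [0, 10]

ode = Generator.Vode_ODEsystem(DSargs)
traj = ode.compute('demo')    # integrate over tdomain
pts = traj.sample()           # Pointset with 't' and the p_i variables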
Here's a link to info about the multi-quantity ModelSpec definitions which are modeled on the same syntax as that in XPP (sorry for the horrifically long link):

http://www.cam.cornell.edu/~rclewley/cgi-bin/moin.cgi/ModelSpec#head-5a20b27323542c61c63ef968c27a924d18b82099

HTH,
Rob

--
Robert H. Clewley, Ph.D.
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA

tel: 404-413-6420 fax: 404-413-6403
http://www2.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/

From millman at berkeley.edu Wed Aug 13 12:22:25 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Wed, 13 Aug 2008 09:22:25 -0700
Subject: [SciPy-user] SciPy 2008 hotel recommendation
In-Reply-To:
References:
Message-ID:

I will be at:
Vagabond Inn
1203 E. Colorado Blvd.
Pasadena, CA 91106
626-449-3170

It is walking distance, clean, reasonably priced, and not too fancy.

On Wed, Aug 13, 2008 at 6:32 AM, Ryan Krauss wrote:
> I saw the link on nearby accommodations for the SciPy conference, but I
> don't know anything about the area. I will not have a car. Can anyone
> recommend a nearby (probably walking distance) hotel that is decent,
> reasonably priced, and not in a scary part of town (maybe that is a silly
> question, but I have never been to Pasadena)?
>
> Thanks,
>
> Ryan
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From rob.clewley at gmail.com Wed Aug 13 13:59:22 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 13 Aug 2008 13:59:22 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To: <10116D7E-68BC-414E-A031-E1CBF19AE65F@yale.edu>
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com> <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu> <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu> <10116D7E-68BC-414E-A031-E1CBF19AE65F@yale.edu>
Message-ID:

Attached is a commented example in PyCont for a 2D zero level set that defines an ellipse. It's very easy! I'll add this to PyDSTool's examples.
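[Editor's note: the PyCont_LevelCurve.py attachment was scrubbed from the archive. As a stand-in, here is an editor's sketch - modeled on the PyCont examples mentioned earlier in this thread, and not the actual attachment - of how a zero level set such as the ellipse x^2/a^2 + y^2/b^2 - 1 = 0 can be set up: treat the left-hand side as the equilibrium condition of a one-variable system in x, with y as the free continuation parameter.]

from PyDSTool import args, Generator, ContClass

DSargs = args(name='ellipse')
DSargs.pars = {'y': 0.0, 'a': 2.0, 'b': 1.0}
DSargs.varspecs = {'x': 'x*x/(a*a) + y*y/(b*b) - 1'}  # f(x, y); equilibria satisfy f = 0
DSargs.ics = {'x': 2.0}                               # a point on the curve when y = 0
ode = Generator.Vode_ODEsystem(DSargs)

PC = ContClass(ode)
PCargs = args(name='LC1', type='EP-C')  # continue the equilibrium curve x(y)
PCargs.freepars = ['y']
PCargs.StepSize = 1e-2
PCargs.MaxNumPoints = 500
PCargs.LocBifPoints = 'LP'              # folds occur at the top and bottom of the ellipse
PC.newCurve(PCargs)
PC['LC1'].forward()
PC['LC1'].backward()
curve = PC['LC1'].sol                   # Pointset of (x, y) samples along the level set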
Thanks for giving me the impetus to do this. Let me know what you think.

-Rob

--
Robert H. Clewley, Ph.D.
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA

tel: 404-413-6420 fax: 404-413-6403
http://www2.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PyCont_LevelCurve.py
Type: application/octet-stream
Size: 2367 bytes
Desc: not available
URL:

From zachary.pincus at yale.edu Wed Aug 13 15:37:05 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Wed, 13 Aug 2008 15:37:05 -0400
Subject: [SciPy-user] Getting coordinates of a level (contour) curve
In-Reply-To:
References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com> <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu> <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu> <10116D7E-68BC-414E-A031-E1CBF19AE65F@yale.edu>
Message-ID: <3FA50B0F-E76A-41CD-97CB-792F14906E83@yale.edu>

Aah, cool! Thanks for the example...

Zach

On Aug 13, 2008, at 1:59 PM, Rob Clewley wrote:
> Attached is a commented example in PyCont for a 2D zero level set that
> defines an ellipse. It's very easy! I'll add this to PyDSTool's
> examples. Thanks for giving me the impetus to do this. Let me know
> what you think.
>
> -Rob
>
> --
> Robert H. Clewley, Ph.D.
> Assistant Professor
> Department of Mathematics and Statistics
> Georgia State University
> 720 COE, 30 Pryor St
> Atlanta, GA 30303, USA
>
> tel: 404-413-6420 fax: 404-413-6403
> http://www2.gsu.edu/~matrhc
> http://brainsbehavior.gsu.edu/
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From ryanlists at gmail.com Wed Aug 13 16:11:23 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Wed, 13 Aug 2008 15:11:23 -0500
Subject: [SciPy-user] SciPy 2008 hotel recommendation
In-Reply-To:
References:
Message-ID:

Thanks for the replies. Those both meet my needs.

Ryan

On 8/13/08, Jarrod Millman wrote:
>
> I will be at:
> Vagabond Inn
> 1203 E. Colorado Blvd.
> Pasadena, CA 91106
> 626-449-3170
>
> It is walking distance, clean, reasonably priced, and not too fancy.
>
>
> On Wed, Aug 13, 2008 at 6:32 AM, Ryan Krauss wrote:
>
> > I saw the link on nearby accommodations for the SciPy conference, but I
> > don't know anything about the area. I will not have a car. Can anyone
> > recommend a nearby (probably walking distance) hotel that is decent,
> > reasonably priced, and not in a scary part of town (maybe that is a silly
> > question, but I have never been to Pasadena)?
> >
> > Thanks,
> >
> > Ryan
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
>
> --
> Jarrod Millman
> Computational Infrastructure for Research Labs
> 10 Giannini Hall, UC Berkeley
> phone: 510.643.4014
> http://cirl.berkeley.edu/
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Wed Aug 13 16:11:55 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 13 Aug 2008 15:11:55 -0500
Subject: [SciPy-user] SOM in scipy.cluster
In-Reply-To: <56BD91CA-76FD-4F34-A765-5F82FCB302E9@deeplycloudy.com>
References: <56BD91CA-76FD-4F34-A765-5F82FCB302E9@deeplycloudy.com>
Message-ID: <3d375d730808131311t77531782y2b7abc49cc016c96@mail.gmail.com>

On Wed, Aug 13, 2008 at 10:31, Eric Bruning wrote:
> Greetings,
>
> I'm considering using self-organizing maps for mining some lightning
> data, and info.py in scipy.cluster mentions that an implementation of
> self-organizing maps is under development. I've found a few examples
> written in Python around the web, but none that are likely to be
> efficient enough for my use.
>
> Is there still interest in including SOM in scipy? I'd be happy to
> coordinate on a contribution if nothing else is under way.

Sure! My colleague Corran Webster was thinking about doing some SOM stuff for scipy, too, so you two should talk.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From kontakt at beberlei.de Wed Aug 13 18:07:28 2008
From: kontakt at beberlei.de (Benjamin Eberlei)
Date: Thu, 14 Aug 2008 00:07:28 +0200
Subject: [SciPy-user] =?utf-8?q?fmin=5Fbfgs_-_Loglik_Estimation_leads_to_s?= =?utf-8?q?trange_error?=
Message-ID:

Hello everybody,

I am quite new to numpy/scipy and am currently porting some algorithms I wrote for Ox to Python, and I run into a very strange error inside the BFGS optimization algorithm that I cannot track down. I want to estimate log-likelihood functions for different distributions. I prepared a small snippet:

class fLoglik(object):
    def __init__(self, y):
        self._y = asmatrix(y).T
        pass

    def __call__(self, params):
        print params
        y = self._y
        a = _exp(-(params[0] * y))
        return asmatrix(a)

f = fLoglik( _log(data[:,16]) )
params = array(ones((1,1)))
max = fmin_bfgs(f, params)

data is a matrix, so the input comes in correctly. fLoglik.__call__ is called twice before a ValueError occurs, exactly as in the following backtrace:

[ 1.]
[ 1.00000001]
Traceback (most recent call last):
  File "/var/www/workspace/pydiplom/src/diploma/weibull.py", line 42, in
    max = fmin_bfgs(f, params)
  File "/usr/lib/python2.5/site-packages/scipy/optimize/optimize.py", line 723, in fmin_bfgs
    gfk = myfprime(x0)
  File "/usr/lib/python2.5/site-packages/scipy/optimize/optimize.py", line 95, in function_wrapper
    return function(x, *args)
  File "/usr/lib/python2.5/site-packages/scipy/optimize/optimize.py", line 617, in approx_fprime
    grad[k] = (f(*((xk+ei,)+args)) - f0)/epsilon
ValueError: setting an array element with a sequence.

What is weird to me is that the first two function calls to fLoglik are correct; the output [ 1.] and [ 1.00000001] you see comes from those calls. Does somebody maybe know what's going on?

From rpg.314 at gmail.com Thu Aug 14 05:08:08 2008
From: rpg.314 at gmail.com (Rohit Garg)
Date: Thu, 14 Aug 2008 14:38:08 +0530
Subject: [SciPy-user] need lapack/atlas/fftw
Message-ID: <4d5dd8c20808140208r712bcbc2n435566a562d7e67@mail.gmail.com>

Hi all,

I have installed scipy, numpy, lapack, atlas and fftw from the standard Fedora repositories. Does anybody here know which options they were configured with? I mean, when I am going to use the linalg module, am I using the LAPACK which uses ATLAS beneath it? I read in the list archives that scipy depends upon LAPACK and BLAS. Can I just set an environment variable somewhere to make it use ATLAS and FFTW? (Probably not, but gotta ask.) In case the answer is no, I guess the only way to do it is to compile everything by hand :(

I am using Fedora 9, 64 bit, on a dual core AMD machine. My machine supports SSE2.

Off topic - I understand that numpy does not make use of SSE and multithreading. If that is the case, I propose using the Framewave library for implementing both of them. Their goal is not to bias it towards any particular company's chips, and the license is Apache, so that should be no problem. I ask it here because the numpy page has no links for dev lists and because I am eager to write code to achieve this.
Cheers,

--
Rohit Garg
Junior Undergraduate
Department of Physics
Indian Institute of Technology
Bombay

From matthieu.brucher at gmail.com Thu Aug 14 05:21:37 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 14 Aug 2008 11:21:37 +0200
Subject: [SciPy-user] need lapack/atlas/fftw
In-Reply-To: <4d5dd8c20808140208r712bcbc2n435566a562d7e67@mail.gmail.com>
References: <4d5dd8c20808140208r712bcbc2n435566a562d7e67@mail.gmail.com>
Message-ID:

> Off topic - I understand that numpy does not make use of SSE and
> multithreading. If that is the case, I propose using the Framewave
> library for implementing both of them. Their goal is not to bias it
> towards any particular company's chips, and the license is Apache, so that
> should be no problem. I ask it here because the numpy page has no links
> for dev lists and because I am eager to write code to achieve this.
>
> Cheers,

There is a discussion on that matter on numpy-discussion ;)

Matthieu

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From rpg.314 at gmail.com Thu Aug 14 05:40:58 2008
From: rpg.314 at gmail.com (Rohit Garg)
Date: Thu, 14 Aug 2008 15:10:58 +0530
Subject: [SciPy-user] need lapack/atlas/fftw
In-Reply-To:
References: <4d5dd8c20808140208r712bcbc2n435566a562d7e67@mail.gmail.com>
Message-ID: <4d5dd8c20808140240r5e80b24fiade0912bf3db18e0@mail.gmail.com>

> There is a discussion on that matter on numpy-discussion ;)

Thanks a lot for that. I found it. It happened surprisingly recently though.

--
Rohit Garg
Junior Undergraduate
Department of Physics
Indian Institute of Technology
Bombay

From rpg.314 at gmail.com Thu Aug 14 06:16:58 2008
From: rpg.314 at gmail.com (Rohit Garg)
Date: Thu, 14 Aug 2008 15:46:58 +0530
Subject: [SciPy-user] need lapack/atlas/fftw
In-Reply-To: <4d5dd8c20808140240r5e80b24fiade0912bf3db18e0@mail.gmail.com>
References: <4d5dd8c20808140208r712bcbc2n435566a562d7e67@mail.gmail.com> <4d5dd8c20808140240r5e80b24fiade0912bf3db18e0@mail.gmail.com>
Message-ID: <4d5dd8c20808140316g3452f6ceg94d6fccb67ba58ce@mail.gmail.com>

Sorry for being a pest. But I found this on scipy wiki.
>
> ====================
> linalg
> linear algebra and BLAS routines based on the ATLAS implementation of LAPACK
> ====================
>
> at
>
> ++++++++++++++++++++
> http://www.scipy.org/FAQ#head-1c900dc4dcbe093cfed62c1d2f6302dfe5e06585
> ++++++++++++++++++++
>
> Does it mean that my installation of scipy uses (full) lapack+atlas
> automatically? Or it installs a lite version and for the real thing,
> the only option is to compile these packages by hand?
>
> Please help

linalg uses LAPACK, so if you install numpy, you will use one of the
LAPACK libraries on your system, most likely ATLAS. As for the BLAS
interface, only the dot function can use it (for now). For more on that
matter, you can see the different discussions on numpy-discussion ;)

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From cwebster at enthought.com  Thu Aug 14 10:03:35 2008
From: cwebster at enthought.com (Corran Webster)
Date: Thu, 14 Aug 2008 09:03:35 -0500
Subject: [SciPy-user] SOM in scipy.cluster
Message-ID:

> On Wed, Aug 13, 2008 at 10:31, Eric Bruning
> wrote:
> > Greetings,
> >
> > I'm considering using self-organizing maps for mining some lightning
> > data, and info.py in scipy.cluster mentions that implementation of
> > self-organizing maps is under development. I've found a few examples
> > written in python around the web, but none that are likely to be
> > efficient enough for my use.
> >
> > Is there still interest in including SOM in scipy? I'd be happy to
> > coordinate on a contribution if nothing else is under way.
>
> Sure! My colleague Corran Webster was thinking about doing some SOM
> stuff for scipy, too, so you two should talk.

Hi,

yes, I've been thinking seriously about adding some SOM algorithms to
scipy - I used them heavily in my previous job (we used SOMs to classify
documents for the Mayo clinic and slot machine players for casinos...).

I think that there is a place for a simple and fast batch SOM algorithm
in scipy.cluster.vq, since the batch SOM can be viewed as a
generalization of K-means.

For more general variations of the SOM algorithm, it may make sense to
put them elsewhere - possibly in the machine learning scikit, where the
algorithms can access other machine-learning tools.

I'd be interested to hear what your needs are as far as data types,
distance functions, data set sizes and SOM topologies, as that would
likely influence where I concentrate my energy.

Best Regards,
Corran

From matthieu.brucher at gmail.com  Thu Aug 14 10:29:33 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 14 Aug 2008 16:29:33 +0200
Subject: [SciPy-user] SOM in scipy.cluster
In-Reply-To: <3d375d730808131311t77531782y2b7abc49cc016c96@mail.gmail.com>
References: <56BD91CA-76FD-4F34-A765-5F82FCB302E9@deeplycloudy.com>
	<3d375d730808131311t77531782y2b7abc49cc016c96@mail.gmail.com>
Message-ID:

Hi,

SOM were supposed to be included in scikits.learn, weren't they? (as
scipy.cluster is in scipy only for historical reasons)

Matthieu

2008/8/13 Robert Kern :
> On Wed, Aug 13, 2008 at 10:31, Eric Bruning wrote:
>> Greetings,
>>
>> I'm considering using self-organizing maps for mining some lightning
>> data, and info.py in scipy.cluster mentions that implementation of
>> self-organizing maps is under development. I've found a few examples
>> written in python around the web, but none that are likely to be
>> efficient enough for my use.
>> >> Is there still interest in including SOM in scipy? I'd be happy to >> coordinate on a contribution if nothing else is under way. > > Sure! My colleague Corran Webster was thinking about doing some SOM > stuff for scipy, too, so you two should talk. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From contact at pythonxy.com Thu Aug 14 13:22:05 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Thu, 14 Aug 2008 19:22:05 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.0.1 Message-ID: <48A469BD.4060705@pythonxy.com> Hi all, As you may already know, Python(x,y) is a free scientific-oriented Python Distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Release 2.0.1 is now available on http://www.pythonxy.com. (Full release, and update patch) Note: ITK will be updated in the next release (3.6 -> 3.8) and GDCM will be included. Changes history 08-14-2008 - Version 2.0.1 : ** Added ** o Python(x,y) is now available in two versions (due to disk quota limits - and to the unusual size of the 2.0.1 update, the Basic Edition will not be available online, at least in the next few weeks): Full Edition (all Python packages are installed, some are optional like ETS or ITK) and Basic Edition (with essential Python libraries only: PyQt4, NumPy, SciPy, IPython and matplotlib) o SWIG 1.3.36 - SWIG is a compiler that integrates C and C++ with several languages including Python o Pyrex 0.9.8.4 - Pyrex is a language for writing Python extension modules (Note: Cython - which is based on Pyrex - is already included in the distribution) o xy 1.0.2 - xy is a module that gathers all Python(x,y) tools ** Updated ** o Enthought Tool Suite 2.8.0 o NumPy 1.1.1 o matplotlib 0.98.3 o Pywin32 2.12 o pp (Parallel Python) 1.5.5 o SymPy 0.6.1 o PyTables 2.0.4 o Eclipse 3.4.0 (CDT 5.0) o PyDev 1.3.19 (Eclipse plugin) o Qt Eclipse Integration 1.4.1 o Photran 4.0b4 (Eclipse plugin) o Wicked Shell 2.0.4 (Eclipse plugin) o StartExplorer 0.0.4 (Eclipse plugin) o Notepad++ 5.0.3 (and added Python script execution shortcut: Shift+F1) o Console 2 installer: checking if an old configuration file exists (and deleting it) before installation o Console 2 configuration: window transparency has been disabled because of display bugs with TVTK o IPython(x,y) profile: added customizable startup script ** Corrected ** o IPython : IPython(x,y) profile startup script can now be customized o PyQt4: installation folder is added to PATH, allowing to use directly pyrcc4.exe, pylupdate.exe, ... 
o Missing documentation in the following packages: Cython, GDAL, DAP, MDP, PyXML, MinGW o Minor bug in package uninstallers: dialog box with an error message but without any consequence o ITK module installer: Visual C++ 2008 libraries installer "vcredist.exe" has a known bug which will be fixed in release 2008 SP1 - some temporary files are erroneously copied to the system root - meanwhile, these files are now deleted at the end of the installation process Regards, Pierre Raybaut From william.ratcliff at gmail.com Thu Aug 14 13:59:34 2008 From: william.ratcliff at gmail.com (william ratcliff) Date: Thu, 14 Aug 2008 13:59:34 -0400 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.0.1 In-Reply-To: <48A469BD.4060705@pythonxy.com> References: <48A469BD.4060705@pythonxy.com> Message-ID: <827183970808141059u1069a322xe4af0b421a24c7f6@mail.gmail.com> Thanks for all of your work on this! A quick question about your VTK distribution--there is an option to compile VTK to use GL2PS to allow the export of scene's to eps, ps, pdf, etc. see: http://davis.lbl.gov/Manuals/VTK-4.5/classvtkGL2PSExporter.html Is your version compiled with this option. I believe it is not the default due to licensing issues. Thanks, William On Thu, Aug 14, 2008 at 1:22 PM, Pierre Raybaut wrote: > Hi all, > > As you may already know, Python(x,y) is a free scientific-oriented > Python Distribution based on Qt and Eclipse providing a self-consistent > scientific development environment. > > Release 2.0.1 is now available on http://www.pythonxy.com. > (Full release, and update patch) > Note: ITK will be updated in the next release (3.6 -> 3.8) and GDCM > will be included. > > Changes history > 08-14-2008 - Version 2.0.1 : > > ** Added ** > o Python(x,y) is now available in two versions (due to disk quota limits > - and to the unusual size of the 2.0.1 update, the Basic Edition will > not be available online, at least in the next few weeks): Full Edition > (all Python > packages are installed, some are optional like ETS or ITK) and Basic > Edition (with essential Python libraries only: PyQt4, NumPy, SciPy, > IPython and matplotlib) > o SWIG 1.3.36 - SWIG is a compiler that integrates C and C++ with > several languages including Python > o Pyrex 0.9.8.4 - Pyrex is a language for writing Python extension > modules (Note: Cython - which is based on Pyrex - is already included in > the distribution) > o xy 1.0.2 - xy is a module that gathers all Python(x,y) tools > > ** Updated ** > o Enthought Tool Suite 2.8.0 > o NumPy 1.1.1 > o matplotlib 0.98.3 > o Pywin32 2.12 > o pp (Parallel Python) 1.5.5 > o SymPy 0.6.1 > o PyTables 2.0.4 > o Eclipse 3.4.0 (CDT 5.0) > o PyDev 1.3.19 (Eclipse plugin) > o Qt Eclipse Integration 1.4.1 > o Photran 4.0b4 (Eclipse plugin) > o Wicked Shell 2.0.4 (Eclipse plugin) > o StartExplorer 0.0.4 (Eclipse plugin) > o Notepad++ 5.0.3 (and added Python script execution shortcut: Shift+F1) > o Console 2 installer: checking if an old configuration file exists (and > deleting it) before installation > o Console 2 configuration: window transparency has been disabled because > of display bugs with TVTK > o IPython(x,y) profile: added customizable startup script > > ** Corrected ** > o IPython : IPython(x,y) profile startup script can now be customized > o PyQt4: installation folder is added to PATH, allowing to use directly > pyrcc4.exe, pylupdate.exe, ... 
> o Missing documentation in the following packages: Cython, GDAL, DAP, > MDP, PyXML, MinGW > o Minor bug in package uninstallers: dialog box with an error message > but without any consequence > o ITK module installer: Visual C++ 2008 libraries installer > "vcredist.exe" has a known bug which will be fixed in release 2008 SP1 - > some temporary files are erroneously copied to the system root - > meanwhile, these files are now deleted at the end of the installation > process > > Regards, > Pierre Raybaut > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zane at ideotrope.org Thu Aug 14 15:40:18 2008 From: zane at ideotrope.org (Zane Selvans) Date: Thu, 14 Aug 2008 12:40:18 -0700 Subject: [SciPy-user] Finding local minima of greater than a given depth Message-ID: <660F6D33-AB88-499F-AE68-52CD42CEFDFF@ideotrope.org> Is there a function within scipy somewhere which will, given an array representing values of a function, find all the local minima having a depth greater than some specified minimum? The following works great for smooth functions, but when the data has noise in it, it also returns all of the (very) local minima, which I don't want. The functions I'm working with are periodic (hence the modulo in the indices for endpoint cases). Or, if there isn't such a built in functionality, what's the right way to measure the depth of a local minimum? def local_minima(fitlist): minima = [] for i in range(len(fitlist)): if fitlist[i] < fitlist[mod(i+1,len(fitlist))] and fitlist[i] < fitlist[mod(i-1,len(fitlist))]: minima.append(fitlist[i]) minima.sort() good_indices = [ fitlist.index(fit) for fit in minima ] good_fits = [ fit for fit in minima ] return(good_indices, good_fits) -- Zane Selvans Amateur Earthling http://zaneselvans.org zane at ideotrope.org 303/815-6866 PGP Key: 55E0815F From rob.clewley at gmail.com Thu Aug 14 16:01:19 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 14 Aug 2008 16:01:19 -0400 Subject: [SciPy-user] Finding local minima of greater than a given depth In-Reply-To: <660F6D33-AB88-499F-AE68-52CD42CEFDFF@ideotrope.org> References: <660F6D33-AB88-499F-AE68-52CD42CEFDFF@ideotrope.org> Message-ID: > Is there a function within scipy somewhere which will, given an array > representing values of a function, find all the local minima having a > depth greater than some specified minimum? The following works great > for smooth functions, but when the data has noise in it, it also > returns all of the (very) local minima, which I don't want. You could low-pass filter out the (presumably high frequency) noise, which might introduce a slight phase change to your data. But you can use the local minima of the filtered data as good starting points for a better search in your original data. You might need to fit local polynomials to your data near these to find the minimum without introducing too much statistical bias (e.g. by just taking the smallest data point, which might only be the smallest because of the noise). There isn't a scipy function to do this in one go, but there are filter functions in scipy.signal and some polynomial fitting functionality recently added by Anne Archibald which I use for this kind of problem (search these archives for reference to that). 
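To make that concrete, here is one rough sketch of the filter-then-refine
idea (a sketch only: the Gaussian width, the fit half-window, and the use of
scipy.ndimage's Gaussian instead of a properly designed scipy.signal low-pass
filter are arbitrary, illustrative choices):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def rough_minima(data, sigma=5.0, halfwin=3):
        data = np.asarray(data, dtype=float)
        n = len(data)
        # Low-pass the signal; mode='wrap' respects the periodicity.
        smooth = gaussian_filter1d(data, sigma, mode='wrap')
        # Candidate minima: the smoothed slope turns from negative to
        # non-negative (the wrap-around point is ignored for brevity).
        d = np.diff(smooth)
        candidates = [i + 1 for i in range(n - 2) if d[i] < 0 <= d[i + 1]]
        minima = []
        offsets = np.arange(-halfwin, halfwin + 1)
        for c in candidates:
            # Refine against the *raw* data with a local quadratic fit,
            # to avoid the bias of just taking the smallest noisy sample.
            a2, a1, a0 = np.polyfit(offsets, data[(c + offsets) % n], 2)
            if a2 > 0:  # the parabola opens upward: a genuine minimum
                minima.append((c - a1 / (2.0 * a2)) % n)
        return minima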
The depth of the minimum is better defined once you fit a function to the region, but the appropriate size of that region is context dependent. It could be meaningfully measured w.r.t. the next nearest local maximum, or to a global average or maximum level derived from the data. There's no one way to do it. -Rob From zachary.pincus at yale.edu Thu Aug 14 16:06:09 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 14 Aug 2008 16:06:09 -0400 Subject: [SciPy-user] Finding local minima of greater than a given depth In-Reply-To: <660F6D33-AB88-499F-AE68-52CD42CEFDFF@ideotrope.org> References: <660F6D33-AB88-499F-AE68-52CD42CEFDFF@ideotrope.org> Message-ID: <1A487E19-9232-4ECE-AF6D-7B736F4CC32F@yale.edu> > Is there a function within scipy somewhere which will, given an array > representing values of a function, find all the local minima having a > depth greater than some specified minimum? The following works great > for smooth functions, but when the data has noise in it, it also > returns all of the (very) local minima, which I don't want. The > functions I'm working with are periodic (hence the modulo in the > indices for endpoint cases). Or, if there isn't such a built in > functionality, what's the right way to measure the depth of a local > minimum? You could measure "depth" of minima (of a 1D array) by also finding the flanking maxima and looking at the distance between them. Or any of the other methods Rob suggested. Another way to find local minima in a noise-robust manner that I've often seen is to not look for a minimum "depth", but for a minimum distance between minima. This is easy to implement using scipy.ndimage's minimum filter, which sets each element of an array to the minimum value seen over a specified neighborhood of that element. Then you just check for array elements where the element is equal to the minimum in the neighborhood... I'd also suggest smoothing the data a bit with a gaussian to get rid or some of the noise. Scipy.ndimage also provides these filters. Zach PS. Here's my implementation... it returns the indices of the local maxima in a list. Also, the min-distance is in terms of manhattan distance, not euclidian, so be warned. For a 2D array, the returned list will have two elements -- the row- indices of the maxima and the column-indices of the maxima. There's probably a better way to do that, but this is what I have. def local_maxima(array, min_distance = 1, periodic=False, edges_allowed=True): """Find all local maxima of the array, separated by at least min_distance.""" import scipy.ndimage as ndimage array = numpy.asarray(array) cval = 0 if periodic: mode = 'wrap' elif edges_allowed: mode = 'nearest' else: mode = 'constant' cval = array.max()+1 max_points = array == ndimage.maximum_filter(array, 1+2*min_distance, mode=mode, cval=cval) return [indices[max_points] for indices in numpy.indices(array.shape)] On Aug 14, 2008, at 3:40 PM, Zane Selvans wrote: > Is there a function within scipy somewhere which will, given an array > representing values of a function, find all the local minima having a > depth greater than some specified minimum? The following works great > for smooth functions, but when the data has noise in it, it also > returns all of the (very) local minima, which I don't want. The > functions I'm working with are periodic (hence the modulo in the > indices for endpoint cases). Or, if there isn't such a built in > functionality, what's the right way to measure the depth of a local > minimum? 
> > def local_minima(fitlist): > minima = [] > > for i in range(len(fitlist)): > if fitlist[i] < fitlist[mod(i+1,len(fitlist))] and fitlist[i] > < fitlist[mod(i-1,len(fitlist))]: > minima.append(fitlist[i]) > > minima.sort() > > good_indices = [ fitlist.index(fit) for fit in minima ] > good_fits = [ fit for fit in minima ] > > return(good_indices, good_fits) > > -- > Zane Selvans > Amateur Earthling > http://zaneselvans.org > zane at ideotrope.org > 303/815-6866 > PGP Key: 55E0815F > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From h5py at alfven.org Thu Aug 14 18:58:52 2008 From: h5py at alfven.org (Andrew Collette) Date: Thu, 14 Aug 2008 15:58:52 -0700 Subject: [SciPy-user] ANN: HDF5 for Python (h5py) 0.3.0 Message-ID: <1218754732.7289.15.camel@tachyon-laptop> ======================================= Announcing HDF5 for Python (h5py) 0.3.0 ======================================= HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data. The h5py project has been under informal development for a few months now, and has reached the point where it might be generally useful to others. Unlike the fantastic PyTables project, h5py aims to provide access to the full HDF5 C library, although in a more Pythonic fashion. Almost all of the HDF5 1.6.X API is covered, with improvements like: - object-oriented identifiers with reference counting - automatic raising of Python exceptions for HDF5 errors - conversion between NumPy and HDF5 datatypes It also includes a set of high-level, pure-Python classes which represent basic HDF5 abstractions like files, groups, and datasets, using native Python and NumPy infrastructure. For example, datasets carry a shape tuple and dtype, and support partial I/O via the standard extended slicing syntax. The primary focus of the project is read/write interoperability with existing HDF5 data; using Python and the NumPy package, you can access data in HDF5 format, process it, and write files that any HDF5-aware program can understand. Automatic conversion is provided between NumPy and HDF5 datatypes; almost all NumPy types can be transparently converted to their HDF5 equivalents. This includes constructs like complex numbers in addition to arbitrarily nested compound ("recarray") data types. Resources ========= Python 2.5 and Numpy >= 1.0.3 are required. For UNIX, a C compiler is also required which can build Python extensions. HDF5 versions 1.6.5, 1.6.7, 1.8.0 and 1.8.1 are supported. The Windows installer includes HDF5 1.8.1. Source installers for UNIX and an integrated installer for Windows are available from the Google Code development page: http://h5py.googlecode.com Comprehensive documentation, including installation instructions and a quick-start guide, is available at: http://h5py.alfven.org You can read more about the HDF5 library at the HDF Group web site: http://www.hdfgroup.com/HDF5 *** This project is NOT affiliated with the HDF Group. *** All code for this project is released under the BSD license. Thanks ====== Thanks to D. Dale, D. Brooks, E. Lawrence for their comments and suggestions in development, and the PyTables project for general inspiration, along with "definitions.pxd". 
:) ---- Andrew Collette http://www.alfven.org Mail: "h5py" at the domain "alfven.org" From zane at ideotrope.org Thu Aug 14 19:37:11 2008 From: zane at ideotrope.org (Zane Selvans) Date: Thu, 14 Aug 2008 23:37:11 +0000 (UTC) Subject: [SciPy-user] Finding local minima of greater than a given depth References: <660F6D33-AB88-499F-AE68-52CD42CEFDFF@ideotrope.org> <1A487E19-9232-4ECE-AF6D-7B736F4CC32F@yale.edu> Message-ID: Zachary Pincus yale.edu> writes: > Another way to find local minima in a noise-robust manner that I've > often seen is to not look for a minimum "depth", but for a minimum > distance between minima. This is easy to implement using > scipy.ndimage's minimum filter, which sets each element of an array to > the minimum value seen over a specified neighborhood of that element. > Then you just check for array elements where the element is equal to > the minimum in the neighborhood... > > I'd also suggest smoothing the data a bit with a gaussian to get rid > or some of the noise. Scipy.ndimage also provides these filters. Great! This works well: def local_minima(fits, window=15): #{{{ """ Find the local minima within fits, and return them and their indices. Returns a list of indices at which the minima were found, and a list of the minima, sorted in order of increasing minimum. The keyword argument window determines how close two local minima are allowed to be to one another. If two local minima are found closer together than that, then the lowest of them is taken as the real minimum. window=1 will return all local minima. """ from scipy.ndimage.filters import minimum_filter as min_filter minfits = min_filter(fits, size=window, mode="wrap") minima = [] for i in range(len(fits)): if fits[i] == minfits[i]: minima.append(fits[i]) minima.sort() good_indices = [ fits.index(fit) for fit in minima ] good_fits = [ fit for fit in minima ] return(good_indices, good_fits) From millman at berkeley.edu Thu Aug 14 19:50:41 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 14 Aug 2008 16:50:41 -0700 Subject: [SciPy-user] SOM in scipy.cluster In-Reply-To: References: <56BD91CA-76FD-4F34-A765-5F82FCB302E9@deeplycloudy.com> <3d375d730808131311t77531782y2b7abc49cc016c96@mail.gmail.com> Message-ID: On Thu, Aug 14, 2008 at 7:29 AM, Matthieu Brucher wrote: > SOM were supposed to be included in scikits.learn, weren't they ? (as > scipy.cluster is in scipy only for historical reasons) It depends. I think Corran's proposal to put a simple and fast batch SOM algorithm in scipy.cluster.vq and more general variations of the SOM algorithm in the scikit is a good approach. In general, it is OK to improve the cluster package in scipy. For instance, I think Damian's hierarchical clustering code (http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/cluster/hierarchy.py) is a good example of what should go into scipy.cluster. 
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From zachary.pincus at yale.edu Thu Aug 14 23:49:04 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 14 Aug 2008 23:49:04 -0400 Subject: [SciPy-user] Finding local minima of greater than a given depth In-Reply-To: References: <660F6D33-AB88-499F-AE68-52CD42CEFDFF@ideotrope.org> <1A487E19-9232-4ECE-AF6D-7B736F4CC32F@yale.edu> Message-ID: <32873717-CD2F-4FB2-8E41-B23529D375BB@yale.edu> Hi, Not that it likely matters in this case, but below is a version of the local_minima function that uses some more advanced numpy features, like "fancy indexing". These features really make life easier in many cases (and are usually faster than explicit loops), so it's really worth learning how they work: def local_minima_fancy(fits, window=15): from scipy.ndimage import minimum_filter fits = numpy.asarray(fits) minfits = minimum_filter(fits, size=window, mode="wrap") minima_mask = fits == minfits good_indices = numpy.arange(len(fits))[minima_mask] good_fits = fits[minima_mask] order = good_fits.argsort() return good_indices[order], good_fits[order] We have two types of fancy indexing here. The first takes a boolean "mask": good_fits = fits[minima_mask] returns only the elements of fits where the minima_mask array is true. It's equivalent to: good_fits = numpy.array([fits[i] for i, m in enumerate(minima_mask) if m]) The second takes a list/array of indices: return good_indices[order] is equivalent to: return numpy.array([good_indices[i] for i in order]) Also note that the original function you sent has a slight bug when there are multiple minima with the same value: list.index returns the index of the *first* entry in the list with that value, so the indices of later minima will be incorrect. Plus, the function does some unnecessary work: good_fits = [ fit for fit in minima ] makes an unneeded copy of minima, and good_indices = [ fits.index(fit) for fit in minima ] requires traversing the fits list once for each minima (the "index" method runs a linear search), when only one traversal should be required. Here's a fixed and tuned-up version with just list processing: def local_minima_fixed(fits, window=15): from scipy.ndimage.filters import minimum_filter as min_filter minfits = min_filter(fits, size=window, mode="wrap") minima_and_indices = [] for i, (fit, minfit) in enumerate(zip(fits, minfits)): if fit == minfit: minima_and_indices.append([fit, i]) minima_and_indices.sort() good_fits, good_indices = zip(*minima_and_indices) return good_indices, good_fits For reference, here are some timings (made with ipython's excellent timeit magic command): In [60]: a = numpy.random.randint(400, size=2000) In [61]: timeit local_minima(list(a), 5) 100 loops, best of 3: 7.38 ms per loop In [62]: timeit local_minima_fixed(list(a), 5) 100 loops, best of 3: 2.69 ms per loop In [63]: timeit local_minima_fancy(list(a), 5) 1000 loops, best of 3: 973 [micro]s per loop As above, the speed of this routine probably doesn't matter too much, but it's a useful exercise to understand how and why these other versions work (if some step doesn't make sense, try working through the steps in the interpreter to see what's happening -- this is a good way to learn some nice features of python and numpy), and how the "fancy" version uses numpy's advanced features to good effect. 
Zach > def local_minima(fits, window=15): #{{{ > """ > Find the local minima within fits, and return them and their > indices. > > Returns a list of indices at which the minima were found, and a > list of the > minima, sorted in order of increasing minimum. The keyword > argument window > determines how close two local minima are allowed to be to one > another. If > two local minima are found closer together than that, then the > lowest of > them is taken as the real minimum. window=1 will return all > local minima. > > """ > from scipy.ndimage.filters import minimum_filter as min_filter > > minfits = min_filter(fits, size=window, mode="wrap") > > minima = [] > for i in range(len(fits)): > if fits[i] == minfits[i]: > minima.append(fits[i]) > > minima.sort() > > good_indices = [ fits.index(fit) for fit in minima ] > good_fits = [ fit for fit in minima ] > > return(good_indices, good_fits) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From haase at msg.ucsf.edu Fri Aug 15 16:43:21 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 15 Aug 2008 22:43:21 +0200 Subject: [SciPy-user] bezier splines from given control points (2D) Message-ID: Hi, I am looking for a way to calculate x,y-coordinates for a polygon interpolating a cubic or (better) a quadratic bezier spline. The bezier spline would be defined by known x,y coordinates of n control points. I found spline references both in scipy.interpolate and in scipy.signal. Which of the spline "stuff" in scipy in best suited for me ? Thanks for any help, Sebastian Haase From contact at pythonxy.com Sat Aug 16 03:29:42 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 16 Aug 2008 09:29:42 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.0.1 In-Reply-To: References: Message-ID: <48A681E6.1070606@pythonxy.com> > > Message: 2 > Date: Thu, 14 Aug 2008 13:59:34 -0400 > From: "william ratcliff" > Subject: Re: [SciPy-user] [ Python(x,y) ] New release : 2.0.1 > To: "SciPy Users List" > Message-ID: > <827183970808141059u1069a322xe4af0b421a24c7f6 at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > Thanks for all of your work on this! A quick question about your VTK > distribution--there is an option to compile VTK to use GL2PS to allow the > export of scene's to eps, ps, pdf, etc. see: > http://davis.lbl.gov/Manuals/VTK-4.5/classvtkGL2PSExporter.html > > Is your version compiled with this option. I believe it is not the default > due to licensing issues. > > > Thanks, > William Hi William, Actually yes, the VTK version included in Python(x,y) was compiled with this option. Thank for your message. Regards, Pierre From fperez.net at gmail.com Sun Aug 17 01:03:57 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 16 Aug 2008 22:03:57 -0700 Subject: [SciPy-user] Possible new multiplication operators for Python Message-ID: Hi all, [ please keep all replies to this only on the numpy list. I'm cc'ing the scipy ones to make others aware of the topic, but do NOT reply on those lists so we can have an organized thread for future reference] In the Python-dev mailing lists, there were recently two threads regarding the possibility of adding to the language new multiplication operators (amongst others). 
This would allow one to define things like an element-wise and a matrix product for numpy arrays, for example: http://mail.python.org/pipermail/python-dev/2008-July/081508.html http://mail.python.org/pipermail/python-dev/2008-July/081551.html It turns out that there's an old pep on this issue: http://www.python.org/dev/peps/pep-0225/ which hasn't been ruled out, simply postponed. At this point it seems that there is room for some discussion, and obviously the input of the numpy/scipy crowd would be very welcome. I volunteered to host a BOF next week at scipy so we could collect feedback from those present, but it's important that those NOT present at the conference can equally voice their ideas/opinions. So I wanted to open this thread here to collect feedback. We'll then try to have the bof next week at the conference, and I'll summarize everything for python-dev. Obviously this doesn't mean that we'll get any changes in, but at least there's interest in discussing a topic that has been dear to everyone here. Cheers, f From contact at pythonxy.com Sun Aug 17 15:56:38 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sun, 17 Aug 2008 21:56:38 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.0.2 Message-ID: <48A88276.4090106@pythonxy.com> Hi all, As you may already know, Python(x,y) is a free scientific-oriented Python Distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Release 2.0.2 is now available on http://www.pythonxy.com. (Full release, and update patch for 2.0.0 only because of disk space quota - If you have already installed the 2.0.1 release, please take a look at http://code.google.com/p/pythonxy for an update in a three parts archive which will be soon available) Note: GDCM will be included but only in the next release Changes history 08 -17 -2008 - Version 2.0.2 : * Updated: o Enthought Tool Suite 3.0.0 o ITK 3.8 Regards, Pierre Raybaut From ryanlists at gmail.com Mon Aug 18 10:27:46 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 18 Aug 2008 09:27:46 -0500 Subject: [SciPy-user] from scipy import *, from scipy import signal Message-ID: I upgraded to svn this morning and am having a rough time. I think I eventually got everything to compile, but I am having problems running the script I am working with. I have two primary problems (I think). 1. I have a lot of legacy code of mine that starts with "from scipy import *". But with the current svn versions, this doens't seem to do what it used to (it doesn't seem to bring in signal and integrate at least). 2. Secondly, adding "from scipy import signal" to fix problem #1, produces this message: In [2]: from scipy import signal --------------------------------------------------------------------------- Traceback (most recent call last) : module compiled against version 1000009 of C-API but this version of numpy is 100000a _um=None What is the easiest way to resolve these two issues? Thanks, Ryan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ryanlists at gmail.com Mon Aug 18 10:32:55 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 18 Aug 2008 09:32:55 -0500 Subject: [SciPy-user] from scipy import *, from scipy import signal In-Reply-To: References: Message-ID: FYI, I also tried to build umfpack from scikits and get a similar error: In [3]: import scikits In [4]: import scikits.umfpack --------------------------------------------------------------------------- Traceback (most recent call last) : module compiled against version 1000009 of C-API but this version of numpy is 100000a _um=None On Mon, Aug 18, 2008 at 9:27 AM, Ryan Krauss wrote: > I upgraded to svn this morning and am having a rough time. I think I > eventually got everything to compile, but I am having problems running the > script I am working with. I have two primary problems (I think). > > 1. I have a lot of legacy code of mine that starts with "from scipy import > *". But with the current svn versions, this doens't seem to do what it used > to (it doesn't seem to bring in signal and integrate at least). > > 2. Secondly, adding "from scipy import signal" to fix problem #1, produces > this message: > > In [2]: from scipy import signal > --------------------------------------------------------------------------- > Traceback (most recent call last) > > > : module compiled against version 1000009 > of C-API but this version of numpy is 100000a > _um=None > > > > What is the easiest way to resolve these two issues? > > Thanks, > > Ryan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Mon Aug 18 11:21:44 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 18 Aug 2008 17:21:44 +0200 Subject: [SciPy-user] from scipy import *, from scipy import signal In-Reply-To: References: Message-ID: Hi, If you upgraded numpy, you may have to recompile scipy and the scikits. Matthieu 2008/8/18 Ryan Krauss : > FYI, I also tried to build umfpack from scikits and get a similar error: > > In [3]: import scikits > > In [4]: import scikits.umfpack > --------------------------------------------------------------------------- > Traceback (most recent call last) > > > : module compiled against version 1000009 of > C-API but this version of numpy is 100000a > _um=None > > > On Mon, Aug 18, 2008 at 9:27 AM, Ryan Krauss wrote: >> >> I upgraded to svn this morning and am having a rough time. I think I >> eventually got everything to compile, but I am having problems running the >> script I am working with. I have two primary problems (I think). >> >> 1. I have a lot of legacy code of mine that starts with "from scipy import >> *". But with the current svn versions, this doens't seem to do what it used >> to (it doesn't seem to bring in signal and integrate at least). >> >> 2. Secondly, adding "from scipy import signal" to fix problem #1, produces >> this message: >> >> In [2]: from scipy import signal >> >> --------------------------------------------------------------------------- >> Traceback (most recent call >> last) >> >> >> : module compiled against version 1000009 >> of C-API but this version of numpy is 100000a >> _um=None >> >> >> >> What is the easiest way to resolve these two issues? 
>> Thanks,
>>
>> Ryan
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From lorenzo.isella at gmail.com  Mon Aug 18 11:00:13 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Mon, 18 Aug 2008 17:00:13 +0200
Subject: [SciPy-user] SciPy, MPI and OpenMP
Message-ID:

Dear All,
I have recently attended a crash course on MPI and OpenMP. The
examples always involved C or Fortran code.
Now, I have a thought: if working on a single processor, I hardly need
to use pure C or pure Fortran. I usually write a Fortran code for the
bottlenecks and compile it with f2py to create a python module I then
import.
Hence two questions:
(1) Can I do something similar with many processors? E.g.: write a
Python code, embed some compiled Fortran code which is supposed to run
on many processors, get the results and come back to Python.

Python--->Fortran on many processors--->back to Python.

(2)Is it also possible to directly parallelize a Python code? I heard
about thread locking in Python.
I did some online research, there seems to be a lot of projects trying
to combine Python and MPI/OpenMP, but many look rather "experimental".
In particular, of course, I would like to hear about SciPy and
parallel computing.
Many thanks

Lorenzo

From matthieu.brucher at gmail.com  Mon Aug 18 11:47:07 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 18 Aug 2008 17:47:07 +0200
Subject: [SciPy-user] SciPy, MPI and OpenMP
In-Reply-To:
References:
Message-ID:

2008/8/18 Lorenzo Isella :
> Dear All,
> I have recently attended a crash course on MPI and OpenMP. The
> examples always involved C or Fortran code.
> Now, I have a thought: if working on a single processor, I hardly need
> to use pure C or pure Fortran. I usually write a Fortran code for the
> bottlenecks and compile it with f2py to create a python module I then
> import.
> Hence two questions:
> (1) Can I do something similar with many processors? E.g.: write a
> Python code, embed some compiled Fortran code which is supposed to run
> on many processors, get the results and come back to Python.

Of course, this is possible. You can use OpenMP easily by using an
OpenMP-compatible compiler, and for MPI, there are several packages that
can be used (search through the archives, one of them apparently stands
out ;))

> Python--->Fortran on many processors--->back to Python.
> (2)Is it also possible to directly parallelize a Python code? I heard
> about thread locking in Python.

The GIL is a lock on the interpreter. If your code is mainly C code, it
can release the GIL, and therefore you can run parallel code in Python.
Numpy releases the lock whenever it can. If you use SWIG, you only have
to add -threads to the arguments and it releases the GIL (there are
several features in SWIG allowing you to tune the GIL processing; I've
explained them on my blog).

> I did some online research, there seems to be a lot of projects trying
> to combine Python and MPI/OpenMP, but many look rather "experimental".
> In particular, of course, I would like to hear about SciPy and
> parallel computing.

You can always use the processing module, which tries to mimic the
threading module but uses processes.
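A minimal sketch of that idea (untested; 'processing' is the third-party
package that became the standard multiprocessing module in Python 2.6, and
the pool size and work function here are only illustrative placeholders):

    # A toy parallel map over the rows of an array.
    try:
        from multiprocessing import Pool   # Python >= 2.6
    except ImportError:
        from processing import Pool        # the older third-party package
    import numpy as np

    def chunk_cost(chunk):
        # Stand-in for a real per-chunk computation.
        return float(np.sum(chunk ** 2))

    if __name__ == '__main__':
        data = np.random.rand(8, 100000)
        pool = Pool(processes=4)                     # four worker processes
        results = pool.map(chunk_cost, list(data))   # one row per task
        pool.close()
        pool.join()
        print results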
For MPI, search the archives, like I said, there are some interesting posts. Cheers, Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From eric at deeplycloudy.com Mon Aug 18 11:49:28 2008 From: eric at deeplycloudy.com (Eric Bruning) Date: Mon, 18 Aug 2008 11:49:28 -0400 Subject: [SciPy-user] SOM in scipy.cluster In-Reply-To: References: Message-ID: <2D6586F8-05DB-46F6-A167-A5EA5E0BFB7D@deeplycloudy.com> On Aug 14, 2008, at 10:03 AM, Corran Webster wrote: >> On Wed, Aug 13, 2008 at 10:31, Eric Bruning >> wrote: >> > Greetings, >> > >> > I'm considering using self-organizing maps for mining some >> lightning >> > data, and info.py in scipy.cluster mentions that implementation of >> > self-organizing maps is under development. I've found a few >> examples >> > written in python around the web, but none that are likely to be >> > efficient enough for my use. >> > >> > Is there still interest in including SOM in scipy? I'd be happy to >> > coordinate on a contribution if nothing else is under way. >> >> Sure! My colleague Corran Webster was thinking about doing some SOM >> stuff for scipy, too, so you two should talk. > > Hi, > > yes, I've been thinking seriously about adding some SOM algorithms > to scipy - I used them heavily in my previous job (we used SOMs to > classify documents for the Mayo clinic and slot machine players for > casinos...). > > I think that there is a place for a simple and fast batch SOM > algorithm in scipy.cluster.vq, since the batch SOM can be viewed as > a generalization of K-means. > > For more general variations of the SOM algorithm, it may make sense > to put them elsewhere - possibly in the machine learning scikit > where the algorithms can access other. > > I'd be interested to hear what your needs are as far as data types, > distance functions, data set sizes and SOM topologies, as that > would likely influence on where I concentrate my energy. Thanks for your interest! I have two data types that I'm considering mining. One is a space and time tracing of lightning channels, on the order of 10^2 to 10^3 points per flash. The spatial coordinates are inherently vectorial, but there is the complication of doing a distance measure along the time coordinate. I might want to look at the map generated by a single flash. Perhaps I might also want to throw 10^6 points from a bunch of flashes to characterize an entire thunderstorm at once. The other data type is a collection of flash properties, which don't naturally form any sort of vector. These are properties like extent, altitude, brightness, etc. We'd like to use mapped channels to predict the optical signal. SOMs are new to me, but my naive intuition finds them suited to this kind of exploratory data mining. Working on their implementation should be instructive. -Eric From ryanlists at gmail.com Mon Aug 18 12:11:31 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 18 Aug 2008 11:11:31 -0500 Subject: [SciPy-user] from scipy import *, from scipy import signal In-Reply-To: References: Message-ID: Thanks Matthieu. The biggest part of my problem was that I somehow built scipy and the scikit with an older version of numpy. I thought I was careful about that, but apparently not. My other question remains: does "from scipy import *" no longer pull in signal and integrate or is there something else wrong with my install? 
Thanks, Ryan On Mon, Aug 18, 2008 at 10:21 AM, Matthieu Brucher < matthieu.brucher at gmail.com> wrote: > Hi, > > If you upgraded numpy, you may have to recompile scipy and the scikits. > > Matthieu > > 2008/8/18 Ryan Krauss : > > FYI, I also tried to build umfpack from scikits and get a similar error: > > > > In [3]: import scikits > > > > In [4]: import scikits.umfpack > > > --------------------------------------------------------------------------- > > Traceback (most recent call > last) > > > > > > : module compiled against version 1000009 > of > > C-API but this version of numpy is 100000a > > _um=None > > > > > > On Mon, Aug 18, 2008 at 9:27 AM, Ryan Krauss > wrote: > >> > >> I upgraded to svn this morning and am having a rough time. I think I > >> eventually got everything to compile, but I am having problems running > the > >> script I am working with. I have two primary problems (I think). > >> > >> 1. I have a lot of legacy code of mine that starts with "from scipy > import > >> *". But with the current svn versions, this doens't seem to do what it > used > >> to (it doesn't seem to bring in signal and integrate at least). > >> > >> 2. Secondly, adding "from scipy import signal" to fix problem #1, > produces > >> this message: > >> > >> In [2]: from scipy import signal > >> > >> > --------------------------------------------------------------------------- > >> Traceback (most recent call > >> last) > >> > >> > >> : module compiled against version > 1000009 > >> of C-API but this version of numpy is 100000a > >> _um=None > >> > >> > >> > >> What is the easiest way to resolve these two issues? > >> > >> Thanks, > >> > >> Ryan > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Aug 18 18:45:59 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 18 Aug 2008 17:45:59 -0500 Subject: [SciPy-user] SciPy, MPI and OpenMP In-Reply-To: References: Message-ID: <3d375d730808181545x2f978002g8c78bb7da8c2d5fe@mail.gmail.com> On Mon, Aug 18, 2008 at 10:00, Lorenzo Isella wrote: > Dear All, > I have recently attended a crash course on MPI and OpenMP. The > examples always involved C or Fortran code. > Now, I have a thought: if working on a single processor, I hardly need > to use pure C or pure Fortran. I usually write a Fortran code for the > bottlenecks and compile it with f2py to create a python module I then > import. > Hence two questions: > (1) Can I do something similar with many processors? E.g.: write a > Python code, embed some compiled Fortran code which is supposed to run > on many processors, get the results and come back to Python. > > Python--->Fortran on many processors--->back to Python. > (2)Is it also possible to directly parallelize a Python code? I heard > about thread locking in Python. There is a global interpreter lock (GIL) when touching Python data structures. 
If you handle the threads entirely in the Fortran code and never call
back into Python until you are finished with the threads, this should not
be an issue for you. In bad ASCII art:

    Python  |Fortran           | Python
            |       /------\   |
    --------|------<-------->--|--------
            |       \------/   |

> I did some online research, there seems to be a lot of projects trying
> to combine Python and MPI/OpenMP, but many look rather "experimental".
> In particular, of course, I would like to hear about SciPy and
> parallel computing.

Most of the MPI wrappers these days are fairly mature. I think some of
the OpenMP work is still pretty new, though.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From cournape at gmail.com  Tue Aug 19 02:10:34 2008
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 18 Aug 2008 23:10:34 -0700
Subject: [SciPy-user] from scipy import *, from scipy import signal
In-Reply-To:
References:
Message-ID: <5b8d13220808182310j6beceb1eyf3943b9f850a0f69@mail.gmail.com>

On Mon, Aug 18, 2008 at 9:11 AM, Ryan Krauss wrote:
> Thanks Matthieu. The biggest part of my problem was that I somehow built
> scipy and the scikit with an older version of numpy. I thought I was
> careful about that, but apparently not.
>
> My other question remains: does "from scipy import *" no longer pull in
> signal and integrate or is there something else wrong with my install?

AFAIK, it was never supposed to work, and was a bug. Those automatic
imports were pulled out recently.

cheers,

David

From alexander.borghgraef.rma at gmail.com  Thu Aug 21 05:41:20 2008
From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef)
Date: Thu, 21 Aug 2008 11:41:20 +0200
Subject: [SciPy-user] How to use where
Message-ID: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com>

Hi all,

I'm trying to figure out here how 'where' works exactly. I'm working
on a list of vectors which I represent as a 2D array, and I'd like to
remove the vectors which are out of bounds. So after some
experimenting I got to basically this:

listofvectors = ...                                # shape is ( 100, 2 )
bound = array( [ xmax, ymax ] )
inside = where( all( listofvectors < bound, axis = 1 ) )   # inside is ( array[ 1, 2, 4, 10, ... ] )
listofvectors = listofvectors[ inside, : ]         # shape is ( 1, 100, 2 )

Ok, so I'm almost there. The change of shape is annoying, since I'll
be using listofvectors in a loop to sample from an image. Not that I
can't work around this, but I'm looking for a cleaner way. Also, I
don't really get why numpy is doing this, anyone care to explain?
Thanks.

--
Alex Borghgraef

From c.j.lee at tnw.utwente.nl  Thu Aug 21 07:13:44 2008
From: c.j.lee at tnw.utwente.nl (Chris Lee)
Date: Thu, 21 Aug 2008 13:13:44 +0200
Subject: [SciPy-user] How to use where
In-Reply-To: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com>
References: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com>
Message-ID: <3CC8C418-EDC9-4E55-B655-E7EEF52881C9@tnw.utwente.nl>

Well what I would normally do is this:

listofvectors = ..
t1 = where(listofvectors < xmax, 1, 0)
t2 = where(listofvectors < ymax, 1, 0)
inBound = t1*t2
rows = nonzero(inBound)[0]
cols = nonzero(inBound)[1]

while someCondition:
    listofvectors[rows, cols] = someoperation(listofvectors[rows, cols])

The point is that the nonzero operation will allow you to slice the
listofvectors so that only the elements still in bound are used in your
loop.
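A tiny made-up illustration of that masking and slicing (the values and the
single bound are arbitrary):

    >>> from numpy import array, where, nonzero
    >>> v = array([[1, 5], [9, 2], [3, 3]])
    >>> mask = where(v < 4, 1, 0)    # 1 where the element is in bound
    >>> rows, cols = nonzero(mask)
    >>> v[rows, cols]                # only the in-bound elements survive
    array([1, 2, 3, 3])

Note that this picks out individual elements rather than whole vectors.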
Hope this helps Cheers Chris On Aug 21, 2008, at 11:41 AM, Alexander Borghgraef wrote: > Hi all, > > I'm trying to figure out here how 'where' works exactly. I'm working > on a list of vectors which I represent as a 2D array, and I'd like to > remove the vectors which are out of bounds. So after some > experimenting I got to basically this: > > listofvectors = ... > # shape is ( 100, 2 ) > bound = array( [ xmax, ymax ] ) > inside = where( all( listofvectors < bound, axis = 1 ) ) # inside > is ( array[ 1, 2, 4, 10, ... ] ) > listofvectors = listofvectors[ inside, : ] > # shape is ( 1, 100, 2 ) > > Ok, so I'm almost there. The change of shape is annoying, since I'll > be using listofvectors in a loop to sample from an image. Not that I > can't work around this, but I'm looking for a cleaner way. Also, I > don't really get why numpy is doing this, anyone care to explain? > Thanks. > > -- > Alex Borghgraef > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user *************************************************** Chris Lee Laser Physics and Nonlinear Optics Group MESA+ Research Institute for Nanotechnology University of Twente Phone: ++31 (0)53 489 3968 fax: ++31 (0)53 489 1102 *************************************************** From yosefmel at post.tau.ac.il Thu Aug 21 07:35:26 2008 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Thu, 21 Aug 2008 14:35:26 +0300 Subject: [SciPy-user] How to use where In-Reply-To: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com> References: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com> Message-ID: <200808211435.26709.yosefmel@post.tau.ac.il> On Thursday 21 August 2008 12:41:20 Alexander Borghgraef wrote: > Hi all, > > I'm trying to figure out here how 'where' works exactly. I'm working > on a list of vectors which I represent as a 2D array, and I'd like to > remove the vectors which are out of bounds. So after some > experimenting I got to basically this: > > listofvectors = ... > # shape is ( 100, 2 ) > bound = array( [ xmax, ymax ] ) > inside = where( all( listofvectors < bound, axis = 1 ) ) # inside > is ( array[ 1, 2, 4, 10, ... ] ) I believe it's actually (array[ 1, 2, 4, 10, ... ], ) - note the comma at the end. You get a tuple with only one array for the single dimension. > listofvectors = listofvectors[ inside, : ] > # shape is ( 1, 100, 2 ) What you need is: listofvectors = listofvectors[ inside[0], : ] or drop the colon: listofvectors = listofvectors[inside] > don't really get why numpy is doing this, anyone care to explain? When you're using the result of where(), you're passing a sequence inside a tuple. When a tuple of sequences is passed as one of the indices, it creates a new dimension, whose size is the number of equal-length sequences passed in the tuple. Play with it a bit: In [1]: a = arange(1, 10).reshape(3,3) In [2]: a Out[2]: array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) In [3]: a[([1,2],[1,2]),:] Out[3]: array([[[4, 5, 6], [7, 8, 9]], [[4, 5, 6], [7, 8, 9]]]) In [4]: a[:,([1,2],[1,2])] Out[4]: array([[[2, 3], [2, 3]], [[5, 6], [5, 6]], [[8, 9], [8, 9]]]) In [5]: a[r_[1,2],:] # what you would expect: Out[5]: array([[4, 5, 6], [7, 8, 9]]) In [6]: a[(r_[1,2],),:] # new dimension. 
Out[6]:
array([[[4, 5, 6],
        [7, 8, 9]]])

From dave.hirschfeld at gmail.com  Thu Aug 21 10:03:21 2008
From: dave.hirschfeld at gmail.com (Dave)
Date: Thu, 21 Aug 2008 14:03:21 +0000 (UTC)
Subject: [SciPy-user] How to use where
References: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com>
Message-ID:

Alexander Borghgraef <alexander.borghgraef.rma at gmail.com> writes:

>
> Hi all,
>
> I'm trying to figure out here how 'where' works exactly. I'm working
> on a list of vectors which I represent as a 2D array, and I'd like to
> remove the vectors which are out of bounds. So after some
> experimenting I got to basically this:
>
> listofvectors = ...                                # shape is ( 100, 2 )
> bound = array( [ xmax, ymax ] )
> inside = where( all( listofvectors < bound, axis = 1 ) )   # inside is ( array[ 1, 2, 4, 10, ... ] )
> listofvectors = listofvectors[ inside, : ]         # shape is ( 1, 100, 2 )
>

In [1]: listofvectors = rand(100,2)

In [2]: bound = array([0.5,0.8])

In [3]: idx = all(listofvectors < bounds,axis=1)

In [4]: inbounds = listofvectors[idx,:]

I'm not sure of the utility of where - I tend to use boolean masks. Is
there any reason one wouldn't use the code I posted above?

-Dave

From alexander.borghgraef.rma at gmail.com  Thu Aug 21 10:26:55 2008
From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef)
Date: Thu, 21 Aug 2008 16:26:55 +0200
Subject: [SciPy-user] How to use where
In-Reply-To:
References: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com>
Message-ID: <9e8c52a20808210726g6b2cb1aapfdb6de619eb0cd6e@mail.gmail.com>

On Thu, Aug 21, 2008 at 4:03 PM, Dave wrote:
> In [1]: listofvectors = rand(100,2)
>
> In [2]: bound = array([0.5,0.8])
>
> In [3]: idx = all(listofvectors < bounds,axis=1)
>
> In [4]: inbounds = listofvectors[idx,:]

Neat. One less function call, seems like a good thing to me.

> I'm not sure of the utility of where - I tend to use boolean masks. Is there any
> reason one wouldn't use the code I posted above?

Can't see why not (please someone correct me if I'm wrong). After
playing with where a bit, my impression is that its main use is the
conditional replacement of elements, and not what I was trying to do
with it. For example:

a = arange(10)
> array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
where( a > 4, 0, a ** 2)
> array([ 0, 1, 4, 9, 16, 0, 0, 0, 0, 0])

Anyway, thanks to everyone who responded!

--
Alex Borghgraef

From martin.hoefling at gmx.de  Thu Aug 21 12:21:14 2008
From: martin.hoefling at gmx.de (Martin Höfling)
Date: Thu, 21 Aug 2008 18:21:14 +0200
Subject: [SciPy-user] 2D data plot
Message-ID: <200808211821.14382.martin.hoefling@gmx.de>

Hi folks,

I am trying to plot 2D data (x,y->z). Contour nearly does what I want
except that I wanna fill a full pixel/square/rectangle for each x,y pair
(since x and y are integer values). Contour plot draws lines and bevels
between the points. What's the right plot/option for me?

Best
Martin

From wnbell at gmail.com  Thu Aug 21 13:16:16 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 21 Aug 2008 13:16:16 -0400
Subject: [SciPy-user] 2D data plot
In-Reply-To: <200808211821.14382.martin.hoefling@gmx.de>
References: <200808211821.14382.martin.hoefling@gmx.de>
Message-ID:

On Thu, Aug 21, 2008 at 12:21 PM, Martin Höfling wrote:
>
> I am trying to plot 2D data (x,y->z). Contour nearly does what I want
> except that I wanna fill a full pixel/square/rectangle for each x,y pair
> (since x and y are integer values). Contour plot draws lines and bevels
> between the points. What's the right plot/option for me?
> Have you tried pcolor()? http://matplotlib.sourceforge.net/matplotlib.pyplot.html#-pcolor -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From peridot.faceted at gmail.com Fri Aug 22 00:18:12 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 22 Aug 2008 00:18:12 -0400 Subject: [SciPy-user] How to use where In-Reply-To: References: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com> Message-ID: 2008/8/21 Dave : > Alexander Borghgraef gmail.com> writes: > >> >> Hi all, >> >> I'm trying to figure out here how 'where' works exactly. I'm working >> on a list of vectors which I represent as a 2D array, and I'd like to >> remove the vectors which are out of bounds. So after some >> experimenting I got to basically this: >> >> listofvectors = ... >> # shape is ( 100, 2 ) >> bound = array( [ xmax, ymax ] ) >> inside = where( all( listofvectors < bound, axis = 1 ) ) # inside >> is ( array[ 1, 2, 4, 10, ... ] ) >> listofvectors = listofvectors[ inside, : ] >> # shape is ( 1, 100, 2 ) >> > > In [1]: listofvectors = rand(100,2) > > In [2]: bound = array([0.5,0.8]) > > In [3]: idx = all(listofvectors < bound,axis=1) > > In [4]: inbounds = listofvectors[idx,:] > > I'm not sure of the utility of where - I tend to use boolean masks. Is there any > reason one wouldn't use the code I posted above? No, in fact boolean masks are usually more efficient. The exception I would make is when you want to keep the index around for a while and when it's only a small fraction of the array elements: then it's more efficient to keep track of where the elements you want are. It's also sometimes useful to do manipulations on the positions of array elements, and you might care about the order of the result. It's also worth noting that none of these uses require "where"; fancy indexing can do the same thing. Finally, there is another, totally unrelated, operation of the function called "where": it can be used to build arrays: where(a<3, -1, a-4) produces an array that is -1 anywhere a<3, and is a-4 everywhere else. Having these two unrelated operations built into the same function is poor UI design, but we're stuck with it. For that reason, I only ever use where in this second mode. Anne From c.j.lee at tnw.utwente.nl Fri Aug 22 03:05:44 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Fri, 22 Aug 2008 09:05:44 +0200 Subject: [SciPy-user] How to use where In-Reply-To: References: <9e8c52a20808210241m1b0daea9vec98452c6e32cd8f@mail.gmail.com> Message-ID: <1069749D-ACE7-4A38-AB6F-BC1EBFD5540C@tnw.utwente.nl> Now that is nice. I had been using where with nonzero because I misunderstood what all did and never even tried it :( Nevertheless a code rewrite is not on the horizon :) Cheers Chris On Aug 22, 2008, at 6:18 AM, Anne Archibald wrote: > 2008/8/21 Dave : >> Alexander Borghgraef gmail.com> >> writes: >> >>> >>> Hi all, >>> >>> I'm trying to figure out here how 'where' works exactly. I'm working >>> on a list of vectors which I represent as a 2D array, and I'd like >>> to >>> remove the vectors which are out of bounds. So after some >>> experimenting I got to basically this: >>> >>> listofvectors = ... >>> # shape is ( 100, 2 ) >>> bound = array( [ xmax, ymax ] ) >>> inside = where( all( listofvectors < bound, axis = 1 ) ) # inside >>> is ( array[ 1, 2, 4, 10, ...
] ) >>> listofvectors = listofvectors[ inside, : ] >>> # shape is ( 1, 100, 2 ) >>> >> >> In [1]: listofvectors = rand(100,2) >> >> In [2]: bound = array([0.5,0.8]) >> >> In [3]: idx = all(listofvectors < bound,axis=1) >> >> In [4]: inbounds = listofvectors[idx,:] >> >> I'm not sure of the utility of where - I tend to use boolean masks. >> Is there any >> reason one wouldn't use the code I posted above? > > No, in fact boolean masks are usually more efficient. The exception I > would make is when you want to keep the index around for a while and > when it's only a small fraction of the array elements: then it's more > efficient to keep track of where the elements you want are. It's also > sometimes useful to do manipulations on the positions of array > elements, and you might care about the order of the result. > > It's also worth noting that none of these uses require "where"; fancy > indexing can do the same thing. > > Finally, there is another, totally unrelated, operation of the > function called "where": it can be used to build arrays: > > where(a<3, -1, a-4) > > produces an array that is -1 anywhere a<3, and is a-4 everywhere else. > Having these two unrelated operations built into the same function is > poor UI design, but we're stuck with it. For that reason, I only ever > use where in this second mode. > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user *************************************************** Chris Lee Laser Physics and Nonlinear Optics Group MESA+ Research Institute for Nanotechnology University of Twente Phone: ++31 (0)53 489 3968 fax: ++31 (0)53 489 1102 *************************************************** From flyingdeckchair at googlemail.com Fri Aug 22 10:01:34 2008 From: flyingdeckchair at googlemail.com (peter websdell) Date: Fri, 22 Aug 2008 15:01:34 +0100 Subject: [SciPy-user] Scilab to Scipy Message-ID: Hello all, I'm converting from using scilab/matlab to python. I'm finding it great so far, but I've discovered a problem that I don't understand. The following code displays a contour plot of the natural modes of a plate in scilab: ################### Lx=1; Ly=1; n=2; m=2; f=100; w=2*%pi*f; t=1; A=2; Kx=n*%pi/Lx; Ky=m*%pi/Ly; x=linspace(0,100); y=linspace(0,100); z=zeros(100,100); for i = 1:100 for j = 1:100 z(i,j) = A * sin(Kx*x(i)) * sin(Ky*y(j)) * %e^(%i*w*t); end end contour(x,y,z,20) ################## Now here's how I've done it in python: ################## from pylab import * Lx=1 Ly=1 n=2 m=2 f=100 w=2*pi*f t=1 A=2 Kx=n*pi/Lx Ky=m*pi/Ly x,y =mgrid[0:100,0:100] z=empty((100,100)) z=A * sin(Kx*x) * sin(Ky*y) * e**(1j*w*t) contour(x,y,z) show() ################### The result does plot a contour, but it is garbage. I have tried replicating the silly for loop approach, and also using the real or abs value of the result, but it is still garbage. Can anyone offer some advice as to why the two scripts produce different results? Thanks a lot, Pete. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.bruning at gmail.com Fri Aug 22 11:15:00 2008 From: eric.bruning at gmail.com (Eric Bruning) Date: Fri, 22 Aug 2008 11:15:00 -0400 Subject: [SciPy-user] Thoughts on GUI development Message-ID: Hi Gael, et al.: Regrettably, I couldn't attend scipy this year, but have been enjoying everyone's slides.
I'm not entirely sure what the purpose of this message is, but one item in your slides was quite relevant to what I've been working on this week. In your lightning talk on the interactive shell, you wrote: "What do we gain with GUIs? -Pretty look and feel -Doesn't make you more productive/richer: no economic or academic incentive" At a gut level, I disagree that GUIs don't make you more productive. I imagine that you and others probably disagree, too. After all, we must find *some* non-superficial value in them to go to the effort of writing them! For certain datasets, I've found that I can't do the analysis I want without a good GUI for browsing and tagging data. Since this involves syncing up plots and animations across 4D, the code is non-trivial. So, the snag comes at the point of trying to justify all the time you spend writing a GUI app; you hope for future efficiency in analysis, but the benefits are pushed into the future right along with publishable results (==riches, such as it is in academia). And much of the efficiency comes by implementing the hardest features: not just plots, but draggable, zoomable, taggable, animated, linked plots. Perhaps the best way to conclude is by saying thanks to all those that are pushing forward on the graphical toolkits. The faster it is to make an app, the less conflict we'll all feel when justifying time spent crafting GUIs. -Eric From warren.weckesser at gmail.com Fri Aug 22 12:37:23 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 22 Aug 2008 12:37:23 -0400 Subject: [SciPy-user] Scilab to Scipy In-Reply-To: References: Message-ID: <114880320808220937l4743d45cxa43cec4ec1a518af@mail.gmail.com> Peter, The problem is in your original scilab code. You say x = linspace(0,100); and in the calculation of the mode, you have the term sin(Kx*x). Since Kx = 2*pi, and your x ranges from 0 to 100, you should see 100 oscillations in the x direction. You don't see it, because your grid is too coarse. The default number of samples for the linspace function is 100. What you really want is 0 <= x <= Lx. In your scilab code, x should be set like this: x = linspace(0,Lx,grid_size) The third argument, as its name suggests, is the number of samples to use. Here's a modified version of your scilab code (I also changed e^(%i*w*t) to cos(w*t), but that was not the source of the problem): #################### Lx = 1; Ly = 1; n = 2; m = 2; f = 100; w = 2*%pi*f; t = 1; A = 2; Kx = n*%pi/Lx; Ky = m*%pi/Ly; grid_size = 501; x = linspace(0,Lx,grid_size); y = linspace(0,Ly,grid_size); z = zeros(grid_size,grid_size); for i = 1:grid_size for j = 1:grid_size z(i,j) = A * sin(Kx*x(i)) * sin(Ky*y(j)) * cos(w*t); end end contour(x,y,z,20) #################### Cheers, Warren On Fri, Aug 22, 2008 at 10:01 AM, peter websdell < flyingdeckchair at googlemail.com> wrote: > Hello all, > > I'm converting from using sclab/matlab to python. I'm finding it great so > far, but I've discovered a problem that I don't understand. 
> > The following code displays a contour plot of the natural modes of a plate > in scilab: > > ################### > Lx=1; > Ly=1; > n=2; > m=2; > f=100; > w=2*%pi*f; > t=1; > A=2; > Kx=n*%pi/Lx; > Ky=m*%pi/Ly; > > x=linspace(0,100); > y=linspace(0,100); > z=zeros(100,100); > > for i = 1:100 > for j = 1:100 > z(i,j) = A * sin(Kx*x(i)) * sin(Ky*y(j)) * %e^(%i*w*t); > end > end > > contour(x,y,z,20) > ################## > > Now here's how I've done it in python: > > ################## > from pylab import * > > Lx=1 > Ly=1 > n=2 > m=2 > f=100 > w=2*pi*f > t=1 > A=2 > > Kx=n*pi/Lx > Ky=m*pi/Ly > > x,y =mgrid[0:100,0:100] > z=empty((100,100)) > z=A * sin(Kx*x) * sin(Ky*y) * e**(1j*w*t) > > contour(x,y,z) > show() > ################### > > The result does plot a contour, but it is garbage. I have tried replecating > the silly for loop approach, and also using the real of abs value of the > result, but it is still garbage. > > Can anyone offer some advice as to why the two scripts produce different > results? > > Thanks a lot, > Pete. > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmay31 at gmail.com Fri Aug 22 12:39:05 2008 From: rmay31 at gmail.com (Ryan May) Date: Fri, 22 Aug 2008 09:39:05 -0700 Subject: [SciPy-user] Scilab to Scipy In-Reply-To: References: Message-ID: On Fri, Aug 22, 2008 at 7:01 AM, peter websdell < flyingdeckchair at googlemail.com> wrote: > Hello all, > > I'm converting from using sclab/matlab to python. I'm finding it great so > far, but I've discovered a problem that I don't understand. > > The following code displays a contour plot of the natural modes of a plate > in scilab: > > ################### > Lx=1; > Ly=1; > n=2; > m=2; > f=100; > w=2*%pi*f; > t=1; > A=2; > Kx=n*%pi/Lx; > Ky=m*%pi/Ly; > > x=linspace(0,100); > y=linspace(0,100); > z=zeros(100,100); > > for i = 1:100 > for j = 1:100 > z(i,j) = A * sin(Kx*x(i)) * sin(Ky*y(j)) * %e^(%i*w*t); > end > end > > contour(x,y,z,20) > ################## > > Now here's how I've done it in python: > > ################## > from pylab import * > > Lx=1 > Ly=1 > n=2 > m=2 > f=100 > w=2*pi*f > t=1 > A=2 > > Kx=n*pi/Lx > Ky=m*pi/Ly > > x,y =mgrid[0:100,0:100] > z=empty((100,100)) > z=A * sin(Kx*x) * sin(Ky*y) * e**(1j*w*t) > > contour(x,y,z) > show() > ################### > > The result does plot a contour, but it is garbage. I have tried replecating > the silly for loop approach, and also using the real of abs value of the > result, but it is still garbage. > > Can anyone offer some advice as to why the two scripts produce different > results? > Can you send/post an image somewhere that shows what you're expecting it to look like? I don't see any obvious problems with what you've written (other than z=empty((100,100)) being extraneous). Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gael.varoquaux at normalesup.org Fri Aug 22 13:01:14 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 22 Aug 2008 19:01:14 +0200 Subject: [SciPy-user] Thoughts on GUI development In-Reply-To: References: Message-ID: <20080822170114.GB32181@phare.normalesup.org> On Fri, Aug 22, 2008 at 11:15:00AM -0400, Eric Bruning wrote: > In your lightning talk on the interactive shell, you wrote: > "What do we gain with GUIs? > -Pretty look and feel > -Doesn't make you more productive/richer: no economic or academic incentive" I guess that was just me being provocative as usual. :). That said, I would have fully agreed with you a month ago, but I came to realize that there is a heavy cost you pay by sitting in a GUI: you now have to deal with screen refresh, event-processing, and if your calculations are sitting in the same Python process, this slows them down. So I guess my point is that to pay this price, and still have a valuable scientific tool, you need a better incentive than looking pretty, you need to be able to solve additional problems, and this comes with adding additional features. Gaël From warren.weckesser at gmail.com Fri Aug 22 13:23:06 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 22 Aug 2008 13:23:06 -0400 Subject: [SciPy-user] Scilab to Scipy In-Reply-To: <114880320808220937l4743d45cxa43cec4ec1a518af@mail.gmail.com> References: <114880320808220937l4743d45cxa43cec4ec1a518af@mail.gmail.com> Message-ID: <114880320808221023n2e7d807ar77143c8506e0b123@mail.gmail.com> Here is one way to implement the corrected scilab code with pylab: #################### from pylab import * Lx = 1.0 Ly = 1.0 n = 2 m = 3 f = 1.0 w = 2*pi*f t = 1.0 A = 2.0 Kx = n*pi/Lx Ky = m*pi/Ly grid_size = 200 x = linspace(0,Lx,grid_size) y = linspace(0,Ly,grid_size) [X,Y] = meshgrid(x,y) z = A * sin(Kx*X) * sin(Ky*Y) * cos(w*t) contour(X,Y,z) show() #################### On Fri, Aug 22, 2008 at 12:37 PM, Warren Weckesser < warren.weckesser at gmail.com> wrote: > Peter, > > The problem is in your original scilab code. You say > x = linspace(0,100); > and in the calculation of the mode, you have the term sin(Kx*x). Since Kx = > 2*pi, and your x ranges from 0 to 100, you should see 100 oscillations in > the x direction. You don't see it, because your grid is too coarse. The > default number of samples for the linspace function is 100. > > What you really want is 0 <= x <= Lx. In your scilab code, x should be set > like this: > x = linspace(0,Lx,grid_size) > The third argument, as its name suggests, is the number of samples to use. > > Here's a modified version of your scilab code (I also changed e^(%i*w*t) to > cos(w*t), but that was not the source of the problem): > > #################### > Lx = 1; > Ly = 1; > n = 2; > m = 2; > f = 100; > w = 2*%pi*f; > t = 1; > A = 2; > Kx = n*%pi/Lx; > Ky = m*%pi/Ly; > > grid_size = 501; > x = linspace(0,Lx,grid_size); > y = linspace(0,Ly,grid_size); > z = zeros(grid_size,grid_size); > > for i = 1:grid_size > for j = 1:grid_size > z(i,j) = A * sin(Kx*x(i)) * sin(Ky*y(j)) * cos(w*t); > end > end > > contour(x,y,z,20) > > #################### > > Cheers, > > Warren > > On Fri, Aug 22, 2008 at 10:01 AM, peter websdell < > flyingdeckchair at googlemail.com> wrote: > >> Hello all, >> >> I'm converting from using sclab/matlab to python. I'm finding it great so >> far, but I've discovered a problem that I don't understand.
>> >> The following code displays a contour plot of the natural modes of a plate >> in scilab: >> >> ################### >> Lx=1; >> Ly=1; >> n=2; >> m=2; >> f=100; >> w=2*%pi*f; >> t=1; >> A=2; >> Kx=n*%pi/Lx; >> Ky=m*%pi/Ly; >> >> x=linspace(0,100); >> y=linspace(0,100); >> z=zeros(100,100); >> >> for i = 1:100 >> for j = 1:100 >> z(i,j) = A * sin(Kx*x(i)) * sin(Ky*y(j)) * %e^(%i*w*t); >> end >> end >> >> contour(x,y,z,20) >> ################## >> >> Now here's how I've done it in python: >> >> ################## >> from pylab import * >> >> Lx=1 >> Ly=1 >> n=2 >> m=2 >> f=100 >> w=2*pi*f >> t=1 >> A=2 >> >> Kx=n*pi/Lx >> Ky=m*pi/Ly >> >> x,y =mgrid[0:100,0:100] >> z=empty((100,100)) >> z=A * sin(Kx*x) * sin(Ky*y) * e**(1j*w*t) >> >> contour(x,y,z) >> show() >> ################### >> >> The result does plot a contour, but it is garbage. I have tried replecating >> the silly for loop approach, and also using the real of abs value of the >> result, but it is still garbage. >> >> Can anyone offer some advice as to why the two scripts produce different >> results? >> >> Thanks a lot, >> Pete. >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Fri Aug 22 15:23:57 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 22 Aug 2008 21:23:57 +0200 Subject: [SciPy-user] Thoughts on GUI development In-Reply-To: References: Message-ID: <48AF124D.2010003@ru.nl> Eric Bruning wrote: > Hi Gael, et al.: > > Regrettably, I couldn't attend scipy this year, but have been enjoying > everyone's slides. I'm not entirely sure what the purpose of this > message is, but one item in your slides was quite relevant to what > I've been working on this week. > > In your lightning talk on the interactive shell, you wrote: > "What do we gain with GUIs? > -Pretty look and feel > -Doesn't make you more productive/richer: no economic or academic incentive" > > At a gut level, I disagree that GUIs don't make you more productive. I > imagine that you and others probably disagree, too. After all, we must > find *some* non-superficial value in them to go to the effort of > writing them! For certain datasets, I've found that I can't do the > analysis I want without a good GUI for browsing and tagging data. > Since this involves syncing up plots and animations across 4D, the > code is non-trivial. > > So, the snag comes at the point of trying to justify all the time you > spend writing a GUI app; you hope for future efficiency in analysis, > but the benefits are pushed into the future right along with > publishable results (==riches, such as it is in academia). And much of > the efficiency comes by implementing the hardest features: not just > plots, but draggable, zoomable, taggable, animated, linked plots. > > Perhaps the best way to conclude is by saying thanks to all those that > are pushing forward on the graphical toolkits. The faster it is to > make an app, the less conflict we'll all feel when justifying time > spent crafting GUIs. > > For every specialist, it doesn't matter what tool he uses, GUI or not. But for the boys and girls that once in a while need some tools, a GUI is a very very convenient way of not having to know / remember all the small details !!
Why do you think all amateurs (and a few professionals) like LabView so much ;-) But even more important than the GUI is feeding the user with the right a priori knowledge of the tool and the domain at the right time. Just my 2 cents, I'm not an expert, but ... ... I'm trying to build a Labview equivalent in Python ;-) cheers, Stef From flyingdeckchair at googlemail.com Fri Aug 22 15:34:29 2008 From: flyingdeckchair at googlemail.com (peter websdell) Date: Fri, 22 Aug 2008 20:34:29 +0100 Subject: [SciPy-user] Scilab to Scipy In-Reply-To: <114880320808221023n2e7d807ar77143c8506e0b123@mail.gmail.com> References: <114880320808220937l4743d45cxa43cec4ec1a518af@mail.gmail.com> <114880320808221023n2e7d807ar77143c8506e0b123@mail.gmail.com> Message-ID: Hello, Thanks for the advice Warren. However, the scilab code actually produces the results I am expecting. I will post a picture next week. What I don't understand is that python takes the exact same input, processes the data in the same way and then produces different results. Like I say, I'll post piccies next week. Adios for noo. Thanks again, Pete. > > > On Fri, Aug 22, 2008 at 12:37 PM, Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> Peter, >> >> The problem is in your original scilab code. You say >> x = linspace(0,100); >> and in the calculation of the mode, you have the term sin(Kx*x). Since Kx >> = 2*pi, and your x ranges from 0 to 100, you should see 100 oscillations in >> the x direction. You don't see it, because your grid is too coarse. The >> default number of samples for the linspace function is 100. >> >> What you really want is 0 <= x <= Lx. In your scilab code, x should be >> set like this: >> x = linspace(0,Lx,grid_size) >> The third argument, as its name suggests, is the number of samples to use. >> >> Here's a modified version of your scilab code (I also changed e^(%i*w*t) >> to cos(w*t), but that was not the source of the problem): >> >> #################### >> Lx = 1; >> Ly = 1; >> n = 2; >> m = 2; >> f = 100; >> w = 2*%pi*f; >> t = 1; >> A = 2; >> Kx = n*%pi/Lx; >> Ky = m*%pi/Ly; >> >> grid_size = 501; >> x = linspace(0,Lx,grid_size); >> y = linspace(0,Ly,grid_size); >> z = zeros(grid_size,grid_size); >> >> for i = 1:grid_size >> for j = 1:grid_size >> z(i,j) = A * sin(Kx*x(i)) * sin(Ky*y(j)) * cos(w*t); >> end >> end >> >> contour(x,y,z,20) >> >> #################### >> >> Cheers, >> >> Warren >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Fri Aug 22 16:01:19 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 22 Aug 2008 16:01:19 -0400 Subject: [SciPy-user] Scilab to Scipy In-Reply-To: References: <114880320808220937l4743d45cxa43cec4ec1a518af@mail.gmail.com> <114880320808221023n2e7d807ar77143c8506e0b123@mail.gmail.com> Message-ID: <114880320808221301l15115e19i287b3521b962577b@mail.gmail.com> Hi Peter, On Fri, Aug 22, 2008 at 3:34 PM, peter websdell < flyingdeckchair at googlemail.com> wrote: > Hello, > > Thanks for the advice warren. However, the scilab code actually produces > the results I am expecting. > Only by a remarkable stroke of luck does it create a "correct" picture when n=2 and m=2. Try changing to n=1 and rerun the scilab code. Or try changing your grid size from the default of 100 to something even a little larger, say 101 (i.e. add a third argument of 101 to the linspace commands, adjust the initialization of z to "z=zeros(101,101)", and change the for loop limits appropriately).
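If you would rather see the numbers than a plot, here is a quick numpy sketch of the same sampling effect (my own check, using plain numpy rather than scilab; nothing here is from the original session):

import numpy as np

# The default grid: 100 points from 0 to 100, spacing 100/99 ~ 1.0101,
# i.e. just slightly more than the period (1) of sin(2*pi*x).
x = np.linspace(0, 100, 100)
samples = np.sin(2*np.pi*x)

# Each sample drifts a little further into the next cycle, so the sampled
# values are indistinguishable from ONE slow oscillation:
one_cycle = np.sin(2*np.pi*np.arange(100)/99.0)
print np.allclose(samples, one_cycle)     # True -- classic aliasing

# With 101 points the spacing is exactly the period, so every sample lands
# at (nearly) the same phase and the curve collapses to zero:
x2 = np.linspace(0, 100, 101)
print np.abs(np.sin(2*np.pi*x2)).max()    # ~1e-13, i.e. zero up to rounding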
Your scilab plot will be a mess, like the one I'm looking at right now. > I will post a picture next week. > > What I don't understand is that python takes the exact same input, > processess the data in the same way and then produces different results. > Basically, the scilab code is bad, and you are recreating the bug in the python code. Cheers, Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From barrywark at gmail.com Fri Aug 22 16:12:24 2008 From: barrywark at gmail.com (Barry Wark) Date: Fri, 22 Aug 2008 16:12:24 -0400 Subject: [SciPy-user] Thoughts on GUI development In-Reply-To: <48AF124D.2010003@ru.nl> References: <48AF124D.2010003@ru.nl> Message-ID: I think what Stef is getting at is that effective scientific software may need to match its user model to the user's world model. In other words, the "workflow" matters. If the software requires the user (scientist/engineer/etc.) to deal with data or process in a different order than the order implied by their experiment, the software is not as good as it could be. In my view, this is why we build UIs--so that we can match the software model to the user's model such that the software is "invisible" to the user in doing their work. I contend that it is a rare case when a CLI interface is the *best* fit to the user's world model. Of course, as Gael points out, writing GUIs takes time. The tradeoff then is efficiency of use for the user versus efficiency of delivery time for the developer. Barry On Fri, Aug 22, 2008 at 3:23 PM, Stef Mientki wrote: > > > Eric Bruning wrote: >> Hi Gael, et al.: >> >> Regrettably, I couldn't attend scipy this year, but have been enjoying >> everyone's slides. I'm not entirely sure what the purpose of this >> message is, but one item in your slides was quite relevant to what >> I've been working on this week. >> >> In your lightning talk on the interactive shell, you wrote: >> "What do we gain with GUIs? >> -Pretty look and feel >> -Doesn't make you more productive/richer: no economic or academic incentive" >> >> At a gut level, I disagree that GUIs don't make you more productive. I >> imagine that you and others probably disagree, too. After all, we must >> find *some* non-superficial value in them to go to the effort of >> writing them! For certain datasets, I've found that I can't do the >> analysis I want without a good GUI for browsing and tagging data. >> Since this involves syncing up plots and animations across 4D, the >> code is non-trivial. >> >> So, the snag comes at the point of trying to justify all the time you >> spend writing a GUI app; you hope for future efficiency in analysis, >> but the benefits are pushed into the future right along with >> publishable results (==riches, such as it is in academia). And much of >> the efficiency comes by implementing the hardest features: not just >> plots, but draggable, zoomable, taggable, animated, linked plots. >> >> Perhaps the best way to conclude is by saying thanks to all those that >> are pushing forward on the graphical toolkits. The faster it is to >> make an app, the less conflict we'll all feel when justifying time >> spent crafting GUIs. >> >> > For every specialist, it doesn't matter what tool he uses, GUI or not. > But for the boys and girls that once in a while need some tools, > GUI is very very convenient way of not have to know / remember all the > small details !! 
> Why do you think all amateurs (and a few professionals) like LabView so > much ;-) > But even more important than the GUI is feeding the user with the right > apriori knowledge of the tool and the domain at the right time. > > Just my 2 cents, > I'm not an expert, but ... > ... I'm trying to build a Labview equivalent in Python ;-) > > cheers, > Stef > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From warren.weckesser at gmail.com Fri Aug 22 16:21:02 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 22 Aug 2008 16:21:02 -0400 Subject: [SciPy-user] Scilab to Scipy In-Reply-To: <114880320808221301l15115e19i287b3521b962577b@mail.gmail.com> References: <114880320808220937l4743d45cxa43cec4ec1a518af@mail.gmail.com> <114880320808221023n2e7d807ar77143c8506e0b123@mail.gmail.com> <114880320808221301l15115e19i287b3521b962577b@mail.gmail.com> Message-ID: <114880320808221321u4c67403ej4123d5f8f5ed7537@mail.gmail.com> Peter, In case you are not convinced, here is a simpler demonstration of the problem with your scilab code. The graph of sin(2*pi*x) for 0 <= x <= 100 should be 100 oscillations. Try this in scilab: x = linspace(0,100); plot(x,sin(2*%pi*x)); You will see a *single* oscillation! The problem is that the spacing between the x coordinates is 100/99 = 1.0101..., which is just slightly larger than the actual period of the oscillations. So each sample catches one of the oscillations at increasing phase values within the oscillation, and creates the illusion of a single oscillation, when in fact, there should be 100 oscillations. If the number of samples is increased to 101, then the spacing of the samples is exactly 1, and the plot will be identically 0. Try this: x = linspace(0,100,101); plot(x,sin(2*%pi*x)); and you'll see a horizontal line. The basic problem is that Lx is the length of the plate, but you are not scaling your x coordinates to be in the range 0 <= x <= Lx when you compute the mode. The code that I sent in my first email shows one way to fix this. Good luck, Warren On Fri, Aug 22, 2008 at 4:01 PM, Warren Weckesser < warren.weckesser at gmail.com> wrote: > Hi Peter, > > On Fri, Aug 22, 2008 at 3:34 PM, peter websdell < > flyingdeckchair at googlemail.com> wrote: > >> Hello, >> >> Thanks for the advice warren. However, the scilab code actually produces >> the results I am expecting. >> > > Only by a remarkable stroke of luck does it create a "correct" picture > when n=2 and m=2. Try changing to n=1 and rerun the scilab code. Or try > changing your grid size from the default of 100 to something even a little > larger, say 101 (i.e. add a third argument of 101 to the linspace commands, > adjust the initialization of z to "z=zeros(101,101)", and change the for > loop limits appropriately). Your scilab plot will be a mess, like the one > I'm looking at right now. > > >> I will post a picture next week. >> >> What I don't understand is that python takes the exact same input, >> processess the data in the same way and then produces different results. >> > > Basically, the scilab code is bad, and you are recreating the bug in the > python code. > > > Cheers, > > Warren > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eric.bruning at gmail.com Fri Aug 22 16:40:52 2008 From: eric.bruning at gmail.com (Eric Bruning) Date: Fri, 22 Aug 2008 16:40:52 -0400 Subject: [SciPy-user] Thoughts on GUI development In-Reply-To: References: <48AF124D.2010003@ru.nl> Message-ID: >> In your lightning talk on the interactive shell, you wrote: >> "What do we gain with GUIs? >> -Pretty look and feel >> -Doesn't make you more productive/richer: no economic or academic incentive" > I guess that was just me being provocative as usual. :). I'm glad you were! (And I took no offense) > That said, I would have fully agreed with you a month ago, but I came to > realize that there is a heavy cost you pay by sitting in a GUI: you now > have to deal with screen refresh, event-processing, and if your > calculations are sitting in the same Python process, this slows them > down. Are the problems you list specific to integrating an ipython prompt with a running GUI? I haven't encountered such concurrency issues thus far. My main struggle has been in getting the user interaction I want ... and that comes back to being able to trap the right event, reselect data in a timely and flexible way, etc. So maybe I have encountered what you describe after all. > So I guess my point is that to pay this price, and still have a > valuable scientific tools, you need a better incentive than looking > pretty, you need to be able to solve additional problems, and this come > with adding additional features. Agreed! -Eric From eric.bruning at gmail.com Fri Aug 22 16:48:04 2008 From: eric.bruning at gmail.com (Eric Bruning) Date: Fri, 22 Aug 2008 16:48:04 -0400 Subject: [SciPy-user] Thoughts on GUI development In-Reply-To: References: <48AF124D.2010003@ru.nl> Message-ID: On Fri, Aug 22, 2008 at 4:12 PM, Barry Wark wrote: > I think what Stef is getting at is that effective scientific software > may need to match its user model to the user's world model. In other > words, the "workflow" matters. If the software requires the user > (scientist/engineer/etc.) to deal with data or process in a different > order than the order implied by their experiment, the software is not > as good as it could be. In my view, this is why we build UIs--so that > we can match the software model to the user's model such that the > software is "invisible" to the user in doing their work. I contend > that it is a rare case when a CLI interface is the *best* fit to the > user's world model. Workflow is definitely the key reason why I want a GUI. Perhaps Gael's point is that building a flexible workflow is one of the most challenging parts! I'm glad to see tools being developed that are a more natural fit for the way I think about data and interfaces. > Of course, as Gael points out, writing GUIs takes time. The tradeoff > then is efficiency of use for the user versus efficiency of delivery > time for the developer. And this tradeoff is compounded when user and developer are the same - finite time and all that! -Eric From gael.varoquaux at normalesup.org Fri Aug 22 18:43:15 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 23 Aug 2008 00:43:15 +0200 Subject: [SciPy-user] Thoughts on GUI development In-Reply-To: References: <48AF124D.2010003@ru.nl> Message-ID: <20080822224315.GA14708@phare.normalesup.org> On Fri, Aug 22, 2008 at 04:12:24PM -0400, Barry Wark wrote: > I think what Stef is getting at is that effective scientific software > may need to match its user model to the user's world model. In other > words, the "workflow" matters.
If the software requires the user > (scientist/engineer/etc.) to deal with data or process in a different > order than the order implied by their experiment, the software is not > as good as it could be. In my view, this is why we build UIs--so that > we can match the software model to the user's model such that the > software is "invisible" to the user in doing their work. I contend > that it is a rare case when a CLI interface is the *best* fit to the > user's world model. I agree, but I was talking about a CLI interface, not a notebook, or something else, and I guess my point was that if you go in GUIs, you should get more than a nice-looking terminal, e.g. Matlab, scilab. > Of course, as Gael points out, writing GUIs takes time. The tradeoff > then is efficiency of use for the user versus efficiency of delivery > time for the developer. That is not my point however. My point is that by definition when you move out of the nice, fuzzy environment of a terminal, and stick this functionality in the process in which you are running the calculation, you pay a cost in robustness and speed of your environment. Of course you can go multiprocess, but this is not my point here, as when you do that, you lose the big gain of being able to introspect your calculation cheaply as it runs. If you are going this way, I think an AJAX application is not that stupid (as sage does), as the web browser actually gives you a very robust, rather easy to code, and fairly powerful canvas. I am currently struggling with trying to define what is my model, what is my view, what should sit where, i.e. how many processes we want. I am now convinced that for a robust and powerful IDE with Python, we want several processes communicating together. For instance, I think that the editor, be it written in Python, and not emacs or vim, or eclipse, should be sitting in a different process, so that the calculation does not block the editor, nor crash it. I am now starting to wonder if the canvas on which we print IO could not also be in a different process (web browser?) as IO is just text, and transferring from one process to another is cheaper. However, what I am really interested in for a UI, is a view on my namespace, that I can use to dynamically explore the arrays, or the different objects. I want this view to be dynamic, and strongly interactive, and I don't think that sitting in a different process than the calculation engine will get me this. Obviously I am still trying to figure these things out and thinking out loud here, but I am trying to push people to think out of the box, and make people realize that the standard model of a GUI in which everything happens is maybe not the best for our purposes, and that people have to think about the costs and the gains. Gaël From flyingdeckchair at googlemail.com Fri Aug 22 18:57:22 2008 From: flyingdeckchair at googlemail.com (peter websdell) Date: Fri, 22 Aug 2008 23:57:22 +0100 Subject: [SciPy-user] Scilab to Scipy In-Reply-To: <114880320808221321u4c67403ej4123d5f8f5ed7537@mail.gmail.com> References: <114880320808220937l4743d45cxa43cec4ec1a518af@mail.gmail.com> <114880320808221023n2e7d807ar77143c8506e0b123@mail.gmail.com> <114880320808221301l15115e19i287b3521b962577b@mail.gmail.com> <114880320808221321u4c67403ej4123d5f8f5ed7537@mail.gmail.com> Message-ID: Gotcha. Thanks for clarifying. I was being pretty dense. I've done a lot of acoustic calculations in the past, but it's been a wee while. I'm convinced! Pete.
2008/8/22 Warren Weckesser > Peter, > > In case you are not convinced, here is a simpler demonstration of the > problem with your scilab code. The graph of sin(2*pi*x) for 0 <= x <= 100 > should be 100 oscillations. Try this in scilab: > > x = linspace(0,100); > plot(x,sin(2*%pi*x)); > > You will see a *single* oscillation! The problem is that the spacing > between the x coordinates is 100/99 = 1.0101..., which is just slightly > larger than the actual period of the oscillations. So each sample catches > one of the oscillations at increasing phase values within the oscillation, > and creates the illusion of a single oscillation, when in fact, there should > be 100 oscillations. If the number of samples is increased to 101, then the > spacing of the samples is exactly 1, and the plot will be identically 0. > Try this: > > x = linspace(0,100,101); > plot(x,sin(2*%pi*x)); > > and you'll see a horizontal line. > > The basic problem is that Lx is the length of the plate, but you are not > scaling your x coordinates to be in the range 0 <= x <= Lx when you compute > the mode. The code that I sent in my first email shows one way to fix this. > > > Good luck, > > Warren > > > > On Fri, Aug 22, 2008 at 4:01 PM, Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> Hi Peter, >> >> On Fri, Aug 22, 2008 at 3:34 PM, peter websdell < >> flyingdeckchair at googlemail.com> wrote: >> >>> Hello, >>> >>> Thanks for the advice warren. However, the scilab code actually produces >>> the results I am expecting. >>> >> >> Only by a remarkable stroke of luck does it create a "correct" picture >> when n=2 and m=2. Try changing to n=1 and rerun the scilab code. Or try >> changing your grid size from the default of 100 to something even a little >> larger, say 101 (i.e. add a third argument of 101 to the linspace commands, >> adjust the initialization of z to "z=zeros(101,101)", and change the for >> loop limits appropriately). Your scilab plot will be a mess, like the one >> I'm looking at right now. >> >> >>> I will post a picture next week. >>> >>> What I don't understand is that python takes the exact same input, >>> processess the data in the same way and then produces different results. >>> >> >> Basically, the scilab code is bad, and you are recreating the bug in the >> python code. >> >> >> Cheers, >> >> Warren >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flyingdeckchair at googlemail.com Fri Aug 22 19:08:37 2008 From: flyingdeckchair at googlemail.com (peter websdell) Date: Sat, 23 Aug 2008 00:08:37 +0100 Subject: [SciPy-user] Scilab to Scipy In-Reply-To: References: <114880320808220937l4743d45cxa43cec4ec1a518af@mail.gmail.com> <114880320808221023n2e7d807ar77143c8506e0b123@mail.gmail.com> <114880320808221301l15115e19i287b3521b962577b@mail.gmail.com> <114880320808221321u4c67403ej4123d5f8f5ed7537@mail.gmail.com> Message-ID: Excellent. I'm a happy man, as far as making pretty plots can make a man happy. Thanks again. Pete. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: plot.jpg Type: image/jpeg Size: 39918 bytes Desc: not available URL: From listservs at mac.com Sat Aug 23 00:58:07 2008 From: listservs at mac.com (Chris Fonnesbeck) Date: Sat, 23 Aug 2008 04:58:07 +0000 (UTC) Subject: [SciPy-user] looking for a negative binomial distribution Message-ID: I notice in the scipy dev wiki that the negative binomial random number generator is broken. In particular, it appears to round the first parameter, which is incorrect -- any real number is valid, not just integers. As a result, the method bombs out when passing it a parameter value of n<1, since it gets rounded to zero. This is a bit worrying for a common distribution like the NB, and it makes me wonder about other random number generators in numpy/scipy. Does anyone else know of a stable random number generating library for python? From robert.kern at gmail.com Sat Aug 23 01:41:06 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 Aug 2008 22:41:06 -0700 Subject: [SciPy-user] looking for a negative binomial distribution In-Reply-To: References: Message-ID: <3d375d730808222241s6211f9d6pb0238391c65aff17@mail.gmail.com> On Fri, Aug 22, 2008 at 21:58, Chris Fonnesbeck wrote: > I notice in the scipy dev wiki that the negative binomial random number > generator is broken. In particular, it appears to round the first parameter, > which is incorrect -- any real number is valid, not just integers. Ah. My apologies. The reference I was working from (Luc Devroye's _Nonuniform Random Variate Generation, p.543) describes it as taking an integer n and real p. Float arguments get cast to C longs and thus truncated. > As a result, the method bombs out when passing it a parameter value of n<1, > since it gets rounded to zero. This is a bit worrying for a common > distribution like the NB, and it makes me wonder about other random number > generators in numpy/scipy. There was a bug found in the noncentral F distribution recently. > Does anyone else know of a stable random number generating library for python? PyGSL, if that's still a going concern. It would be nice, however, if you helped me test the distributions, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Aug 23 02:15:16 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 Aug 2008 23:15:16 -0700 Subject: [SciPy-user] looking for a negative binomial distribution In-Reply-To: References: Message-ID: <3d375d730808222315g2dfefb61yc716fede2cfa20b2@mail.gmail.com> On Fri, Aug 22, 2008 at 21:58, Chris Fonnesbeck wrote: > I notice in the scipy dev wiki that the negative binomial random number > generator is broken. In particular, it appears to round the first parameter, > which is incorrect -- any real number is valid, not just integers. Fixed in numpy SVN. This will be part of the upcoming 1.2.0 release and probably 1.1.2 if we do one. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From listservs at mac.com Sat Aug 23 04:41:26 2008 From: listservs at mac.com (Chris Fonnesbeck) Date: Sat, 23 Aug 2008 08:41:26 +0000 (UTC) Subject: [SciPy-user] looking for a negative binomial distribution References: <3d375d730808222315g2dfefb61yc716fede2cfa20b2@mail.gmail.com> Message-ID: Robert Kern gmail.com> writes: > > On Fri, Aug 22, 2008 at 21:58, Chris Fonnesbeck mac.com> wrote: > > I notice in the scipy dev wiki that the negative binomial random number > > generator is broken. In particular, it appears to round the first parameter, > > which is incorrect -- any real number is valid, not just integers. > > Fixed in numpy SVN. This will be part of the upcoming 1.2.0 release > and probably 1.1.2 if we do one. > Thanks Robert, I would have poked around the source code myself, but I'm on a deadline. I will update and test the code now though. The easiest way to generate NB random variables is to sample from a gamma, then use that value as the mean for a sample from a poisson. cf From listservs at mac.com Sat Aug 23 05:25:05 2008 From: listservs at mac.com (Chris Fonnesbeck) Date: Sat, 23 Aug 2008 09:25:05 +0000 (UTC) Subject: [SciPy-user] looking for a negative binomial distribution References: <3d375d730808222315g2dfefb61yc716fede2cfa20b2@mail.gmail.com> Message-ID: Chris Fonnesbeck mac.com> writes: > Robert Kern gmail.com> writes: > > Fixed in numpy SVN. This will be part of the upcoming 1.2.0 release > > and probably 1.1.2 if we do one. > > > > Thanks Robert, > > I would have poked around the source code myself, but I'm on a deadline. I will > update and test the code now though. > > The easiest way to generate NB random variables is to sample from a gamma, > then use that value as the mean for a sample from a poisson. > The fix appears to work, though the samples appear to be biased high relative to those generated using another method, based on a couple tests. I'll make sure I'm not making an error first, before complaining again. The first parameter does accept non-integer arguments now, however. Thanks, cf From contact at pythonxy.com Sat Aug 23 14:24:45 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 23 Aug 2008 20:24:45 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.0.3 Message-ID: <48B055ED.4000900@pythonxy.com> Hi all, As you may already know, Python(x,y) is a free scientific-oriented Python Distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Release 2.0.3 is now available on http://www.pythonxy.com. (Full Edition, Basic Edition, and update patch will be available on Monday on http://code.google.com/p/pythonxy) Changes history 08-23-2008 - Version 2.0.3: * Added: o GDCM 2.0.8 (thanks to Mathieu Malaterre) - Grassroots DiCoM is a C++ library for dealing with DICOM medical files o pyExcelerator 0.6.3 - Generating Excel 97+ files, importing Excel 95+ files, support for UNICODE in Excel files, using variety of formatting features and printing options, Excel files and OLE2 compound files dumper o EasyGUI 0.83 - EasyGUI is a tiny Python module for very simple, very easy GUI programming * Updated: o PyDev 1.3.20 o Console 2.0.140 o PyQt 4.4.3 o Qt Eclipse Integration 1.4.1.1 (Qt help update) o VTK 5.0.4 o SymPy 0.6.2 o Cython 0.9.8.1.1 * Corrected: o matplotlib 0.98.3: compatibility issue with PyQt 4.4.x o Console 2: "Open console here..."
now opens a command window instead of IPython-sh (the latter is less often used from Windows explorer, and can be opened afterwards thanks to Console2 multiple tabs management) Regards, Pierre Raybaut From robert.kern at gmail.com Sat Aug 23 20:05:43 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 23 Aug 2008 17:05:43 -0700 Subject: [SciPy-user] looking for a negative binomial distribution In-Reply-To: References: <3d375d730808222315g2dfefb61yc716fede2cfa20b2@mail.gmail.com> Message-ID: <3d375d730808231705q7b4387bsd66af84d012e4197@mail.gmail.com> On Sat, Aug 23, 2008 at 02:25, Chris Fonnesbeck wrote: > Chris Fonnesbeck mac.com> writes: >> Robert Kern gmail.com> writes: >> > Fixed in numpy SVN. This will be part of the upcoming 1.2.0 release >> > and probably 1.1.2 if we do one. >> > >> >> Thanks Robert, >> >> I would have poked around the source code myself, but I'm on a deadline. I will >> update and test the code now though. >> >> The easiest way to generate NB random variables is to sample from a gamma, >> then use that value as the mean for a sample from a poisson. Yes, that's exactly what I'm doing. long rk_negative_binomial(rk_state *state, double n, double p) { double Y; Y = rk_gamma(state, n, (1-p)/p); return rk_poisson(state, Y); } > The fix appears to work, though the samples appear to be biased high relative to > thosegenerated using another method, based on a couple tests. I'll make sure > I'm not making an error first, before complaining again. The first parameter does > accept non-integer arguments now, however. I cannot see any particular deviation from the CDF or PDF (via Q-Q plots and histograms). The sample means appear to match the theoretical mean over a fairly wide range of parameters (see below). If you are still having trouble getting the distribution to match your other method, please let me know with details about your other method and the results that numpy generates for you. In [132]: def meandiff(r, mean): .....: p = float(r) / (r+mean) .....: return np.random.negative_binomial(r,p,size=10000).mean() - mean .....: -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From paratribulations at free.fr Sun Aug 24 16:22:19 2008 From: paratribulations at free.fr (Tribulations =?iso-8859-1?q?Parall=E8les?=) Date: Sun, 24 Aug 2008 22:22:19 +0200 Subject: [SciPy-user] interpolation speed problem Message-ID: <200808242222.20466.paratribulations@free.fr> Hi everybody, I try to perform quick interpolation with Python. So far I have compared two solutions. 
First, the scipy solution: ##### import numpy as N from scipy import interpolate from time import time number = 1000000 a=N.arange( 0, number, 0.1) b=N.arange( number, 2*number, 0.1) f = interpolate.interp1d( a, b ) t_initial = time() for i in range(0,50): print "foo=%.5f" % f( 49999.5 ), t_final = time() print "\nTotal time =", t_final-t_initial ##### Now, the pygsl version (the two lines "astype(numx.float_)" are very important for the speed, these lines have been given by Pierre from the pygsl mailing list): ##### import pygsl.interpolation from time import time import numpy numx = pygsl._numobj number = 1000000 a=numpy.arange( 0, number ) b=numpy.arange( number, 2*number ) a = a.astype(numx.float_) b = b.astype(numx.float_) c = pygsl.interpolation.linear( len(a) ) pygsl.interpolation.linear.init( c , a , b) t_initial = time() for i in range(0,50): print "foo=%.5f" % c.eval(49999.5), t_final = time() print "\nTotal time =", t_final-t_initial ##### The pygsl version takes on my machine 0.0004 s The scipy version takes 0.005 s. Is there some means to improve the scipy version? Thanks Julien From roger.herikstad at gmail.com Tue Aug 26 02:03:36 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Tue, 26 Aug 2008 14:03:36 +0800 Subject: [SciPy-user] Conditionally adding items to a list Message-ID: Hi all, I have a problem that I was wondering if anyone could come up with a good solution for. I basically have two lists of numbers and I want to add elements from one list to the other as long as the difference between the added element and all elements already in that list exceeds a certain threshold. The code I came up with is map(times1.append,ifilter(lambda(x): numpy.abs(x-numpy.array(times1)).min()>1000, times2)) but it quickly slows down if times2 becomes sufficiently long. I need to be able to do this with lists of 100,000++ elements. Does anyone know of a quicker way of doing this? Thanks! ~ Roger From dwf at cs.toronto.edu Tue Aug 26 03:29:39 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 26 Aug 2008 03:29:39 -0400 Subject: [SciPy-user] Conditionally adding items to a list In-Reply-To: References: Message-ID:
# # What this does is essentially do all the subtractions in parallel # by broadcasting to a 2D array and then taking the column min's; # this should be faster than a Python loop. candidates_idx = N.abs(times1a[:,None] - times2a).min(axis=0) > 1000 times2a_candidates = times2a[candidates_idx] # Initialize a boolean array to keep track of the things we've added added = N.empty(times2a_candidates.shape, dtype=bool) added[:] = False # We'll always be adding the first one in the candidate list, since # we haven't added any others. The 'if' is just to make sure our code # doesn't error in the event of an empty array. if added.shape[0] > 0: added[0] = True for i in xrange(times2a_candidates.shape[0]): x = times2a_candidates[i] # if x is 1000 away from every element from times2a we've already # added, add it to the list by flagging it True if N.all(N.abs(x - times2a_candidates[added]) > 1000): added[i] = True # Finally, merge the two lists result = N.concatenate((times1a, times2a_candidates[added])) \ ------------------- CUT HERE ------------------- If you like, you can then turn it back into a Python list with result.tolist(). Regards, David From roger.herikstad at gmail.com Tue Aug 26 04:59:44 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Tue, 26 Aug 2008 16:59:44 +0800 Subject: [SciPy-user] Conditionally adding items to a list In-Reply-To: References: Message-ID: Hi, Thanks! That gave med a speed-up of about 5 times over my current code. ~ Roger On Tue, Aug 26, 2008 at 3:29 PM, David Warde-Farley wrote: > > On 26-Aug-08, at 2:03 AM, Roger Herikstad wrote: > >> Hi all, >> I have a prolem that I was wondering if anyone could come up with a >> good solution for. I basically have to lists of number and I want to >> add elements from one list to the other as long as the difference >> between the added element and all elements already in that list >> exceeds a certain threshold. The code I came up with is >> >> map(times1.append,ifilter(lambda(x): >> numpy.abs(x-numpy.array(times1)).min()>1000, times2)) > > > It seems like since this is such a sequentially dependent problem > (whether you add element N of times2 depends on element N-1, N-2... > etc.) it'll be somewhat difficult to gain a significant speedup. What > you can do is avoid unnecessary copies, creating the array over and > over, and perform certain comparisons only once. See below. > > ------------------- CUT HERE ------------------- > import numpy as N > > times1a = N.array(times1) > times2a = N.array(times2) > > # We can eliminate anyone that isn't at least 1000 away from all the > # initial elements of times1a, right off the bat. Because of this > initial > # pass we can avoid repeatedly comparing against all the elements > # in this list and focus on the ones we've just added. > # > # What this does is essentially do all the subtractions in parallel > # by broadcasting to a 2D array and then taking the column min's; > # this should be faster than a Python loop. > > candidates_idx = N.abs(times1a[:,None] - times2a).min(axis=0) > 1000 > times2a_candidates = times2a[candidates_idx] > > # Initialize a boolean array to keep track of the things we've added > added = N.empty(times2a_candidates.shape, dtype=bool) > added[:] = False > > # We'll always be adding the first one in the candidate list, since > # we haven't added any others. The 'if' is just to make sure our code > # doesn't error in the event of an empty array. 
> if added.shape[0] > 0: > added[0] = True > > for i in xrange(times2a_candidates.shape[0]): > x = times2a_candidates[i] > > # if x is 1000 away from every element from times2a we've already > # added, add it to the list by flagging it True > > if N.all(N.abs(x - times2a_candidates[added]) > 1000): > added[i] = True > > # Finally, merge the two lists > result = N.concatenate((times1a, times2a_candidates[added])) > \ > ------------------- CUT HERE ------------------- > > If you like, you can then turn it back into a Python list with > result.tolist(). > > Regards, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bernardo.rocha at meduni-graz.at Tue Aug 26 08:27:59 2008 From: bernardo.rocha at meduni-graz.at (bernardo martins rocha) Date: Tue, 26 Aug 2008 14:27:59 +0200 Subject: [SciPy-user] pyqwt or matplotlib Message-ID: <48B3F6CF.2010600@meduni-graz.at> Hi everybody, I'm starting to write a program with PyQt to visualize some traces from a system of ODEs. I would like to plot one graphic for each variable inside this PyQt application...and I'm wondering which one is the best for it: matplotlib or pyqwt? I will read some files using PyTables and then I'll plot everything and some of these files are very big, so I need something fast and good. I've been using matplotlib for some small programs and it's very nice, powerful and beautiful. I've already embedded some matplotlib plots inside a PyQt application. But I have the impression that pyqwt is faster than matplotlib. Is it true? Is there another library for plotting that would do the job? Thanks! Bernardo M. Rocha From oliphant at enthought.com Tue Aug 26 11:06:35 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 26 Aug 2008 10:06:35 -0500 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <48B3F6CF.2010600@meduni-graz.at> References: <48B3F6CF.2010600@meduni-graz.at> Message-ID: <48B41BFB.40907@enthought.com> bernardo martins rocha wrote: > I've been using matplotlib for some small programs and it's very nice, > powerful and beautiful. I've already embedded some matplotlib plots > inside a PyQt application. But I have the impression that pyqwt is > faster than matplotlib. Is it true? Is there another library for > plotting that would do the job? > Chaco is very speedy and will definitely do the job, though the learning curve is daunting at first. The recent tutorial by Peter Wang at SciPy 2008 would really help, however, You might also check out Veusz -Travis O. > Thanks! > Bernardo M. Rocha > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From bsouthey at gmail.com Tue Aug 26 11:15:20 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 26 Aug 2008 10:15:20 -0500 Subject: [SciPy-user] Conditionally adding items to a list In-Reply-To: References: Message-ID: <48B41E08.2020100@gmail.com> Roger Herikstad wrote: > Hi all, > I have a prolem that I was wondering if anyone could come up with a > good solution for. I basically have to lists of number and I want to > add elements from one list to the other as long as the difference > between the added element and all elements already in that list > exceeds a certain threshold. Can you please explain with an example which 'difference' you want? Do you mean the minimum, maximum, sum, average etc. of all elements in that list? 
Or even that the sum of differences is smaller than a certain threshold? Alternatively, does the threshold vary as elements are added? You really need to be careful here because the criterion will change depending on the order that the elements are added, and not all elements within the final list will meet the criterion used to create it (as well as elements being excluded that should have been in the list if the list was sorted differently). If it just depends on the first list, then you can use the 'where' function or boolean indexing to identify and extract the elements, such as t2[numpy.abs(t2>(min(t1)+1000))], which can then be appended to the times1 list (note times1 does not need to be converted to an array unless you actually need it as an array). I do recommend exploring the example web page on this as it has very informative examples of how the different approaches actually work. If it depends on both lists, hopefully there is a conditional approach where you can apply your criterion to one list first and then the other, one that does not depend on the order entered:

threshold=min(numpy.min(t1),numpy.min(t2))+1000
t2[numpy.abs(t2>threshold)]

Bruce > The code I came up with is > > map(times1.append,ifilter(lambda(x): > numpy.abs(x-numpy.array(times1)).min()>1000, times2)) > > but it quickly slows down if times2 becomes sufficiently long. I need > to be able to do this with lists of 100,000++ elements. Does anyone > know of a quicker way of doing this? > > Thanks! > > ~ Roger > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From roger.herikstad at gmail.com Tue Aug 26 12:31:59 2008 From: roger.herikstad at gmail.com (Roger Herikstad) Date: Wed, 27 Aug 2008 00:31:59 +0800 Subject: [SciPy-user] Conditionally adding items to a list In-Reply-To: <48B41E08.2020100@gmail.com> References: <48B41E08.2020100@gmail.com> Message-ID: Hi, My criterion is that no two elements can be closer than a given threshold, in this case 1000. I am aware that, depending on the sorting of the lists, the order would matter, and that's why I need to check each added element against all previously added elements, to make sure no violations occur. I found David's code to be substantially faster than what I wrote, and the only thing I need to correct for is the fact that the difference matrix could exceed my available memory. That being said, I will definitely look into the other examples you mentioned. Thanks! ~ Roger On Tue, Aug 26, 2008 at 11:15 PM, Bruce Southey wrote: > Roger Herikstad wrote: >> Hi all, >> I have a problem that I was wondering if anyone could come up with a >> good solution for. I basically have two lists of numbers and I want to >> add elements from one list to the other as long as the difference >> between the added element and all elements already in that list >> exceeds a certain threshold. > Can you please explain with an example which 'difference' you want? Do > you mean the minimum, maximum, sum, average etc. of all elements in that > list? Or even that the sum of differences is smaller than a certain > threshold? > > Alternatively, does the threshold vary as elements are added? You really > need to be careful here because the criterion will change depending on > the order that the elements are added, and not all elements within > the final list will meet the criterion used to create it (as well as elements > excluded that should have been in the list if the list was sorted > differently).
> > If it just depends on the first list, then you can use the 'where' > function or boolean indexing to identify and extract the elements, such > as t2[numpy.abs(t2>(min(t1)+1000))], which can then be appended to the > times1 list (note times1 does not need to be converted to an array > unless you actually need it as an array). I do recommend exploring the > example web page on this as it has very informative examples of how the > different approaches actually work. > > If it depends on both lists, hopefully there is a conditional approach > where you can apply your criterion to one list first and then the other, > one that does not depend on the order entered: > threshold=min(numpy.min(t1),numpy.min(t2))+1000 > t2[numpy.abs(t2>threshold)] > > > Bruce >> The code I came up with is >> >> map(times1.append,ifilter(lambda(x): >> numpy.abs(x-numpy.array(times1)).min()>1000, times2)) >> >> but it quickly slows down if times2 becomes sufficiently long. I need >> to be able to do this with lists of 100,000++ elements. Does anyone >> know of a quicker way of doing this? >> >> Thanks! >> >> ~ Roger >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Tue Aug 26 12:37:48 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 26 Aug 2008 12:37:48 -0400 Subject: [SciPy-user] Conditionally adding items to a list In-Reply-To: References: Message-ID: 2008/8/26 Roger Herikstad : > I have a problem that I was wondering if anyone could come up with a > good solution for. I basically have two lists of numbers and I want to > add elements from one list to the other as long as the difference > between the added element and all elements already in that list > exceeds a certain threshold. The code I came up with is > > map(times1.append,ifilter(lambda(x): > numpy.abs(x-numpy.array(times1)).min()>1000, times2)) > > but it quickly slows down if times2 becomes sufficiently long. I need > to be able to do this with lists of 100,000++ elements. Does anyone > know of a quicker way of doing this?

As others have said, you should think carefully about whether this is what you actually want: the result you get will depend on the order of the incoming items:

[500,1000,1500] -> [500,1500]
[1000,500,1500] -> [1000]

But if it is what you want, I would worry more about the fact that your algorithm is O(n**2) than about the fact that list operations are (supposedly) slow. Here's an O(n) way to do what you want:

import math
import numpy as np

def trim(input, spacing=1000):
    r = {}
    for n in input:
        i = math.floor(n/float(spacing))
        # a previously kept value closer than `spacing` can only live
        # in this bin or in one of the two neighbouring bins
        if i in r:
            continue
        if i-1 in r and n-r[i-1] < spacing:
            continue
        if i+1 in r and r[i+1]-n < spacing:
            continue
        r[i] = n
    return r.values()

And a variant that sorts first, so the result no longer depends on the input order:

def trim2(input, spacing=1000):
    input = np.sort(input)
    r = [input[0]]
    for n in input[1:]:
        if n-r[-1] >= spacing:
            r.append(n)
    return r

If there are many elements to be discarded, this may be faster:

def trim3_gen(input, spacing=1000):
    input = np.sort(input)
    i = input[0]
    while True:
        yield i
        try:
            i = input[np.searchsorted(input,i+spacing)]
        except IndexError:
            break

(note that unlike the others it's a generator, for no really compelling reason.)
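For instance, on made-up data (100,000 random times, as in the original post):

import numpy as np
times = np.random.randint(0, 10**7, size=100000)
kept = list(trim3_gen(times, spacing=1000))
# survivors come out sorted, and consecutive ones are >= 1000 apart
assert np.all(np.diff(kept) >= 1000)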
Good luck, Anne From contact at pythonxy.com Tue Aug 26 16:12:19 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Tue, 26 Aug 2008 22:12:19 +0200 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: References: Message-ID: <48B463A3.3060204@pythonxy.com> > > Message: 4 > Date: Tue, 26 Aug 2008 14:27:59 +0200 > From: bernardo martins rocha > Subject: [SciPy-user] pyqwt or matplotlib > To: scipy-user at scipy.org > Message-ID: <48B3F6CF.2010600 at meduni-graz.at> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > Hi everybody, > > I'm starting to write a program with PyQt to visualize some traces from > a system of ODEs. I would like to plot one graphic for each variable > inside this PyQt application...and I'm wondering which one is the best > for it: matplotlib or pyqwt? I will read some files using PyTables and > then I'll plot everything and some of these files are very big, so I > need something fast and good. > > I've been using matplotlib for some small programs and it's very nice, > powerful and beautiful. I've already embedded some matplotlib plots > inside a PyQt application. But I have the impression that pyqwt is > faster than matplotlib. Is it true? Is there another library for > plotting that would do the job? > > Thanks! > Bernardo M. Rocha Hi, That is not an impression: PyQwt is much faster than matplotlib and is often used precisely to analyse huge data sets (here is an example: http://pyqwt.sourceforge.net/images/meq.pdf -- simple plotting, but very effective). On the other hand, as you may know, matplotlib has *a lot* more features, but if you don't need them... Pierre From bryan at cole.uklinux.net Tue Aug 26 16:54:50 2008 From: bryan at cole.uklinux.net (Bryan Cole) Date: Tue, 26 Aug 2008 21:54:50 +0100 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <48B463A3.3060204@pythonxy.com> References: <48B463A3.3060204@pythonxy.com> Message-ID: <1219784090.11968.44.camel@pc2.cole.uklinux.net> On Tue, 2008-08-26 at 22:12 +0200, Pierre Raybaut wrote: > > > > Message: 4 > > Date: Tue, 26 Aug 2008 14:27:59 +0200 > > From: bernardo martins rocha > > Subject: [SciPy-user] pyqwt or matplotlib > > To: scipy-user at scipy.org > > Message-ID: <48B3F6CF.2010600 at meduni-graz.at> > > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > > > Hi everybody, > > > > I'm starting to write a program with PyQt to visualize some traces from > > a system of ODEs. I would like to plot one graphic for each variable > > inside this PyQt application...and I'm wondering which one is the best > > for it: matplotlib or pyqwt? I will read some files using PyTables and > > then I'll plot everything and some of these files are very big, so I > > need something fast and good. > > > > I've been using matplotlib for some small programs and it's very nice, > > powerful and beautiful. I've already embedded some matplotlib plots > > inside a PyQt application. But I have the impression that pyqwt is > > faster than matplotlib. Is it true? Is there another library for > > plotting that would do the job? > > > > Thanks! > > Bernardo M. Rocha > Hi, > > That is not an impression: PyQwt is much faster than matplotlib and is > often used precisely to analyse huge data sets (here is an example: > http://pyqwt.sourceforge.net/images/meq.pdf -- simple plotting, but very > effective). > On the other hand, as you may know, matplotlib has *a lot* more > features, but if you don't need them... 
Which is faster depends critically on whether you need antialiased drawing or not, and also on the content of the plots. I've been benchmarking the rendering speeds for large polylines, for an in-house plotting widget, using a variety of libraries. Matplotlib (mostly) uses the Antigrain (AGG) antialiased rendering library. For AA-plots with moderate-to-large numbers of vertices (say >1000), this seems to be the fastest rendering method. Basically, none of the main native drawing APIs (cairo, Qt, GDI+, Quartz) yet use hardware-acceleration for diagonal line rendering (they all focus on efficient compositing and text rendering), so an optimised software-rendering library like AGG wins (by about a factor of 3 on the few machines I tested it on). One exception here is if Qwt can use the QGLWidget (in Qt4), which renders using OpenGL (I'm not 100% sure if it can, since I've not used Qwt much). If your hardware supports antialiasing with OpenGL, then this can lift the performance well above AGG. The rendering speed and quality are rather variable, however, depending on hardware. Other factors could also interfere with OpenGL performance: if you need many plot windows, having an OpenGL context for each plot could also kill performance. Another exception may also be for scatter plots with a symbol at each point. The newer drawing APIs place a lot of emphasis on glyph rendering performance (for text rendering). If the points are rendered using the glyph-caching facilities of cairo, for example, this may beat AGG. I haven't tested this yet, however. I haven't checked how mpl does symbol rendering, so I'm not sure if this changes the mpl-vs-qwt question. On the other hand, if you don't need antialiasing, rendering can go *much* faster (by a factor of 10 or more) using the native APIs. Using OpenGL can increase non-AA rendering speed even further. However, I don't think you see this speed improvement in matplotlib, with the non-Agg backends. For example, drawing polylines in wxPython from arrays of data is limited by the slow speed of sequence-iteration over arrays. When I pre-convert all my data-arrays to lists, the wxDC:drawLines calls go 5x faster. A C-function to pass the array data directly to the wxDC increases speed further still. Summary: if you want antialiased plots, matplotlib should be fastest (except if Qwt can use antialiased OpenGL). For aliased plots, Qwt will be fastest, since rendering will no longer be the bottleneck and the 100% C++ implementation of Qwt will pay off. An OpenGL backend to matplotlib would be sweet... This is an interesting topic for me (as you can probably tell...) BC > > Pierre From gael.varoquaux at normalesup.org Tue Aug 26 16:59:48 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 26 Aug 2008 22:59:48 +0200 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <1219784090.11968.44.camel@pc2.cole.uklinux.net> References: <48B463A3.3060204@pythonxy.com> <1219784090.11968.44.camel@pc2.cole.uklinux.net> Message-ID: <20080826205948.GD14865@phare.normalesup.org> On Tue, Aug 26, 2008 at 09:54:50PM +0100, Bryan Cole wrote: > Which is faster depends critically on whether you need antialiased > drawing or not, and also on the content of the plots. I've been benchmarking > the rendering speeds for large polylines, for an in-house plotting > widget, using a variety of libraries.
Actually, I think the issue can very well be not only the speed of a draw, which is very much based on your rendering engine, as you point out, but also how clever the library is, in case of a redraw, at minimizing the operations. The latter is also very important in the case of an interactive application. Chaco does a lot of work to minimize the redraw cost. Gaël From jdh2358 at gmail.com Tue Aug 26 17:30:18 2008 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 26 Aug 2008 16:30:18 -0500 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <1219784090.11968.44.camel@pc2.cole.uklinux.net> References: <48B463A3.3060204@pythonxy.com> <1219784090.11968.44.camel@pc2.cole.uklinux.net> Message-ID: <88e473830808261430u61435ecbk45c75dc93abb24f2@mail.gmail.com> On Tue, Aug 26, 2008 at 3:54 PM, Bryan Cole wrote: > Another exception may also be for scatter plots with a symbol at each > point. The newer drawing APIs place a lot of emphasis on glyph rendering > performance (for text rendering). If the points are rendered using the > glyph-caching facilities of cairo, for example, this may beat AGG. I > haven't tested this yet, however. I haven't checked how mpl does symbol > rendering, so I'm not sure if this changes the mpl-vs-qwt question. If the markers are homogeneous (same size, same color), eg from a plot command like plot(x, y, 'o'), matplotlib agg does use cached glyph rendering and is fast. For non-homogeneous markers, eg in a 'scatter' where the color and/or size vary with each marker, matplotlib does not use any caching and is pretty slow. JDH From zachary.pincus at yale.edu Tue Aug 26 20:25:50 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Tue, 26 Aug 2008 20:25:50 -0400 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <48B3F6CF.2010600@meduni-graz.at> References: <48B3F6CF.2010600@meduni-graz.at> Message-ID: <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> > I've been using matplotlib for some small programs and it's very nice, > powerful and beautiful. I've already embedded some matplotlib plots > inside a PyQt application. But I have the impression that pyqwt is > faster than matplotlib. Is it true? Is there another library for > plotting that would do the job? Another option, depending on how much plumbing you're interested in, is to write a custom tool with OpenGL... I've been using Pyglet for some rather-specialized data display needs (blit live video from a microscope + plot derived measures on top of the video, using the mouse to pan and zoom), and it's pretty nice. Basically, Pyglet is a (pretty simple) pure-python, ctypes-based, multiplatform interface to OpenGL, windowing, and mouse/keyboard IO. It's quite hackable, too -- I rigged up a very simple system to run pyglet windows in a background thread, so I could control the microscope from an interactive python interpreter, while still being able to programmatically interact with pyglet window objects. (Happy to share this code with anyone who desires. It's much cleaner, IMO, than the gyrations that ipython has to go through to support nonblocking QT, Tk, etc. windows. This is because the pyglet mainloop is in python, and is easy to subclass and otherwise mess with.) The downside is of course that OpenGL isn't a plotting library. The upside is that if you have a well-defined plotting task, and you want full aesthetic control and also high speed, you can get that with not too much work.
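For a flavor of how little code a basic pyglet window takes, here's a toy sketch (not my microscope code -- the window size and label text are made up):

import pyglet

window = pyglet.window.Window(width=640, height=480)
label = pyglet.text.Label('derived measure: 42', x=20, y=20)

@window.event
def on_draw():
    # repaint: clear the GL context, then draw the overlay
    window.clear()
    label.draw()

pyglet.app.run()  # the pure-python main loop mentioned above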
Just a thought, Zach From fperez.net at gmail.com Tue Aug 26 21:49:21 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 26 Aug 2008 18:49:21 -0700 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> References: <48B3F6CF.2010600@meduni-graz.at> <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> Message-ID: On Tue, Aug 26, 2008 at 5:25 PM, Zachary Pincus wrote: > It's quite hackable, too -- I rigged up a very simple system to run > pyglet windows in a background thread, so I could control the > microscope from an interactive python interpreter, while still being > able to programmatically interact with pyglet window objects. (Happy > to share this code with anyone who desires. It's much cleaner, IMO, > than the gyrations that ipython has to go through to support > nonblocking QT, Tk, etc. windows. This is because the pyglet mainloop > is in python, and is easy to subclass and otherwise mess with.) Does your code work as-is inside ipython? Would you want to contribute it to ipython? We'd love to ship an out-of-the-box pyglet shell, just let us know. Cheers, f From anand.prabhakar.patil at gmail.com Wed Aug 27 06:28:01 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Wed, 27 Aug 2008 11:28:01 +0100 Subject: [SciPy-user] SciPy, MPI and OpenMP In-Reply-To: <3d375d730808181545x2f978002g8c78bb7da8c2d5fe@mail.gmail.com> References: <3d375d730808181545x2f978002g8c78bb7da8c2d5fe@mail.gmail.com> Message-ID: <2bc7a5a50808270328pe5048a1gbaff87071be39535@mail.gmail.com> Lorenzo, Sorry for posting to such an old thread, but I'm new to multithreaded programming and recently struggled with Python+OpenMP, and thought you might like to hear about my experience. Basically I'm MUCH better off now that I've stopped using OpenMP, and instead call a serial f2py subroutine from several threads created in Python, as in the 'handythread' example on the SciPy cookbook. The reason is simply that OpenMP is supported by newer versions of gcc than those that were used to compile most binary distributions of Python, so if you use OpenMP you can get compatibility problems. I had to rebuild Python from source on every machine I used for the OpenMP stuff. Surely that's at least partly because of my lack of skill with gcc, but I know that at least four people have had similar problems, and the threading-from-Python route is easier to program anyway. It's ended up being faster for me despite the overhead of spawning Python thread objects, because I can use Python's superior flexibility to safely pare each thread's work down to the bare minimum. If you go this route, your serial f2py subroutines just need to have the line 'cf2py threadsafe' in them. That will make them release and reacquire the GIL as appropriate. Anand On Mon, Aug 18, 2008 at 11:45 PM, Robert Kern wrote: > On Mon, Aug 18, 2008 at 10:00, Lorenzo Isella > wrote: > > Dear All, > > I have recently attended a crash course on MPI and OpenMP. The > > examples always involved C or Fortran code. > > Now, I have a thought: if working on a single processor, I hardly need > > to use pure C or pure Fortran. I usually write a Fortran code for the > > bottlenecks and compile it with f2py to create a python module I then > > import. > > Hence two questions: > > (1) Can I do something similar with many processors? E.g.: write a > > Python code, embed some compiled Fortran code which is supposed to run > > on many processors, get the results and come back to Python.
> >
> > Python--->Fortran on many processors--->back to Python.
> > (2) Is it also possible to directly parallelize a Python code? I heard
> > about thread locking in Python.
>
> There is a global interpreter lock (GIL) when touching Python data > structures. If you handle the threads entirely in the Fortran code and > never call back into Python until you are finished with the threads, > this should not be an issue for you.
>
> In bad ASCII art:
>
> Python  |Fortran /------\   | Python
> --------|-------<-------->--|--------
>         |        \------/   |
>
> > I did some online research, there seem to be a lot of projects trying
> > to combine Python and MPI/OpenMP, but many look rather "experimental".
> > In particular, of course, I would like to hear about SciPy and
> > parallel computing.
>
> Most of the MPI wrappers these days are fairly mature. I think some of > the OpenMP work is still pretty new, though. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From anand.prabhakar.patil at gmail.com Wed Aug 27 06:57:36 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Wed, 27 Aug 2008 11:57:36 +0100 Subject: [SciPy-user] IFFT when FT is known on log scale Message-ID: <2bc7a5a50808270357y285501bbxd1c930eaad5ba34e@mail.gmail.com> Hi all, I've got a Fourier transform stored for frequency values that are evenly spaced on the log scale, and I need to inverse Fourier transform it. I'm wondering whether there's a simple trick that would let me use the inverse fast Fourier transform. I need the log scale because the tails of the FT are long, but I need a detailed representation near the origin. The domain spans many decades, so I can't just linearize. Any tips? The package NFFT3, http://www-user.tu-chemnitz.de/~potts/nfft/ looks great, but it's a pretty heavy weapon for this little problem. Thanks, Anand From bernardo.rocha at meduni-graz.at Wed Aug 27 08:04:37 2008 From: bernardo.rocha at meduni-graz.at (bernardo martins rocha) Date: Wed, 27 Aug 2008 14:04:37 +0200 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <48B463A3.3060204@pythonxy.com> References: <48B463A3.3060204@pythonxy.com> Message-ID: <48B542D5.9000108@meduni-graz.at> Pierre Raybaut wrote: >> >> Message: 4 >> Date: Tue, 26 Aug 2008 14:27:59 +0200 >> From: bernardo martins rocha >> Subject: [SciPy-user] pyqwt or matplotlib >> To: scipy-user at scipy.org >> Message-ID: <48B3F6CF.2010600 at meduni-graz.at> >> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >> >> Hi everybody, >> >> I'm starting to write a program with PyQt to visualize some traces >> from a system of ODEs. I would like to plot one graphic for each >> variable inside this PyQt application...and I'm wondering which one >> is the best for it: matplotlib or pyqwt? I will read some files using >> PyTables and then I'll plot everything and some of these files are >> very big, so I need something fast and good. >> >> I've been using matplotlib for some small programs and it's very >> nice, powerful and beautiful. I've already embedded some matplotlib
>> plots inside a PyQt application. But I have the impression that pyqwt >> is faster than matplotlib. Is it true? Is there another library for >> plotting that would do the job? >> >> Thanks! >> Bernardo M. Rocha > Hi, > That is not an impression: PyQwt is much faster than matplotlib and is > often used precisely to analyse huge data sets (here is an example: > http://pyqwt.sourceforge.net/images/meq.pdf -- simple plotting, but > very effective). > On the other hand, as you may know, matplotlib has *a lot* more > features, but if you don't need them... > > Pierre Hi Guys, What I'm trying to do is something like this code: http://www.krugle.org/examples/p-ytHFYREhpD3udCb8/embedding_in_qt4.py But... with a lot of plot windows, with two graphs in each (comparing different solutions), and for huge arrays... I'll try to do it initially with matplotlib, and then, if I figure out that it is too slow, I'll move to PyQwt. I had a look at Chaco and it looks really nice, but I don't know if it is possible to embed it in a PyQt application like the example above. Is it possible? Are there some examples/tutorials available? Thanks a lot! Bernardo M. Rocha From zachary.pincus at yale.edu Wed Aug 27 12:00:01 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 27 Aug 2008 12:00:01 -0400 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: References: <48B3F6CF.2010600@meduni-graz.at> <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> Message-ID: <78AC8363-523E-407B-8FF2-75E8B155783C@yale.edu> Hi Fernando, >> It's quite hackable, too -- I rigged up a very simple system to run >> pyglet windows in a background thread, so I could control the >> microscope from an interactive python interpreter, while still being >> able to programmatically interact with pyglet window objects. (Happy >> to share this code with anyone who desires. It's much cleaner, IMO, >> than the gyrations that ipython has to go through to support >> nonblocking QT, Tk, etc. windows. This is because the pyglet mainloop >> is in python, and is easy to subclass and otherwise mess with.) > > Does your code work as-is inside ipython? Would you want to > contribute it to ipython? We'd love to ship an > out-of-the-box pyglet shell, just let us know. I'd be happy to provide this code. There are a few caveats to its use, though -- perhaps you could help me come up with an easier interface. Out of the box, pyglet ships with a main-loop that uses platform-specific code to sleep until a GUI event happens, or after a certain time elapses (to enforce a user-specified minimum framerate). Every time the loop wakes up, it sends repaint (in pyglet: on_draw) events to all of the windows. Now, Pyglet works fine if all the calls to it are made from one thread only. So what I do is run a subclass of the default event loop in a background thread. This subclassed event loop checks a "message queue" (basically, a list of callback functions) every time it wakes up, and if the queue is non-empty, calls a few of the callbacks. This way, code that calls pyglet functions can be added to the message queue so that it will be called in the context of the pyglet thread. (I've added a few bells and whistles, like proxy objects that "look" like pyglet windows, but route method calls and getattr/setattr through the event loop, so that interaction is seamless.) Here comes the caveat, though: to keep latency down for tending the message queue, the main loop needs to wake up frequently (at least 20 Hz).
However, sending repaint events to every window that often is pretty inefficient in most cases. So I elected to not send the repaint events by default. Instead, a pyglet window needs to call its on_draw method after every event that requires a redraw (e.g. a clock tick, a GUI interaction). This sort of event loop is more efficient than the default pyglet loop in general, but unfortunately, the coding style is a bit different. As such, my code doesn't work with any old pyglet window class out of the box -- some very minor changes need to be made. I could perhaps fix things so this isn't necessary -- figure out a way for the main-loop to distinguish between being awoken to tend the message queue, and being awoken on GUI events or for "minimum framerate" reasons. But perhaps the issue is un-fixable... Another caveat is that this approach is almost completely backwards to that taken by the rest of the interactive windowing code in ipython. (That code essentially feeds an embedded python interpreter, which runs as a callback from the GUI mainloop, line-by-line input from stdin.) As such, my code might be a bit out-of-place and harder to maintain. I suspect that the original ipython approach will work fine for pyglet too, and perhaps without the above caveat; however it might be a bit more processor-intensive. Zach From tjhnson at gmail.com Wed Aug 27 12:26:44 2008 From: tjhnson at gmail.com (T J) Date: Wed, 27 Aug 2008 09:26:44 -0700 Subject: [SciPy-user] Revisiting Log Arrays Message-ID: <48B58044.4010204@gmail.com> Hi, A while back there was a discussion on including support for working with log arrays: http://www.mail-archive.com/numpy-discussion at scipy.org/msg08840.html It looked like some major steps were taken toward this goal: http://www.mail-archive.com/numpy-discussion at scipy.org/msg08982.html but unfortunately, I didn't follow up from there. 1) The end of the second discussion made it seem like scipy.special was a good place for this. Has this been committed? I'd love to just do a 'svn update'. 2) If it has not been committed yet, I would still like to try it. What would I need to do to get this working? In particular, it looks like the patch was for numpy rather than scipy.special. 3) I'm curious how this works. Suppose I have a log array and I multiply it (elementwise) by a normal array. Typically, 0 log 0 is defined to be 0. Would this be handled properly? Or would I need to post-process the array (that would be slow...)? 4) Eventually, it would be really nice to have logdot and the other functions mentioned in the second discussion. I'll do whatever I can to help move this along, but getting this committed and out there seems like a good next step. Thanks for the hard work already, T J From fperez.net at gmail.com Wed Aug 27 14:53:12 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 27 Aug 2008 11:53:12 -0700 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <78AC8363-523E-407B-8FF2-75E8B155783C@yale.edu> References: <48B3F6CF.2010600@meduni-graz.at> <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> <78AC8363-523E-407B-8FF2-75E8B155783C@yale.edu> Message-ID: Hi Zach, On Wed, Aug 27, 2008 at 9:00 AM, Zachary Pincus wrote: > Out of the box, pyglet ships with a main-loop that uses platform-specific > code to sleep until a GUI event happens, or after a certain > time elapses (to enforce a user-specified minimum framerate). Every > time the loop wakes up, it sends repaint (in pyglet: on_draw) events > to all of the windows. [...]
Thanks for the detailed explanation. It sounds to me like there are still a few questions on the approach, so I'll leave the decision up to you. Just so you know, if at any point you feel you'd like to have this be part of ipython, it's very simple: put up your own branch of ipython in launchpad and we'll review it, give you feedback, etc, until it's ready for inclusion. Several of us are already keeping our ipython branches publicly visible and permanently marked for merge, so it's easy to compare them against the trunk. For example: - https://code.launchpad.net/~fdo.perez/ipython/trunk-dev: my main working copy of trunk for all I do. - https://code.launchpad.net/~laurent-dufrechou/ipython/trunk-dev: Laurent's - https://code.launchpad.net/~robert-kern/ipython/contexts: Robert Kern's, but this one is focused on a specific feature (context management). This allows individual developers to expose for review both their 'main' copy of trunk and any feature-specific branches they may want to create for public comment and review before merging. In addition, we have team branches where everyone can directly commit (basically the equivalent of the svn repo with commit privileges). I think we're finally finding a good workflow for ipython that takes advantage of Launchpad's features to benefit the project. Cheers, f From loniedavid at gmail.com Wed Aug 27 22:09:06 2008 From: loniedavid at gmail.com (David Lonie) Date: Wed, 27 Aug 2008 21:09:06 -0500 Subject: [SciPy-user] Curve fitting and LaTeX output Message-ID: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> I'm using the scipy package to analyze data from my research. I'm running into a couple of problems I'd like some help with. 2 questions: 1) Curve fitting -- I have found linregress for linear functions, polyfit for polynomial fits, and a description of using lstsq to fit data that linearly depends on x. Is there a way to fit a curve of the form, for example, y = a + b^x? I don't mind a RTFM response to this, just please let me know which FM to R, because I can't find anything :) 2) LaTeX output -- I remember using R for stats before, but I'd like to avoid going back to it if possible. IIRC it had the ability to output latex tables, etc. that could be inserted into a document via \input{} (I think). Is there a python module that provides a similar output interface? Thanks in advance, Dave From dg.numpy at thesamovar.net Wed Aug 27 22:03:45 2008 From: dg.numpy at thesamovar.net (Dan Goodman) Date: Thu, 28 Aug 2008 02:03:45 +0000 (UTC) Subject: [SciPy-user] dgemm overwrite_c bug? Message-ID: Hi all, I'm interested in doing a matrix multiplication in-place, and because numpy's dot function doesn't do that I had to go looking. After much pain I finally found the linalg.blas etc. stuff in scipy. However, I think that the dgemm function is ignoring the overwrite_c parameter, e.g.:

from numpy import array
from scipy import linalg
gemm, = linalg.blas.get_blas_funcs(['gemm'])
x = array([[1.,2.],[3.,4.]])
y = array([[5.,6.],[7.,8.]])
c = array([[8.,9.],[10.,11.]])
print gemm(1.0,x,y,1.,c,0,0,overwrite_c=1)
print
print c

gives the output

[[ 27. 31.]
 [ 53. 61.]]

[[ 8. 9.]
 [ 10. 11.]]

but the two matrices ought to be the same... Or have I misunderstood something? There seems to be a lack of documentation for these functions, so that's quite possible. Is there any way of getting access to an in-place matrix multiplication like this?
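One guess, untested: the f2py-generated wrappers may only honour overwrite_c when c is already Fortran-ordered (and of the matching dtype), silently working on a copy otherwise. A sketch to test that guess:

from numpy import array, asfortranarray
from scipy import linalg

gemm, = linalg.blas.get_blas_funcs(['gemm'])
x = array([[1.,2.],[3.,4.]])
y = array([[5.,6.],[7.,8.]])
# hypothesis: a Fortran-ordered c can be updated in place
c = asfortranarray([[8.,9.],[10.,11.]])
r = gemm(1.0, x, y, 1., c, overwrite_c=1)
print c        # if the guess is right, c now matches r
print r is c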
Many thanks, Dan Goodman From ryanlists at gmail.com Wed Aug 27 22:27:05 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 27 Aug 2008 21:27:05 -0500 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> Message-ID: You can use optimize.fmin to fit a curve of an arbitrary form. You have to be careful to specify the output of your cost function correctly - you probably want the sum of the squared error. The first input to the cost function must be a vector of unknown coefficients. Something like

import scipy.optimize

y = # some data here
x = # some other array here

def mycost(c):
    y_model = c[0] + c[1]**x
    error_vect = y - y_model
    return sum(error_vect**2)

c_initial_guess = [123412.1234, 1203740.0123984]
c_final = scipy.optimize.fmin(mycost, c_initial_guess)

I didn't test that code, but am 90% confident in it. I have written my own latex output code for lots of stuff, but it is messy. I don't know what exists that is well conceived :) Ryan On Wed, Aug 27, 2008 at 9:09 PM, David Lonie wrote: > I'm using the scipy package to analyze data from my research. I'm > running into a couple of problems I'd like some help with. 2 questions: > > 1) Curve fitting -- I have found linregress for linear functions, > polyfit for polynomial fits, and a description of using lstsq to fit > data that linearly depends on x. Is there a way to fit a curve of the > form, for example, y = a + b^x? I don't mind a RTFM response to this, > just please let me know which FM to R, because I can't find anything > :) > > 2) LaTeX output -- I remember using R for stats before, but I'd like > to avoid going back to it if possible. IIRC it had the ability to > output latex tables, etc. that could be inserted into a document via > \input{} (I think). Is there a python module that provides a similar > output interface? > > Thanks in advance, > > Dave > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From aisaac at american.edu Wed Aug 27 22:38:09 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 27 Aug 2008 22:38:09 -0400 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> Message-ID: <48B60F91.70506@american.edu> David Lonie wrote: > 2) LaTeX output -- I remember using R for stats before, but I'd like > to avoid going back to it if possible. IIRC it had the ability to > output latex tables, etc. that could be inserted into a document via > \input{} (I think). Is there a python module that provides a similar > output interface? Maybe econpy's SimpleTable would be useful to you? (License: MIT) Yes, you can then \input the tables into a LaTeX document. Alan Isaac From fusion_energy at hotmail.com Thu Aug 28 02:26:00 2008 From: fusion_energy at hotmail.com (F. B.) Date: Thu, 28 Aug 2008 06:26:00 +0000 Subject: [SciPy-user] KDE 2D, problem on axis and pylab.scatter Message-ID: Hello, I have a problem with this code.
-------------------------------------------------------------------
import numpy as np
from numpy import array
from numpy import log
import scipy.stats as stats
from matplotlib.pyplot import imshow
import pylab as pl
import copy

f = open("datain.txt","r")
xyval = f.readlines()
xvect = []
yvect = []

for i in range(len(xyval)):
    if (i < (len(xyval)/2)):
        yvect.append(float(xyval[i]))
    else:
        xvect.append(log(float(xyval[i])))

rvs = np.r_[[xvect],[yvect]]

#yvalues=array(yvect)
#xvalues=array(xvect)
#xmin=xvalues.min()
#xmax=xvalues.max()
#ymin=yvalues.min()
#ymax=yvalues.max()
#stepx=xmax/256
#stepy=ymax/256

# NB: this print uses the variables from the commented-out block above
#print "xmin: "+str(xmin)+" xmax: "+str(xmax)+" stepx: "+str(stepx)+" ymin: "+str(ymin)+" ymax: "+str(ymax)+" stepy: "+str(stepy)

#x_flat=np.arange(0,xmax,stepx)
#y_flat=np.arange(0,ymax,stepy)
x_flat = np.arange(-25,-15,0.1)
y_flat = np.arange(0,900,1)

kde = stats.kde.gaussian_kde(rvs)  #rvs.T = [[....],[....]]
x,y = np.meshgrid(x_flat,y_flat)
grid_coords = np.append(x.reshape(-1,1),y.reshape(-1,1),axis=1)
z = kde(grid_coords.T)
z = z.reshape(len(y_flat),len(x_flat))

pl.hold(True)
contplot = pl.contourf(z,20)
ax = copy.deepcopy(pl.axis())
pl.scatter(xvect,yvect,c='y')
pl.axis(ax)
print pl.axis()
pl.show()
-------------------------------------------------------------------

I need to create a kernel density distribution of my data, but the result that I obtain is strange. From readlines I read the x values, which are in a range from 10**-7 to 10**-12, but for them (as you can see) I use the logarithm, and then they go from -25 to -15. My y values are in the range 50 to 1000, more or less. There are some problems, first of all the range of the values. Using x_flat=np.arange(-25,-15,0.1) I have a problem with the values of the X axis. In the graph that I obtain, the minimum value in x is zero, and the maximum one is one hundred, but it is supposed to be between -25 and -15 (you can see that the length of x_flat is 100). The other problem that I have is the scatter plot. I should plot my data over the KDE, but I cannot do that and I don't know why. However, the shape of the KDE is what I expect. If someone could kindly help me I would really appreciate it. Thanks and best regards F. B. From s.mientki at ru.nl Thu Aug 28 02:42:14 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 28 Aug 2008 08:42:14 +0200 Subject: [SciPy-user] possible to get INF in divide by zero ? Message-ID: <48B648C6.4070102@ru.nl> hello, I wonder if it's possible to get the MatLab behavior: divide by zero gives infinite (INF)? I even wonder if something like INF exists in python / scipy / numpy. Can someone clarify this? thanks, Stef Mientki From vincefn at users.sourceforge.net Thu Aug 28 02:49:57 2008 From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin) Date: Thu, 28 Aug 2008 08:49:57 +0200 Subject: [SciPy-user] possible to get INF in divide by zero ? In-Reply-To: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> Message-ID: <200808280849.58330.vincefn@users.sourceforge.net> On Thursday 28 August 2008, Stef Mientki wrote: > I wonder if it's possible to get the MatLab behavior: > divide by zero gives infinite (INF)? > > I even wonder if something like INF exists in python / scipy / numpy.
Isn't this already the behaviour of numpy?

In [6]: ones(10)/0
Warning: divide by zero encountered in divide
Out[6]: array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf])

Vincent -- Vincent Favre-Nicolin CEA Grenoble/INAC/SP2M http://inac.cea.fr Univ. Joseph Fourier (Grenoble) http://www.ujf-grenoble.fr ObjCryst & Fox http://objcryst.sf.net/Fox From s.mientki at ru.nl Thu Aug 28 03:56:36 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 28 Aug 2008 09:56:36 +0200 Subject: [SciPy-user] possible to get INF in divide by zero ? In-Reply-To: <200808280849.58330.vincefn@users.sourceforge.net> References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> <200808280849.58330.vincefn@users.sourceforge.net> Message-ID: <48B65A34.4010804@ru.nl> another observation:

>>> a=numpy.array([2,3])
>>> a/0
array([0, 0])

??? cheers, Stef Vincent Favre-Nicolin wrote: > On Thursday 28 August 2008, Stef Mientki wrote: > >> I wonder if it's possible to get the MatLab behavior: >> divide by zero gives infinite (INF)? >> >> I even wonder if something like INF exists in python / scipy / numpy. >> > > Isn't this already the behaviour of numpy? > > In [6]: ones(10)/0 > Warning: divide by zero encountered in divide > Out[6]: array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf]) > > Vincent > From s.mientki at ru.nl Thu Aug 28 03:48:39 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 28 Aug 2008 09:48:39 +0200 Subject: [SciPy-user] possible to get INF in divide by zero ? In-Reply-To: <200808280849.58330.vincefn@users.sourceforge.net> References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> <200808280849.58330.vincefn@users.sourceforge.net> Message-ID: <48B65857.5090304@ru.nl> Vincent Favre-Nicolin wrote: > On Thursday 28 August 2008, Stef Mientki wrote: > >> I wonder if it's possible to get the MatLab behavior: >> divide by zero gives infinite (INF)? >> >> I even wonder if something like INF exists in python / scipy / numpy. >> > > Isn't this already the behaviour of numpy? > > In [6]: ones(10)/0 > Warning: divide by zero encountered in divide > Out[6]: array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf]) > > Vincent > Thanks, so it works for arrays, but not for normal integers:

>>> 3/0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

thanks, Stef From robert.kern at gmail.com Thu Aug 28 04:10:22 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 28 Aug 2008 03:10:22 -0500 Subject: [SciPy-user] possible to get INF in divide by zero ?
In-Reply-To: <48B65857.5090304@ru.nl> References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> <200808280849.58330.vincefn@users.sourceforge.net> <48B65857.5090304@ru.nl> Message-ID: <3d375d730808280110l77263f84wc0c7047e78f42b97@mail.gmail.com> On Thu, Aug 28, 2008 at 02:48, Stef Mientki wrote:

> Thanks, so it works for arrays,
> but not for normal integers:
> >>> 3/0
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ZeroDivisionError: integer division or modulo by zero

Right. Python integers and floats explicitly check for this case and raise the error. numpy objects, either arrays or numpy scalar types, have a configurable mechanism. If you want scalars that work like this:

In [1]: from numpy import *

In [2]: float64(1.0) / 0.0
Out[2]: inf

In [5]: seterr(divide='raise')
Out[5]: {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}

In [6]: float64(1.0) / 0.0
---------------------------------------------------------------------------
FloatingPointError                        Traceback (most recent call last)

/Users/rkern/Downloads/Video/avy/<ipython console> in <module>()

FloatingPointError: divide by zero encountered in double_scalars

In [7]: seterr(divide='warn')
Out[7]: {'divide': 'raise', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}

In [8]: float64(1.0) / 0.0
/usr/local/bin/ipython:1: RuntimeWarning: divide by zero encountered in double_scalars
  #!/Library/Frameworks/Python.framework/Versions/2.5/Resources/Python.app/Contents/MacOS/Python
Out[8]: inf

In [9]: seterr(divide='print')
Out[9]: {'divide': 'warn', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}

In [10]: float64(1.0) / 0.0
Warning: divide by zero encountered in double_scalars
Out[10]: inf

In [12]: seterr(divide='ignore')
Out[12]: {'divide': 'print', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}

In [13]: float64(1.0) / 0.0
Out[13]: inf

In [14]: seterr?
Type:           function
Base Class:     <type 'function'>
String Form:    <function seterr at 0x...>
Namespace:      Interactive
File:           /Users/rkern/svn/numpy/numpy/core/numeric.py
Definition:     seterr(all=None, divide=None, over=None, under=None, invalid=None)
Docstring:
    Set how floating-point errors are handled.

    Valid values for each type of error are the strings
    "ignore", "warn", "raise", and "call". Returns the old settings.
    If 'all' is specified, values that are not otherwise specified
    will be set to 'all', otherwise they will retain their old
    values.

    Note that operations on integer scalar types (such as int16) are
    handled like floating point, and are affected by these settings.

    Example:

    >>> seterr(over='raise') # doctest: +SKIP
    {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}

    >>> seterr(all='warn', over='raise') # doctest: +SKIP
    {'over': 'raise', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}

    >>> int16(32000) * int16(3) # doctest: +SKIP
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    FloatingPointError: overflow encountered in short_scalars
    >>> seterr(all='ignore') # doctest: +SKIP
    {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', 'under': 'ignore'}

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From anand.prabhakar.patil at gmail.com Thu Aug 28 04:49:55 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Thu, 28 Aug 2008 09:49:55 +0100 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> Message-ID: <2bc7a5a50808280149i1ad25d7dk12a34afc34b6228f@mail.gmail.com> You can also use rpy, http://rpy.sourceforge.net/, to get the R table as a string straight from Python. Here's a very rough, non-working schematic of how it would go:

from rpy import r
mystring = r.lm_fit(blablabla)
f = file('table.tex','w')
f.write(mystring)

Anand On Thu, Aug 28, 2008 at 3:09 AM, David Lonie wrote: >> I'm using the scipy package to analyze data from my research. I'm >> running into a couple of problems I'd like some help with. 2 questions: >> >> 1) Curve fitting -- I have found linregress for linear functions, >> polyfit for polynomial fits, and a description of using lstsq to fit >> data that linearly depends on x. Is there a way to fit a curve of the >> form, for example, y = a + b^x? I don't mind a RTFM response to this, >> just please let me know which FM to R, because I can't find anything >> :) >> >> 2) LaTeX output -- I remember using R for stats before, but I'd like >> to avoid going back to it if possible. IIRC it had the ability to >> output latex tables, etc. that could be inserted into a document via >> \input{} (I think). Is there a python module that provides a similar >> output interface? >> >> Thanks in advance, >> >> Dave >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From loniedavid at gmail.com Thu Aug 28 08:56:48 2008 From: loniedavid at gmail.com (David Lonie) Date: Thu, 28 Aug 2008 07:56:48 -0500 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> Message-ID: <199bcede0808280556w1b3e0f2epf230248d0d5cf03e@mail.gmail.com> Thanks for the fast reply! I plugged in my data to that script and added some output, but I get an invalid fit out of it. The fit I got from oocalc is approx

y = 32 - 0.85^x, and fmin is returning
y = 12 - 0.65^x,

which for some reason returns an array of non-numbers when I try to generate plot data? Also, changing the guess around causes large changes in the fit. Is this normal? Thanks again, Dave The code:

========================================================
from scipy.optimize import *
from numpy import *
from pylab import *
from scipy import *

y = array((31,14,2.9,2.0))
x = array((0,6,12,18))

def mycost(c):
    y_model = c[0] + c[1]**x
    error_vect = y - y_model
    return sum(error_vect**2)

c_initial_guess = [5000 , 5000]

c_final = fmin(mycost, c_initial_guess)
xr = linspace(0,18)
yr = c_final[0] + c_final[1] ** xr

print "***************************************************************"
print c_final
print "***************************************************************"
print xr
print "***************************************************************"
print yr
print "***************************************************************"

scatter(x,y)
plot(xr,yr)
show()

On Wed, Aug 27, 2008 at 9:27 PM, Ryan Krauss wrote: > You can use optimize.fmin to fit a curve of an arbitrary form. You have to
> be careful to specify the output of your cost function correctly - you > probably want the sum of the squared error. The first input to the cost > function must be a vector of unknown coefficients. From loniedavid at gmail.com Thu Aug 28 09:01:26 2008 From: loniedavid at gmail.com (David Lonie) Date: Thu, 28 Aug 2008 08:01:26 -0500 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <48B60F91.70506@american.edu> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> <48B60F91.70506@american.edu> Message-ID: <199bcede0808280601m45933fc3o922aefc52ef9bb75@mail.gmail.com> That looks like what I need, with a few minor tweaks. Thanks! On Wed, Aug 27, 2008 at 9:38 PM, Alan G Isaac wrote: > David Lonie wrote: >> 2) LaTeX output -- I remember using R for stats before, but I'd like >> to avoid going back to it if possible. IIRC it had the ability to >> output latex tables, etc. that could be inserted into a document via >> \input{} (I think). Is there a python module that provides a similar >> output interface? > > Maybe econpy's SimpleTable would be useful to you? (License: MIT) > > Yes, you can then \input the tables into a LaTeX document. > > Alan Isaac > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From loniedavid at gmail.com Thu Aug 28 09:03:03 2008 From: loniedavid at gmail.com (David Lonie) Date: Thu, 28 Aug 2008 08:03:03 -0500 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <2bc7a5a50808280149i1ad25d7dk12a34afc34b6228f@mail.gmail.com> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> <2bc7a5a50808280149i1ad25d7dk12a34afc34b6228f@mail.gmail.com> Message-ID: <199bcede0808280603y6baf7e69y7a6efb80400b68c@mail.gmail.com> I've thought of doing this, but I need to use these scripts in Windows sometimes and I've always had trouble getting rpy working there. Maybe things have gotten better lately and it's time for another try. Dave On Thu, Aug 28, 2008 at 3:49 AM, Anand Patil wrote: > You can also use rpy, http://rpy.sourceforge.net/, to get the R table as a > string straight from Python. Here's a very rough, non-working schematic of > how it would go:
> from rpy import r
> mystring = r.lm_fit(blablabla)
> f = file('table.tex','w')
> f.write(mystring)
> Anand
>
> On Thu, Aug 28, 2008 at 3:09 AM, David Lonie wrote: >> >> I'm using the scipy package to analyze data from my research. I'm >> running into a couple of problems I'd like some help with. 2 questions: >> >> 1) Curve fitting -- I have found linregress for linear functions, >> polyfit for polynomial fits, and a description of using lstsq to fit >> data that linearly depends on x. Is there a way to fit a curve of the >> form, for example, y = a + b^x? I don't mind a RTFM response to this, >> just please let me know which FM to R, because I can't find anything >> :) >> >> 2) LaTeX output -- I remember using R for stats before, but I'd like >> to avoid going back to it if possible. IIRC it had the ability to >> output latex tables, etc. that could be inserted into a document via >> \input{} (I think). Is there a python module that provides a similar >> output interface?
>> Thanks in advance, >> Dave >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rob.clewley at gmail.com Thu Aug 28 09:30:59 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 28 Aug 2008 09:30:59 -0400 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <199bcede0808280556w1b3e0f2epf230248d0d5cf03e@mail.gmail.com> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> <199bcede0808280556w1b3e0f2epf230248d0d5cf03e@mail.gmail.com> Message-ID:

> def mycost(c):
>     y_model = c[0] + c[1]**x
>     error_vect = y - y_model
>     return sum(error_vect**2)

FWIW, norm(error_vect) will probably be faster than explicit summing and squaring. From bsouthey at gmail.com Thu Aug 28 10:12:42 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 28 Aug 2008 09:12:42 -0500 Subject: [SciPy-user] possible to get INF in divide by zero ? In-Reply-To: <3d375d730808280110l77263f84wc0c7047e78f42b97@mail.gmail.com> References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> <200808280849.58330.vincefn@users.sourceforge.net> <48B65857.5090304@ru.nl> <3d375d730808280110l77263f84wc0c7047e78f42b97@mail.gmail.com> Message-ID: <48B6B25A.2020206@gmail.com> Robert Kern wrote: > On Thu, Aug 28, 2008 at 02:48, Stef Mientki wrote: > >> Thanks, so it works for arrays, >> but not for normal integers: >> >>> 3/0 >> Traceback (most recent call last): >>   File "<stdin>", line 1, in <module> >> ZeroDivisionError: integer division or modulo by zero >> > > Right. Python integers and floats explicitly check for this case and > raise the error. numpy objects, either arrays or numpy scalar types, > have a configurable mechanism. If you want scalars that work like > this: > >
> In [1]: from numpy import *
>
> In [2]: float64(1.0) / 0.0
> Out[2]: inf
>
> In [5]: seterr(divide='raise')
> Out[5]: {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}
>
> In [6]: float64(1.0) / 0.0
> ---------------------------------------------------------------------------
> FloatingPointError                        Traceback (most recent call last)
>
> /Users/rkern/Downloads/Video/avy/<ipython console> in <module>()
>
> FloatingPointError: divide by zero encountered in double_scalars
>
> In [7]: seterr(divide='warn')
> Out[7]: {'divide': 'raise', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}
>
> In [8]: float64(1.0) / 0.0
> /usr/local/bin/ipython:1: RuntimeWarning: divide by zero encountered in double_scalars
>   #!/Library/Frameworks/Python.framework/Versions/2.5/Resources/Python.app/Contents/MacOS/Python
> Out[8]: inf
>
> In [9]: seterr(divide='print')
> Out[9]: {'divide': 'warn', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}
>
> In [10]: float64(1.0) / 0.0
> Warning: divide by zero encountered in double_scalars
> Out[10]: inf
>
> In [12]: seterr(divide='ignore')
> Out[12]: {'divide': 'print', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}
>
> In [13]: float64(1.0) / 0.0
> Out[13]: inf
>
> In [14]: seterr?
> Type:           function
> Base Class:     <type 'function'>
> String Form:    <function seterr at 0x...>
> Namespace:      Interactive
> File:           /Users/rkern/svn/numpy/numpy/core/numeric.py
> Definition:     seterr(all=None, divide=None, over=None, under=None, invalid=None)
> Docstring:
>     Set how floating-point errors are handled.
> > Valid values for each type of error are the strings > "ignore", "warn", "raise", and "call". Returns the old settings. > If 'all' is specified, values that are not otherwise specified > will be set to 'all', otherwise they will retain their old > values. > > Note that operations on integer scalar types (such as int16) are > handled like floating point, and are affected by these settings. > > Example: > > >>> seterr(over='raise') # doctest: +SKIP > {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', > 'under': 'ignore'} > > >>> seterr(all='warn', over='raise') # doctest: +SKIP > {'over': 'raise', 'divide': 'ignore', 'invalid': 'ignore', > 'under': 'ignore'} > > >>> int16(32000) * int16(3) # doctest: +SKIP > Traceback (most recent call last): > File "", line 1, in ? > FloatingPointError: overflow encountered in short_scalars > >>> seterr(all='ignore') # doctest: +SKIP > {'over': 'ignore', 'divide': 'ignore', 'invalid': 'ignore', > 'under': 'ignore'} > > In Python (probably system and version dependent) you can generate infinity as float('inf'). You are overlooking the basic definition of infinity (http://en.wikipedia.org/wiki/Infinity) and the numerical representation of numbers in computer science, especially integer versus floating point. In computer science, integers refer to a finite range ( http://en.wikipedia.org/wiki/Integer_(computer_science) ), which is contrary to the meaning of infinity. Technically the same argument holds true for floating point numbers, but fortunately there is the IEEE Standard for Binary Floating-Point Arithmetic (IEEE 754; see, for example, http://en.wikipedia.org/wiki/IEEE_754), which NumPy supports. This standard allows for representations of 'special' floating point values (such as positive infinity, negative infinity and Not a Number) and, more importantly, operations involving these values. Using integers: >>> numpy.ones(10, dtype=numpy.int) array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) >>> numpy.ones(10, dtype=numpy.int)/0 array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) >>> type((numpy.ones(10, dtype=numpy.int)/0)[0]) Compared to using floats (the default dtype of numpy.ones, for example, http://sd-2116.dedibox.fr/pydocweb/doc/numpy.matlib.ones/ ): >>> numpy.ones(10, dtype=numpy.float)/0 array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf]) >>> type((numpy.ones(10, dtype=numpy.float)/0)[0]) Bruce From aisaac at american.edu Thu Aug 28 10:40:29 2008 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 28 Aug 2008 10:40:29 -0400 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <199bcede0808280601m45933fc3o922aefc52ef9bb75@mail.gmail.com> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com><48B60F91.70506@american.edu> <199bcede0808280601m45933fc3o922aefc52ef9bb75@mail.gmail.com> Message-ID: On Thu, 28 Aug 2008, David Lonie apparently wrote: > That looks like what I need, with a few minor tweaks. Thanks! I would be interested to know what needs to be tweaked to make SimpleTable useful for you. Any suggestions are welcome. Thanks, Alan From loniedavid at gmail.com Thu Aug 28 11:04:05 2008 From: loniedavid at gmail.com (David Lonie) Date: Thu, 28 Aug 2008 10:04:05 -0500 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> <48B60F91.70506@american.edu> <199bcede0808280601m45933fc3o922aefc52ef9bb75@mail.gmail.com> Message-ID: <199bcede0808280804jd8fe1bsb966158bbb3c760b@mail.gmail.com> Mainly, a different data format for each column, i.e.
some columns should be %s, some %.2f, some %.3f, some %.4e etc. Although I could just convert the arrays to already-formatted string arrays before passing them and make them all %s.... Also, I'd like to change the \\ that ends a row to \\\hline in some tables by simply changing an option. A function to output the table to a .tex file containing just the table would be useful too -- like R does. (I like to put all my data into a single .tsv, run a script on it to process it, and have a tex file "write itself" using a template with lots of \input{} macros, rather than copying and pasting data in from a terminal.) I haven't had a chance to thoroughly review the code, so I may have missed a couple of these :) Dave On Thu, Aug 28, 2008 at 9:40 AM, Alan G Isaac wrote: > On Thu, 28 Aug 2008, David Lonie apparently wrote: >> That looks like what I need, with a few minor tweaks. Thanks! > > I would be interested to know what needs to be tweaked > to make SimpleTable useful for you. Any suggestions > are welcome. > > Thanks, > Alan > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Thu Aug 28 11:10:50 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 28 Aug 2008 10:10:50 -0500 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <199bcede0808280556w1b3e0f2epf230248d0d5cf03e@mail.gmail.com> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> <199bcede0808280556w1b3e0f2epf230248d0d5cf03e@mail.gmail.com> Message-ID: optimize.fmin will find local minima of your cost. Defining the cost carefully and giving good initial guesses are very important. On 8/28/08, David Lonie wrote: > > Thanks for the fast reply! I plugged in my data to that script and > added some output, but I get an invalid fit out of it. The fit I got > from oocalc is approx > > y = 32 - 0.85^x, and fmin is returning > y = 12 - 0.65^x, which for some reason returns an array of non-numbers > when I try to generate plot data? > > Also, changing the guess around causes large changes in the fit. Is this normal? > > Thanks again, > > Dave > > The code: > ======================================================== > from scipy.optimize import * > from numpy import * > from pylab import * > from scipy import * > > y = array((31,14,2.9,2.0)) > x = array((0,6,12,18)) > > > def mycost(c): > y_model = c[0] + c[1]**x > error_vect = y - y_model > return sum(error_vect**2) > > > c_initial_guess = [5000 , 5000] > > c_final = fmin(mycost, c_initial_guess) > xr = linspace(0,18) > yr = c_final[0] + c_final[1] ** xr > > print "***************************************************************" > print c_final > print "***************************************************************" > print xr > print "***************************************************************" > print yr > print "***************************************************************" > > scatter(x,y) > plot(xr,yr) > show() > > > > On Wed, Aug 27, 2008 at 9:27 PM, Ryan Krauss wrote: > > You can use optimize.fmin to fit a curve of an arbitrary form. You have to > > be careful to specify the output of your cost function correctly - you > > probably want the sum of the squared error. The first input to the cost > > function must be a vector of unknown coefficients.
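For concreteness, here is a minimal sketch of that approach, using the data quoted above and a three-coefficient model of the form y = a*b**x + c (the names and the starting guess are illustrative, not from any particular post):

from numpy import array, sum
from scipy.optimize import fmin

x = array((0., 6., 12., 18.))
y = array((31., 14., 2.9, 2.0))

def cost(coeffs):
    # sum of the squared errors for the model y = a * b**x + c
    a, b, c = coeffs
    return sum((y - (a * b**x + c))**2)

# the data decay, so a sensible starting guess keeps 0 < b < 1
a, b, c = fmin(cost, [30.0, 0.8, 0.0])
print a, b, c

Since fmin only finds a local minimum, a poor starting guess (like [5000, 5000] above) can land far away from the good fit.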
> > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Thu Aug 28 11:22:53 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 28 Aug 2008 11:22:53 -0400 Subject: [SciPy-user] Curve fitting and LaTeX output In-Reply-To: <199bcede0808280556w1b3e0f2epf230248d0d5cf03e@mail.gmail.com> References: <199bcede0808271909y6ae27e4apf1a0975d6b93c3cf@mail.gmail.com> <199bcede0808280556w1b3e0f2epf230248d0d5cf03e@mail.gmail.com> Message-ID: <114880320808280822g3d20d373oa5dd594a3817d282@mail.gmail.com> Hi David, On Thu, Aug 28, 2008 at 8:56 AM, David Lonie wrote: > Thanks for the fast reply! I plugged in my data to that script and > added some output, but I get an invalid fit out of it. The fit I got > from oocalc is approx > > y = 32 - 0.85^x, and fmin is returning > y = 12 - 0.65^x, which for some reason returns an array of non-numbers > when I try to generate plot data? If c[0]=12 and c[1]=-0.65, then your function is not 12-0.65**x, but rather 12 + (-0.65)**x, which is not defined when x is not an integer. As Ryan suggested, try a better starting guess. You know the function must be decreasing, so c[1] must be between 0 and 1. If I use c_initial_guess = [10, 0.5], I get c_final = [12.20289742 0.65819867] This is still a bad fit; I think this sum is not a good model for your data. You'll get a better fit with a simple scaled exponential: c[0]*(c[1]**x) Cheers, Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From bryan.cole at teraview.com Thu Aug 28 11:23:27 2008 From: bryan.cole at teraview.com (bryan cole) Date: Thu, 28 Aug 2008 15:23:27 +0000 (UTC) Subject: [SciPy-user] advice on stochastic(?) optimisation Message-ID: Hi, I'm looking for a bit of guidance as to what sort of algorithm is most appropriate/efficient for finding the local maximum of a function (in 2 dimensions), where each function evaluation is 1) noisy and 2) expensive/slow to evaluate. I'd welcome any suggestions for where best to start investigating this (text books, references, web-sites or existing optimisation libraries). I've no background in this field at all. cheers, Bryan From kartita at gmail.com Thu Aug 28 11:35:45 2008 From: kartita at gmail.com (Kimberly Artita) Date: Thu, 28 Aug 2008 10:35:45 -0500 Subject: [SciPy-user] advice on stochastic(?) optimisation In-Reply-To: References: Message-ID: I use several "flavors" of evolutionary optimization: particle swarm and variations of the genetic algorithm. Using latin hypercube sampling to generate the initial population is highly recommended. There are several online websites and you can even find some algorithms coded in python. Don't forget to use swig or f2py when you can. On Thu, Aug 28, 2008 at 10:23 AM, bryan cole wrote: > Hi, > > I'm looking for a bit of guidance as to what sort of algorithm is most > appropriate/efficient for finding the local maximum of a function (in 2 > dimensions), where each function evaluation is 1) noisy and 2) > expensive/slow to evaluate. > > I'd welcome any suggestions for where best to start investigating this > (text books, references, web-sites or existing optimisation libraries). > I've no background in this field at all.
> > cheers, > Bryan > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Kimberly S. Artita Graduate Student, Engineering Science College of Engineering Southern Illinois University Carbondale Carbondale, Illinois 62901-6603 Office: ENGB 0044, Water Resources Research Lab Phone: (618)-528-0349 E-mail: kartita at gmail.com, kartita at siu.edu web: http://civil.engr.siu.edu/GraduateStudents/artita/index.html From loniedavid at gmail.com Thu Aug 28 12:03:08 2008 From: loniedavid at gmail.com (David Lonie) Date: Thu, 28 Aug 2008 11:03:08 -0500 Subject: [SciPy-user] Curve fitting Message-ID: <199bcede0808280903i450f63c7iba9b9504b3b9259d@mail.gmail.com> Thanks for the guidance -- I got a very good fit for an a*b^x+c model. Curve fitting seems a lot less intimidating now than it did 2 days ago :) Thanks again, Dave On Thu, Aug 28, 2008 at 10:22 AM, Warren Weckesser wrote: > Hi David, > > On Thu, Aug 28, 2008 at 8:56 AM, David Lonie wrote: >> >> Thanks for the fast reply! I plugged in my data to that script and >> added some output, but I get an invalid fit out of it. The fit I got >> from oocalc is approx >> >> y = 32 - 0.85^x, and fmin is returning >> y = 12 - 0.65^x, which for some reason returns an array of non-numbers >> when I try to generate plot data? > > If c[0]=12 and c[1]=-0.65, then your function is not 12-0.65**x, > but rather 12 + (-0.65)**x, which is not defined when x is not > an integer. As Ryan suggested, try a better starting guess. > You know the function must be decreasing, so c[1] must be > between 0 and 1. If I use c_initial_guess = [10, 0.5], I get > c_final = [12.20289742 0.65819867] > This is still a bad fit; I think this sum is not a good model for > your data. You'll get a better fit with a simple scaled > exponential: c[0]*(c[1]**x) > > > Cheers, > > Warren > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From anand.prabhakar.patil at gmail.com Thu Aug 28 12:06:26 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Thu, 28 Aug 2008 17:06:26 +0100 Subject: [SciPy-user] looking for a negative binomial distribution In-Reply-To: <3d375d730808222315g2dfefb61yc716fede2cfa20b2@mail.gmail.com> References: <3d375d730808222315g2dfefb61yc716fede2cfa20b2@mail.gmail.com> Message-ID: <2bc7a5a50808280906h5f6d6a99n86ed84fe50c66694@mail.gmail.com> I'm getting another issue: In [4]: numpy.random.geometric(0) Out[4]: -2147483648 In [5]: numpy.random.geometric(1e-13) Out[5]: -2147483648 but am using 1.2.0.dev5418, not the svn head. Does anyone get this with the svn head? Anand On Sat, Aug 23, 2008 at 7:15 AM, Robert Kern wrote: > On Fri, Aug 22, 2008 at 21:58, Chris Fonnesbeck wrote: > > I notice in the scipy dev wiki that the negative binomial random number > > generator is broken. In particular, it appears to round the first > parameter, > > which is incorrect -- any real number is valid, not just integers. > > Fixed in numpy SVN. This will be part of the upcoming 1.2.0 release > and probably 1.1.2 if we do one. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominique.orban at gmail.com Thu Aug 28 13:45:43 2008 From: dominique.orban at gmail.com (Dominique Orban) Date: Thu, 28 Aug 2008 13:45:43 -0400 Subject: [SciPy-user] advice on stochastic(?) optimisation In-Reply-To: References: Message-ID: <8793ae6e0808281045w120db46cy9695cb838ad063bd@mail.gmail.com> On Thu, Aug 28, 2008 at 11:23 AM, bryan cole wrote: > I'll looking for a bit of guidance as to what sort of algorithm is most > appropriate/efficient for finding the local maximum of a function (in 2 > dimensions), where each function evaluation is 1) noisy and 2) > expensive/slow to evaluate. A few points that might influence the choice of a method are: - Are there any constraints? - Can you compute the first derivatives of your objective function (i.e., its gradient) and of your constraints? - How about second derivatives? - If not, do those functions have a known structure? Do you have access to a computer code that evaluates them or are they essentially a "black box"? > I'd welcome any suggestions for where best to start investigating this > (text books, references, web-sites or existing optimisation libraries). > I've no background in this field at all. For noisy optimization you might want to start with Tim Kelley's book and his method (called DIRECT). See http://www4.ncsu.edu/~ctk/iffco.html There is Fortran code that could be wrapped up with f2py and (apparently more up to date) Matlab code that could be converted to Python. For his book, see http://www.ec-securehost.com/SIAM/FR18.html There is also a class of very successful methods called mesh-adaptive direct search. There is a C++ code at http://www.gerad.ca/NOMAD/ for problems which are essentially made up of black boxes. The advantage of such methods is that, despite the difficulty of the problems, they guarantee certain convergence properties. That's not to say that they always work, although they often do, but rather that when they don't you can usually figure out why. Dominique From pwang at enthought.com Thu Aug 28 16:56:27 2008 From: pwang at enthought.com (Peter Wang) Date: Thu, 28 Aug 2008 15:56:27 -0500 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <48B542D5.9000108@meduni-graz.at> References: <48B463A3.3060204@pythonxy.com> <48B542D5.9000108@meduni-graz.at> Message-ID: On Aug 27, 2008, at 7:04 AM, bernardo martins rocha wrote: > Hi Guys, > What I'm trying to do is something like this code: > http://www.krugle.org/examples/p-ytHFYREhpD3udCb8/embedding_in_qt4.py > But...with a lot of plot windows with two graphs in each (comparing > different solutions) and for huge arrays....I'll try to do it > initially > with matplotlib, and then, if I figure out that it is too slow then > I'll > move to PyQwt. How big are your arrays, and how big is the resolution of your output devices? > I had a look at Chaco and it looks really nice, but I don't know if it > is possible to embed it in a PyQt application like the example > above. Is > it possible? Yes. Traits UI and Chaco are all compatible with Qt4. Just set your ETS_TOOLKIT environment variable to "qt4" before running any of the Chaco examples. > Are there some examples/tutorials available? Yes. We have a large number of examples. 
Every screenshot in the gallery links to its corresponding source in SVN: http://code.enthought.com/projects/chaco/gallery.php I'm still working on the docs (as always), but you can see them in-progress here: http://code.enthought.com/projects/chaco/docs/html/index.html The QuickStart is a good place to start, and you can find the PDF of my slides from the SciPy tutorial I gave last week in the Tutorials section: http://code.enthought.com/projects/chaco/docs/html/user_manual/tutorial.html You can also ask specific questions on the enthought-dev or chaco-users mailing lists: https://mail.enthought.com/mailman/listinfo/enthought-dev https://mail.enthought.com/mailman/listinfo/chaco-users -Peter From pwang at enthought.com Thu Aug 28 17:07:41 2008 From: pwang at enthought.com (Peter Wang) Date: Thu, 28 Aug 2008 16:07:41 -0500 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> References: <48B3F6CF.2010600@meduni-graz.at> <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> Message-ID: <69A34E96-97C6-49F7-9D4C-1E60EC283870@enthought.com> On Aug 26, 2008, at 7:25 PM, Zachary Pincus wrote: > Another option, depending on how much plumbing you're interested in, > is to write a custom tool with OpenGL... > > I've been using Pyglet for some rather-specialized data display needs > (blit live video from a microscope + plot derived measures on top of > the video, using the mouse to pan and zoom), and it's pretty nice. > Basically, Pyglet is a (pretty simple) pure-python, ctypes-based, > multiplatform interface to OpenGL, windowing, and mouse/keyboard IO. > It's quite hackable, too -- I rigged up a very simple system to run > pyglet windows in a background thread, so I could control the > microscope from an interactive python interpreter, while still being > able to programmatically interact with pyglet window objects. (Happy > to share this code with anyone who desires. It's much cleaner, IMO, > than the gyrations that ipython has to go through to support > nonblocking QT, Tk, etc. windows. This is because the pyglet mainloop > is in python, and is easy to subclass and otherwise mess with.) > > The downside is of course that OpenGL isn't a plotting library. The > upside is that if you have a well-defined plotting task, and you want > full aesthetic control and also high speed, you can get that with not > too much work. > > Just a thought, > Zach Hey Zach, I've been working on an early version of an OpenGL/pyglet-based backend for Chaco. It currently does most of the plots that are supported in Chaco (although there are issues with the color bar rendering incorrectly). I use pyglet to get a window and provide a platform-independent API for events, but most of the actual drawing is done via a C++ GraphicsContext class that makes calls to libOpenGl. (I use pyglet to render text and Andrew Straw's pygarrayimage to draw images.) This GraphicsContext has a transform stack, a clip stack, supports compiled paths, etc. I also have my own little "PygletSimpleApp" class analog of WxSimpleApp and whatnot. It is indeed very nice to have total control over the event loop - so much simpler than fiddling with the Wx event queue! I've tested my code on win32, Ubuntu 7, and OS X. The beauty of doing it this way is that I can reuse all of the data handling and rendering code from Chaco, and on systems where I have WX or Qt available, I can switch to using those instead of Pyglet by setting a single environment variable.
I'm hoping to get this GL backend polished up enough to release it as a supported part of Chaco, maybe by the next large-ish release (3.1? 3.2?). -Peter From tonyyu at mit.edu Thu Aug 28 17:25:58 2008 From: tonyyu at mit.edu (Tony S Yu) Date: Thu, 28 Aug 2008 17:25:58 -0400 Subject: [SciPy-user] in-place add for sparse.lil_eye Message-ID: I'm not sure if this is a closed bug or not, but in-place adding for sparse.lil_eye *sometimes* raises: AttributeError: 'numpy.ndarray' object has no attribute 'append' In the code below, the error doesn't occur unless the sparse matrix is multiplied by some number (`a` below) AND the added matrix (`A_offdiag` below) has entries off the main diagonal. Normal adding works fine. I noticed the 0.7.0 changes noted "numerous bug fixes" for the sparse module, so this problem may already be fixed (possibly with the closing of Ticket#680); I just wanted to bring it up in case it hadn't. Cheers, -Tony #~~~~~~~~~~~~~~~~~~~~~~~~~~~ import numpy as np import scipy.sparse as sparse N = 15 a = 2. A_offdiag = sparse.lil_diags([np.ones(N)], [1], (N, N)) A_eye = a * sparse.lil_eye((N, N)) A_eye += A_offdiag #~~~~~~~~~~~~~~~~~~~~~~~~~~~ Specs: --------- scipy 0.6.0 numpy 1.1.1 python 2.5.1 os.x 10.5.3 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Aug 28 17:46:38 2008 From: cournape at gmail.com (David Cournapeau) Date: Fri, 29 Aug 2008 06:46:38 +0900 Subject: [SciPy-user] advice on stochastic(?) optimisation In-Reply-To: References: Message-ID: <5b8d13220808281446x51502978n3b374a6f4af162e@mail.gmail.com> On Fri, Aug 29, 2008 at 12:23 AM, bryan cole wrote: > Hi, > > I'm looking for a bit of guidance as to what sort of algorithm is most > appropriate/efficient for finding the local maximum of a function (in 2 > dimensions), where each function evaluation is 1) noisy and 2) > expensive/slow to evaluate. > Do you have a formula for the function f? If what you have is only noisy observations of f(x), without knowing f, that's basically what stochastic approximation is about: there is a big literature about this kind of problem. The first article is the one introducing the Robbins-Monro algorithm: Robbins, H. and Monro, S. "A Stochastic Approximation Method." Ann. Math. Stat. 22, 400-407, 1951. A recent book covering the field is the Kushner and Yin book: http://www.springer.com/math/probability/book/978-0-387-00894-3?cm_mmc=Google-_-Book%20Search-_-Springer-_-0 The problem of those algorithms is that they are hard to implement in python, because of their recursive nature, hence non vectorizable. If your function/observation is hard to compute, it may not be a big problem, though. cheers, David From bryan at cole.uklinux.net Thu Aug 28 17:49:29 2008 From: bryan at cole.uklinux.net (Bryan Cole) Date: Thu, 28 Aug 2008 22:49:29 +0100 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <69A34E96-97C6-49F7-9D4C-1E60EC283870@enthought.com> References: <48B3F6CF.2010600@meduni-graz.at> <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> <69A34E96-97C6-49F7-9D4C-1E60EC283870@enthought.com> Message-ID: <1219960169.18713.8.camel@pc2.cole.uklinux.net> On Thu, 2008-08-28 at 16:07 -0500, Peter Wang wrote: > On Aug 26, 2008, at 7:25 PM, Zachary Pincus wrote: > > Another option, depending on how much plumbing you're interested in, > > is to write a custom tool with OpenGL...
> > > > I've been using Pyglet for some rather-specialized data display needs > > (blit live video from a microscope + plot derived measures on top of > > the video, using the mouse to pan and zoom), and it's pretty nice. > > Basically, Pyglet is a (pretty simple) pure-python, ctypes-based, > > multiplatform interface to OpenGL, windowing, and mouse/keyboard IO. > > It's quite hackable, too -- I rigged up a very simple system to run > > pyglet windows in a background thread, so I could control the > > microscope from an interactive python interpreter, while still being > > able to programmatically interact with pyglet window objects. (Happy > > to share this code with anyone who desires. It's much cleaner, IMO, > > than the gyrations that ipython has to go through to support > > nonblocking QT, Tk, etc. windows. This is because the pyglet mainloop > > is in python, and is easy to subclass and otherwise mess with.) > > > > The downside is of course that OpenGL isn't a plotting library. The > > upside is that if you have a well-defined plotting task, and you want > > full aesthetic control and also high speed, you can get that with not > > too much work. > > > > Just a thought, > > Zach > > Hey Zach, > > I've been working on an early version of an OpenGL/pyglet-based > backend for Chaco. It currently does most of the plots that are > supported in Chaco (although there are issues with the color bar > rendering incorrectly). > > I use pyglet to get a window and provide a platform-independent API > for events, but most of the actual drawing is done via a C++ > GraphicsContext class that makes calls to libOpenGl. (I use pyglet to > render text and Andrew Straw's pygarrayimage to draw images.) This > GraphicsContext has a transform stack, a clip stack, supports compiled > paths, etc. > > I also have my own little "PygletSimpleApp" class analog of > WxSimpleApp and whatnot. It is indeed very nice to have total control > over the event loop - so much simpler than fiddling with the Wx event > queue! The GL backend for Chaco is an exciting development. Although the examples worked fine with the pyglet backend, when I tried to experiment with it I couldn't see how to use it within the context of a full TraitsUI/wx application. Pyglet doesn't seem to integrate with any other toolkit event loop. Can the gl_graphics_context be "dropped in" in place of the standard agg gc? Is there an easy switch to set this up? Bryan > > I've tested my code on win32, Ubuntu 7, and OS X. The beauty of doing > it this way is that I can reuse all of the data handling and rendering > code from Chaco, and on systems where I have WX or Qt available, I can > switch to using those instead of Pyglet by setting a single > environment variable. I'm hoping to get this GL backend polished up > enough to release it as a supported part of Chaco, maybe by the next > large-ish release (3.1? 3.2?). > > > > -Peter From cournape at gmail.com Thu Aug 28 17:53:28 2008 From: cournape at gmail.com (David Cournapeau) Date: Fri, 29 Aug 2008 06:53:28 +0900 Subject: [SciPy-user] advice on stochastic(?) optimisation In-Reply-To: <5b8d13220808281446x51502978n3b374a6f4af162e@mail.gmail.com> References: <5b8d13220808281446x51502978n3b374a6f4af162e@mail.gmail.com> Message-ID: <5b8d13220808281453o62e6f0f5j332beed5638e5fbb@mail.gmail.com> On Fri, Aug 29, 2008 at 6:46 AM, David Cournapeau wrote: > The problem of those algorithms is that they are hard to implement in > python, because of their recursive nature, hence non vectorizable.
If > your function/observation is hard to compute, it may not be a big > problem, though. > I forgot a link to some implementation: http://leon.bottou.org/projects/sgd cheers, David From pwang at enthought.com Thu Aug 28 18:36:46 2008 From: pwang at enthought.com (Peter Wang) Date: Thu, 28 Aug 2008 17:36:46 -0500 Subject: [SciPy-user] Thoughts on GUI development In-Reply-To: <20080822224315.GA14708@phare.normalesup.org> References: <48AF124D.2010003@ru.nl> <20080822224315.GA14708@phare.normalesup.org> Message-ID: <33F810E4-8DA5-4360-90D0-3C957C3C1CBF@enthought.com> On Aug 22, 2008, at 5:43 PM, Gael Varoquaux wrote: > On Fri, Aug 22, 2008 at 04:12:24PM -0400, Barry Wark wrote: >> I think what Stef is getting at is that effective scientific software >> may need to match its user model to the user's world model. In other >> words, the "workflow" matters. If the software requires the user >> (scientist/engineer/etc.) to deal with data or process in a different >> order than the order implied by their experiment, the software is not >> as good as it could be. In my view, this is why we build UIs--so that >> we can match the software model to the user's model such that the >> software is "invisible" to the user in doing their work. I contend >> that it is a rare case when a CLI interface is the *best* fit to the >> user's world model. > > I agree, but I was talking about a CLI interface, not a notebook, or > something else, and I guess my point was that if you go in GUIs, you > should should get more than a nice-looking terminal, eg Matlab, > scilab. This point cannot be stressed enough. For scientific or engineering users, one of the major workflows *is* data exploration. Although writing expressions and small procedural logic blocks is a fairly reasonable fit for some of that exploration, I think people stick to it out of habit. Some scientists like to make fun of "business" users that do crazy, complex things in Excel, but we should make sure that the python/ipython/matlab/mathematica/ prompt does not become our little Excel. :) In my mind, the purpose of integrating a command-line prompt with the GUI is to provide that familiar exploratory interface while at the same time providing nice visual interfaces for things that actually *do* have workflow to them. Furthermore, if the data and object models behind the exploratory interface are easy to turn into a more workflow-oriented interface for non-expert users, then the entire lab benefits from this sharing of expertise. It's a lofty and a difficult goal, but I think it's the right way to interface GUIs with something as creative and open-ended as scientific analysis. > I am currently struggling with trying to define what is my model, > what is > my view, what should sit where, ie how many processes we want. I am > now > convinced that for a robust and powerful IDE with Python, we want > several > processes communicating together. For instance, I think that the > editor, > may it be written in Python, and not emacs or vim, or eclipse, > should be > sitting in a different process, so that the calculation does not block > the editor, nor crashes it. It's not clear to me why you want "several processes communicating together". Ease of coding (i.e. avoiding nasty multithreading issues)? The GIL? It's not obvious to me why the choice of Python over, say, C++ changes the decision making process here. If I were tasked with writing a C++ IDE, my first inclination would be to avoid multi-process as much as possible... 
-Peter From dwf at cs.toronto.edu Thu Aug 28 18:49:10 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 28 Aug 2008 18:49:10 -0400 Subject: [SciPy-user] advice on stochastic(?) optimisation In-Reply-To: References: Message-ID: <803F9796-BB96-446B-A934-431520A2418A@cs.toronto.edu> On 28-Aug-08, at 11:23 AM, bryan cole wrote: > I'm looking for a bit of guidance as to what sort of algorithm is > most > appropriate/efficient for finding the local maximum of a function > (in 2 > dimensions), where each function evaluation is 1) noisy and 2) > expensive/slow to evaluate. Noisy how, exactly? And do you have gradients (or approximate gradients)? Can you at least be guaranteed that the function you are evaluating is proportional (on average) to the true function? There is a wide and deep literature on stochastic gradient descent, particularly in the context of neural networks. Here are some papers that you might find of interest: Local Gain Adaptation in Stochastic Gradient Descent by N. Schraudolph: http://tinyurl.com/69xm45 A set of lecture notes by Leon Bottou on the subject: http://leon.bottou.org/papers/bottou-mlss-2004 In two dimensions, though, I doubt anything too complicated will be necessary. David From cohen at slac.stanford.edu Thu Aug 28 18:57:33 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Fri, 29 Aug 2008 00:57:33 +0200 Subject: [SciPy-user] Getting coordinates of a level (contour) curve In-Reply-To: References: <9457e7c80808112254u8e12be1w75e5ddb406f696df@mail.gmail.com> <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu> <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu> <10116D7E-68BC-414E-A031-E1CBF19AE65F@yale.edu> Message-ID: <48B72D5D.7040209@slac.stanford.edu> Hi Rob, I just tried your code after installing PyDSTool from svn. But I get the following error: Velocity around curve is always 1, e.g. look at 100th point norm(Point(sol[100].labels['EP']['data'].V)) = --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) /home/cohen/PyCont_LevelCurve.py in () 59 print "\nVelocity around curve is always 1, e.g. look at 100th point" 60 print "norm(Point(sol[100].labels['EP']['data'].V)) =", \ ---> 61 norm(Point(sol[100].labels['EP']['data'].V)) 62 63 print "... at which we have travelled distance ds =", \ /usr/lib/python2.5/site-packages/matplotlib/mlab.pyc in norm(x, y) 1783 Deprecated - see numpy.linalg.norm 1784 """ -> 1785 raise NotImplementedError('Deprecated - see numpy.linalg.norm') 1786 1787 NotImplementedError: Deprecated - see numpy.linalg.norm WARNING: Failure executing file: My version of numpy is '1.2.0.dev5694'. thanks, Johann Rob Clewley wrote: > Attached is a commented example in PyCont for a 2D zero level set that > defines an ellipse. It's very easy! I'll add this to PyDSTool's > examples. Thanks for giving me the impetus to do this. Let me know > what you think.
> > -Rob > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From pwang at enthought.com Thu Aug 28 19:25:40 2008 From: pwang at enthought.com (Peter Wang) Date: Thu, 28 Aug 2008 18:25:40 -0500 Subject: [SciPy-user] Pyglet + Chaco (was Re: pyqwt or matplotlib) In-Reply-To: <1219960169.18713.8.camel@pc2.cole.uklinux.net> References: <48B3F6CF.2010600@meduni-graz.at> <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> <69A34E96-97C6-49F7-9D4C-1E60EC283870@enthought.com> <1219960169.18713.8.camel@pc2.cole.uklinux.net> Message-ID: <71F903BD-3163-4775-98D8-781D7F1319FE@enthought.com> On Aug 28, 2008, at 4:49 PM, Bryan Cole wrote: > The GL backend for Chaco is an exciting development. Althought the > examples worked fine with the pyglet backend, when I tried to > experiment > with it I couldn't see how to use it within the context of a full > TraitsUI/wx application. Pyglet doesn't seem to integrate with any > other > toolkit event loop. Yep, there be the dragons. I started working on this late Tuesday night but haven't finished it yet. It is certainly possible to do, it's just that my test code is still doing funky things. :) The goal that Stefan and I were talking about is to embed a Pyglet-based CoverFlow into a Traits UI editor. (My personal goal is to then stick live Chaco plots on the CoverFlow covers. ;) > Can the gl_graphics_context be "dropped in" in place of the standard > agg > gc? Is there an easy switch to set this up? You can set the KIVA_WISHLIST environment variable to "gl", and then subsequent imports of GraphicsContext from enthought.kiva will use the GC from backend_gl.py. This doesn't get you nice event handling through Enable, but it does work. You can also directly import the graphics context and play with it: from enthought.kiva.backend_gl import GraphicsContext from enthought.kiva import Font from pyglet import clock, window def main(): clock.set_fps_limit(60) win = window.Window() win.set_size(480, 320) win.set_caption("Backend GL + Pyglet") gc = GraphicsContext((480, 320)) gc.gl_init() exit = False while not exit: win.dispatch_events() gc.clear() gc.set_stroke_color((1, 0, 0, 1)) gc.set_line_width(2) gc.rect(100, 100, 200, 75) gc.stroke_path() gc.set_fill_color((0,0,1,1)) font = Font("Arial", 48) gc.set_font(font) gc.show_text_at_point("Kiva GL!", 110, 110) win.flip() clock.tick() if win.has_exit: exit = True return if __name__ == "__main__": main() -Peter From rob.clewley at gmail.com Thu Aug 28 23:33:03 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Thu, 28 Aug 2008 23:33:03 -0400 Subject: [SciPy-user] Getting coordinates of a level (contour) curve In-Reply-To: <48B72D5D.7040209@slac.stanford.edu> References: <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu> <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu> <10116D7E-68BC-414E-A031-E1CBF19AE65F@yale.edu> <48B72D5D.7040209@slac.stanford.edu> Message-ID: > -> 1785 raise NotImplementedError('Deprecated - see numpy.linalg.norm') Hmm, that's a new one. Well, I guess I'll need to start making sure that the namespace is cleaned up when PyDSTool does its imports (these have been a mixture from numpy, scipy and matplotlib). For now it would appear that you'd get the script to work by importing norm from numpy.linalg explicitly in this script. 
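For example, a one-line workaround at the top of the script would be (a sketch; place it after any star imports, so that the numpy version shadows the deprecated matplotlib.mlab.norm):

from numpy.linalg import norm

Any later call such as norm(Point(sol[100].labels['EP']['data'].V)) then uses the numpy implementation instead of the matplotlib stub.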
From robert.kern at gmail.com Fri Aug 29 01:54:14 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 29 Aug 2008 00:54:14 -0500 Subject: [SciPy-user] pyqwt or matplotlib In-Reply-To: <1219960169.18713.8.camel@pc2.cole.uklinux.net> References: <48B3F6CF.2010600@meduni-graz.at> <09C4D732-872E-496B-8DEE-E920BBF4E5F8@yale.edu> <69A34E96-97C6-49F7-9D4C-1E60EC283870@enthought.com> <1219960169.18713.8.camel@pc2.cole.uklinux.net> Message-ID: <3d375d730808282254g5c2e450ek2885e15459963013@mail.gmail.com> On Thu, Aug 28, 2008 at 16:49, Bryan Cole wrote: > The GL backend for Chaco is an exciting development. Although the > examples worked fine with the pyglet backend, when I tried to experiment > with it I couldn't see how to use it within the context of a full > TraitsUI/wx application. Pyglet doesn't seem to integrate with any other > toolkit event loop. It can. However, the Enable backend using pyglet/OpenGL has not been made to do so, yet. It's just a matter of programming, to quote Eric. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wnbell at gmail.com Fri Aug 29 03:19:13 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 29 Aug 2008 03:19:13 -0400 Subject: [SciPy-user] in-place add for sparse.lil_eye In-Reply-To: References: Message-ID: On Thu, Aug 28, 2008 at 5:25 PM, Tony S Yu wrote: > I'm not sure if this is a closed bug or not, but in-place adding for > sparse.lil_eye *sometimes* raises: > AttributeError: 'numpy.ndarray' object has no attribute 'append' > In the code below, the error doesn't occur unless the sparse matrix is > multiplied by some number (`a` below) AND the added matrix (`A_offdiag` > below) has entries off the main diagonal. Normal adding works fine. > I noticed the 0.7.0 changes noted "numerous bug fixes" for the sparse > module, so this problem may already be fixed (possibly with the closing > of Ticket#680); I just wanted to bring it up in case it hadn't. Hi Tony, Thanks for raising this issue as it had not been fixed yet. I've committed some changes in r4678 [1] that should resolve the issue. Please let us know of any other possible outstanding issues. [1] http://projects.scipy.org/scipy/scipy/changeset/4678 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From cohen at slac.stanford.edu Fri Aug 29 03:45:15 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Fri, 29 Aug 2008 09:45:15 +0200 Subject: [SciPy-user] Getting coordinates of a level (contour) curve In-Reply-To: References: <6A20D155-0F7E-4C1A-AAD3-32C7F7D1F05A@yale.edu> <47678149-7874-4CF7-84B0-F2E8373BBED7@yale.edu> <10116D7E-68BC-414E-A031-E1CBF19AE65F@yale.edu> <48B72D5D.7040209@slac.stanford.edu> Message-ID: <48B7A90B.7070900@slac.stanford.edu> thanks, Rob, I confirm that importing explicitly solves this issue. Johann Rob Clewley wrote: >> -> 1785 raise NotImplementedError('Deprecated - see numpy.linalg.norm') >> > > Hmm, that's a new one. Well, I guess I'll need to start making sure > that the namespace is cleaned up when PyDSTool does its imports (these > have been a mixture from numpy, scipy and matplotlib). For now it > would appear that you'd get the script to work by importing norm from > numpy.linalg explicitly in this script.
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From bryan.cole at teraview.com Fri Aug 29 06:28:23 2008 From: bryan.cole at teraview.com (bryan cole) Date: Fri, 29 Aug 2008 10:28:23 +0000 (UTC) Subject: [SciPy-user] advice on stochastic(?) optimisation References: <5b8d13220808281446x51502978n3b374a6f4af162e@mail.gmail.com> Message-ID: Firstly, thanks everyone for the responses. I think there are enough pointers here to get me started. > > Do you have a formula for the function f? If what you have is only > noisy observations of f(x), without knowing f, that's basically what > stochastic approximation is about: there is a big literature about > this kind of problem. In fact, this is an instrumentation optimisation: each function evaluation is actually an experimental measurement. > The first article is the one introducing > the Robbins-Monro algorithm: > > Robbins, H. and Monro, S. "A Stochastic Approximation Method." Ann. > Math. Stat. 22, 400-407, 1951. > > A recent book covering the field is the Kushner and Yin book: > http://www.springer.com/math/probability/book/978-0-387-00894-3?cm_mmc=Google-_-Book%20Search-_-Springer-_-0 > "Stochastic Approximation" seems to be just what I need. I'm reading up on it now... > The problem of those algorithms is that they are hard to implement in > python, because of their recursive nature, hence non vectorizable. If > your function/observation is hard to compute, it may not be a big > problem, though. The expense of my "function" evaluations is so great (the max sample rate is ~15 measurements per second) that the python overhead will be negligible. However, I hope I can exploit the fact that my function is quite slowly varying. It's something like a distorted 2D Gaussian. It can be assumed there's only one maximum within the region bounds. The main problem is that the measurements are noisy, so attempts to estimate the function gradient are very error-prone. This seems like it should be a common problem in experimental science / instrumentation, so there ought to be lots of info on this subject. I just didn't know what heading to search under. cheers, Bryan From pav at iki.fi Fri Aug 29 08:23:33 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 29 Aug 2008 12:23:33 +0000 (UTC) Subject: [SciPy-user] possible to get INF in divide by zero ? References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> <200808280849.58330.vincefn@users.sourceforge.net> <48B65A34.4010804@ru.nl> Message-ID: Thu, 28 Aug 2008 09:56:36 +0200, Stef Mientki wrote: > another observation: > > >>> a=numpy.array([2,3]) > >>> a/0 > array([0, 0]) Division in Numpy preserves the data types: in this case you have an integer array divided by an integer, so the output is also an integer. But Inf is not one of the integers, so something must be substituted; and it is zero. There probably was some reason for this choice... If you change your seterr configuration from divide='ignore' you should get a warning, see http://mentat.za.net/numpy/refguide/routines.math.xhtml#numpy.divide -- Pauli Virtanen From stef.mientki at gmail.com Fri Aug 29 13:36:13 2008 From: stef.mientki at gmail.com (Stef Mientki) Date: Fri, 29 Aug 2008 19:36:13 +0200 Subject: [SciPy-user] possible to get INF in divide by zero ?
In-Reply-To: References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> <200808280849.58330.vincefn@users.sourceforge.net> <48B65A34.4010804@ru.nl> Message-ID: <48B8338D.8080106@gmail.com> thank you all for the explanations. I'm a practical engineer who almost never divides by zero. But the question came from my son, familiar with python and a little with scipy. After his first lesson in MatLab, he said: Oh, what a beautiful environment MatLab is; it's even capable of dividing by zero without errors. So for the moment probably MatLab is better suited for educational purposes. I'll ask him again to compare scipy and Matlab, after a few weeks ;-) cheers, Stef Pauli Virtanen wrote: > Thu, 28 Aug 2008 09:56:36 +0200, Stef Mientki wrote: > >> another observation: >> >> >>> a=numpy.array([2,3]) >> >>> a/0 >> array([0, 0]) >> > > Division in Numpy preserves the data types: in this case you have an > integer array divided by an integer, so the output is also an integer. > > But Inf is not one of the integers, so something must be substituted; and it > is zero. There probably was some reason for this choice... > > If you change your seterr configuration from divide='ignore' you should > get a warning, see > http://mentat.za.net/numpy/refguide/routines.math.xhtml#numpy.divide > > From rmay31 at gmail.com Fri Aug 29 14:09:18 2008 From: rmay31 at gmail.com (Ryan May) Date: Fri, 29 Aug 2008 13:09:18 -0500 Subject: [SciPy-user] possible to get INF in divide by zero ? In-Reply-To: <48B8338D.8080106@gmail.com> References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> <200808280849.58330.vincefn@users.sourceforge.net> <48B65A34.4010804@ru.nl> <48B8338D.8080106@gmail.com> Message-ID: <48B83B4E.40105@gmail.com> Stef Mientki wrote: > thank you all for the explanations. > > I'm a practical engineer who almost never divides by zero. > But the question came from my son, > familiar with python and a little with scipy. > After his first lesson in MatLab, he said: > Oh, what a beautiful environment MatLab is; it's even capable of dividing > by zero without errors. > So for the moment probably MatLab is better suited for educational purposes. NO!!! > I'll ask him again to compare scipy and Matlab, after a few weeks ;-) The fundamental difference here is that Matlab operates using double precision numbers for *everything* by default. I personally consider that a design flaw. Numpy is being smart and letting you use integers if that's what you give it. So this isn't some limitation of numpy, it's just a slight difference of "opinion" between the packages. Numpy is capable of the same behavior with either of two minor tweaks to the example: >>>a = np.array([2., 3.]) #Note the decimal points, this makes floats >>>a/0 array([ Inf, Inf]) -or- >>>a = np.array([2, 3], dtype=np.float32) #Explicitly ask for floats >>>a/0 array([ Inf, Inf], dtype=float32) Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From pav at iki.fi Fri Aug 29 15:29:11 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 29 Aug 2008 19:29:11 +0000 (UTC) Subject: [SciPy-user] possible to get INF in divide by zero ?
References: <529E0C005F46104BA9DB3CB93F39797501B5F5C1@TOKYO.intra.cea.fr> <200808280849.58330.vincefn@users.sourceforge.net> <48B65A34.4010804@ru.nl> Message-ID: Fri, 29 Aug 2008 12:23:33 +0000, Pauli Virtanen wrote: > Thu, 28 Aug 2008 09:56:36 +0200, Stef Mientki wrote: >> another observation: >> >> >>> a=numpy.array([2,3]) >> >>> a/0 >> array([0, 0]) > > Division in Numpy preserves the data types: in this case you have an > integer array divided by an integer, so the output is also an integer. > > But Inf is not one of integers so something must be substituted; and it > is zero. There probably was some reason for this choice... Matlab substitutes MAXINT here: >> x = zeros([2,2],'int32') >> 1./x ans = 2147483647 2147483647 2147483647 2147483647 I don't know the rationale behind the choice 0 in Numpy... -- Pauli Virtanen From martin.enlund at gmail.com Sat Aug 30 10:19:47 2008 From: martin.enlund at gmail.com (Martin Enlund) Date: Sat, 30 Aug 2008 16:19:47 +0200 Subject: [SciPy-user] scikits.timeseries problem (error with e.g. cumprod) Message-ID: <887c5c2c0808300719m2f59addfo6e21b6d16b44faa2@mail.gmail.com> I am using scikits.timeseries and I found the toolkit very useful so far. However, I've run into a problem with functions such as cumprod, which just doesn't work (and I've thus created replacement functions using numpy functions) Even so, I'd like it to be solved seeing as I don't really like my ugly hacks. The error I get when trying to call cumprod is "TypeError: __call__() got an unexpected keyword argument 'dtype'" To replicate this error, try the following: ##### import numpy import scipy import scikits.timeseries as ts data = numpy.random.uniform(-1,1,90) new_series = ts.time_series(data, start_date=ts.now('D')-90) cp_series = new_series.cumprod() Btw, I am running this version of scikits.timeseries: '0.67.0.dev-r1228' From pgmdevlist at gmail.com Sat Aug 30 10:41:03 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Sat, 30 Aug 2008 10:41:03 -0400 Subject: [SciPy-user] scikits.timeseries problem (error with e.g. cumprod) In-Reply-To: <887c5c2c0808300719m2f59addfo6e21b6d16b44faa2@mail.gmail.com> References: <887c5c2c0808300719m2f59addfo6e21b6d16b44faa2@mail.gmail.com> Message-ID: <200808301041.03522.pgmdevlist@gmail.com> On Saturday 30 August 2008 10:19:47 Martin Enlund wrote: > I am using scikits.timeseries and I found the toolkit very useful so far. Thanks a lot ! > However, I've run into a problem with functions such as cumprod, which > just doesn't work (and I've thus created replacement functions using > numpy functions) OK, I see. Basically, there's been a change in numpy.ma a few weeks back whose side-effects are now felt on timeseries. I'm on it, I'll let you know how it goes. Could you send a list of functions you have a problem with ? From martin.enlund at gmail.com Sat Aug 30 10:54:44 2008 From: martin.enlund at gmail.com (Martin Enlund) Date: Sat, 30 Aug 2008 16:54:44 +0200 Subject: [SciPy-user] scikits.timeseries problem (error with e.g. cumprod) In-Reply-To: <200808301041.03522.pgmdevlist@gmail.com> References: <887c5c2c0808300719m2f59addfo6e21b6d16b44faa2@mail.gmail.com> <200808301041.03522.pgmdevlist@gmail.com> Message-ID: <887c5c2c0808300754q73248b4dud60765e5193d3e5b@mail.gmail.com> Thanks for the rapid response. I have only had problems with cumsum and cumprod so far. 
All other _tsarraymethods seem to work fine 2008/8/30 Pierre GM : > On Saturday 30 August 2008 10:19:47 Martin Enlund wrote: >> I am using scikits.timeseries and I found the toolkit very useful so far. > > Thanks a lot ! > >> However, I've run into a problem with functions such as cumprod, which >> just doesn't work (and I've thus created replacement functions using >> numpy functions) > > OK, I see. Basically, there's been a change in numpy.ma a few weeks back whose > side-effects are now felt on timeseries. I'm on it, I'll let you know how it > goes. Could you send a list of functions you have a problem with ? > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From pgmdevlist at gmail.com Sat Aug 30 11:58:01 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Sat, 30 Aug 2008 11:58:01 -0400 Subject: [SciPy-user] scikits.timeseries problem (error with e.g. cumprod) In-Reply-To: <887c5c2c0808300754q73248b4dud60765e5193d3e5b@mail.gmail.com> References: <887c5c2c0808300719m2f59addfo6e21b6d16b44faa2@mail.gmail.com> <200808301041.03522.pgmdevlist@gmail.com> <887c5c2c0808300754q73248b4dud60765e5193d3e5b@mail.gmail.com> Message-ID: <200808301158.01792.pgmdevlist@gmail.com> Martin, That should be fixed in SVN1245. Let me know if you run into other problems. Cheers P. From contact at pythonxy.com Sat Aug 30 14:32:06 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 30 Aug 2008 20:32:06 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 2.0.4 Message-ID: <48B99226.50206@pythonxy.com> Hi all, As you may already know, Python(x,y) is a free scientific-oriented Python Distribution based on Qt and Eclipse providing a self-consistent scientific development environment. Release 2.0.4 is now available on http://www.pythonxy.com. (Full Edition, Basic Edition, and Update) The new Light Edition (i.e. Basic Edition without Eclipse: ~65MB) will be soon available on http://code.google.com/p/pythonxy Changes history Version 2.0.4 (08-30-2008) * Added: o PyQwt 5.1.0 - 2D plotting library (set of Python bindings for the Qwt library featuring fast plotting) o biopython 1.47 - Tools for computational molecular biology * Updated: o Pyrex 0.9.8.5 (Some minor bug fixes and improvements) o xy 1.0.4 (Minor bug fixes) Regards, Pierre Raybaut From ed at edmccaffrey.net Sat Aug 30 18:23:30 2008 From: ed at edmccaffrey.net (Ed McCaffrey) Date: Sat, 30 Aug 2008 18:23:30 -0400 Subject: [SciPy-user] Create a spectrogram from a waveform Message-ID: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> Hello, I wrote a program in C# that creates a spectrogram from the waveform of a .wav music file. I now want to port it to Python, and I want to try to use SciPy instead of a direct port of the existing code, because I am not sure that it is perfectly accurate, and it is probably slow. I am having a hard time finding out how to do this with SciPy. With my code, I had a FFT function that took an array of real and imaginary components for each sample, and a second function taking both that produced the amplitude. The FFT function in SciPy just takes one array. Has anyone done this task in SciPy? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthieu.brucher at gmail.com Sat Aug 30 18:45:53 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 31 Aug 2008 00:45:53 +0200 Subject: [SciPy-user] Create a spectrogram from a waveform In-Reply-To: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> Message-ID: Hi, You can start by checking the spectrogram function in matplotlib that uses numpy. Matthieu 2008/8/31 Ed McCaffrey : > Hello, > > I wrote a program in C# that creates a spectrogram from the waveform of a > .wav music file. I now want to port it to Python, and I want to try to use > SciPy instead of a direct port of the existing code, because I am not sure > that it is perfectly accurate, and it is probably slow. > > I am having a hard time finding out how to do this with SciPy. With my > code, I had a FFT function that took an array of real and imaginary > components for each sample, and a second function taking both that produced > the amplitude. The FFT function in SciPy just takes one array. > > Has anyone done this task in SciPy? > > > Thanks. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From pwang at enthought.com Sat Aug 30 21:28:12 2008 From: pwang at enthought.com (Peter Wang) Date: Sat, 30 Aug 2008 20:28:12 -0500 Subject: [SciPy-user] Create a spectrogram from a waveform In-Reply-To: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> Message-ID: <20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com> Quoting Ed McCaffrey : > I wrote a program in C# that creates a spectrogram from the waveform of a > .wav music file. I now want to port it to Python, and I want to try to use > SciPy instead of a direct port of the existing code, because I am not sure > that it is perfectly accurate, and it is probably slow. > > I am having a hard time finding out how to do this with SciPy. With my > code, I had a FFT function that took an array of real and imaginary > components for each sample, and a second function taking both that produced > the amplitude. The FFT function in SciPy just takes one array. > > Has anyone done this task in SciPy? We have a realtime spectrogram plot in the Audio Spectrum example for Chaco. (See the very last screenshot on the gallery page here: http://code.enthought.com/projects/chaco/gallery.php) You can see the full source code of the example here: https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/advanced/spectrum.py The lines you would be interested in are the last few: def get_audio_data(): pa = PyAudio() stream = pa.open(format=paInt16, channels=1, rate=SAMPLING_RATE, input=True, frames_per_buffer=NUM_SAMPLES) string_audio_data = stream.read(NUM_SAMPLES) audio_data = fromstring(string_audio_data, dtype=short) normalized_data = audio_data / 32768.0 return (abs(fft(normalized_data))[:NUM_SAMPLES/2], normalized_data) Here we are using the PyAudio library to directly read from the sound card, normalize the 16-bit data, and perform an FFT on it. 
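To build a full spectrogram rather than a single spectrum, the same FFT is simply repeated over successive (possibly overlapping) windows of the signal. A rough sketch, assuming `data` is a 1-D numpy array of normalized samples (the function name and parameters are illustrative):

import numpy as np

def spectrogram(data, window_size=512, overlap=256):
    # magnitude spectrum of each windowed slice of the signal
    step = window_size - overlap
    frames = []
    for start in range(0, len(data) - window_size, step):
        frame = data[start:start + window_size]
        frames.append(np.abs(np.fft.fft(frame))[:window_size // 2])
    return np.array(frames)  # shape: (n_frames, window_size / 2)

Each row is then one time slice of the spectrogram; matplotlib's specgram does essentially this (plus a window function and log scaling) for you.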
In your case, since you are reading a WAV file, you might be interested in the zoomed_plot example:
http://code.enthought.com/projects/chaco/pu-zooming-plot.html

This displays the time-domain signal but can easily be modified to show the FFT. Here is the relevant code, which uses the built-in Python 'wave' module to read the data:
https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/zoomed_plot/wav_to_numeric.py

You should be able to take the 'data' array in the wav_to_numeric function and hand it to the fft function.

-Peter

From ed at edmccaffrey.net  Sun Aug 31 09:49:18 2008
From: ed at edmccaffrey.net (Ed McCaffrey)
Date: Sun, 31 Aug 2008 09:49:18 -0400
Subject: [SciPy-user] Create a spectrogram from a waveform
In-Reply-To: <20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com>
References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> <20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com>
Message-ID: <86f16dc10808310649v127c77acha3d13c5d03773a44@mail.gmail.com>

Thanks for the replies. I think I am now heading in the right direction, but I have one problem: when I run my program, all I get for the spectrogram is a solid blue graph.

The program is:

from scipy import *
from pylab import *
from wave import *
import struct

wav = open('song.wav')      # wave.open, via the star import
length = wav.getnframes()

data = [struct.unpack('f', wav.readframes(1))[0] for x in range(length)]

spectrogram = specgram(data)
title('Spectrogram')

show()

I tried it with a few different short clips, with the same result. One of them can be found at http://edmccaffrey.net/misc/song.wav

Thanks.

On Sat, Aug 30, 2008 at 9:28 PM, Peter Wang wrote:
> We have a realtime spectrogram plot in the Audio Spectrum example for
> Chaco. [...]
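(One way to see what is going wrong above is to ask the wave module what a frame actually contains before unpacking it. A short diagnostic sketch, assuming the same 'song.wav' as in the message:

import wave

wav = wave.open('song.wav')
print('channels:    ', wav.getnchannels())   # 2 for a stereo music file
print('sample width:', wav.getsampwidth())   # bytes per sample; 2 means int16
print('frame rate:  ', wav.getframerate())
wav.close()

For a 16-bit stereo file, one frame is 4 bytes holding two signed shorts, so struct.unpack('f', frame) reinterprets that pair as a single IEEE float, which would explain both the garbage values and the occasional NaN.)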
From ed at edmccaffrey.net  Sun Aug 31 14:58:23 2008
From: ed at edmccaffrey.net (Ed McCaffrey)
Date: Sun, 31 Aug 2008 14:58:23 -0400
Subject: [SciPy-user] Create a spectrogram from a waveform
In-Reply-To: <86f16dc10808310649v127c77acha3d13c5d03773a44@mail.gmail.com>
References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> <20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com> <86f16dc10808310649v127c77acha3d13c5d03773a44@mail.gmail.com>
Message-ID: <86f16dc10808311158h68c550edh1c11a55d43641378@mail.gmail.com>

I've found what is creating the solid blue graph: that code produces a few NaNs in the list, and if I remove them I get actual output.

However, I think something is wrong if I am getting NaNs at all, and the spectrogram still doesn't look right. Here is the updated code:

from scipy import *
from pylab import *
from wave import *
import struct

wav = open('song.wav')
length = wav.getnframes()

tmp = [struct.unpack('f', wav.readframes(1))[0] for x in range(length)]
data = [x for x in tmp if not isnan(x)]

spectrogram = specgram(data)
title('Spectrogram')

show()

On Sun, Aug 31, 2008 at 9:49 AM, Ed McCaffrey wrote:
>> Thanks for the replies. I think I am now heading in the right direction,
>> but I have one problem: when I run my program, all I get for the
>> spectrogram is a solid blue graph. [...]
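(The NaNs are consistent with the frames being 16-bit PCM rather than 32-bit floats. A sketch of decoding them as what they most likely are; this assumes a 16-bit file, so check getsampwidth() first:

import wave
import numpy as np

wav = wave.open('song.wav')
frames = wav.readframes(wav.getnframes())
data = np.frombuffer(frames, dtype='<i2')           # little-endian signed 16-bit
data = data.reshape(-1, wav.getnchannels())[:, 0]   # keep only the first channel
data = data / 32768.0                               # normalize to [-1.0, 1.0)
wav.close()

Decoded this way there is nothing to filter out, so the sample spacing stays uniform.)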
From matthieu.brucher at gmail.com  Sun Aug 31 15:19:30 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 31 Aug 2008 21:19:30 +0200
Subject: [SciPy-user] Create a spectrogram from a waveform
In-Reply-To: <86f16dc10808311158h68c550edh1c11a55d43641378@mail.gmail.com>
References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> <20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com> <86f16dc10808310649v127c77acha3d13c5d03773a44@mail.gmail.com> <86f16dc10808311158h68c550edh1c11a55d43641378@mail.gmail.com>
Message-ID:

If you delete samples without replacing them with something, you're in trouble: you cannot take an FFT or IFFT of that kind of data, because it is no longer uniformly sampled!

Try:

data = numpy.array(data)   (why don't you use audiolab for this? or
data = numpy.fromfile('song.wav').reshape(-1, 2)[:, 0])
data[numpy.isnan(data)] = 0

instead. It just replaces each NaN with 0, which is not the best course of action.

Matthieu

2008/8/31 Ed McCaffrey :
> I've found what is creating the solid blue graph: that code produces a few
> NaNs in the list, and if I remove them I get actual output. [...]

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
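(If NaNs really did have to be patched while keeping the sampling grid uniform, linear interpolation across the gaps is one gentler alternative to zeroing. A sketch on made-up data:

import numpy as np

data = np.array([0.1, np.nan, 0.3, 0.4, np.nan, 0.6])
bad = np.isnan(data)
# Fill each NaN from its valid neighbours instead of forcing it to zero.
data[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), data[~bad])

Zeroing injects a sharp discontinuity that smears energy across the spectrum; interpolation keeps the waveform continuous, though the real fix is to decode the samples correctly in the first place.)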
From ed at edmccaffrey.net  Sun Aug 31 16:52:40 2008
From: ed at edmccaffrey.net (Ed McCaffrey)
Date: Sun, 31 Aug 2008 16:52:40 -0400
Subject: [SciPy-user] Create a spectrogram from a waveform
In-Reply-To:
References: <86f16dc10808301523x37dfe309y29d496564c6cb305@mail.gmail.com> <20080830202812.bpnp4vvz0g4g4o80@mail.enthought.com> <86f16dc10808310649v127c77acha3d13c5d03773a44@mail.gmail.com> <86f16dc10808311158h68c550edh1c11a55d43641378@mail.gmail.com>
Message-ID: <86f16dc10808311352k32bebbf7qd67b77b936b6973b@mail.gmail.com>

Thanks for the reply. I had not heard of audiolab before, but I just tried using it. Looking at audiolab made me realize that I had forgotten how a .wav file stores the data for multiple channels, which is why the spectrogram I generated before looked so odd.

Here's the program, in case anyone else wants it or in case I made another mistake:

from scipy import *
from pylab import *
from numpy import *
import scikits.audiolab as audiolab

wav = audiolab.sndfile('song.wav', 'read')
data = wav.read_frames(wav.get_nframes())
data = data[:, 0]          # interleaved channels: keep just the first

spectrogram = specgram(data)
title('Spectrogram')

show()

On Sun, Aug 31, 2008 at 3:19 PM, Matthieu Brucher <matthieu.brucher at gmail.com> wrote:
> If you delete samples without replacing them with something, you're in
> trouble: you cannot take an FFT or IFFT of that kind of data [...]
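(As a follow-up, passing the sample rate and window sizes to specgram makes the axes read in Hz and seconds rather than bins. A sketch along the lines of the program above; the NFFT/noverlap values are illustrative, and get_samplerate() is assumed to be the audiolab accessor for the file's rate:

from pylab import specgram, title, show
import scikits.audiolab as audiolab

wav = audiolab.sndfile('song.wav', 'read')
data = wav.read_frames(wav.get_nframes())[:, 0]     # first channel only

# With Fs set, the y-axis is in Hz and the x-axis in seconds.
specgram(data, NFFT=1024, Fs=wav.get_samplerate(), noverlap=512)
title('Spectrogram')
show()

Larger NFFT trades time resolution for frequency resolution; noverlap around half the window is a common default.)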