From rob.clewley at gmail.com Sun Jun 1 14:26:17 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Sun, 1 Jun 2008 14:26:17 -0400
Subject: [SciPy-user] ANN: PyDSTool 0.86 released
Message-ID:

The latest update to the open-source python dynamical systems modeling
toolbox, PyDSTool 0.86, has been posted on Sourceforge. Major highlights
are:

* Now compatible with Python 2.5 and Numpy 1.0.4 / Scipy 0.6.0
* Decreased overhead for simulating hybrid models
* Improved efficiency of VODE Generator in computing trajectories
* Interval class now supports discrete valued intervals
* Improved diagnostic reporting structure in Generator and Model classes
* Inclusion of intuitive arithmetic operations for Point and Pointset classes
* Various bug fixes and other API tidying

This is a minor update in preparation for a substantial upgrade at
version 0.90, which will move symbolic expression support over to SymPy,
support much more sophisticated data-driven model inference, and greatly
improve the implementation of C-based ODE integrators.

You can download the latest version from
http://www.sourceforge.net/projects/pydstool/

For installation and setting up, as well as some tutorial information, see
http://pydstool.sourceforge.net

The download contains full API documentation, BSD license information,
and further details of recent code changes. As ever, all feedback is
welcome as we try to find time to improve our code base.

From phhs80 at gmail.com Sun Jun 1 17:15:00 2008
From: phhs80 at gmail.com (Paul Smith)
Date: Sun, 1 Jun 2008 14:15:00 -0700 (PDT)
Subject: [SciPy-user] OpenOpt errors with ALGENCAN
Message-ID:

Dear All,

I am trying to run the nlp_1.py example of OpenOpt on F9 with the
ALGENCAN solver, but am getting these errors:

========================
starting solver ALGENCAN (license: GPL) with problem unnamed
Traceback (most recent call last):
  File "nlp_1.py", line 109, in
    r = p.solve('ALGENCAN')
  File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", line 230, in solve
    return runProbSolver(self, solvers, *args, **kwargs)
  File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", line 225, in runProbSolver
    solver(p)
  File "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/BrasilOpt/ALGENCAN_oo.py", line 586, in __solver__
    algencan.solvers(evalf,evalg,evalh,evalc,evaljac,evalhc,evalhlp,inip,endp)
TypeError: solvers() takes exactly 12 arguments (9 given)

Any ideas?

Thanks in advance,

Paul

From phhs80 at gmail.com Sun Jun 1 17:25:43 2008
From: phhs80 at gmail.com (Paul Smith)
Date: Sun, 1 Jun 2008 14:25:43 -0700 (PDT)
Subject: [SciPy-user] OpenOpt errors with ALGENCAN
In-Reply-To:
References:
Message-ID: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com>

On Jun 1, 10:15 pm, Paul Smith wrote:
> I am trying to run the nlp_1.py example of OpenOpt on F9 with the
> ALGENCAN solver, but am getting these errors:
>
> ========================
> starting solver ALGENCAN (license: GPL) with problem unnamed
> Traceback (most recent call last):
>   File "nlp_1.py", line 109, in
>     r = p.solve('ALGENCAN')
>   File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/BaseProblem.py", line 230, in solve
>     return runProbSolver(self, solvers, *args, **kwargs)
>   File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/runProbSolver.py", line 225, in runProbSolver
>     solver(p)
File "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/ > BrasilOpt/ALGENCAN_oo.py", line 586, in __solver__ > > algencan.solvers(evalf,evalg,evalh,evalc,evaljac,evalhc,evalhlp,inip,endp) > TypeError: solvers() takes exactly 12 arguments (9 given) > > Any ideas? Let me add that I have $ echo $PYTHONPATH /home/psmith/algencan-2.0-beta/bin/py $ and therein: $ dir algencan.py algencan.pyc pywrapper.so $ Paul From dmitrey.kroshko at scipy.org Mon Jun 2 02:51:48 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 02 Jun 2008 09:51:48 +0300 Subject: [SciPy-user] OpenOpt errors with ALGENCAN In-Reply-To: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com> References: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com> Message-ID: <48439884.3000606@scipy.org> OpenOpt has no connection to ALGENCAN 2.0 beta yet, as it is mentioned in http://scipy.org/scipy/scikits/wiki/OpenOptTODO AFAIK 2.0 beta has no Python API. So currently you should use 1.0 instead. If this one is already unavailable I could attach tarball with 1.0 to an openopt webpage. D. Paul Smith wrote: > On Jun 1, 10:15 pm, Paul Smith wrote: > >> I am trying to run the nlp_1.py example of OpenOpt on F9 with ALGENCAN >> solver, but getting these errors: >> >> ======================== >> starting solver ALGENCAN (license: GPL) with problem unnamed >> Traceback (most recent call last): >> File "nlp_1.py", line 109, in >> r = p.solve('ALGENCAN') >> File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/ >> BaseProblem.py", line 230, in solve >> return runProbSolver(self, solvers, *args, **kwargs) >> File "/usr/lib/python2.5/site-packages/scikits/openopt/Kernel/ >> runProbSolver.py", line 225, in runProbSolver >> solver(p) >> File "/usr/lib/python2.5/site-packages/scikits/openopt/solvers/ >> BrasilOpt/ALGENCAN_oo.py", line 586, in __solver__ >> >> algencan.solvers(evalf,evalg,evalh,evalc,evaljac,evalhc,evalhlp,inip,endp) >> TypeError: solvers() takes exactly 12 arguments (9 given) >> >> Any ideas? >> > > Let me add that I have > > $ echo $PYTHONPATH > /home/psmith/algencan-2.0-beta/bin/py > $ > > and therein: > > $ dir > algencan.py algencan.pyc pywrapper.so > $ > > Paul > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From phhs80 at gmail.com Mon Jun 2 04:47:04 2008 From: phhs80 at gmail.com (Paul Smith) Date: Mon, 2 Jun 2008 09:47:04 +0100 Subject: [SciPy-user] OpenOpt errors with ALGENCAN In-Reply-To: <48439884.3000606@scipy.org> References: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com> <48439884.3000606@scipy.org> Message-ID: <6ade6f6c0806020147g26144a49i87811cbd5ecaa973@mail.gmail.com> On Mon, Jun 2, 2008 at 7:51 AM, dmitrey wrote: > OpenOpt has no connection to ALGENCAN 2.0 beta yet, as it is mentioned > in http://scipy.org/scipy/scikits/wiki/OpenOptTODO > > AFAIK 2.0 beta has no Python API. So currently you should use 1.0 > instead. If this one is already unavailable I could attach tarball with > 1.0 to an openopt webpage. Thanks, Dmitrey. I think it is no longer exact what you claim: there is now a Python interface for Algencan 2.0 beta (update: May 20th, 2008). Could you please confirm this? 
Paul

From dmitrey.kroshko at scipy.org Mon Jun 2 05:54:57 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 02 Jun 2008 12:54:57 +0300
Subject: [SciPy-user] OpenOpt errors with ALGENCAN
In-Reply-To: <6ade6f6c0806020147g26144a49i87811cbd5ecaa973@mail.gmail.com>
References: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com> <48439884.3000606@scipy.org> <6ade6f6c0806020147g26144a49i87811cbd5ecaa973@mail.gmail.com>
Message-ID: <4843C371.9050008@scipy.org>

I have read the README file from algencan2.0beta.tgz. The Python binding
refers to

$(ALGENCAN)/sources/interfaces/py/runalgencan.py and
$(ALGENCAN)/sources/interfaces/py/toyprob.py

but these files are absent (even after all the required build steps have
been done); the only .py file present is algencan.py.

I have informed the ALGENCAN developers of the issue. Without an
up-to-date toyprob.py I cannot connect ALGENCAN 2.0 beta, because I don't
know the new 12 arguments that replace the old-style 9.

Regards,
D.

Paul Smith wrote:
> On Mon, Jun 2, 2008 at 7:51 AM, dmitrey wrote:
>> OpenOpt has no connection to ALGENCAN 2.0 beta yet, as mentioned at
>> http://scipy.org/scipy/scikits/wiki/OpenOptTODO
>>
>> AFAIK 2.0 beta has no Python API, so currently you should use 1.0
>> instead. If that one is no longer available, I could attach a tarball of
>> 1.0 to the OpenOpt web page.
>
> Thanks, Dmitrey. I think what you say is no longer accurate: there is
> now a Python interface for Algencan 2.0 beta (updated May 20th, 2008).
> Could you please confirm this?
>
> Paul

From phhs80 at gmail.com Mon Jun 2 06:06:01 2008
From: phhs80 at gmail.com (Paul Smith)
Date: Mon, 2 Jun 2008 11:06:01 +0100
Subject: [SciPy-user] OpenOpt errors with ALGENCAN
In-Reply-To: <4843C371.9050008@scipy.org>
References: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com> <48439884.3000606@scipy.org> <6ade6f6c0806020147g26144a49i87811cbd5ecaa973@mail.gmail.com> <4843C371.9050008@scipy.org>
Message-ID: <6ade6f6c0806020306r7d718a6cn2773efbf4ab57a89@mail.gmail.com>

On Mon, Jun 2, 2008 at 10:54 AM, dmitrey wrote:
> I have read the README file from algencan2.0beta.tgz. The Python binding
> refers to
>
> $(ALGENCAN)/sources/interfaces/py/runalgencan.py and
> $(ALGENCAN)/sources/interfaces/py/toyprob.py
>
> but these files are absent (even after all the required build steps have
> been done); the only .py file present is algencan.py.
>
> I have informed the ALGENCAN developers of the issue. Without an
> up-to-date toyprob.py I cannot connect ALGENCAN 2.0 beta, because I don't
> know the new 12 arguments that replace the old-style 9.

Thanks again, Dmitrey. I had also noticed yesterday the problem that you
describe: neither of the two files (runalgencan.py and toyprob.py) is
created. I sent an e-mail to one of the developers telling him about the
issue, but I have not received any reply so far.
Paul

From phhs80 at gmail.com Mon Jun 2 10:33:29 2008
From: phhs80 at gmail.com (Paul Smith)
Date: Mon, 2 Jun 2008 15:33:29 +0100
Subject: [SciPy-user] OpenOpt errors with ALGENCAN
In-Reply-To: <6ade6f6c0806020306r7d718a6cn2773efbf4ab57a89@mail.gmail.com>
References: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com> <48439884.3000606@scipy.org> <6ade6f6c0806020147g26144a49i87811cbd5ecaa973@mail.gmail.com> <4843C371.9050008@scipy.org> <6ade6f6c0806020306r7d718a6cn2773efbf4ab57a89@mail.gmail.com>
Message-ID: <6ade6f6c0806020733w3d3b5244x5e7869b5919ba95c@mail.gmail.com>

On Mon, Jun 2, 2008 at 11:06 AM, Paul Smith wrote:
>> I have read the README file from algencan2.0beta.tgz. The Python binding
>> refers to
>>
>> $(ALGENCAN)/sources/interfaces/py/runalgencan.py and
>> $(ALGENCAN)/sources/interfaces/py/toyprob.py
>>
>> but these files are absent (even after all the required build steps have
>> been done); the only .py file present is algencan.py.
>>
>> I have informed the ALGENCAN developers of the issue. Without an
>> up-to-date toyprob.py I cannot connect ALGENCAN 2.0 beta, because I don't
>> know the new 12 arguments that replace the old-style 9.
>
> Thanks again, Dmitrey. I had also noticed yesterday the problem that you
> describe: neither of the two files (runalgencan.py and toyprob.py) is
> created. I sent an e-mail to one of the developers telling him about the
> issue, but I have not received any reply so far.

Ernest Birgin has just written to me telling me that a new version of
Algencan, with the missing files, is now available. The problem is solved
now.

Paul

From silva at lma.cnrs-mrs.fr Mon Jun 2 10:35:17 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Mon, 02 Jun 2008 16:35:17 +0200
Subject: [SciPy-user] OdrPack and complex model
Message-ID: <1212417317.3242.12.camel@Portable-s2m.cnrs-mrs.fr>

Hi,
I would like to use odr to fit a complex model to reference values, since
causality issues occur when trying to adjust the modulus and argument
separately. Is this possible with odr?

Fabricio

From timmichelsen at gmx-topmail.de Mon Jun 2 13:35:54 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Mon, 2 Jun 2008 17:35:54 +0000 (UTC)
Subject: [SciPy-user] building and installing timeseries fails on Windows
Message-ID:

Hello,
following the (updated) instructions at:
http://www.scipy.org/Cookbook/CompilingExtensionsOnWindowsWithMinGW
I tried to build a windows installer package and install the timeseries
scikit on a Windows XP computer with a recent MinGW. It fails with the
following message:

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
C:\temp\SVNcheckouts\timeseries>python setup.py bdist_wininst
running bdist_wininst
running build
running scons
Traceback (most recent call last):
  File "setup.py", line 42, in
    setup_package()
  File "setup.py", line 36, in setup_package
    configuration = configuration,
  File "C:\python\Lib\site-packages\numpy\distutils\core.py", line 184, in setup
    return old_setup(**new_attr)
  File "C:\python\lib\distutils\core.py", line 151, in setup
    dist.run_commands()
  File "C:\python\lib\distutils\dist.py", line 974, in run_commands
    self.run_command(cmd)
  File "C:\python\lib\distutils\dist.py", line 994, in run_command
    cmd_obj.run()
  File "C:\python\lib\site-packages\setuptools\command\bdist_wininst.py", line 37, in run
    _bdist_wininst.run(self)
  File "C:\python\lib\distutils\command\bdist_wininst.py", line 107, in run
    self.run_command('build')
  File "C:\python\lib\distutils\cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "C:\python\lib\distutils\dist.py", line 994, in run_command
    cmd_obj.run()
  File "C:\python\Lib\site-packages\numpy\distutils\command\build.py", line 38, in run
    self.run_command('scons')
  File "C:\python\lib\distutils\cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "C:\python\lib\distutils\dist.py", line 993, in run_command
    cmd_obj.ensure_finalized()
  File "C:\python\lib\distutils\cmd.py", line 117, in ensure_finalized
    self.finalize_options()
  File "C:\python\Lib\site-packages\numpy\distutils\command\scons.py", line 258, in finalize_options
    force=self.force)
  File "C:\python\Lib\site-packages\numpy\distutils\ccompiler.py", line 366, in new_compiler
    compiler = klass(None, dry_run, force)
  File "C:\python\Lib\site-packages\numpy\distutils\mingw32ccompiler.py", line 46, in __init__
    verbose,dry_run, force)
  File "C:\python\lib\distutils\cygwinccompiler.py", line 84, in __init__
    get_versions()
  File "C:\python\lib\distutils\cygwinccompiler.py", line 424, in get_versions
    ld_version = StrictVersion(result.group(1))
  File ".\distutils\version.py", line 40, in __init__
  File ".\distutils\version.py", line 107, in parse
ValueError: invalid version number '2.18.50.20080109'

What can I do now?

Kind regards,
Timmie
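The ValueError above arises in distutils' cygwinccompiler.get_versions(),
which feeds the binutils version string into StrictVersion, and
StrictVersion rejects four-component versions such as '2.18.50.20080109'.
A workaround commonly suggested at the time (a hedged sketch, not a fix
posted in this thread, and it patches the standard library in place) was
to make that line use the more permissive LooseVersion in
C:\python\lib\distutils\cygwinccompiler.py:

# sketch of the usual local patch to distutils/cygwinccompiler.py,
# inside get_versions(); LooseVersion accepts '2.18.50.20080109'
from distutils.version import LooseVersion

ld_version = LooseVersion(result.group(1))   # was: StrictVersion(result.group(1))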
From nwagner at iam.uni-stuttgart.de Mon Jun 2 13:42:55 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 02 Jun 2008 19:42:55 +0200
Subject: [SciPy-user] OpenOpt errors with ALGENCAN
In-Reply-To: <6ade6f6c0806020733w3d3b5244x5e7869b5919ba95c@mail.gmail.com>
References: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com> <48439884.3000606@scipy.org> <6ade6f6c0806020147g26144a49i87811cbd5ecaa973@mail.gmail.com> <4843C371.9050008@scipy.org> <6ade6f6c0806020306r7d718a6cn2773efbf4ab57a89@mail.gmail.com> <6ade6f6c0806020733w3d3b5244x5e7869b5919ba95c@mail.gmail.com>
Message-ID:

On Mon, 2 Jun 2008 15:33:29 +0100 "Paul Smith" wrote:
> On Mon, Jun 2, 2008 at 11:06 AM, Paul Smith wrote:
>>> I have read the README file from algencan2.0beta.tgz. The Python
>>> binding refers to
>>>
>>> $(ALGENCAN)/sources/interfaces/py/runalgencan.py and
>>> $(ALGENCAN)/sources/interfaces/py/toyprob.py
>>>
>>> but these files are absent (even after all the required build steps
>>> have been done); the only .py file present is algencan.py.
>>>
>>> I have informed the ALGENCAN developers of the issue. Without an
>>> up-to-date toyprob.py I cannot connect ALGENCAN 2.0 beta, because I
>>> don't know the new 12 arguments that replace the old-style 9.
>>
>> Thanks again, Dmitrey. I had also noticed yesterday the problem that
>> you describe: neither of the two files (runalgencan.py and toyprob.py)
>> is created. I sent an e-mail to one of the developers telling him
>> about the issue, but I have not received any reply so far.
>
> Ernest Birgin has just written to me telling me that a new version of
> Algencan, with the missing files, is now available. The problem is
> solved now.
>
> Paul

Hi,

I have installed the new beta release. I am using g77 and python2.4.

./runalgencan.py
Traceback (most recent call last):
  File "./runalgencan.py", line 30, in ?
    import algencan
  File "/home/nwagner/algencan.py", line 25, in ?
    from pywrapper import solver
ImportError: /home/nwagner/pywrapper.so: cannot map zero-fill pages: Cannot allocate memory

Any idea?

Nils

From phhs80 at gmail.com Mon Jun 2 13:46:49 2008
From: phhs80 at gmail.com (Paul Smith)
Date: Mon, 2 Jun 2008 18:46:49 +0100
Subject: [SciPy-user] OpenOpt errors with ALGENCAN
In-Reply-To:
References: <25912a3c-07e6-47b7-ad62-c02288fce0e2@59g2000hsb.googlegroups.com> <48439884.3000606@scipy.org> <6ade6f6c0806020147g26144a49i87811cbd5ecaa973@mail.gmail.com> <4843C371.9050008@scipy.org> <6ade6f6c0806020306r7d718a6cn2773efbf4ab57a89@mail.gmail.com> <6ade6f6c0806020733w3d3b5244x5e7869b5919ba95c@mail.gmail.com>
Message-ID: <6ade6f6c0806021046g7d0a7653i7802706930bc8f11@mail.gmail.com>

On Mon, Jun 2, 2008 at 6:42 PM, Nils Wagner wrote:
> Hi,
>
> I have installed the new beta release. I am using g77 and python2.4.
>
> ./runalgencan.py
> Traceback (most recent call last):
>   File "./runalgencan.py", line 30, in ?
>     import algencan
>   File "/home/nwagner/algencan.py", line 25, in ?
>     from pywrapper import solver
> ImportError: /home/nwagner/pywrapper.so: cannot map zero-fill pages:
> Cannot allocate memory
>
> Any idea?

I guess the Algencan interface expects Python 2.5, not Python 2.4. It
works fine here with Python 2.5.

Paul

From contact at pythonxy.com Mon Jun 2 14:26:05 2008
From: contact at pythonxy.com (Pierre Raybaut)
Date: Mon, 02 Jun 2008 20:26:05 +0200
Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.4
Message-ID: <48443B3D.1090206@pythonxy.com>

Hi all,

Python(x,y) 1.2.4 is now available on http://www.pythonxy.com.
Hopefully, updates should now be released at a lower rate.

Changes history
06-02-2008 - Version 1.2.4:

* Updated:
  o Matplotlib 0.98 - Important change for Python(x,y) users: better
    integration in Qt GUIs (see release notes)
  o IPython 0.8.4 (see release notes)
* Corrected:
  o Windows installer bug fix: installation for all users did not work
    properly

Regards,
Pierre Raybaut

From gav451 at gmail.com Mon Jun 2 14:52:29 2008
From: gav451 at gmail.com (Gerard Vermeulen)
Date: Mon, 2 Jun 2008 20:52:29 +0200
Subject: [SciPy-user] OdrPack and complex model
In-Reply-To: <1212417317.3242.12.camel@Portable-s2m.cnrs-mrs.fr>
References: <1212417317.3242.12.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <20080602205229.0561d20e@jupiter.rozan.fr>

On Mon, 02 Jun 2008 16:35:17 +0200 Fabrice Silva wrote:
> Hi,
> I would like to use odr to fit a complex model to reference values,
> since causality issues occur when trying to adjust the modulus and
> argument separately. Is this possible with odr?

I have adjusted real and imaginary values at the same time.
Pseudo-code looks like:

import numpy as np

def f(parameters, omega, data=0, sigma=1.0):
    # len(data) = 2*len(omega)
    # data[:len(omega)] contains the "real" experimental data
    # data[len(omega):] contains the "imaginary" experimental data
    zs = np.zeros(len(omega), np.complex_)
    # calculate zs here, using parameters and omega
    xsys = np.zeros(2*len(omega), np.float64)
    xsys[:len(omega)] = zs.real    # real part in the first half
    xsys[len(omega):] = zs.imag    # imaginary part in the second half
    return (xsys-data)/sigma
# f()

Regards
-- Gerard

PS: if data can be complex, then f() can return (zs-data)/sigma. I have
forgotten whether I have tried that.
PPS: the same idea works with leastsq()

From calogero.colletto at gmx.de Tue Jun 3 14:49:43 2008
From: calogero.colletto at gmx.de (Calogero Colletto)
Date: Tue, 3 Jun 2008 18:49:43 +0000 (UTC)
Subject: [SciPy-user] Simpson Bug?
Message-ID:

Hi,

recently I tested the simpson rule from the scipy package and I found an
inconsistency in the result. The rule is:

S[n] = h/3 * ( f(a) + f(b) + 4 * Sum( f(a+i*h), for all odd i, n ) + 2 * Sum( f(a+i*h), for all even i, n ) )

integration from a to b
n: number of intervals, must be even
h = (b-a)/n ... width of an interval

My script (a simple implementation of the Simpson rule):
______________________________________________

from math import pi, sin, cos
from numpy import arange
from scipy import integrate

def simps(y_seq, x_seq):
    h = x_seq[1] - x_seq[0]
    val = y_seq[0] + y_seq[-1]
    for i, y in enumerate(y_seq):
        if (i % 2) == 1:
            val += 4 * y
        elif (i % 2) == 0:
            val += 2 * y
        print "loop ", i, h*val/3.
    return h*val/3.

if __name__ == '__main__':
    s = 11   # number of sample points: s = n+1
             # the more sample points, the more accurate the result
    f = lambda x: sin(x)
    x = arange(0., pi/2., pi/(2.*s))
    y = tuple([f(i) for i in x])
    result = simps(y, x)
    print "My own implementation: ", result
    print "Scipy implementation:  ", integrate.simps(y, x)
_______________________________________________________

I tested the script with the sin function, integrating from 0 to pi/2.
The analytical result is 1. The script gives the following results:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
loop  0 0.0471153904573
loop  1 0.0742120723007
loop  2 0.101032948993
loop  3 0.180127782511
loop  4 0.231596667976
loop  5 0.356281860151
loop  6 0.428229051385
loop  7 0.588403349479
loop  8 0.675000112936
loop  9 0.85768714791
loop  10 0.951917928825
My own implementation:  0.951917928825
Scipy implementation:   0.85768714791
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Maybe it's my fault, but I think the scipy implementation is losing the
last increment.

Calo

From berthe.loic at gmail.com Tue Jun 3 15:05:34 2008
From: berthe.loic at gmail.com (LB)
Date: Tue, 3 Jun 2008 12:05:34 -0700 (PDT)
Subject: [SciPy-user] Scipy.test() fails when building scipy-0.6 against numpy 1.1
Message-ID: <61097727-a8d4-42d0-b772-81a341c5d7a5@d77g2000hsb.googlegroups.com>

hi,

I'm trying to build scipy 0.6 with numpy 1.1 on a linux (32 bits) and I'm
seeing errors when I run the test suite:

#################### ERROR MESSAGES #####################################
.....................................................................................................
====================================================================== ERROR: check_string_add_speed (test_scxx_sequence.test_list) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_sequence.py", line 405, in check_string_add_speed inline_tools.inline(code,['a','b']) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 339, in inline **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ distutils/core.py", line 184, in setup return old_setup(**new_attr) File "/home/loic/tmp/bluelagoon/lib/python2.5/distutils/core.py", line 168, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -pthread -fno-strict-aliasing - DNDEBUG -g -O3 -Wall -fPIC -I/home/loic/tmp/bluelagoon/lib/python2.5/ site-packages/scipy/weave -I/home/loic/tmp/bluelagoon/lib/python2.5/ site-packages/scipy/weave/scxx -I/home/loic/tmp/bluelagoon/lib/ python2.5/site-packages/numpy/core/include -I/home/loic/tmp/bluelagoon/ include/python2.5 -c /home/loic/.python25_compiled/ sc_30bcdfe77c16ba01711b3f9b4f2d38df0.cpp -o /tmp/loic/ python25_intermediate/compiler_e0195f93cbce8aefbd6efe084f3c364b/home/ loic/.python25_compiled/sc_30bcdfe77c16ba01711b3f9b4f2d38df0.o" failed with exit status 1 ====================================================================== ERROR: check_false (test_scxx_object.test_object_is_true) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 656, in check_false res = inline_tools.inline('return_val = a.mcall("not");',['a']) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 342, in inline results = attempt_function_call(code,local_dict,global_dict) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 354, in attempt_function_call results = apply(func,(local_dict,global_dict)) AttributeError: foo instance has no attribute 'not' ====================================================================== ERROR: check_true (test_scxx_object.test_object_is_true) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 660, in check_true res = inline_tools.inline('return_val = a.mcall("not");',['a']) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 342, in inline results = attempt_function_call(code,local_dict,global_dict) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 354, in attempt_function_call results = apply(func,(local_dict,global_dict)) AttributeError: 'NoneType' object has no attribute 'not' 
====================================================================== ERROR: check_set_from_member (test_scxx_object.test_object_set_item_op_key) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 820, in check_set_from_member inline_tools.inline('a["first"] = a["second"];',['a']) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 339, in inline **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ distutils/core.py", line 184, in setup return old_setup(**new_attr) File "/home/loic/tmp/bluelagoon/lib/python2.5/distutils/core.py", line 168, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -pthread -fno-strict-aliasing - DNDEBUG -g -O3 -Wall -fPIC -I/home/loic/tmp/bluelagoon/lib/python2.5/ site-packages/scipy/weave -I/home/loic/tmp/bluelagoon/lib/python2.5/ site-packages/scipy/weave/scxx -I/home/loic/tmp/bluelagoon/lib/ python2.5/site-packages/numpy/core/include -I/home/loic/tmp/bluelagoon/ include/python2.5 -c /home/loic/.python25_compiled/ sc_a35665b241c24d6c9d3fd21e957285660.cpp -o /tmp/loic/ python25_intermediate/compiler_e0195f93cbce8aefbd6efe084f3c364b/home/ loic/.python25_compiled/sc_a35665b241c24d6c9d3fd21e957285660.o" failed with exit status 1 ====================================================================== ERROR: check_string_add_speed (scipy.weave.tests.test_scxx_sequence.test_list) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_sequence.py", line 405, in check_string_add_speed inline_tools.inline(code,['a','b']) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 339, in inline **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ distutils/core.py", line 184, in setup return old_setup(**new_attr) File "/home/loic/tmp/bluelagoon/lib/python2.5/distutils/core.py", line 168, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -pthread -fno-strict-aliasing - DNDEBUG -g -O3 -Wall -fPIC -I/home/loic/tmp/bluelagoon/lib/python2.5/ site-packages/scipy/weave -I/home/loic/tmp/bluelagoon/lib/python2.5/ site-packages/scipy/weave/scxx -I/home/loic/tmp/bluelagoon/lib/ python2.5/site-packages/numpy/core/include -I/home/loic/tmp/bluelagoon/ include/python2.5 -c /home/loic/.python25_compiled/ 
sc_30bcdfe77c16ba01711b3f9b4f2d38df1.cpp -o /tmp/loic/ python25_intermediate/compiler_e0195f93cbce8aefbd6efe084f3c364b/home/ loic/.python25_compiled/sc_30bcdfe77c16ba01711b3f9b4f2d38df1.o" failed with exit status 1 ====================================================================== ERROR: check_false (scipy.weave.tests.test_scxx_object.test_object_is_true) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 656, in check_false res = inline_tools.inline('return_val = a.mcall("not");',['a']) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 325, in inline results = attempt_function_call(code,local_dict,global_dict) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 354, in attempt_function_call results = apply(func,(local_dict,global_dict)) AttributeError: foo instance has no attribute 'not' ====================================================================== ERROR: check_true (scipy.weave.tests.test_scxx_object.test_object_is_true) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 660, in check_true res = inline_tools.inline('return_val = a.mcall("not");',['a']) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 325, in inline results = attempt_function_call(code,local_dict,global_dict) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 354, in attempt_function_call results = apply(func,(local_dict,global_dict)) AttributeError: 'NoneType' object has no attribute 'not' ====================================================================== ERROR: check_set_from_member (scipy.weave.tests.test_scxx_object.test_object_set_item_op_key) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 820, in check_set_from_member inline_tools.inline('a["first"] = a["second"];',['a']) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 339, in inline **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/ext_tools.py", line 365, in compile verbose = verbose, **kw) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ distutils/core.py", line 184, in setup return old_setup(**new_attr) File "/home/loic/tmp/bluelagoon/lib/python2.5/distutils/core.py", line 168, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -pthread -fno-strict-aliasing - DNDEBUG -g -O3 -Wall -fPIC -I/home/loic/tmp/bluelagoon/lib/python2.5/ site-packages/scipy/weave -I/home/loic/tmp/bluelagoon/lib/python2.5/ site-packages/scipy/weave/scxx -I/home/loic/tmp/bluelagoon/lib/ python2.5/site-packages/numpy/core/include -I/home/loic/tmp/bluelagoon/ include/python2.5 -c 
/home/loic/.python25_compiled/ sc_a35665b241c24d6c9d3fd21e957285661.cpp -o /tmp/loic/ python25_intermediate/compiler_e0195f93cbce8aefbd6efe084f3c364b/home/ loic/.python25_compiled/sc_a35665b241c24d6c9d3fd21e957285661.o" failed with exit status 1 ====================================================================== FAIL: check_syevr (scipy.lib.lapack.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ lib/lapack/tests/esv_tests.py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ testing/utils.py", line 255, in assert_array_almost_equal header='Arrays are not almost equal') File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ testing/utils.py", line 240, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222713], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange (scipy.lib.lapack.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ testing/utils.py", line 255, in assert_array_almost_equal header='Arrays are not almost equal') File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ testing/utils.py", line 240, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222713], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_noargs (test_scxx_object.test_object_call) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 485, in check_noargs assert_equal(sys.getrefcount(res),2) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 3 DESIRED: 2 ====================================================================== FAIL: check_set_complex (test_scxx_object.test_object_set_item_op_key) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 789, in check_set_complex assert_equal(sys.getrefcount(key),3) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 4 DESIRED: 3 ====================================================================== FAIL: check_noargs (scipy.weave.tests.test_scxx_object.test_object_call) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 485, in check_noargs assert_equal(sys.getrefcount(res),2) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 3 DESIRED: 2 ====================================================================== FAIL: check_set_complex (scipy.weave.tests.test_scxx_object.test_object_set_item_op_key) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/scipy/ weave/tests/test_scxx_object.py", line 789, in check_set_complex assert_equal(sys.getrefcount(key),3) File "/home/loic/tmp/bluelagoon/lib/python2.5/site-packages/numpy/ testing/utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 4 DESIRED: 3 ---------------------------------------------------------------------- Ran 2248 tests in 335.652s FAILED (failures=6, errors=8) ################################################################################### I didn't have theses errors with numpy 0.6. I didn't have any problem when compiling, installing or testing numpy 1.1. Besides, I see a lot of lines like """ /home/loic/.python25_compiled/sc_2b6af25d0035555e6a7579be09c9693c0.cpp: 34: attention : deprecated conversion from string constant to a char* """ With theses lines, my installation log is 2.6Mo big. Is this related with scipy's ticket #490 ? Have you got any clue to fix this ? Thanks, -- LB From peridot.faceted at gmail.com Tue Jun 3 18:54:51 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 3 Jun 2008 18:54:51 -0400 Subject: [SciPy-user] Simpson Bug? In-Reply-To: References: Message-ID: 2008/6/3 Calogero Colletto : > Hi, > > recently I tested the simpson-rule from the scipy-package and I figured out an > inconsistency in the result. Following rule: > > S[n] = h/3 * ( f(a) + f(b) + 4 * Sum( f(a+i*h), for all odd i, n) + 2 * Sum( > f(a+i*h), for all even i, n) ) This is not quite the right formula. The problem is (if there are an odd number of points) that it counts the first and last points three times, rather than once. If you start with -(f(a)+f(b)) it works out all right. As a cross-check, try using some functions whose integral is known to be exact when computed with Simpson's rule: constants, cubic polynomials. You will see that, for example, your code returns 1.1333 rather than 1 as the integral of 1 from 0 to 1. > x = arange(0., pi/2., pi/(2.*s)) This use of arange is highly error-prone. Depending on round-off error, you could get either s or s+1 points: In [10]: len(arange(0,pi/2,pi/(2*61))) Out[10]: 62 When you care how many points you get, use linspace: x = linspace(0,pi/2,s) It's clearer, and it's guaranteed to give you the right number of points. arange is appropriate when you are working with integers, or when what you care about is the size of the step. 
From lopmart at gmail.com Tue Jun 3 19:45:24 2008
From: lopmart at gmail.com (Jose Lopez)
Date: Tue, 3 Jun 2008 16:45:24 -0700
Subject: [SciPy-user] problem with optimize.leastsq
Message-ID: <4eeef9d40806031645t20a180ffi9c6b13c71d971866@mail.gmail.com>

Hi,

I have a problem with "optimize.leastsq"; it gives the following error:

MemoryError:
  File "c:\python25\lib....\minpack.py", line 268, in leastsq
    retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag)

If I run the same code with a few parameters, it works, but with more
parameters I get the error above. My OS is Windows.

Has anyone run into the same error?

Thanks
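A minimal, self-contained leastsq call (the kind of failing test case
Johann asks for a few messages further on) looks roughly like the sketch
below; the model and data here are invented. One thing worth knowing when
hunting a MemoryError: the underlying MINPACK lmdif routine keeps an
m-by-n working Jacobian, so memory grows with the number of residuals
times the number of parameters.

import numpy as np
from scipy import optimize

def residuals(p, x, y):
    # hypothetical model: y = p[0]*exp(-p[1]*x) + p[2]
    return y - (p[0] * np.exp(-p[1] * x) + p[2])

x = np.linspace(0., 4., 50)
y = 2.5 * np.exp(-1.3 * x) + 0.5 + 0.05 * np.random.randn(len(x))

p0 = [1., 1., 0.]                     # initial guess for the 3 parameters
p, ier = optimize.leastsq(residuals, p0, args=(x, y))
print p, ier                          # fitted parameters and MINPACK status flag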
From glen.shennan at gmail.com Tue Jun 3 22:23:13 2008
From: glen.shennan at gmail.com (Glen Shennan)
Date: Wed, 4 Jun 2008 12:23:13 +1000
Subject: [SciPy-user] LabVIEW and LabPython
Message-ID:

Hi,

I am trying to use a Python program in LabVIEW using LabPython. My program
works fine outside of LabVIEW but not when compiled and run in LabVIEW via
LabPython. I am using Scipy and Numpy functions in the program but I think
they are causing the problems. Does anyone know if it is possible to use
Scipy with LabPython?

Thanks,
Glen

From robert.kern at gmail.com Tue Jun 3 22:31:43 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 3 Jun 2008 21:31:43 -0500
Subject: [SciPy-user] LabVIEW and LabPython
In-Reply-To:
References:
Message-ID: <3d375d730806031931p6603f4bbw237c2a699f6e21a6@mail.gmail.com>

On Tue, Jun 3, 2008 at 9:23 PM, Glen Shennan wrote:
> Hi,
>
> I am trying to use a Python program in LabVIEW using LabPython. My program
> works fine outside of LabVIEW but not when compiled and run in LabVIEW via
> LabPython. I am using Scipy and Numpy functions in the program but I think
> they are causing the problems. Does anyone know if it is possible to use
> Scipy with LabPython?

Probably, but you will at least need to install numpy and scipy for the
Python executable that LabPython uses. Since there does not appear to be
any documentation, I doubt any of us here can help you beyond that.
Hopefully, the LabPython author will respond to your email on the
LabPython list.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From glen.shennan at gmail.com Tue Jun 3 22:37:32 2008
From: glen.shennan at gmail.com (Glen Shennan)
Date: Wed, 4 Jun 2008 12:37:32 +1000
Subject: [SciPy-user] LabVIEW and LabPython
In-Reply-To: <3d375d730806031931p6603f4bbw237c2a699f6e21a6@mail.gmail.com>
References: <3d375d730806031931p6603f4bbw237c2a699f6e21a6@mail.gmail.com>
Message-ID:

"Probably, but you will at least need to install numpy and scipy for the
Python executable that LabPython uses."

Thanks! Any idea how I would do that? As far as I know I have one Python
executable installed, and numpy and scipy are installed for that.

2008/6/4 Robert Kern:
> On Tue, Jun 3, 2008 at 9:23 PM, Glen Shennan wrote:
>> Hi,
>>
>> I am trying to use a Python program in LabVIEW using LabPython. My program
>> works fine outside of LabVIEW but not when compiled and run in LabVIEW via
>> LabPython. I am using Scipy and Numpy functions in the program but I think
>> they are causing the problems. Does anyone know if it is possible to use
>> Scipy with LabPython?
>
> Probably, but you will at least need to install numpy and scipy for
> the Python executable that LabPython uses. Since there does not appear
> to be any documentation, I doubt any of us here can help you beyond
> that. Hopefully, the LabPython author will respond to your email on
> the LabPython list.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com Tue Jun 3 22:40:59 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 3 Jun 2008 21:40:59 -0500
Subject: [SciPy-user] LabVIEW and LabPython
In-Reply-To:
References: <3d375d730806031931p6603f4bbw237c2a699f6e21a6@mail.gmail.com>
Message-ID: <3d375d730806031940i762dd493t85bdf97276c8acaa@mail.gmail.com>

On Tue, Jun 3, 2008 at 9:37 PM, Glen Shennan wrote:
> "Probably, but you will at least need to install numpy and scipy for
> the Python executable that LabPython uses."
>
> Thanks! Any idea how I would do that? As far as I know I have one Python
> executable installed, and numpy and scipy are installed for that.

I'm afraid not. Like I said, I can't find any documentation on how
LabPython is set up.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From cohen at slac.stanford.edu Wed Jun 4 04:21:49 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Wed, 04 Jun 2008 10:21:49 +0200
Subject: [SciPy-user] problem with optimize.leastsq
In-Reply-To: <4eeef9d40806031645t20a180ffi9c6b13c71d971866@mail.gmail.com>
References: <4eeef9d40806031645t20a180ffi9c6b13c71d971866@mail.gmail.com>
Message-ID: <4846509D.2030606@slac.stanford.edu>

Hi Jose,
can you please provide a test example that fails? There is no way one can
do anything to help you with the information you are providing below.
best,
JCT

Jose Lopez wrote:
> Hi,
>
> I have a problem with "optimize.leastsq"; it gives the following error:
>
> MemoryError:
>   File "c:\python25\lib....\minpack.py", line 268, in leastsq
>     retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag)
>
> If I run the same code with a few parameters, it works, but with more
> parameters I get the error above. My OS is Windows.
>
> Has anyone run into the same error?
>
> Thanks

From lbolla at gmail.com Wed Jun 4 10:47:15 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Wed, 4 Jun 2008 16:47:15 +0200
Subject: [SciPy-user] interpn
Message-ID: <80c99e790806040747j236ec655xac82a720b0acc593@mail.gmail.com>

Hello all!

Is there, in scipy or numpy, a function similar to Matlab's interpn?
From Matlab's help on interpn:

"VI = interpn(X1,X2,X3,...,V,Y1,Y2,Y3,...)
interpolates to find VI, the values of the underlying multidimensional
function V at the points in the arrays Y1, Y2, Y3, etc. For an
n-dimensional array V, interpn is called with 2*N+1 arguments. Arrays X1,
X2, X3, etc. specify the points at which the data V is given. Out of
range values are returned as NaNs. ..."

Basically, I've got a function from R^n to R. I know its value in a set
of points in R^n. I want to interpolate them so as to be able to evaluate
the function in any R^n point.

Thank you in advance!
L.

-- 
"Whereof one cannot speak, thereof one must be silent." -- Ludwig Wittgenstein

From oliphant at enthought.com Wed Jun 4 11:58:39 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Wed, 04 Jun 2008 10:58:39 -0500
Subject: [SciPy-user] interpn
In-Reply-To: <80c99e790806040747j236ec655xac82a720b0acc593@mail.gmail.com>
References: <80c99e790806040747j236ec655xac82a720b0acc593@mail.gmail.com>
Message-ID: <4846BBAF.9080003@enthought.com>

lorenzo bolla wrote:
> Hello all!
>
> Is there, in scipy or numpy, a function similar to Matlab's interpn?
> From Matlab's help on interpn:
> "VI = interpn(X1,X2,X3,...,V,Y1,Y2,Y3,...) interpolates to find VI,
> the values of the underlying multidimensional function V at the points
> in the arrays Y1, Y2, Y3, etc. For an n-dimensional array V, interpn
> is called with 2*N+1 arguments. Arrays X1, X2, X3, etc. specify the
> points at which the data V is given. Out of range values are returned
> as NaNs. ..."
>
> Basically, I've got a function from R^n to R. I know its value in a
> set of points in R^n. I want to interpolate them so as to be able to
> evaluate the function in any R^n point.

scipy.ndimage has something called map_coordinates which (I think) can
do what you want, but its interface is a bit more complicated than
interpn.

-Travis
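A small sketch of the map_coordinates idea for a function sampled on a
regular 2-d grid (the grid and the query points below are invented):
map_coordinates works in index space, so the main step is converting
physical coordinates into fractional array indices.

import numpy as np
from scipy import ndimage

# sample V = f(x, y) = sin(x)*cos(y) on a regular grid
x = np.linspace(0., np.pi, 50)
y = np.linspace(0., np.pi, 60)
V = np.sin(x)[:, np.newaxis] * np.cos(y)[np.newaxis, :]

# points where interpolated values are wanted (physical coordinates)
xi = np.array([0.1, 1.0, 2.5])
yi = np.array([0.2, 1.5, 3.0])

# convert physical coordinates to fractional grid indices
ix = (xi - x[0]) / (x[1] - x[0])
iy = (yi - y[0]) / (y[1] - y[0])

vi = ndimage.map_coordinates(V, np.array([ix, iy]))   # cubic spline by default
print vi
print np.sin(xi) * np.cos(yi)                         # compare with exact values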
From orest.kozyar at gmail.com Wed Jun 4 12:01:58 2008
From: orest.kozyar at gmail.com (Orest Kozyar)
Date: Wed, 4 Jun 2008 09:01:58 -0700 (PDT)
Subject: [SciPy-user] Debugging scipy.weave?
Message-ID: <931e7a5c-e26a-47f9-ae52-96297b4e4b18@m36g2000hse.googlegroups.com>

I have written some C++ code that is integrated into my python scripts
using scipy.weave. In general, it runs pretty smoothly when I run it over
a small portion of my dataset. When I run my script over the entire
dataset, it crashes the Python interpreter. I suspect this is a memory
issue, but cannot figure out what is going on because it just refuses to
crash when running under a debugger. When I run python under the GNU
debugger (gdb), the script runs just fine. Any advice or pointers on how
to debug this issue?

From berthe.loic at gmail.com Wed Jun 4 12:12:34 2008
From: berthe.loic at gmail.com (LB)
Date: Wed, 4 Jun 2008 09:12:34 -0700 (PDT)
Subject: [SciPy-user] How to build scipy 0.6 with numpy 1.1 ?
In-Reply-To: <61097727-a8d4-42d0-b772-81a341c5d7a5@d77g2000hsb.googlegroups.com>
References: <61097727-a8d4-42d0-b772-81a341c5d7a5@d77g2000hsb.googlegroups.com>
Message-ID: <07b8c0c5-0bd9-48b0-9b24-e73c05f5122e@j22g2000hsf.googlegroups.com>

I have tested this on another computer, which has a 64-bit processor, and
I got the same problem. So is there a way to have a clean installation of
both numpy 1.1 and scipy 0.6 on linux, or do we have to wait for the next
release of scipy?

Regards,
-- LB

From zunzun at zunzun.com Wed Jun 4 13:00:32 2008
From: zunzun at zunzun.com (James Phillips)
Date: Wed, 4 Jun 2008 12:00:32 -0500
Subject: [SciPy-user] Debugging scipy.weave?
In-Reply-To: <931e7a5c-e26a-47f9-ae52-96297b4e4b18@m36g2000hse.googlegroups.com>
References: <931e7a5c-e26a-47f9-ae52-96297b4e4b18@m36g2000hse.googlegroups.com>
Message-ID: <268756d30806041000j1e1f4c90i64a5a882c1403b9a@mail.gmail.com>

In your C++, use good old-fashioned:

#include <iostream>
std::cout << "name of something: " << value_of_something << std::endl;

Now *that* brings back memories of youth... P.U.

James Phillips
http://zunzun.com

On Wed, Jun 4, 2008 at 11:01 AM, Orest Kozyar wrote:
> I have written some C++ code that is integrated into my python scripts
> using scipy.weave. In general, it runs pretty smoothly when I run it
> over a small portion of my dataset. When I run my script over the
> entire dataset, it crashes the Python interpreter. I suspect this is
> a memory issue, but cannot figure out what is going on because it just
> refuses to crash when running under a debugger. When I run python
> under the GNU debugger (gdb), the script runs just fine. Any advice
> or pointers on how to debug this issue?

From orest.kozyar at gmail.com Wed Jun 4 15:03:38 2008
From: orest.kozyar at gmail.com (Orest Kozyar)
Date: Wed, 4 Jun 2008 15:03:38 -0400
Subject: [SciPy-user] Debugging scipy.weave?
In-Reply-To: <268756d30806041000j1e1f4c90i64a5a882c1403b9a@mail.gmail.com>
References: <931e7a5c-e26a-47f9-ae52-96297b4e4b18@m36g2000hse.googlegroups.com> <268756d30806041000j1e1f4c90i64a5a882c1403b9a@mail.gmail.com>
Message-ID:

Perfect. Old-fashioned "print" debugging helped me figure out the
problem. Thanks!

On Wed, Jun 4, 2008 at 1:00 PM, James Phillips wrote:
> In your C++, use good old-fashioned:
>
> #include <iostream>
> std::cout << "name of something: " << value_of_something << std::endl;
>
> Now *that* brings back memories of youth... P.U.
>
> James Phillips
> http://zunzun.com

From topengineer at gmail.com Wed Jun 4 20:49:34 2008
From: topengineer at gmail.com (HuiChang MOON)
Date: Thu, 5 Jun 2008 09:49:34 +0900
Subject: [SciPy-user] Does Scipy or Numpy have contemporary physical constants?
Message-ID: <296323b50806041749o15424c44ucf43bd4f17350eab@mail.gmail.com>

Hello, users.

A lot of physical constants, such as the elementary charge and Planck's
constant, are used in physical calculations, and it is annoying to define
these constants so often. Do Numpy or Scipy include some physical
constants?

Thank you.
From peridot.faceted at gmail.com Wed Jun 4 21:04:05 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 4 Jun 2008 21:04:05 -0400
Subject: [SciPy-user] Does Scipy or Numpy have contemporary physical constants?
In-Reply-To: <296323b50806041749o15424c44ucf43bd4f17350eab@mail.gmail.com>
References: <296323b50806041749o15424c44ucf43bd4f17350eab@mail.gmail.com>
Message-ID:

2008/6/4 HuiChang MOON:
> Hello, users.
> A lot of physical constants, such as the elementary charge and Planck's
> constant, are used in physical calculations, and it is annoying to define
> these constants so often. Do Numpy or Scipy include some physical
> constants?

No, unfortunately. It has been discussed from time to time, and the
conclusion is that they're not very useful without an automatic
unit-tracking system. Such a system does exist, and it provides standard
constants; it's in Scientific (a slightly dusty package somewhat
complementary to scipy). There is even some support for units built into
ipython, though personally I don't use it.

The biggest limitation of the unit-tracking system is that it doesn't
really work with arrays. There are definitely applications for a subclass
of ndarray that keeps track of units, but I don't know of a
publicly-released one. For pocket-calculator type calculations,
Scientific is pretty handy.

Anne
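For the pocket-calculator use Anne describes, Scientific's
PhysicalQuantity objects behave roughly like the sketch below (the exact
set of predefined unit and constant names should be checked against the
PhysicalQuantities module):

from Scientific.Physics.PhysicalQuantities import PhysicalQuantity

d = PhysicalQuantity(100., 'm')
t = PhysicalQuantity(9.8, 's')
v = d / t                       # units are tracked through the arithmetic
v.convertToUnit('km/h')
print v                         # ~36.7 km/h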
Automatic units tracking has definitely kept me from shooting myself in the foot many times, but I agree there's a temptation to assume that if the units are right the answer must be right too. Anne From millman at berkeley.edu Wed Jun 4 21:56:29 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 4 Jun 2008 18:56:29 -0700 Subject: [SciPy-user] Does Scipy or Numpy have contemporary physical constants? In-Reply-To: <296323b50806041749o15424c44ucf43bd4f17350eab@mail.gmail.com> References: <296323b50806041749o15424c44ucf43bd4f17350eab@mail.gmail.com> Message-ID: On Wed, Jun 4, 2008 at 5:49 PM, HuiChang MOON wrote: > A lot of physical constants such as elementary charge and Planck's constant > is used to make some physical calculations. > It is annoying to define the constants so often. > Does Numpy or Scipy include some physical constants? SciPy trunk has this: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/constants -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From haase at msg.ucsf.edu Thu Jun 5 07:23:43 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Thu, 5 Jun 2008 13:23:43 +0200 Subject: [SciPy-user] easy to fix bug in ndimage.center_of_mass Message-ID: Hi, I found this code in measurements.py def center_of_mass(input, labels = None, index = None): """Calculate the center of mass of of the array. The index parameter is a single label number or a sequence of label numbers of the objects to be measured. If index is None, all values are used where labels is larger than zero. """ input = numpy.asarray(input) if numpy.iscomplexobj(input): raise TypeError, 'Complex type not supported' if labels != None: labels = numpy.asarray(labels) labels = _broadcast(labels, input.shape) if labels.shape != input.shape: raise RuntimeError, 'input and labels shape are not equal' return _nd_image.center_of_mass(input, labels, index) but the if part: if labels.shape != input.shape: is probably indented one level too much !? Regards, Sebastian From stefan at sun.ac.za Thu Jun 5 08:11:40 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 5 Jun 2008 14:11:40 +0200 Subject: [SciPy-user] easy to fix bug in ndimage.center_of_mass In-Reply-To: References: Message-ID: <9457e7c80806050511v33705620s641453c31e15eedf@mail.gmail.com> 2008/6/5 Sebastian Haase : > Hi, > I found this code in measurements.py > def center_of_mass(input, labels = None, index = None): > """Calculate the center of mass of of the array. > > The index parameter is a single label number or a sequence of > label numbers of the objects to be measured. If index is None, all > values are used where labels is larger than zero. > """ > input = numpy.asarray(input) > if numpy.iscomplexobj(input): > raise TypeError, 'Complex type not supported' > if labels != None: > labels = numpy.asarray(labels) > labels = _broadcast(labels, input.shape) > > if labels.shape != input.shape: > raise RuntimeError, 'input and labels shape are not equal' > return _nd_image.center_of_mass(input, labels, index) > > > but the if part: > if labels.shape != input.shape: > > is probably indented one level too much !? That looks right. You don't want to access labels.shape if labels is None. The "!=" is wrong, though. 
It should be "is not" Regards St?fan From matthieu.brucher at gmail.com Thu Jun 5 12:39:01 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 5 Jun 2008 18:39:01 +0200 Subject: [SciPy-user] Using VariantDir Message-ID: Hi, I'm trying to build some files out of place, in a subfolder named "build". The sources are in the root folder. I've tried the obvious : env.VariantDir("build", ".", duplicate=0) but the files are built in the current folder instead of the "build" one. If I build some files in a subfolder with : env.SConscript('folder/SConstruct', variant_dir='build/folder', duplicate=0) the files are built in "build/folder", as expected. But I do not understand why it is not the same as using VariantDir. If someone has a clue, I'm all ears ;) Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis at enthought.com Thu Jun 5 13:59:18 2008 From: travis at enthought.com (Travis Vaught) Date: Thu, 5 Jun 2008 12:59:18 -0500 Subject: [SciPy-user] Does Scipy or Numpy have contemporary physical constants? In-Reply-To: References: <296323b50806041749o15424c44ucf43bd4f17350eab@mail.gmail.com> Message-ID: <42E77762-CFAB-4A91-8A9A-4614451AFEC4@enthought.com> On Jun 4, 2008, at 8:04 PM, Anne Archibald wrote: > 2008/6/4 HuiChang MOON : >> Hello, users. >> A lot of physical constants such as elementary charge and Planck's >> constant >> is used to make some physical calculations. >> It is annoying to define the constants so often. >> Does Numpy or Scipy include some physical constants? > > No, unfortunately. I has been discussed from time to time, and the > conclusion is that they're not very useful without an automatic > unit-tracking system. Such a system does exist, and it provides > standard constants; it's in Scientific (a slightly dusty package > somewhat complementary to scipy). There is even some support for units > built into ipython, though personally I don't use it. > > The biggest limitation of the unit-tracking system is that it doesn't > really work with arrays. There are definitely applications for a > subclass of ndarray that keeps track of units, but I don't know of a > publicly-released one. For pocket-calculator type calculations, > Scientific is pretty handy. > FWIW I'll mention the units package in ETS: https://svn.enthought.com/enthought/browser/SciMath/trunk/enthought/units It's based on some good work that Michael Avaizis at Caltech did. The upper layers are not very mature and could do with a refactor to make them more general (the notion of a unit manager and unit families is a nice one, but the current examples are very specific). You can do things like this: In [1]: # This allows me to use strings and it finds the symbols in the namespace for me In [2]: from enthought.units.unit_parser import unit_parser In [3]: import numpy In [4]: a = numpy.arange(0., 1., 0.01) In [5]: b = unit_parser.parse_unit("m/s") In [6]: c = b * a In [7]: c Out[7]: *m*s**-1 In [8]: d = unit_parser.parse_unit("minutes") * 15. In [9]: d # didn't work (minutes not in namespace) Out[9]: 15.0 In [10]: d = unit_parser.parse_unit("seconds") * 15. * 60. 
# 15 minutes In [11]: d Out[11]: 900.0*s In [12]: d * c Out[12]: *m In [13]: e = d*c In [14]: e.value Out[14]: array([ 0., 9., 18., 27., 36., 45., 54., 63., 72., 81., 90., 99., 108., 117., 126., 135., 144., 153., 162., 171., 180., 189., 198., 207., 216., 225., 234., 243., 252., 261., 270., 279., 288., 297., 306., 315., 324., 333., 342., 351., 360., 369., 378., 387., 396., 405., 414., 423., 432., 441., 450., 459., 468., 477., 486., 495., 504., 513., 522., 531., 540., 549., 558., 567., 576., 585., 594., 603., 612., 621., 630., 639., 648., 657., 666., 675., 684., 693., 702., 711., 720., 729., 738., 747., 756., 765., 774., 783., 792., 801., 810., 819., 828., 837., 846., 855., 864., 873., 882., 891.]) This doesn't work quite right if you reverse the multiplication because of NumPy's aggressive casting, but after talking with Travis O. it seems like I can add an __rmul__ method to fix this. Best, Travis > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From timmichelsen at gmx-topmail.de Thu Jun 5 14:50:48 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Thu, 05 Jun 2008 20:50:48 +0200 Subject: [SciPy-user] [SOLVED] Re: building and installing timeseries fails on Windows In-Reply-To: References: Message-ID: > following the (updated) instructions at: > http://www.scipy.org/Cookbook/CompilingExtensionsOnWindowsWithMinGW > > What can I do now? After upgrading my SVN checkout it all worked well. Thanks again to Pierre and Matt for the very useful package. I may contribute more to the Cookbook some time. Kind regards, Timmie From fperez.net at gmail.com Thu Jun 5 20:21:34 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 5 Jun 2008 17:21:34 -0700 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.4 In-Reply-To: <48443B3D.1090206@pythonxy.com> References: <48443B3D.1090206@pythonxy.com> Message-ID: On Mon, Jun 2, 2008 at 11:26 AM, Pierre Raybaut wrote: > Hi all, > > Python(x,y) 1.2.4 is now available on http://www.pythonxy.com. > Hopefully, updates should now be released at a lower rate. > > Changes history > 06 -02 -2008 - Version 1.2.4 : > > * Updated: > o Matplotlib 0.98 - Important change for Python(x,y) users: > better integration in Qt GUIs (see release notes) > o IPython 0.8.4 (see release notes) Many thanks! This is really great. One feature request, if I might: any chance ofgetting scons in? http://www.scons.org/ The reason is that ctypes is one of the nicest and easiest ways of quickly prototyping access to libraries and extension code, but writing a Makefile to build a shared library portably (osx/win32/linux) isn't completely trivial. Scons knows a fair bit about shared library building, so having scons in would help users who want to test the ctypes/numpy support such as: http://www.scipy.org/Cookbook/Ctypes Best regards, f From fperez.net at gmail.com Thu Jun 5 20:26:30 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 5 Jun 2008 17:26:30 -0700 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.4 In-Reply-To: References: <48443B3D.1090206@pythonxy.com> Message-ID: On Thu, Jun 5, 2008 at 5:21 PM, Fernando Perez wrote: > On Mon, Jun 2, 2008 at 11:26 AM, Pierre Raybaut wrote: >> Hi all, >> >> Python(x,y) 1.2.4 is now available on http://www.pythonxy.com. >> Hopefully, updates should now be released at a lower rate. Ooops, and I forgot: do you ship nose? 
Since it is now required for scipy testing (and will soon be for numpy, as the tests transition), it would be nice to have it too :) http://code.google.com/p/python-nose/ Again, many thanks and sorry for asking without offering much in return. Your project is too nice not to :) Regards, f From bemclaugh at gmail.com Thu Jun 5 21:55:04 2008 From: bemclaugh at gmail.com (BEM) Date: Thu, 5 Jun 2008 18:55:04 -0700 (PDT) Subject: [SciPy-user] scikits.timeseries AttributeError Message-ID: <1f481223-faa2-4675-98c0-099d86956a69@z16g2000prn.googlegroups.com> Hi, I am trying to generate a plot of data sampled at about 1/5Hz... (ts.__version__ = 1.0) import matplotlib.pyplot as plt import scikits.timeseries as ts import scikits.timeseries.lib.plotlib as tpl dates = ts.date_array([q[1] for q in data], freq='S') val = [q[4] for q in data] raw_series = ts.time_series(val, dates) series = ts.fill_missing_dates(raw_series) fig = tpl.tsfigure() fsp = fig.add_tsplot(111) fsp.tsplot(series, '-') I get an: AttributeError: TimeSeries_DateLocator instance has no attribute 'finder' I can, however, run the Yahoo financial example and produce a report of the data I am attempting to plot that looks correct: import scikits.timeseries.lib.reportlib as rl basicReport = rl.Report(series) print basicReport() Any ideas? thanks, Brian From pgmdevlist at gmail.com Thu Jun 5 22:18:08 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 5 Jun 2008 22:18:08 -0400 Subject: [SciPy-user] scikits.timeseries AttributeError In-Reply-To: <1f481223-faa2-4675-98c0-099d86956a69@z16g2000prn.googlegroups.com> References: <1f481223-faa2-4675-98c0-099d86956a69@z16g2000prn.googlegroups.com> Message-ID: <200806052218.08419.pgmdevlist@gmail.com> Brian, What's the SVN version (ts.__version__ is far from informative, sorry, I need to check that) ? What version of matplotlib do you use (not that really matters here) ? Good call anyway. As stated on the doc page, support for series with a frequency higher than daily is shaky at best. In particular, plotting capabilities with timeseries.lib.plotlib are restricted to daily or less. But you're right, we should at least try to issue a warning (or raise an exception). Meanwhile, you can always use the standard matplotlib functions on series.dates.tolist() for abscissae and series.series for ordinates. From mattknox.ca at gmail.com Thu Jun 5 22:34:41 2008 From: mattknox.ca at gmail.com (Matt Knox) Date: Fri, 6 Jun 2008 02:34:41 +0000 (UTC) Subject: [SciPy-user] scikits.timeseries AttributeError References: <1f481223-faa2-4675-98c0-099d86956a69@z16g2000prn.googlegroups.com> Message-ID: >> dates = ts.date_array([q[1] for q in data], freq='S') This is one of the known limitations of the timeseries module right now unfortunately. Plotting for frequencies higher than daily is not currently supported. It actually shouldn't be too much work to implement it, but it just hasn't been a priority for me and I haven't had much free time lately. I hope to be able to get to this sometime this month... but contributions are always welcome if someone beats me to it :) Pierre and I have been discussing doing a first official release of the timeseries module in the near future, but there are a few remaining outstanding issues we want to clear up first. 
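A sketch of the fallback Pierre describes above -- plotting the
secondly-frequency series with plain matplotlib rather than
timeseries.lib.plotlib. It assumes `series` is the filled TimeSeries from
the original post; series.dates.tolist() yields datetime objects that
matplotlib's date handling accepts directly:

    import matplotlib.pyplot as plt

    fig = plt.figure()
    ax = fig.add_subplot(111)
    # abscissae: datetime objects; ordinates: the underlying (masked) data
    ax.plot_date(series.dates.tolist(), series.series, '-')
    plt.show()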
From lists at vrbka.net Fri Jun 6 03:10:54 2008
From: lists at vrbka.net (Lubos Vrbka)
Date: Fri, 06 Jun 2008 09:10:54 +0200
Subject: [SciPy-user] Fourier-sine transform with SciPy
Message-ID: <4848E2FE.9070402@vrbka.net>

hi guys,

i'm a new (and so far very happy) user of the scipy bundle. however, i
have a problem at the moment, and i don't know how to solve it (apart
from somehow writing a slow and unoptimized transform routine myself).

i need to perform a discrete fourier transform of a radially symmetric
function, dependent only on |r| and |k| in real and fourier space,
respectively. after some straightforward math, i arrived at the
expression for the sine transform (continuous transform), namely
F(k) = A \int_0^\infty f(r) sin(kr) dr    (FT)
f(r) = B \int_0^\infty F(k) sin(kr) dk    (iFT)
with A and B being constant coefficients. naturally this has got the
advantage of just one function being needed to perform the transform
(just with a different coefficient every time).

the problem is that i don't know how (and whether it actually is
possible) to perform this task (discrete sine transform) with scipy.
searching google and the lists wasn't of much help, either.

i know that, for example, fftw includes functions for sine and cosine
transforms, but i didn't find any connection between the fftw functions
and scipy.fft that is more recent than several years old.

i would be grateful for any help or hint.

with best regards,

-- 
Lubos
_ at _"
http://www.lubos.vrbka.net

From peridot.faceted at gmail.com Fri Jun 6 04:44:07 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 6 Jun 2008 04:44:07 -0400
Subject: [SciPy-user] Fourier-sine transform with SciPy
In-Reply-To: <4848E2FE.9070402@vrbka.net>
References: <4848E2FE.9070402@vrbka.net>
Message-ID: 

2008/6/6 Lubos Vrbka :
> i need to perform a discrete fourier transform of a radially symmetric
> function, dependent only on |r| and |k| in real and fourier space,
> respectively. after some straightforward math, i arrived at the
> expression for the sine transform (continuous transform), namely
> F(k) = A \int_0^\infty f(r) sin(kr) dr    (FT)
> f(r) = B \int_0^\infty F(k) sin(kr) dk    (iFT)
> with A and B being constant coefficients. naturally this has got the
> advantage of just one function being needed to perform the transform
> (just with a different coefficient every time).
>
> the problem is that i don't know how (and whether it actually is
> possible) to perform this task (discrete sine transform) with scipy.
> searching google and the lists wasn't of much help, either.
>
> i know that, for example, fftw includes functions for sine and cosine
> transforms, but i didn't find any connection between the fftw functions
> and scipy.fft that is more recent than several years old.

I don't think we have a function that computes discrete sine or cosine
transforms, but if f is real, you can get the sine transform you wrote
as the imaginary part of a complex Fourier transform.

Anne

From lists at vrbka.net Fri Jun 6 05:17:44 2008
From: lists at vrbka.net (Lubos Vrbka)
Date: Fri, 06 Jun 2008 11:17:44 +0200
Subject: [SciPy-user] Fourier-sine transform with SciPy
In-Reply-To: 
References: <4848E2FE.9070402@vrbka.net>
Message-ID: <484900B8.6080006@vrbka.net>

hi,

thanks for the quick reply!

> I don't think we have a function that computes discrete sine or cosine
> transforms, but if f is real, you can get the sine transform you wrote
> as the imaginary part of a complex Fourier transform.
by this you mean using scipy.fftpack.rfft()?
i must admit i am a bit lost in the various possibilities that are available. scipy.fftpack.*fft* functions are clear (is this the netlib implementation?) scipy.ifft?? tells me (among others) File: /usr/lib/python2.4/site-packages/numpy/fft/fftpack.py or should i use numpy.fft? what's the difference in fftpack in scipy and numpy? is there any place where these things are documented, i.e. what code does actually lie behind these functions? and just out of curiosity - from what i've seen during my search for the information on google, fftw was (at least in the past) used in scipy. the scipy package on my debian box depends on fftw2, but the names refer to fftpack, that shouldn't be fftw but netlib implementation instead (if i am not wrong). could anybody clarify this for me, please? sorry if this was discussed before, but i wasn't able to find the relevant thread. thanks once more! lubos -- Lubos _ at _" http://www.lubos.vrbka.net From peridot.faceted at gmail.com Fri Jun 6 05:42:29 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 6 Jun 2008 05:42:29 -0400 Subject: [SciPy-user] Fourier-sine transform with SciPy In-Reply-To: <484900B8.6080006@vrbka.net> References: <4848E2FE.9070402@vrbka.net> <484900B8.6080006@vrbka.net> Message-ID: 2008/6/6 Lubos Vrbka : >> I don't think we have a function that computes discrete sine or cosine >> transforms, but if f is real, you can get the sine transform you wrote >> as the imaginary part of a complex Fourier transform. > by this you mean using scipy.fftpack.rfft()? i must admit i am a bit > lost in the various possibilities that are available. Yes, there are rather a lot. A complex fft - numpy.fft.fft - computes: F_m = A \sum_{k=0}^{N-1} f_k (cos(2\pi m k/N)+i*sin(2\pi m k/N)) and an ifft computes: f_k = B \sum_{m=0}^{N-1} F_m (cos(2\pi m k/N)-i*sin(2\pi m k/N)) So if you want \sum_{k=0}^{N-1} f_k sin(2\pi m k/N) all you have to do is take a complex FFT of your (real) data, then take the imaginary part. This is somewhat wasteful, which is why FFTW implements the sine and cosine transforms, but it's only a factor of two or four (i.e. still far better than coding it by hand). To save a little time, you can use a "real" FFT, which computes F_m as above, but (1) takes advantage of the fact that f_k is known to be real and (2) doesn't bother computing F_m for m>N/2 (or so), since when f_k is real, F_m = conjugate(F_{N-m}) IIRC. So you may want to do this and take the imaginary part. Note that the sine transform only works on odd functions, so either your input data should have f_k = f_{N-k} or you are representing only half the values, i.e., N is twice the number of data points you have. This too is somewhere a proper sine transform would help, if we had it. > scipy.fftpack.*fft* functions are clear (is this the netlib implementation?) > > scipy.ifft?? tells me (among others) > File: /usr/lib/python2.4/site-packages/numpy/fft/fftpack.py > > or should i use numpy.fft? what's the difference in fftpack in scipy and > numpy? > > is there any place where these things are documented, i.e. what code > does actually lie behind these functions? > > and just out of curiosity - from what i've seen during my search for the > information on google, fftw was (at least in the past) used in scipy. > the scipy package on my debian box depends on fftw2, but the names refer > to fftpack, that shouldn't be fftw but netlib implementation instead (if > i am not wrong). could anybody clarify this for me, please? 
sorry if > this was discussed before, but i wasn't able to find the relevant thread. There are several reasons this is complicated. numpy's fft is designed to use one of several external libraries depending on how numpy is compiled. FFTPACK is always available, but is not especially fast. FFTW is distributed under the GPL, so numpy cannot depend on it without also being distributed under the GPL. But if it's an optional plug-in, numpy can make use if it where it's available, since it's much faster than FFTPACK (in spite of the fact that we use its API in a not-very-efficient manner). numpy also supports several other libraries, most notably Intel's proprietary and hardware-specific but very fast implementation. At a python level, the user need not know which backend is being used, and FFTPACK is mentioned a few times. Anne From oliphant at enthought.com Fri Jun 6 12:10:23 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 06 Jun 2008 11:10:23 -0500 Subject: [SciPy-user] Fourier-sine transform with SciPy In-Reply-To: <4848E2FE.9070402@vrbka.net> References: <4848E2FE.9070402@vrbka.net> Message-ID: <4849616F.3050009@enthought.com> Lubos Vrbka wrote: > hi guys, > > > the problem is, that i don't know how (and whether it actually is > posible) to perform this task (discrete sine transform) with scipy. > searching the google and lists wasn't of much help, either. > In the scipy sandbox under image you can find a transforms.py file which contains an implementation of the DST (discrete sine transform). http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sandbox/image/transforms.py -Travis From Dharhas.Pothina at twdb.state.tx.us Fri Jun 6 12:32:01 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Fri, 06 Jun 2008 11:32:01 -0500 Subject: [SciPy-user] Adding new columns to 2D array & populating them from another array. Message-ID: <484920310200009B00013308@GWWEB.twdb.state.tx.us> Hi, I have an array 'a' a = array([[2006, 1, 3, 9, 40], [2006, 1, 3, 10, 9], [2006, 1, 3, 10, 40], ..., [2008, 3, 20, 10, 27], [2008, 3, 20, 10, 51], [2008, 3, 20, 12, 2]]) where a.shape = (420, 5) I have other arrays b,c etc of the shape (420,) (say b contains all 99's and c contains all 50's) how to I add the new arrays to 'a' to form an array array([[2006, 1, 3, 9, 40 , 99, 50], [2006, 1, 3, 10, 9, 99 , 50], [2006, 1, 3, 10, 40, 99, 50], ..., [2008, 3, 20, 10, 27, 99, 50 ], [2008, 3, 20, 10, 51, 99, 50], [2008, 3, 20, 12, 2, 99, 50]]) I've tried hstack but I'm not sure how to use it with 2D arrays. thanks - dharhas From ivo.maljevic at gmail.com Fri Jun 6 12:38:44 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Fri, 6 Jun 2008 12:38:44 -0400 Subject: [SciPy-user] Adding new columns to 2D array & populating them from another array. In-Reply-To: <484920310200009B00013308@GWWEB.twdb.state.tx.us> References: <484920310200009B00013308@GWWEB.twdb.state.tx.us> Message-ID: <826c64da0806060938l1bc0a88dj5cdddd90a5d71346@mail.gmail.com> The simplest thing is to use c_[ ] operator. 
Example: >>> a=ones([2,2]) >>> a array([[ 1., 1.], [ 1., 1.]]) >>> b=array([4,4]) >>> c=array([9,9]) >>> d=c_[a,b,c] >>> d array([[ 1., 1., 4., 9.], [ 1., 1., 4., 9.]]) Hope this helps, Ivo 2008/6/6 Dharhas Pothina : > Hi, > > I have an array 'a' > > a = > array([[2006, 1, 3, 9, 40], > [2006, 1, 3, 10, 9], > [2006, 1, 3, 10, 40], > ..., > [2008, 3, 20, 10, 27], > [2008, 3, 20, 10, 51], > [2008, 3, 20, 12, 2]]) > > where a.shape = (420, 5) > > I have other arrays b,c etc of the shape (420,) (say b contains all 99's > and c contains all 50's) > > how to I add the new arrays to 'a' to form an array > > array([[2006, 1, 3, 9, 40 , 99, 50], > [2006, 1, 3, 10, 9, 99 , 50], > [2006, 1, 3, 10, 40, 99, 50], > ..., > [2008, 3, 20, 10, 27, 99, 50 ], > [2008, 3, 20, 10, 51, 99, 50], > [2008, 3, 20, 12, 2, 99, 50]]) > > I've tried hstack but I'm not sure how to use it with 2D arrays. > > thanks > > - dharhas > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Fri Jun 6 12:46:36 2008 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 6 Jun 2008 12:46:36 -0400 Subject: [SciPy-user] Adding new columns to 2D array & populating them from another array. In-Reply-To: <484920310200009B00013308@GWWEB.twdb.state.tx.us> References: <484920310200009B00013308@GWWEB.twdb.state.tx.us> Message-ID: On Fri, 06 Jun 2008, Dharhas Pothina apparently wrote: > I have an array 'a' > a = > array([[2006, 1, 3, 9, 40], > [2006, 1, 3, 10, 9], > [2006, 1, 3, 10, 40], > ..., > [2008, 3, 20, 10, 27], > [2008, 3, 20, 10, 51], > [2008, 3, 20, 12, 2]]) > where a.shape = (420, 5) > I have other arrays b,c etc of the shape (420,) (say b contains all 99's and c contains all 50's) > how to I add the new arrays to 'a' to form an array > array([[2006, 1, 3, 9, 40 , 99, 50], > [2006, 1, 3, 10, 9, 99 , 50], > [2006, 1, 3, 10, 40, 99, 50], > ..., > [2008, 3, 20, 10, 27, 99, 50 ], > [2008, 3, 20, 10, 51, 99, 50], > [2008, 3, 20, 12, 2, 99, 50]]) >>> import numpy as np >>> x = np.ones((10,3)) >>> y = np.ones((10,))*2 >>> z = np.ones((10,))*3 >>> np.hstack([x,y[:,None],z[:,None]]) array([[ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.], [ 1., 1., 1., 2., 3.]]) >>> hth, Alan Isaac From mhearne at usgs.gov Fri Jun 6 12:44:44 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Fri, 06 Jun 2008 10:44:44 -0600 Subject: [SciPy-user] Adding new columns to 2D array & populating them from another array. 
In-Reply-To: <484920310200009B00013308@GWWEB.twdb.state.tx.us> References: <484920310200009B00013308@GWWEB.twdb.state.tx.us> Message-ID: <4849697C.3010607@usgs.gov> Here's a simple example that I think demonstrates what you want to do: a = zeros((4,5)) b = ones((4,1)) c = hstack((a,b)) --Mike Dharhas Pothina wrote: > Hi, > > I have an array 'a' > > a = > array([[2006, 1, 3, 9, 40], > [2006, 1, 3, 10, 9], > [2006, 1, 3, 10, 40], > ..., > [2008, 3, 20, 10, 27], > [2008, 3, 20, 10, 51], > [2008, 3, 20, 12, 2]]) > > where a.shape = (420, 5) > > I have other arrays b,c etc of the shape (420,) (say b contains all 99's and c contains all 50's) > > how to I add the new arrays to 'a' to form an array > > array([[2006, 1, 3, 9, 40 , 99, 50], > [2006, 1, 3, 10, 9, 99 , 50], > [2006, 1, 3, 10, 40, 99, 50], > ..., > [2008, 3, 20, 10, 27, 99, 50 ], > [2008, 3, 20, 10, 51, 99, 50], > [2008, 3, 20, 12, 2, 99, 50]]) > > I've tried hstack but I'm not sure how to use it with 2D arrays. > > thanks > > - dharhas > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ From strawman at astraw.com Fri Jun 6 12:45:08 2008 From: strawman at astraw.com (Andrew Straw) Date: Fri, 06 Jun 2008 09:45:08 -0700 Subject: [SciPy-user] Call for contributions: Python in Neuroscience Message-ID: <48496994.3030301@astraw.com> >From http://www.neuroinf.org/pipermail/comp-neuro/2008-June/000858.html CALL FOR CONTRIBUTIONS Special Section: "Python in Neuroscience" Journal: Frontiers in Neuroinformatics [www.frontiersin.org/neuroinformatics] Associate Editor: Rolf K?tter (Radboud University Medical Centre Nijmegen, Netherlands) Guest Editors: James A. Bednar (University of Edinburgh, UK) Andrew Davison (UNIC, CNRS, France) Markus Diesmann (RIKEN Brain Science Institute, Japan) Marc-Oliver Gewaltig (Honda Research Institute Europe GmbH, Germany) Michael Hines (Yale University, USA) Eilif Muller (LCN-EPFL, Switzerland) Important Dates: Abstract/outline submission deadline: June 14th, 2008. Invitations for full paper submissions sent by June 21st, 2008. Invited full paper submission deadline: September 14th, 2008. Special Section Abstract: Python is rapidly becoming the de facto standard language for systems integration. Python has a large user and developer-base external to the neuroscience community, and a vast module library that facilitates rapid and maintainable development of complex and intricate systems. In this special section, we highlight recent efforts to develop Python modules for the domain of neuroscience software and neuroinformatics: - simulators and simulator interfaces - data collection and analysis - sharing, re-use, storage and databasing of models and data - stimulus generation - parameter search and optimization - visualization - VLSI hardware interfacing - ... Moreover, we seek to provide a representative overview of existing mature Python modules for neuroscience and neuroinformatics, to demonstrate a critical mass and show that Python is an appropriate choice of interpreter interface for future neuroscience software development. 
Submission Procedure: Researchers and practitioners are invited to submit on or before June 14th, 2008 a max. 1 page abstract/outline of work related to the focus of the special section to eilif.mueller at epfl.ch, CC'd to rk at cns.umcn.nl for consideration for inclusion as an elaborated full article in the special section. Please include a provisional title, a full author list, and format the subject of your email as follows: "[python SI] outline - Your Name". Authors will be notified whether their contribution has been accepted by June 21st, 2008. Full Article Information: * Full articles will be solicited by invitation only, based on the abstracts/outlines we receive by June 14th, 2008. * The deadline for submission of invited full articles will be September 14th, 2008. * Article formatting will be as for standard Frontiers "Original Research Articles". Guidelines and instructions for their preparation can be found at http://frontiersin.org/authorinstructions#manuscriptGuidelines. * General author instructions for Frontiers in Neuroinformatics can be found at http://frontiersin.org/authorinstructions/. * Frontiers in Neuroinformatics is an open access journal, following a pay-for-publication model. Details of the publication fees can be found at http://www.frontiersin.org/publicationfees/. * Further details will be provided to authors of accepted abstracts by June 21st, 2008. From tgray at protozoic.com Fri Jun 6 13:15:53 2008 From: tgray at protozoic.com (Tim Gray) Date: Fri, 6 Jun 2008 13:15:53 -0400 (EDT) Subject: [SciPy-user] Adding new columns to 2D array & populating them In-Reply-To: References: Message-ID: So what is the difference between the c_ operator and hstack? Or r_ and vstack? I'm not too clear on the c_ and r_ operators. Is there an efficiency gain related to using them? Do the do something special? From Dharhas.Pothina at twdb.state.tx.us Fri Jun 6 13:16:10 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Fri, 06 Jun 2008 12:16:10 -0500 Subject: [SciPy-user] Adding new columns to 2D array & populating themfrom another array. In-Reply-To: <4849697C.3010607@usgs.gov> References: <484920310200009B00013308@GWWEB.twdb.state.tx.us> <4849697C.3010607@usgs.gov> Message-ID: <48492A8A.63BA.009B.0@twdb.state.tx.us> Thanks to all of you. I used the c_[] technique. - dharhas >>> Michael Hearne 6/6/2008 11:44 AM >>> Here's a simple example that I think demonstrates what you want to do: a = zeros((4,5)) b = ones((4,1)) c = hstack((a,b)) --Mike Dharhas Pothina wrote: > Hi, > > I have an array 'a' > > a = > array([[2006, 1, 3, 9, 40], > [2006, 1, 3, 10, 9], > [2006, 1, 3, 10, 40], > ..., > [2008, 3, 20, 10, 27], > [2008, 3, 20, 10, 51], > [2008, 3, 20, 12, 2]]) > > where a.shape = (420, 5) > > I have other arrays b,c etc of the shape (420,) (say b contains all 99's and c contains all 50's) > > how to I add the new arrays to 'a' to form an array > > array([[2006, 1, 3, 9, 40 , 99, 50], > [2006, 1, 3, 10, 9, 99 , 50], > [2006, 1, 3, 10, 40, 99, 50], > ..., > [2008, 3, 20, 10, 27, 99, 50 ], > [2008, 3, 20, 10, 51, 99, 50], > [2008, 3, 20, 12, 2, 99, 50]]) > > I've tried hstack but I'm not sure how to use it with 2D arrays. 
> > thanks > > - dharhas > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From lists at vrbka.net Fri Jun 6 13:47:58 2008 From: lists at vrbka.net (Lubos Vrbka) Date: Fri, 06 Jun 2008 19:47:58 +0200 Subject: [SciPy-user] Fourier-sine transform with SciPy In-Reply-To: References: <4848E2FE.9070402@vrbka.net> <484900B8.6080006@vrbka.net> Message-ID: <4849784E.1020306@vrbka.net> anne, thanks for detailed discussion (and sorry for accidentally replying directly to you instead to list)! > Note that the sine transform only works on odd functions, so either > your input data should have f_k = f_{N-k} or you are representing only > half the values, i.e., N is twice the number of data points you have. > This too is somewhere a proper sine transform would help, if we had > it. well, the original function itself is neither even nor odd, since it is defined only in the interval <0, +inf>. at certain point in the derivation, the original function is multiplied by the r or k variable, so it might actually be odd. the sine comes from the fact that the original function is actually radially symmetric. i think i will have to have a look into that an check the result. hopefully the last question. how is it with the 'normalization constants' for the dft? from theory it should be T/N for FT and 1/T for iFT, but different packages treat this differently (for example fftw does not use them and it has to be done afterwards 'manually') - how is it in case of scipy.fft? best, lubos -- Lubos _ at _" http://www.lubos.vrbka.net From ivo.maljevic at gmail.com Fri Jun 6 13:49:23 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Fri, 6 Jun 2008 13:49:23 -0400 Subject: [SciPy-user] Adding new columns to 2D array & populating them In-Reply-To: References: Message-ID: <826c64da0806061049t678cedf0iff71417b6046ebcb@mail.gmail.com> The short answer is: they do not appear to be exactly the same. The longer answer (there are may who will know this way better than I do), can be found if you look at the source code in : /usr/lib/python2.5/site-packages/numpy/lib and then open file index_tricks.py You will find the concatenator class that does a whole bunch of things, and then r_class and c_class which inherit concatenator. c_ and r_ are just "attached" to those classes. 2008/6/6 Tim Gray : > So what is the difference between the c_ operator and hstack? Or r_ and > vstack? I'm not too clear on the c_ and r_ operators. Is there an > efficiency gain related to using them? Do the do something special? > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists at vrbka.net Sat Jun 7 09:54:53 2008 From: lists at vrbka.net (Lubos Vrbka) Date: Sat, 07 Jun 2008 15:54:53 +0200 Subject: [SciPy-user] Fourier-sine transform with SciPy In-Reply-To: <4849616F.3050009@enthought.com> References: <4848E2FE.9070402@vrbka.net> <4849616F.3050009@enthought.com> Message-ID: <484A932D.7000500@vrbka.net> hi everybody! > In the scipy sandbox under image you can find a transforms.py file which > contains an implementation of the DST (discrete sine transform). travis, thanks for the code you sent me. actually, under my version of scipy/numpy (0.5.2/1.0.1) it doesn't work, since Xt = np.fft(xtilde,axis=axis) isn't a callable object. i had to use Xt = np.fft.fft(xtilde,axis=axis) and its equivalent in the inverse sinus transform. regarding the implementation - it seems to me that to get optimal performance, the length of the original array should be 2^m - 1. if i have, e.g., 4096 points in the array, then newly constructed one will have 8194 points and fft of the length of 16194 will have to be done. is this correct? i know that the following question isn't particularly related to scipy itself, but if anybody would be willing to explain these things to me, i'd be really grateful. let's assume the distance (r)/reciprocal distance (k) pair. having my odd real function f(r) discretized on <0,L>, i have dr = L/N. the value of dk should be then dk = 2*pi/L = 2*pi/(N*dr). actually, where does the value of N+1 come from (it's mentioned on the wiki page http://en.wikipedia.org/wiki/Discrete_sine_transform )? according to the sampling theorem, the wave vectors corresponding to 0, 1, ... N/2 should represent the function f in the fourier space, the rest of the wave vectors are redundant. what happens in the case of the dst() routine? is the interval extended to <-L,L>, with the dk = 2*pi/2*L? but then, for N points of original array, i get N non-redundant values. how does this compare to the sampling theorem? since i need to operate on the ft'ed function before transforming it back, and this operation involves the value of dk. i need to know what the value of dk really is. in one implementation of my problem (not using fft), the original function was paddded with zeros to the length of 2N and the sum over the products f(r)sin(kr) was performed... how does then this compare to the dst() above? any comments/pointers to relevant literature are more than welcome. with best regards, lubos -- Lubos _ at _" http://www.lubos.vrbka.net From dmitrey.kroshko at scipy.org Sat Jun 7 10:58:55 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 07 Jun 2008 17:58:55 +0300 Subject: [SciPy-user] "from scipy.linalg.flapack import dgelss" fails Message-ID: <484AA22F.7070508@scipy.org> I have problems with the statement from scipy.linalg.flapack import dgelss Traceback (innermost last): File "", line 1, in File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 13, in from iterative import * File "/usr/lib/python2.5/site-packages/scipy/linalg/iterative.py", line 5, in from scipy.sparse.linalg import isolve File "/usr/lib/python2.5/site-packages/scipy/sparse/__init__.py", line 5, in from base import * File "/usr/lib/python2.5/site-packages/scipy/sparse/base.py", line 45, in class spmatrix(object): File "/usr/lib/python2.5/site-packages/scipy/sparse/base.py", line 139, in spmatrix @deprecate TypeError: deprecate() takes exactly 3 arguments (1 given) at least with latest scipy from svn >>> scipy.__version__ '0.7.0.dev4416' Do you have the same? 
This one was working correctly in previous scipy versions Regards, D. From nwagner at iam.uni-stuttgart.de Sat Jun 7 11:11:10 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 07 Jun 2008 17:11:10 +0200 Subject: [SciPy-user] "from scipy.linalg.flapack import dgelss" fails In-Reply-To: <484AA22F.7070508@scipy.org> References: <484AA22F.7070508@scipy.org> Message-ID: On Sat, 07 Jun 2008 17:58:55 +0300 dmitrey wrote: > I have problems with the statement > from scipy.linalg.flapack import dgelss > > Traceback (innermost last): > File "", line 1, in > File >"/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", >line > 13, in > from iterative import * > File >"/usr/lib/python2.5/site-packages/scipy/linalg/iterative.py", > line 5, in > from scipy.sparse.linalg import isolve > File >"/usr/lib/python2.5/site-packages/scipy/sparse/__init__.py", >line > 5, in > from base import * > File >"/usr/lib/python2.5/site-packages/scipy/sparse/base.py", >line 45, > in > class spmatrix(object): > File >"/usr/lib/python2.5/site-packages/scipy/sparse/base.py", >line > 139, in spmatrix > @deprecate > TypeError: deprecate() takes exactly 3 arguments (1 >given) > > at least with latest scipy from svn > > >>> scipy.__version__ > '0.7.0.dev4416' > > Do you have the same? > This one was working correctly in previous scipy >versions > > Regards, D. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user No problem here /usr/bin/python Python 2.4 (#1, Oct 13 2006, 17:13:31) [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy.linalg.flapack import dgelss >>> import scipy >>> scipy.__version__ '0.7.0.dev4416' >>> help (dgelss) From dmitrey.kroshko at scipy.org Sat Jun 7 11:26:25 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 07 Jun 2008 18:26:25 +0300 Subject: [SciPy-user] "from scipy.linalg.flapack import dgelss" fails In-Reply-To: References: <484AA22F.7070508@scipy.org> Message-ID: <484AA8A1.5050901@scipy.org> IIRC Python 2.4 have no decorators (like the "@deprecate" line), they are available since 2.5 (that I have). Could anyone else try "from scipy.linalg.flapack import dgelss" from python2.5? Regards, D. Nils Wagner wrote: > On Sat, 07 Jun 2008 17:58:55 +0300 > dmitrey wrote: > >> "/usr/lib/python2.5/site-packages/scipy/sparse/base.py", >> line >> 139, in spmatrix >> @deprecate >> TypeError: deprecate() takes exactly 3 arguments (1 >> given) >> >> at least with latest scipy from svn >> >> >>>>> scipy.__version__ >>>>> >> '0.7.0.dev4416' >> >> Do you have the same? >> This one was working correctly in previous scipy >> versions >> >> Regards, D. >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > No problem here > > /usr/bin/python > Python 2.4 (#1, Oct 13 2006, 17:13:31) > [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more > information. 
> >>>> from scipy.linalg.flapack import dgelss >>>> import scipy >>>> scipy.__version__ >>>> > '0.7.0.dev4416' > >>>> help (dgelss) >>>> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From matthieu.brucher at gmail.com Sat Jun 7 11:31:21 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 7 Jun 2008 17:31:21 +0200 Subject: [SciPy-user] "from scipy.linalg.flapack import dgelss" fails In-Reply-To: <484AA22F.7070508@scipy.org> References: <484AA22F.7070508@scipy.org> Message-ID: Hi, Please update to the latest Numpy (this error showed up several times in this ML). Matthieu 2008/6/7 dmitrey : > I have problems with the statement > from scipy.linalg.flapack import dgelss > > Traceback (innermost last): > File "", line 1, in > File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line > 13, in > from iterative import * > File "/usr/lib/python2.5/site-packages/scipy/linalg/iterative.py", > line 5, in > from scipy.sparse.linalg import isolve > File "/usr/lib/python2.5/site-packages/scipy/sparse/__init__.py", line > 5, in > from base import * > File "/usr/lib/python2.5/site-packages/scipy/sparse/base.py", line 45, > in > class spmatrix(object): > File "/usr/lib/python2.5/site-packages/scipy/sparse/base.py", line > 139, in spmatrix > @deprecate > TypeError: deprecate() takes exactly 3 arguments (1 given) > > at least with latest scipy from svn > > >>> scipy.__version__ > '0.7.0.dev4416' > > Do you have the same? > This one was working correctly in previous scipy versions > > Regards, D. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Sat Jun 7 11:38:17 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 07 Jun 2008 18:38:17 +0300 Subject: [SciPy-user] "from scipy.linalg.flapack import dgelss" fails In-Reply-To: References: <484AA22F.7070508@scipy.org> Message-ID: <484AAB69.4030403@scipy.org> Matthieu Brucher wrote: > Hi, > > Please update to the latest Numpy (this error showed up several times > in this ML). so this will require numpy download & installation from scratch, because currently I use numpy 1.0.4 from KUBUNTU 8.04 distribution. Ok, I'll try. D. From pav at iki.fi Sat Jun 7 12:13:38 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 7 Jun 2008 16:13:38 +0000 (UTC) Subject: [SciPy-user] "from scipy.linalg.flapack import dgelss" fails References: <484AA22F.7070508@scipy.org> <484AA8A1.5050901@scipy.org> Message-ID: Sat, 07 Jun 2008 18:26:25 +0300, dmitrey wrote: > IIRC Python 2.4 have no decorators (like the "@deprecate" line), they > are available since 2.5 (that I have). Could anyone else try "from > scipy.linalg.flapack import dgelss" from python2.5? Decorators were introduced already in Python 2.4. The problem you see is the changed semantics in @deprecate between numpy 1.0.4 and 1.1.0. (Scipy 0.7.dev uses the 1.1.0 semantics). It can be fixed by upgrading to numpy 1.1.0. 
-- Pauli Virtanen From contact at pythonxy.com Sat Jun 7 14:52:18 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 07 Jun 2008 20:52:18 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.5 Message-ID: <484AD8E2.4080302@pythonxy.com> Hi all, Python(x,y) 1.2.5 is now available on http://www.pythonxy.com. Changes history 06 -07 -2008 - Version 1.2.5 : * Updated: o Parallel Python 1.5.4 * Added: o MDP 2.3 - Modular toolkit for Data Processing (MDP): data processing framework (see included algorithms on website) o Automatic logging management for IPython/Matplotlib consoles ("logs" folder added to the Python(x,y) base directory, associated shortcuts in start menu and "Welcome to Python(x,y)" GUI) - See screenshots o The (lightweight) default editor in IPython consoles (syntax: "edit script.py" or "ed script.py") is now set to Notepad++ - See screenshots o Windows explorer integration: "Edit with Notepad++" has been added to .py and .pyw context menu o StartExplorer Eclipse Plug-in * Corrected: o Some NumPy and SciPy unit tests were not successfull Regards, Pierre Raybaut From mattknox.ca at gmail.com Sun Jun 8 16:50:43 2008 From: mattknox.ca at gmail.com (Matt Knox) Date: Sun, 8 Jun 2008 20:50:43 +0000 (UTC) Subject: [SciPy-user] scikits.timeseries AttributeError References: <1f481223-faa2-4675-98c0-099d86956a69@z16g2000prn.googlegroups.com> Message-ID: Matt Knox gmail.com> writes: > > >> dates = ts.date_array([q[1] for q in data], freq='S') > > This is one of the known limitations of the timeseries module right now > unfortunately. Plotting for frequencies higher than daily is not currently > supported. I just added support for this today in the timeseries module if you want to try it again. - Matt From gary.pajer at gmail.com Sun Jun 8 18:21:00 2008 From: gary.pajer at gmail.com (Gary Pajer) Date: Sun, 8 Jun 2008 18:21:00 -0400 Subject: [SciPy-user] LabVIEW and LabPython In-Reply-To: References: Message-ID: <88fe22a0806081521j2db5b1eblb68692af2e3beb43@mail.gmail.com> I was unaware of LabPython until this post. But I can report that I've been using National Instruments' DAQmx library through ctypes, and it seems to work perfectly. I don't know how far along you are in development, but there's an option for you. I've married DAQmx to scipy/numpy to Traits UI ( http://code.enthought.com/projects/traits ) and I have a beautiful GUI based data acquisition/display/analysis system. I'll admit that I've been too scared to even try to implement callbacks (used by a small number of DAQmx C functions) but I've achieved the same functionality using python threads. Aside from that caveat, all features have worked as they should. -gary On Tue, Jun 3, 2008 at 10:23 PM, Glen Shennan wrote: > Hi, > > I am trying to use a Python program in LabVIEW using LabPython. My program > works fine outside of LabVIEW but not when compiled and run in LabVIEW via > LabPython. I am using Scipy and Numpy functions in the program but I think > they are causing the problems. Does anyone know if it is possible to use > Scipy with LabPython? > > Thanks, > Glen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bernardo.rocha at meduni-graz.at Mon Jun 9 04:51:57 2008 From: bernardo.rocha at meduni-graz.at (Bernardo Martins Rocha) Date: Mon, 09 Jun 2008 10:51:57 +0200 Subject: [SciPy-user] Reading binary files Message-ID: <484D0B4D0200002A0001558D@si062.meduni-graz.at> Hello, I was trying to write a script in order to read a binary file. Then, I started looking for a python function to do it. First, I came up with scipy.io.fopen, which was OK. The only problem was that whenever I ran the script I got a message warning me that this class is deprecated, which is really annoying. Then I had to change to the new class for reading binary files: scipy.io.npfile And it took me some time to change the script properly...so I'm posting what I did in order to describe my experience...and I'm open for suggestions that could improve this script. ----- fopen ------------------------------- from scipy.io.fopen import fopen fh = fopen (filename,'r',fmt) # fmt is 'l' or 'B` (little or big endian) header = fh.read(1024,'char') slice_buf = fh.read(slice_size, dtype) fh.close() ----- npfile ------------------------------- from scipy.io.npfile import npfile fh = npfile (filename,'r',fmt) dt = numpy.dtype('uint8') header = fh.read_array(dt, 1024) dt = numpy.dtype(dtype) # dtype = 'float32' slice_buf = fh.read_array(dt, slice_size) fh.close() Bye! Bernardo M. Rocha From s.mientki at ru.nl Mon Jun 9 06:51:24 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 09 Jun 2008 12:51:24 +0200 Subject: [SciPy-user] LabVIEW and LabPython In-Reply-To: <88fe22a0806081521j2db5b1eblb68692af2e3beb43@mail.gmail.com> References: <88fe22a0806081521j2db5b1eblb68692af2e3beb43@mail.gmail.com> Message-ID: <484D0B2C.8080505@ru.nl> Gary Pajer wrote: > I was unaware of LabPython until this post. But I can report that > I've been using National Instruments' DAQmx library through ctypes, > and it seems to work perfectly. I don't know how far along you are in > development, but there's an option for you. > > I've married DAQmx to scipy/numpy to Traits UI ( > http://code.enthought.com/projects/traits ) and I have a beautiful GUI > based data acquisition/display/analysis system. I'll admit that I've > been too scared to even try to implement callbacks (used by a small > number of DAQmx C functions) but I've achieved the same functionality > using python threads. Aside from that caveat, all features have > worked as they should. hi Gary, This sounds very very interesting. As I'm writing an opensource replacement for LabView / MatLab in Python (called PyLab_Works), and DAQmx is one of the major AD-systems in this package, I'm anxious to get DAQmx libraries (as till now I have some clumsy-windows-only DLL). Any change I could use your code ? thanks, Stef Mientki > > -gary > > > > On Tue, Jun 3, 2008 at 10:23 PM, Glen Shennan > wrote: > > Hi, > > I am trying to use a Python program in LabVIEW using LabPython. > My program works fine outside of LabVIEW but not when compiled and > run in LabVIEW via LabPython. I am using Scipy and Numpy > functions in the program but I think they are causing the > problems. Does anyone know if it is possible to use Scipy with > LabPython? 
> > Thanks, > Glen > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Het UMC St Radboud staat geregistreerd bij de Kamer van Koophandel in het handelsregister onder nummer 41055629. The Radboud University Nijmegen Medical Centre is listed in the Commercial Register of the Chamber of Commerce under file number 41055629. From mwojc at p.lodz.pl Mon Jun 9 09:50:53 2008 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Mon, 9 Jun 2008 15:50:53 +0200 Subject: [SciPy-user] optimize.fmin_l_bfgs_b problem Message-ID: <200806091550.53753.mwojc@p.lodz.pl> Hi! The following command: optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1) causes an error and breaks the python session with the following output: RUNNING THE L-BFGS-B CODE * * * At line 2647 of file scipy/optimize/lbfgsb/routines.f Internal Error: printf is broken Machine precision = This occurs on scipy-0.6.0 and python 2.4 under Gentoo Linux. Is this the known bug? Greetings, -- Marek Wojciechowski From nwagner at iam.uni-stuttgart.de Mon Jun 9 10:12:04 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 09 Jun 2008 16:12:04 +0200 Subject: [SciPy-user] optimize.fmin_l_bfgs_b problem In-Reply-To: <200806091550.53753.mwojc@p.lodz.pl> References: <200806091550.53753.mwojc@p.lodz.pl> Message-ID: On Mon, 9 Jun 2008 15:50:53 +0200 Marek Wojciechowski wrote: > Hi! > The following command: > optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], >iprint=1) > causes an error and breaks the python session with the >following output: > > RUNNING THE L-BFGS-B CODE > > * * * > > At line 2647 of file scipy/optimize/lbfgsb/routines.f > Internal Error: printf is broken > Machine precision = > > This occurs on scipy-0.6.0 and python 2.4 under Gentoo >Linux. Is this the > known bug? > > Greetings, > -- > Marek Wojciechowski I get >>> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1) RUNNING THE L-BFGS-B CODE * * * Machine precision = 2.220E-16 N = 1 M = 10 This problem is unconstrained. At X0 0 variables are exactly at the bounds Traceback (most recent call last): File "", line 1, in File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", line 205, in fmin_l_bfgs_b f, g = func_and_grad(x) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", line 156, in func_and_grad f, g = func(x, *args) TypeError: 'numpy.float64' object is not iterable >>> scipy.__version__ '0.7.0.dev4420' From gary.pajer at gmail.com Mon Jun 9 22:41:41 2008 From: gary.pajer at gmail.com (Gary Pajer) Date: Mon, 9 Jun 2008 22:41:41 -0400 Subject: [SciPy-user] LabVIEW and LabPython In-Reply-To: <484D0B2C.8080505@ru.nl> References: <88fe22a0806081521j2db5b1eblb68692af2e3beb43@mail.gmail.com> <484D0B2C.8080505@ru.nl> Message-ID: <88fe22a0806091941o4b61750cn32dff2d311c5381e@mail.gmail.com> On Mon, Jun 9, 2008 at 6:51 AM, Stef Mientki wrote: > > > Gary Pajer wrote: > > I was unaware of LabPython until this post. But I can report that > > I've been using National Instruments' DAQmx library through ctypes, > > and it seems to work perfectly. I don't know how far along you are in > > development, but there's an option for you. 
> > > > I've married DAQmx to scipy/numpy to Traits UI ( > > http://code.enthought.com/projects/traits ) and I have a beautiful GUI > > based data acquisition/display/analysis system. I'll admit that I've > > been too scared to even try to implement callbacks (used by a small > > number of DAQmx C functions) but I've achieved the same functionality > > using python threads. Aside from that caveat, all features have > > worked as they should. > hi Gary, > This sounds very very interesting. > As I'm writing an opensource replacement for LabView / MatLab in Python > (called PyLab_Works), > and DAQmx is one of the major AD-systems in this package, > I'm anxious to get DAQmx libraries (as till now I have some > clumsy-windows-only DLL). > Any change I could use your code ? I'd be happy to share what I've done, but let me make clear that I am using NI's DAQmx Windows dll. I'm just calling the functions in there using ctypes. I have a feeling that you are after something different. I'm not sure what you have and what you want, but if I read between your lines, I think you might be looking for Linux libraries. NI does have Linux version with reduced functionality available for download. License may be an issue. Would Comedi help? If we get any deeper, we might want to take the discussion off-line. > > > thanks, > Stef Mientki -gary > > > > > > -gary > > > > > > > > On Tue, Jun 3, 2008 at 10:23 PM, Glen Shennan > > wrote: > > > > Hi, > > > > I am trying to use a Python program in LabVIEW using LabPython. > > My program works fine outside of LabVIEW but not when compiled and > > run in LabVIEW via LabPython. I am using Scipy and Numpy > > functions in the program but I think they are causing the > > problems. Does anyone know if it is possible to use Scipy with > > LabPython? > > > > Thanks, > > Glen > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > Het UMC St Radboud staat geregistreerd bij de Kamer van Koophandel in het > handelsregister onder nummer 41055629. > The Radboud University Nijmegen Medical Centre is listed in the Commercial > Register of the Chamber of Commerce under file number 41055629. > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Tue Jun 10 00:01:33 2008 From: strawman at astraw.com (Andrew Straw) Date: Mon, 09 Jun 2008 21:01:33 -0700 Subject: [SciPy-user] LabVIEW and LabPython In-Reply-To: <88fe22a0806091941o4b61750cn32dff2d311c5381e@mail.gmail.com> References: <88fe22a0806081521j2db5b1eblb68692af2e3beb43@mail.gmail.com> <484D0B2C.8080505@ru.nl> <88fe22a0806091941o4b61750cn32dff2d311c5381e@mail.gmail.com> Message-ID: <484DFC9D.4000305@astraw.com> Gary Pajer wrote: > On Mon, Jun 9, 2008 at 6:51 AM, Stef Mientki > wrote: > > > > Gary Pajer wrote: > > I was unaware of LabPython until this post. But I can report that > > I've been using National Instruments' DAQmx library through ctypes, > > and it seems to work perfectly. 
I don't know how far along you > are in > > development, but there's an option for you. > > > > I've married DAQmx to scipy/numpy to Traits UI ( > > http://code.enthought.com/projects/traits ) and I have a > beautiful GUI > > based data acquisition/display/analysis system. I'll admit that > I've > > been too scared to even try to implement callbacks (used by a small > > number of DAQmx C functions) but I've achieved the same > functionality > > using python threads. Aside from that caveat, all features have > > worked as they should. > hi Gary, > This sounds very very interesting. > As I'm writing an opensource replacement for LabView / MatLab in > Python > (called PyLab_Works), > and DAQmx is one of the major AD-systems in this package, > I'm anxious to get DAQmx libraries (as till now I have some > clumsy-windows-only DLL). > Any change I could use your code ? > > > I'd be happy to share what I've done, but let me make clear that I am > using NI's DAQmx Windows dll. I'm just calling the functions in > there using ctypes. I have a feeling that you are after something > different. > > I'm not sure what you have and what you want, but if I read between > your lines, I think you might be looking for Linux libraries. NI does > have Linux version with reduced functionality available for download. > License may be an issue. Would Comedi help? > > If we get any deeper, we might want to take the discussion off-line. Please don't, I (for one!) think this is a very interesting discussion. From spmcinerney at hotmail.com Tue Jun 10 02:28:33 2008 From: spmcinerney at hotmail.com (Stephen McInerney) Date: Mon, 9 Jun 2008 23:28:33 -0700 Subject: [SciPy-user] Python fn for Pascal's coefficient? Message-ID: Sorry to ask such a basic question, but I searched and I can't find a function for Pascal's coefficient, either in standard Python, SciPy or NumPy. Thanks, Stephen _________________________________________________________________ Enjoy 5 GB of free, password-protected online storage. http://www.windowslive.com/skydrive/overview.html?ocid=TXT_TAGLM_WL_Refresh_skydrive_062008 -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jun 10 02:39:45 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 10 Jun 2008 01:39:45 -0500 Subject: [SciPy-user] Python fn for Pascal's coefficient? In-Reply-To: References: Message-ID: <3d375d730806092339w26abc337q11b782f40d234161@mail.gmail.com> On Tue, Jun 10, 2008 at 01:28, Stephen McInerney wrote: > Sorry to ask such a basic question, but I searched and I can't find > a function for Pascal's coefficient, either in standard Python, SciPy or > NumPy. I presume you mean a function for the binomial coefficients ("Pascal's coefficient" appears to be a fairly rare term, at least in English usage). http://en.wikipedia.org/wiki/Binomial_coefficient In [3]: from scipy.misc import comb In [4]: comb? Type: function Base Class: String Form: Namespace: Interactive File: /Users/rkern/svn/scipy/scipy/misc/common.py Definition: comb(N, k, exact=0) Docstring: Combinations of N things taken k at a time. If exact==0, then floating point precision is used, otherwise exact long integer is computed. Notes: - Array arguments accepted only for exact=0 case. - If k > N, N < 0, or k < 0, then a 0 is returned. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From contact at pythonxy.com Tue Jun 10 05:08:30 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Tue, 10 Jun 2008 11:08:30 +0200 Subject: [SciPy-user] [ Python(x,y) ] Why a linux version? Message-ID: <629b08a40806100208i7e071f0bm17aaec8a31ce2e47@mail.gmail.com> Hi all, Regarding the Python(x,y) linux version, we would be happy to debate on the following post: http://groups.google.com/group/pythonxy/browse_thread/thread/e5c707a1ac8bd4a4?hl=en # Thanks, Regards, Pierre Raybaut -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Jun 10 05:38:08 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 10 Jun 2008 11:38:08 +0200 Subject: [SciPy-user] Python fn for Pascal's coefficient? In-Reply-To: References: Message-ID: <9457e7c80806100238qa54ea50w657e4f31ce146e7@mail.gmail.com> Hey Stephen 2008/6/10 Stephen McInerney : > Sorry to ask such a basic question, but I searched and I can't find > a function for Pascal's coefficient, either in standard Python, SciPy or > NumPy. The Python cookbook is normally a good place to find this sort of thing: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/392153 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/528913 In the last one, you can replace comb with the function Robert mentioned. Cheers St?fan From michells at ims.uni-stuttgart.de Tue Jun 10 13:21:14 2008 From: michells at ims.uni-stuttgart.de (Lukas Michelbacher) Date: Tue, 10 Jun 2008 19:21:14 +0200 Subject: [SciPy-user] Sparse Boolean matrix Message-ID: <86faf560806101021v63fa86bub04e8c3633084037@mail.gmail.com> As the title says I'd like to use a matrix with Boolean values in some sparse format. My problem is that even though initialization seems to works fine, the matrix doesn't contain Boolean values but the default float type. In [18]: A = sparse.csr_matrix((1000,1000), dtype=bool) In [19]: A Out[19]: <1000x1000 sparse matrix of type '' with 0 stored elements (space for 100) in Compressed Sparse Row format> The same happens for CSC, LIL and COO formats. I use SciPy 0.6.0 From orest.kozyar at gmail.com Tue Jun 10 13:26:18 2008 From: orest.kozyar at gmail.com (Orest Kozyar) Date: Tue, 10 Jun 2008 10:26:18 -0700 (PDT) Subject: [SciPy-user] Getting PyArray_Resize to work under weave Message-ID: I originally posted this on Numpy, but did not get any feedback. I've set up a temporary workaround by ensuring the Numpy array I pass to weave is much, much larger than I really expect it needs to be. However, I'd like to try and figure out why resize is not working. Any guidance? The following code fails: from scipy import weave from numpy import zeros arr = zeros((10,2)) code = """ PyArray_Dims dims; dims.len = 2; dims.ptr = Narr; dims.ptr[0] += 10; PyArray_Resize(arr_array, &dims, 1); """ weave.inline(code, ['arr'], verbose=1) The error message is: In function 'PyObject* compiled_func(PyObject*, PyObject*)': :678: error: too few arguments to function 678 is the line number for PyArray_Resize. According to the NumPy Handbook, PyArray_Resize requires three arguments, which I am providing. Am I missing something obvious here? There are times when I need to be able to resize the array in C++ because I cannot predict exactly how big the array needs to be before I pass it to weave. Any advice or pointers greatly appreciated! 
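(For the record, the resolution that appears later in this digest, in
Travis Oliphant's reply: newer NumPy versions added a fourth
memory-order argument to PyArray_Resize. With that one change the
snippet compiles; a sketch:)

from scipy import weave
from numpy import zeros

arr = zeros((10, 2))
code = """
       PyArray_Dims dims;
       dims.len = 2;
       dims.ptr = Narr;
       dims.ptr[0] += 10;
       /* newer NumPy: the memory-order flag is a required 4th argument */
       PyArray_Resize(arr_array, &dims, 1, NPY_ANYORDER);
       """
weave.inline(code, ['arr'], verbose=1)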
Thanks, Orest From nwagner at iam.uni-stuttgart.de Tue Jun 10 13:41:53 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 10 Jun 2008 19:41:53 +0200 Subject: [SciPy-user] Sparse Boolean matrix In-Reply-To: <86faf560806101021v63fa86bub04e8c3633084037@mail.gmail.com> References: <86faf560806101021v63fa86bub04e8c3633084037@mail.gmail.com> Message-ID: On Tue, 10 Jun 2008 19:21:14 +0200 "Lukas Michelbacher" wrote: > As the title says I'd like to use a matrix with Boolean >values in some > sparse format. > My problem is that even though initialization seems to >works fine, the > matrix doesn't > contain Boolean values but the default float type. > > In [18]: A = sparse.csr_matrix((1000,1000), dtype=bool) > > In [19]: A > Out[19]: > <1000x1000 sparse matrix of type ''numpy.float64'>' > with 0 stored elements (space for 100) > in Compressed Sparse Row format> > > The same happens for CSC, LIL and COO formats. > > I use SciPy 0.6.0 >>> from scipy import * >>> A = sparse.csr_matrix((1000,1000), dtype=bool) >>> A <1000x1000 sparse matrix of type '' with 0 stored elements in Compressed Sparse Row format> >>> scipy.__version__ '0.7.0.dev4423' >>> A = sparse.coo_matrix((1000,1000), dtype=bool) >>> A <1000x1000 sparse matrix of type '' with 0 stored elements in COOrdinate format> >>> A = sparse.lil_matrix((1000,1000), dtype=bool) >>> A <1000x1000 sparse matrix of type '' with 0 stored elements in LInked List format> >>> A = sparse.dok_matrix((1000,1000), dtype=bool) >>> A <1000x1000 sparse matrix of type '' with 0 stored elements in Dictionary Of Keys format> Nils From wnbell at gmail.com Tue Jun 10 14:46:39 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 10 Jun 2008 13:46:39 -0500 Subject: [SciPy-user] Sparse Boolean matrix In-Reply-To: <86faf560806101021v63fa86bub04e8c3633084037@mail.gmail.com> References: <86faf560806101021v63fa86bub04e8c3633084037@mail.gmail.com> Message-ID: On Tue, Jun 10, 2008 at 12:21 PM, Lukas Michelbacher wrote: > As the title says I'd like to use a matrix with Boolean values in some > sparse format. > My problem is that even though initialization seems to works fine, the > matrix doesn't > contain Boolean values but the default float type. Lukas, SciPy 0.6.x doesn't support sparse boolean matrices, or even integer matrices for that matter. The current SVN version of SciPy and the next release (0.7.0) do support integer matrices (including int8) but don't properly support booleans. I believe they will get converted to int8. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From fperez.net at gmail.com Tue Jun 10 14:57:31 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 10 Jun 2008 11:57:31 -0700 Subject: [SciPy-user] Plans for Scipy Tutorials Message-ID: Hi all, I've now put up the near-final tutorial plans for SciPy 2008 here: http://conference.scipy.org/tutorials If your name is listed there and you disagree/can't make it, please let me and Travis Oliphant know as soon as possible. As the various presenters fine-tune their plan, we'll update the details on each tutorial and provide links to pre-requisites, installation pages, etc. But the main topics are probably not going to change now, barring any unforeseen event. Regards, f From didier.rano at gmail.com Tue Jun 10 15:09:27 2008 From: didier.rano at gmail.com (didier rano) Date: Tue, 10 Jun 2008 15:09:27 -0400 Subject: [SciPy-user] Some mathematics/statisctics books Message-ID: Hi all, I am using Time Series moduled provided for scipy. 
I am very impressive by scipy in general. But I don't have enough backgrounds to understand all mathematics/statistics models inside scipy. Could you help to find some books, articles, courses to improve my scientific backgrounds ? In particular, I need some knowledges to generate graphs to show pertinent information (trends...). Thank you Didier Rano -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Tue Jun 10 15:36:10 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 10 Jun 2008 21:36:10 +0200 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: References: Message-ID: <484ED7AA.2040907@slac.stanford.edu> best is to start with the tutorials and examples distributed with scipy or available on the website. Then you are most welcome to ask questions about them in this forum, and only then might it make sense to look for references, because scipy covers a huge class of problems and thus there is no way one can possibly give you a reference for an overall picture of its possibilities. The only way is to dive in, starting with some specific little examples or problems you would like to solve. Hope that helps, Johann didier rano wrote: > Hi all, > > I am using Time Series moduled provided for scipy. I am very > impressive by scipy in general. But I don't have enough backgrounds to > understand all mathematics/statistics models inside scipy. > > Could you help to find some books, articles, courses to improve my > scientific backgrounds ? In particular, I need some knowledges to > generate graphs to show pertinent information (trends...). > > Thank you > Didier Rano > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From Karl.Young at ucsf.edu Tue Jun 10 15:35:55 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 10 Jun 2008 12:35:55 -0700 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: <484ED7AA.2040907@slac.stanford.edu> References: <484ED7AA.2040907@slac.stanford.edu> Message-ID: <484ED79B.8000607@ucsf.edu> I completely agree with Johann that the best way to start is to just dive into the tutorials and examples but there are a few books around that might not be bad to have at your side when doing so. The reason I say this is that a particular book came to mind when I saw the original post. Though like anything else it has it's pluses and minuses it seems to me that Neil Gershenfeld's book "The Nature of Mathematical Modeling" would be pretty useful in this context in having a broad enough scope to discuss a lot of the types of problem that one would be working on re. using SciPy (including some bits on time series analysis). >best is to start with the tutorials and examples distributed with scipy >or available on the website. Then you are most welcome to ask questions >about them in this forum, and only then might it make sense to look for >references, because scipy covers a huge class of problems and thus there >is no way one can possibly give you a reference for an overall picture >of its possibilities. The only way is to dive in, starting with some >specific little examples or problems you would like to solve. >Hope that helps, >Johann > >didier rano wrote: > > >>Hi all, >> >>I am using Time Series moduled provided for scipy. I am very >>impressive by scipy in general. 
But I don't have enough backgrounds to >>understand all mathematics/statistics models inside scipy. >> >>Could you help to find some books, articles, courses to improve my >>scientific backgrounds ? In particular, I need some knowledges to >>generate graphs to show pertinent information (trends...). >> >>Thank you >>Didier Rano >>------------------------------------------------------------------------ >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.org >>http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From oliphant at enthought.com Tue Jun 10 15:56:47 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 10 Jun 2008 14:56:47 -0500 Subject: [SciPy-user] Getting PyArray_Resize to work under weave In-Reply-To: References: Message-ID: <484EDC7F.9090505@enthought.com> Orest Kozyar wrote: > I originally posted this on Numpy, but did not get any feedback. I've > set up a temporary workaround by ensuring the Numpy array I pass to > weave is much, much larger than I really expect it needs to be. > However, I'd like to try and figure out why resize is not working. > Any guidance? > > The following code fails: > > from scipy import weave > from numpy import zeros > > arr = zeros((10,2)) > code = """ > PyArray_Dims dims; > dims.len = 2; > dims.ptr = Narr; > dims.ptr[0] += 10; > PyArray_Resize(arr_array, &dims, 1); > """ > weave.inline(code, ['arr'], verbose=1) > > The error message is: > In function 'PyObject* compiled_func(PyObject*, PyObject*)': > :678: error: too few arguments to function > > 678 is the line number for PyArray_Resize. According to the NumPy > Handbook, PyArray_Resize requires three arguments, which I am > providing. Am I missing something obvious here? There are times when > I need to be able to resize the array in C++ because I cannot predict > exactly how big the array needs to be before I pass it to weave. Any > advice or pointers greatly appreciated! > The function was updated ahead of the book. There is a NPY_order fortran argument at the end that must be supplied to PyArray_Resize. Use this for the last line of your C-code. PyArray_Resize(arr_array, &dims, 1, NPY_ANYORDER); -Travis From didier.rano at gmail.com Tue Jun 10 16:00:07 2008 From: didier.rano at gmail.com (didier rano) Date: Tue, 10 Jun 2008 16:00:07 -0400 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: <484ED79B.8000607@ucsf.edu> References: <484ED7AA.2040907@slac.stanford.edu> <484ED79B.8000607@ucsf.edu> Message-ID: I tried some weeks ago to use the tutorial, but I was lost about my poor mathematics background. Then thank you to provide me the name of this book. 2008/6/10, Karl Young : > > > I completely agree with Johann that the best way to start is to just > dive into the tutorials and examples but there are a few books around > that might not be bad to have at your side when doing so. The reason I > say this is that a particular book came to mind when I saw the original > post. 
Though like anything else it has it's pluses and minuses it seems > to me that Neil Gershenfeld's book "The Nature of Mathematical Modeling" > would be pretty useful in this context in having a broad enough scope to > discuss a lot of the types of problem that one would be working on re. > using SciPy (including some bits on time series analysis). > > >best is to start with the tutorials and examples distributed with scipy > >or available on the website. Then you are most welcome to ask questions > >about them in this forum, and only then might it make sense to look for > >references, because scipy covers a huge class of problems and thus there > >is no way one can possibly give you a reference for an overall picture > >of its possibilities. The only way is to dive in, starting with some > >specific little examples or problems you would like to solve. > >Hope that helps, > >Johann > > > >didier rano wrote: > > > > > >>Hi all, > >> > >>I am using Time Series moduled provided for scipy. I am very > >>impressive by scipy in general. But I don't have enough backgrounds to > >>understand all mathematics/statistics models inside scipy. > >> > >>Could you help to find some books, articles, courses to improve my > >>scientific backgrounds ? In particular, I need some knowledges to > >>generate graphs to show pertinent information (trends...). > >> > >>Thank you > >>Didier Rano > >>------------------------------------------------------------------------ > >> > >>_______________________________________________ > >>SciPy-user mailing list > >>SciPy-user at scipy.org > >>http://projects.scipy.org/mailman/listinfo/scipy-user > >> > >> > >> > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > -- > > Karl Young > Center for Imaging of Neurodegenerative Diseases, UCSF > VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab > 4150 Clement Street FAX: (415) 668-2864 > San Francisco, CA 94121 Email: karl young at ucsf edu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Didier Rano didier.rano at gmail.com http://www.jaxtr.com/didierrano -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Tue Jun 10 16:29:44 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Tue, 10 Jun 2008 22:29:44 +0200 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: References: <484ED7AA.2040907@slac.stanford.edu> <484ED79B.8000607@ucsf.edu> Message-ID: <484EE438.9020802@slac.stanford.edu> well, I browsed through this book some time ago, and it is really nice, but does involve some mathematics. I reiterate that picking up one example in the tutorial, and asking about it might send you on a fruitful track already. best, Johann didier rano wrote: > I tried some weeks ago to use the tutorial, but I was lost about my > poor mathematics background. > Then thank you to provide me the name of this book. > > > 2008/6/10, Karl Young >: > > > I completely agree with Johann that the best way to start is to just > dive into the tutorials and examples but there are a few books around > that might not be bad to have at your side when doing so. The reason I > say this is that a particular book came to mind when I saw the > original > post. 
Though like anything else it has it's pluses and minuses it > seems > to me that Neil Gershenfeld's book "The Nature of Mathematical > Modeling" > would be pretty useful in this context in having a broad enough > scope to > discuss a lot of the types of problem that one would be working on re. > using SciPy (including some bits on time series analysis). > > >best is to start with the tutorials and examples distributed with > scipy > >or available on the website. Then you are most welcome to ask > questions > >about them in this forum, and only then might it make sense to > look for > >references, because scipy covers a huge class of problems and > thus there > >is no way one can possibly give you a reference for an overall > picture > >of its possibilities. The only way is to dive in, starting with some > >specific little examples or problems you would like to solve. > >Hope that helps, > >Johann > > > >didier rano wrote: > > > > > >>Hi all, > >> > >>I am using Time Series moduled provided for scipy. I am very > >>impressive by scipy in general. But I don't have enough > backgrounds to > >>understand all mathematics/statistics models inside scipy. > >> > >>Could you help to find some books, articles, courses to improve my > >>scientific backgrounds ? In particular, I need some knowledges to > >>generate graphs to show pertinent information (trends...). > >> > >>Thank you > >>Didier Rano > >>------------------------------------------------------------------------ > >> > >>_______________________________________________ > >>SciPy-user mailing list > >>SciPy-user at scipy.org > >>http://projects.scipy.org/mailman/listinfo/scipy-user > >> > >> > >> > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > -- > > Karl Young > Center for Imaging of Neurodegenerative Diseases, UCSF > VA Medical Center (114M) Phone: (415) 221-4810 > x3114 lab > 4150 Clement Street FAX: (415) 668-2864 > San Francisco, CA 94121 Email: karl young at ucsf edu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > -- > Didier Rano > didier.rano at gmail.com > http://www.jaxtr.com/didierrano > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From millman at berkeley.edu Tue Jun 10 17:10:42 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 10 Jun 2008 14:10:42 -0700 Subject: [SciPy-user] ANN: SciPy 2008 Conference Message-ID: Greetings, The SciPy 2008 Conference website is now open: http://conference.scipy.org This year's conference will be at Caltech from August 19-24: Tutorials: August 19-20 (Tuesday and Wednesday) Conference: August 21-22 (Thursday and Friday) Sprints: August 23-24 (Saturday and Sunday) Exciting things are happening in the Python community, and the SciPy 2008 Conference is an excellent opportunity to exchange ideas, learn techniques, contribute code and affect the direction of scientific computing (or just to learn what all the fuss is about). We'll be announcing the Keynote Speaker and providing a detailed schedule in the coming weeks. 
This year we are asking presenters to submit short papers to be included in the conference proceedings: http://conference.scipy.org/call_for_papers Cheers, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From bryan at cole.uklinux.net Tue Jun 10 17:20:36 2008 From: bryan at cole.uklinux.net (Bryan Cole) Date: Tue, 10 Jun 2008 22:20:36 +0100 Subject: [SciPy-user] LabVIEW and LabPython In-Reply-To: <88fe22a0806081521j2db5b1eblb68692af2e3beb43@mail.gmail.com> References: <88fe22a0806081521j2db5b1eblb68692af2e3beb43@mail.gmail.com> Message-ID: <1213132836.25044.13.camel@pc2.cole.uklinux.net> > > I've married DAQmx to scipy/numpy to Traits UI > ( http://code.enthought.com/projects/traits ) and I have a beautiful > GUI based data acquisition/display/analysis system. I'll admit that > I've been too scared to even try to implement callbacks (used by a > small number of DAQmx C functions) but I've achieved the same > functionality using python threads. FWIW Python callbacks with DAQmx work great. ctypes makes these easy to implement. I'm also considering Trait'ification of my primary data-acquisition application. I'd like to see what you've done in this respect. Are you using Chaco? BC From dwf at cs.toronto.edu Tue Jun 10 19:26:28 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 10 Jun 2008 19:26:28 -0400 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: <484ED79B.8000607@ucsf.edu> References: <484ED7AA.2040907@slac.stanford.edu> <484ED79B.8000607@ucsf.edu> Message-ID: <0BAB70CC-5B85-4CB5-9355-195B537B08EA@cs.toronto.edu> On 10-Jun-08, at 3:35 PM, Karl Young wrote: > I completely agree with Johann that the best way to start is to just > dive into the tutorials and examples but there are a few books around > that might not be bad to have at your side when doing so. I just looked through the Gershenfeld book's table of contents, and while it's full of tons of useful stuff, it may be the wrong place to start. Certainly, don't try to read the book in sequence from start to finish if you don't have the requisite background. What background? Probably some linear algebra at least, and some single and multivariable calculus. The OP didn't specify what level of education he's had in these matters, so just in case I'll include a few books. I'm not too familiar with books on single-variable calculus but Tom Apostol's book on the subject appears to be quite well reviewed. It's also been around since the 1960's so it should be possible to find an inexpensive used copy. As for multivariable calculus and linear algebra, a book that I'm fond of that takes an integrated approach to these two subjects is "Multivariable Mathematics" by Theodore Shifrin, ISBN 047152638X. It goes through a lot of examples but doesn't sacrifice rigour. It won't help you much if you haven't done any single variable calculus. My supervisor introduced me to Gilbert Strang's "Linear Algebra and its Applications", which I quite like. People seem to either love or hate this book, but it endeavours to help you develop intuition for linear algebraic concepts, and is quite light on theory, heavy on application. 
Hope that helps,

David

From patrick.m.bouffard at gmail.com  Tue Jun 10 20:24:15 2008
From: patrick.m.bouffard at gmail.com (Patrick Bouffard)
Date: Wed, 11 Jun 2008 00:24:15 +0000 (UTC)
Subject: [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008
References:
Message-ID:

Hi,

I'm registered on the wiki as PatrickBouffard. I'd like to chip away at
reviewing and proofing (I have a good eye for speling misstakes) and I can
probably write some docstrings too.

Cheers
Pat

From stefan at sun.ac.za  Tue Jun 10 20:41:04 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Wed, 11 Jun 2008 02:41:04 +0200
Subject: [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008
In-Reply-To:
References:
Message-ID: <9457e7c80806101741j393b81c2k255ecebd485c1ec6@mail.gmail.com>

Hi Pat

2008/6/11 Patrick Bouffard :
> I'm registered on the wiki as PatrickBouffard. I'd like to chip away at
> reviewing and proofing (I have a good eye for speling misstakes) and I can
> probably write some docstrings too.

Thank you for helping!  Your account is now active.

Regards
Stéfan

From ivo.maljevic at gmail.com  Tue Jun 10 20:49:59 2008
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Tue, 10 Jun 2008 20:49:59 -0400
Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons
Message-ID: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com>

Hi,
I am a relatively new SciPy/NumPy user and I must say I like it. I don't know
if this is something that is well known, but I ran some simple tests,
and I found out that, depending on whether one imports NumPy or SciPy,
programs execute at quite different speeds. I'm guessing the problem has
something to do with how it cycles through loops.

My initial goal was to compare Python scripts with Octave/Matlab ones, but
then I noticed the speed difference between NumPy and SciPy.

To cut the story short, here are the results:

Execution times in seconds, test 1:

         Python   Octave   Fortran     C
========================================
NumPy:     23.9    36.05       6.7   6.8
SciPy:     44.5

Execution times in seconds, test 2:

         Python   Octave   Fortran     C
========================================
NumPy:      9.4      5.5       6.7   6.8
SciPy:     10.1

Test cases 1 and 2 refer to different parameter values for the same scripts
(test 1: FrameSize=100, test 2: FrameSize=10000).

If anybody would like to experiment, I've copied the source code for the
simple BPSK simulation that determines the raw bit error rate.
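(One shortcut worth noting before the listings below: when scipy is
imported anyway, the hand-built Fortran erfc helper can be skipped,
since scipy.special ships a vectorized complementary error function:)

from numpy import sqrt
from scipy.special import erfc   # replaces the hand-compiled Fortran helper

snr = 5                          # dB, illustrative value
No = 10**(-snr/10.0)
Pe = 0.5*erfc(sqrt(1.0/No))      # same quantity the Fortran erfc computes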
To calculate erfc function with NumPy, I used a simple fotran funciton and comiled it into .so library: subroutine erfc(x,y) real, intent(in) :: x real, intent(out) :: y y = 1-erf(x) end # Python script # For some reason, numpy is much faster than scipy, Try to comment out one or the other line #from numpy import * from scipy import * import erfc SNR_MIN = -1 SNR_MAX = 10 FrameSize = 100 # <-- change me for test 2 Eb_No_dB = arange(SNR_MIN,SNR_MAX+1) # signal vector s = ones(FrameSize) a = random.rand(FrameSize) a_i = where(a < 0.5) s[a_i] = -1 for snr in Eb_No_dB: No = 10**(-snr/10.0) Pe = 0.5*erfc.erfc(sqrt(1.0/No)) nFrames = ceil(200.0/FrameSize/Pe) error_sum = 0 for frame in arange(nFrames): # noise #n = sqrt(No/2)*random.randn(FrameSize) n = random.normal(scale=sqrt(No/2), size=FrameSize) x = s + n # detection y = sign(x) # error counting err = where (y != s) error_sum += len(err[0]) print 'Eb_No_dB=%2d, BER=%10.4e, Pe=%10.4e' % \ (snr, error_sum/(FrameSize*nFrames), Pe) Octave m-file: % bpsk_sim.m - octave version clear tic SNR_MIN = -1; SNR_MAX = 10; FrameSize = 100; Eb_No_dB = SNR_MIN:SNR_MAX; % signal vector s = ones(1,FrameSize); a = rand(1,FrameSize); a_i = find(a < 0.5); s(a_i) = -1; for snr=SNR_MIN:SNR_MAX No = 10^(-snr/10.0); Pe = 0.5*erfc(sqrt(1.0/No)); nFrames = ceil(200/FrameSize/Pe); error_sum = 0; for frame=1:nFrames % noise n = sqrt(No/2)*randn(1,FrameSize); x = s + n; y = sign(x); err = find( y ~= s); error_sum = error_sum + length(err); end fprintf('Eb_No_dB=%2d, BER=%10.4e, Pe=%10.4e\n', ... snr, error_sum/(FrameSize*nFrames), Pe) end toc -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan at ajackson.org Tue Jun 10 21:10:05 2008 From: alan at ajackson.org (Alan Jackson) Date: Tue, 10 Jun 2008 20:10:05 -0500 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: References: Message-ID: <20080610201005.43296f34@ajackson.org> For learning statistics, sorry they aren't python, but I really like : Introductory Statistics with R - Peter Dalgaard An R and S-plus Companion to Applied Regression - John Fox Primer of Biostatistics - Stanton Glantz Modern Applied Statistics with S - Venebles and Ripley Practical Time Series - Gareth Janacek For what you want, the first two in particular are probably relevant. Personally I switch between scipy and R frequently, depending on which one seems best for the problem at hand. They both have their strengths. On Tue, 10 Jun 2008 15:09:27 -0400 "didier rano" wrote: > Hi all, > > I am using Time Series moduled provided for scipy. I am very impressive by > scipy in general. But I don't have enough backgrounds to understand all > mathematics/statistics models inside scipy. > > Could you help to find some books, articles, courses to improve my > scientific backgrounds ? In particular, I need some knowledges to generate > graphs to show pertinent information (trends...). > > Thank you > Didier Rano -- ----------------------------------------------------------------------- | Alan K. Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. 
- Blake | ----------------------------------------------------------------------- From peter.skomoroch at gmail.com Tue Jun 10 22:00:52 2008 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Tue, 10 Jun 2008 19:00:52 -0700 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: <20080610201005.43296f34@ajackson.org> References: <20080610201005.43296f34@ajackson.org> Message-ID: Strang and the R books are good places to start.... for stats, check out: *"All of Statistics*. A Concise Course in Statistical Inference". by Larry Wasserman http://www.stat.cmu.edu/~larry/all-of-statistics/index.html On Tue, Jun 10, 2008 at 6:10 PM, Alan Jackson wrote: > For learning statistics, sorry they aren't python, but I really like : > > Introductory Statistics with R - Peter Dalgaard > An R and S-plus Companion to Applied Regression - John Fox > Primer of Biostatistics - Stanton Glantz > Modern Applied Statistics with S - Venebles and Ripley > Practical Time Series - Gareth Janacek > > For what you want, the first two in particular are probably relevant. > Personally I switch between scipy and R frequently, depending on which > one seems best for the problem at hand. They both have their strengths. > > On Tue, 10 Jun 2008 15:09:27 -0400 > "didier rano" wrote: > > > Hi all, > > > > I am using Time Series moduled provided for scipy. I am very impressive > by > > scipy in general. But I don't have enough backgrounds to understand all > > mathematics/statistics models inside scipy. > > > > Could you help to find some books, articles, courses to improve my > > scientific backgrounds ? In particular, I need some knowledges to > generate > > graphs to show pertinent information (trends...). > > > > Thank you > > Didier Rano > > > -- > ----------------------------------------------------------------------- > | Alan K. Jackson | To see a World in a Grain of Sand | > | alan at ajackson.org | And a Heaven in a Wild Flower, | > | www.ajackson.org | Hold Infinity in the palm of your hand | > | Houston, Texas | And Eternity in an hour. - Blake | > ----------------------------------------------------------------------- > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com http://del.icio.us/pskomoroch -------------- next part -------------- An HTML attachment was scrubbed... URL: From didier.rano at gmail.com Tue Jun 10 22:07:23 2008 From: didier.rano at gmail.com (didier rano) Date: Tue, 10 Jun 2008 22:07:23 -0400 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: <20080610201005.43296f34@ajackson.org> References: <20080610201005.43296f34@ajackson.org> Message-ID: Wahh so many books to read. But to analysis graph related to time-series, I don't know if I need more a statistic approach or pure mathematic approach. Maybe that I could use both approaches. For example, I need to analysis the trend of a graph, or remove some no relevants data in a graph. Thanks for all your help, and sorry for my poor background in mathematics (I need to learn linear algebre too !) 
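(On the trend question specifically: a least-squares line fit plus a
moving average already covers a lot of ground before any heavier
statistics is needed. A minimal numpy sketch on a synthetic series,
purely illustrative:)

import numpy as np

t = np.arange(100.0)
y = 0.3*t + 5 + np.random.randn(100)       # synthetic series: trend + noise

slope, intercept = np.polyfit(t, y, 1)     # degree-1 least-squares fit
trend = np.polyval([slope, intercept], t)
detrended = y - trend                      # residuals with the trend removed

# 7-point moving average as a simple smoother
smooth = np.convolve(y, np.ones(7)/7.0, mode='valid')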
2008/6/10 Alan Jackson : > For learning statistics, sorry they aren't python, but I really like : > > Introductory Statistics with R - Peter Dalgaard > An R and S-plus Companion to Applied Regression - John Fox > Primer of Biostatistics - Stanton Glantz > Modern Applied Statistics with S - Venebles and Ripley > Practical Time Series - Gareth Janacek > > For what you want, the first two in particular are probably relevant. > Personally I switch between scipy and R frequently, depending on which > one seems best for the problem at hand. They both have their strengths. > > On Tue, 10 Jun 2008 15:09:27 -0400 > "didier rano" wrote: > > > Hi all, > > > > I am using Time Series moduled provided for scipy. I am very impressive > by > > scipy in general. But I don't have enough backgrounds to understand all > > mathematics/statistics models inside scipy. > > > > Could you help to find some books, articles, courses to improve my > > scientific backgrounds ? In particular, I need some knowledges to > generate > > graphs to show pertinent information (trends...). > > > > Thank you > > Didier Rano > > > -- > ----------------------------------------------------------------------- > | Alan K. Jackson | To see a World in a Grain of Sand | > | alan at ajackson.org | And a Heaven in a Wild Flower, | > | www.ajackson.org | Hold Infinity in the palm of your hand | > | Houston, Texas | And Eternity in an hour. - Blake | > ----------------------------------------------------------------------- > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Didier Rano didier.rano at gmail.com http://www.jaxtr.com/didierrano -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Wed Jun 11 03:15:44 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 11 Jun 2008 09:15:44 +0200 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> Message-ID: Hi, The difference in speed may be only due to the fact that the scipy modle has much more functions than numpy's. Import only what you need and try again. Matthieu 2008/6/11 Ivo Maljevic : > Hi, > I am relatively new SciPy/NumPy user and I must say I like it. I don't know > if this is something that is well known, but I run some simple tests, > and I found out that, depending on whether one imports NumPy or SciPy, > programs executes at quite different speeds. I'm guessing the problem has > something to do with how it cycles through loops. > > My initial goal was to compare python scripts with octave/matlab ones, but > then I noticed the speed difference between NumPy and SciPy. > > To cut the story short, here are results: > > Execution times in seconds, test 1: > > Python Octave Fortran C > ======================================================== > NumPy: 23.9 36.05 6.7 6.8 > SciPy: 44.5 > > > Execution times in seconds, test 2: > > Python Octave Fortran C > ======================================================== > NumPy: 9.4 5.5 6.7 6.8 > SciPy: 10.1 > > > Test cases 1 and 2 refer to different parameter values for the same scipts > (test 1: FrameSize=100, test 2: FrameSize=10000). > > If anybody would like to experiment, I've copied the source code for the > simple BPSK simulation that determines the raw bit error rate. 
> > To calculate erfc function with NumPy, I used a simple fotran funciton and > comiled it into .so library: > > subroutine erfc(x,y) > real, intent(in) :: x > real, intent(out) :: y > y = 1-erf(x) > end > > # Python script > > # For some reason, numpy is much faster than scipy, Try to comment out one > or the other line > #from numpy import * > from scipy import * > import erfc > > SNR_MIN = -1 > SNR_MAX = 10 > FrameSize = 100 # <-- change me for test 2 > > Eb_No_dB = arange(SNR_MIN,SNR_MAX+1) > > # signal vector > s = ones(FrameSize) > a = random.rand(FrameSize) > a_i = where(a < 0.5) > s[a_i] = -1 > > for snr in Eb_No_dB: > > No = 10**(-snr/10.0) > Pe = 0.5*erfc.erfc(sqrt(1.0/No)) > nFrames = ceil(200.0/FrameSize/Pe) > error_sum = 0 > > for frame in arange(nFrames): > > # noise > #n = sqrt(No/2)*random.randn(FrameSize) > n = random.normal(scale=sqrt(No/2), size=FrameSize) > x = s + n > > # detection > y = sign(x) > > # error counting > err = where (y != s) > error_sum += len(err[0]) > > print 'Eb_No_dB=%2d, BER=%10.4e, Pe=%10.4e' % \ > (snr, error_sum/(FrameSize*nFrames), Pe) > > > Octave m-file: > % bpsk_sim.m - octave version > clear > > tic > > SNR_MIN = -1; > SNR_MAX = 10; > FrameSize = 100; > Eb_No_dB = SNR_MIN:SNR_MAX; > > % signal vector > s = ones(1,FrameSize); > a = rand(1,FrameSize); > a_i = find(a < 0.5); > s(a_i) = -1; > > for snr=SNR_MIN:SNR_MAX > > No = 10^(-snr/10.0); > Pe = 0.5*erfc(sqrt(1.0/No)); > nFrames = ceil(200/FrameSize/Pe); > error_sum = 0; > > for frame=1:nFrames > > % noise > n = sqrt(No/2)*randn(1,FrameSize); > x = s + n; > > y = sign(x); > > err = find( y ~= s); > error_sum = error_sum + length(err); > end > > fprintf('Eb_No_dB=%2d, BER=%10.4e, Pe=%10.4e\n', ... > snr, error_sum/(FrameSize*nFrames), Pe) > end > toc > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Jun 11 04:03:32 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 11 Jun 2008 17:03:32 +0900 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: References: <20080610201005.43296f34@ajackson.org> Message-ID: <484F86D4.8030408@ar.media.kyoto-u.ac.jp> didier rano wrote: > Wahh so many books to read. > > But to analysis graph related to time-series, I don't know if I need > more a statistic approach or pure mathematic approach. Maybe that I > could use both approaches. It really depends on what you mean by pure mathematic approach. Purely mathematic approach to probabilities and statistics are mostly just that: purely mathematical. Don't get me wrong, maths is great, and probabilities/statistics are interesting mathematics topics on their own, but if you want to handle graphs, time series and all that, I don't think it will help you much. I second the book by Wasserman, although it does not treat a lot of time series stuff. But it is concise and precise (it is written with a relatively practical POV by someone who is definitely familiar with the theory; in particular, there are a lot of subtle examples and counter examples which are well explained, contrary to many other books). Another book. 
which I have not read entirely yet, but looks related to
what you are looking for, is the book by Gelman et al.:

"Bayesian Data Analysis", by Gelman A., John B. Carlin, Hal S. Stern,
and Donald B. Rubin.
http://www.stat.columbia.edu/~gelman/book/

Not much theory there, but it is really oriented toward data analysis as
the title suggests :)

> Thanks for all your help, and sorry for my poor background in
> mathematics (I need to learn linear algebre too !)

If you do multivariate analysis, you need to be more than familiar with
linear algebra, I think. I don't know any good reference on this, but
some open courseware may be nice (they have some video, too):

http://ocw.mit.edu/OcwWeb/Mathematics/index.htm

cheers,

David

From gael.varoquaux at normalesup.org  Wed Jun 11 04:39:27 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 11 Jun 2008 10:39:27 +0200
Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons
In-Reply-To:
References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com>
Message-ID: <20080611083927.GA3926@phare.normalesup.org>

On Wed, Jun 11, 2008 at 09:15:44AM +0200, Matthieu Brucher wrote:
> The difference in speed may be only due to the fact that the scipy modle
> has much more functions than numpy's.

Matthieu does pinpoint an interesting point: the import time of scipy will
be much larger than the import time of numpy. You could try to run the
tests in a way that gets rid of import time, e.g. by importing once and
running the code many times.

However, if I recall correctly, there are other differences in functions
between scipy and numpy. Some functions in scipy do more than the
corresponding functions in numpy. Here is an example:

In [1]: from scipy import sqrt as ssqrt

In [2]: from numpy import sqrt as nsqrt

In [3]: ssqrt(-1)
Out[3]: 1j

In [4]: nsqrt(-1)
Out[4]: nan

In [5]: %timeit ssqrt(-1)
10000 loops, best of 3: 37.7 µs per loop

In [6]: %timeit nsqrt(-1)
100000 loops, best of 3: 6.14 µs per loop

Maybe this is what you are seeing.

Gaël

From haase at msg.ucsf.edu  Wed Jun 11 04:50:14 2008
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Wed, 11 Jun 2008 10:50:14 +0200
Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons
In-Reply-To: <20080611083927.GA3926@phare.normalesup.org>
References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com>
	<20080611083927.GA3926@phare.normalesup.org>
Message-ID:

On Wed, Jun 11, 2008 at 10:39 AM, Gael Varoquaux wrote:
> On Wed, Jun 11, 2008 at 09:15:44AM +0200, Matthieu Brucher wrote:
>> The difference in speed may be only due to the fact that the scipy modle
>> has much more functions than numpy's.
>
> Matthieu does pinpoint an interesting point: the import time of scipy will
> be much larger than the import time of numpy. You could try to run the
> tests in a way that gets rid of import time, e.g. by importing once and
> running the code many times.
>
> However, if I recall correctly, there are other differences in functions
> between scipy and numpy. Some functions in scipy do more than the
> corresponding functions in numpy. Here is an example:
>
> In [1]: from scipy import sqrt as ssqrt
>
> In [2]: from numpy import sqrt as nsqrt
>
> In [3]: ssqrt(-1)
> Out[3]: 1j
>
> In [4]: nsqrt(-1)
> Out[4]: nan
>
> In [5]: %timeit ssqrt(-1)
> 10000 loops, best of 3: 37.7 µs per loop
>
> In [6]: %timeit nsqrt(-1)
> 100000 loops, best of 3: 6.14 µs per loop
>
> Maybe this is what you are seeing.
>
Not to say that this is scary, that two functions, same name, give
different results ....
But is there a list of these differences ? (Maybe a wiki page, like ScipyNumpyDifferences !?) Thanks, Sebastian Haase From gael.varoquaux at normalesup.org Wed Jun 11 04:55:55 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 11 Jun 2008 10:55:55 +0200 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> Message-ID: <20080611085555.GC3926@phare.normalesup.org> On Wed, Jun 11, 2008 at 10:50:14AM +0200, Sebastian Haase wrote: > Not to say that this is scary, that two functions, same name, give > different results .... I don't agree. That's why you have namespaces. numpy.sum and __builtin__.sum don't give the same results, and I find that expected. Ga?l From haase at msg.ucsf.edu Wed Jun 11 05:18:31 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 11 Jun 2008 11:18:31 +0200 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <20080611085555.GC3926@phare.normalesup.org> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> Message-ID: On Wed, Jun 11, 2008 at 10:55 AM, Gael Varoquaux wrote: > On Wed, Jun 11, 2008 at 10:50:14AM +0200, Sebastian Haase wrote: >> Not to say that this is scary, that two functions, same name, give >> different results .... > > I don't agree. That's why you have namespaces. numpy.sum and > __builtin__.sum don't give the same results, and I find that expected. > Of course -- but scipy.sum and numpy.sum *I* would intuitively presume to be "alike". But I really did not want to bring up a new discussion about this, I *do* see the point for having a "more user friendly" version of sqrt (even if it is slower) What I said is: there should be a list of all such differences, to minimize surprises. -Sebastian From gaedol at gmail.com Wed Jun 11 05:20:17 2008 From: gaedol at gmail.com (Marco) Date: Wed, 11 Jun 2008 09:20:17 +0000 Subject: [SciPy-user] Some mathematics/statisctics books In-Reply-To: <484F86D4.8030408@ar.media.kyoto-u.ac.jp> References: <20080610201005.43296f34@ajackson.org> <484F86D4.8030408@ar.media.kyoto-u.ac.jp> Message-ID: On Wed, Jun 11, 2008 at 8:03 AM, David Cournapeau wrote: > didier rano wrote: >> Wahh so many books to read. >> >> But to analysis graph related to time-series, I don't know if I need >> more a statistic approach or pure mathematic approach. Maybe that I >> could use both approaches. > > It really depends on what you mean by pure mathematic approach. Purely > mathematic approach to probabilities and statistics are mostly just > that: purely mathematical. Don't get me wrong, maths is great, and > probabilities/statistics are interesting mathematics topics on their > own, but if you want to handle graphs, time series and all that, I don't > think it will help you much. > > I second the book by Wasserman, although it does not treat a lot of time > series stuff. But it is concise and precise (it is written with a > relatively practical POV by someone who is definitely familiar with the > theory; in particular, there are a lot of subtle examples and counter > examples which are well explained, contrary to many other books). > > Another book. which I have not read entirely yet, but looks related to > what you are looking for, is the book by Gelman et al.: > > "Bayesian Data Analysis", by Gelman A., John B. Carlin > , Hal S. 
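(A quick check of the numpy.lib.scimath escape hatch Robert mentions;
it gives the scipy-style upcasting without importing scipy at all:)

>>> from numpy.lib import scimath
>>> scimath.sqrt(-1)     # upcasts to complex, like scipy.sqrt
1j
>>> from numpy import sqrt
>>> sqrt(-1.)            # plain numpy stays within the input dtype
nan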
Stern > , and Donald B. Rubin. > http://www.stat.columbia.edu/~gelman/book/ > > Not much theory there, but is really oriented toward data analysis as > the title suggests :) >> >> Thanks for all your help, and sorry for my poor background in >> mathematics (I need to learn linear algebre too !) > > If you do multivariate analysis, you need to be more than familiar with > linear algebra, I think. I don't know any good reference on this, but > some open courseware may be nice (they have some video, too): > > http://ocw.mit.edu/OcwWeb/Mathematics/index.htm > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Fred Allen - "Imitation is the sincerest form of television." From robert.kern at gmail.com Wed Jun 11 05:20:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Jun 2008 04:20:30 -0500 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <20080611085555.GC3926@phare.normalesup.org> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> Message-ID: <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> On Wed, Jun 11, 2008 at 03:55, Gael Varoquaux wrote: > On Wed, Jun 11, 2008 at 10:50:14AM +0200, Sebastian Haase wrote: >> Not to say that this is scary, that two functions, same name, give >> different results .... > > I don't agree. That's why you have namespaces. numpy.sum and > __builtin__.sum don't give the same results, and I find that expected. While I don't *particularly* want to open this can of worms again, the power of the Internet compels me. To my mind, the big fault with the current arrangement is not that two functions with the same name have different functionality, but that scipy.* is *almost* the same namespace as numpy.* without any clear signposts as to the differences. It's one thing to have the single common name between numpy.* and __builtin__.* have different functions in each namespace. It's another to say that 9 of 489 common names do. For what it's worth, the "scipy versions" of the 9 functions are all exposed from numpy.lib.scimath. What really annoys me is that "from scipy import *" imports all of the subpackages. Again. I don't know how many times I thought I removed that nonsense, but like a bloody vampire, it just ... keeps ... coming ... back. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cohen at slac.stanford.edu Wed Jun 11 05:31:11 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Wed, 11 Jun 2008 11:31:11 +0200 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> Message-ID: <484F9B5F.80608@slac.stanford.edu> FWIW: In [28]: numpy.complex(0,1)**2 Out[28]: (-1+0j) In [29]: numpy.sqrt(numpy.complex(0,1)**2) Out[29]: (0.0+1.0j) In [30]: numpy.sqrt(-1) Out[30]: nan So numpy knows about complex, but numpy.sqrt does not cast -1 into a complex number. 
Beyond Gael's valid statement about namespacing, I find this slightly surprising and maybe even a tad inconsistent, but I will certainly not fight a war over it :) JCT Robert Kern wrote: > On Wed, Jun 11, 2008 at 03:55, Gael Varoquaux > wrote: > >> On Wed, Jun 11, 2008 at 10:50:14AM +0200, Sebastian Haase wrote: >> >>> Not to say that this is scary, that two functions, same name, give >>> different results .... >>> >> I don't agree. That's why you have namespaces. numpy.sum and >> __builtin__.sum don't give the same results, and I find that expected. >> > > While I don't *particularly* want to open this can of worms again, the > power of the Internet compels me. To my mind, the big fault with the > current arrangement is not that two functions with the same name have > different functionality, but that scipy.* is *almost* the same > namespace as numpy.* without any clear signposts as to the > differences. It's one thing to have the single common name between > numpy.* and __builtin__.* have different functions in each namespace. > It's another to say that 9 of 489 common names do. > > For what it's worth, the "scipy versions" of the 9 functions are all > exposed from numpy.lib.scimath. > > What really annoys me is that "from scipy import *" imports all of the > subpackages. Again. I don't know how many times I thought I removed > that nonsense, but like a bloody vampire, it just ... keeps ... coming > ... back. > > From silva at lma.cnrs-mrs.fr Wed Jun 11 05:43:57 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 11 Jun 2008 11:43:57 +0200 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <484F9B5F.80608@slac.stanford.edu> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484F9B5F.80608@slac.stanford.edu> Message-ID: <1213177437.3201.20.camel@Portable-s2m.cnrs-mrs.fr> Python 2.5.2 (r252:60911, May 28 2008, 08:35:32) [GCC 4.2.4 (Debian 4.2.4-1)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> np.sqrt(-1.) nan >>> np.sqrt(-1.+0.j) 1j >>> np.__version__ '1.1.0' -- Fabrice Silva LMA UPR CNRS 7051 - ?quipe S2M From cohen at slac.stanford.edu Wed Jun 11 05:50:07 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Wed, 11 Jun 2008 11:50:07 +0200 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <1213177437.3201.20.camel@Portable-s2m.cnrs-mrs.fr> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484F9B5F.80608@slac.stanford.edu> <1213177437.3201.20.camel@Portable-s2m.cnrs-mrs.fr> Message-ID: <484F9FCF.1030203@slac.stanford.edu> heuh... oui, c'est exactement mon point. JCT Fabrice Silva wrote: > Python 2.5.2 (r252:60911, May 28 2008, 08:35:32) > [GCC 4.2.4 (Debian 4.2.4-1)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>>> import numpy as np >>>> np.sqrt(-1.) >>>> > nan > >>>> np.sqrt(-1.+0.j) >>>> > 1j > >>>> np.__version__ >>>> > '1.1.0' > > From pearu at cens.ioc.ee Wed Jun 11 05:52:15 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 11 Jun 2008 11:52:15 +0200 Subject: [SciPy-user] NumPy vs. 
SciPy and other speed comparisons In-Reply-To: <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> Message-ID: <484FA04F.1000808@cens.ioc.ee> Robert Kern wrote: > What really annoys me is that "from scipy import *" imports all of the > subpackages. Again. I don't know how many times I thought I removed > that nonsense, but like a bloody vampire, it just ... keeps ... coming > ... back. There should be a unittest for that.. Eventhough scipy.__version__ == 0.7.0.dev4124 (that I have at the current computer) does not import scipy subpackages, it does import lots of other stuff that seems unnecessary. For example, importing nose takes 1/3 of the scipy import time: pearu at pearu-laptop:~$ python >>> %time import scipy CPU times: user 0.22 s, sys: 0.08 s, total: 0.30 s Wall time: 0.29 >>> pearu at pearu-laptop:~$ python >>> %time import nose CPU times: user 0.07 s, sys: 0.02 s, total: 0.10 s Wall time: 0.10 >>> %time import scipy CPU times: user 0.13 s, sys: 0.04 s, total: 0.17 s Wall time: 0.18 Another 1/3 of the time goes to importing numpy: pearu at pearu-laptop:~$ python >>> %time import nose CPU times: user 0.08 s, sys: 0.02 s, total: 0.10 s Wall time: 0.10 >>> %time import numpy CPU times: user 0.07 s, sys: 0.02 s, total: 0.09 s Wall time: 0.10 >>> %time import scipy CPU times: user 0.08 s, sys: 0.04 s, total: 0.11 s Wall time: 0.11 Pearu From robert.kern at gmail.com Wed Jun 11 05:55:37 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Jun 2008 04:55:37 -0500 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <484FA04F.1000808@cens.ioc.ee> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> Message-ID: <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> On Wed, Jun 11, 2008 at 04:52, Pearu Peterson wrote: > Robert Kern wrote: > >> What really annoys me is that "from scipy import *" imports all of the >> subpackages. Again. I don't know how many times I thought I removed >> that nonsense, but like a bloody vampire, it just ... keeps ... coming >> ... back. > > There should be a unittest for that.. > > Eventhough scipy.__version__ == 0.7.0.dev4124 (that I have at the > current computer) does not import scipy subpackages, Specifically, it is "from scipy import *" that imports the subpackages. "import scipy" does not. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pearu at cens.ioc.ee Wed Jun 11 06:06:15 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 11 Jun 2008 12:06:15 +0200 Subject: [SciPy-user] NumPy vs. 
SciPy and other speed comparisons In-Reply-To: <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> Message-ID: <484FA397.5070704@cens.ioc.ee> Robert Kern wrote: > On Wed, Jun 11, 2008 at 04:52, Pearu Peterson wrote: >> Robert Kern wrote: >> >>> What really annoys me is that "from scipy import *" imports all of the >>> subpackages. Again. I don't know how many times I thought I removed >>> that nonsense, but like a bloody vampire, it just ... keeps ... coming >>> ... back. >> There should be a unittest for that.. >> >> Eventhough scipy.__version__ == 0.7.0.dev4124 (that I have at the >> current computer) does not import scipy subpackages, > > Specifically, it is "from scipy import *" that imports the > subpackages. "import scipy" does not. Ahh, right. We used to have postponed import hooks for that in past but afaik we dropped these because they were hackish and at some moment the time of importing scipy improved (I think it was in Python 2.4 or 2.5) considerably. Nevertheless, the test functions could import nose only when required. That would save some import time. Pearu From fredmfp at gmail.com Wed Jun 11 06:30:45 2008 From: fredmfp at gmail.com (fred) Date: Wed, 11 Jun 2008 12:30:45 +0200 Subject: [SciPy-user] array mean issue... Message-ID: <484FA955.3000009@gmail.com> Hi, I get the following issue I don't understand: marsu:~/{1}/> a=rand(400,400,400) marsu:~/{2}/> a.mean() Out[2]: 0.500002086829 marsu:~/{3}/> b=asarray(a, dtype='f') marsu:~/{4}/> b.mean() Out[4]: 0.262144 What's going on ? How can I compute the mean on "big" float arrays ? By "big", I mean that for array of 300x300x300, I get the same results for float32 and float64 arrays. TIA. Cheers, -- Fred From robert.kern at gmail.com Wed Jun 11 06:33:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Jun 2008 05:33:30 -0500 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <484FA397.5070704@cens.ioc.ee> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <484FA397.5070704@cens.ioc.ee> Message-ID: <3d375d730806110333r43316877ne507befa6021bbe3@mail.gmail.com> On Wed, Jun 11, 2008 at 05:06, Pearu Peterson wrote: > > Robert Kern wrote: >> On Wed, Jun 11, 2008 at 04:52, Pearu Peterson wrote: >>> Robert Kern wrote: >>> >>>> What really annoys me is that "from scipy import *" imports all of the >>>> subpackages. Again. I don't know how many times I thought I removed >>>> that nonsense, but like a bloody vampire, it just ... keeps ... coming >>>> ... back. >>> There should be a unittest for that.. >>> >>> Eventhough scipy.__version__ == 0.7.0.dev4124 (that I have at the >>> current computer) does not import scipy subpackages, >> >> Specifically, it is "from scipy import *" that imports the >> subpackages. "import scipy" does not. > > Ahh, right. 
> > We used to have postponed import hooks for that in past > but afaik we dropped these because they were hackish and at some moment > the time of importing scipy improved (I think it was in Python 2.4 or > 2.5) considerably. The problem is that we left scipy.pkgload in, and creating that from numpy._import_tools.PackageLoader implicitly adds all of the subpackages to scipy.__all__. > Nevertheless, the test functions could import nose only when required. > That would save some import time. Yes, I have a patch for that. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Jun 11 06:44:34 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Jun 2008 05:44:34 -0500 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <3d375d730806110333r43316877ne507befa6021bbe3@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <484FA397.5070704@cens.ioc.ee> <3d375d730806110333r43316877ne507befa6021bbe3@mail.gmail.com> Message-ID: <3d375d730806110344h2cf14d51h722589e8606ca2aa@mail.gmail.com> On Wed, Jun 11, 2008 at 05:33, Robert Kern wrote: > On Wed, Jun 11, 2008 at 05:06, Pearu Peterson wrote: >> We used to have postponed import hooks for that in past >> but afaik we dropped these because they were hackish and at some moment >> the time of importing scipy improved (I think it was in Python 2.4 or >> 2.5) considerably. > > The problem is that we left scipy.pkgload in, and creating that from > numpy._import_tools.PackageLoader implicitly adds all of the > subpackages to scipy.__all__. Correction: we actually do call pkgload(postpone=True). However, even with the postponed import (which does not add any proxy objects), it still appends to __all__. I believe the following patch to numpy fixes the problem, but I'm not sure if leaving the if clause alone is correct in all cases: Index: numpy/_import_tools.py =================================================================== --- numpy/_import_tools.py (revision 5245) +++ numpy/_import_tools.py (working copy) @@ -183,9 +183,6 @@ postpone_import = getattr(info_module,'postpone_import',False) if (postpone and not global_symbols) \ or (postpone_import and postpone is not None): - self.log('__all__.append(%r)' % (package_name)) - if '.' not in package_name: - self.parent_export_names.append(package_name) continue old_object = frame.f_locals.get(package_name,None) This can probably go into numpy 1.1.1 as a bugfix, so I don't think it's critical to work around it in scipy/__init__.py. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pearu at cens.ioc.ee Wed Jun 11 07:12:40 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 11 Jun 2008 13:12:40 +0200 Subject: [SciPy-user] NumPy vs. 
SciPy and other speed comparisons In-Reply-To: <3d375d730806110344h2cf14d51h722589e8606ca2aa@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <484FA397.5070704@cens.ioc.ee> <3d375d730806110333r43316877ne507befa6021bbe3@mail.gmail.com> <3d375d730806110344h2cf14d51h722589e8606ca2aa@mail.gmail.com> Message-ID: <484FB328.3090405@cens.ioc.ee> Robert Kern wrote: > On Wed, Jun 11, 2008 at 05:33, Robert Kern wrote: >> On Wed, Jun 11, 2008 at 05:06, Pearu Peterson wrote: > >>> We used to have postponed import hooks for that in past >>> but afaik we dropped these because they were hackish and at some moment >>> the time of importing scipy improved (I think it was in Python 2.4 or >>> 2.5) considerably. >> The problem is that we left scipy.pkgload in, and creating that from >> numpy._import_tools.PackageLoader implicitly adds all of the >> subpackages to scipy.__all__. > > Correction: we actually do call pkgload(postpone=True). However, even > with the postponed import (which does not add any proxy objects), it > still appends to __all__. I believe the following patch to numpy fixes > the problem, but I'm not sure if leaving the if clause alone is > correct in all cases: > > Index: numpy/_import_tools.py > =================================================================== > --- numpy/_import_tools.py (revision 5245) > +++ numpy/_import_tools.py (working copy) > @@ -183,9 +183,6 @@ > postpone_import = getattr(info_module,'postpone_import',False) > if (postpone and not global_symbols) \ > or (postpone_import and postpone is not None): > - self.log('__all__.append(%r)' % (package_name)) > - if '.' not in package_name: > - self.parent_export_names.append(package_name) > continue > > old_object = frame.f_locals.get(package_name,None) > > > This can probably go into numpy 1.1.1 as a bugfix, so I don't think > it's critical to work around it in scipy/__init__.py. The patch does not affect the 'from scipy import *' time much. In my computer it is around 0.62secs in both cases when having names in __all__ or not. The long import seems to be due to scipy/linalg/iterative.py that imports scipy.sparse which takes most of the import time. And scipy.linalg should not be imported when importing scipy. So, I think the real reason is hiding somewhere else.. I am looking into it... Pearu From ivo.maljevic at gmail.com Wed Jun 11 07:17:08 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Wed, 11 Jun 2008 07:17:08 -0400 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> Message-ID: <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> 2008/6/11 Robert Kern : Specifically, it is "from scipy import *" that imports the subpackages. "import scipy" does not. Sorry, but I am not sure I understand everything here. 1. "import scipy as sp" vs "import numpy as np", and then I added prefixes in front of all the functions. 
The times are the same, 44 and 23 seconds, respectively. There goes that theory. 2. I tried explicitly importing only the required functions, e.g., from numpy import sqrt, arange, ones, zeros, random, where, ceil, sign Again, the same results. Would the loading time of the two packages account for over 20 seconds difference in execution time? Robert, could you actually try the code I've sent? Thanks, Ivo > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >
From david at ar.media.kyoto-u.ac.jp Wed Jun 11 07:13:08 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 11 Jun 2008 20:13:08 +0900 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> Message-ID: <484FB344.4050905@ar.media.kyoto-u.ac.jp> Ivo Maljevic wrote: > 2. I tried explicitly importing only the required functions, e.g., > from numpy import sqrt, arange, ones, zeros, random, where, ceil, sign > Again, the same results. > > Would the loading time of the two packages account for over 20 seconds > difference in execution time? No, I think the discussion carried away from your initial problem. Although importing scipy is slow, it certainly does not take 20 seconds, unless you are running an ancient computer. Let me check your problem, which may just show that one function in scipy is much slower than a similar one in numpy. cheers, David
From pearu at cens.ioc.ee Wed Jun 11 07:27:22 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 11 Jun 2008 13:27:22 +0200 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <484FB328.3090405@cens.ioc.ee> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <484FA397.5070704@cens.ioc.ee> <3d375d730806110333r43316877ne507befa6021bbe3@mail.gmail.com> <3d375d730806110344h2cf14d51h722589e8606ca2aa@mail.gmail.com> <484FB328.3090405@cens.ioc.ee> Message-ID: <484FB69A.80207@cens.ioc.ee> Ok, found it.
To fix the long scipy import time one needs to apply Roberts patch to numpy/_import_tools.py and the following patch to scipy: Index: scipy/__init__.py =================================================================== --- scipy/__init__.py (revision 4424) +++ scipy/__init__.py (working copy) @@ -31,9 +31,11 @@ _num.seterr(all='ignore') __all__ += ['oldnumeric']+_num.__all__ __all__ += ['randn', 'rand', 'fft', 'ifft'] +if 'linalg' in __all__: + __all__.remove('linalg') + __doc__ += """ Contents -------- With these patches, the 'from scipy import *' time drops from 0.6 to 0.2 in my computer. Pearu Pearu Peterson wrote: > > Robert Kern wrote: >> On Wed, Jun 11, 2008 at 05:33, Robert Kern wrote: >>> On Wed, Jun 11, 2008 at 05:06, Pearu Peterson wrote: >>>> We used to have postponed import hooks for that in past >>>> but afaik we dropped these because they were hackish and at some moment >>>> the time of importing scipy improved (I think it was in Python 2.4 or >>>> 2.5) considerably. >>> The problem is that we left scipy.pkgload in, and creating that from >>> numpy._import_tools.PackageLoader implicitly adds all of the >>> subpackages to scipy.__all__. >> Correction: we actually do call pkgload(postpone=True). However, even >> with the postponed import (which does not add any proxy objects), it >> still appends to __all__. I believe the following patch to numpy fixes >> the problem, but I'm not sure if leaving the if clause alone is >> correct in all cases: >> >> Index: numpy/_import_tools.py >> =================================================================== >> --- numpy/_import_tools.py (revision 5245) >> +++ numpy/_import_tools.py (working copy) >> @@ -183,9 +183,6 @@ >> postpone_import = getattr(info_module,'postpone_import',False) >> if (postpone and not global_symbols) \ >> or (postpone_import and postpone is not None): >> - self.log('__all__.append(%r)' % (package_name)) >> - if '.' not in package_name: >> - self.parent_export_names.append(package_name) >> continue >> >> old_object = frame.f_locals.get(package_name,None) >> >> >> This can probably go into numpy 1.1.1 as a bugfix, so I don't think >> it's critical to work around it in scipy/__init__.py. > > The patch does not affect the 'from scipy import *' time much. > In my computer it is around 0.62secs in both cases when having > names in __all__ or not. > > The long import seems to be due to scipy/linalg/iterative.py that > imports scipy.sparse which takes most of the import time. And > scipy.linalg should not be imported when importing scipy. > So, I think the real reason is hiding somewhere else.. I am looking into > it... > > Pearu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From se.berg at stud.uni-goettingen.de Wed Jun 11 07:55:59 2008 From: se.berg at stud.uni-goettingen.de (Sebastian Stephan Berg) Date: Wed, 11 Jun 2008 13:55:59 +0200 Subject: [SciPy-user] array mean issue... In-Reply-To: <484FA955.3000009@gmail.com> References: <484FA955.3000009@gmail.com> Message-ID: <1213185359.5968.8.camel@sebook> Hello, This is an overflow problem in the way the mean is computed I believe -- it would start to add nothing after a while. In any case you can get the correct/better result with b.mean(dtype='float64'). Maybe someone else can give some more info on why. Just to throw it in, maybe a warning could be issued by checking something using log2(size*mean) and float precision? 
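A quick check bears that explanation out: float32 carries a 24-bit significand, so once the running sum reaches 2**24 the spacing between representable values is 2.0 and adding 0.5 rounds to nothing — and 2**24 divided by fred's 400**3 elements is exactly the 0.262144 he reported. A sketch of the arithmetic only, not of the numpy internals:

import numpy as np

s = np.float32(2 ** 24)
print(s + np.float32(0.5) == s)      # True: the float32 accumulator is stuck
print(2 ** 24 / 400.0 ** 3)          # 0.262144, the bogus "mean"
print(300.0 ** 3 * 0.5 < 2 ** 24)    # True: a 300x300x300 array never hits the ceiling

which also explains why the 300x300x300 case looked fine in both precisions.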
Sebastian
From david at ar.media.kyoto-u.ac.jp Wed Jun 11 07:45:09 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 11 Jun 2008 20:45:09 +0900 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> Message-ID: <484FBAC5.9@ar.media.kyoto-u.ac.jp> Ivo Maljevic wrote: > 2008/6/11 Robert Kern >: > > Specifically, it is "from scipy import *" that imports the > subpackages. "import scipy" does not. > > Sorry, but I am not sure I understand everything here. > > 1. "import scipy as sp" vs "import numpy as np", and then I added > prefixes in front of all the functions. The times are the same, 44 and > 23 seconds, respectively. There goes that theory. > > 2. I tried explicitly importing only the required functions, e.g., > from numpy import sqrt, arange, ones, zeros, random, where, ceil, sign > Again, the same results. > > Would the loading time of the two packages account for over 20 seconds > difference in execution time? Ok, the bad guy is..... sqrt. scipy.sqrt is much slower than numpy.sqrt (note that in your script, you could avoid computing the scale of the normal in the loop). cheers, David
From Jean-Pascal.Mercier at inrialpes.fr Wed Jun 11 08:03:03 2008 From: Jean-Pascal.Mercier at inrialpes.fr (Jean-Pascal Mercier) Date: Wed, 11 Jun 2008 14:03:03 +0200 Subject: [SciPy-user] array mean issue... In-Reply-To: <484FA955.3000009@gmail.com> References: <484FA955.3000009@gmail.com> Message-ID: <484FBEF7.80908@inrialpes.fr> Hi Fred, The problem is the container used to calculate the mean. The default behavior is to use the same container as the passed array dtype. When the array is really big, the float32 container simply becomes very inefficient, which accounts for the error you've encountered. This can be easily solved by providing the dtype as a function parameter. In [1]: from scipy import * In [2]: a = rand(400,400,400) In [3]: b = asarray(a, dtype=float32) In [4]: a.mean() Out[4]: 0.500014522905 In [5]: b.mean() Out[5]: 0.262144 In [6]: b.mean(dtype=float64) Out[6]: 0.500014522902 Cheers, /J-Pascal MERCIER Projet PRIMA - Laboratoire LIG INRIA Grenoble Rhone-Alpes Research Centre / fred wrote: > Hi, > > I get the following issue I don't understand: > > marsu:~/{1}/> a=rand(400,400,400) > > marsu:~/{2}/> a.mean() > Out[2]: 0.500002086829 > > marsu:~/{3}/> b=asarray(a, dtype='f') > > marsu:~/{4}/> b.mean() > Out[4]: 0.262144 > > What's going on ? > > How can I compute the mean on "big" float arrays ? > > By "big", I mean that for array of 300x300x300, I get the same results > for float32 and float64 arrays. > > TIA. > > Cheers, > >
From fredmfp at gmail.com Wed Jun 11 08:08:20 2008 From: fredmfp at gmail.com (fred) Date: Wed, 11 Jun 2008 14:08:20 +0200 Subject: [SciPy-user] array mean issue... In-Reply-To: <1213185359.5968.8.camel@sebook> References: <484FA955.3000009@gmail.com> <1213185359.5968.8.camel@sebook> Message-ID: <484FC034.2010904@gmail.com> Sebastian Stephan Berg wrote: > Hello, > > This is an overflow problem in the way the mean is computed I believe -- > it would start to add nothing after a while. In any case you can get the
In any case you can get the > correct/better result with b.mean(dtype='float64'). Thanks, Sebastian. However, I use nansum to compute the mean. nansum/size gives the same wrong result as mean(), and it seems that is not possible to set the dtype with nansum. Any clue ? TIA. Cheers, -- Fred From fredmfp at gmail.com Wed Jun 11 08:11:34 2008 From: fredmfp at gmail.com (fred) Date: Wed, 11 Jun 2008 14:11:34 +0200 Subject: [SciPy-user] array mean issue... In-Reply-To: <484FBEF7.80908@inrialpes.fr> References: <484FA955.3000009@gmail.com> <484FBEF7.80908@inrialpes.fr> Message-ID: <484FC0F6.9090201@gmail.com> Jean-Pascal Mercier a ?crit : > Hi Fred, > > The problem is the container used to calculate the mean. The default > behavior is to use the same container as the passed array dtype. When > the array is really big, the float32 container simply become very > inefficient which account for the error you've encounter. This can be > easily solved by providing the dtype as a function parameter. Thanks to you too, Jean-Pascal. But I still have an issue I have just posted, with nansum. Cheers, -- Fred From fredmfp at gmail.com Wed Jun 11 08:18:12 2008 From: fredmfp at gmail.com (fred) Date: Wed, 11 Jun 2008 14:18:12 +0200 Subject: [SciPy-user] specify NaN... Message-ID: <484FC284.7010308@gmail.com> Hi again, In my set of data, a few has "really" NaN, others has "special" value for it (say -9999 for positive integer arrays). So, I don't want to take in account these values to compute, min, max, mean and so on. How could I do this ? I recall that the arrays are quite "big", ie ~500x500x500 but < 1000x1000x1000. Any clue ? TIA. Cheers, -- Fred From david at ar.media.kyoto-u.ac.jp Wed Jun 11 08:23:13 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 11 Jun 2008 21:23:13 +0900 Subject: [SciPy-user] specify NaN... In-Reply-To: <484FC284.7010308@gmail.com> References: <484FC284.7010308@gmail.com> Message-ID: <484FC3B1.3000102@ar.media.kyoto-u.ac.jp> fred wrote: > Hi again, > > In my set of data, a few has "really" NaN, others has "special" value > for it (say -9999 for positive integer arrays). > > So, I don't want to take in account these values to compute, min, max, > mean and so on. > If you do not want to take into accout both special and nan, why not using masked array ? David From bsouthey at gmail.com Wed Jun 11 09:14:50 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 11 Jun 2008 08:14:50 -0500 Subject: [SciPy-user] array mean issue... In-Reply-To: <484FC0F6.9090201@gmail.com> References: <484FA955.3000009@gmail.com> <484FBEF7.80908@inrialpes.fr> <484FC0F6.9090201@gmail.com> Message-ID: <484FCFCA.30807@gmail.com> fred wrote: > Jean-Pascal Mercier a ?crit : > >> Hi Fred, >> >> The problem is the container used to calculate the mean. The default >> behavior is to use the same container as the passed array dtype. When >> the array is really big, the float32 container simply become very >> inefficient which account for the error you've encounter. This can be >> easily solved by providing the dtype as a function parameter. >> > Thanks to you too, Jean-Pascal. > > But I still have an issue I have just posted, with nansum. > > > Cheers, > > The issue is the numerical precision (as indicated) and algorithm. See also the Numpy-discussion thread 'calculating the mean and variance of a large float vector' (http://projects.scipy.org/pipermail/numpy-discussion/2008-June/034766.html). Perhaps there is a case to promote everything to float128. 
Bruce
From se.berg at stud.uni-goettingen.de Wed Jun 11 09:36:41 2008 From: se.berg at stud.uni-goettingen.de (Sebastian Stephan Berg) Date: Wed, 11 Jun 2008 15:36:41 +0200 Subject: [SciPy-user] array mean issue... In-Reply-To: <484FC034.2010904@gmail.com> References: <484FA955.3000009@gmail.com> <1213185359.5968.8.camel@sebook> <484FC034.2010904@gmail.com> Message-ID: <1213191401.5968.13.camel@sebook> I am not familiar with things really, but looking at ??nansum code: def nansum(a, axis=None): """Sum the array over the given axis, treating NaNs as 0. """ y = array(a,subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = 0 return y.sum(axis) If this is all there is to nansum, you can just do this small replacement with y[isnan(a)] = 0 and your special values including the isnan, and then again use sum with a more precise dtype. Regards, Sebastian
From ivo.maljevic at gmail.com Wed Jun 11 09:40:27 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Wed, 11 Jun 2008 09:40:27 -0400 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <484FB344.4050905@ar.media.kyoto-u.ac.jp> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> <484FB344.4050905@ar.media.kyoto-u.ac.jp> Message-ID: <826c64da0806110640n4e3c7322l349b98310c5c469c@mail.gmail.com> Thank you David. A few pointers: 1. You can use scipy's implementation of the erfc() function. It will not make a difference in the relative results. That is: from scipy.special import erfc and then replace erfc.erfc with erfc 2. Robert discounted the speed difference observation as nonsense that keeps coming back because he saw the "from scipy import *" statement. I believe it would have been better if he had looked at the numbers. When I change/reduce the number of inner loops (test 2 case, where I make the vector longer), the results for scipy and numpy are almost the same, which does not fit well with his comments about how to import scipy/numpy. You can easily exaggerate the problem by reducing the FrameSize parameter to 10. The execution time difference becomes huge. Then again, it is my mistake that I did not elaborate the observation/"problem" more precisely. Another interesting thing is that octave beats even C and Fortran when there is effectively only one (snr) loop in the script. Could be that I am not optimizing the C and Fortran code properly. I did use the -O3 option, but octave probably has the randn function optimized, whereas I have a function that does a lot of things just to produce a Gaussian random number. Thanks, Ivo Maljevic 2008/6/11 David Cournapeau : > Ivo Maljevic wrote: > > 2. I tried explicitly importing only the required functions, e.g., > > from numpy import sqrt, arange, ones, zeros, random, where, ceil, sign > > Again, the same results. > > > > Would the loading time of the two packages account for over 20 seconds > > difference in execution time? > > No, I think the discussion carried away from your initial problem. > Although importing scipy is slow, it certainly does not take 20 > seconds, unless you are running an ancient computer. > > Let me check your problem, which may just show that one function in > scipy is much slower than a similar one in numpy.
> > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivo.maljevic at gmail.com Wed Jun 11 09:42:46 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Wed, 11 Jun 2008 09:42:46 -0400 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <484FBAC5.9@ar.media.kyoto-u.ac.jp> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> <484FBAC5.9@ar.media.kyoto-u.ac.jp> Message-ID: <826c64da0806110642y28da67a8g1186e0b82a9443fc@mail.gmail.com> Wow, I was typing a message and when I hit "Send" I found all these replies. Robert, my apologies for my "counterattack" and thank you for the explanations. Thanks, Ivo 2008/6/11 David Cournapeau : > Ivo Maljevic wrote: > > 2008/6/11 Robert Kern > >: > > > > Specifically, it is "from scipy import *" that imports the > > subpackages. "import scipy" does not. > > > > Sorry, but I am not sure I understand everything here. > > > > 1. "import scipy as sp" vs "import numpy as np", and then I added > > prefixes in front of all the functions. The times are the same, 44 and > > 23 seconds, respectively. There goes that theory. > > > > 2. I tried explicitly importing only the required functions, e.g, > > from numpy import sqrt, arange, ones, zeros, random, where, ceil, sign > > Agian, the same results. > > > > Would the loading time of the two packages account for over 20 seconds > > difference in execution time? > > Ok, the bad guy is..... sqrt. scipy.sqrt is much slower than numpy.sqrt > (note that in your script, you could avoid computing the scale of the > normal in the loop). > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivo.maljevic at gmail.com Wed Jun 11 09:44:29 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Wed, 11 Jun 2008 09:44:29 -0400 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <826c64da0806110642y28da67a8g1186e0b82a9443fc@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> <484FBAC5.9@ar.media.kyoto-u.ac.jp> <826c64da0806110642y28da67a8g1186e0b82a9443fc@mail.gmail.com> Message-ID: <826c64da0806110644g36d60550ud74c1d35973d11f9@mail.gmail.com> Actually, it was David who responded, so thanks David. My apology for Robert still holds though. Ivo 2008/6/11 Ivo Maljevic : > Wow, I was typing a message and when I hit "Send" I found all these > replies. Robert, my apologies for my "counterattack" and thank you for the > explanations. 
> > Thanks, > Ivo > > 2008/6/11 David Cournapeau : > >> Ivo Maljevic wrote: >> >> > 2008/6/11 Robert Kern > > >: >> > >> > Specifically, it is "from scipy import *" that imports the >> > subpackages. "import scipy" does not. >> > >> > Sorry, but I am not sure I understand everything here. >> > >> > 1. "import scipy as sp" vs "import numpy as np", and then I added >> > prefixes in front of all the functions. The times are the same, 44 and >> > 23 seconds, respectively. There goes that theory. >> > >> > 2. I tried explicitly importing only the required functions, e.g, >> > from numpy import sqrt, arange, ones, zeros, random, where, ceil, sign >> > Agian, the same results. >> > >> > Would the loading time of the two packages account for over 20 seconds >> > difference in execution time? >> >> Ok, the bad guy is..... sqrt. scipy.sqrt is much slower than numpy.sqrt >> (note that in your script, you could avoid computing the scale of the >> normal in the loop). >> >> cheers, >> >> David >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Jun 11 10:20:23 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 11 Jun 2008 23:20:23 +0900 Subject: [SciPy-user] NumPy vs. SciPy and other speed comparisons In-Reply-To: <826c64da0806110640n4e3c7322l349b98310c5c469c@mail.gmail.com> References: <826c64da0806101749w4cc8084h27edff6b47160859@mail.gmail.com> <20080611083927.GA3926@phare.normalesup.org> <20080611085555.GC3926@phare.normalesup.org> <3d375d730806110220u432e808fkf292bd8b9765b91a@mail.gmail.com> <484FA04F.1000808@cens.ioc.ee> <3d375d730806110255icfd6ebdv5b02ccd04d2a02a2@mail.gmail.com> <826c64da0806110417s3b87fb95w497267004ed7e842@mail.gmail.com> <484FB344.4050905@ar.media.kyoto-u.ac.jp> <826c64da0806110640n4e3c7322l349b98310c5c469c@mail.gmail.com> Message-ID: <484FDF27.7030007@ar.media.kyoto-u.ac.jp> Ivo Maljevic wrote: > > 2. Robert discounted the speed difference observation as a nonsense > that keeps coming back because he saw "from scipy import *" statement. > I believe it would have been better if he looked at the numbers. I think Robert just pointed out that he was tired of seeing erroneous import in scipy coming back when he fixed it some time ago. David From fredmfp at gmail.com Wed Jun 11 10:47:05 2008 From: fredmfp at gmail.com (fred) Date: Wed, 11 Jun 2008 16:47:05 +0200 Subject: [SciPy-user] array mean issue... In-Reply-To: <1213191401.5968.13.camel@sebook> References: <484FA955.3000009@gmail.com> <1213185359.5968.8.camel@sebook> <484FC034.2010904@gmail.com> <1213191401.5968.13.camel@sebook> Message-ID: <484FE569.70206@gmail.com> Sebastian Stephan Berg a ?crit : > I am not familiar with things really, but looking at ??nansum code: > > def nansum(a, axis=None): > """Sum the array over the given axis, treating NaNs as 0. > """ > y = array(a,subok=True) > if not issubclass(y.dtype.type, _nx.integer): > y[isnan(a)] = 0 > return y.sum(axis) > > If this is all there is to nansum, you can just do this small > replacement with y[isnan[a]] = 0 and your special values including the > isnan, and then again use sum with a more precise dtype. Ok, thanks to all of you. I'll look at it asap. However, I wonder why there is no nan*** methods related to arrays, like array.mean(), array.min(), etc. And also why there is no nanmean & nanstd method... 
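Pending such methods, a hand-rolled version along the lines Sebastian sketched is only a few lines — a sketch, not the scipy implementation, zeroing the NaNs and accumulating in float64 to sidestep the precision trap discussed above:

import numpy as np

def nanmean64(a, axis=None):
    # mean over the non-NaN entries, accumulated in float64
    a = np.asarray(a)
    nans = np.isnan(a)
    total = np.where(nans, 0.0, a).sum(axis=axis, dtype=np.float64)
    count = (~nans).sum(axis=axis)
    return total / count

print(nanmean64(np.array([1.0, np.nan, 2.0])))  # 1.5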
Cheers, -- Fred
From gary.pajer at gmail.com Wed Jun 11 10:47:48 2008 From: gary.pajer at gmail.com (Gary Pajer) Date: Wed, 11 Jun 2008 10:47:48 -0400 Subject: [SciPy-user] LabVIEW and LabPython In-Reply-To: <1213132836.25044.13.camel@pc2.cole.uklinux.net> References: <88fe22a0806081521j2db5b1eblb68692af2e3beb43@mail.gmail.com> <1213132836.25044.13.camel@pc2.cole.uklinux.net> Message-ID: <88fe22a0806110747p408f6c85nb9db9ec9492c7995@mail.gmail.com> On Tue, Jun 10, 2008 at 5:20 PM, Bryan Cole wrote: > > > > > I've married DAQmx to scipy/numpy to Traits UI > > ( http://code.enthought.com/projects/traits ) and I have a beautiful > > GUI based data acquisition/display/analysis system. I'll admit that > > I've been too scared to even try to implement callbacks (used by a > > small number of DAQmx C functions) but I've achieved the same > > functionality using python threads. > > FWIW Python callbacks with DAQmx work great. ctypes makes these easy to > implement. > > I'm also considering Trait'ification of my primary data-acquisition > application. I'd like to see what you've done in this respect. Are you > using Chaco? Yes, I'm using Chaco. I've attached a screenshot. It's a work in progress. All essential functionality works, and I use it daily. When it runs it looks more or less like an oscilloscope (but it does more than that). On the whole, it's about half done. The hard parts were not scipy/numpy/Traits related. The hard parts were understanding the timing, synchronization, read, and write mechanisms in DAQmx; I was starting from scratch with no knowledge. The NI docs are quite spare. The NI forum helped with that, although it took several days to get replies. Andrew Straw has translated one of the NI examples to python/ctypes. That helped me get started. The chaco part is just a little sticky because the docs just aren't there yet. But the response to questions on the enthought forum is almost instantaneous. The Traits/Traits UI part is *so easy*. I've made GUIs in Matlab and WX and Tkinter, but making them in Traits UI is easier and more sensible and easier to maintain and upgrade. The learning curve is much shorter than WX or Tkinter. For those who care: I'm not using Traits 3, but rather Traits 2. Question on callbacks: if I write a callback routine in python, do I simply use the python name of the routine as the callback parameter in the call (via ctypes) to the DAQmx routine in the dll? -gary > > BC > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >
-------------- next part -------------- A non-text attachment was scrubbed... Name: acquire_77.png Type: image/png Size: 36494 bytes Desc: not available
From fredmfp at gmail.com Wed Jun 11 10:48:00 2008 From: fredmfp at gmail.com (fred) Date: Wed, 11 Jun 2008 16:48:00 +0200 Subject: [SciPy-user] specify NaN... In-Reply-To: <484FC3B1.3000102@ar.media.kyoto-u.ac.jp> References: <484FC284.7010308@gmail.com> <484FC3B1.3000102@ar.media.kyoto-u.ac.jp> Message-ID: <484FE5A0.50604@gmail.com> David Cournapeau wrote: > If you do not want to take into account both special and nan, why not > use a masked array? Because I did not even think about it ;-) Thanks for the hint, David.
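For the record, here is what that looks like — one mask covering both the NaNs and the -9999 sentinel (a minimal sketch; the sample values merely stand in for the real data):

import numpy as np
import numpy.ma as ma

a = np.array([1.0, -9999.0, 2.0, np.nan, 3.0])
m = ma.masked_invalid(ma.masked_values(a, -9999.0))  # sentinel first, then NaN/inf
print(m.mean(), m.min(), m.max())  # 2.0 1.0 3.0 -- bad values ignored

min, max and mean then all come for free, and the same mask is reusable across operations.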
Cheers, -- Fred
From david at ar.media.kyoto-u.ac.jp Wed Jun 11 10:44:39 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 11 Jun 2008 23:44:39 +0900 Subject: [SciPy-user] array mean issue... In-Reply-To: <484FE569.70206@gmail.com> References: <484FA955.3000009@gmail.com> <1213185359.5968.8.camel@sebook> <484FC034.2010904@gmail.com> <1213191401.5968.13.camel@sebook> <484FE569.70206@gmail.com> Message-ID: <484FE4D7.5080006@ar.media.kyoto-u.ac.jp> fred wrote: > I'll look at it asap. > > However, I wonder why there is no nan*** methods related to arrays, > like array.mean(), array.min(), etc. > And also why there is no nanmean & nanstd method... > there is: >> from scipy.stats.stats import nanmean >> import numpy as np >> nanmean([np.nan, 1]) 1.0 If you are about to say "how the hell would I know that", you may well be right. I am fixing this to put them into scipy.stats instead of scipy.stats.stats cheers, David
From fredmfp at gmail.com Wed Jun 11 11:04:05 2008 From: fredmfp at gmail.com (fred) Date: Wed, 11 Jun 2008 17:04:05 +0200 Subject: [SciPy-user] array mean issue... In-Reply-To: <484FE4D7.5080006@ar.media.kyoto-u.ac.jp> References: <484FA955.3000009@gmail.com> <1213185359.5968.8.camel@sebook> <484FC034.2010904@gmail.com> <1213191401.5968.13.camel@sebook> <484FE569.70206@gmail.com> <484FE4D7.5080006@ar.media.kyoto-u.ac.jp> Message-ID: <484FE965.6080501@gmail.com> David Cournapeau wrote: > If you are about to say "how the hell would I know that", you may well > be right. I am fixing this to put them into scipy.stats instead of > scipy.stats.stats Thanks !! :-)) Cheers, -- Fred
From lists at vrbka.net Wed Jun 11 11:15:45 2008 From: lists at vrbka.net (Lubos Vrbka) Date: Wed, 11 Jun 2008 17:15:45 +0200 Subject: [SciPy-user] optimal way to solve large amount of sets of linear equations Message-ID: <484FEC21.3090306@vrbka.net> hi guys, i have to solve a set of equations, that can be in matrix notation written as {E-C}H = C where E is the identity matrix. actually as a result, i want to have the function G, defined as H - C, i.e. G = H - C = {E-C}^(-1)C - C the problem is, that each of the matrices H, C, G represents a set of discretized functions. then, i have to solve this problem for every step G(1) = {E-C(1)}^(-1)C(1) - C(1) G(2) = {E-C(2)}^(-1)C(2) - C(2) ... the matrices themselves are usually small (2x2, 3x3), but the number of these equations is quite large (1024, 2048, 4096, or 8192; usually not larger) i tried to use the following pieces of code, but don't know (since i'm a newbie to scipy), whether it cannot be done in a better or (less stupid) way. i'd be really very grateful for any comments. G_f_ij is scipy.zeros((n,n,npoints)) E_ij is scipy.eye(n) c_f_ij is an array of shape (n,n,npoints) the slow way: =============== for i in range(npoints): # solve the matrix problem for all dr G_f_ij[:,:,i] = scipy.mat((E_ij - c_f_ij[:,:,i])).I * scipy.mat(c_f_ij[:,:,i]) - scipy.mat(c_f_ij[:,:,i]) the faster way: =============== for i in range(npoints): G_f_ij[:,:,i] = numpy.linalg.solve(scipy.mat((E_ij - c_f_ij[:,:,i])), scipy.mat(c_f_ij[:,:,i])) - scipy.mat(c_f_ij[:,:,i]) the fastest way: =============== forget the scipy.mat as it's not needed for linalg for i in range(npoints): G_f_ij[:,:,i] = numpy.linalg.solve(E_ij - c_f_ij[:,:,i], c_f_ij[:,:,i]) - c_f_ij[:,:,i] the last snippet of code is roughly 2x faster than the first one. for 2x2x1024 points it takes 0.2, 1.3, 0.7 seconds on my computer.
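(An aside on those timings: with n fixed at 2, the Python loop can be dropped altogether — the 2x2 cofactor formula for the inverse vectorizes over all npoints at once. A sketch assuming n == 2 and nonsingular E - C, reusing Lubos's array layout:)

import numpy as np

def solve_all_2x2(c):
    # G = (E - C)^-1 C - C for every point at once; c.shape == (2, 2, npoints)
    a00 = 1.0 - c[0, 0]; a01 = -c[0, 1]   # A = E - C, componentwise
    a10 = -c[1, 0];      a11 = 1.0 - c[1, 1]
    det = a00 * a11 - a01 * a10
    i00, i01 =  a11 / det, -a01 / det     # rows of A^-1 by the cofactor formula
    i10, i11 = -a10 / det,  a00 / det
    g = np.empty_like(c)
    g[0, 0] = i00 * c[0, 0] + i01 * c[1, 0] - c[0, 0]
    g[0, 1] = i00 * c[0, 1] + i01 * c[1, 1] - c[0, 1]
    g[1, 0] = i10 * c[0, 0] + i11 * c[1, 0] - c[1, 0]
    g[1, 1] = i10 * c[0, 1] + i11 * c[1, 1] - c[1, 1]
    return g

The same trick extends to 3x3 blocks with the 3x3 cofactors; either way the thousands of small solves collapse into a handful of whole-array operations.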
taken alone, that's not that bad - but when i need to repeat that 100-1000x during the execution of the program, then it starts to take a lot of time... best regards, lubos -- Lubos _ at _" http://www.lubos.vrbka.net
From david at ar.media.kyoto-u.ac.jp Wed Jun 11 11:25:36 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Jun 2008 00:25:36 +0900 Subject: [SciPy-user] array mean issue... In-Reply-To: <484FE965.6080501@gmail.com> References: <484FA955.3000009@gmail.com> <1213185359.5968.8.camel@sebook> <484FC034.2010904@gmail.com> <1213191401.5968.13.camel@sebook> <484FE569.70206@gmail.com> <484FE4D7.5080006@ar.media.kyoto-u.ac.jp> <484FE965.6080501@gmail.com> Message-ID: <484FEE70.1040205@ar.media.kyoto-u.ac.jp> fred wrote: > > Thanks !! :-)) > Fixed in trunk (r4426). I also noticed that for some reason, test_stats was not picked up by nosetests, which should be fixed now. cheers, David
From ivo.maljevic at gmail.com Wed Jun 11 11:48:32 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Wed, 11 Jun 2008 11:48:32 -0400 Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know Message-ID: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> Based on comments from Gael Varoquaux and David Cournapeau, I did the execution time test. At least for me, it is clear that if the number is a real scalar, AND the expected result is also real, the best way is to call the math version of the sqrt() function. The differences are more than significant, as you can see: In [3]: from math import sqrt as msqrt In [4]: from numpy import sqrt as nsqrt In [5]: from scipy import sqrt as ssqrt In [6]: %timeit msqrt(3.14) 1000000 loops, best of 3: 479 ns per loop In [7]: %timeit nsqrt(3.14) 100000 loops, best of 3: 10.8 µs per loop In [8]: %timeit ssqrt(3.14) 10000 loops, best of 3: 74.5 µs per loop
From david at ar.media.kyoto-u.ac.jp Wed Jun 11 11:54:43 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Jun 2008 00:54:43 +0900 Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know In-Reply-To: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> Message-ID: <484FF543.10605@ar.media.kyoto-u.ac.jp> Ivo Maljevic wrote: > Based on comments from Gael Varoquaux and David Cournapeau, I did the > execution time test. > At least for me, it is clear that if the number is a real scalar, AND > the expected result is also real, > the best way is to call the math version of the sqrt() function. The > differences are more than significant, as you can see: > It is expected for numpy/scipy functions to be much slower than python *for scalar*. Since they are optimized to be used with arrays, you are paying the cost to initialize the machinery to handle arrays (ufunc), without the benefit. The problem really is the difference between numpy and scipy. The funny thing is that for arrays, scipy.sqrt (that is, numpy.lib.scimath.sqrt) is faster than numpy.sqrt, which does not quite make sense to me since scipy.sqrt calls numpy.sqrt...
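For reference, the indirection is visible in the wrapper itself: numpy.lib.scimath.sqrt first scans its input for negative reals to decide whether to promote to complex, and only then hands off to numpy.sqrt, so it pays an extra pass over the data (and, on scalars, the ufunc setup cost twice) — which argues for it being the slower of the two, consistent with the array timings posted later in the thread. A paraphrased sketch of that check, not the verbatim numpy source:

import numpy as np

def scimath_style_sqrt(x):
    x = np.asanyarray(x)
    if np.isrealobj(x) and np.any(x < 0):  # the extra pass scipy.sqrt pays for
        x = x.astype(complex)
    return np.sqrt(x)

print(scimath_style_sqrt(-1.0), scimath_style_sqrt(4.0))  # 1j 2.0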
But it is 1 a.m, so maybe I should just get some sleep cheers, David
From Karl.Young at ucsf.edu Wed Jun 11 14:07:18 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Wed, 11 Jun 2008 11:07:18 -0700 Subject: [SciPy-user] ANN: SciPy 2008 Conference In-Reply-To: References: Message-ID: <48501456.4030306@ucsf.edu> Hey Jarrod, How are you doing ? Will you be in Melbourne ? I was thinking of submitting something for SciPy 2008 but wanted to get your take on it before doing so. The content is basically estimating pattern complexity from brain images for diagnostic purposes (the general content of a couple of papers in Neuroimage and Human Brain Mapping). I did the analysis using SciPy (and a little Rpy) but I'm sure the code is pretty brain dead and wouldn't teach the SciPy community anything - so can these submissions be more about content, e.g. gee look what I did using SciPy ? >Greetings, > >The SciPy 2008 Conference website is now open: http://conference.scipy.org > >This year's conference will be at Caltech from August 19-24: > >Tutorials: August 19-20 (Tuesday and Wednesday) >Conference: August 21-22 (Thursday and Friday) >Sprints: August 23-24 (Saturday and Sunday) > >Exciting things are happening in the Python community, and the SciPy >2008 Conference is an excellent opportunity to exchange ideas, learn >techniques, contribute code and affect the direction of scientific >computing (or just to learn what all the fuss is about). We'll be >announcing the Keynote Speaker and providing a detailed schedule in >the coming weeks. > >This year we are asking presenters to submit short papers to be included >in the conference proceedings: http://conference.scipy.org/call_for_papers > >Cheers, > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu
From stefan at sun.ac.za Wed Jun 11 14:42:02 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Wed, 11 Jun 2008 20:42:02 +0200 Subject: [SciPy-user] ANN: SciPy 2008 Conference In-Reply-To: <48501456.4030306@ucsf.edu> References: <48501456.4030306@ucsf.edu> Message-ID: <9457e7c80806111142o2518d6e4odf6265609190a369@mail.gmail.com> 2008/6/11 Karl Young : > How are you doing ? Will you be in Melbourne ? I was thinking of > submitting something for SciPy 2008 but wanted to get your take on it > before doing so. The content is basically estimating pattern complexity > from brain images for diagnostic purposes (the general content of a > couple of papers in Neuroimage and Human Brain Mapping). I did the > analysis using SciPy (and a little Rpy) but I'm sure the code is pretty > brain dead and wouldn't teach the SciPy community anything - so can > these submissions be more about content, e.g. gee look what I did using > SciPy ? I don't know if you intended this e-mail for such a wide audience, but it sounds very interesting! I look forward to seeing your talk. Regards Stéfan
From matthieu.brucher at gmail.com Wed Jun 11 14:54:48 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 11 Jun 2008 20:54:48 +0200 Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know In-Reply-To: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> Message-ID: Hi, Don't forget that numpy's sqrt and scipy's sqrt are not optimized towards single computations.
If you really want to compare them, time the square root of 10000 elements. Matthieu 2008/6/11 Ivo Maljevic : > Based on comments from Gael Varoquaux and David Cournapeau, I did the > execution time test. > At least for me, it is clear that if the number is a real scalar, AND the > expected result is also real, > the best way is to call the math version of the sqrt() function. The > differences are more than significant, as you can see: > > In [3]: from math import sqrt as msqrt > In [4]: from numpy import sqrt as nsqrt > In [5]: from scipy import sqrt as ssqrt > > In [6]: %timeit msqrt(3.14) > 1000000 loops, best of 3: 479 ns per loop > > In [7]: %timeit nsqrt(3.14) > 100000 loops, best of 3: 10.8 µs per loop > > In [8]: %timeit ssqrt(3.14) > 10000 loops, best of 3: 74.5 µs per loop > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher
From Karl.Young at ucsf.edu Wed Jun 11 15:25:46 2008 From: Karl.Young at ucsf.edu (Karl Young) Date: Wed, 11 Jun 2008 12:25:46 -0700 Subject: [SciPy-user] ANN: SciPy 2008 Conference In-Reply-To: <9457e7c80806111142o2518d6e4odf6265609190a369@mail.gmail.com> References: <48501456.4030306@ucsf.edu> <9457e7c80806111142o2518d6e4odf6265609190a369@mail.gmail.com> Message-ID: <485026BA.1000405@ucsf.edu> Yeah, thanks Stefan, this was a case of hitting return before putting Jarrod's address in the To field - but coincidentally it might also pertain to others on the list thinking about submitting something so I guess it wasn't the worst case of operator error... >2008/6/11 Karl Young : > > >>How are you doing ? Will you be in Melbourne ? I was thinking of >>submitting something for SciPy 2008 but wanted to get your take on it >>before doing so. The content is basically estimating pattern complexity >>from brain images for diagnostic purposes (the general content of a >>couple of papers in Neuroimage and Human Brain Mapping). I did the >>analysis using SciPy (and a little Rpy) but I'm sure the code is pretty >>brain dead and wouldn't teach the SciPy community anything - so can >>these submissions be more about content, e.g. gee look what I did using >>SciPy ? >> >> > >I don't know if you intended this e-mail for such a wide audience, but >it sounds very interesting! I look forward to seeing your talk.
> >Regards >Stéfan >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu
From gael.varoquaux at normalesup.org Wed Jun 11 15:54:51 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 11 Jun 2008 21:54:51 +0200 Subject: [SciPy-user] ANN: SciPy 2008 Conference In-Reply-To: <485026BA.1000405@ucsf.edu> References: <48501456.4030306@ucsf.edu> <9457e7c80806111142o2518d6e4odf6265609190a369@mail.gmail.com> <485026BA.1000405@ucsf.edu> Message-ID: <20080611195451.GH2924@phare.normalesup.org> On Wed, Jun 11, 2008 at 12:25:46PM -0700, Karl Young wrote: > Yeah, thanks Stefan, this was a case of hitting return before putting > Jarrod's address in the To field - but coincidentally it might also > pertain to others on the list thinking about submitting something so I > guess it wasn't the worst case of operator error... Sounds interesting. For a 35-minute talk I think you have enough content, especially if you give a good introduction to the field and the methods that are normally used in the field. I invite you to submit, and we'll see what the selection committee decides. We can also wait for Jarrod's feedback. Gaël
From mwojc at p.lodz.pl Wed Jun 11 16:00:05 2008 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Wed, 11 Jun 2008 22:00:05 +0200 Subject: [SciPy-user] optimize.fmin_l_bfgs_b problem References: <200806091550.53753.mwojc@p.lodz.pl> Message-ID: Nils Wagner wrote: > On Mon, 9 Jun 2008 15:50:53 +0200 > Marek Wojciechowski wrote: >> Hi! >> The following command: >> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], >>iprint=1) >> causes an error and breaks the python session with the >>following output: >> >> RUNNING THE L-BFGS-B CODE >> >> * * * >> >> At line 2647 of file scipy/optimize/lbfgsb/routines.f >> Internal Error: printf is broken >> Machine precision = >> >> This occurs on scipy-0.6.0 and python 2.4 under Gentoo >>Linux. Is this the >> known bug? >> >> Greetings, >> -- >> Marek Wojciechowski > > I get > >>>> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1) > RUNNING THE L-BFGS-B CODE > > * * * > > Machine precision = 2.220E-16 > N = 1 M = 10 > This problem is unconstrained. > > At X0 0 variables are exactly at the bounds > Traceback (most recent call last): > File "", line 1, in > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", > line 205, in fmin_l_bfgs_b > f, g = func_and_grad(x) > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", > line 156, in func_and_grad > f, g = func(x, *args) > TypeError: 'numpy.float64' object is not iterable > >>>> scipy.__version__ > '0.7.0.dev4420' It seems that my bug has been corrected in the development versions of scipy. The exception you obtained probably results from the new layout of the lbfgsb optimizer.
I think the following should work:

optimize.fmin_l_bfgs_b(lambda x: (x[0]**2, 2*x[0]), [-1.], iprint=1)

Greetings,
--
Marek Wojciechowski

From cohen at slac.stanford.edu Wed Jun 11 15:59:08 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Wed, 11 Jun 2008 21:59:08 +0200
Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know
In-Reply-To:
References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com>
Message-ID: <48502E8C.9030309@slac.stanford.edu>

In [10]: rndvals=random.rand(10000)

In [11]: %timeit ssqrt(rndvals)
1000 loops, best of 3: 598 µs per loop

In [12]: %timeit nsqrt(rndvals)
1000 loops, best of 3: 362 µs per loop

JCT

Matthieu Brucher wrote:
> Hi,
>
> Don't forget that numpy's sqrt and scipy's sqrt are not optimized
> towards single computations.
> If you really want to compare them, time the square root of 10000
> elements.
>
> Matthieu
>
> 2008/6/11 Ivo Maljevic :
>
>     Based on comments from Gael Varoquaux and David Cournapeau, I did
>     the execution time test.
>     At least for me, it is clear that if the number is a real scalar,
>     AND the expected result is also real,
>     the best way is to call the math version of sqrt() function. The
>     differences are more than significant, as you can see:
>
>     In [3]: from math import sqrt as msqrt
>     In [4]: from numpy import sqrt as nsqrt
>     In [5]: from scipy import sqrt as ssqrt
>
>     In [6]: %timeit msqrt(3.14)
>     1000000 loops, best of 3: 479 ns per loop
>
>     In [7]: %timeit nsqrt(3.14)
>     100000 loops, best of 3: 10.8 µs per loop
>
>     In [8]: %timeit ssqrt(3.14)
>     10000 loops, best of 3: 74.5 µs per loop
>
>
>     _______________________________________________
>     SciPy-user mailing list
>     SciPy-user at scipy.org
>     http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
> --
> French PhD student
> Website : http://matthieu-brucher.developpez.com/
> Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
> LinkedIn : http://www.linkedin.com/in/matthieubrucher
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From nwagner at iam.uni-stuttgart.de Wed Jun 11 16:05:10 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 11 Jun 2008 22:05:10 +0200
Subject: [SciPy-user] optimize.fmin_l_bfgs_b problem
In-Reply-To:
References: <200806091550.53753.mwojc@p.lodz.pl>
Message-ID:

On Wed, 11 Jun 2008 22:00:05 +0200
Marek Wojciechowski wrote:
> Nils Wagner wrote:
>
>> On Mon, 9 Jun 2008 15:50:53 +0200
>> Marek Wojciechowski wrote:
>>> Hi!
>>> The following command:
>>> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1)
>>> causes an error and breaks the python session with the following output:
>>>
>>> RUNNING THE L-BFGS-B CODE
>>>
>>> * * *
>>>
>>> At line 2647 of file scipy/optimize/lbfgsb/routines.f
>>> Internal Error: printf is broken
>>> Machine precision =
>>>
>>> This occurs on scipy-0.6.0 and python 2.4 under Gentoo Linux. Is this the
>>> known bug?
>>>
>>> Greetings,
>>> --
>>> Marek Wojciechowski
>>
>> I get
>>
>>>>> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1)
>> RUNNING THE L-BFGS-B CODE
>>
>> * * *
>>
>> Machine precision = 2.220E-16
>> N = 1 M = 10
>> This problem is unconstrained.
>>
>> At X0 0 variables are exactly at the bounds
>> Traceback (most recent call last):
>> File "", line 1, in
>> File
>> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py",
>> line 205, in fmin_l_bfgs_b
>> f, g = func_and_grad(x)
>> File
>> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py",
>> line 156, in func_and_grad
>> f, g = func(x, *args)
>> TypeError: 'numpy.float64' object is not iterable
>>
>>>>> scipy.__version__
>> '0.7.0.dev4420'
>
> It seems that my bug is corrected in development versions of scipy.
> The exception you obtained probably results from the new layout of the
> lbfgsb optimizer. I think the following should work:
>
> optimize.fmin_l_bfgs_b(lambda x: (x[0]**2, 2*x[0]), [-1.], iprint=1)
>

No.

>>> optimize.fmin_l_bfgs_b(lambda x: (x[0]**2, 2*x[0]), [-1.], iprint=1)
RUNNING THE L-BFGS-B CODE

* * *

Machine precision = 2.220E-16
N = 1 M = 10
This problem is unconstrained.

At X0 0 variables are exactly at the bounds
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib64/python2.5/site-packages/scipy/optimize/lbfgsb.py",
line 199, in fmin_l_bfgs_b
isave, dsave)
TypeError: failed to initialize intent(inout|inplace|cache) array -- input must be array but got

Cheers,
Nils

From ivo.maljevic at gmail.com Wed Jun 11 16:06:09 2008
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Wed, 11 Jun 2008 16:06:09 -0400
Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know
In-Reply-To: <484FF543.10605@ar.media.kyoto-u.ac.jp>
References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> <484FF543.10605@ar.media.kyoto-u.ac.jp>
Message-ID: <826c64da0806111306x6902fdfbh691666ad713049f5@mail.gmail.com>

2008/6/11 David Cournapeau :
> Ivo Maljevic wrote:
> > Based on comments from Gael Varoquaux and David Cournapeau, I did the
> > execution time test.
> > At least for me, it is clear that if the number is a real scalar, AND
> > the expected result is also real,
> > the best way is to call the math version of sqrt() function. The
> > differences are more than significant, as you can see:
> >
>
> It is expected for numpy/scipy functions to be much slower than python
> *for scalar*. Since they are optimized to be used with arrays, you are
> paying the cost to initialize the machinery to handle arrays (ufunc),
> without the benefit.

No disagreement there. My comment, which I realize was probably unnecessary,
is that one has to pay attention as to which version of the function is
called. Since I started SciPy I always used everything from it, never even
thinking that math.sqrt(), math.sin(), etc., are faster if I'm working with
scalar values. And there are always coefficients or scaling parameters that
do not need to be in a vector form.

> The problem really is the difference between numpy and scipy. The funny
> thing is that for arrays, scipy.sqrt (that is, numpy.lib.scimath.sqrt)
> is faster than numpy.sqrt, which does not quite make sense to me since
> scipy.sqrt calls numpy.sqrt... But it is 1 a.m, so maybe I should just
> get some sleep
>

I think it is the numpy.sqrt that is faster (about 7 to 10 times) than
scipy.sqrt, but anyway, it would be good to have them at the same speed
level.

Ivo
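A minimal sketch of the rule of thumb emerging from this thread (an
illustration, not from the original posts; exact timings vary by machine):

    # Scalar work: the stdlib math functions avoid the ufunc setup overhead.
    from math import sqrt as msqrt
    # Array work: one vectorized call amortizes that overhead over all elements.
    from numpy import sqrt as nsqrt
    import numpy

    scalar = 3.14
    values = numpy.random.rand(10000)

    root = msqrt(scalar)    # fastest for a single real, non-negative number
    roots = nsqrt(values)   # fastest for whole arrays
    # numpy.lib.scimath.sqrt (what "from scipy import sqrt" gives you, as
    # noted above) additionally checks for negative inputs and returns
    # complex results for them, which is part of its extra scalar overhead.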
From mwojc at p.lodz.pl Wed Jun 11 16:15:48 2008
From: mwojc at p.lodz.pl (Marek Wojciechowski)
Date: Wed, 11 Jun 2008 22:15:48 +0200
Subject: [SciPy-user] optimize.fmin_l_bfgs_b problem
References: <200806091550.53753.mwojc@p.lodz.pl>
Message-ID:

Nils Wagner wrote:

> On Wed, 11 Jun 2008 22:00:05 +0200
> Marek Wojciechowski wrote:
>> Nils Wagner wrote:
>>
>>> On Mon, 9 Jun 2008 15:50:53 +0200
>>> Marek Wojciechowski wrote:
>>>> Hi!
>>>> The following command:
>>>> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1)
>>>> causes an error and breaks the python session with the following output:
>>>>
>>>> RUNNING THE L-BFGS-B CODE
>>>>
>>>> * * *
>>>>
>>>> At line 2647 of file scipy/optimize/lbfgsb/routines.f
>>>> Internal Error: printf is broken
>>>> Machine precision =
>>>>
>>>> This occurs on scipy-0.6.0 and python 2.4 under Gentoo Linux. Is this the
>>>> known bug?
>>>>
>>>> Greetings,
>>>> --
>>>> Marek Wojciechowski
>>>
>>> I get
>>>
>>>>>> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1)
>>> RUNNING THE L-BFGS-B CODE
>>>
>>> * * *
>>>
>>> Machine precision = 2.220E-16
>>> N = 1 M = 10
>>> This problem is unconstrained.
>>>
>>> At X0 0 variables are exactly at the bounds
>>> Traceback (most recent call last):
>>> File "", line 1, in
>>> File
>>> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py",
>>> line 205, in fmin_l_bfgs_b
>>> f, g = func_and_grad(x)
>>> File
>>> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py",
>>> line 156, in func_and_grad
>>> f, g = func(x, *args)
>>> TypeError: 'numpy.float64' object is not iterable
>>>
>>>>>> scipy.__version__
>>> '0.7.0.dev4420'
>>
>> It seems that my bug is corrected in development versions of scipy.
>> The exception you obtained probably results from the new layout of the
>> lbfgsb optimizer. I think the following should work:
>>
>> optimize.fmin_l_bfgs_b(lambda x: (x[0]**2, 2*x[0]), [-1.], iprint=1)
>>
> No.
>>>> optimize.fmin_l_bfgs_b(lambda x: (x[0]**2, 2*x[0]), [-1.], iprint=1)
> RUNNING THE L-BFGS-B CODE
>
> * * *
>
> Machine precision = 2.220E-16
> N = 1 M = 10
> This problem is unconstrained.
>
> At X0 0 variables are exactly at the bounds
> Traceback (most recent call last):
> File "", line 1, in
> File
> "/usr/local/lib64/python2.5/site-packages/scipy/optimize/lbfgsb.py",
> line 199, in fmin_l_bfgs_b
> isave, dsave)
> TypeError: failed to initialize
> intent(inout|inplace|cache) array -- input must be array
> but got
>
> Cheers,
> Nils

The last chance :) :

optimize.fmin_l_bfgs_b(lambda x: (x[0]**2, 2*x[0]), array([-1.]), iprint=1)

--
Marek Wojciechowski

From nwagner at iam.uni-stuttgart.de Wed Jun 11 16:19:12 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 11 Jun 2008 22:19:12 +0200
Subject: [SciPy-user] optimize.fmin_l_bfgs_b problem
In-Reply-To:
References: <200806091550.53753.mwojc@p.lodz.pl>
Message-ID:

On Wed, 11 Jun 2008 22:00:05 +0200
Marek Wojciechowski wrote:
> Nils Wagner wrote:
>
>> On Mon, 9 Jun 2008 15:50:53 +0200
>> Marek Wojciechowski wrote:
>>> Hi!
>>> The following command:
>>> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1)
>>> causes an error and breaks the python session with the following output:
>>>
>>> RUNNING THE L-BFGS-B CODE
>>>
>>> * * *
>>>
>>> At line 2647 of file scipy/optimize/lbfgsb/routines.f
>>> Internal Error: printf is broken
>>> Machine precision =
>>>
>>> This occurs on scipy-0.6.0 and python 2.4 under Gentoo Linux. Is this the
>>> known bug?
>>>
>>> Greetings,
>>> --
>>> Marek Wojciechowski
>>
>> I get
>>
>>>>> optimize.fmin_l_bfgs_b(lambda x: x[0]**2, [-1.], iprint=1)
>> RUNNING THE L-BFGS-B CODE
>>
>> * * *
>>
>> Machine precision = 2.220E-16
>> N = 1 M = 10
>> This problem is unconstrained.
>>
>> At X0 0 variables are exactly at the bounds
>> Traceback (most recent call last):
>> File "", line 1, in
>> File
>> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py",
>> line 205, in fmin_l_bfgs_b
>> f, g = func_and_grad(x)
>> File
>> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py",
>> line 156, in func_and_grad
>> f, g = func(x, *args)
>> TypeError: 'numpy.float64' object is not iterable
>>
>>>>> scipy.__version__
>> '0.7.0.dev4420'
>
> It seems that my bug is corrected in development versions of scipy.
> The exception you obtained probably results from the new layout of the
> lbfgsb optimizer. I think the following should work:
>
> optimize.fmin_l_bfgs_b(lambda x: (x[0]**2, 2*x[0]), [-1.], iprint=1)
>
> Greetings,
> --
> Marek Wojciechowski
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

Try

python -i test_l_bfgs_b.py

RUNNING THE L-BFGS-B CODE

* * *

Machine precision = 2.220E-16
N = 1 M = 10
This problem is unconstrained.

At X0 0 variables are exactly at the bounds

At iterate 0 f= 1.00000E+00 |proj g|= 2.00000E+00

At iterate 1 f= 0.00000E+00 |proj g|= 0.00000E+00

* * *

Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value

* * *

N Tit Tnf Tnint Skip Nact Projg F
1 1 2 1 0 0 0.000E+00 0.000E+00
F = 0.

CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL

Cauchy time 0.000E+00 seconds.
Subspace minimization time 0.000E+00 seconds.
Line search time 0.000E+00 seconds.

Total User time 4.000E-03 seconds.

Position of the minimum [ 0.]
Value of func(x) [ 0.]
Dictionary {'warnflag': 0, 'task': 'CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL', 'grad': array([ 0.]), 'funcalls': 2}

[Attachment scrubbed: test_l_bfgs_b.py (text/x-python, 263 bytes)]

From millman at berkeley.edu Wed Jun 11 17:36:45 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Wed, 11 Jun 2008 14:36:45 -0700
Subject: [SciPy-user] ANN: SciPy 2008 Conference
In-Reply-To: <48501456.4030306@ucsf.edu>
References: <48501456.4030306@ucsf.edu>
Message-ID:

On Wed, Jun 11, 2008 at 11:07 AM, Karl Young wrote:
> How are you doing ? Will you be in Melbourne ? I was thinking of
> submitting something for SciPy 2008 but wanted to get your take on it
> before doing so. The content is basically estimating pattern complexity
> from brain images for diagnostic purposes (the general content of a
> couple of papers in Neuroimage and Human Brain Mapping). I did the
> analysis using SciPy (and a little Rpy) but I'm sure the code is pretty
> brain dead and wouldn't teach the SciPy community anything - so can
> these submissions be more about content, e.g. gee look what I did using
> SciPy ?

I agree with Stefan and Gael that this would be an interesting
submission and encourage you to submit a proposal.
I am leaving for Melbourne in a few hours, so if you are going to HBM as
well I will see you there.

Thanks,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From kartita at gmail.com Wed Jun 11 17:57:17 2008
From: kartita at gmail.com (Kimberly Artita)
Date: Wed, 11 Jun 2008 16:57:17 -0500
Subject: [SciPy-user] f2py "Segmentation fault", please help
Message-ID:

I recently updated my system (including python, scipy, numpy) and now get a
bizarre "Segmentation fault" message where I didn't before.

I have problems reading in data. Here is an example from my fortran file
(*.f90):

character(len=4) :: title(60)

open (2,file="file.cio")

read (2,5100) title

5100 format (20a4)

The file "file.cio" contains this:
General Input/Output section (file.cio): Thu Mar 13 17:32:19 2008 AVSWAT2000 - SWAT interface MDL

Previously, it read in the data as: "General Input/Output section
(file.cio): Thu Mar 1"

But now all I get is: "Segmentation fault" and then my program stops.

Please give me guidance!

Thanks,
Kim

From robert.kern at gmail.com Wed Jun 11 18:07:24 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 11 Jun 2008 17:07:24 -0500
Subject: [SciPy-user] f2py "Segmentation fault", please help
In-Reply-To:
References:
Message-ID: <3d375d730806111507m6eb9e09budf64b39af806ba93@mail.gmail.com>

On Wed, Jun 11, 2008 at 16:57, Kimberly Artita wrote:
> I recently updated my system (including python, scipy, numpy) and now get a
> bizarre "Segmentation fault" message where I didn't before.
>
> I have problems reading in data. Here is an example from my fortran file
> (*.f90):
>
> character(len=4) :: title(60)
>
> open (2,file="file.cio")
>
> read (2,5100) title
>
> 5100 format (20a4)
>
> The file "file.cio" contains this:
> General Input/Output section (file.cio): Thu Mar 13 17:32:19 2008
> AVSWAT2000 - SWAT interface MDL
>
> Previously, it read in the data as: "General Input/Output section
> (file.cio): Thu Mar 1"
>
> But now all I get is: "Segmentation fault" and then my program stops.

We need some more information. Can you show us the code? Alternately, can
you narrow the code down to just the part that is crashing?

Please run your program under a C debugger like gdb in order to get a
backtrace which will show us exactly where things are segfaulting. Here is
what that looks like on OS X. The details will be different on other
UNIces, but the "run -c ...", "continue", "backtrace" steps should be the
same.

$ gdb python
GNU gdb 6.3.50-20050815 (Apple version gdb-768) (Tue Oct 2 04:07:49 UTC 2007)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-apple-darwin"...Reading symbols for shared
libraries .. done

(gdb) run -c crashing_script.py any_other_necessary_arguments_for_your_script
Starting program: /usr/local/bin/python -c crashing_script.py
any_other_necessary_arguments_for_your_script
Reading symbols for shared libraries +. done

Program received signal SIGTRAP, Trace/breakpoint trap.
0x8fe01010 in __dyld__dyld_start ()
(gdb) continue
Continuing.
Reading symbols for shared libraries .. done

... segfault happens here ...
(gdb) backtrace
... backtrace output here; this is what we need to see ...

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From kartita at gmail.com Wed Jun 11 19:41:58 2008
From: kartita at gmail.com (Kimberly Artita)
Date: Wed, 11 Jun 2008 18:41:58 -0500
Subject: [SciPy-user] f2py "Segmentation fault", please help
In-Reply-To: <3d375d730806111507m6eb9e09budf64b39af806ba93@mail.gmail.com>
References: <3d375d730806111507m6eb9e09budf64b39af806ba93@mail.gmail.com>
Message-ID:

Here is a dummy fortran/python code which highlights my problem:

My fortran program readin_f90.f90:
subroutine readin

implicit none

character(len=4) :: title(60)

open (2,file="file.cio")
print *, "file.cio opened"
read (2,5100) title
print *, title

5100 format (20a4)

end subroutine

My python code readin.py:
from psyco import *
full()

from readin_f90 import *

test_readin()

My file file.cio:
General Input/Output section (file.cio): Wed Jul 18 15:50:10 2007 AVSWAT2000 - SWAT interface MDL

basins.bsb basins.sbs basins.rch basins.rsv basins.lqo basins.wtr
basins.pso basins.eve basins.fig basins.cod basins.bsn basins.wwq
crop.dat till.dat pest.dat fert.dat urban.dat
1 1 3 3 3 3 0 0 0
pcp.pcp

tmp.tmp

I generate the *.so file with: f2py -c -m readin_f90 readin_f90.f90

kimi at localhost ~ $ python readin.py
Gives this:
file.cio opened
Segmentation fault

Here is the gdb output:
kimi at localhost ~ $ gdb python

(no debugging symbols found)
(no debugging symbols found)
file.cio opened

---Type <return> to continue, or q <return> to quit---
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xb7d2d6c0 (LWP 8055)]
0xb5c14885 in ?? () from /usr/lib/gcc/i686-pc-linux-gnu/4.2.3/libgfortran.so.2
(gdb) backtrace
#0 0xb5c14885 in ?? ()
from /usr/lib/gcc/i686-pc-linux-gnu/4.2.3/libgfortran.so.2
#1 0xb7e5fff4 in ?? () from /lib/libc.so.6
#2 0x00000000 in ?? ()
(gdb)

From robert.kern at gmail.com Wed Jun 11 19:53:48 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 11 Jun 2008 18:53:48 -0500
Subject: [SciPy-user] f2py "Segmentation fault", please help
In-Reply-To:
References: <3d375d730806111507m6eb9e09budf64b39af806ba93@mail.gmail.com>
Message-ID: <3d375d730806111653y3bcf2affse7585ccd289230bd@mail.gmail.com>

On Wed, Jun 11, 2008 at 18:41, Kimberly Artita wrote:
> Here is a dummy fortran/python code which highlights my problem:
>
> My fortran program readin_f90.f90:
> subroutine readin
>
> implicit none
>
> character(len=4) :: title(60)
>
> open (2,file="file.cio")
> print *, "file.cio opened"
> read (2,5100) title
> print *, title
>
> 5100 format (20a4)
>
> end subroutine
>
> My python code readin.py:
> from psyco import *
> full()

Can you try this without psyco? It might be complicating matters.
> from readin_f90 import *
>
> test_readin()
>
> My file file.cio:
> General Input/Output section (file.cio): Wed Jul 18 15:50:10 2007
> AVSWAT2000 - SWAT interface MDL
>
>
> basins.bsb basins.sbs basins.rch basins.rsv basins.lqo
> basins.wtr
> basins.pso basins.eve basins.fig basins.cod basins.bsn
> basins.wwq
> crop.dat till.dat pest.dat fert.dat urban.dat
> 1 1 3 3 3 3 0 0 0
> pcp.pcp
>
>
> tmp.tmp
>
>
>
> I generate the *.so file with: f2py -c -m readin_f90 readin_f90.f90
>
> kimi at localhost ~ $ python readin.py
> Gives this:
> file.cio opened
> Segmentation fault
>
> Here is the gdb output:
> kimi at localhost ~ $ gdb python
>
> (no debugging symbols found)
> (no debugging symbols found)
> file.cio opened
>
> ---Type <return> to continue, or q <return> to quit---
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0xb7d2d6c0 (LWP 8055)]
> 0xb5c14885 in ?? () from
> /usr/lib/gcc/i686-pc-linux-gnu/4.2.3/libgfortran.so.2
> (gdb) backtrace
> #0 0xb5c14885 in ?? ()
> from /usr/lib/gcc/i686-pc-linux-gnu/4.2.3/libgfortran.so.2
> #1 0xb7e5fff4 in ?? () from /lib/libc.so.6
> #2 0x00000000 in ?? ()
> (gdb)

If psyco isn't to blame, then it looks like a problem in gfortran's
libraries. On OS X with gfortran 4.2.0, I don't get a segfault.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From lorenzo.isella at gmail.com Thu Jun 12 05:11:05 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Thu, 12 Jun 2008 11:11:05 +0200
Subject: [SciPy-user] How to get rid of nan and Inf
Message-ID:

Dear All,
Say that you have an array which represents the time evolution of certain
quantities as time progresses. Some entries of this array could be
ill-defined (coming from a division by zero or an indefinite error on a
linear regression). How can you automatically get rid of these?
I am looking for something like:

remove_ill_defined=where(array_ill_defined == is_a_number)
array_well_defined=array_ill_defined[remove_ill_defined]

but I do not know the command to identify a number vs an Inf or a nan
(my fictitious is_a_number condition).
Cheers

Lorenzo

From silva at lma.cnrs-mrs.fr Thu Jun 12 05:21:18 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Thu, 12 Jun 2008 11:21:18 +0200
Subject: [SciPy-user] How to get rid of nan and Inf
In-Reply-To:
References:
Message-ID: <1213262478.3022.2.camel@Portable-s2m.cnrs-mrs.fr>

On Thursday 12 June 2008 at 11:11 +0200, Lorenzo Isella wrote:
> I am looking for something like:
>
> remove_ill_defined=where(array_ill_defined == is_a_number)
> array_well_defined=array_ill_defined[remove_ill_defined]

What about isnan and isinf

import numpy as np
a = np.array([......])
index = (np.isnan(a)==False) & (np.isinf(a)==False)
a = a[index]

--
Fabrice Silva
LMA UPR CNRS 7051 - Équipe S2M

From david at ar.media.kyoto-u.ac.jp Thu Jun 12 05:11:30 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 12 Jun 2008 18:11:30 +0900
Subject: [SciPy-user] How to get rid of nan and Inf
In-Reply-To:
References:
Message-ID: <4850E842.6000005@ar.media.kyoto-u.ac.jp>

Lorenzo Isella wrote:
> but I do not know the command to identify a number vs an Inf or a nan
> (my fictitious is_a_number condition).
>

What about numpy.isnan and numpy.isfinite ?
David

From peridot.faceted at gmail.com Thu Jun 12 09:31:54 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 12 Jun 2008 07:31:54 -0600
Subject: [SciPy-user] How to get rid of nan and Inf
In-Reply-To: <4850E842.6000005@ar.media.kyoto-u.ac.jp>
References: <4850E842.6000005@ar.media.kyoto-u.ac.jp>
Message-ID:

2008/6/12 David Cournapeau :
> Lorenzo Isella wrote:
>> but I do not know the command to identify a number vs an Inf or a nan
>> (my fictitious is_a_number condition).
>>
>
> What about numpy.isnan and numpy.isfinite ?

In fact np.isfinite(NaN) is False, so it's easy:

A = A[np.isfinite(A)]

Anne

From philbinj at gmail.com Thu Jun 12 10:24:52 2008
From: philbinj at gmail.com (James Philbin)
Date: Thu, 12 Jun 2008 15:24:52 +0100
Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem
Message-ID: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com>

Hi,

I'm trying to find the first few (~50) eigenvectors of a largish
(2130x2130) sparse symmetric real matrix. I've used
scipy.sparse.linalg.eigen_symmetric but have hit upon a few niggles.
If I ask for 10 eigenvectors, with the following command
eigen_symmetric(S,k=10,which='LA'), I get the following eigenvalues:
[ 0.99729875 0.99770773 0.9987255 0.99883746 1. 1.
1. 1. 1. 1. ]

Running with k=20 gives:
[ 0.99470154 0.99567495 0.99619173 0.99729875 0.99770773 0.9987255
0.99883746 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. ]

So it seems that eigen_symmetric is failing to find lots of the
repeated 1 eigenvectors. Is this an inherent problem with ARPACK or
symptomatic of a bug in scipy? Matlab's eigs seems to not suffer from
this nearly as much, returning nearly all ones for both the 10 and 20
cases. I assume this is also using ARPACK under the surface. I can
upload the matrix S somewhere if people are interested.

I also think I've fixed a bug in the current version of arpack.py
relating to the info returned (The warning that maxiters had been
reached was never shown) and added a warning if some eigenvectors
didn't converge:

--- arpack.py.old 2008-06-12 13:28:29.000000000 +0100
+++ arpack.py 2008-06-12 15:22:55.000000000 +0100
@@ -463,8 +463,11 @@
     if info < -1 :
         raise RuntimeError("Error info=%d in arpack"%info)
         return None
-    if info == -1:
+    if info == 1:
         warnings.warn("Maximum number of iterations taken: %s"%iparam[2])
+
+    if iparam[4] < k:
+        warnings.warn("Only %d/%d eigenvectors converged" % (iparam[4], k))

     # now extract eigenvalues and (optionally) eigenvectors
     rvec = return_eigenvectors

Thanks,
James

From lists at vrbka.net Thu Jun 12 2008
From: lists at vrbka.net (Lubos Vrbka)
Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know
References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com>
Message-ID: <485138BC.7070506@vrbka.net>

hi,

slightly off-topic, but still regarding the speed differences... what's
the difference between the following 2 functions?

numpy.linalg.solve
scipy.linalg.solve

the first is almost 2 times faster on the same data (4096 repetitions of
a A*a = b problem with A = 2x2 float32 matrix) - typically 0.30 vs. 0.57
sec for the whole operation...

best,

--
Lubos _ at _"
http://www.lubos.vrbka.net

From ivo.maljevic at gmail.com Thu Jun 12 11:05:45 2008
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Thu, 12 Jun 2008 11:05:45 -0400
Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know
In-Reply-To: <485138BC.7070506@vrbka.net>
References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> <485138BC.7070506@vrbka.net>
Message-ID: <826c64da0806120805i56166f5bue2dbbf74c8bc2ff8@mail.gmail.com>

It looks like the same topic, that is, the difference in speed between
numpy and scipy.

There are all sorts of differences that I'm seeing while learning
scipy/numpy/python.
Also an interesting one, but probably with negligible impact:

In [1]: import math
In [2]: %timeit math.sqrt(2.2)
1000000 loops, best of 3: 327 ns per loop
In [3]: %timeit math.sqrt(2.2)
1000000 loops, best of 3: 321 ns per loop
In [4]: %timeit math.sqrt(2.2)
1000000 loops, best of 3: 323 ns per loop

versus:

In [1]: from math import sqrt
In [2]: %timeit sqrt(2.2)
1000000 loops, best of 3: 245 ns per loop
In [3]: %timeit sqrt(2.2)
1000000 loops, best of 3: 248 ns per loop
In [4]: %timeit sqrt(2.2)
1000000 loops, best of 3: 248 ns per loop

2008/6/12 Lubos Vrbka :
> hi,
>
> slightly off-topic, but still regarding the speed differences... what's
> the difference between the following 2 functions?
>
> numpy.linalg.solve
> scipy.linalg.solve
>
> the first is almost 2 times faster on the same data (4096 repetitions of
> a A*a = b problem with A = 2x2 float32 matrix) - typically 0.30 vs. 0.57
> sec for the whole operation...
>
> best,
>
> --
> Lubos _ at _"
> http://www.lubos.vrbka.net
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From lists at vrbka.net Thu Jun 12 11:16:25 2008
From: lists at vrbka.net (Lubos Vrbka)
Date: Thu, 12 Jun 2008 17:16:25 +0200
Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know
In-Reply-To: <826c64da0806120805i56166f5bue2dbbf74c8bc2ff8@mail.gmail.com>
References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> <485138BC.7070506@vrbka.net> <826c64da0806120805i56166f5bue2dbbf74c8bc2ff8@mail.gmail.com>
Message-ID: <48513DC9.2050504@vrbka.net>

Ivo Maljevic wrote:
> It looks like the same topic, that is, the difference in speed between numpy
> and scipy.
>
> There are all sorts of differences that I'm seeing while learning
> scipy/numpy/python. Also an interesting one,
> but probably with negligible impact:
>
> In [1]: import math
> In [2]: %timeit math.sqrt(2.2)
> 1000000 loops, best of 3: 327 ns per loop
> In [3]: %timeit math.sqrt(2.2)
> 1000000 loops, best of 3: 321 ns per loop
> In [4]: %timeit math.sqrt(2.2)
> 1000000 loops, best of 3: 323 ns per loop
>
> versus:
>
> In [1]: from math import sqrt
> In [2]: %timeit sqrt(2.2)
> 1000000 loops, best of 3: 245 ns per loop
> In [3]: %timeit sqrt(2.2)
> 1000000 loops, best of 3: 248 ns per loop
> In [4]: %timeit sqrt(2.2)
> 1000000 loops, best of 3: 248 ns per loop

hm, i see similar thing for the

import numpy; numpy.linalg.solve

and

from numpy import linalg; linalg.solve

but the differences are here relatively small (negligible). it's just
probably the time involved with the name resolution machinery in python...

the discrepancy between numpy and scipy is much much larger...

best,

--
Lubos _ at _"
http://www.lubos.vrbka.net

From tgray at protozoic.com Thu Jun 12 14:02:11 2008
From: tgray at protozoic.com (Tim Gray)
Date: Thu, 12 Jun 2008 14:02:11 -0400
Subject: [SciPy-user] How to get rid of nan and Inf
Message-ID: <8f5413a60806121102h4849828dief9fd5387f1b5fee@mail.gmail.com>

nan_to_num() does this. I think it's somewhere in numpy. Makes Nan's ->
0's and infs -> machine limits.
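A short illustration of that suggestion (a sketch, assuming numpy imported
as np; the large numbers are the float64 machine limits mentioned above):

    import numpy as np

    a = np.array([1.0, np.nan, np.inf, -np.inf])
    print np.nan_to_num(a)
    # [  1.00000000e+000   0.00000000e+000   1.79769313e+308  -1.79769313e+308]
    # NaN becomes 0; +inf and -inf become the largest and smallest
    # representable floats.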
From nwagner at iam.uni-stuttgart.de Thu Jun 12 14:03:43 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 12 Jun 2008 20:03:43 +0200
Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem
In-Reply-To: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com>
References: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com>
Message-ID:

On Thu, 12 Jun 2008 15:24:52 +0100
"James Philbin" wrote:
> Hi,
>
> I'm trying to find the first few (~50) eigenvectors of a largish
> (2130x2130) sparse symmetric real matrix. I've used
> scipy.sparse.linalg.eigen_symmetric but have hit upon a few niggles.
> If I ask for 10 eigenvectors, with the following command
> eigen_symmetric(S,k=10,which='LA'), I get the following eigenvalues:
> [ 0.99729875 0.99770773 0.9987255 0.99883746 1. 1.
> 1. 1. 1. 1. ]
>
> Running with k=20 gives:
> [ 0.99470154 0.99567495 0.99619173 0.99729875 0.99770773 0.9987255
> 0.99883746 1. 1. 1. 1. 1. 1.
> 1. 1. 1. 1. 1. 1. 1. ]
>
> So it seems that eigen_symmetric is failing to find lots of the
> repeated 1 eigenvectors. Is this an inherent problem with ARPACK or
> symptomatic of a bug in scipy? Matlab's eigs seems to not suffer from
> this nearly as much, returning nearly all ones for both the 10 and 20
> cases. I assume this is also using ARPACK under the surface. I can
> upload the matrix S somewhere if people are interested.
>
> I also think I've fixed a bug in the current version of arpack.py
> relating to the info returned (The warning that maxiters had been
> reached was never shown) and added a warning if some eigenvectors
> didn't converge:
>
> --- arpack.py.old 2008-06-12 13:28:29.000000000 +0100
> +++ arpack.py 2008-06-12 15:22:55.000000000 +0100
> @@ -463,8 +463,11 @@
>      if info < -1 :
>          raise RuntimeError("Error info=%d in arpack"%info)
>          return None
> -    if info == -1:
> +    if info == 1:
>          warnings.warn("Maximum number of iterations taken: %s"%iparam[2])
> +
> +    if iparam[4] < k:
> +        warnings.warn("Only %d/%d eigenvectors converged" % (iparam[4], k))
>
>      # now extract eigenvalues and (optionally) eigenvectors
>      rvec = return_eigenvectors
>
>
> Thanks,
> James

Hi James,

Please can you send me the matrix off-list (*.mtx).

Thanks in advance,
Nils

From peridot.faceted at gmail.com Thu Jun 12 15:06:49 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 12 Jun 2008 13:06:49 -0600
Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know
In-Reply-To: <826c64da0806111306x6902fdfbh691666ad713049f5@mail.gmail.com>
References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> <484FF543.10605@ar.media.kyoto-u.ac.jp> <826c64da0806111306x6902fdfbh691666ad713049f5@mail.gmail.com>
Message-ID:

2008/6/11 Ivo Maljevic :
> 2008/6/11 David Cournapeau :
>>
>> Ivo Maljevic wrote:
>> > Based on comments from Gael Varoquaux and David Cournapeau, I did the
>> > execution time test.
>> > At least for me, it is clear that if the number is a real scalar, AND
>> > the expected result is also real,
>> > the best way is to call the math version of sqrt() function. The
>> > differences are more than significant, as you can see:
>> >
>>
>> It is expected for numpy/scipy functions to be much slower than python
>> *for scalar*. Since they are optimized to be used with arrays, you are
>> paying the cost to initialize the machinery to handle arrays (ufunc),
>> without the benefit.
>
> No disagreement there.
> My comment, which I realize was probably unnecessary, is that
> one has to pay attention as to which version of the function is called.
> Since I started SciPy I always used everything from it, never even thinking
> that math.sqrt(), math.sin(), etc., are faster if I'm working with scalar
> values. And there are always coefficients or scaling parameters that do not
> need to be in a vector form.

Keep in mind that if you are doing many computations, doing them with
scalars exposes you to a great deal of overhead from the python
interpreter. Put another way, if you're writing a program in which you
compute enough square roots that the speed difference between math.sqrt
and numpy.sqrt is significant, you should almost certainly rewrite your
program so that you are computing vector square roots; the time you save
in python overhead will make your program vastly faster. In fact it is
probably difficult to write a nontrivial program in which the time spent
computing scalar square roots is significant.

Anne

From dwf at cs.toronto.edu Tue Jun 10 19:26:28 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Tue, 10 Jun 2008 19:26:28 -0400
Subject: [SciPy-user] Some mathematics/statistics books
In-Reply-To: <484ED79B.8000607@ucsf.edu>
References: <484ED7AA.2040907@slac.stanford.edu> <484ED79B.8000607@ucsf.edu>
Message-ID: <0BAB70CC-5B85-4CB5-9355-195B537B08EA@cs.toronto.edu>

On 10-Jun-08, at 3:35 PM, Karl Young wrote:

> I completely agree with Johann that the best way to start is to just
> dive into the tutorials and examples but there are a few books around
> that might not be bad to have at your side when doing so.

I just looked through the Gershenfeld book's table of contents, and while
it's full of tons of useful stuff, it may be the wrong place to start.
Certainly, don't try to read the book in sequence from start to finish if
you don't have the requisite background.

What background? Probably some linear algebra at least, and some single
and multivariable calculus. The OP didn't specify what level of education
he's had in these matters, so just in case I'll include a few books.

I'm not too familiar with books on single-variable calculus but Tom
Apostol's book on the subject appears to be quite well reviewed. It's also
been around since the 1960's so it should be possible to find an
inexpensive used copy.

As for multivariable calculus and linear algebra, a book that I'm fond of
that takes an integrated approach to these two subjects is "Multivariable
Mathematics" by Theodore Shifrin, ISBN 047152638X. It goes through a lot
of examples but doesn't sacrifice rigour. It won't help you much if you
haven't done any single variable calculus.

My supervisor introduced me to Gilbert Strang's "Linear Algebra and its
Applications", which I quite like. People seem to either love or hate this
book, but it endeavours to help you develop intuition for linear algebraic
concepts, and is quite light on theory, heavy on application.

Hope that helps,

David

From timmichelsen at gmx-topmail.de Thu Jun 12 17:14:03 2008
From: timmichelsen at gmx-topmail.de (Tim Michelsen)
Date: Thu, 12 Jun 2008 23:14:03 +0200
Subject: [SciPy-user] iteratively masking timeseries
Message-ID:

Hello,
I am making my way forward into timeseries processing with the scikit
package.

I have a timeseries where NoData values are masked when loading the data
into the timeseries.

Now I would like to apply some filters on the data like discarding data
values below measurement device accuracy or above a certain threshold.
Therefore I followed the approach outlined in the FAQ [1]. But then I get
the error pasted below at the end.

Does that mean that one can only mask an array once and cannot mask more
values later on?

I would like to do something like:
1) create timeseries with NoData values => already can do that
2) apply various filters masking more and more data.
a) e.g. get a series with masked values below 10.
b) e.g. get a series with masked values above 100.

I would appreciate any help or hint here.

Kind regards,
Tim

#### pasted from Ipython ####
In [18]: mask[mask<0] = numpy.ma.masked
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)

D:\scripts\timeseries.py in <module>()
----> 1
      2
      3
      4
      5

C:\python25\lib\site-packages\scikits\timeseries\tseries.pyc in __setitem__(self, indx, value)
    522         if self is masked:
    523             raise MAError, 'Cannot alter the masked element.'
--> 524         (sindx, _) = self.__checkindex(indx)
    525         super(TimeSeries, self).__setitem__(sindx, value)
    526     #......................................................

C:\python25\lib\site-packages\scikits\timeseries\tseries.pyc in __checkindex(self, indx)
    490             msg = "Masked arrays must be filled before they can be used " \
    491                   "as indices!"
--> 492             raise IndexError, msg
    493         return (indx,indx)
    494

IndexError: Masked arrays must be filled before they can be used as indices!
#### end from Ipython ####

[1] http://www.scipy.org/Cookbook/TimeSeries/FAQ#head-cfe3617dda0b030f0474a2a773e2dca4da8eaea0

From wnbell at gmail.com Thu Jun 12 17:52:52 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 12 Jun 2008 16:52:52 -0500
Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem
In-Reply-To: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com>
References: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com>
Message-ID:

On Thu, Jun 12, 2008 at 9:24 AM, James Philbin wrote:
> So it seems that eigen_symmetric is failing to find lots of the
> repeated 1 eigenvectors. Is this an inherent problem with ARPACK or
> symptomatic of a bug in scipy? Matlab's eigs seems to not suffer from
> this nearly as much, returning nearly all ones for both the 10 and 20
> cases. I assume this is also using ARPACK under the surface. I can
> upload the matrix S somewhere if people are interested.

Hi James,

While your problem could be caused by a bug in our ARPACK wrappers, it
could also be due to differences in default parameters. Are you sure that
MATLAB is calling ARPACK with the same parameters as eigen_symmetric? For
instance, is the same tolerance used in both cases? Have you set up eigs()
to use the symmetric algorithm?

There may also be a difference in the way MATLAB and SciPy choose the
initial vector of the Krylov subspace (parameter v0 in eigen()).

If you believe this to be a genuine bug, please submit a ticket in Trac
with a short script that demonstrates the error.
http://projects.scipy.org/scipy/scipy

If your matrix is small enough, you could include it with the ticket as
well, perhaps as a compressed MatrixMarket file:

>>> A = #your matrix
>>> from scipy.io import mmwrite
>>> mmwrite('A.mtx', A)

$ gzip A.mtx

> I also think I've fixed a bug in the current version of arpack.py
> relating to the info returned (The warning that maxiters had been
> reached was never shown) and added a warning if some eigenvectors
> didn't converge:
>

Added in r4441: http://projects.scipy.org/scipy/scipy/changeset/4441

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From philbinj at gmail.com Thu Jun 12 20:01:59 2008
From: philbinj at gmail.com (James Philbin)
Date: Fri, 13 Jun 2008 01:01:59 +0100
Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem
In-Reply-To:
References: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com>
Message-ID: <2b1c8c4f0806121701l1e91ce99h2236a8ed20a73029@mail.gmail.com>

Hi,

>>>> A = #your matrix
>>>> from scipy.io import mmwrite
>>>> mmwrite('A.mtx', A)

mmwrite seems not to work for me (latest svn checkout):

File "/usr/lib/python2.5/site-packages/scipy/io/mmio.py", line 67, in mmwrite
MMFile().write(target, a, comment, field, precision)
File "/usr/lib/python2.5/site-packages/scipy/io/mmio.py", line 284, in write
self._write(stream, a, comment, field, precision)
File "/usr/lib/python2.5/site-packages/scipy/io/mmio.py", line 564, in _write
IJV = vstack((a.row, a.col, a.data)).T
File "/usr/lib/python2.5/site-packages/scipy/sparse/csr.py", line 93, in __getattr__
return _cs_matrix.__getattr__(self, attr)
File "/usr/lib/python2.5/site-packages/scipy/sparse/base.py", line 316, in __getattr__
raise AttributeError, attr + " not found"
AttributeError: row not found

I've saved the matrix in (row,col,data) format and uploaded it here:
www.robots.ox.ac.uk/~james/np_temp/S.txt.gz

Thanks,
James

From david at ar.media.kyoto-u.ac.jp Thu Jun 12 22:49:43 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 13 Jun 2008 11:49:43 +0900
Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know
In-Reply-To: <826c64da0806120805i56166f5bue2dbbf74c8bc2ff8@mail.gmail.com>
References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> <485138BC.7070506@vrbka.net> <826c64da0806120805i56166f5bue2dbbf74c8bc2ff8@mail.gmail.com>
Message-ID: <4851E047.2040704@ar.media.kyoto-u.ac.jp>

Ivo Maljevic wrote:
> It looks like the same topic, that is, the difference in speed between
> numpy and scipy.
>
> There are all sorts of differences that I'm seeing while learning
> scipy/numpy/python. Also an interesting one,
> but probably with negligible impact:

Name resolution is slow in python (it is generally slow in interpreted
languages, and python has a rather primitive implementation - no JIT,
etc...). It is one of the reasons why numpy exists in the first place
(f([1, 2, 3, 4, 5, ....]) is much faster than for i in [1, 2, 3, ....]:
f(i) - name resolution is not the only slowdown involved here).
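For instance, a sketch of the two patterns (not from the original message;
the loop timing depends heavily on the interpreter and machine):

    import numpy as np
    from timeit import Timer

    setup = "import numpy as np; x = np.arange(100000.0)"
    # One Python-level name lookup and function call per element:
    looped = Timer("[np.sqrt(v) for v in x]", setup)
    # One call, with the loop running in compiled code:
    vectorized = Timer("np.sqrt(x)", setup)
    print looped.timeit(number=10)
    print vectorized.timeit(number=10)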
cheers,

David

From wnbell at gmail.com Thu Jun 12 23:11:27 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 12 Jun 2008 22:11:27 -0500
Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem
In-Reply-To: <2b1c8c4f0806121701l1e91ce99h2236a8ed20a73029@mail.gmail.com>
References: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com> <2b1c8c4f0806121701l1e91ce99h2236a8ed20a73029@mail.gmail.com>
Message-ID:

On Thu, Jun 12, 2008 at 7:01 PM, James Philbin wrote:
>
> mmwrite seems not to work for me (latest svn checkout):
>

Should be fixed now (SVN r4444).

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From david at ar.media.kyoto-u.ac.jp Thu Jun 12 23:23:00 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 13 Jun 2008 12:23:00 +0900
Subject: [SciPy-user] Speed differences in sqrt calculation: what is good to know
In-Reply-To: <485138BC.7070506@vrbka.net>
References: <826c64da0806110848r24248337jaf3caa2819e6906d@mail.gmail.com> <485138BC.7070506@vrbka.net>
Message-ID: <4851E814.4020703@ar.media.kyoto-u.ac.jp>

Lubos Vrbka wrote:
> hi,
>
> slightly off-topic, but still regarding the speed differences... what's
> the difference between the following 2 functions?
>
> numpy.linalg.solve
> scipy.linalg.solve
>
> the first is almost 2 times faster on the same data (4096 repetitions of
> a A*a = b problem with A = 2x2 float32 matrix) - typically 0.30 vs. 0.57
> sec for the whole operation...
>

I think you will find a lot of those differences with very small data. I
would not even be surprised if it was faster to do the inversion in
python. Using numpy and scipy for that kind of data just does not make
much sense.

I am sure you will have exactly the same problem in matlab, albeit in a
less significant way, because recent matlab engines have JIT (and a much
simpler language, which certainly helps here).

Differences in functionality between numpy and scipy are much more
bothersome to me (sqrt(-1) giving a different answer for numpy and scipy,
for example).

cheers,

David

From waterbug at pangalactic.us Fri Jun 13 00:49:18 2008
From: waterbug at pangalactic.us (Stephen Waterbury)
Date: Fri, 13 Jun 2008 00:49:18 -0400
Subject: [SciPy-user] SciPy 2008 Registration payment page gives traceback
Message-ID: <4851FC4E.9030100@pangalactic.us>

When I tried to pay, I got a traceback --
something about a key error on "sprints" (which
I didn't sign up for). I should have copied it,
sorry.

Steve

From travis at enthought.com Fri Jun 13 08:16:04 2008
From: travis at enthought.com (Travis Vaught)
Date: Fri, 13 Jun 2008 07:16:04 -0500
Subject: [SciPy-user] SciPy 2008 Registration payment page gives traceback
In-Reply-To: <4851FC4E.9030100@pangalactic.us>
References: <4851FC4E.9030100@pangalactic.us>
Message-ID:

Steve,

This is fixed now (and tested). Many apologies. You will need to try
again, though :/

Thanks,

Travis

On Jun 12, 2008, at 11:49 PM, Stephen Waterbury wrote:

> When I tried to pay, I got a traceback --
> something about a key error on "sprints" (which
> I didn't sign up for). I should have copied it,
> sorry.
>
> Steve
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From alexander.borghgraef.rma at gmail.com Fri Jun 13 08:16:08 2008
From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef)
Date: Fri, 13 Jun 2008 14:16:08 +0200
Subject: [SciPy-user] Mlab doesn't work
Message-ID: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com>

Hi all,

I've been playing a bit with mayavi2/mlab, but I run into a problem when
trying to run the first example at http://www.scipy.org/Cookbook/MayaVi/mlab:

import scipy

# prepare some interesting function:
def f(x, y):
    return 3.0*scipy.sin(x*y+1e-4)/(x*y+1e-4)

x = scipy.arange(-7., 7.05, 0.1)
y = scipy.arange(-5., 5.05, 0.1)

# 3D visualization of f:
from enthought.tvtk.tools import mlab
fig = mlab.figure()
s = mlab.SurfRegular(x, y, f)
fig.add(s)

I get the following messages:

mlabtest.py:3: DeprecationWarning: The wxPython compatibility package is no
longer automatically generated or actively maintained. Please switch to
the wx package as soon as possible.
from wxPython.wx import *

Then a blank gtk window pops up, disappears, and then I get:

(python:4960): Gtk-CRITICAL **: gtk_widget_set_colormap: assertion
`!GTK_WIDGET_REALIZED (widget)' failed

I've built the enthought libraries from source on a Fedora 8 system,
and installed them in a local directory (I have no access to
/usr/lib). Importing the libraries works fine, and so does running the
mayavi2 binary, but I haven't managed to get something plotted from a
python script. Any ideas?

--
Alex Borghgraef

From fredmfp at gmail.com Fri Jun 13 08:25:00 2008
From: fredmfp at gmail.com (fred)
Date: Fri, 13 Jun 2008 14:25:00 +0200
Subject: [SciPy-user] Mlab doesn't work
In-Reply-To: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com>
References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com>
Message-ID: <4852671C.9080402@gmail.com>

Alexander Borghgraef wrote:

> I've built the enthought libraries from source on a Fedora 8 system,
> and installed them in a local directory (I have no access to
> /usr/lib). Importing the libraries works fine, and so does running the
> mayavi2 binary, but I haven't managed to get something plotted from a
> python script. Any ideas?
Can you tell us how you run your script ?

Cheers,

--
Fred

From alexander.borghgraef.rma at gmail.com Fri Jun 13 08:34:00 2008
From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef)
Date: Fri, 13 Jun 2008 14:34:00 +0200
Subject: [SciPy-user] Mlab doesn't work
In-Reply-To: <4852671C.9080402@gmail.com>
References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com>
Message-ID: <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com>

On Fri, Jun 13, 2008 at 2:25 PM, fred wrote:
> Alexander Borghgraef wrote:
>
>> I've built the enthought libraries from source on a Fedora 8 system,
>> and installed them in a local directory (I have no access to
>> /usr/lib). Importing the libraries works fine, and so does running the
>> mayavi2 binary, but I haven't managed to get something plotted from a
>> python script. Any ideas?
> Can you tell us how you run your script ?

Saved it to mlabtest.py, then ran

python mlabtest.py

from the command line in a bash shell.
--
Alex Borghgraef

From travis at enthought.com Fri Jun 13 08:35:20 2008
From: travis at enthought.com (Travis Vaught)
Date: Fri, 13 Jun 2008 07:35:20 -0500
Subject: [SciPy-user] EuroSciPy Early Registration Reminder
Message-ID: <26C7F1FC-FE73-4DF2-8AF5-72D15A816D98@enthought.com>

Greetings,

This is a friendly reminder that the Early Registration deadline for the
EuroSciPy Conference is June 15th. If you're interested in attending, but
have not yet registered, please visit:

http://www.scipy.org/EuroSciPy2008

The talks schedule is also now available there.

Also, the keynote speaker this year will be Travis Oliphant, the primary
author of the recent NumPy rewrite. For those doing scientific computing
using Python, this is a conference you'll not want to miss.

See you there.

Travis

From fredmfp at gmail.com Fri Jun 13 08:48:11 2008
From: fredmfp at gmail.com (fred)
Date: Fri, 13 Jun 2008 14:48:11 +0200
Subject: [SciPy-user] Mlab doesn't work
In-Reply-To: <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com>
References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com>
Message-ID: <48526C8B.5040300@gmail.com>

Alexander Borghgraef wrote:

> Saved it to mlabtest.py, then ran
>
> python mlabtest.py
>
> from the command line in a bash shell.
Ok.

Note that these cookbook pages are said to be obsolete.

Try running it under ipython like this:

ipython -wthread
> run mlabtest.py

Cheers,

PS: Please don't be offensive with your subject.
Mlab _does_ work and Gaël is doing a great job on this.

--
Fred

From gael.varoquaux at normalesup.org Fri Jun 13 08:51:50 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 13 Jun 2008 14:51:50 +0200
Subject: [SciPy-user] Mlab doesn't work
In-Reply-To: <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com>
References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com>
Message-ID: <20080613125150.GA3573@phare.normalesup.org>

On Fri, Jun 13, 2008 at 02:34:00PM +0200, Alexander Borghgraef wrote:
> On Fri, Jun 13, 2008 at 2:25 PM, fred wrote:
> > Alexander Borghgraef wrote:

> >> I've built the enthought libraries from source on a Fedora 8 system,
> >> and installed them in a local directory (I have no access to
> >> /usr/lib). Importing the libraries works fine, and so does running the
> >> mayavi2 binary, but I haven't managed to get something plotted from a
> >> python script. Any ideas?
> > Can you tell us how you run your script ?

> Saved it to mlabtest.py, then ran

As it is indicated on the top of the page you are looking at, you should
run this in "ipython -wthread", for instance using "%run mlabtest.py".

Just a question, why are you using TVTK's mlab? Mayavi's mlab (developed
by the same people) is more maintained, even though it has a bit more
dependencies. You can have a look at
https://svn.enthought.com/enthought/attachment/wiki/MayaVi/user_guide.pdf?format=raw
on section 6 (page 22). One remark, if you are using an oldish version of
mayavi, you need to import mlab from "enthought.mayavi.tools", rather than
"enthought.mayavi".
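For reference, a hedged sketch (not part of the original mail) of what the
cookbook example looks like with mayavi's mlab instead of TVTK's; the
module path and the surf() call reflect mayavi2 as of mid-2008 and may
differ in other versions, and it should be run under "ipython -wthread"
as discussed above:

    import numpy as np
    from enthought.mayavi import mlab

    x, y = np.mgrid[-7.:7.05:0.1, -5.:5.05:0.1]
    # surf() builds the same kind of regular surface as SurfRegular above.
    s = mlab.surf(x, y, 3.0*np.sin(x*y + 1e-4)/(x*y + 1e-4))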
HTH,

Gaël

From gael.varoquaux at normalesup.org Fri Jun 13 08:57:49 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 13 Jun 2008 14:57:49 +0200
Subject: [SciPy-user] Mlab doesn't work
In-Reply-To: <48526C8B.5040300@gmail.com>
References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <48526C8B.5040300@gmail.com>
Message-ID: <20080613125749.GB3573@phare.normalesup.org>

On Fri, Jun 13, 2008 at 02:48:11PM +0200, fred wrote:
> PS: Please don't be offensive with your subject.
> Mlab _does_ work and Gaël is doing a great job on this.

I didn't find the subject offensive at all. I actually think that this
thread shows that we have a real problem, as many people stumble into
this. First of all we have a communication problem. I have just edited
the wiki page to hopefully improve this. Second, this limitation of mlab
is a real stumbling block for new users. Prabhu and I are fully aware of
it, and we are looking for a solution. It is hard to find a good
solution, as the technical reasons behind this are fundamental. We need
to find an API/trick to present this in a way that doesn't hurt the
user's intuition (ipython -wthread is already much better than starting
and stopping the GUI mainloop manually).

Cheers,

Gaël

From alexander.borghgraef.rma at gmail.com Fri Jun 13 09:27:46 2008
From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef)
Date: Fri, 13 Jun 2008 15:27:46 +0200
Subject: [SciPy-user] Mlab doesn't work
In-Reply-To: <20080613125150.GA3573@phare.normalesup.org>
References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <20080613125150.GA3573@phare.normalesup.org>
Message-ID: <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com>

On Fri, Jun 13, 2008 at 2:51 PM, Gael Varoquaux wrote:
> On Fri, Jun 13, 2008 at 02:34:00PM +0200, Alexander Borghgraef wrote:
>> On Fri, Jun 13, 2008 at 2:25 PM, fred wrote:
>> > Alexander Borghgraef wrote:
>
>> >> I've built the enthought libraries from source on a Fedora 8 system,
>> >> and installed them in a local directory (I have no access to
>> >> /usr/lib). Importing the libraries works fine, and so does running the
>> >> mayavi2 binary, but I haven't managed to get something plotted from a
>> >> python script. Any ideas?
>> > Can you tell us how you run your script ?
>
>> Saved it to mlabtest.py, then ran
>
> As it is indicated on the top of the page you are looking at, you should
> run this in "ipython -wthread", for instance using "%run mlabtest.py".

Ok. That works, thanks.

> Just a question, why are you using TVTK's mlab? Mayavi's mlab (developed
> by the same people) is more maintained, even though it has a bit more
> dependencies.

Well, because it says tvtk in the example on scipy's cookbook page. I
tend to use matplotlib for data visualization, but since 3D
visualization was hideously sluggish the last time I tried it, I
decided to give mayavi a spin. Download it, build it, try to run the
example code, harass the mailing list... you all know the drill, I
guess. :-) Standard OSS user behaviour.

> You can have a look at
> https://svn.enthought.com/enthought/attachment/wiki/MayaVi/user_guide.pdf?format=raw
> on section 6 (page 22). One remark, if you are using an oldish version of
> mayavi, you need to import mlab from "enthought.mayavi.tools", rather than
> "enthought.mayavi".

RTFM, IOW :-D Will do.
I just wanted a quick 3D plot of my data, nothing fancy or interactive, so
I skipped the manual and went for the wiki.

Oh, and about the subject: no offense intended (and none taken apparently,
thanks Gaël). I just lacked inspiration for the topic title (and I didn't
have much info to share in it) so I kept it simple. Furthermore, I truly
appreciate the work done by the mayavi people, as well as the scipy people
and all open source developers for that matter. Couldn't do my work
without them.

--
Alex Borghgraef

From fredmfp at gmail.com Fri Jun 13 09:32:58 2008
From: fredmfp at gmail.com (fred)
Date: Fri, 13 Jun 2008 15:32:58 +0200
Subject: [SciPy-user] Mlab doesn't work
In-Reply-To: <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com>
References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <20080613125150.GA3573@phare.normalesup.org> <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com>
Message-ID: <4852770A.5010707@gmail.com>

Alexander Borghgraef wrote:

> Oh, and about the subject: no offense intended (and none taken
> apparently, thanks Gaël). I just lacked inspiration for the topic
Sorry. I'm getting too nervous, too tired, these days ;-)

Cheers,

--
Fred

From gael.varoquaux at normalesup.org Fri Jun 13 09:33:23 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 13 Jun 2008 15:33:23 +0200
Subject: [SciPy-user] Mlab doesn't work
In-Reply-To: <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com>
References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <20080613125150.GA3573@phare.normalesup.org> <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com>
Message-ID: <20080613133323.GG3573@phare.normalesup.org>

On Fri, Jun 13, 2008 at 03:27:46PM +0200, Alexander Borghgraef wrote:
> > Just a question, why are you using TVTK's mlab? Mayavi's mlab (developed
> > by the same people) is more maintained, even though it has a bit more
> > dependencies.

> Well, because it says tvtk in the example on scipy's cookbook page. I
> tend to use matplotlib for data visualization, but since 3D
> visualization was hideously sluggish the last time I tried it, I
> decided to give mayavi a spin. Download it, build it, try to run the
> example code, harass the mailing list... you all know the drill, I
> guess. :-) Standard OSS user behaviour.

And standard OSS communication problem on our part (that is especially
bad for the Enthought Tools Suite, I will hopefully be working on it
soon). That's a good answer. I have added a paragraph to the wiki page to
help avoid users stumbling on TVTK. The problem is that these modules are
still fairly new, and best practice is only starting to form, so the
great "IntarWeb" isn't as full of helpful, up-to-date info as it should
be. But the mailing lists are reactive :) (thanks Fred for keeping a
vigilant eye, I am often unreachable currently, and it is good to know
that you will be there to answer these questions).

Cheers,

Gaël

From ndbecker2 at gmail.com Fri Jun 13 10:21:53 2008
From: ndbecker2 at gmail.com (Neal Becker)
Date: Fri, 13 Jun 2008 10:21:53 -0400
Subject: [SciPy-user] Using scipy specfunc in integration
Message-ID:

Any ideas on this?
from scipy.special import erf
from math import exp, tan

def cot(x):
    return 1/tan(x)

N = 8
esnodB = 10
Rd = 10**(.1 * esnodB)

def F(y):
    return exp(-(y**2)) * erf(y * cot(pi/N))

Pe = float(N-1)/float(N) - 0.5 * erf(sqrt(Rd * sin(pi/N))) \
     - 1/(sqrt(pi)) * quadrature(F, 0, sqrt(Rd) * sin(pi/N))[0]

TypeError: only length-1 arrays can be converted to Python scalars

It seems to be complaining about erf. erf Out[33]: From philbinj at gmail.com Fri Jun 13 10:53:05 2008 From: philbinj at gmail.com (James Philbin) Date: Fri, 13 Jun 2008 15:53:05 +0100 Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem In-Reply-To: References: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com> <2b1c8c4f0806121701l1e91ce99h2236a8ed20a73029@mail.gmail.com> Message-ID: <2b1c8c4f0806130753v4f141cb6x7548eaa55733598c@mail.gmail.com> > Should be fixed now (SVN r4444). http://www.robots.ox.ac.uk/~james/np_temp/S.mtx.gz James From philbinj at gmail.com Fri Jun 13 10:57:54 2008 From: philbinj at gmail.com (James Philbin) Date: Fri, 13 Jun 2008 15:57:54 +0100 Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem In-Reply-To: <2b1c8c4f0806130753v4f141cb6x7548eaa55733598c@mail.gmail.com> References: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com> <2b1c8c4f0806121701l1e91ce99h2236a8ed20a73029@mail.gmail.com> <2b1c8c4f0806130753v4f141cb6x7548eaa55733598c@mail.gmail.com> Message-ID: <2b1c8c4f0806130757j7c3bc168u6ca78e7116addb79@mail.gmail.com> Hmm... Has something changed in eigen_symmetric? Now, when I run eigen_symmetric(S,k=10,which='LA') I get the following eigenvalues returned: [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.], which is good! Curious as to what was changed. Thanks, James From nwagner at iam.uni-stuttgart.de Fri Jun 13 11:24:01 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 13 Jun 2008 17:24:01 +0200 Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem In-Reply-To: <2b1c8c4f0806130757j7c3bc168u6ca78e7116addb79@mail.gmail.com> References: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com> <2b1c8c4f0806121701l1e91ce99h2236a8ed20a73029@mail.gmail.com> <2b1c8c4f0806130753v4f141cb6x7548eaa55733598c@mail.gmail.com> <2b1c8c4f0806130757j7c3bc168u6ca78e7116addb79@mail.gmail.com> Message-ID: On Fri, 13 Jun 2008 15:57:54 +0100 "James Philbin" wrote: > Hmm... Has something changed in eigen_symmetric? Now, when I run > eigen_symmetric(S,k=10,which='LA') I get the following eigenvalues > returned: [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.], which is good! > Curious as to what was changed. > > Thanks, > James See http://projects.scipy.org/scipy/scipy/changeset/4441 Nils From nwagner at iam.uni-stuttgart.de Fri Jun 13 12:54:13 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 13 Jun 2008 18:54:13 +0200 Subject: [SciPy-user] Sparse eigenvalues/eigenvectors problem In-Reply-To: <2b1c8c4f0806130757j7c3bc168u6ca78e7116addb79@mail.gmail.com> References: <2b1c8c4f0806120724n1d159964ubd39eb54922a62b5@mail.gmail.com> <2b1c8c4f0806121701l1e91ce99h2236a8ed20a73029@mail.gmail.com> <2b1c8c4f0806130753v4f141cb6x7548eaa55733598c@mail.gmail.com> <2b1c8c4f0806130757j7c3bc168u6ca78e7116addb79@mail.gmail.com> Message-ID: On Fri, 13 Jun 2008 15:57:54 +0100 "James Philbin" wrote: > Hmm... Has something changed in eigen_symmetric? Now, when I run > eigen_symmetric(S,k=10,which='LA') I get the following eigenvalues > returned: [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.], which is good! > Curious as to what was changed.
> > Thanks, > James > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Is your matrix positive definite? Did you try symeig? http://mdp-toolkit.sourceforge.net/symeig.html

from scipy import *
from pylab import spy, show
from symeig import symeig

A = io.mmread('S.mtx.gz')
w, v = symeig(A.todense(), range=(1,25))
spy(A.todense())
show()

Nils From robert.kern at gmail.com Fri Jun 13 14:57:40 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Jun 2008 13:57:40 -0500 Subject: [SciPy-user] Using scipy specfunc in integration In-Reply-To: References: Message-ID: <3d375d730806131157k36ca3535mc2c428c2f3980137@mail.gmail.com> On Fri, Jun 13, 2008 at 09:21, Neal Becker wrote: > Any ideas on this? > from scipy.special import erf > from math import exp, tan > def cot(x): > return 1/tan(x) > > N = 8 > esnodB = 10 > Rd = 10**(.1 * esnodB) > > def F (y): > return exp (-(y**2)) * erf (y * cot (pi/N)) > > > Pe = float(N-1)/float(N) - 0.5 * erf (sqrt (Rd * sin (pi/N))) - 1/(sqrt(pi)) > * quadrature(F, 0, sqrt(Rd) * sin (pi/N))[0] > TypeError: only length-1 arrays can be converted to Python scalars > > It seems to be complaining about erf. > erf > Out[33]: I think it's complaining about exp(), actually. quadrature() is going to pass arrays to F(), not scalars. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From contact at pythonxy.com Fri Jun 13 17:06:05 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Fri, 13 Jun 2008 23:06:05 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.6 Message-ID: <4852E13D.6000405@pythonxy.com> Hi all, Python(x,y) 1.2.6 is now available on http://www.pythonxy.com.

Changes history 06-14-2008 - Version 1.2.6:
* Updated:
  o Cython 0.9.8 (see website)
* Added:
  o py2exe 0.6.6 - Deployment tool which converts Python scripts into stand-alone Windows executables (i.e. the target machine does not require Python or any other library to be installed) - see website
  o PyDAP 2.2.6.4 - Python implementation of the Data Access Protocol, a.k.a. DODS or OPeNDAP (see website)
  o httplib2 0.4 - A comprehensive HTTP client library that supports many features left out of other HTTP libraries (see website)
  o Python(x,y) console: some improvements on automatic logging
  o Interactive consoles: default working directory is the Eclipse/Python workspace folder (default path: User Documents\Python)
  o Notepad++: tab has been replaced by 4 spaces (better compatibility with Python indentation)

Regards, Pierre Raybaut From pgmdevlist at gmail.com Fri Jun 13 19:23:46 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 13 Jun 2008 19:23:46 -0400 Subject: [SciPy-user] iteratively masking timeseries In-Reply-To: References: Message-ID: <200806131923.46336.pgmdevlist@gmail.com> On Thursday 12 June 2008 17:14:03 Tim Michelsen wrote: > Does that mean that one can only mask an array once and cannot mask more > values later on? No no, the message > IndexError: Masked arrays must be filled before they can be used as > indices! doesn't mean you can't mask arrays more than once, just that you can't use a MaskedArray as an index in a TimeSeries without having to fill it first. The reason for this behavior is that we can't tell beforehand how you want to deal with your masked values: should a masked value be considered True? False?
So, when you want to do something like: > mask[mask<0] = numpy.ma.masked just do: mask[(mask<0).filled(True)] = numpy.ma.masked That way, you're masking the values of mask that are negative, and keeping the masked values as masked. From peridot.faceted at gmail.com Fri Jun 13 19:36:08 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 13 Jun 2008 17:36:08 -0600 Subject: [SciPy-user] Using scipy specfunc in integration In-Reply-To: <3d375d730806131157k36ca3535mc2c428c2f3980137@mail.gmail.com> References: <3d375d730806131157k36ca3535mc2c428c2f3980137@mail.gmail.com> Message-ID: 2008/6/13 Robert Kern : > On Fri, Jun 13, 2008 at 09:21, Neal Becker wrote: >> Any ideas on this? >> from scipy.special import erf >> from math import exp, tan >> def cot(x): >> return 1/tan(x) >> >> N = 8 >> esnodB = 10 >> Rd = 10**(.1 * esnodB) >> >> def F (y): >> return exp (-(y**2)) * erf (y * cot (pi/N)) >> >> >> Pe = float(N-1)/float(N) - 0.5 * erf (sqrt (Rd * sin (pi/N))) - 1/(sqrt(pi)) >> * quadrature(F, 0, sqrt(Rd) * sin (pi/N))[0] >> TypeError: only length-1 arrays can be converted to Python scalars >> >> It seems to be complaining about erf. >> erf >> Out[33]: > > I think it's complaining about exp(), actually. quadrature() is going > to pass arrays to F(), not scalars. Specifically, math.exp and math.tan do not accept vector arguments; don't use them. Use numpy.exp and numpy.tan instead. Anne From lopmart at gmail.com Sat Jun 14 05:26:21 2008 From: lopmart at gmail.com (Jose Lopez) Date: Sat, 14 Jun 2008 02:26:21 -0700 Subject: [SciPy-user] memory error at leastsq Message-ID: <4eeef9d40806140226w519c2b2do95d8f40067caf108@mail.gmail.com> hi, help with a memory error at leastsq I am working with a mathematical model of mean squared error; following the least-squares approach, a solution is sought by minimizing MSE(I,B) = (S1-(I*B))^2 + (S2-(I*B))^2 + (S3-(I*B))^2 where S1, S2, S3, I and B are k x l matrices, and I work with optimize.leastsq of scipy. Well, for k = 5 and l = 5 it works fine, but for k = 64 and l = 64 scipy gives me the following error: MemoryError: File: "c:\python25\lib....\minpack.py", line 268, in leastsq retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag) My calling line is xopt = optimize.leastsq(funcion, IB, args=(Sgen)) where Sgen is S1, S2, S3 in lexicographic order and IB is I and B in lexicographic order. So my questions are: why does this happen, and how can I solve it? thanks JL From david at ar.media.kyoto-u.ac.jp Sat Jun 14 05:35:11 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 14 Jun 2008 18:35:11 +0900 Subject: [SciPy-user] memory error at leastsq In-Reply-To: <4eeef9d40806140226w519c2b2do95d8f40067caf108@mail.gmail.com> References: <4eeef9d40806140226w519c2b2do95d8f40067caf108@mail.gmail.com> Message-ID: <485390CF.3000806@ar.media.kyoto-u.ac.jp> Jose Lopez wrote: > > hi, help with a memory error at leastsq > Hi Jose, We can't really help you unless you give us more information: - Operating System (here it looks like Windows, but 32 bits, 64 bits, Vista, XP?) - How did you install numpy / scipy (from sources, binaries, etc...)
cheers, David From robince at gmail.com Sat Jun 14 07:12:57 2008 From: robince at gmail.com (Robin) Date: Sat, 14 Jun 2008 12:12:57 +0100 Subject: [SciPy-user] memory error at leastsq In-Reply-To: <4eeef9d40806140226w519c2b2do95d8f40067caf108@mail.gmail.com> References: <4eeef9d40806140226w519c2b2do95d8f40067caf108@mail.gmail.com> Message-ID: On Sat, Jun 14, 2008 at 10:26 AM, Jose Lopez wrote: > hi, help with a memory error at leastsq > > I am working with a mathematical model of mean squared error; following > the least-squares approach, a solution is sought by minimizing > > MSE(I,B) = (S1-(I*B))^2 + (S2-(I*B))^2 + (S3-(I*B))^2 > > where S1, S2, S3, I and B are k x l matrices I don't know if it could be related to your error, but you should be careful whether you are using arrays or matrices. I think with matrices the ^2 above will do a matrix product, whereas for mean square error you probably want this operation element-wise... So perhaps try using arrays instead of matrices? You would probably want to take the sum as well. It might be faster to use dot as well to do the squaring (dot(v,v))... Cheers Robin From lopmart at gmail.com Sat Jun 14 08:44:50 2008 From: lopmart at gmail.com (Jose Lopez) Date: Sat, 14 Jun 2008 05:44:50 -0700 Subject: [SciPy-user] memory error at leastsq In-Reply-To: <485390CF.3000806@ar.media.kyoto-u.ac.jp> References: <4eeef9d40806140226w519c2b2do95d8f40067caf108@mail.gmail.com> <485390CF.3000806@ar.media.kyoto-u.ac.jp> Message-ID: <4eeef9d40806140544q45ee7869o9dbd1601b96d1547@mail.gmail.com> The operating system is Windows Vista 32 bits, and I installed scipy from Python(x,y) (http://www.pythonxy.com) On Sat, Jun 14, 2008 at 2:35 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Jose Lopez wrote: > > > > hi, help with a memory error at leastsq > > > > Hi Jose, > > We can't really help you unless you give us more information: > - Operating System (here it looks like Windows, but 32 bits, 64 > bits, Vista, XP?) > - How did you install numpy / scipy (from sources, binaries, etc...) > > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Sat Jun 14 13:21:23 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 14 Jun 2008 19:21:23 +0200 Subject: [SciPy-user] Alex Martelli giving the SciPy 2008 Keynote Message-ID: <20080614172123.GB31186@phare.normalesup.org> On behalf of the SciPy2008 organizing committee, I am happy to announce that the Keynote at the conference will be given by Alex Martelli. It is a pleasure for us to receive Alex. He currently works as "Uber Tech Leader" at Google and is the author of two of the Python classics: "Python in a nutshell" and the "Python CookBook". Alex graduated in electronic engineering from the University of Bologna and worked in chip design first for Texas Instruments, and later for IBM Research. During the 8 years he spent at IBM, he gradually shifted from hardware design to software development while winning three Outstanding Technical Achievement Awards. Then he joined think3 inc., an Italian CAD company, as Senior Software Consultant, where he developed libraries, network protocols, GUI engines, event frameworks, and web access frontends.
After 12 years at think3, he worked for 3 years as a freelance consultant, mostly doing Python development, before joining Google. Alex won the 2002 Activators' Choice Award, and the 2006 Frank Willison award for outstanding contributions to the Python community. Alex has also taught courses on programming, development methods, object-oriented design, and numerical computing, at Ferrara University (Italy) and other venues. Alex's proudest achievement is the articles that appeared in Bridge World (January/February 2000), which were hailed as giant steps towards solving issues that had haunted contract-bridge game theoreticians for decades. This biography was loosely adapted from Alex's autobiography (http://www.aleax.it/bio.txt), more information can be found on his website http://www.aleax.it . Ga?l From ndbecker2 at gmail.com Sat Jun 14 17:57:07 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Sat, 14 Jun 2008 17:57:07 -0400 Subject: [SciPy-user] Using scipy specfunc in integration References: <3d375d730806131157k36ca3535mc2c428c2f3980137@mail.gmail.com> Message-ID: Anne Archibald wrote: > 2008/6/13 Robert Kern : >> On Fri, Jun 13, 2008 at 09:21, Neal Becker wrote: >>> Any ideas on this? >>> from scipy.special import erf >>> from math import exp, tan >>> def cot(x): >>> return 1/tan(x) >>> >>> N = 8 >>> esnodB = 10 >>> Rd = 10**(.1 * esnodB) >>> >>> def F (y): >>> return exp (-(y**2)) * erf (y * cot (pi/N)) >>> >>> >>> Pe = float(N-1)/float(N) - 0.5 * erf (sqrt (Rd * sin (pi/N))) - >>> 1/(sqrt(pi)) * quadrature(F, 0, sqrt(Rd) * sin (pi/N))[0] >>> TypeError: only length-1 arrays can be converted to Python scalars >>> >>> It seems to be complaining about erf. >>> erf >>> Out[33]: >> >> I think it's complaining about exp(), actually. quadrature() is going >> to pass arrays to F(), not scalars. > > Specifically, math.exp and math.tan do not accept vector arguments; > don't use them. Use numpy.exp and numpy.tan instead. > > Anne Thanks, but I'm confused. The above code is strictly scalar - who's asking for vectors? From robert.kern at gmail.com Sat Jun 14 18:25:14 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 14 Jun 2008 17:25:14 -0500 Subject: [SciPy-user] Using scipy specfunc in integration In-Reply-To: References: <3d375d730806131157k36ca3535mc2c428c2f3980137@mail.gmail.com> Message-ID: <3d375d730806141525r19ae804ek3691a3accf43fb36@mail.gmail.com> On Sat, Jun 14, 2008 at 16:57, Neal Becker wrote: > Anne Archibald wrote: > >> 2008/6/13 Robert Kern : >>> On Fri, Jun 13, 2008 at 09:21, Neal Becker wrote: >>>> Any ideas on this? >>>> from scipy.special import erf >>>> from math import exp, tan >>>> def cot(x): >>>> return 1/tan(x) >>>> >>>> N = 8 >>>> esnodB = 10 >>>> Rd = 10**(.1 * esnodB) >>>> >>>> def F (y): >>>> return exp (-(y**2)) * erf (y * cot (pi/N)) >>>> >>>> >>>> Pe = float(N-1)/float(N) - 0.5 * erf (sqrt (Rd * sin (pi/N))) - >>>> 1/(sqrt(pi)) * quadrature(F, 0, sqrt(Rd) * sin (pi/N))[0] >>>> TypeError: only length-1 arrays can be converted to Python scalars >>>> >>>> It seems to be complaining about erf. >>>> erf >>>> Out[33]: >>> >>> I think it's complaining about exp(), actually. quadrature() is going >>> to pass arrays to F(), not scalars. >> >> Specifically, math.exp and math.tan do not accept vector arguments; >> don't use them. Use numpy.exp and numpy.tan instead. >> >> Anne > Thanks, but I'm confused. The above code is strictly scalar - who's asking > for vectors? 
Like I said, quadrature() passes arrays to the integrand function, not scalars, and expects the integrand function to evaluate itself elementwise on those arrays. Look at its docstring. Set vec_func=False if you want the function to only take scalars. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Sun Jun 15 03:00:52 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 15 Jun 2008 09:00:52 +0200 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <3d375d730804211349p4e75ab5cs6603fc13869f519@mail.gmail.com> References: <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> <3d375d730804211349p4e75ab5cs6603fc13869f519@mail.gmail.com> Message-ID: Zach et al, hi, have there been any developments on this...? I.e. has anyone gotten in contact with Fredrik Lundh? I recently got more problems with reading a Zeiss Confocal Microscope (LSM) file -- as far as I can tell, LSM files are TIFF files, and ImageMagick recognizes the file in question as such.... The point here is that I don't think the Image-SIG mailing list is very helpful.... (it's only a fraction as good as what we are used to over here at SciPy....) So, (again,) it would be great if we could unite our image-io interests over here at SciPy... Thanks, Sebastian On Mon, Apr 21, 2008 at 10:49 PM, Robert Kern wrote: > On Mon, Apr 21, 2008 at 3:37 PM, Zachary Pincus wrote: >> Let me look into whether this idea is at all feasible, and if it is we >> can revisit the issue of whether it belongs anywhere near scipy. >> (Would getting Fredrik Lundh's OK to use various bits in this way make >> things easier? He does seem much more responsive to direct queries >> than patch-submissions.) > > That would alleviate my concerns, yes. > > -- > Robert Kern From yosefmel at post.tau.ac.il Sun Jun 15 04:18:59 2008 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Sun, 15 Jun 2008 11:18:59 +0300 Subject: [SciPy-user] Solver with n-dimentional steps Message-ID: <200806151118.59818.yosefmel@post.tau.ac.il> Hi all, I'm trying to use scipy.optimize.fsolve(function, x0, args) with a function that has an input vector with length in the tens of thousands (about 3*4000). This turns out to be impractical, because fsolve runs the function after taking a step in each direction in turn. Running the function takes about 1/10 second, so in the thousands of runs it's way too much. What I'm looking for, I guess, is a way to make fsolve take a multidimensional step, so the function does not need to run 12000 times for each step. Thanks in advance, Yosef Meller.
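For reference, a minimal sketch of the call pattern being discussed (the residual function and Jacobian below are hypothetical stand-ins; as the replies point out, without an analytic fprime, fsolve has to estimate the Jacobian column by column with finite differences):

import numpy as np
from scipy.optimize import fsolve

def residual(x):
    # hypothetical system: x0 + x1 - 3 = 0, x0*x1 - 2 = 0
    return np.array([x[0] + x[1] - 3.0, x[0]*x[1] - 2.0])

def jacobian(x):
    # analytic Jacobian, one row per equation
    return np.array([[1.0, 1.0],
                     [x[1], x[0]]])

root = fsolve(residual, np.array([0.5, 0.5]), fprime=jacobian)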
From dwf at cs.toronto.edu Sun Jun 15 06:19:54 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 15 Jun 2008 06:19:54 -0400 Subject: [SciPy-user] Solver with n-dimentional steps In-Reply-To: <200806151118.59818.yosefmel@post.tau.ac.il> References: <200806151118.59818.yosefmel@post.tau.ac.il> Message-ID: <1B9D8F05-27B4-4227-9119-B8EC66BD5C4B@cs.toronto.edu> On 15-Jun-08, at 4:18 AM, Yosef Meller wrote: > I'm trying to use scipy.optimize.fsolve(function, x0, args) with a > function > that has an input vector with length in the tens of thousands (about > 3*4000). > This turns out to be impractical, because fsolve runs the function > after > taking a step in each direction in turn. Running the function takes > about 1/10 > second, so in the thousands of runs it's way too much. It's doing finite differences to estimate the gradient. This is pretty much unavoidable if you're only giving it those three arguments. Have you thought about supplying fprime as a function to analytically compute the gradient? David From dmitrey.kroshko at scipy.org Sun Jun 15 06:16:13 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 15 Jun 2008 13:16:13 +0300 Subject: [SciPy-user] Solver with n-dimentional steps In-Reply-To: <200806151118.59818.yosefmel@post.tau.ac.il> References: <200806151118.59818.yosefmel@post.tau.ac.il> Message-ID: <4854EBED.303@scipy.org> You should provide the gradient analytically; there is no other way to speed up fsolve on a problem with so many variables. Regards, D. Yosef Meller wrote: > Hi all, > > I'm trying to use scipy.optimize.fsolve(function, x0, args) with a function > that has an input vector with length in the tens of thousands (about 3*4000). > This turns out to be impractical, because fsolve runs the function after > taking a step in each direction in turn. Running the function takes about 1/10 > second, so in the thousands of runs it's way too much. > > What I'm looking for, I guess, is a way to make fsolve take a multidimensional > step, so the function does not need to run 12000 times for each step. > > Thanks in advance, > Yosef Meller. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From yosefmel at post.tau.ac.il Sun Jun 15 06:41:16 2008 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Sun, 15 Jun 2008 13:41:16 +0300 Subject: [SciPy-user] Solver with n-dimentional steps In-Reply-To: <1B9D8F05-27B4-4227-9119-B8EC66BD5C4B@cs.toronto.edu> References: <200806151118.59818.yosefmel@post.tau.ac.il> <1B9D8F05-27B4-4227-9119-B8EC66BD5C4B@cs.toronto.edu> Message-ID: <200806151341.16358.yosefmel@post.tau.ac.il> On Sunday 15 June 2008 13:19:54 David Warde-Farley wrote: > It's doing finite differences to estimate the gradient. This is pretty > much unavoidable if you're only giving it those three arguments. Have > you thought about supplying fprime as a function to analytically > compute the gradient? Having dug through the fortran code, I realized that I would have to do that. Thanks for the answer. From fredmfp at gmail.com Sun Jun 15 07:54:22 2008 From: fredmfp at gmail.com (fred) Date: Sun, 15 Jun 2008 13:54:22 +0200 Subject: [SciPy-user] reading binary files written by a gfortran code... Message-ID: <485502EE.6030708@gmail.com> Hi, Ok, this is a well-known issue: binary files written by a fortran program have a "special" format.
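Concretely, each record in a sequential unformatted file is framed by length markers, so a single 4-byte real does not arrive as 4 bytes on disk. A sketch of peeling the markers off on the Python side (assuming a hypothetical file holding one float32 record framed by 4-byte markers):

import struct
import numpy as np

f = open('fort_output.dat', 'rb')  # hypothetical file name
raw = f.read()
f.close()

n = struct.unpack('=i', raw[:4])[0]                 # leading marker: record length in bytes
data = np.fromstring(raw[4:4+n], dtype=np.float32)  # the payload itself
# the trailing marker raw[4+n:8+n] repeats the same length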
So my question is: is it possible to write a binary file _using gfortran_ (no problem with the intel fortran compiler) without this formatting? Is there a peculiar syntax to the open() function? I did not find any relevant information on the web. The obvious reason is that I have to read these files with the scipy.io.numpyio fread method. TIA. Cheers, -- Fred From hoytak at gmail.com Sun Jun 15 09:55:47 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Sun, 15 Jun 2008 16:55:47 +0300 Subject: [SciPy-user] Solver with n-dimentional steps In-Reply-To: <200806151341.16358.yosefmel@post.tau.ac.il> References: <200806151118.59818.yosefmel@post.tau.ac.il> <1B9D8F05-27B4-4227-9119-B8EC66BD5C4B@cs.toronto.edu> <200806151341.16358.yosefmel@post.tau.ac.il> Message-ID: <4db580fd0806150655v4a13b4b5yd0c49b482d4a8ec1@mail.gmail.com> Yosef, I'm suggesting this partly to satisfy my own curiosity, so ignore this if it is difficult. Since you are running with so many dimensions, you might want to try minimizing the square of your function using fmin_l_bfgs_b. lbfgs is written for higher dimensions, and I've heard it can give incredible speed improvements over many other methods. I'm curious if it would help in your case. --Hoyt On Sun, Jun 15, 2008 at 1:41 PM, Yosef Meller wrote: > On Sunday 15 June 2008 13:19:54 David Warde-Farley wrote: >> It's doing finite differences to estimate the gradient. This is pretty >> much unavoidable if you're only giving it those three arguments. Have >> you thought about supplying fprime as a function to analytically >> compute the gradient? > > Having dug through the fortran code, I realized that I would have to do that. > Thanks for the answer. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From zachary.pincus at yale.edu Sun Jun 15 11:33:57 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 15 Jun 2008 11:33:57 -0400 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: References: <9457e7c80804210038n2d619393k74cdee0cb481a59b@mail.gmail.com> <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> <3d375d730804211349p4e75ab5cs6603fc13869f519@mail.gmail.com> Message-ID: <53B94D38-4F29-4FB6-9AED-A9647FE3A905@yale.edu> Hi, I've made little progress on this as I've been really quite busy with my research, I'm afraid. Sorry. I will try to contact Fredrik today, though. > I recently got more problems with reading a Zeiss Confocal Microscope > (LSM) file -- as far as I can tell, LSM files are TIFF files, and > ImageMagick recognizes the file in question as such .... I recently have been looking at some numpy bindings for ImageMagick that were sent to me by an acquaintance at CMU. I can send them over (once I make sure there's no problem re-distributing them), if that would be helpful for anyone before we get a dependency-free solution.
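In the meantime, the basic PIL-to-numpy route already covers garden-variety formats; a minimal sketch, assuming PIL 1.1.6+ (which exposes the array interface to numpy) and a hypothetical example.png -- exotic multi-page formats like LSM are exactly where this falls over:

import numpy
import Image  # the PIL top-level module

im = Image.open('example.png')  # hypothetical input file
arr = numpy.asarray(im)         # converts via the array interface
print arr.shape, arr.dtype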
Also on the microscopy-file-format question (a most vexed issue, to be sure), perhaps the tools at http://www.loci.wisc.edu/ome/formats.html will be useful. I've been using the bfconvert tool to turn Zeiss ZVI files into normal tiffs, and it handles a ton of other microscopy formats. (The bad news is that it's all in Java, so direct python bindings are rather unlikely.) Zach On Jun 15, 2008, at 3:00 AM, Sebastian Haase wrote: > Zach et al, > > hi, has there been a development on this ... ? > I.e. has anyone gotten in contact with Fredrik Lundh ? > > I recently got more problems with reading a Zeiss Confocal Microscope > (LSM) file -- as far as I can tell, LSM files are TIFF files, and > ImageMagick recognizes the file in question as such .... > The point here is, > that I don't think that the Image-SIG mailing list is very > helpful .... > (it'is only a fraction as good as what we are used to over here at > SciPy .... ) > > So, (again,) it would be great if we could unite our image-io > interests over here at SciPy... > > Thanks, > Sebastian > > > > On Mon, Apr 21, 2008 at 10:49 PM, Robert Kern > wrote: >> On Mon, Apr 21, 2008 at 3:37 PM, Zachary Pincus > > wrote: >>> Let me look into whether this idea is at all feasible, and if it >>> is we >>> can revisit the issue of whether it belongs anywhere near scipy. >>> (Would getting Fredrik Lundh's OK to use various bits in this way >>> make >>> things easier? He does seem much more responsive to direct queries >>> than patch-submissions.) >> >> That would alleviate my concerns, yes. >> >> -- >> Robert Kern > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From phaustin at gmail.com Sun Jun 15 12:10:30 2008 From: phaustin at gmail.com (Phil Austin) Date: Sun, 15 Jun 2008 09:10:30 -0700 Subject: [SciPy-user] reading binary files written by a gfortran code... In-Reply-To: <485502EE.6030708@gmail.com> References: <485502EE.6030708@gmail.com> Message-ID: <48553EF6.1060001@gmail.com> fred wrote: > Hi, > > Ok, this is a well known issue that binary files written by a fortran > program have a "special" format. > > So my question is : is it possible to write binary file _using gfortran_ > (no problem with intel fortran compiler) without this formatting ? You want to write/read direct access files, which are just the bytes without the compiler-specific recordlength information. 
Here is an example of some tutorial code I wrote for myself exercising memmap and a fortran direct-access read -- just replace read with write in the fortran code to get output 1) write a test binary data file in python with fortran layout and make sure you can read it back in using memmap

import numpy as np

xdim = 4
ydim = 5
zdim = 6
theData = np.empty([xdim, ydim, zdim], dtype=np.int32)
for i in np.arange(xdim):
    for j in np.arange(ydim):
        for k in np.arange(zdim):
            theData[i, j, k] = 100*(i+1) + 10*(j+1) + (k+1)
print theData

fortout = open('fortout.dat', 'wb')
Cout = open('Cout.dat', 'wb')
theData.tofile(fortout)
theData.tofile(Cout)
fortout.close()
Cout.close()

theFout = np.memmap('fortout.dat', dtype=np.int32, shape=(xdim, ydim, zdim), order='F')
theCout = np.memmap('Cout.dat', dtype=np.int32, shape=(xdim, ydim, zdim), order='C')
for i in np.arange(xdim):
    for j in np.arange(ydim):
        for k in np.arange(zdim):
            theFout[i, j, k] = theData[i, j, k]
            theCout[i, j, k] = theData[i, j, k]
theFout.sync()
theCout.sync()

2) and now read fortout.dat using standard f95 direct access:

program readfile
  integer,parameter :: xdim=4,ydim=5,zdim=6
  integer(kind=4) :: thevar(xdim,ydim,zdim)
  integer :: i,j,k,recnum
  open(unit=12,file="fortout.dat",access="direct",action="read",recl=4)
  recnum=0
  do i=1,xdim
    do j=1,ydim
      do k=1,zdim
        recnum=recnum+1
        read(12,rec=recnum) thevar(i,j,k)
        print *,i,j,k,thevar(i,j,k)
      enddo
    enddo
  enddo
  close(unit=12)
end

From haase at msg.ucsf.edu Sun Jun 15 12:50:18 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sun, 15 Jun 2008 18:50:18 +0200 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <53B94D38-4F29-4FB6-9AED-A9647FE3A905@yale.edu> References: <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> <3d375d730804211349p4e75ab5cs6603fc13869f519@mail.gmail.com> <53B94D38-4F29-4FB6-9AED-A9647FE3A905@yale.edu> Message-ID: On Sun, Jun 15, 2008 at 5:33 PM, Zachary Pincus wrote: > Hi, > > I've made little progress on this as I've been really quite busy with > my research, I'm afraid. Sorry. > > I will try to contact Fredrik today, though. > great - thanks. >> I recently got more problems with reading a Zeiss Confocal Microscope >> (LSM) file -- as far as I can tell, LSM files are TIFF files, and >> ImageMagick recognizes the file in question as such .... > > I recently have been looking at some numpy bindings for ImageMagick > that were sent to me by an acquaintance at CMU. I can send them over > (once I make sure there's no problem re-distributing them), if that > would be helpful for anyone before we get a dependency-free solution. > What I have seen of imagemagick in terms of bindings was very discouraging.... (Mostly abandoned stuff, because I.M. seems to be a moving target; with every release a new API -- maybe that has changed by now.) Main point though: PIL is really close to what we need.... > Also on the microscopy-file-format question (a most vexed issue, to be > sure), perhaps the tools at http://www.loci.wisc.edu/ome/formats.html > will be useful. I've been using the bfconvert tool to turn Zeiss ZVI > files into normal tiffs, and it handles a ton of other microscopy > formats. (The bad news is that it's all in Java, so direct python > bindings are rather unlikely.)
Yeah I know about this. I actually tried some Jython on it: and I made a nice script which converts "whole directory trees full of " LOCI-readable files into my preferred (memory mappable) file format (it's close to the Deltavision format, a derivative of the MRC format) - Sebastian > > Zach > > > > On Jun 15, 2008, at 3:00 AM, Sebastian Haase wrote: > >> Zach et al, >> >> hi, has there been a development on this ... ? >> I.e. has anyone gotten in contact with Fredrik Lundh ? >> >> I recently got more problems with reading a Zeiss Confocal Microscope >> (LSM) file -- as far as I can tell, LSM files are TIFF files, and >> ImageMagick recognizes the file in question as such .... >> The point here is, >> that I don't think that the Image-SIG mailing list is very >> helpful .... >> (it'is only a fraction as good as what we are used to over here at >> SciPy .... ) >> >> So, (again,) it would be great if we could unite our image-io >> interests over here at SciPy... >> >> Thanks, >> Sebastian >> >> >> >> On Mon, Apr 21, 2008 at 10:49 PM, Robert Kern >> wrote: >>> On Mon, Apr 21, 2008 at 3:37 PM, Zachary Pincus >> > wrote: >>>> Let me look into whether this idea is at all feasible, and if it >>>> is we >>>> can revisit the issue of whether it belongs anywhere near scipy. >>>> (Would getting Fredrik Lundh's OK to use various bits in this way >>>> make >>>> things easier? He does seem much more responsive to direct queries >>>> than patch-submissions.) >>> >>> That would alleviate my concerns, yes. >>> >>> -- >>> Robert Kern From zachary.pincus at yale.edu Sun Jun 15 13:14:45 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 15 Jun 2008 13:14:45 -0400 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: References: <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> <3d375d730804211349p4e75ab5cs6603fc13869f519@mail.gmail.com> <53B94D38-4F29-4FB6-9AED-A9647FE3A905@yale.edu> Message-ID: <32CCE7B7-F876-41E8-AD5B-0347BFC756DF@yale.edu> Hi all, >> I will try to contact Fredrik today, though. >> > great - thanks. I'll let you know what I learn. Anyhow, I think that I can't really take the lead on any project right now because I have a ton of other stuff on my plate for the next month or two. If I do hear back from Fredrik in the positive, what I can do is send an interested party what I have so far, which is called "PIL- Lite" and is a private fork of PIL. From there, it should be possible to tear everything out except the image header IO (in the *ImagePlugin files) and graft that on to numpy/python decompression/pixel decoding. (At that point, what we'd have is less a fork of PIL, which rightly concerned Robert, and more a separate entity that happens to share a bit of code with PIL.) But, it won't be super-easy -- image formats are a real bear, and there are things like palette-modes to consider (do we want to support them?) as well as the issue of JPEG decoding. 
Zach From cohen at slac.stanford.edu Sun Jun 15 13:36:04 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 15 Jun 2008 19:36:04 +0200 Subject: [SciPy-user] How to get rid of nan and Inf In-Reply-To: <8f5413a60806121102h4849828dief9fd5387f1b5fee@mail.gmail.com> References: <8f5413a60806121102h4849828dief9fd5387f1b5fee@mail.gmail.com> Message-ID: <48555304.4080301@slac.stanford.edu>

In [1]: import numpy
In [2]: numpy.nan_to_num?
Type: function
Base Class:
String Form:
Namespace: Interactive
File: /usr/lib/python2.5/site-packages/numpy/lib/type_check.py
Definition: numpy.nan_to_num(x)
Docstring:
    Returns a copy of replacing NaN's with 0 and Infs with large numbers
    The following mappings are applied:
        NaN -> 0
        Inf -> limits.double_max
        -Inf -> limits.double_min

JCT Tim Gray wrote: > nan_to_num() does this. I think it's somewhere in numpy. > > Makes Nan's -> 0's and infs -> machine limits. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fredmfp at gmail.com Sun Jun 15 13:38:32 2008 From: fredmfp at gmail.com (fred) Date: Sun, 15 Jun 2008 19:38:32 +0200 Subject: [SciPy-user] reading binary files written by a gfortran code... In-Reply-To: <48553EF6.1060001@gmail.com> References: <485502EE.6030708@gmail.com> <48553EF6.1060001@gmail.com> Message-ID: <48555398.70100@gmail.com> Phil Austin wrote: > > You want to write/read direct access files, which are just the bytes > without the compiler-specific recordlength information. Here is > an example of some tutorial code I wrote for myself exercising > memmap and a fortran direct-access read -- just replace read with > write in the fortran code to get output Thanks Phil. But I would prefer the "magic line" fortran code (if it exists) to write the binary file directly with the right format. For instance, as seen on the web, open(unit=20, file='foo.dat', form='unformatted', access='stream') does not work: Fortran runtime error: Bad ACCESS parameter in OPEN statement In fact, I would like to know all the access parameters for the open statement. Where can I find them? I did not find relevant info in the gfortran manpage, gfortran info, etc. Cheers, -- Fred From gnurser at googlemail.com Sun Jun 15 14:31:14 2008 From: gnurser at googlemail.com (George Nurser) Date: Sun, 15 Jun 2008 19:31:14 +0100 Subject: [SciPy-user] reading binary files written by a gfortran code... In-Reply-To: <485502EE.6030708@gmail.com> References: <485502EE.6030708@gmail.com> Message-ID: <1d1e6ea70806151131n6ed98424ye47e0da40ab24a05@mail.gmail.com> Hi Fred, Fortran (and this includes gfortran) seems to add stuff at the beginning and end of a file. If the unformatted fortran file just holds one array of real*8 data, the following works for me (and I think it works for real*4 as well). It may need some modernizing...:

from numpy import *

def readbin(file_in, swap=False, f8=True):
    if f8:
        htot = fromfile(file=file_in, dtype=float)
        c = htot.view(single)
        hc = c[1:-1].view(double)  # drop the 4-byte record markers at each end
    else:
        htot = fromfile(file=file_in, dtype=float32)
        hc = htot[1:-1]            # note: the original posted 'c[1:-1]' here, but c is only defined in the f8 branch
    if swap:
        hc = hc.byteswap()
    return hc

import readbin
a1 = readbin.readbin('unf.dat')

2008/6/15 fred : > Hi, > > Ok, this is a well-known issue: binary files written by a fortran > program have a "special" format. > > So my question is: is it possible to write a binary file _using gfortran_ > (no problem with the intel fortran compiler) without this formatting?
> > Is there a peculiar syntax to the open() function? > > I did not find any relevant information on the web. > > The obvious reason is that I have to read these files with > the scipy.io.numpyio fread method. > > TIA. > > Cheers, > > -- > Fred > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From meesters at uni-mainz.de Sun Jun 15 15:07:23 2008 From: meesters at uni-mainz.de (Christian Meesters) Date: Sun, 15 Jun 2008 21:07:23 +0200 Subject: [SciPy-user] NNLS in scipy Message-ID: <1213556843.5734.10.camel@meesters.biologie.uni-mainz.de> Hi, I'd like to perform some "non-negatively constrained least squares" algorithm to fit my data, like this: Signal = SUM_i a_i C_i where C_i is some simulated signal and a_i the amplitude contributed by that simulated signal. Or in terms of arrays, I will have one reference array and several arrays of simulated signals. How can I find the (non-negative) coefficients a_i for each simulated signal array? (All negative contributions should be discarded.) Is there anything like that in scipy (which I couldn't find)? Or any other code doing that? Else I could write it myself and contribute, but having some working code would be nice, of course. TIA Christian From mforbes at physics.ubc.ca Sun Jun 15 15:28:19 2008 From: mforbes at physics.ubc.ca (Michael McNeil Forbes) Date: Sun, 15 Jun 2008 12:28:19 -0700 Subject: [SciPy-user] Solver with n-dimentional steps In-Reply-To: <4db580fd0806150655v4a13b4b5yd0c49b482d4a8ec1@mail.gmail.com> References: <200806151118.59818.yosefmel@post.tau.ac.il> <1B9D8F05-27B4-4227-9119-B8EC66BD5C4B@cs.toronto.edu> <200806151341.16358.yosefmel@post.tau.ac.il> <4db580fd0806150655v4a13b4b5yd0c49b482d4a8ec1@mail.gmail.com> Message-ID: <0177B04F-BD46-4DE0-9E85-FDF9530573B4@physics.ubc.ca> Yosef, You can also try using the Broyden method (the B in BFGS) directly to solve your problem. You could try some of the methods scipy.optimize.broyden*, or look at my attached code. -------------- next part -------------- A non-text attachment was scrubbed... Name: broyden.py Type: text/x-python-script Size: 9221 bytes Desc: not available URL: -------------- next part -------------- The Broyden method is a generalized secant method, so the Jacobian is slowly updated as the iteration proceeds. It is useful if you cannot compute the Jacobian. It starts as an iteration, so if you can arrange your equations so that they are expressed as an iteration x -> F(x) that sort of converges, then this will work best, but it often works even if the iteration does not converge. Michael. On 15 Jun 2008, at 6:55 AM, Hoyt Koepke wrote: > Yosef, > > I'm suggesting this partly to satisfy my own curiosity, so ignore this > if it is difficult. Since you are running with so many dimensions, > you might want to try minimizing the square of your function using > fmin_l_bfgs_b. lbfgs is written for higher dimensions, and I've heard > it can give incredible speed improvements over many other methods. > I'm curious if it would help in your case. > > --Hoyt > > On Sun, Jun 15, 2008 at 1:41 PM, Yosef Meller wrote: >> On Sunday 15 June 2008 13:19:54 David Warde-Farley wrote: >>> It's doing finite differences to estimate the gradient. This is >>> pretty >>> much unavoidable if you're only giving it those three arguments. >>> Have >>> you thought about supplying fprime as a function to analytically >>> compute the gradient?
>> ---------------------- Mailing address: Michael McNeil Forbes UW Dept. of Physics Box 351560 Seattle, WA, 98195-1560 For couriers: Physics/Astronomy Building, Room C121 3910 15th Ave NE Seattle, WA, 98195-1560 Front Desk: (206)-543-2770 If you would like to visit me personally: Room B482 (Fourth floor) (206) 543-9754 From dmitrey.kroshko at scipy.org Sun Jun 15 16:02:23 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 15 Jun 2008 23:02:23 +0300 Subject: [SciPy-user] [optimization] OpenOpt release v 0.18 Message-ID: <4855754F.8090207@scipy.org> Hi all, I'm glad to inform you about the new OpenOpt release: v 0.18. OpenOpt is a free (license: BSD) optimization framework (written in the Python language) with connections to lots of solvers (some are C- or Fortran-written) and some native ones. Changes since the previous release 0.17 (March 15, 2008):

* connection to the glpk MILP solver (requires cvxopt v >= 1.0)
* connection to the NLP solver IPOPT (requires installation of the python-ipopt wrapper (made by Eric Xu You), which is currently available for Linux only; see the openopt NLP webpage for more details)
* major changes for the NLP/NSP solver ralg
* splitting of non-linear constraints, which can benefit some solvers
* unified text output for NLP solvers
* handling of maximization problems (via p.goal = 'max' or 'maximum')
* some bugfixes, lots of code cleanup

More details here: http://openopt.blogspot.com/2008/06/openopt-018.html Regards, Dmitrey. From peter.skomoroch at gmail.com Sun Jun 15 17:45:08 2008 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Sun, 15 Jun 2008 14:45:08 -0700 Subject: [SciPy-user] NNLS in scipy In-Reply-To: <1213556843.5734.10.camel@meesters.biologie.uni-mainz.de> References: <1213556843.5734.10.camel@meesters.biologie.uni-mainz.de> Message-ID: Christian, I was just going to do a blog post on this same topic next week. I have a few different versions of python NMF code I used a few years ago, including sparse, parallel, and memmap versions. Until I post my examples, you can find a basic implementation based on scipy here (from Toby Segaran): http://examples.oreilly.com/9780596529321/ These guys also did an implementation in python: http://www.ma.utexas.edu/users/zmccoy/plsinmf.html http://www.ma.utexas.edu/users/zmccoy/nmf.py Some more NMF related links here: http://del.icio.us/pskomoroch/nmf -Pete On Sun, Jun 15, 2008 at 12:07 PM, Christian Meesters wrote: > Hi, > > I'd like to perform some "non-negatively constrained least squares" > algorithm to fit my data, like this: > > Signal = SUM_i a_i C_i > > where C_i is some simulated signal and a_i the amplitude contributed by > that simulated signal. Or in terms of arrays, I will have one reference > array and several arrays of simulated signals. How can I find the > (non-negative) coefficients a_i for each simulated signal array? (All > negative contributions should be discarded.) > > Is there anything like that in scipy (which I couldn't find)? Or any > other code doing that? > > Else I could write it myself and contribute, but having some working > code would be nice, of course. > > TIA > Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com http://del.icio.us/pskomoroch
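Until a dedicated NNLS routine exists in scipy, one workable route with what is already there is a bound-constrained minimization of the squared residual. A minimal sketch, assuming a hypothetical signal array of shape (n,) and component matrix C of shape (m, n):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def nnls_fit(signal, C):
    # least-squares amplitudes a >= 0 such that signal ~ dot(a, C)
    def obj(a):
        r = signal - np.dot(a, C)
        return np.dot(r, r)
    def grad(a):
        return -2.0 * np.dot(C, signal - np.dot(a, C))
    a0 = np.ones(C.shape[0])
    bounds = [(0.0, None)] * C.shape[0]
    a, fmin, info = fmin_l_bfgs_b(obj, a0, fprime=grad, bounds=bounds)
    return a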
From graeme.okeefe at petnm.unimelb.edu.au Sun Jun 15 18:56:37 2008 From: graeme.okeefe at petnm.unimelb.edu.au (Graeme O'Keefe) Date: Mon, 16 Jun 2008 08:56:37 +1000 Subject: [SciPy-user] NNLS in scipy In-Reply-To: <1213556843.5734.10.camel@meesters.biologie.uni-mainz.de> References: <1213556843.5734.10.camel@meesters.biologie.uni-mainz.de> Message-ID: Here is an implementation I put together to solve for x non-negatively in the system: y = X.x -------------- next part -------------- A non-text attachment was scrubbed... Name: fnnls.py Type: text/x-python-script Size: 4667 bytes Desc: not available URL: -------------- next part -------------- regards, Graeme On 16/06/2008, at 5:07 AM, Christian Meesters wrote: > Hi, > > I'd like to perform some "non-negatively constrained least squares" > algorithm to fit my data, like this: > > Signal = SUM_i a_i C_i > > where C_i is some simulated signal and a_i the amplitude contributed > by > that simulated signal. Or in terms of arrays, I will have one > reference > array and several arrays of simulated signals. How can I find the > (non-negative) coefficients a_i for each simulated signal array? (All > negative contributions should be discarded.) > > Is there anything like that in scipy (which I couldn't find)? Or any > other code doing that? > > Else I could write it myself and contribute, but having some working > code would be nice, of course. > > TIA > Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From meesters at uni-mainz.de Mon Jun 16 02:45:54 2008 From: meesters at uni-mainz.de (Christian Meesters) Date: Mon, 16 Jun 2008 08:45:54 +0200 Subject: [SciPy-user] NNLS in scipy In-Reply-To: <1213556843.5734.10.camel@meesters.biologie.uni-mainz.de> References: <1213556843.5734.10.camel@meesters.biologie.uni-mainz.de> Message-ID: <1213598754.5734.13.camel@meesters.biologie.uni-mainz.de> Hi, Thanks Peter & Graeme! I'm curious what the examples will look like, Peter. At least you'll have a willing tester ;-). Christian From lists at benair.net Mon Jun 16 02:57:30 2008 From: lists at benair.net (Benedikt Koenig) Date: Mon, 16 Jun 2008 08:57:30 +0200 Subject: [SciPy-user] reading binary files written by a gfortran code... In-Reply-To: <485502EE.6030708@gmail.com> References: <485502EE.6030708@gmail.com> Message-ID: <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> Hi Fred, I am not sure whether I got your question right, but I once had the problem that a program compiled with gfortran could not read binaries written by the same version but compiled with ifort or g77. This was caused by the marker length Fortran uses in binary files. Ifort and g77 used a length of 4 bytes, whereas gfortran uses whatever is stored in "off_t" on the particular system. So adding the following compile option to gfortran did the job for me: FFLOAT = -frecord-marker=4 HTH, bene On Sunday, 15.06.2008, 13:54 +0200, fred wrote: > Hi, > > Ok, this is a well-known issue: binary files written by a fortran > program have a "special" format. > > So my question is: is it possible to write a binary file _using gfortran_ > (no problem with the intel fortran compiler) without this formatting? > > Is there a peculiar syntax to the open() function? > > I did not find any relevant information on the web. > > The obvious reason is that I have to read these files with > the scipy.io.numpyio fread method. > > TIA. > > Cheers,
> > Cheers, > From fredmfp at gmail.com Mon Jun 16 04:11:54 2008 From: fredmfp at gmail.com (fred) Date: Mon, 16 Jun 2008 10:11:54 +0200 Subject: [SciPy-user] reading binary files written by a gfortran code... In-Reply-To: <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> References: <485502EE.6030708@gmail.com> <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> Message-ID: <4856204A.4040503@gmail.com> Benedikt Koenig a ?crit : > Hi Fred, Hi Benedikt, > I am not sure whether I get you question right, but I once had the > problem, that a program complied with gfortran could not read binaries > written by the same version but compiled with ifort or g77. One can say it like this, yes ;-) I want my program compiled with gfortran to write binary files with the _right_ size, say, 1 float -> 4 bytes. No more, no less ;-) > This was caused by the marker length Fortran uses in binary files. Ifort > and g77 used a length of 4 bytes, whereas gfortran uses whatever is > stored in "off_t" on the parcticular system. So adding the following > compile option to gfortran did the job for me: > FFLOAT = -frecord-marker=4 I guess you mean to put this flag in a Makefile, right? However, how do you open and write your file in your fortran code? I use this: real :: x x = 0 open(unit=20, file='a.dat', form='unformatted') write(20) x close(20) With FFLOAT set as above, I still have a 20 bytes file size for one float. In fact, whatever I set in FFLOT (even negative values), I get the same result. Cheers, -- Fred From lists at benair.net Mon Jun 16 04:38:55 2008 From: lists at benair.net (Benedikt Koenig) Date: Mon, 16 Jun 2008 10:38:55 +0200 Subject: [SciPy-user] reading binary files written by a gfortran code... In-Reply-To: <4856204A.4040503@gmail.com> References: <485502EE.6030708@gmail.com> <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> <4856204A.4040503@gmail.com> Message-ID: <1213605538.21575.11.camel@iagpc71.iag.uni-stuttgart.de> Hi Fred, > > This was caused by the marker length Fortran uses in binary files. Ifort > > and g77 used a length of 4 bytes, whereas gfortran uses whatever is > > stored in "off_t" on the parcticular system. So adding the following > > compile option to gfortran did the job for me: > > FFLOAT = -frecord-marker=4 > I guess you mean to put this flag in a Makefile, right? yeah, the -frecord-marker option is in my case used in the Makefile. Actually I am using '-fdefault-real-8 -frecord-marker=4' together with the usual OPT settings and library stuff as compile options to gfortran. > However, how do you open and write your file in your fortran code? > I use this: > > real :: x > > x = 0 > > open(unit=20, file='a.dat', form='unformatted') > write(20) x > close(20) > > With FFLOAT set as above, I still have a 20 bytes file size for one float. > In fact, whatever I set in FFLOT (even negative values), I get the same > result. Unfortunately, I am not really familiar with fortran. The code I am using was not written by myself, I am just using the source code to compile on different platforms I am working on. So I am not the right one to ask about ways of Fortran. However, IIRC Fortran writes a marker before and after every record. If you write one real (4 byte) and you end up with 20 byte output this seems to me like you still have the markers of 8 byte length rather than the 4 bytes as should be the case using -frecord-marker=4. Maybe you want to check the hex code, that is acutally written? 
The record markers should be 4 bytes long and contain the length of the record (in your case, 1 real). BTW, which version of gfortran are you using? cheers, bene From yosefmel at post.tau.ac.il Mon Jun 16 06:53:15 2008 From: yosefmel at post.tau.ac.il (Yosef Meller) Date: Mon, 16 Jun 2008 13:53:15 +0300 Subject: [SciPy-user] Solver with n-dimentional steps In-Reply-To: <4db580fd0806150655v4a13b4b5yd0c49b482d4a8ec1@mail.gmail.com> References: <200806151118.59818.yosefmel@post.tau.ac.il> <200806151341.16358.yosefmel@post.tau.ac.il> <4db580fd0806150655v4a13b4b5yd0c49b482d4a8ec1@mail.gmail.com> Message-ID: <200806161353.15378.yosefmel@post.tau.ac.il> On Sunday 15 June 2008 16:55:47 Hoyt Koepke wrote: > I'm suggesting this partly to satisfy my own curiosity, so ignore this > if it is difficult. Since you are running with so many dimensions, > you might want to try minimizing the square of your function using > fmin_l_bfgs_b. lbfgs is written for higher dimensions, and I've heard > it can give incredible speed improvements over many other methods. > I'm curious if it would help in your case. Thanks for the tip. After trying half a day to figure out why a jacobian that should take 1 GiB (8-byte floats) clogs the memory of a 2 GiB machine, I decided to go for fmin_l_bfgs_b(). I can't tell you if it improves the speed, but the low memory requirements make it work for me, as opposed to fsolve(). From fredmfp at gmail.com Mon Jun 16 07:24:09 2008 From: fredmfp at gmail.com (fred) Date: Mon, 16 Jun 2008 13:24:09 +0200 Subject: [SciPy-user] reading binary files written by a gfortran code... In-Reply-To: <1213605538.21575.11.camel@iagpc71.iag.uni-stuttgart.de> References: <485502EE.6030708@gmail.com> <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> <4856204A.4040503@gmail.com> <1213605538.21575.11.camel@iagpc71.iag.uni-stuttgart.de> Message-ID: <48564D59.70701@gmail.com> Benedikt Koenig wrote: > Unfortunately, I am not really familiar with Fortran. The code I am > using was not written by myself; I am just using the source code to > compile on the different platforms I am working on. So I am not the right > one to ask about the ways of Fortran. Ok. Maybe a fortran guru? I can't believe there is no fortran guru here ;-) > However, IIRC Fortran writes a marker before and after every record. If > you write one real (4 bytes) and you end up with 20 bytes of output, this > looks to me like you still have 8-byte markers rather than the 4-byte ones > that should result from -frecord-marker=4. Maybe you want to check the > hex code that is actually written? Well, hexdump gives me

0000000 0004 0000 0000 0000 0000 0000 0004 0000
0000010 0000 0000
0000014

On a binary file created with the intel version (so a 4-byte file), I get

0000000 0000 0000
0000004

The trick I don't understand: if my file is 4 bytes long, there is no marker at all, as in the case of a binary file written by, say, a C code, no? Or am I completely stupid? > BTW, which version of gfortran are you using? 4.1.1 on a debian etch box. Cheers, -- Fred From hasslerjc at comcast.net Mon Jun 16 08:13:38 2008 From: hasslerjc at comcast.net (John Hassler) Date: Mon, 16 Jun 2008 08:13:38 -0400 Subject: [SciPy-user] reading binary files written by a gfortran code...
In-Reply-To: <48564D59.70701@gmail.com> References: <485502EE.6030708@gmail.com> <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> <4856204A.4040503@gmail.com> <1213605538.21575.11.camel@iagpc71.iag.uni-stuttgart.de> <48564D59.70701@gmail.com> Message-ID: <485658F2.1060808@comcast.net> An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Mon Jun 16 08:55:38 2008 From: fredmfp at gmail.com (fred) Date: Mon, 16 Jun 2008 14:55:38 +0200 Subject: [SciPy-user] reading binary files written by a gfortran code... In-Reply-To: <485658F2.1060808@comcast.net> References: <485502EE.6030708@gmail.com> <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> <4856204A.4040503@gmail.com> <1213605538.21575.11.camel@iagpc71.iag.uni-stuttgart.de> <48564D59.70701@gmail.com> <485658F2.1060808@comcast.net> Message-ID: <485662CA.20805@gmail.com> John Hassler a ?crit : > Reading FORTRAN unformatted binary files in C/C++ > Or....FORTRAN Weirdness, what were they thinking? > Written by Paul Bourke > April 2003 > http://local.wasp.uwa.edu.au/~pbourke/dataformats/fortran/ > > ... which is basically what you've already figured out. Yes, I have already read this. But this does not give me a solution to my issue. Cheers, -- Fred From w.henney at astrosmo.unam.mx Mon Jun 16 09:44:43 2008 From: w.henney at astrosmo.unam.mx (William Henney) Date: Mon, 16 Jun 2008 13:44:43 +0000 (UTC) Subject: [SciPy-user] reading binary files written by a gfortran code... References: <485502EE.6030708@gmail.com> <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> <4856204A.4040503@gmail.com> <1213605538.21575.11.camel@iagpc71.iag.uni-stuttgart.de> <48564D59.70701@gmail.com> Message-ID: fred gmail.com> writes: > The trick I don't understand is that if my file is 4 bytes length, > there is no marker, no, as in the case of a binary file written by > say a C code ? Or I'm completely stupid ? I think you had the right answer to start with: you must use access='stream' This is a new feature of Fortran 2003, but it is already supported by most compilers. > > BTW, which version of gfortran are you using? > 4.1.1 on debian etch box. And here is your trouble - this is a very old version of gfortran. I have 4.3.0 and the following code works fine: ----------------- streamio.f90 -------------------------- program streamio implicit none real :: a = 0.0 open(unit=20, file='foo.dat', form='unformatted', access='stream') write(20) a close(20) end program streamio ------------------------------------------------------ $ gfortran -o streamio streamio.f90 && ./streamio && hexdump foo.dat 0000000 00 00 00 00 0000004 You might also want to read this page: http://www.star.le.ac.uk/~cgp/streamIO.html Cheers Will From fredmfp at gmail.com Mon Jun 16 10:07:10 2008 From: fredmfp at gmail.com (fred) Date: Mon, 16 Jun 2008 16:07:10 +0200 Subject: [SciPy-user] extracting z, v from a 3D array Message-ID: <4856738E.8050408@gmail.com> Hi, In a 3D array, each cell [i, j, k] has a value v which is the value at the point x=i*dx, y=j*dy, z=k*dz. How can I extract, "efficiently" (fast and memory resource) the 2D array (v, z), ie z vs. v ? "fast and memory resource" means "in acceptable time": for instance, I compute an histogram using scipy.histogram for a 800x850x900 cells float array in 3 mn. TIA. Cheers, -- Fred From david.huard at gmail.com Mon Jun 16 10:17:32 2008 From: david.huard at gmail.com (David Huard) Date: Mon, 16 Jun 2008 10:17:32 -0400 Subject: [SciPy-user] reading binary files written by a gfortran code... 
In-Reply-To: <48555398.70100@gmail.com>
References: <485502EE.6030708@gmail.com> <48553EF6.1060001@gmail.com> <48555398.70100@gmail.com>
Message-ID: <91cf711d0806160717gd7cb9c8ga460f23621bfcefd@mail.gmail.com>

According to the Fortran 95 Handbook (1997), the ACCESS argument takes
either DIRECT or SEQUENTIAL as its value. Also, if the ACCESS specifier
is DIRECT, a RECL specifier must be present. RECL is an integer
specifying the length of each record in default characters if access is
direct, or the maximum length of a record if the access method is
sequential.

HTH,

David

2008/6/15 fred :
> Phil Austin a écrit :
> >
> > You want to write/read direct access files, which are just the bytes
> > without the compiler-specific recordlength information. Here is
> > an example of some tutorial code I wrote for myself exercising
> > memmap and a fortran direct-access read -- just replace read with
> > write in the fortran code to get output
> Thanks Phil.
>
> But I would rather have the "magic line" of Fortran code (if it exists)
> to write the binary file directly with the right format.
>
> For instance, as seen on the web,
>
> open(unit=20, file='foo.dat', form='unformatted', access='stream')
> does not work:
>
> Fortran runtime error: Bad ACCESS parameter in OPEN statement
>
> In fact, I would like to know all access parameters for the open
> statement. Where could I get them ?
>
> I did not find relevant info in the gfortran manpage, gfortran info, etc.
>
>
> Cheers,
>
> --
> Fred
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From fredmfp at gmail.com Mon Jun 16 10:17:29 2008
From: fredmfp at gmail.com (fred)
Date: Mon, 16 Jun 2008 16:17:29 +0200
Subject: [SciPy-user] reading binary files written by a gfortran code...
In-Reply-To: References: <485502EE.6030708@gmail.com> <1213599451.20630.7.camel@iagpc71.iag.uni-stuttgart.de> <4856204A.4040503@gmail.com> <1213605538.21575.11.camel@iagpc71.iag.uni-stuttgart.de> <48564D59.70701@gmail.com>
Message-ID: <485675F9.70303@gmail.com>

William Henney a écrit :
> And here is your trouble - this is a very old version of gfortran. I
> have 4.3.0 and the following code works fine:
Hurray !! :-)))

I just did not think my gfortran version was too old...

Tons of thanks, William !

> You might also want to read this page:
>
> http://www.star.le.ac.uk/~cgp/streamIO.html
Ok.

Cheers,

--
Fred

From ivo.maljevic at gmail.com Mon Jun 16 10:32:00 2008
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Mon, 16 Jun 2008 10:32:00 -0400
Subject: [SciPy-user] More on speed comparisons
Message-ID: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com>

I was planning to refrain from this, but I believe this might be an
interesting comparison for people who seriously think about switching
to SciPy/NumPy.

A few days ago I wrote about the speed comparison between the scalar
and vectorized versions of function calls. Based on several comments, I
concluded that the same story that applies to Matlab and Octave applies
here: vectorize thy code, and speed gain will come.

Before I show the vectorized vs. non-vectorized results, I just want to
go on the record and say that I am by no means saying that SciPy/NumPy
is not good. I still like what has been done here. There is a
particular scenario at my work where SciPy, combined with matplotlib,
is extremely useful. That scenario is the following.

In my wireless lab, I have basestations, mobile stations, and a whole
bunch of instruments and PCs, either connected via network or GPIB
cables (for instruments). I use Python here to automate test cases and
data collection, and the ability to do SSH and GPIB communication is
very useful. Once I collect data, I use SciPy for some simple
postprocessing and I generate PNG plots, and finally, I generate HTML
pages with results shown as tables and plots. So, it is all done in a
single language/script instead of having to break the processing into
several languages/scripts.

However, I wanted to see if SciPy would be good enough speedwise to
completely replace Matlab. And, at least for the type of processing I
do, it comes nowhere near it. I wrote a small toy program that does
some simple random variable manipulation in several languages. The
Python code consists of two versions: one uses a for loop and the basic
Python libraries, and the other uses numpy's vectorized form (there was
no difference between numpy and scipy).

Here are the relative results after running the code on two machines:

64-bit Ubuntu 8.04:
=================

Fortran   C     Octave   SciPy   Pure Python
=================================================
1         1.2   2.2      16      20


32-bit openSUSE 10.3:
==================

Fortran   C     Octave   SciPy   Pure Python
=================================================
1         1.2   2.4      15      19.4


The numbers are rounded a little bit, but they are in that range. I see
two problems here:

1. SciPy is very slow, even when compared to Octave 3.0
2. It is only slightly faster than Python with a for loop.

Below is the source code for the two Python versions. While this
processing is not from any real application, it is not very different
from the processing I normally do.

Now, it is very likely that for different types of processing people
will find SciPy fast enough (matrix inversions, eigenvalues, etc.), but
for the type of processing I need it is not fast enough.

Ivo

##########################################################################
# rand_test_1.py
from random import random
from math import sqrt, sin

N = 1000000

mean = 0
var = 0

for i in range(N):
    x = random()
    x = 3.14*sqrt(x)
    x = sin(x)
    mean += x
    var += x**2

mean = mean/N
var = var/N - mean**2
print 'Mean=%g, var=%g' % (mean, var)

# rand_test_2.py
from numpy import random, sin, sqrt

N = 1000000

x = random.rand(N)
x = 3.14*sqrt(x)
x = sin(x)

mean = sum(x)/N
var = sum(x**2)/N - mean**2

print 'Mean=%g, var=%g' % (mean, var)

From matthieu.brucher at gmail.com Mon Jun 16 10:43:33 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 16 Jun 2008 16:43:33 +0200
Subject: [SciPy-user] More on speed comparisons
In-Reply-To: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com>
References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com>
Message-ID:

Hi,

Try with complete Numpy support:

# rand_test_2.py
from numpy import random, sin, sqrt, mean, var

N = 1000000
x = random.rand(N)
x = 3.14*sqrt(x)
x = sin(x)

mean_ = mean(x)
var_ = var(x)

print 'Mean=%g, var=%g' % (mean_, var_)

You can use numpy.sum as well.

Matthieu

2008/6/16 Ivo Maljevic :
> I was planing to refrain from this, but I believe this might be an
> interesting comparison for people who seriously think about switching to
> SciPy/NumPy.
>
> Few days ago I wrote about speed comparison between the scalar and
> vectorized versions of function calls.
Based on several comments, I > concluded that the same story that applies to Matlab and Octave applies > here: vectorize thy code, and speed gain will come. > > Before I show the results of vectorized vs. non-vectorized results, just > want to go on the record and say that I am by no means sayhing that > SciPy/NumPy is not good. I still like what has been done here. There is a > particular scenario that I use at my work where SciPy, combined with > matplotlib, is extremely useful. That scenario is the following. > > In my wireless lab, I have basestations, mobile stations, whole bunch of > instruments and PCs either connected via network or GPIB cables (for > instruments).I use Python here to automate test cases and data collection > and the ability to do SSH and GPIB communication is very useful. Once I > collect data, I use SciPy for some simple postprocessing and I generate PNG > plots, and finally, I generate HTML pages with results shown as tables and > plots. So, it is all done in a single language/script instead of having to > break the processing into several languages/scripts. > > However, I wanted to see if SciPy would be good enough speedwise to > completely replace Matlab. An, at least for the type of processing I do, it > comes nowhere near it. I wrote a small toy program that does some simple > random variable manipulation in several languages. The python code consists > of two versions, one uses for loop and basic pathon libraries and the other > uses nympy's vectorized form (there was no difference between numpy and > scipy). > > Here are the relative results after running the code on two machines: > > 64-bit Ubunty 8.04: > ================= > > Fortran C Octave SciPy Pure Python > ================================================= > 1 1.2 2.2 16 20 > > > 32-Bit openSUSE 10.3: > ================== > > Fortran C Octave SciPy Pure Python > ================================================= > 1 1.2 2.4 15 19.4 > > > The numbers are rounded a little bit, but they are in that range. I see two > problems here: > > 1. SciPy is very slow, even when compared to Octave 3.0 > 2. It is only sligtly faster than Python with a for loop. > > Below is the source code for the two python versions. While this processing > is not from any real application, > it is not very different from the processing I normally do. > > Now, it is very likely that for different type of processing people will > find SciPy fast enough (matrix inversions, eigenvalues, etc), but for the > type of > processing I need it is not fast enough. 
> > Ivo > > ########################################################################## > # rand_test_1.py > from random import random > from math import sqrt, sin > > N = 1000000 > > mean = 0 > var = 0 > > for i in range(N): > x = random() > x = 3.14*sqrt(x) > x = sin(x) > mean += x > var += x**2 > > mean = mean/N > var = var/N - mean**2 > print 'Mean=%g, var=%g' % (mean, var) > > # rand_test_2.py > from numpy import random, sin, sqrt > > N = 1000000 > > x = random.rand(N) > x = 3.14*sqrt(x) > x = sin(x) > > mean = sum(x)/N > var = sum(x**2)/N - mean**2 > > print 'Mean=%g, var=%g' % (mean, var) > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From ivo.maljevic at gmail.com Mon Jun 16 10:45:10 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Mon, 16 Jun 2008 10:45:10 -0400 Subject: [SciPy-user] More on speed comparisons In-Reply-To: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> Message-ID: <826c64da0806160745mf57f37bqc5f1e87adddb590a@mail.gmail.com> I knew I should have refrained from sending my "findings". Especially because I was quick to jump to conclusions :( It turns out SciPy is as fast as octave when written in a vector form. The correct version of the array form of the script is (it was not using proper version of the sum function): from numpy import random, sin, sqrt, sum N = 1000000 x = random.rand(N) x = 3.14*sqrt(x) x = sin(x) mean = sum(x)/N var = sum(x**2)/N - mean**2 print 'Mean=%g, var=%g' % (mean, var) 2008/6/16 Ivo Maljevic : > I was planing to refrain from this, but I believe this might be an > interesting comparison for people who seriously think about switching to > SciPy/NumPy. > > Few days ago I wrote about speed comparison between the scalar and > vectorized versions of function calls. Based on several comments, I > concluded that the same story that applies to Matlab and Octave applies > here: vectorize thy code, and speed gain will come. > > Before I show the results of vectorized vs. non-vectorized results, just > want to go on the record and say that I am by no means sayhing that > SciPy/NumPy is not good. I still like what has been done here. There is a > particular scenario that I use at my work where SciPy, combined with > matplotlib, is extremely useful. That scenario is the following. > > In my wireless lab, I have basestations, mobile stations, whole bunch of > instruments and PCs either connected via network or GPIB cables (for > instruments).I use Python here to automate test cases and data collection > and the ability to do SSH and GPIB communication is very useful. Once I > collect data, I use SciPy for some simple postprocessing and I generate PNG > plots, and finally, I generate HTML pages with results shown as tables and > plots. So, it is all done in a single language/script instead of having to > break the processing into several languages/scripts. > > However, I wanted to see if SciPy would be good enough speedwise to > completely replace Matlab. An, at least for the type of processing I do, it > comes nowhere near it. I wrote a small toy program that does some simple > random variable manipulation in several languages. 
The python code consists > of two versions, one uses for loop and basic pathon libraries and the other > uses nympy's vectorized form (there was no difference between numpy and > scipy). > > Here are the relative results after running the code on two machines: > > 64-bit Ubunty 8.04: > ================= > > Fortran C Octave SciPy Pure Python > ================================================= > 1 1.2 2.2 16 20 > > > 32-Bit openSUSE 10.3: > ================== > > Fortran C Octave SciPy Pure Python > ================================================= > 1 1.2 2.4 15 19.4 > > > The numbers are rounded a little bit, but they are in that range. I see two > problems here: > > 1. SciPy is very slow, even when compared to Octave 3.0 > 2. It is only sligtly faster than Python with a for loop. > > Below is the source code for the two python versions. While this processing > is not from any real application, > it is not very different from the processing I normally do. > > Now, it is very likely that for different type of processing people will > find SciPy fast enough (matrix inversions, eigenvalues, etc), but for the > type of > processing I need it is not fast enough. > > Ivo > > ########################################################################## > # rand_test_1.py > from random import random > from math import sqrt, sin > > N = 1000000 > > mean = 0 > var = 0 > > for i in range(N): > x = random() > x = 3.14*sqrt(x) > x = sin(x) > mean += x > var += x**2 > > mean = mean/N > var = var/N - mean**2 > print 'Mean=%g, var=%g' % (mean, var) > > # rand_test_2.py > from numpy import random, sin, sqrt > > N = 1000000 > > x = random.rand(N) > x = 3.14*sqrt(x) > x = sin(x) > > mean = sum(x)/N > var = sum(x**2)/N - mean**2 > > print 'Mean=%g, var=%g' % (mean, var) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivo.maljevic at gmail.com Mon Jun 16 10:55:06 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Mon, 16 Jun 2008 10:55:06 -0400 Subject: [SciPy-user] More on speed comparisons In-Reply-To: References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> Message-ID: <826c64da0806160755u314913adg5ec7a2dcd801d037@mail.gmail.com> I know, the wrong sum call was the problem. The built in mean and var don't make any difference, as long as I include sum (which I didn't the first time). Thanks, Ivo 2008/6/16 Matthieu Brucher : > Hi, > > Try with complete Numpy support : > > # rand_test_2.py > from numpy import random, sin, sqrt, mean, var > > N = 1000000 > x = random.rand(N) > x = 3.14*sqrt(x) > x = sin(x) > > mean_ = mean(x) > var_ = var(x) > > print 'Mean=%g, var=%g' % (mean_, var_) > > You can use numpy.sum as well. > > Matthieu > > 2008/6/16 Ivo Maljevic : > > I was planing to refrain from this, but I believe this might be an > > interesting comparison for people who seriously think about switching to > > SciPy/NumPy. > > > > Few days ago I wrote about speed comparison between the scalar and > > vectorized versions of function calls. Based on several comments, I > > concluded that the same story that applies to Matlab and Octave applies > > here: vectorize thy code, and speed gain will come. > > > > Before I show the results of vectorized vs. non-vectorized results, just > > want to go on the record and say that I am by no means sayhing that > > SciPy/NumPy is not good. I still like what has been done here. There is a > > particular scenario that I use at my work where SciPy, combined with > > matplotlib, is extremely useful. 
That scenario is the following. > > > > In my wireless lab, I have basestations, mobile stations, whole bunch of > > instruments and PCs either connected via network or GPIB cables (for > > instruments).I use Python here to automate test cases and data collection > > and the ability to do SSH and GPIB communication is very useful. Once I > > collect data, I use SciPy for some simple postprocessing and I generate > PNG > > plots, and finally, I generate HTML pages with results shown as tables > and > > plots. So, it is all done in a single language/script instead of having > to > > break the processing into several languages/scripts. > > > > However, I wanted to see if SciPy would be good enough speedwise to > > completely replace Matlab. An, at least for the type of processing I do, > it > > comes nowhere near it. I wrote a small toy program that does some simple > > random variable manipulation in several languages. The python code > consists > > of two versions, one uses for loop and basic pathon libraries and the > other > > uses nympy's vectorized form (there was no difference between numpy and > > scipy). > > > > Here are the relative results after running the code on two machines: > > > > 64-bit Ubunty 8.04: > > ================= > > > > Fortran C Octave SciPy Pure Python > > ================================================= > > 1 1.2 2.2 16 20 > > > > > > 32-Bit openSUSE 10.3: > > ================== > > > > Fortran C Octave SciPy Pure Python > > ================================================= > > 1 1.2 2.4 15 19.4 > > > > > > The numbers are rounded a little bit, but they are in that range. I see > two > > problems here: > > > > 1. SciPy is very slow, even when compared to Octave 3.0 > > 2. It is only sligtly faster than Python with a for loop. > > > > Below is the source code for the two python versions. While this > processing > > is not from any real application, > > it is not very different from the processing I normally do. > > > > Now, it is very likely that for different type of processing people will > > find SciPy fast enough (matrix inversions, eigenvalues, etc), but for the > > type of > > processing I need it is not fast enough. > > > > Ivo > > > > > ########################################################################## > > # rand_test_1.py > > from random import random > > from math import sqrt, sin > > > > N = 1000000 > > > > mean = 0 > > var = 0 > > > > for i in range(N): > > x = random() > > x = 3.14*sqrt(x) > > x = sin(x) > > mean += x > > var += x**2 > > > > mean = mean/N > > var = var/N - mean**2 > > print 'Mean=%g, var=%g' % (mean, var) > > > > # rand_test_2.py > > from numpy import random, sin, sqrt > > > > N = 1000000 > > > > x = random.rand(N) > > x = 3.14*sqrt(x) > > x = sin(x) > > > > mean = sum(x)/N > > var = sum(x**2)/N - mean**2 > > > > print 'Mean=%g, var=%g' % (mean, var) > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > -- > French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gael.varoquaux at normalesup.org Mon Jun 16 10:59:16 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 16 Jun 2008 16:59:16 +0200 Subject: [SciPy-user] More on speed comparisons In-Reply-To: <826c64da0806160745mf57f37bqc5f1e87adddb590a@mail.gmail.com> References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> <826c64da0806160745mf57f37bqc5f1e87adddb590a@mail.gmail.com> Message-ID: <20080616145916.GC3938@phare.normalesup.org> On Mon, Jun 16, 2008 at 10:45:10AM -0400, Ivo Maljevic wrote: > I knew I should have refrained from sending my "findings". Especially > because I was quick to > jump to conclusions :( > It turns out SciPy is as fast as octave when written in a vector form. The > correct version of > the array form of the script is (it was not using proper version of the > sum function): Indeed. I am also curious to know how you measure timings. The proper way of mesuring timings (ie measuring CPU time, and not wall time) is using the timeit module. You can either use the timeit shell command, or the timeit magic, in ipython. Here are the results I get: ########################################################################## # rand_test_1.py from random import random from math import sqrt, sin def do_python(): N = 1000000 mean = 0 var = 0 for i in range(N): x = random() x = 3.14*sqrt(x) x = sin(x) mean += x var += x**2 mean = mean/N var = var/N - mean**2 print 'Mean=%g, var=%g' % (mean, var) # rand_test_2.py def do_numpy(): import numpy as np N = 1000000 x = np.random.rand(N) x = 3.14*np.sqrt(x) x = np.sin(x) mean = x.mean() var = np.sum(x**2)/N - mean**2 print 'Mean=%g, var=%g' % (mean, var) ########################################################################## And in ipython: In [1]: %run test.py In [2]: %timeit do_python() [... snip ] 10 loops, best of 3: 1.18 s per loop In [3]: %timeit do_numpy() 10 loops, best of 3: 147 ms per loop This is significantly different from your timings. The numbers do not mean the same thing, but I trust these one more, and would try to use CPU time only for comparisons between different approachs. My 2 cents. Ga?l From david at ar.media.kyoto-u.ac.jp Mon Jun 16 10:46:48 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 16 Jun 2008 23:46:48 +0900 Subject: [SciPy-user] More on speed comparisons In-Reply-To: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> Message-ID: <48567CD8.5070207@ar.media.kyoto-u.ac.jp> Ivo Maljevic wrote: > However, I wanted to see if SciPy would be good enough speedwise to > completely replace Matlab. An, at least for the type of processing I > do, it comes nowhere near it. Note that random number generator greatly vary across languages and implementations. > The numbers are rounded a little bit, but they are in that range. I > see two problems here: > > 1. SciPy is very slow, even when compared to Octave 3.0 > 2. It is only sligtly faster than Python with a for loop. That's really surprising. Also, I quickly checked: random is as fast under scipy as under matlab. The problem is not random (which takes less than 10 % of the running time; as always, use profiling :) ). And I found your problem: sum. You are not using numpy sum, but python sum, which is extremely slow (it has to convert the array to a sequence first I think, which means it may well be slower than looping :) ). 
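If you want to see the two sums in isolation, something like this should
show the gap (a rough sketch, untested here; the numbers will depend on
the machine):

--------------------------------------------------
import timeit

setup = "import numpy as np; x = np.random.rand(1000000)"
# builtin sum: a Python-level loop over a million array scalars
print timeit.Timer("sum(x)", setup).timeit(10)
# numpy's sum: a single C loop over the buffer
print timeit.Timer("np.sum(x)", setup).timeit(10)
--------------------------------------------------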
Here is my version: from numpy import random, sin, sqrt, mean, var def compute(): N = 1000000 x = random.rand(N) x = 3.14*sqrt(x) x = sin(x) m = mean(x) v = var(x) print 'Mean=', m, ', var=', v if __name__ == '__main__': compute() This is roughly ten times faster than your scipy version on my computer. Which means we are pretty close to C performances :) cheers, David From ivo.maljevic at gmail.com Mon Jun 16 11:13:48 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Mon, 16 Jun 2008 11:13:48 -0400 Subject: [SciPy-user] More on speed comparisons In-Reply-To: <20080616145916.GC3938@phare.normalesup.org> References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> <826c64da0806160745mf57f37bqc5f1e87adddb590a@mail.gmail.com> <20080616145916.GC3938@phare.normalesup.org> Message-ID: <826c64da0806160813q6d2377d3pdad6811f10a7d8f7@mail.gmail.com> Aside from good speed with SciPy (I feel embarrassed that I made that 'sum' mistake), I am impressed with the speed of reaction of you guys (David, Gael, Anne, Matthieu, Robert). These are the names I have picked up so far to be the most frequent. Geal, you are working the internals of the SciPy and therefore you might be interested in timing for individual loop execution time, but for me the most relevant time is how long does it take to execute simulation from the moment I press return to the moment it is all done. Typically, I use: time command (e.g., time python ./rand_test_2.py). While you may find this methodology incorrect, I do the same with fortran and C code (time ./rand_test_c or time ./rand_test_f). Some of my matlab simulation run for more than one hour and I use tic/toc there. Ivo 2008/6/16 Gael Varoquaux : > On Mon, Jun 16, 2008 at 10:45:10AM -0400, Ivo Maljevic wrote: > > I knew I should have refrained from sending my "findings". Especially > > because I was quick to > > jump to conclusions :( > > > It turns out SciPy is as fast as octave when written in a vector form. > The > > correct version of > > the array form of the script is (it was not using proper version of > the > > sum function): > > Indeed. I am also curious to know how you measure timings. The proper way > of mesuring timings (ie measuring CPU time, and not wall time) is using > the timeit module. You can either use the timeit shell command, or the > timeit magic, in ipython. Here are the results I get: > > ########################################################################## > # rand_test_1.py > from random import random > from math import sqrt, sin > > def do_python(): > N = 1000000 > > mean = 0 > var = 0 > > for i in range(N): > x = random() > x = 3.14*sqrt(x) > x = sin(x) > mean += x > var += x**2 > > mean = mean/N > var = var/N - mean**2 > print 'Mean=%g, var=%g' % (mean, var) > > # rand_test_2.py > > def do_numpy(): > import numpy as np > > N = 1000000 > > x = np.random.rand(N) > x = 3.14*np.sqrt(x) > x = np.sin(x) > > mean = x.mean() > var = np.sum(x**2)/N - mean**2 > > print 'Mean=%g, var=%g' % (mean, var) > ########################################################################## > > And in ipython: > > In [1]: %run test.py > > In [2]: %timeit do_python() > [... snip ] > 10 loops, best of 3: 1.18 s per loop > > In [3]: %timeit do_numpy() > 10 loops, best of 3: 147 ms per loop > > This is significantly different from your timings. The numbers do not > mean the same thing, but I trust these one more, and would try to use CPU > time only for comparisons between different approachs. > > My 2 cents. 
> > Ga?l > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivo.maljevic at gmail.com Mon Jun 16 11:22:36 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Mon, 16 Jun 2008 11:22:36 -0400 Subject: [SciPy-user] More on speed comparisons In-Reply-To: <20080616145916.GC3938@phare.normalesup.org> References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> <826c64da0806160745mf57f37bqc5f1e87adddb590a@mail.gmail.com> <20080616145916.GC3938@phare.normalesup.org> Message-ID: <826c64da0806160822p284fef40rbe5712270b9cbbd4@mail.gmail.com> Ga?l, If you look at the numbers you have obtained, you will notice that your ratio is 1180/147 = 8 After I fixed the sum error, SciPy code became as fast as Octave, therefore: 16/2.2 = 7.3 => 8 vs. 7.3 not significant difference Ivo ########################################################################## > > And in ipython: > > In [1]: %run test.py > > In [2]: %timeit do_python() > [... snip ] > 10 loops, best of 3: 1.18 s per loop > > In [3]: %timeit do_numpy() > 10 loops, best of 3: 147 ms per loop > > This is significantly different from your timings. The numbers do not > mean the same thing, but I trust these one more, and would try to use CPU > time only for comparisons between different approachs. > > My 2 cents. > > Ga?l > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Mon Jun 16 11:12:43 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 17 Jun 2008 00:12:43 +0900 Subject: [SciPy-user] More on speed comparisons In-Reply-To: <826c64da0806160813q6d2377d3pdad6811f10a7d8f7@mail.gmail.com> References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> <826c64da0806160745mf57f37bqc5f1e87adddb590a@mail.gmail.com> <20080616145916.GC3938@phare.normalesup.org> <826c64da0806160813q6d2377d3pdad6811f10a7d8f7@mail.gmail.com> Message-ID: <485682EB.4060504@ar.media.kyoto-u.ac.jp> Ivo Maljevic wrote: > Aside from good speed with SciPy (I feel embarrassed that I made that > 'sum' mistake) No need to be embarrassed, I think most of us went through the same. > > > time command (e.g., time python ./rand_test_2.py). While you may find > this methodology incorrect, > I do the same with fortran and C code (time ./rand_test_c or time > ./rand_test_f). It may be correct depending on what you want to measure: here you are measuring start up times (which is negligeable for C and fortran on a decent OS if you are not running cold, that is the C /F runtime is already in memory; it is really hard not to have the C runtime already loaded on unix :) ). Numpy startup time is significant for such short computations (a few seconds). If for some reason you need to call the script often, it may be the good way to measure things. If you just want to benchmark different methods, that's certainly the wrong approach, and Gael's one is the right one. What you are doing is similar to matlab -r, except that python is much more powerful for scripting here. > > Some of my matlab simulation run for more than one hour and I use > tic/toc there. 
On matlab, I think you should use cputime, but I cannot find this recommendation in matlab's online help anymore. cheers, David From ivo.maljevic at gmail.com Mon Jun 16 11:34:25 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Mon, 16 Jun 2008 11:34:25 -0400 Subject: [SciPy-user] More on speed comparisons In-Reply-To: <485682EB.4060504@ar.media.kyoto-u.ac.jp> References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> <826c64da0806160745mf57f37bqc5f1e87adddb590a@mail.gmail.com> <20080616145916.GC3938@phare.normalesup.org> <826c64da0806160813q6d2377d3pdad6811f10a7d8f7@mail.gmail.com> <485682EB.4060504@ar.media.kyoto-u.ac.jp> Message-ID: <826c64da0806160834o105a4cdfkcb437f1090db5bcf@mail.gmail.com> > > > Some of my matlab simulation run for more than one hour and I use > > tic/toc there. > > On matlab, I think you should use cputime, but I cannot find this > recommendation in matlab's online help anymore. > > cheers, > I must admit I was never consistent with matlab's execution time measurements. I used all of them: clock/etime, cputime, tic/toc. Don't know if there is a preferred method. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Mon Jun 16 11:41:26 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 16 Jun 2008 10:41:26 -0500 Subject: [SciPy-user] More on speed comparisons In-Reply-To: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> Message-ID: <485689A6.1040205@gmail.com> Ivo Maljevic wrote: > I was planing to refrain from this, but I believe this might be an > interesting comparison for people who seriously think about switching > to SciPy/NumPy. > > Few days ago I wrote about speed comparison between the scalar and > vectorized versions of function calls. Based on several comments, I > concluded that the same story that applies to Matlab and Octave > applies here: vectorize thy code, and speed gain will come. > > Before I show the results of vectorized vs. non-vectorized results, > just want to go on the record and say that I am by no means sayhing > that SciPy/NumPy is not good. I still like what has been done here. > There is a particular scenario that I use at my work where SciPy, > combined with matplotlib, is extremely useful. That scenario is the > following. > > In my wireless lab, I have basestations, mobile stations, whole bunch > of instruments and PCs either connected via network or GPIB cables > (for instruments).I use Python here to automate test cases and data > collection and the ability to do SSH and GPIB communication is very > useful. Once I collect data, I use SciPy for some simple > postprocessing and I generate PNG plots, and finally, I generate HTML > pages with results shown as tables and plots. So, it is all done in a > single language/script instead of having to break the processing into > several languages/scripts. > > However, I wanted to see if SciPy would be good enough speedwise to > completely replace Matlab. An, at least for the type of processing I > do, it comes nowhere near it. I wrote a small toy program that does > some simple random variable manipulation in several languages. The > python code consists of two versions, one uses for loop and basic > pathon libraries and the other uses nympy's vectorized form (there was > no difference between numpy and scipy). 
> > Here are the relative results after running the code on two machines: > > 64-bit Ubunty 8.04: > ================= > > Fortran C Octave SciPy Pure Python > ================================================= > 1 1.2 2.2 16 20 > > > 32-Bit openSUSE 10.3: > ================== > > Fortran C Octave SciPy Pure Python > ================================================= > 1 1.2 2.4 15 19.4 > > > The numbers are rounded a little bit, but they are in that range. I > see two problems here: > > 1. SciPy is very slow, even when compared to Octave 3.0 > 2. It is only sligtly faster than Python with a for loop. > > Below is the source code for the two python versions. While this > processing is not from any real application, > it is not very different from the processing I normally do. > > Now, it is very likely that for different type of processing people > will find SciPy fast enough (matrix inversions, eigenvalues, etc), but > for the type of > processing I need it is not fast enough. > > Ivo > > ########################################################################## > # rand_test_1.py > from random import random > from math import sqrt, sin > > N = 1000000 > > mean = 0 > var = 0 > > for i in range(N): > x = random() > x = 3.14*sqrt(x) > x = sin(x) > mean += x > var += x**2 > > mean = mean/N > var = var/N - mean**2 > print 'Mean=%g, var=%g' % (mean, var) > > # rand_test_2.py > from numpy import random, sin, sqrt > > N = 1000000 > > x = random.rand(N) > x = 3.14*sqrt(x) > x = sin(x) > > mean = sum(x)/N > var = sum(x**2)/N - mean**2 > > print 'Mean=%g, var=%g' % (mean, var) > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi, Actually this was addressed in Hans Petter Langtangen's book 'Python scripting for Computational Science' and probably elsewhere before that. While the book is somewhat dated (2004) it does contain considerable useful information on numerical python. For example the section 4.2.3 shows this problem of using scalar arguments in Numeric and numarray compared using Python's math module. Bruce From anand.prabhakar.patil at gmail.com Mon Jun 16 11:58:04 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Mon, 16 Jun 2008 16:58:04 +0100 Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head Message-ID: Hi all, I'm getting a bus error using the numpy head on an Intel Mac Pro running Leopard with gcc 4.2. It happens intermittently 1-2 hours into a long computation, so I haven't been able to boil the situation down... I'm hoping the gdb output and crash report below are enough to go on. I built Python from source using gcc 4.2, so I had to get rid of the 'no-cpp-precomp' and 'Wno-long-double' options. The really long call stack in the crash report below makes me wonder whether the problem is in numpy itself, or other code is misusing numpy somehow. I'm using several f2py'ed Fortran modules from PyMC and also PyTables here. Any suggestions are appreciated, including ideas on how to chase down the problem... I'll post if I get anywhere. Thanks, Anand ----------------GDB output: (gdb) attach 28942 Attaching to process 28942. 0x9400f5e2 in select$DARWIN_EXTSN () (gdb) continue Continuing. 
Reading symbols for shared libraries + done Reading symbols for shared libraries + done Reading symbols for shared libraries + done Reading symbols for shared libraries + done Reading symbols for shared libraries + done Reading symbols for shared libraries + done Reading symbols for shared libraries + done Reading symbols for shared libraries + done Reading symbols for shared libraries + done Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0xb0082ff0 [Switching to process 28942 thread 0x2603] 0x0122c8a6 in array_dealloc (self=0x3fe5e80) at arrayobject.c:2079 2079 Py_DECREF(self->base); (gdb) Continuing. Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0xb0082ff0 0x0122c8a6 in array_dealloc (self=0x3fe5e80) at arrayobject.c:2079 2079 Py_DECREF(self->base); ----------------Crash report: Process: Python [24936] Path: /Library/Frameworks/Python.framework/Versions/2.5/ Resources/Python.app/Contents/MacOS/Python Identifier: Python Version: ??? (???) Code Type: X86 (Native) Parent Process: bash [18452] Date/Time: 2008-06-16 13:36:45.992 +0100 OS Version: Mac OS X 10.5.3 (9D34) Report Version: 6 Exception Type: EXC_BAD_ACCESS (SIGBUS) Exception Codes: KERN_PROTECTION_FAILURE at 0x00000000b0082ff0 Crashed Thread: 2 Thread 0: 0 libSystem.B.dylib 0x9400f5e2 select$DARWIN_EXTSN + 10 1 org.python.python 0x00197cae PyOS_Readline + 254 2 org.python.python 0x002223d5 builtin_raw_input + 597 3 org.python.python 0x0022c6a8 PyEval_EvalFrameEx + 22856 4 org.python.python 0x0022d445 PyEval_EvalFrameEx + 26341 5 org.python.python 0x0022dba5 PyEval_EvalCodeEx + 1845 6 org.python.python 0x0022c211 PyEval_EvalFrameEx + 21681 7 org.python.python 0x0022dba5 PyEval_EvalCodeEx + 1845 8 org.python.python 0x0022c5eb PyEval_EvalFrameEx + 22667 9 org.python.python 0x0022dba5 PyEval_EvalCodeEx + 1845 10 org.python.python 0x0022c211 PyEval_EvalFrameEx + 21681 11 org.python.python 0x0022dba5 PyEval_EvalCodeEx + 1845 12 org.python.python 0x0022c211 PyEval_EvalFrameEx + 21681 13 org.python.python 0x0022d445 PyEval_EvalFrameEx + 26341 14 org.python.python 0x0022dba5 PyEval_EvalCodeEx + 1845 15 org.python.python 0x0022c211 PyEval_EvalFrameEx + 21681 16 org.python.python 0x0022dba5 PyEval_EvalCodeEx + 1845 17 org.python.python 0x0022c211 PyEval_EvalFrameEx + 21681 18 org.python.python 0x0022dba5 PyEval_EvalCodeEx + 1845 19 org.python.python 0x0022c211 PyEval_EvalFrameEx + 21681 20 org.python.python 0x0022dba5 PyEval_EvalCodeEx + 1845 21 org.python.python 0x0022dcd7 PyEval_EvalCode + 87 22 org.python.python 0x002518c7 PyRun_FileExFlags + 263 23 org.python.python 0x00251bd0 PyRun_SimpleFileExFlags + 496 24 org.python.python 0x002601ba Py_Main + 3194 25 org.python.python 0x00001fb6 0x1000 + 4022 Thread 1: 0 libSystem.B.dylib 0x9400f5e2 select$DARWIN_EXTSN + 10 1 libSystem.B.dylib 0x93ff06f5 _pthread_start + 321 2 libSystem.B.dylib 0x93ff05b2 thread_start + 34 Thread 2 Crashed: 0 multiarray.so 0x0122c8a6 array_dealloc + 150 (arrayobject.c:2079) 1 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 2 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 3 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 4 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 5 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 6 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 7 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 8 
multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079)
[... frames 9 through 313 are all the identical recursive frame, multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079); the report breaks off there ...]
array_dealloc + 156 (arrayobject.c:2079) 314 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 315 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 316 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 317 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 318 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 319 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 320 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 321 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 322 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 323 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 324 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 325 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 326 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 327 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 328 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 329 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 330 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 331 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 332 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 333 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 334 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 335 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 336 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 337 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 338 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 339 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 340 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 341 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 342 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 343 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 344 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 345 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 346 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 347 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 348 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 349 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 350 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 351 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 352 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 353 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 354 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 355 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 356 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 357 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 358 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 359 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 360 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 361 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 362 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 363 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 364 
multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 365 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 366 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 367 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 368 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 369 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 370 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 371 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 372 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 373 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 374 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 375 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 376 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 377 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 378 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 379 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 380 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 381 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 382 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 383 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 384 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 385 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 386 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 387 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 388 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 389 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 390 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 391 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 392 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 393 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 394 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 395 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 396 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 397 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 398 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 399 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 400 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 401 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 402 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 403 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 404 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 405 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 406 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 407 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 408 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 409 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 410 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 411 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 412 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 413 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 414 multiarray.so 0x0122c8ac array_dealloc + 156 
(arrayobject.c:2079) 415 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 416 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 417 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 418 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 419 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 420 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 421 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 422 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 423 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 424 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 425 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 426 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 427 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 428 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 429 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 430 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 431 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 432 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 433 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 434 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 435 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 436 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 437 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 438 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 439 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 440 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 441 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 442 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 443 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 444 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 445 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 446 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 447 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 448 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 449 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 450 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 451 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 452 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 453 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 454 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 455 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 456 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 457 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 458 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 459 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 460 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 461 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 462 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 463 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 464 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 465 multiarray.so 0x0122c8ac 
array_dealloc + 156 (arrayobject.c:2079) 466 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 467 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 468 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 469 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 470 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 471 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 472 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 473 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 474 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 475 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 476 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 477 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 478 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 479 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 480 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 481 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 482 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 483 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 484 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 485 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 486 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 487 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 488 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 489 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 490 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 491 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 492 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 493 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 494 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 495 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 496 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 497 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 498 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 499 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 500 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 501 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 502 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 503 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 504 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 505 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 506 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 507 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 508 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 509 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 510 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) 511 multiarray.so 0x0122c8ac array_dealloc + 156 (arrayobject.c:2079) Thread 2 crashed with X86 Thread State (32-bit): eax: 0x041cf330 ebx: 0x001c7df1 ecx: 0x00000000 edx: 0x01252380 edi: 0x0486c210 esi: 0x043d2ea0 ebp: 0xb0083008 esp: 0xb0082ff0 ss: 0x0000001f efl: 0x00010246 eip: 0x0122c8a6 cs: 0x00000017 ds: 0x0000001f es: 0x0000001f fs: 0x0000001f gs: 
0x00000037 cr2: 0xb0082ff0 Binary Images: 0x1000 - 0x1ff8 +org.python.python 2.5a0 (2.5alpha0) <11eb706d026ecbfb1f78c260c082c516> /Library/Frameworks/ Python.framework/Versions/2.5/Resources/Python.app/Contents/MacOS/Python 0x48000 - 0x49ff7 +cStringIO.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/cStringIO.so 0xa5000 - 0xa5ff7 +resource.so ??? (???) <56321f762531e2ef9edcc11a639aecea> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/resource.so 0xa9000 - 0xacffb +_struct.so ??? (???) <9c3c7e367cf5f6d04ebdae0bcf52bd33> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_struct.so 0xb2000 - 0xb2fff +gestalt.so ??? (???) <852918ac90f2d71a9e8b790274430cf5> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/gestalt.so 0xb6000 - 0xb6ff7 +_weakref.so ??? (???) <3413d47de76b9c955b5c5a3e41939a91> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_weakref.so 0xba000 - 0xc1fed +libgcc_s.1.dylib ??? (???) /usr/local/lib/ libgcc_s.1.dylib 0xe4000 - 0xe7ff1 +strop.so ??? (???) <4761fef70b2f0e950a0309edd2f32d72> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/strop.so 0xed000 - 0xefff3 +operator.so ??? (???) <58d5206170fe1e6b0889d2216f8caeab> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/operator.so 0xf5000 - 0xf6fff +time.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/time.so 0xfd000 - 0xfdff2 +_bisect.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_bisect.so 0x192000 - 0x292fef +org.python.python 2.5a0 (2.5) <0d30889f65289c605f069dc75cf9e498> /Library/Frameworks/ Python.framework/Versions/2.5/Python 0x3a4000 - 0x3a7ff7 +itertools.so ??? (???) <2750f2667689ff0fea4d50ab019d59fe> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/itertools.so 0x3ae000 - 0x3afffa +_heapq.so ??? (???) <35b276d5575f2fea7c63f50f4ce56ed0> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_heapq.so 0x3b4000 - 0x3b5fff +math.so ??? (???) <0e51601fe1bc8d2e8c2768c0ab34768b> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/math.so 0x3fa000 - 0x3fbfff +_random.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_random.so 0x500000 - 0x502fff +binascii.so ??? (???) <58c7b821ffde5d10d82c178fd039316c> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/binascii.so 0x507000 - 0x508fff +fcntl.so ??? (???) <90ffe24832cf1b3bfbf24d47bb78e1fc> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/fcntl.so 0x50c000 - 0x50effa +collections.so ??? (???) <7c1097a3c3cceaf582791f175c1efcfb> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/collections.so 0x513000 - 0x51effd +_curses.so ??? (???) <7a23a7622ae20237a3be87d931a11cb0> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_curses.so 0x527000 - 0x558fe7 +libncurses.5.dylib ??? (???) /Library/ Frameworks/Python.framework/Versions/2.5/lib/libncurses.5.dylib 0x574000 - 0x575fff +termios.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/termios.so 0x57a000 - 0x57bff1 +_hashlib.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_hashlib.so 0x580000 - 0x582ff2 +_sha256.so ??? (???) 
<76bf92d04dd4d40b2ef259a967188e5d> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_sha256.so 0x586000 - 0x590ff9 +_sha512.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_sha512.so 0x5a6000 - 0x5a9ff7 +Container_values.so ??? (???) /Users/anand/renearch/pymc/pymc/ Container_values.so 0x5ae000 - 0x5b2ff8 +LazyFunction.so ??? (???) /Users/anand/renearch/pymc/pymc/ LazyFunction.so 0x5b7000 - 0x5bfff7 +_sqlite3.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_sqlite3.so 0x5c8000 - 0x5dcfff +utilsExtension.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/tables/ utilsExtension.so 0x5e9000 - 0x5e9ff6 +_comp_lzo.so ??? (???) <725f6f7ec76003a55ffb8ec4f54d50ec> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/tables/ _comp_lzo.so 0x5ed000 - 0x5edff1 +_comp_bzip2.so ??? (???) <141be1279ee2a60d047401a6dd9362ab> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/tables/ _comp_bzip2.so 0x5f4000 - 0x61bffc +readline.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ readline-2.5.1-py2.5-macosx-10.5-i386.egg/readline.so 0x66e000 - 0x67dfff +_ctypes.so ??? (???) <1020669adf8ac5a768dd23ee08e9ca6f> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_ctypes.so 0x690000 - 0x691ff1 +_locale.so ??? (???) <3f5534c89080d70ceddbde1c824347d5> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_locale.so 0x69c000 - 0x69dfff +_lsprof.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_lsprof.so 0x6a2000 - 0x6a3fff +_ssl.so ??? (???) <1b0ed0d4c6530fcad161516ca8e9e9f9> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_ssl.so 0x729000 - 0x730fd3 +libintl.3.dylib ??? (???) /usr/local/lib/ libintl.3.dylib 0x748000 - 0x756ff7 +cPickle.so ??? (???) <008dcb580c3f6022fa51f5e571058fa1> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/cPickle.so 0x75e000 - 0x764ff7 +_socket.so ??? (???) <1aab699650fe8b337274a41a7f71cf3e> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_socket.so 0x1100000 - 0x1113ff6 +_sort.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ core/_sort.so 0x111a000 - 0x111efff +_dotblas.so ??? (???) <46eba2855880ea38b649be728e6b1041> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ core/_dotblas.so 0x11e8000 - 0x1251ff7 +multiarray.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ core/multiarray.so 0x1284000 - 0x12a8fff +umath.so ??? (???) <5ce0337c2db27a055c4d86c6c07778a3> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ core/umath.so 0x1343000 - 0x134ffff +parser.so ??? (???) /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/parser.so 0x1355000 - 0x1357ff3 +mmap.so ??? (???) <6c16036bb75555e57d47f2a94cddab18> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/mmap.so 0x135c000 - 0x1376ff7 +scalarmath.so ??? (???) <1ef6ab8b6f0286fd3cbb461202220020> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ core/scalarmath.so 0x13c9000 - 0x13cbfff +_compiled_base.so ??? (???) 
<40c2d83b32c143918fca6303fb206710> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ lib/_compiled_base.so 0x13cf000 - 0x13d3ff5 +bz2.so ??? (???) <161d3f6681ced0347e88db273869de45> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/bz2.so 0x141a000 - 0x141cffe +zlib.so ??? (???) <9825af0113467adf80182223e14e2a0f> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/zlib.so 0x1427000 - 0x142cff1 +lapack_lite.so ??? (???) <02d2a15d1b6c16e8f8fff8ae8fc3900a> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ linalg/lapack_lite.so 0x1431000 - 0x1439ff7 +fftpack_lite.so ??? (???) <02245b3339ac24c7b5ef46dab78eec9f> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ fft/fftpack_lite.so 0x143d000 - 0x146afff +mtrand.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/numpy/ random/mtrand.so 0x147d000 - 0x1488fff +datetime.so ??? (???) <2d3e277728c9a6c36585840a6da8483b> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/datetime.so 0x14ef000 - 0x14f0ff0 +nxutils.so ??? (???) <412c09eb1e056be48fb73854e2537329> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ matplotlib/nxutils.so 0x14f4000 - 0x14f6ffa +_csv.so ??? (???) <732bc826fbeac2c94ceb1d8b5dd40a06> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_csv.so 0x1640000 - 0x166eff7 +_path.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ matplotlib/_path.so 0x1692000 - 0x16c8ff3 +ft2font.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ matplotlib/ft2font.so 0x17b6000 - 0x17b9fff +_cntr.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ matplotlib/_cntr.so 0x17bd000 - 0x17d5ff7 +_png.so ??? (???) <84815bdd92d088c0aa41a45ed3281ab4> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ matplotlib/_png.so 0x1836000 - 0x1895fd7 +libfreetype.6.dylib ??? (???) /usr/local/lib/ libfreetype.6.dylib 0x1b2d000 - 0x1b60ff7 +_image.so ??? (???) <749429e16158897756345754aa6ea806> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ matplotlib/_image.so 0x1bc3000 - 0x1be2ffb +libpng12.0.dylib ??? (???) <944211fe5ff1afb93c3101b840c56d18> /usr/local/lib/libpng12.0.dylib 0x1c35000 - 0x1c3bff3 +_tkinter.so ??? (???) <0fbfb437897f096637bab8e73e600c0f> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/_tkinter.so 0x1c94000 - 0x1c95ff7 +MacOS.so ??? (???) <5884ea28d1b1f2f5f88579da4679c1b1> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/MacOS.so 0x1ca8000 - 0x1cadfff +minpack2.so ??? (???) <8ec48bcce4d9119b595e24fec46a214c> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ optimize/minpack2.so 0x1cb3000 - 0x1cb4fff +_zeros.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ optimize/_zeros.so 0x1cfe000 - 0x1d7cfef com.tcltk.tcllibrary 8.4.7 b (8.4.7 b) /System/Library/Frameworks/ Tcl.framework/Versions/8.4/Tcl 0x1d9f000 - 0x1db2ff3 +_tkagg.so ??? (???) 
<5dd15d7cebcb67fbb2995e587bc673a1> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ matplotlib/backends/_tkagg.so 0x1f89000 - 0x1fecfe7 +unicodedata.so ??? (???) <636e2c9bc33d10b95d2c4dcc4dd46e14> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/unicodedata.so 0x1fff000 - 0x2088ffb +_backend_agg.so ??? (???) <49d5b507e3bfb76fab8bce9c999e866f> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/ matplotlib/backends/_backend_agg.so 0x210f000 - 0x23acffb +libhdf5.5.dylib ??? (???) <56a4e109e3068f137d6fb9888e9a50b5> /usr/local/hdf5/lib/libhdf5.5.dylib 0x2422000 - 0x243efff +hdf5Extension.so ??? (???) <7324f0d980097db706ae8a45917483df> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/tables/ hdf5Extension.so 0x2450000 - 0x246cfff +tableExtension.so ??? (???) <165670450d16ad850f10e1dd80975b9d> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/tables/ tableExtension.so 0x24bc000 - 0x24ddff7 +interpreter.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/tables/ numexpr/interpreter.so 0x24e2000 - 0x24f9fff +_minpack.so ??? (???) <5507b3760fc1e2a240b27f587488f058> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ optimize/_minpack.so 0x2600000 - 0x2634fe7 +flib.so ??? (???) /Users/anand/renearch/pymc/pymc/ flib.so 0x2e4e000 - 0x2ed9fff +libgfortran.2.dylib ??? (???) /usr/local/lib/ libgfortran.2.dylib 0x30ce000 - 0x30e6fff +_lbfgsb.so ??? (???) <3dafdd8ea4e006706522bff5490a7e1c> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ optimize/_lbfgsb.so 0x30ec000 - 0x30f1fff +moduleTNC.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ optimize/moduleTNC.so 0x30f5000 - 0x310cfff +_cobyla.so ??? (???) <6c5be5a3125efff0b7f323e2715a1395> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ optimize/_cobyla.so 0x3112000 - 0x3123ff3 +_slsqp.so ??? (???) <1a6f70b918125e7b6be6364e2c3942fb> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ optimize/_slsqp.so 0x3129000 - 0x3130ffc +linalg_utils.so ??? (???) <6f4cd5472d2a89dfc7b113a5e0ab7df1> /Users/anand/renearch/pymc/pymc/gp/ linalg_utils.so 0x3143000 - 0x314bff0 +incomplete_chol.so ??? (???) <6bd7b381bd9c1e077e7419f3ce4f4b75> /Users/anand/renearch/pymc/pymc/gp/ incomplete_chol.so 0x31aa000 - 0x31b8fff +isotropic_cov_funs.so ??? (???) /Users/anand/renearch/pymc/pymc/gp/ cov_funs/isotropic_cov_funs.so 0x31c3000 - 0x31ccfff +distances.so ??? (???) /Users/anand/renearch/pymc/pymc/gp/ cov_funs/distances.so 0x31e1000 - 0x32e8fff +_cephes.so ??? (???) <366d7cb3d7f641382d6d79019412f756> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ special/_cephes.so 0x3300000 - 0x33b7ff3 +specfun.so ??? (???) <363b09a95f4a86b444cf807aaf30ce36> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ special/specfun.so 0x33cb000 - 0x33cdff3 +clapack.so ??? (???) <1382d21c5011ae18278cd6f8d3283029> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ linalg/clapack.so 0x3413000 - 0x341cfff +_flinalg.so ??? (???) 
<468f18986a2ed6f1c5e4f45fb52acd46> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ linalg/_flinalg.so 0x3424000 - 0x345cfff +flapack.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ linalg/flapack.so 0x3481000 - 0x3488fff +calc_lwork.so ??? (???) <7ff90d5038a1d7771564385e638f0526> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ linalg/calc_lwork.so 0x3490000 - 0x34b3fff +fblas.so ??? (???) <5ac17c542f994b528a15c9af434bd5f3> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ linalg/fblas.so 0x34c9000 - 0x34cbff7 +cblas.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ linalg/cblas.so 0x34d0000 - 0x3590ff7 +_csr.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/sparsetools/_csr.so 0x35ab000 - 0x3621fff +_csc.so ??? (???) <15cb9a9ac8f6482003220df9f75d4c60> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/sparsetools/_csc.so 0x362e000 - 0x3651ff3 +_coo.so ??? (???) <40c7657d221d8ea576a431beff4bb8d8> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/sparsetools/_coo.so 0x3658000 - 0x3667ff3 +_dia.so ??? (???) <04419611d1acf2d6049a3eb96c5cab0f> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/sparsetools/_dia.so 0x366e000 - 0x373bfff +_bsr.so ??? (???) <8be3e55fdca1a8ac9b8a0736d1cfbfbb> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/sparsetools/_bsr.so 0x3798000 - 0x37c1fff +_iterative.so ??? (???) <7f8e6eb898ca17c0cf33a83e5fb2255e> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/linalg/isolve/_iterative.so 0x37d3000 - 0x3801fff +_zsuperlu.so ??? (???) <572ed07a437d8b30f44491568f2af1de> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/linalg/dsolve/_zsuperlu.so 0x3810000 - 0x383efff +_ssuperlu.so ??? (???) <3f22e09d10027738c2594e29ed7faf49> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/linalg/dsolve/_ssuperlu.so 0x384d000 - 0x387bfff +_dsuperlu.so ??? (???) <92416bc4ddd6eeed6c955d40ff953669> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/linalg/dsolve/_dsuperlu.so 0x388a000 - 0x38b8fff +_csuperlu.so ??? (???) <6d7eb884ab3ff7beed867d9e087908e8> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/linalg/dsolve/_csuperlu.so 0x38c7000 - 0x394ffff +_arpack.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ sparse/linalg/eigen/arpack/_arpack.so 0x3961000 - 0x3968fff +statlib.so ??? (???) <6fd2e4a41357a7c9a6d145683f997d2f> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ stats/statlib.so 0x396e000 - 0x3972ff3 +futil.so ??? (???) /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ stats/futil.so 0x3978000 - 0x3979fff +select.so ??? (???) <8c792315a80303fb2d3279c516889c4e> /Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/lib-dynload/select.so 0x3a1a000 - 0x3a27fe7 +mvn.so ??? (???) 
<67f2b43156b77f173da8a22360e04f24> /Library/Frameworks/ Python.framework/Versions/Current/lib/python2.5/site-packages/scipy/ stats/mvn.so 0xb000000 - 0xb0adfeb com.tcltk.tklibrary 8.4.7 b (8.4.7 b) /System/Library/Frameworks/ Tk.framework/Versions/8.4/Tk 0x8fe00000 - 0x8fe2da53 dyld 96.2 (???) <5013f43c4d2c33c9619011f103ec3238> /usr/lib/dyld 0x90128000 - 0x90136ffd libz.1.dylib ??? (???) <545ca09467025f77131cfac09d8b9375> /usr/lib/libz.1.dylib 0x90137000 - 0x9013efe9 libgcc_s.1.dylib ??? (???) <28a7cbc3a5ca2982d124668306f422d9> /usr/lib/libgcc_s.1.dylib 0x9013f000 - 0x901effff edu.mit.Kerberos 6.0.12 (6.0.12) <1dc515ebe407292db8e603938c72d4e8> /System/Library/Frameworks/ Kerberos.framework/Versions/A/Kerberos 0x901f0000 - 0x9046bfe7 com.apple.Foundation 6.5.5 (677.19) /System/Library/Frameworks/ Foundation.framework/Versions/C/Foundation 0x9046c000 - 0x90746ff3 com.apple.CoreServices.CarbonCore 786.4 (786.4) <059c4803a7a95e3c1a95a332baeb1edf> /System/Library/Frameworks/ CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/ Versions/A/CarbonCore 0x90747000 - 0x90767ff2 libGL.dylib ??? (???) /System/Library/ Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib 0x90819000 - 0x908a4fff com.apple.framework.IOKit 1.5.1 (???) <60cfc4b175c4ef60bb8e9036716a29f4> /System/Library/Frameworks/ IOKit.framework/Versions/A/IOKit 0x908ad000 - 0x908adffa com.apple.CoreServices 32 (32) <2760719f7a81e8c2bdfd15b0939abc29> /System/Library/Frameworks/ CoreServices.framework/Versions/A/CoreServices 0x90a2e000 - 0x90ab0ffb com.apple.CFNetwork 330 (330) <6c5eda16e640b09334809ba4c1df985d> /System/Library/Frameworks/ CoreServices.framework/Versions/A/Frameworks/CFNetwork.framework/ Versions/A/CFNetwork 0x90b34000 - 0x90ecafff com.apple.QuartzCore 1.5.3 (1.5.3) <1b65c05f89e81a499302fd63295b242d> /System/Library/Frameworks/ QuartzCore.framework/Versions/A/QuartzCore 0x90ecc000 - 0x90eeafff libresolv.9.dylib ??? (???) <32ccbe19e89a3fdd09a0c88151ea508c> /usr/lib/libresolv.9.dylib 0x90eeb000 - 0x90efbffc com.apple.LangAnalysis 1.6.4 (1.6.4) /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ LangAnalysis.framework/Versions/A/LangAnalysis 0x90efc000 - 0x90f20feb libssl.0.9.7.dylib ??? (???) <0ee18f8589ed06aabdc1df5b37a801cd> /usr/lib/libssl.0.9.7.dylib 0x90f21000 - 0x91067ff7 com.apple.ImageIO.framework 2.0.2 (2.0.2) <77dfee73f4c0d230425a5151ee0bce05> /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/ Versions/A/ImageIO 0x91086000 - 0x91559ffe libGLProgrammability.dylib ??? (???) <475db64244e011cd8811e076035b2632> /System/Library/Frameworks/ OpenGL.framework/Versions/A/Libraries/libGLProgrammability.dylib 0x9155a000 - 0x91582fff libcups.2.dylib ??? (???) /usr/lib/libcups.2.dylib 0x915bd000 - 0x915d3fe7 com.apple.CoreVideo 1.5.1 (1.5.1) /System/Library/Frameworks/ CoreVideo.framework/Versions/A/CoreVideo 0x915d4000 - 0x915d4ffd com.apple.Accelerate 1.4.2 (Accelerate 1.4.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/ Accelerate 0x91648000 - 0x9164aff5 libRadiance.dylib ??? (???) /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/ Versions/A/Resources/libRadiance.dylib 0x9164b000 - 0x9164ffff libGIF.dylib ??? (???) <75b4fd9684d792add088205f987fb02e> /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/ Versions/A/Resources/libGIF.dylib 0x91650000 - 0x91731ff7 libxml2.2.dylib ??? (???) 
<1baef3d4972ee789d8fa6c1fa44da45c> /usr/lib/libxml2.2.dylib 0x91810000 - 0x91bcefea libLAPACK.dylib ??? (???) /System/Library/ Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/ Versions/A/libLAPACK.dylib 0x91c99000 - 0x91cc4fe7 libauto.dylib ??? (???) <2072d673706bbe463ed2426af57a28d7> /usr/lib/libauto.dylib 0x91d68000 - 0x91d6bfff com.apple.help 1.1 (36) <175489f8adf287b3ebd259362b0292c0> /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/Help.framework/Versions/A/Help 0x91d6c000 - 0x91d78fe7 com.apple.opengl 1.5.6 (1.5.6) <125de77ea2434a91364e79a0905a7771> /System/Library/Frameworks/ OpenGL.framework/Versions/A/OpenGL 0x91d79000 - 0x91e03fe3 com.apple.DesktopServices 1.4.6 (1.4.6) <94d1a28b351b7dff77becadab0967772> /System/Library/PrivateFrameworks/ DesktopServicesPriv.framework/Versions/A/DesktopServicesPriv 0x91e04000 - 0x91e04fff com.apple.Carbon 136 (136) /System/Library/Frameworks/ Carbon.framework/Versions/A/Carbon 0x91e05000 - 0x91e5eff7 libGLU.dylib ??? (???) /System/Library/ Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib 0x91e5f000 - 0x91e9dff7 libGLImage.dylib ??? (???) <093b1b698ca93a0380f5fa262459ea28> /System/Library/Frameworks/ OpenGL.framework/Versions/A/Libraries/libGLImage.dylib 0x91e9e000 - 0x91f18ff8 com.apple.print.framework.PrintCore 5.5.3 (245.3) <222dade7b33b99708b8c09d1303f93fc> /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ PrintCore.framework/Versions/A/PrintCore 0x91f19000 - 0x91f46feb libvDSP.dylib ??? (???) <2ee4eb005babc90eaa352b33eb09226e> /System/Library/Frameworks/ Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/ libvDSP.dylib 0x925b0000 - 0x925b0ff8 com.apple.ApplicationServices 34 (34) /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/ApplicationServices 0x925c4000 - 0x925c4ffd com.apple.vecLib 3.4.2 (vecLib 3.4.2) /System/ Library/Frameworks/vecLib.framework/Versions/A/vecLib 0x925c5000 - 0x92658fff com.apple.ink.framework 101.3 (86) /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/Ink.framework/Versions/A/Ink 0x92659000 - 0x92698fef libTIFF.dylib ??? (???) 
<4b7d3b3b9a9c8335c2538371cb39b60b> /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/ Versions/A/Resources/libTIFF.dylib 0x926e9000 - 0x926effff com.apple.print.framework.Print 218.0.2 (220.1) <2979f3be4e7e8adc875bf21658e9be94> /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/A/Print 0x92732000 - 0x9276cfff com.apple.coreui 1.1 (61) /System/Library/ PrivateFrameworks/CoreUI.framework/Versions/A/CoreUI 0x9276d000 - 0x92783fff com.apple.DictionaryServices 1.0.0 (1.0.0) <7e9ff586b5c9d02b09e2a5527d98524f> /System/Library/Frameworks/ CoreServices.framework/Versions/A/Frameworks/ DictionaryServices.framework/Versions/A/DictionaryServices 0x92784000 - 0x92803ff5 com.apple.SearchKit 1.2.0 (1.2.0) <5abfde5537969168b8a8743ccb9ec735> /System/Library/Frameworks/ CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/ Versions/A/SearchKit 0x92804000 - 0x92890ff7 com.apple.LaunchServices 289.2 (289.2) <3577886e3a6d56ee3949850c4fde76c9> /System/Library/Frameworks/ CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/ Versions/A/LaunchServices 0x92891000 - 0x928d7fef com.apple.Metadata 10.5.2 (398.18) /System/Library/Frameworks/ CoreServices.framework/Versions/A/Frameworks/Metadata.framework/ Versions/A/Metadata 0x92b50000 - 0x92b54fff libmathCommon.A.dylib ??? (???) /usr/lib/ system/libmathCommon.A.dylib 0x92b7c000 - 0x92c0fff3 com.apple.ApplicationServices.ATS 3.3 (???) <064eb6d96417afa38a80b1735c4113aa> /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/ Versions/A/ATS 0x92c10000 - 0x92f17ff7 com.apple.HIToolbox 1.5.3 (???) /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/ HIToolbox 0x92f18000 - 0x92f1fffe libbsm.dylib ??? (???) <5582985a86ea36504cca31788bccf963> /usr/lib/libbsm.dylib 0x92f20000 - 0x92f7aff7 com.apple.CoreText 2.0.2 (???) <9fde11f84a72e890bbf2aa8b0b13b79a> /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/CoreText.framework/ Versions/A/CoreText 0x92f7b000 - 0x92f7bffb com.apple.installserver.framework 1.0 (8) / System/Library/PrivateFrameworks/InstallServer.framework/Versions/A/ InstallServer 0x92f7c000 - 0x92f7efff com.apple.securityhi 3.0 (30817) <020419ad33b8638b174e1a472728a894> /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/SecurityHI.framework/Versions/A/ SecurityHI 0x9313b000 - 0x93156ff3 libPng.dylib ??? (???) /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/ Versions/A/Resources/libPng.dylib 0x9315d000 - 0x93224ff2 com.apple.vImage 3.0 (3.0) /System/Library/ Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/ Versions/A/vImage 0x93225000 - 0x932acff7 libsqlite3.0.dylib ??? (???) <11311084bc4be9d4555dfac74fe7218a> /usr/lib/libsqlite3.0.dylib 0x932ad000 - 0x932dcff7 libncurses.5.4.dylib ??? (???) <00632d5180ac31e2cd437a1ce9d08562> /usr/lib/libncurses.5.4.dylib 0x933ac000 - 0x933c0ff3 com.apple.ImageCapture 4.0 (5.0.0) /System/ Library/Frameworks/Carbon.framework/Versions/A/Frameworks/ ImageCapture.framework/Versions/A/ImageCapture 0x933c1000 - 0x937d1fef libBLAS.dylib ??? (???) 
/System/Library/ Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/ Versions/A/libBLAS.dylib 0x937d2000 - 0x9399ffe7 com.apple.security 5.0.3 (33532) <3bef414f3c6f433e707ac5abee340e16> /System/Library/Frameworks/ Security.framework/Versions/A/Security 0x939a0000 - 0x93a1dfef libvMisc.dylib ??? (???) /System/Library/ Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/ Versions/A/libvMisc.dylib 0x93c65000 - 0x93d1ffe3 com.apple.CoreServices.OSServices 226.3 (226.3) <456bdd65b936baf1ef497b74b4f960a8> /System/Library/Frameworks/ CoreServices.framework/Versions/A/Frameworks/OSServices.framework/ Versions/A/OSServices 0x93d20000 - 0x93d21ffc libffi.dylib ??? (???) <596e0dbf626b211741cecaa9698f271b> /usr/lib/libffi.dylib 0x93e7f000 - 0x93edcffb libstdc++.6.dylib ??? (???) <6106b1f2b0b303b06ae476253dbb5f3f> /usr/lib/libstdc++.6.dylib 0x93fbe000 - 0x9411eff3 libSystem.B.dylib ??? (???) /usr/lib/libSystem.B.dylib 0x9411f000 - 0x9413effa libJPEG.dylib ??? (???) /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/ Versions/A/Resources/libJPEG.dylib 0x95295000 - 0x952c4fe3 com.apple.AE 402.2 (402.2) /System/Library/Frameworks/ CoreServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE 0x952c5000 - 0x95315ff7 com.apple.HIServices 1.7.0 (???) /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ HIServices.framework/Versions/A/HIServices 0x95316000 - 0x953bdfeb com.apple.QD 3.11.52 (???) /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/QD.framework/ Versions/A/QD 0x95425000 - 0x9545cfff com.apple.SystemConfiguration 1.9.2 (1.9.2) <8b26ebf26a009a098484f1ed01ec499c> /System/Library/Frameworks/ SystemConfiguration.framework/Versions/A/SystemConfiguration 0x9545d000 - 0x954b9ff7 com.apple.htmlrendering 68 (1.1.3) /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/HTMLRendering.framework/ Versions/A/HTMLRendering 0x954c0000 - 0x955f8ff7 libicucore.A.dylib ??? (???) <5031226ea28b371d8dfdbb32acfb48b5> /usr/lib/libicucore.A.dylib 0x95605000 - 0x95611fff libbz2.1.0.dylib ??? (???) /usr/lib/libbz2.1.0.dylib 0x95738000 - 0x9586afff com.apple.CoreFoundation 6.5.2 (476.13) /System/Library/Frameworks/ CoreFoundation.framework/Versions/A/CoreFoundation 0x9586b000 - 0x95874fff com.apple.speech.recognition.framework 3.7.24 (3.7.24) <6a6518b392d3d41ace3dcea69d6809d9> /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/SpeechRecognition.framework/ Versions/A/SpeechRecognition 0x958b2000 - 0x958bcfeb com.apple.audio.SoundManager 3.9.2 (3.9.2) /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/CarbonSound.framework/Versions/ A/CarbonSound 0x95a97000 - 0x95b49ffb libcrypto.0.9.7.dylib ??? (???) <8f92cbdc8777bea2ec49b06ee79fabc0> /usr/lib/libcrypto.0.9.7.dylib 0x95b7c000 - 0x95bf8feb com.apple.audio.CoreAudio 3.1.0 (3.1) /System/Library/Frameworks/ CoreAudio.framework/Versions/A/CoreAudio 0x95d81000 - 0x95e60fff libobjc.A.dylib ??? (???) 
<99a9ad33ca07114848fdd7580968a572> /usr/lib/libobjc.A.dylib 0x95e61000 - 0x95e66fff com.apple.CommonPanels 1.2.4 (85) <3b64ef0de184d09c6f99a1a7e77e42be> /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/CommonPanels.framework/Versions/ A/CommonPanels 0x95e67000 - 0x95f32fff com.apple.ColorSync 4.5.0 (4.5.0) /System/ Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ ColorSync.framework/Versions/A/ColorSync 0x95f39000 - 0x95f7bfef com.apple.NavigationServices 3.5.2 (163) <91844980804067b07a0b6124310d3f31> /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/NavigationServices.framework/ Versions/A/NavigationServices 0x95f7c000 - 0x95f84fff com.apple.DiskArbitration 2.2.1 (2.2.1) <42908e7ecc17a83cec4afef2850ec79e> /System/Library/Frameworks/ DiskArbitration.framework/Versions/A/DiskArbitration 0x95f85000 - 0x95f9dfff com.apple.openscripting 1.2.6 (???) <4e0b05f9f47c6f7e2b01b321b2eb1413> /System/Library/Frameworks/ Carbon.framework/Versions/A/Frameworks/OpenScripting.framework/ Versions/A/OpenScripting 0x95f9e000 - 0x9663afff com.apple.CoreGraphics 1.351.31 (???) /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ CoreGraphics.framework/Versions/A/CoreGraphics 0x9663b000 - 0x9672fff4 libiconv.2.dylib ??? (???) <3f183527811098bb7332f67a1f902bfd> /usr/lib/libiconv.2.dylib 0x96741000 - 0x96751fff com.apple.speech.synthesis.framework 3.7.1 (3.7.1) <06d8fc0307314f8ffc16f206ad3dbf44> /System/Library/Frameworks/ ApplicationServices.framework/Versions/A/Frameworks/ SpeechSynthesis.framework/Versions/A/SpeechSynthesis 0x96f55000 - 0x96f79fff libxslt.1.dylib ??? (???) <59399cc446ed903fd9479526ee9f116b> /usr/lib/libxslt.1.dylib 0x972b9000 - 0x972b9ffd com.apple.Accelerate.vecLib 3.4.2 (vecLib 3.4.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/ Frameworks/vecLib.framework/Versions/A/vecLib 0xfffe8000 - 0xfffebfff libobjc.A.dylib ??? (???) /usr/lib/ libobjc.A.dylib 0xffff0000 - 0xffff1780 libSystem.B.dylib ??? (???) /usr/lib/ libSystem.B.dylib From ndbecker2 at gmail.com Mon Jun 16 12:03:04 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 16 Jun 2008 12:03:04 -0400 Subject: [SciPy-user] [optimization] OpenOpt release v 0.18 References: <4855754F.8090207@scipy.org> Message-ID: Easy_installable? sudo easy_install -U openopt Searching for openopt Reading http://pypi.python.org/simple/openopt/ Couldn't find index page for 'openopt' (maybe misspelled?) 
Scanning index of all packages (this may take a while)
Reading http://pypi.python.org/simple/
Reading http://pypi.python.org/simple/OpenOpt/
Reading http://scipy.org/scipy/scikits/wiki/OpenOpt
No local packages or download links found for openopt
error: Could not find suitable distribution for Requirement.parse('openopt')

sudo easy_install -U OpenOpt
Searching for OpenOpt
Reading http://pypi.python.org/simple/OpenOpt/
Reading http://scipy.org/scipy/scikits/wiki/OpenOpt
No local packages or download links found for OpenOpt
error: Could not find suitable distribution for Requirement.parse('OpenOpt')

From robert.kern at gmail.com Mon Jun 16 12:49:12 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 16 Jun 2008 11:49:12 -0500
Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head
In-Reply-To: 
References: 
Message-ID: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com>

On Mon, Jun 16, 2008 at 10:58, Anand Patil wrote:
> Hi all,
>
> I'm getting a bus error using the numpy head on an Intel Mac Pro
> running Leopard with gcc 4.2. It happens intermittently 1-2 hours into
> a long computation, so I haven't been able to boil the situation
> down... I'm hoping the gdb output and crash report below are enough to
> go on.
>
> I built Python from source using gcc 4.2, so I had to get rid of the
> 'no-cpp-precomp' and 'Wno-long-double' options.
>
> The really long call stack in the crash report below makes me wonder
> whether the problem is in numpy itself, or other code is misusing
> numpy somehow. I'm using several f2py'ed Fortran modules from PyMC and
> also PyTables here.

It could be either. The long chain of array_dealloc() calls means that you have a view of a view of a view ... of an array. When the tail end of the view chain gets decrefed to 0 and deallocated, it will decref its reference to the array it is a view of (myarray.base, if you want to check this out at the Python level). If that was the only reference, it gets deallocated and decrefs its .base array.

It is possible, though unlikely, that a cycle got formed somehow.

Does your program crash using the www.python.org binary built with gcc 4.0.1?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
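[A minimal sketch of the view chain Robert describes, in plain numpy; how far .base chains has varied across numpy versions, so the comments are era-specific:]

import numpy as np

a = np.arange(10)    # owns its data
b = a[2:8]           # a view of a
c = b[1:4]           # a view of b

assert b.base is a   # a view keeps a reference to its parent via .base
# In numpy of this era, c.base is b, so the views form a chain
# c -> b -> a: dropping the last reference to c decrefs b, which may
# decref a, and so on -- one array_dealloc() frame per link, which is
# what produces the very deep call stack in the crash report.
# (Newer numpy collapses the chain, so c.base is the owning array a.)
del c
del b                # each del can tear down one more link of the chain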
From gael.varoquaux at normalesup.org Mon Jun 16 13:00:51 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 16 Jun 2008 19:00:51 +0200
Subject: [SciPy-user] More on speed comparisons
In-Reply-To: <20080616145916.GC3938@phare.normalesup.org>
References: <826c64da0806160732i1ec11089wc3400d754276e8b2@mail.gmail.com> <826c64da0806160745mf57f37bqc5f1e87adddb590a@mail.gmail.com> <20080616145916.GC3938@phare.normalesup.org>
Message-ID: <20080616170051.GD3938@phare.normalesup.org>

On Mon, Jun 16, 2008 at 04:59:16PM +0200, Gael Varoquaux wrote:
> Indeed. I am also curious to know how you measure timings. The proper way
> of measuring timings (i.e. measuring CPU time, and not wall time) is using
> the timeit module.

That was bullshit: timeit measures wall time, it seems. For the numbers, I was just being dense, and had not noticed that you were normalizing everything to fortran = 1.

Sorry,

Gaël

From anand.prabhakar.patil at gmail.com Mon Jun 16 13:06:03 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Mon, 16 Jun 2008 18:06:03 +0100
Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head
In-Reply-To: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com>
References: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com>
Message-ID: <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com>

Robert,

Thanks for the reply.

On 16 Jun 2008, at 17:49, Robert Kern wrote:
> On Mon, Jun 16, 2008 at 10:58, Anand Patil
>
> It could be either. The long chain of array_dealloc() calls means that
> you have a view of a view of a view ... of an array. When the tail
> end of the view chain gets decrefed to 0 and deallocated, it will
> decref its reference to the array it is a view of (myarray.base, if
> you want to check this out at the Python level). If that was the only
> reference, it gets deallocated and decrefs its .base array.
>
> It is possible, though unlikely, that a cycle got formed somehow.
>
> Does your program crash using the www.python.org binary built with
> gcc 4.0.1?

Actually, I wasn't able to get my Python environment set up with the python.org binary. I need OpenMP, which means I need gcc 4.2 ... but the python.org binary has no-cpp-precomp and Wno-long-double baked in. In addition it sets MACOSX_DEPLOYMENT_TARGET=10.3 and it uses the 10.4 SDK's library directory, both of which confuse some packages' setup on Leopard. I'd be happy to try the binary if I could get past those problems...

Anand

From nwagner at iam.uni-stuttgart.de Mon Jun 16 14:14:19 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 16 Jun 2008 20:14:19 +0200
Subject: [SciPy-user] scikits.timeseries
Message-ID: 

Hi all,

I cannot install timeseries from svn. Here is the output:

linux:/home/nwagner/svn/timeseries # /usr/bin/python setup.py install
Traceback (most recent call last):
  File "setup.py", line 6, in ?
    from scikits.timeseries.version import version
  File "/home/nwagner/svn/timeseries/scikits/timeseries/__init__.py", line 14, in ?
    import const
  File "/home/nwagner/svn/timeseries/scikits/timeseries/const.py", line 80, in ?
    from cseries import freq_constants
ImportError: No module named cseries

Any idea?

Nils

From pgmdevlist at gmail.com Mon Jun 16 14:26:25 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 16 Jun 2008 14:26:25 -0400
Subject: [SciPy-user] scikits.timeseries
In-Reply-To: 
References: 
Message-ID: <200806161426.26277.pgmdevlist@gmail.com>

On Monday 16 June 2008 14:14:19 Nils Wagner wrote:
> Hi all,
>
> I cannot install timeseries from svn.
> Here is the output

Mmh, strange. Seems to work OK on my machine. Have you tried a clean install (viz., removing any build directory beforehand, and any previous installation of the package)? Note that it could very well be a problem recently introduced, so contact me offlist if this doesn't work; we need to sort that out. Thanks.
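[A clean rebuild along the lines Pierre suggests might look like this; the site-packages path below is an assumption, so check where your Python actually installs the scikit:]

$ cd ~/svn/timeseries
$ rm -rf build                # discard stale build products
$ sudo rm -rf /usr/lib/python2.5/site-packages/scikits/timeseries   # old install; path is a guess
$ sudo /usr/bin/python setup.py install   # rebuilds and installs the cseries extension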
From robert.kern at gmail.com Mon Jun 16 15:10:59 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Jun 2008 14:10:59 -0500 Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head In-Reply-To: <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com> References: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com> <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com> Message-ID: <3d375d730806161210x7d5b2c46w83a817c1d58aae74@mail.gmail.com> On Mon, Jun 16, 2008 at 12:06, Anand Patil wrote: > Robert, > > Thanks for the reply. > > On 16 Jun 2008, at 17:49, Robert Kern wrote: > >> On Mon, Jun 16, 2008 at 10:58, Anand Patil >> >> It could be either. The long chain of array_dealloc() calls means that >> you have a view of a view of a view ... of and array. When the tail >> end of the view chain gets decrefed to 0 and deallocated, it will >> decref its reference to the array it is a view of (myarray.base, if >> you want to check this out at the Python level). If that was the only >> reference, it gets deallocated and decrefs its .base array. >> >> It is possible, though unlikely, that a cycle got formed somehow. >> >> Does your program crash using the www.python.org binary built with >> gcc 4.0.1? > > Actually, I wasn't able to get my Python environment set up with the > python.org binary . I need OpenMP, which means I need gcc 4.2 ... but > the python.org binary has no-cpp-precomp and Wno-long-double baked in. Good enough reason. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zachary.pincus at yale.edu Mon Jun 16 15:13:12 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 16 Jun 2008 15:13:12 -0400 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <32CCE7B7-F876-41E8-AD5B-0347BFC756DF@yale.edu> References: <10B72505-ACEB-4EAE-A1B9-39A5038224CD@yale.edu> <9457e7c80804210704n348d2129l7228e5fce3ab164d@mail.gmail.com> <53E48B1E-91A0-422A-8309-CA58D4BCE486@yale.edu> <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> <3d375d730804211349p4e75ab5cs6603fc13869f519@mail.gmail.com> <53B94D38-4F29-4FB6-9AED-A9647FE3A905@yale.edu> <32CCE7B7-F876-41E8-AD5B-0347BFC756DF@yale.edu> Message-ID: <8CE7BF5F-36A9-4F25-B555-EB3353DEC0AA@yale.edu> Hi all, I just heard back from Fredrik. He's supportive of the idea, and made some helpful suggestions for how to proceed. It might be simpler than I had thought, actually... Here's his email: > hi zachary, > >> Our general idea is to use the python standard library to handle most >> compressed data, and use numpy's internal unpacking features to >> decode the >> uncompressed and assembled data streams. Anyhow, it struck me that >> the >> *ImagePlugin.py files from the PIL would be a pretty useful (either >> unmodified, or more likely, slightly-modified) as the front end of >> this IO >> system, much as they are used in the PIL today. > > that definitely makes sense. to avoid fragmentation, I'd prefer if > you use unmodified versions (and submit any bug fixes etc upstream). 
> the ImagePlugin modules have very few dependencies, on purpose; you
> should be able to create a light-weight "pil emulator" simply by
> plugging in Image, ImageFile, and ImagePalette objects in sys.modules,
> and then use the modules right away. e.g.
>
> class ImageEmulator:
>     ... stuff that implements necessary portions of the Image interface ...
> class ImageFileEmulator:
>     ... etc
> class ImagePaletteEmulator:
>     ...
> sys.modules["Image"] = ImageEmulator()
> sys.modules["ImageFile"] = ImageFileEmulator()
> sys.modules["ImagePalette"] = ImagePaletteEmulator()
>
> import PngImagePlugin
>
> see the "open" and "save" code in Image.py to get some ideas on how to
> use the plugins.
>
> (and feel free to mail me if you want further integration ideas)
>
> cheers /F

Best, Zach

From robert.kern at gmail.com Mon Jun 16 15:19:35 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Jun 2008 14:19:35 -0500 Subject: [SciPy-user] The IO library and image file formats -- compare with with PIL In-Reply-To: <8CE7BF5F-36A9-4F25-B555-EB3353DEC0AA@yale.edu> References: <9457e7c80804211013i27a32020l4d704bf75d318c5f@mail.gmail.com> <3d375d730804211020n22ba0d72k8ae86cf8724a4c5b@mail.gmail.com> <0D36E713-05E6-43FA-9634-AD8BEA2B8743@yale.edu> <3d375d730804211349p4e75ab5cs6603fc13869f519@mail.gmail.com> <53B94D38-4F29-4FB6-9AED-A9647FE3A905@yale.edu> <32CCE7B7-F876-41E8-AD5B-0347BFC756DF@yale.edu> <8CE7BF5F-36A9-4F25-B555-EB3353DEC0AA@yale.edu> Message-ID: <3d375d730806161219lfd13002p33a8b9af14e8c225@mail.gmail.com>

On Mon, Jun 16, 2008 at 14:13, Zachary Pincus wrote: > Hi all, > > I just heard back from Fredrik. He's supportive of the idea, and made > some helpful suggestions for how to proceed. It might be simpler than > I had thought, actually...

Excellent. All of my concerns are addressed. Thank you.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From jdh2358 at gmail.com Mon Jun 16 15:32:03 2008 From: jdh2358 at gmail.com (John Hunter) Date: Mon, 16 Jun 2008 14:32:03 -0500 Subject: [SciPy-user] scipy.stats.lognorm.rvs signature Message-ID: <88e473830806161232x10c4ebhf771d7f3d9d87518@mail.gmail.com>

Is there a bug in the rvs method of scipy.stats.lognorm?

In [241]: scipy.__version__
Out[241]: '0.7.0.dev4388'

In [242]: dist = scipy.stats.lognorm()

In [243]: dist.rvs(10)
------------------------------------------------------------
Traceback (most recent call last):
  File "", line 1, in ?
File "/home/titan/johnh/dev/lib/python2.4/site-packages/scipy/stats/distributions.py", line 117, in rvs return self.dist.rvs(*self.args,**kwds) File "/home/titan/johnh/dev/lib/python2.4/site-packages/scipy/stats/distributions.py", line 446, in rvs vals = reshape(self._rvs(*args),size) TypeError: _rvs() takes exactly 2 arguments (1 given) I was trying to plot empirical histograms against the pdf with the following script when I bunped into this: import matplotlib.pyplot as plt import scipy.stats for name in 'uniform', 'norm', 'expon', 'lognorm': print 'making', name dist = getattr(scipy.stats, name)() samples = dist.rvs(1000) fig = plt.figure() ax = fig.add_subplot(111) n, bins, patches = ax.hist(samples, 30, normed=True, facecolor='blue', alpha=0.5) ax.set_title(name) ax.grid(True) binc = 0.5*(bins[1:]+bins[:-1]) p = dist.pdf(binc) ax.plot(binc, p, lw=2, color='black') plt.show() From timmichelsen at gmx-topmail.de Mon Jun 16 18:16:35 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Tue, 17 Jun 2008 00:16:35 +0200 Subject: [SciPy-user] converting 01-24h logger data for timeseries.date_array Message-ID: Hello, I have an array with date and time and data values. I would like to read it into a timeseries. Therefore I would like to read the time information in the data file into a list of dates in order to create a timeseries.date_array. Unfortunately, many logging devices put the data out in hours 1-24 format (see A below). How can I reformat this into a format (B below) that's acceped by datetime.datetime? Example: * How to read in such a array: ### A: orignial data ### DATE; VALUE (tab separated) 03.08.99 23:50:00 10. 03.08.99 24:00:00 11. 04.08.99 00:10:00 10.5 ###B: needed data for datetime ### DATE; VALUE (tab separated) 03.08.99 23:50:00 10. 04.08.99 20:00:00; 11.; 04.08.99 00:10:00 10.5 ### ### Code I use to read the data: data_in = numpy.loadtxt(input_file, dtype=numpy.str_, skiprows=1) dates_list = ["%s %s" % (d[0], d[1]) for d in data_in] dates_dt = [(datetime.datetime.strptime(d, "%d.%m.%Y %H:%M:%S")) for d in dates_list] date_arr = ts.date_array(dates_dt, freq='minute') series_in = ts.time_series(data_in[:,3].astype(numpy.float_), date_arr, mask=(data_in[:,3]==nodata_string_input), freq='minute') => Now, if I read in the raw data A the row "03.08.99 24:00:00 11." gets pre-pended to all values of the day 03.08.99 because it get's parsed as hour 0 of that day. In reality it's hour 0 of day 04.08.99. Therefore I have to reformat the raw data to the way represented by B. My current workaround is to open the file in a spreadsheet application and save it as ascii again. but I would prefer a python only solution because there are data sets which even don't fit in spreadsheets due to their length. So far, I could find a way to do this efficently. I would really appreciate if someone could point me into a direction on how to achieve this. Kind regards, Timmie From elmico.filos at gmail.com Mon Jun 16 19:10:28 2008 From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=) Date: Tue, 17 Jun 2008 01:10:28 +0200 Subject: [SciPy-user] Integration with precalculated values Message-ID: Hi, I need to evaluate this function for an array of ys f(y) = integral( exp(x*x) * psi(x), A -y, B -y) # 2nd and 3rd arguments: lower and upper limits where psi(x) = integral( exp(t*t) * ( 1 + erf(t) )**2 , -Inf, x) Apart from the fact that the integrand blows up easily, I would gain some speed by precalculating psi for the ranges I need for the computation of f(y). 
From elmico.filos at gmail.com Mon Jun 16 19:10:28 2008 From: elmico.filos at gmail.com (Mico Filós) Date: Tue, 17 Jun 2008 01:10:28 +0200 Subject: [SciPy-user] Integration with precalculated values Message-ID:

Hi, I need to evaluate this function for an array of ys

f(y) = integral( exp(x*x) * psi(x), A -y, B -y)   # 2nd and 3rd arguments: lower and upper limits

where

psi(x) = integral( exp(t*t) * ( 1 + erf(t) )**2 , -Inf, x)

Apart from the fact that the integrand blows up easily, I would gain some speed by precalculating psi for the ranges I need for the computation of f(y). Is there any proper way to do that?

I would really appreciate any hint or suggestion on how to evaluate the integral :) Best

From robert.kern at gmail.com Mon Jun 16 19:18:54 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Jun 2008 18:18:54 -0500 Subject: [SciPy-user] Integration with precalculated values In-Reply-To: References: Message-ID: <3d375d730806161618q389c0aafi8f7b632f47a479bd@mail.gmail.com>

On Mon, Jun 16, 2008 at 18:10, Mico Filós wrote: > Hi, > > I need to evaluate this function for an array of ys > > f(y) = integral( exp(x*x) * psi(x), A -y, B -y) # 2nd and 3rd > arguments: lower and upper limits > > where > > psi(x) = integral( exp(t*t) * ( 1 + erf(t) )**2 , -Inf, x) > > Apart from the fact that the integrand blows up easily, I would gain > some speed by > precalculating psi for the ranges I need for the computation of f(y). > Is there any proper way to do that?

Only if you can afford to give up adaptive sampling. Then you can use fixed_quad() for Gaussian quadrature over your function (it will do just one vectorized evaluation), or trapz(), simps() or romb() for uniform samples that you have computed yourself. If your function blows up easily, though, you may not be able to afford to give up on adaptive sampling.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From peridot.faceted at gmail.com Mon Jun 16 21:29:31 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 16 Jun 2008 19:29:31 -0600 Subject: [SciPy-user] Integration with precalculated values In-Reply-To: References: Message-ID:

2008/6/16 Mico Filós : > Hi, > > I need to evaluate this function for an array of ys > > f(y) = integral( exp(x*x) * psi(x), A -y, B -y) # 2nd and 3rd > arguments: lower and upper limits > > where > > psi(x) = integral( exp(t*t) * ( 1 + erf(t) )**2 , -Inf, x) > > Apart from the fact that the integrand blows up easily, I would gain > some speed by > precalculating psi for the ranges I need for the computation of f(y). > Is there any proper way to do that? > > I would really appreciate any hint or suggestion on how to evaluate > the integral :)

Hmm. I would start with an implementation you know to be reliable; it may be fast enough, and if not, it can serve as a good test case. For this I'd use scipy.integrate.quad, more or less the way you wrote it there.

If you want to accelerate things by precalculating psi, how you handle it depends on the y values and the accuracy you need. The most direct way assumes you have no control of the y values; then you can use a fairly standard trick to replace psi(t) by a function that is faster to calculate. (At least, I often use it.) Just compute the function at a reasonable set of values (I'm afraid you'll have to figure out what qualifies as "reasonable"), and use scipy.interpolate.splrep to fit a spline through them. Then instead of evaluating psi(x) directly, you can evaluate the spline (which will be very quick). Since your function is always positive and probably varies a lot in magnitude, I'd fit a spline to the log and then exponentiate on evaluation. Also, you can drastically accelerate the evaluation of psi(t) at a sequence of values by using an ODE integrator (say scipy.integrate.odeint) once. (Since integration needs to walk across t values anyway, you might as well use an integrator that can keep track of all the values.
Also, if it's psi that likes to blow up, odeint can cope with that. See its docstring.)

Given a fast way to evaluate psi(t), quad should allow you to evaluate the function you actually want rather quickly. If this is too slow, you can probably do something involving more carefully discretizing your two functions, but it will probably require you to restrict the range of y values.

Looking at your function, I strongly suspect things can be simplified by rewriting it as a multidimensional integral and rethinking how you do the integral. In particular, I'd look to see if anything can be done with rewriting the erf as an integral, since its exp(-u**2) might help keep the other exp(x**2) and exp(t**2) under control. But as always, it's a tradeoff between work for you and work for the machine. Anne
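A minimal sketch of the tabulate-then-spline idea for psi; the grid limits are placeholders that would have to match the range f(y) actually needs:

import numpy as np
from scipy import integrate, interpolate, special

def psi_integrand(t):
    return np.exp(t * t) * (1.0 + special.erf(t)) ** 2

# tabulate psi on a grid once (the slow part)
xs = np.linspace(-5.0, 0.0, 200)          # assumed range of interest
vals = [integrate.quad(psi_integrand, -np.inf, x)[0] for x in xs]

# fit a spline to log(psi), as suggested, and exponentiate on evaluation
tck = interpolate.splrep(xs, np.log(vals))

def psi_fast(x):
    # cheap approximation of psi(x) inside the tabulated range
    return np.exp(interpolate.splev(x, tck))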
From anand.prabhakar.patil at gmail.com Tue Jun 17 04:34:22 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Tue, 17 Jun 2008 09:34:22 +0100 Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head In-Reply-To: <3d375d730806161210x7d5b2c46w83a817c1d58aae74@mail.gmail.com> References: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com> <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com> <3d375d730806161210x7d5b2c46w83a817c1d58aae74@mail.gmail.com> Message-ID: <3F964DF4-5AAB-4378-BD03-4126ED55AB3D@gmail.com>

On 16 Jun 2008, at 20:10, Robert Kern wrote: > On Mon, Jun 16, 2008 at 12:06, Anand Patil > wrote: >> >> On 16 Jun 2008, at 17:49, Robert Kern wrote: >> >>> On Mon, Jun 16, 2008 at 10:58, Anand Patil >>> >>> Does your program crash using the www.python.org binary built with >>> gcc 4.0.1? >> >> Actually, I wasn't able to get my Python environment set up with the >> python.org binary. I need OpenMP, which means I need gcc 4.2 ... but >> the python.org binary has no-cpp-precomp and Wno-long-double baked >> in. > > Good enough reason.

Can I be pretty sure that the problem is caused by a reference cycle, and how can I: - Reproduce the problem with a simple program, assuming it were in numpy itself; - Look for problematic reference cycles in my program? Thanks, Anand

From anand.prabhakar.patil at gmail.com Tue Jun 17 07:24:49 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Tue, 17 Jun 2008 12:24:49 +0100 Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head In-Reply-To: <3F964DF4-5AAB-4378-BD03-4126ED55AB3D@gmail.com> References: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com> <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com> <3d375d730806161210x7d5b2c46w83a817c1d58aae74@mail.gmail.com> <3F964DF4-5AAB-4378-BD03-4126ED55AB3D@gmail.com> Message-ID: <465CCB8C-B480-47F6-97F7-86DBF78792C7@gmail.com>

On 17 Jun 2008, at 09:34, Anand Patil wrote: > On 16 Jun 2008, at 20:10, Robert Kern wrote: >> On Mon, Jun 16, 2008 at 12:06, Anand Patil >> wrote: >>> >>> On 16 Jun 2008, at 17:49, Robert Kern wrote: >>> >>>> Does your program crash using the www.python.org binary built with >>>> gcc 4.0.1? >>> >>> Actually, I wasn't able to get my Python environment set up with the >>> python.org binary. I need OpenMP, which means I need gcc 4.2 ... but >>> the python.org binary has no-cpp-precomp and Wno-long-double baked >>> in. >> >> Good enough reason. > > Can I be pretty sure that the problem is caused by a reference > cycle, and how can I: > - Reproduce the problem with a simple program, assuming it were in > numpy itself; > - Look for problematic reference cycles in my program?

I found the bit of PyMC that was making views of views & changed it, and the problem seems to be gone (now that I have announced that, I'm sure it'll come back this afternoon)... so this isn't a personal emergency anymore. So thanks for the tip about views.

I tried reproducing the bug as follows:

In [1]: from numpy import *

In [2]: A = zeros(10)

In [3]: for i in xrange(100000000):
   ...:     B = A.view(ndarray)

but no errors happened. Any other tests I can try? Would it be productive to submit the crash report as a bug, or is it too vague? Anand

From elmico.filos at gmail.com Tue Jun 17 07:29:26 2008 From: elmico.filos at gmail.com (Mico Filós) Date: Tue, 17 Jun 2008 13:29:26 +0200 Subject: [SciPy-user] Integration with precalculated values In-Reply-To: References: Message-ID:

Thanks Robert and Anne, I will use scipy.interpolate.splrep for my precalculation. I finally managed to keep the factors exp(x**2) under control. The error function can be expanded in series that contain a common factor equal to exp(-x**2). I use in particular the approximation of the complementary error function with rational functions, as described in "Rational Chebyshev approximations of the error function" by W. J. Cody, Math. Comp., 1969, pp. 631--638. I have also had to split the definition of the integrand into positive and negative values of the argument, in order for the integral to go smoothly. Thanks for your help.

From robert.kern at gmail.com Tue Jun 17 14:05:23 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 17 Jun 2008 13:05:23 -0500 Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head In-Reply-To: <465CCB8C-B480-47F6-97F7-86DBF78792C7@gmail.com> References: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com> <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com> <3d375d730806161210x7d5b2c46w83a817c1d58aae74@mail.gmail.com> <3F964DF4-5AAB-4378-BD03-4126ED55AB3D@gmail.com> <465CCB8C-B480-47F6-97F7-86DBF78792C7@gmail.com> Message-ID: <3d375d730806171105n1198dbe8t5fdedbe868a2247@mail.gmail.com>

On Tue, Jun 17, 2008 at 06:24, Anand Patil wrote: > I found the bit of PyMC that was making views of views & changed it, > and the problem seems to be gone (now that I have announced that I'm > sure it'll come back this afternoon)... so this isn't a personal > emergency anymore. So thanks for the tip about views.

Okay, good. Can you post your patch? That might give us some clues.

> I tried reproducing the bug as follows: > > In [1]: from numpy import * > > In [2]: A = zeros(10) > > In [3]: for i in xrange(100000000): > ...: B = A.view(ndarray) > > but no errors happened.

In [1]: from numpy import *

In [2]: A = zeros(1, dtype=uint8)

In [3]: B = A[:]

In [4]: for i in xrange(1000000):
   ...:     B = B[:]
   ...:
   ...:

In [5]: del B
zsh: segmentation fault  ipython

Yay! Reproducibility!

http://scipy.org/scipy/numpy/ticket/822

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From pav at iki.fi Tue Jun 17 15:56:04 2008 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 17 Jun 2008 19:56:04 +0000 (UTC) Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head References: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com> <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com> <3d375d730806161210x7d5b2c46w83a817c1d58aae74@mail.gmail.com> <3F964DF4-5AAB-4378-BD03-4126ED55AB3D@gmail.com> <465CCB8C-B480-47F6-97F7-86DBF78792C7@gmail.com> <3d375d730806171105n1198dbe8t5fdedbe868a2247@mail.gmail.com> Message-ID: Tue, 17 Jun 2008 13:05:23 -0500, Robert Kern wrote: [clip: very long chain of views -> crash] > Yay! Reproducibility! > > http://scipy.org/scipy/numpy/ticket/822 It's probably the same as this one: http://scipy.org/scipy/numpy/ticket/466 -- Pauli Virtanen From robert.kern at gmail.com Tue Jun 17 16:54:39 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 17 Jun 2008 15:54:39 -0500 Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head In-Reply-To: References: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com> <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com> <3d375d730806161210x7d5b2c46w83a817c1d58aae74@mail.gmail.com> <3F964DF4-5AAB-4378-BD03-4126ED55AB3D@gmail.com> <465CCB8C-B480-47F6-97F7-86DBF78792C7@gmail.com> <3d375d730806171105n1198dbe8t5fdedbe868a2247@mail.gmail.com> Message-ID: <3d375d730806171354o7efc360bs5a28deecfe935f2e@mail.gmail.com> On Tue, Jun 17, 2008 at 14:56, Pauli Virtanen wrote: > Tue, 17 Jun 2008 13:05:23 -0500, Robert Kern wrote: > > [clip: very long chain of views -> crash] >> Yay! Reproducibility! >> >> http://scipy.org/scipy/numpy/ticket/822 > > It's probably the same as this one: > http://scipy.org/scipy/numpy/ticket/466 Yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From anand.prabhakar.patil at gmail.com Tue Jun 17 17:32:38 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Tue, 17 Jun 2008 22:32:38 +0100 Subject: [SciPy-user] Bus error on Intel mac pro w/Leopard, gcc 4.2, numpy from svn head In-Reply-To: <3d375d730806171354o7efc360bs5a28deecfe935f2e@mail.gmail.com> References: <3d375d730806160949xa3e83eag5f419d4d6026e0ca@mail.gmail.com> <5216ED6D-7899-445A-8224-3E138F7DE970@gmail.com> <3d375d730806161210x7d5b2c46w83a817c1d58aae74@mail.gmail.com> <3F964DF4-5AAB-4378-BD03-4126ED55AB3D@gmail.com> <465CCB8C-B480-47F6-97F7-86DBF78792C7@gmail.com> <3d375d730806171105n1198dbe8t5fdedbe868a2247@mail.gmail.com> <3d375d730806171354o7efc360bs5a28deecfe935f2e@mail.gmail.com> Message-ID: <2bc7a5a50806171432obc98a6cm28f963af22f68890@mail.gmail.com> On Tue, Jun 17, 2008 at 9:54 PM, Robert Kern wrote: > On Tue, Jun 17, 2008 at 14:56, Pauli Virtanen wrote: > > Tue, 17 Jun 2008 13:05:23 -0500, Robert Kern wrote: > > > > [clip: very long chain of views -> crash] > >> Yay! Reproducibility! > >> > >> http://scipy.org/scipy/numpy/ticket/822 > > > > It's probably the same as this one: > > http://scipy.org/scipy/numpy/ticket/466 > > Yes. > Great, glad to hear it's being addressed & thanks again for the help. MCMC can be a royal hassle; if there's the slightest chance of something going wrong it'll eventually find it, often after hours or days of normal operation. 
In case it can still be helpful, my patch was to replace many consecutive manipulations of PyMC variable 'x' as follows:

x.value = new_value
x.value = x.last_value

with:

x.value = new_value
x._value = x.last_value

That bypassed calls to x.set_value via the property x.value. set_value casts input arguments to the declared type of x, if provided, and then stores them as x._value:

def set_value(self, value):
    # Record new value and increment counter
    if self.verbose > 0:
        print '\t' + self.__name__ + ': value set to ', value

    # Value can't be updated if isdata=True
    if self.isdata:
        raise AttributeError, 'Stochastic ' + self.__name__ + '\'s value cannot be updated if isdata flag is set'

    # Save current value as last_value
    # Don't copy because caching depends on the object's reference.
    self.last_value = self._value

    if isinstance(value, ndarray):
        value.flags['W'] = False
        if self.dtype is not None:
            if not self.dtype is value.dtype:
                self._value = asarray(value, dtype=self.dtype).view(value.__class__)
            else:
                self._value = value
        else:
            self._value = value
    elif self.dtype and self.dtype is not object:
        try:
            self._value = self.dtype(value)
        except TypeError:
            self._value = asarray(value, dtype=self.dtype)
    else:
        self._value = value

value = property(fget=get_value, fset=set_value, doc="Self's current value.")

Anand

From bryan.fodness at gmail.com Tue Jun 17 19:10:52 2008 From: bryan.fodness at gmail.com (Bryan Fodness) Date: Tue, 17 Jun 2008 19:10:52 -0400 Subject: [SciPy-user] three interpolation scenarios Message-ID:

I have three different sets of data that need interpolation. The first, with a single x and y, was easy and works.

x    y
0.0  0.999
0.1  1.006
0.2  1.014
0.3  1.021
0.4  1.000
0.5  1.001
...

data = loadtxt('data.txt', skiprows=1)
x, y = data[:,0], data[:,1]
interp = interpolate.interp1d(x, y)

The second, with multiple y values, is a little more complicated.

x    y1     y2     y3     y4     y5
0.0  1.000  0.999  0.999  0.999  1.000
0.1  1.000  1.000  1.001  1.005  1.006
0.2  1.000  1.001  1.004  1.008  1.014
0.3  1.000  1.002  1.006  1.012  1.021
0.4  1.000  1.003  1.009  1.015  1.028
0.5  1.001  1.005  1.012  1.019  1.036
...

y = 'y3'
data = loadtxt('oaf.txt', skiprows=1)
label = line.split()
d = label[0]
if d == 'd':
    dlist = label
x, y = data[:,0], data[:,dlist.index(y)]
interp = interpolate.interp1d(x, y)

And the third is the 2d case.

     10     20     30     40     50
0.0  1.000  0.999  0.999  0.999  1.000
0.1  1.000  1.000  1.001  1.005  1.006
0.2  1.000  1.001  1.004  1.008  1.014
0.3  1.000  1.002  1.006  1.012  1.021
0.4  1.000  1.003  1.009  1.015  1.028
0.5  1.001  1.005  1.012  1.019  1.036
...

I am not sure how to get the x, y and z's. Are the 1st and 2nd ways the best way to do this, and can someone help with the third case?

-- "The game of science can accurately be described as a never-ending insult to human intelligence." - João Magueijo

From Bryan.Fodness at gmail.com Tue Jun 17 19:19:45 2008 From: Bryan.Fodness at gmail.com (Bryan) Date: Tue, 17 Jun 2008 16:19:45 -0700 (PDT) Subject: [SciPy-user] three interpolation scenarios In-Reply-To: References: Message-ID: <97ef1481-e4cc-449c-89d5-714e5cfa547f@z72g2000hsb.googlegroups.com>

On Jun 17, 7:10 pm, "Bryan Fodness" wrote: > I have three different sets of data that need interpolation. > > The first, with a single x and y, was easy and works. > > x    y > 0.0 0.999 > 0.1 1.006 > 0.2 1.014 > 0.3 1.021 > 0.4 1.000 > 0.5 1.001 > ...
> > data = loadtxt('data.txt', skiprows=1) > x, y = data[:,0], data[:,1] > interp = interpolate.interp1d(x, y) > > The second, with multiple y values, is a little more complicated. > > x    y1     y2     y3     y4     y5 > 0.0 1.000 0.999 0.999 0.999 1.000 > 0.1 1.000 1.000 1.001 1.005 1.006 > 0.2 1.000 1.001 1.004 1.008 1.014 > 0.3 1.000 1.002 1.006 1.012 1.021 > 0.4 1.000 1.003 1.009 1.015 1.028 > 0.5 1.001 1.005 1.012 1.019 1.036 > ... > > y = 'y3' > data = loadtxt('oaf.txt', skiprows=1) > label = line.split() > d = label[0]

this is supposed to be

if d == 'x':

> if d == 'd': >     dlist = label > x, y = data[:,0], data[:,dlist.index(y)] > interp = interpolate.interp1d(x, y) > > And the third is the 2d case. > >      10     20     30     40     50 > 0.0 1.000 0.999 0.999 0.999 1.000 > 0.1 1.000 1.000 1.001 1.005 1.006 > 0.2 1.000 1.001 1.004 1.008 1.014 > 0.3 1.000 1.002 1.006 1.012 1.021 > 0.4 1.000 1.003 1.009 1.015 1.028 > 0.5 1.001 1.005 1.012 1.019 1.036 > ... > > I am not sure how to get the x, y and z's. > > Are the 1st and 2nd ways the best way to do this, and can someone help with > the third case? > > -- > "The game of science can accurately be described as a never-ending insult to > human intelligence." - João Magueijo > > _______________________________________________ > SciPy-user mailing list > SciPy-u... at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user
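For the third case, scipy.interpolate.interp2d is one option; a minimal sketch, assuming the layout above (first line holds the five column coordinates, each following line a row coordinate plus five values; the file name is a placeholder):

import numpy as np
from scipy import interpolate

f = open('table2d.txt')
ycols = np.array(f.readline().split(), dtype=float)   # 10 20 30 40 50
body = np.loadtxt(f)
xrows, z = body[:, 0], body[:, 1:]

# z rows run along xrows, z columns along ycols
spline = interpolate.interp2d(ycols, xrows, z, kind='linear')
print spline(25.0, 0.35)   # interpolated value at column 25, row 0.35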
From C.J.Lee at tnw.utwente.nl Wed Jun 18 03:26:57 2008 From: C.J.Lee at tnw.utwente.nl (Chris Lee) Date: Wed, 18 Jun 2008 09:26:57 +0200 Subject: [SciPy-user] netCDF files, ctypes, and numpy Message-ID: <2ADCDFB4-9E69-4578-86E4-56C9F5DAE4BB@tnw.utwente.nl>

Hi All, I am trying to develop a script to read and write netCDF files. I could use Scientific.IO to do this (and in fact I already do this on a linux box) but I would rather use a single script that talks directly to the library, so that I avoid needing to ensure that Scientific, Numeric, and numpy are all installed on every system the code gets used on.

I have made a start: using ctypes, I can load the netCDF file, read the attributes of each variable, and count the dimensions and the variables... but there are a few things that baffle me. The file I am working with has multiple variables that use a single dimension. When I try to discover which dimensions are used, or load values from the variable, the buffer remains null. It appears that ctypes uses create_string_buffer(some_size) for all buffers. Does anyone have any experience with this? Cheers Chris

From robert.kern at gmail.com Wed Jun 18 04:18:03 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 18 Jun 2008 03:18:03 -0500 Subject: [SciPy-user] netCDF files, ctypes, and numpy In-Reply-To: <2ADCDFB4-9E69-4578-86E4-56C9F5DAE4BB@tnw.utwente.nl> References: <2ADCDFB4-9E69-4578-86E4-56C9F5DAE4BB@tnw.utwente.nl> Message-ID: <3d375d730806180118h6bc7f335hc4916f376d18be8a@mail.gmail.com>

On Wed, Jun 18, 2008 at 02:26, Chris Lee wrote: > Hi All, > > I am trying to develop a script to read and write netCDF files. I > could use Scientific.IO to do this (and in fact I already do this on a > linux box) but I would rather use a single script that talks directly > to the library so that I avoid needing to ensure that Scientific, > Numeric, and numpy are all installed on every system the code gets > used on.

Numeric is unnecessary with recent versions of Scientific.

pupynere is a pure-Python netcdf reader. You may want to look into implementing writer capabilities for it without using the netcdf C library.

http://pypi.python.org/pypi/pupynere/

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From c.j.lee at tnw.utwente.nl Wed Jun 18 04:44:41 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Wed, 18 Jun 2008 10:44:41 +0200 Subject: [SciPy-user] netCDF files, ctypes, and numpy In-Reply-To: <3d375d730806180118h6bc7f335hc4916f376d18be8a@mail.gmail.com> References: <2ADCDFB4-9E69-4578-86E4-56C9F5DAE4BB@tnw.utwente.nl> <3d375d730806180118h6bc7f335hc4916f376d18be8a@mail.gmail.com> Message-ID:

On Jun 18, 2008, at 10:18 AM, Robert Kern wrote: > Numeric is unnecessary with recent versions of Scientific.

Oh, that is good to know--certainly the Scientific website recommends a particular version of Numeric.

> pupynere is a pure-Python netcdf reader. You may want to look into > implementing writer capabilities for it without using the netcdf C > library. > > http://pypi.python.org/pypi/pupynere/

A life-saver :) I had actually started making real progress... but this is sooo much simpler :) /me stops development on alpha version of the wheel. Thanks for your help Chris

From lopmart at gmail.com Wed Jun 18 05:30:21 2008 From: lopmart at gmail.com (Jose Lopez) Date: Wed, 18 Jun 2008 02:30:21 -0700 Subject: [SciPy-user] error at optimize.fmin Message-ID: <4eeef9d40806180230y72fd83fdu8bcce6b6ccdfa7ff@mail.gmail.com>

Hi, my code is as follows, and I get an error that I don't understand:

from pylab import *
from scipy import *

def func(b,Hder):
    return (Hder[0]-(b[0]-b[1]))**2 + (Hder[1]-(b[1]-b[2]))**2+ (Hder[2]-(b[2]-b[3]))**2+ (Hder[3]-(b[3]-100.0))**2

b0=[0.0,0.0,0.0,0.0]
H0=[0.0,-50.0,20.0,-20.0]

xopt =optimize.fmin_l_bfgs_b(func,b0,args=(H0))

The error is:

Traceback (most recent call last):
  File "C:/Users/Valeria2/JL-MAESTRIA/programas 3 avance/resultado2/pruebafmin_l.py", line 13, in
    xopt =optimize.fmin_l_bfgs_b(funcion,b0,args=(H0))
  File "C:\Python25\Lib\site-packages\scipy\optimize\lbfgsb.py", line 205, in fmin_l_bfgs_b
    f, g = func_and_grad(x)
  File "C:\Python25\Lib\site-packages\scipy\optimize\lbfgsb.py", line 156, in func_and_grad
    f, g = func(x, *args)
TypeError: funcion() takes exactly 2 arguments (5 given)

thanks
From alexander.borghgraef.rma at gmail.com Wed Jun 18 08:07:08 2008 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Wed, 18 Jun 2008 14:07:08 +0200 Subject: [SciPy-user] Mlab doesn't work In-Reply-To: <20080613133323.GG3573@phare.normalesup.org> References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <20080613125150.GA3573@phare.normalesup.org> <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com> <20080613133323.GG3573@phare.normalesup.org> Message-ID: <9e8c52a20806180507u4e862eebt72d3eda59e42d9b2@mail.gmail.com>

I've been playing with mlab some more, and I ran into another problem. I upgraded to enthought 2.7.1 to get rid of the mayavi.tools thing, and tried plotting some things, which worked fine. Yay. Then I tried to use mlab in my existing code, and I kept running into the following error:

python: Python/ceval.c:2624: PyEval_EvalCodeEx: Assertion `tstate != ((void *)0)' failed.
Abort

After commenting out nearly everything, it seemed that there was some incompatibility between mlab and pylab (which I use for 2D plotting). For example, this works fine:

from enthought.mayavi import mlab
from scipy import lena
mlab.surf(lena())

But this results in the above mentioned error:

import pylab
from enthought.mayavi import mlab
from scipy import lena
mlab.surf(lena())

All .py files were run with ipython -wthread. Calling an mlab function when pylab is loaded results in this crash. Pylab functions OTOH are fine when mayavi.mlab is loaded. Is this a known problem? Is this being fixed in 3.0.1b? Is there a workaround?

-- Alex Borghgraef

From se.berg at stud.uni-goettingen.de Wed Jun 18 08:08:09 2008 From: se.berg at stud.uni-goettingen.de (Sebastian Stephan Berg) Date: Wed, 18 Jun 2008 14:08:09 +0200 Subject: [SciPy-user] error at optimize.fmin In-Reply-To: <4eeef9d40806180230y72fd83fdu8bcce6b6ccdfa7ff@mail.gmail.com> References: <4eeef9d40806180230y72fd83fdu8bcce6b6ccdfa7ff@mail.gmail.com> Message-ID: <1213790889.5906.17.camel@sebook>

Hi, very simple little error ;). Python will see (H0) as H0 and not a tuple with the list as element. Using args=(H0,) should fix it.

Sebastian
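The distinction is easy to check interactively; a short demonstration of why (H0) is not a one-element tuple:

H0 = [0.0, -50.0, 20.0, -20.0]

print type((H0))    # <type 'list'>: parentheses here only group
print type((H0,))   # <type 'tuple'>: the trailing comma makes the tuple
print len((H0,))    # 1 extra argument (the list itself), as args expects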
From gael.varoquaux at normalesup.org Wed Jun 18 09:01:37 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 18 Jun 2008 15:01:37 +0200 Subject: [SciPy-user] netCDF files, ctypes, and numpy In-Reply-To: References: <2ADCDFB4-9E69-4578-86E4-56C9F5DAE4BB@tnw.utwente.nl> <3d375d730806180118h6bc7f335hc4916f376d18be8a@mail.gmail.com> Message-ID: <20080618130137.GB7214@phare.normalesup.org>

On Wed, Jun 18, 2008 at 10:44:41AM +0200, Chris Lee wrote: > > Numeric is unnecessary with recent versions of Scientific. > Oh, that is good to know--certainly the Scientific website recommends > a particular version of Numeric.

Beware that you explicitly need to pass a build switch (can't numpy be the default?) Gaël

From gael.varoquaux at normalesup.org Wed Jun 18 09:17:29 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 18 Jun 2008 15:17:29 +0200 Subject: [SciPy-user] Mlab doesn't work In-Reply-To: <9e8c52a20806180507u4e862eebt72d3eda59e42d9b2@mail.gmail.com> References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <20080613125150.GA3573@phare.normalesup.org> <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com> <20080613133323.GG3573@phare.normalesup.org> <9e8c52a20806180507u4e862eebt72d3eda59e42d9b2@mail.gmail.com> Message-ID: <20080618131729.GB10375@phare.normalesup.org>

On Wed, Jun 18, 2008 at 02:07:08PM +0200, Alexander Borghgraef wrote: > After commenting out nearly everything, it seemed that there was some > incompatibility between mlab and pylab (which I use for 2D plotting). > For example, this works fine:
> from enthought.mayavi import mlab
> from scipy import lena
> mlab.surf(lena())
> But this results in the above mentioned error:
> import pylab
> from enthought.mayavi import mlab
> from scipy import lena
> mlab.surf(lena())

Interesting. I was not aware of this problem, but it partly makes sense. The problem is that pylab, as you are probably using it, uses the TK toolkit, whereas mayavi uses Wx. Running both event loops at the same time results in a nice segfault due to race conditions.

The solution is to have pylab use the Wx event loop. You can do this by doing (before importing pylab):

"""
import matplotlib
matplotlib.use('WxAdd')
"""

Alternatively, in the recent versions of ipython, Fernando has made sure that if you use both the "-pylab" switch and the "-wthread" switch to ipython, this is enforced, and mayavi works fine. HTH, Gaël

From c.j.lee at tnw.utwente.nl Wed Jun 18 09:27:52 2008 From: c.j.lee at tnw.utwente.nl (Chris Lee) Date: Wed, 18 Jun 2008 15:27:52 +0200 Subject: [SciPy-user] netCDF files, ctypes, and numpy In-Reply-To: <20080618130137.GB7214@phare.normalesup.org> References: <2ADCDFB4-9E69-4578-86E4-56C9F5DAE4BB@tnw.utwente.nl> <3d375d730806180118h6bc7f335hc4916f376d18be8a@mail.gmail.com> <20080618130137.GB7214@phare.normalesup.org> Message-ID: <695AF992-A9A5-4F38-B274-203E40E0E8C5@tnw.utwente.nl>

On Jun 18, 2008, at 3:01 PM, Gael Varoquaux wrote: > On Wed, Jun 18, 2008 at 10:44:41AM +0200, Chris Lee wrote: >>> Numeric is unnecessary with recent versions of Scientific. >> Oh, that is good to know--certainly the Scientific website recommends >> a particular version of Numeric. > Beware that you explicitly need to pass a build switch (can't numpy be > the default?)

Well since discovering pupynere, I have no need for Scientific at all.
Though, it should be noted that the current version of pupynere will only load netCDF files with a small number of variables. I think I have patched the code to fix that now. If I find that it works I will send the patched code on to the author. Cheers Chris

From ggellner at uoguelph.ca Wed Jun 18 09:12:55 2008 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 18 Jun 2008 09:12:55 -0400 Subject: [SciPy-user] error at optimize.fmin In-Reply-To: <4eeef9d40806180230y72fd83fdu8bcce6b6ccdfa7ff@mail.gmail.com> References: <4eeef9d40806180230y72fd83fdu8bcce6b6ccdfa7ff@mail.gmail.com> Message-ID: <20080618131255.GA7844@basestar>

On Wed, Jun 18, 2008 at 02:30:21AM -0700, Jose Lopez wrote: > Hi, my code is as follows, and I get an error that I don't understand: >
> from pylab import *
> from scipy import *
>
> def func(b,Hder):
>     return (Hder[0]-(b[0]-b[1]))**2 + (Hder[1]-(b[1]-b[2]))**2+ (Hder[2]-(b[2]-b[3]))**2+ (Hder[3]-(b[3]-100.0))**2
>
> b0=[0.0,0.0,0.0,0.0]
> H0=[0.0,-50.0,20.0,-20.0]
>
> xopt =optimize.fmin_l_bfgs_b(func,b0,args=(H0))
>
> The error is:
>
> Traceback (most recent call last):
>   File "C:/Users/Valeria2/JL-MAESTRIA/programas 3 avance/resultado2/pruebafmin_l.py", line 13, in
>     xopt =optimize.fmin_l_bfgs_b(funcion,b0,args=(H0))
>   File "C:\Python25\Lib\site-packages\scipy\optimize\lbfgsb.py", line 205, in fmin_l_bfgs_b
>     f, g = func_and_grad(x)
>   File "C:\Python25\Lib\site-packages\scipy\optimize\lbfgsb.py", line 156, in func_and_grad
>     f, g = func(x, *args)
> TypeError: funcion() takes exactly 2 arguments (5 given)
>
> thanks

You have a couple of problems.

The error is because you are passing a single item, H0, in parentheses, which is not the same as a tuple (parentheses are also used as grouping specifiers); instead you need to do args=(H0,). If you add this you will get a new traceback:

Traceback (most recent call last):
  File "opt_error.py", line 15, in
    xopt = optimize.fmin_l_bfgs_b(func, b0, args=(H0,))
  File "/usr/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", line 205, in fmin_l_bfgs_b
    f, g = func_and_grad(x)
  File "/usr/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", line 156, in func_and_grad
    f, g = func(x, *args)
TypeError: 'numpy.float64' object is not iterable

Which is a cryptic way of noting that you are using the range-bounded version of bfgs, but you haven't provided bounds. If you meant the problem to be unconstrained, then you should have

xopt = optimize.fmin_bfgs(func, b0, args=(H0,))

and all will work. Otherwise provide bounds.

Okay, so the problem is fixed, but to give another way of passing arguments, which I find nicer (and is also the preferred way in modern matlab, using function handles) . . .

Instead of using the args keyword for fmin_bfgs, make another function that calls func with Hder bound to H0, like so:

def obj(b):
    return func(b, H0)

now just pass obj to fmin_bfgs. So the code would be:

from pylab import *
from scipy import *

def func(b,Hder):
    return (Hder[0]-(b[0]-b[1]))**2 + (Hder[1]-(b[1]-b[2]))**2+(Hder[2]-(b[2]-b[3]))**2+ (Hder[3]-(b[3]-100.0))**2

b0=[0.0,0.0,0.0,0.0]
H0=[0.0,-50.0,20.0,-20.0]

def obj(b):
    return func(b, H0)

xopt=optimize.fmin_bfgs(obj, b0)

I find this prettier; you can also do it more concisely using lambda functions (which makes it even closer to the matlab way). If you want to know how, send me a message.

Gabriel
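For reference, the lambda version alluded to above is a one-liner; a sketch using the same func, b0 and H0 as in the messages above:

from scipy import optimize

# the lambda binds Hder to H0, so the optimizer sees a function of b alone
xopt = optimize.fmin_bfgs(lambda b: func(b, H0), b0)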
From gael.varoquaux at normalesup.org Wed Jun 18 09:54:08 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 18 Jun 2008 15:54:08 +0200 Subject: [SciPy-user] error at optimize.fmin In-Reply-To: <1213790889.5906.17.camel@sebook> References: <4eeef9d40806180230y72fd83fdu8bcce6b6ccdfa7ff@mail.gmail.com> <1213790889.5906.17.camel@sebook> Message-ID: <20080618135408.GD10375@phare.normalesup.org>

On Wed, Jun 18, 2008 at 02:08:09PM +0200, Sebastian Stephan Berg wrote: > very simple little error ;). Python will see (H0) as H0 and not a tuple > with the list as element. Using args=(H0,) should fix it.

There is more to it. You'll find another puzzling error once you fix this one. If you read the docs to optimize.fmin_l_bfgs_b carefully, you can see that if you do not specify the gradient of the function, fprime, then func should return two values: the value of the function you want to optimize, and the gradient. If you don't want this behavior, you should set "approx_grad" to true. HTH, Gaël

From lopmart at gmail.com Wed Jun 18 11:39:01 2008 From: lopmart at gmail.com (Jose Lopez) Date: Wed, 18 Jun 2008 08:39:01 -0700 Subject: [SciPy-user] error at optimize.fmin In-Reply-To: <20080618131255.GA7844@basestar> References: <4eeef9d40806180230y72fd83fdu8bcce6b6ccdfa7ff@mail.gmail.com> <20080618131255.GA7844@basestar> Message-ID: <4eeef9d40806180839l621b74b1lb9495004046ef656@mail.gmail.com>

Hi, thanks for the answer. I would like to know how to use the lambda functions. And by the way, how do I provide bounds to optimize.fmin_l_bfgs_b? In my code, each element of b is in [0,1]. Thanks, atte JL

On Wed, Jun 18, 2008 at 6:12 AM, Gabriel Gellner wrote: > On Wed, Jun 18, 2008 at 02:30:21AM -0700, Jose Lopez wrote: > > Hi, my code is as follows, and I get an error that I don't understand: > >
> > from pylab import *
> > from scipy import *
> >
> > def func(b,Hder):
> >     return (Hder[0]-(b[0]-b[1]))**2 + (Hder[1]-(b[1]-b[2]))**2+ (Hder[2]-(b[2]-b[3]))**2+ (Hder[3]-(b[3]-100.0))**2
> >
> > b0=[0.0,0.0,0.0,0.0]
> > H0=[0.0,-50.0,20.0,-20.0]
> >
> > xopt =optimize.fmin_l_bfgs_b(func,b0,args=(H0))
> >
> > The error is:
> >
> > Traceback (most recent call last):
> >   File "C:/Users/Valeria2/JL-MAESTRIA/programas 3 avance/resultado2/pruebafmin_l.py", line 13, in
> >     xopt =optimize.fmin_l_bfgs_b(funcion,b0,args=(H0))
> >   File "C:\Python25\Lib\site-packages\scipy\optimize\lbfgsb.py", line 205, in fmin_l_bfgs_b
> >     f, g = func_and_grad(x)
> >   File "C:\Python25\Lib\site-packages\scipy\optimize\lbfgsb.py", line 156, in func_and_grad
> >     f, g = func(x, *args)
> > TypeError: funcion() takes exactly 2 arguments (5 given)
> >
> > thanks
>
> You have a couple of problems.
>
> The error is because you are passing a single item, H0, in parentheses, which
> is not the same as a tuple (parentheses are also used as grouping
> specifiers); instead you need to do args=(H0,). If you add this you will get a
> new traceback:
>
> Traceback (most recent call last):
>   File "opt_error.py", line 15, in
>     xopt = optimize.fmin_l_bfgs_b(func, b0, args=(H0,))
>   File "/usr/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", line 205, in fmin_l_bfgs_b
>     f, g = func_and_grad(x)
>   File "/usr/lib/python2.5/site-packages/scipy/optimize/lbfgsb.py", line 156, in func_and_grad
>     f, g = func(x, *args)
> TypeError: 'numpy.float64' object is not iterable
>
> Which is a cryptic way of noting that you are using the range-bounded
> version of bfgs, but you haven't provided bounds.
> If you meant the problem to be unconstrained, then you should have
>
> xopt = optimize.fmin_bfgs(func, b0, args=(H0,))
>
> and all will work. Otherwise provide bounds.
>
> Okay, so the problem is fixed, but to give another way of passing arguments,
> which I find nicer (and is also the preferred way in modern matlab, using
> function handles) . . .
>
> Instead of using the args keyword for fmin_bfgs, make another function that
> calls func with Hder bound to H0, like so:
>
> def obj(b):
>     return func(b, H0)
>
> now just pass obj to fmin_bfgs. So the code would be:
>
> from pylab import *
> from scipy import *
>
> def func(b,Hder):
>     return (Hder[0]-(b[0]-b[1]))**2 + (Hder[1]-(b[1]-b[2]))**2+(Hder[2]-(b[2]-b[3]))**2+ (Hder[3]-(b[3]-100.0))**2
>
> b0=[0.0,0.0,0.0,0.0]
> H0=[0.0,-50.0,20.0,-20.0]
>
> def obj(b):
>     return func(b, H0)
>
> xopt=optimize.fmin_bfgs(obj, b0)
>
> I find this prettier; you can also do it more concisely using lambda functions
> (which makes it even closer to the matlab way). If you want to know how, send
> me a message.
>
> Gabriel
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From peridot.faceted at gmail.com Wed Jun 18 11:47:05 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 18 Jun 2008 09:47:05 -0600 Subject: [SciPy-user] (no subject) In-Reply-To: <4eeef9d40806180229y6456cb59m75250b2eb007a75e@mail.gmail.com> References: <4eeef9d40806180229y6456cb59m75250b2eb007a75e@mail.gmail.com> Message-ID:

2008/6/18 Jose Lopez : > Hi, my code is as follows, and I get an error that I don't understand: >
> from pylab import *
> from scipy import *
>
> def func(b,Hder):
>     return (Hder[0]-(b[0]-b[1]))**2 + (Hder[1]-(b[1]-b[2]))**2+ (Hder[2]-(b[2]-b[3]))**2+ (Hder[3]-(b[3]-100.0))**2
>
> b0=[0.0,0.0,0.0,0.0]
> H0=[0.0,-50.0,20.0,-20.0]
>
> xopt =optimize.fmin_l_bfgs_b(func,b0,args=(H0))

args should be a list or tuple of extra arguments to pass to func. In Python syntax, (H0) is the same thing as H0, which is a list with four elements, so fmin_l_bfgs_b treats it as four additional arguments, rather than one argument that is a list. You should instead provide args= with a list or tuple of only one element. The most readable way to do this is to write args=[H0], but if you want to use a tuple, you should write (H0,) (that is, with a trailing comma, to indicate that it's a tuple). Anne

From hetland at tamu.edu Wed Jun 18 11:48:01 2008 From: hetland at tamu.edu (Rob Hetland) Date: Wed, 18 Jun 2008 17:48:01 +0200 Subject: [SciPy-user] netCDF files, ctypes, and numpy In-Reply-To: <695AF992-A9A5-4F38-B274-203E40E0E8C5@tnw.utwente.nl> References: <2ADCDFB4-9E69-4578-86E4-56C9F5DAE4BB@tnw.utwente.nl> <3d375d730806180118h6bc7f335hc4916f376d18be8a@mail.gmail.com> <20080618130137.GB7214@phare.normalesup.org> <695AF992-A9A5-4F38-B274-203E40E0E8C5@tnw.utwente.nl> Message-ID: <205150A8-AB0F-4324-8D7C-6ECB4C136086@tamu.edu>

On Jun 18, 2008, at 3:27 PM, Chris Lee wrote: > Well since discovering pupynere, I have no need for Scientific at all. > Though, it should be noted that the current version of pupynere will > only load netCDF files with a small number of variables. I think I > have patched the code to fix that now. If I find that it works I will > send the patched code on to the author.

For writing (and reading netCDF3 and netCDF4 files), check out http://netcdf4-python.googlecode.com This package can now be installed linking to either the netcdf3 or netcdf4 libraries (which need to be installed prior).

-Rob ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331
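A minimal sketch of reading and writing with that package; the file and variable names are placeholders, and the calls shown (Dataset, createDimension, createVariable) are per netcdf4-python's documented interface:

import netCDF4

# read: pull one variable out as a numpy array
nc = netCDF4.Dataset('ocean.nc')
temp = nc.variables['temp'][:]
nc.close()

# write: mirror it into a new file (assuming temp is 1-D)
out = netCDF4.Dataset('copy.nc', 'w')
out.createDimension('x', len(temp))
v = out.createVariable('temp', 'f8', ('x',))
v[:] = temp
out.close()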
From rmay31 at gmail.com Wed Jun 18 20:20:30 2008 From: rmay31 at gmail.com (Ryan May) Date: Wed, 18 Jun 2008 20:20:30 -0400 Subject: [SciPy-user] Mlab doesn't work In-Reply-To: <20080618131729.GB10375@phare.normalesup.org> References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <20080613125150.GA3573@phare.normalesup.org> <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com> <20080613133323.GG3573@phare.normalesup.org> <9e8c52a20806180507u4e862eebt72d3eda59e42d9b2@mail.gmail.com> <20080618131729.GB10375@phare.normalesup.org> Message-ID: <4859A64E.2070206@gmail.com>

Gael Varoquaux wrote: > On Wed, Jun 18, 2008 at 02:07:08PM +0200, Alexander Borghgraef wrote: >> After commenting out nearly everything, it seemed that there was some >> incompatibility between mlab and pylab (which I use for 2D plotting). >> For example, this works fine: > >> from enthought.mayavi import mlab >> from scipy import lena >> mlab.surf(lena()) > >> But this results in the above mentioned error: > >> import pylab >> from enthought.mayavi import mlab >> from scipy import lena >> mlab.surf(lena()) > > Interesting. I was not aware of this problem, but it partly makes sense. > The problem is that pylab, as you are probably using it, uses the TK > toolkit, whereas mayavi uses Wx. Running both event loops at the same > time results in a nice segfault due to race conditions. > > The solution is to have pylab use the Wx event loop.
> You can do this by
> doing (before importing pylab):
> """
> import matplotlib
> matplotlib.use('WxAdd')

^^^
You mean:
matplotlib.use('WxAgg')

Ryan

-- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma

From gael.varoquaux at normalesup.org Wed Jun 18 20:26:11 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 19 Jun 2008 02:26:11 +0200 Subject: [SciPy-user] Mlab doesn't work In-Reply-To: <4859A64E.2070206@gmail.com> References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <20080613125150.GA3573@phare.normalesup.org> <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com> <20080613133323.GG3573@phare.normalesup.org> <9e8c52a20806180507u4e862eebt72d3eda59e42d9b2@mail.gmail.com> <20080618131729.GB10375@phare.normalesup.org> <4859A64E.2070206@gmail.com> Message-ID: <20080619002611.GA1641@phare.normalesup.org>

On Wed, Jun 18, 2008 at 08:20:30PM -0400, Ryan May wrote: > > The solution is to have pylab use the Wx event loop. You can do this by > > doing (before importing pylab): > > """ > > import matplotlib > > matplotlib.use('WxAdd') > ^^^ > You mean: > matplotlib.use('WxAgg')

Indeed, Thank you Ryan.

From alexander.borghgraef.rma at gmail.com Thu Jun 19 05:01:46 2008 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Thu, 19 Jun 2008 11:01:46 +0200 Subject: [SciPy-user] Mlab doesn't work In-Reply-To: <20080619002611.GA1641@phare.normalesup.org> References: <9e8c52a20806130516q2cab4081uaa2cce2ccc0b540f@mail.gmail.com> <4852671C.9080402@gmail.com> <9e8c52a20806130534x12a6c087jf486daca9a28c6e1@mail.gmail.com> <20080613125150.GA3573@phare.normalesup.org> <9e8c52a20806130627ib41e905s3c92a7c10fbf4834@mail.gmail.com> <20080613133323.GG3573@phare.normalesup.org> <9e8c52a20806180507u4e862eebt72d3eda59e42d9b2@mail.gmail.com> <20080618131729.GB10375@phare.normalesup.org> <4859A64E.2070206@gmail.com> <20080619002611.GA1641@phare.normalesup.org> Message-ID: <9e8c52a20806190201s3b72cbcbyc3bde4cab8628d9f@mail.gmail.com>

On Thu, Jun 19, 2008 at 2:26 AM, Gael Varoquaux wrote: > On Wed, Jun 18, 2008 at 08:20:30PM -0400, Ryan May wrote: >> You mean: >> matplotlib.use('WxAgg')

Thanks, that does the trick.

-- Alex Borghgraef

From fredmfp at gmail.com Fri Jun 20 09:30:29 2008 From: fredmfp at gmail.com (fred) Date: Fri, 20 Jun 2008 15:30:29 +0200 Subject: [SciPy-user] masked array & histogram... Message-ID: <485BB0F5.90204@gmail.com>

Hi, It seems that numpy.histogram does not work with masked arrays. Is there some workaround? TIA. Cheers,

-- Fred

From fredmfp at gmail.com Fri Jun 20 09:45:07 2008 From: fredmfp at gmail.com (fred) Date: Fri, 20 Jun 2008 15:45:07 +0200 Subject: [SciPy-user] masked array & histogram... In-Reply-To: <485BB0F5.90204@gmail.com> References: <485BB0F5.90204@gmail.com> Message-ID: <485BB463.1020600@gmail.com>

fred wrote: > Is there some workaround?

Yes. Convert them to NaN. Sorry. Cheers,

-- Fred
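A minimal sketch of that workaround on toy data (this assumes a recent numpy where the masked array module lives at numpy.ma):

import numpy as np

x = np.ma.masked_less(np.array([0.1, 0.5, -1.0, 0.7, 0.9]), 0.0)

# fred's route: replace the masked entries with NaN
xf = x.filled(np.nan)

# another route: drop the masked entries and histogram what is left
counts, edges = np.histogram(x.compressed())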
From y.copin at ipnl.in2p3.fr Fri Jun 20 12:44:32 2008 From: y.copin at ipnl.in2p3.fr (Yannick Copin) Date: Fri, 20 Jun 2008 18:44:32 +0200 Subject: [SciPy-user] fmin_tnc changes since scipy 0.6.0 Message-ID: <485BDE70.4020308@ipnl.in2p3.fr>

Hi, I'm a bit puzzled: I had code which worked fine using scipy.optimize.fmin_tnc with scipy 0.5.2. Unfortunately, since I upgraded scipy to 0.6.0, I hit some convergence issues...

Does it ring a bell to somebody? I understand I'm a bit vague for the moment (I did not look for a simple test case yet), but maybe somebody had this experience too. Besides the reverted output order, were there any significant algorithmic changes in fmin_tnc between 0.5.2 and 0.6.0?

Cheers. -- .~. Yannick COPIN (o:>* Doctus cum libro /V\ Institut de physique nucleaire de Lyon (IN2P3 - France) // \\ Tel: (33/0) 472 431 968 AIM: YnCopin ICQ: 236931013 /( )\ http://snovae.in2p3.fr/ycopin/ ^`~'^

From nwagner at iam.uni-stuttgart.de Fri Jun 20 12:49:00 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Jun 2008 18:49:00 +0200 Subject: [SciPy-user] fmin_tnc changes since scipy 0.6.0 In-Reply-To: <485BDE70.4020308@ipnl.in2p3.fr> References: <485BDE70.4020308@ipnl.in2p3.fr> Message-ID:

On Fri, 20 Jun 2008 18:44:32 +0200 Yannick Copin wrote: > Hi, > > I'm a bit puzzled: I had code which worked fine using > scipy.optimize.fmin_tnc with scipy 0.5.2. Unfortunately, since I upgraded > scipy to 0.6.0, I hit some convergence issues... > > Does it ring a bell to somebody? I understand I'm a bit vague for the moment > (I did not look for a simple test case yet), but maybe somebody had this > experience too. Besides the reverted output order, were there any significant > algorithmic changes in fmin_tnc between 0.5.2 and 0.6.0? > > Cheers. > [signature and list footer trimmed]

There is a new version of tnc (1.3) in svn

r2530 | jarrod.millman | 2007-01-11 07:19:21 +0100 (Do, 11 Jan 2007) | 2 lines
initial check-in of tnc version 1.3

r3187 | dmitrey.kroshko | 2007-07-24 16:13:43 +0200 (Di, 24 Jul 2007) | 2 lines
correct order of tnc return values + bugfix for tnc tests

Nils

From nwagner at iam.uni-stuttgart.de Fri Jun 20 12:59:55 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Jun 2008 18:59:55 +0200 Subject: [SciPy-user] fmin_tnc changes since scipy 0.6.0 In-Reply-To: <485BDE70.4020308@ipnl.in2p3.fr> References: <485BDE70.4020308@ipnl.in2p3.fr> Message-ID:

On Fri, 20 Jun 2008 18:44:32 +0200 Yannick Copin wrote: > Hi, > > I'm a bit puzzled: I had code which worked fine using > scipy.optimize.fmin_tnc with scipy 0.5.2. Unfortunately, since I upgraded > scipy to 0.6.0, I hit some convergence issues... > > Does it ring a bell to somebody? I understand I'm a bit vague for the moment > (I did not look for a simple test case yet), but maybe somebody had this > experience too. Besides the reverted output order, were there any significant > algorithmic changes in fmin_tnc between 0.5.2 and 0.6.0? > > Cheers. > [signature and list footer trimmed]

You might also be interested in http://www.jeannot.org/~js/code/index.en.html

BTW, Openopt provides an interface to tnc as well.
p.solve('scipy_tnc') Nils
From nwagner at iam.uni-stuttgart.de Fri Jun 20 13:01:28 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Jun 2008 19:01:28 +0200 Subject: [SciPy-user] fmin_tnc changes since scipy 0.6.0 In-Reply-To: <485BDE70.4020308@ipnl.in2p3.fr> References: <485BDE70.4020308@ipnl.in2p3.fr> Message-ID: On Fri, 20 Jun 2008 18:44:32 +0200 Yannick Copin wrote: > Hi, > > I'm a bit puzzled: I had a code which worked fine using > scipy.optimize.fmin_tnc with scipy 0.5.2. Unfortunately, >since I upgraded > scipy to 0.6.0, I hit some convergence issues... > > Does it ring a bell to somebody? I understand I'm a bit >vague for the moment > (I did not look for a simple test case yet), but maybe >somebody had this > experience too. Besides the reverted output order, was >there some significant > algorithmic changes in fmin_tnc between 0.5.2 and 0.6.0? > > Cheers. > -- > .~. Yannick COPIN (o:>* Doctus cum libro > /V\ Institut de physique nucleaire de Lyon (IN2P3 - >France) > // \\ Tel: (33/0) 472 431 968 AIM: YnCopin ICQ: >236931013 > /( )\ http://snovae.in2p3.fr/ycopin/ > ^`~'^ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Here is just another link http://openopt.blogspot.com/2007/08/tnc-connected-to-openopt.html Nils
From yannick.copin at laposte.net Fri Jun 20 15:26:48 2008 From: yannick.copin at laposte.net (Yannick Copin) Date: Fri, 20 Jun 2008 19:26:48 +0000 (UTC) Subject: [SciPy-user] fmin_tnc changes since scipy 0.6.0 References: <485BDE70.4020308@ipnl.in2p3.fr> Message-ID: Yannick Copin <y.copin at ipnl.in2p3.fr> writes: > I'm a bit puzzled: I had a code which worked fine using > scipy.optimize.fmin_tnc with scipy 0.5.2. Unfortunately, since I upgraded > scipy to 0.6.0, I hit some convergence issues... > > Does it ring a bell to somebody? I understand I'm a bit vague for the moment > (I did not look for a simple test case yet), but maybe somebody had this > experience too. Besides the reverted output order, was there some significant > algorithmic changes in fmin_tnc between 0.5.2 and 0.6.0? To complete my previous email, it seems my convergence problem is related to the new 'offset' argument of fmin_tnc, which allows fine-tuning of the parameter scaling. If I force these offsets to all be zero (instead of their actual defaults), I find results very similar to the scipy 0.5.2 ones. PS: I suspect the TNC algo within OpenOpt is actually the one from Scipy, right? So this would not solve my problem. Furthermore, I try to limit the number of external packages needed by my code, so I restrict myself to routines from numpy/scipy.
From rrothkop at MIT.EDU Fri Jun 20 16:43:12 2008 From: rrothkop at MIT.EDU (rrothkop at MIT.EDU) Date: Fri, 20 Jun 2008 16:43:12 -0400 Subject: [SciPy-user] Image Morphology Message-ID: <20080620164312.re0odtu9fl08cg08@webmail.mit.edu> Hello, I am looking for python equivalents for the MATLAB functions BWselect and BWmorph(skel). Can the outcomes of these functions be accomplished using scipy.ndimage?
Thanks, Rebecca From nwagner at iam.uni-stuttgart.de Fri Jun 20 16:56:19 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Jun 2008 22:56:19 +0200 Subject: [SciPy-user] fmin_tnc changes since scipy 0.6.0 In-Reply-To: References: <485BDE70.4020308@ipnl.in2p3.fr> Message-ID: On Fri, 20 Jun 2008 19:26:48 +0000 (UTC) Yannick Copin wrote: > Yannick Copin ipnl.in2p3.fr> writes: >> I'm a bit puzzled: I had a code which worked fine using >> scipy.optimize.fmin_tnc with scipy 0.5.2. Unfortunately, >>since I upgraded >> scipy to 0.6.0, I hit some convergence issues... >> >> Does it ring a bell to somebody? I understand I'm a bit >>vague for the moment >> (I did not look for a simple test case yet), but maybe >>somebody had this >> experience too. Besides the reverted output order, was >>there some significant >> algorithmic changes in fmin_tnc between 0.5.2 and 0.6.0? > > To complete my previous email, it seems my convergence >problems is related to > the new 'offsets' in fmin_tnc, which allows for >fine-tuning of parameter > scaling. If I force these offsets to be all null >(instead of their actual > defaults), I find results very similar to scipy 0.5.2 >ones. > > PS: I suspect the TNC algo within OpenOpt is actually >the one from Scipy, right? Dmitrey should reply here since he is the author of openopt. > So this would not solve my problem. Furthermore, I try >to limit the number of > external packages to be needed by my code, so I restrict >myself to routines from > numpy/scipy. > IMHO, one should compare the results obtained by different optimizations tools, wrt. efficiency, convergence behaviour, etc. openopt provides solvers for different optimizations tasks. and it's easy to install svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt cd openopt python setup.py install Nils From dmitrey.kroshko at scipy.org Fri Jun 20 17:42:23 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 21 Jun 2008 00:42:23 +0300 Subject: [SciPy-user] fmin_tnc changes since scipy 0.6.0 In-Reply-To: References: <485BDE70.4020308@ipnl.in2p3.fr> Message-ID: <485C243F.5030102@scipy.org> Nils Wagner wrote: > On Fri, 20 Jun 2008 19:26:48 +0000 (UTC) > Yannick Copin wrote: > >> Yannick Copin ipnl.in2p3.fr> writes: >> >>> I'm a bit puzzled: I had a code which worked fine using >>> scipy.optimize.fmin_tnc with scipy 0.5.2. Unfortunately, >>> since I upgraded >>> scipy to 0.6.0, I hit some convergence issues... >>> >>> Does it ring a bell to somebody? I understand I'm a bit >>> vague for the moment >>> (I did not look for a simple test case yet), but maybe >>> somebody had this >>> experience too. Besides the reverted output order, was >>> there some significant >>> algorithmic changes in fmin_tnc between 0.5.2 and 0.6.0? >>> >> To complete my previous email, it seems my convergence >> problems is related to >> the new 'offsets' in fmin_tnc, which allows for >> fine-tuning of parameter >> scaling. If I force these offsets to be all null >> (instead of their actual >> defaults), I find results very similar to scipy 0.5.2 >> ones. >> >> PS: I suspect the TNC algo within OpenOpt is actually >> the one from Scipy, right? >> > > Dmitrey should reply here since he is the author of > openopt. > yes, indeed, versions are same since tnc is called from scipy. > >> So this would not solve my problem. Furthermore, I try >> to limit the number of >> external packages to be needed by my code, so I restrict >> myself to routines from >> numpy/scipy. 
>> >> > > IMHO, one should compare the results obtained by different > optimizations tools, wrt. efficiency, convergence > behaviour, etc. > Maybe Yannick's needs are completely covered by the scipy solvers, in which case additional dependencies really are not appropriate here. First of all, I write openopt for classes of problems beyond what scipy can solve. D. > Nils >
From listservs at mac.com Fri Jun 20 19:51:25 2008 From: listservs at mac.com (Chris) Date: Fri, 20 Jun 2008 23:51:25 +0000 (UTC) Subject: [SciPy-user] weave: ambiguous overload for 'operator*' Message-ID: I have some pretty simple weave code that multiplies floats: /* Calculate new Q-value */ qval_raw += alpha * delta * traceval_raw; all 3 elements are floats -- I have checked this. Yet, I get the following error: ReinforcementLearning.py:461: error: ambiguous overload for 'operator*' in 'alpha * delta' I don't see anything ambiguous here -- any ideas? Thanks, cf
From brian.lewis17 at gmail.com Fri Jun 20 21:48:15 2008 From: brian.lewis17 at gmail.com (Brian Lewis) Date: Fri, 20 Jun 2008 18:48:15 -0700 Subject: [SciPy-user] Naive Question about Data Representations Message-ID: Sorry for my naivety. Can someone explain how it is possible to store more than 1 32-bit integer on a 32-bit system? I hope this question makes my confusion obvious.
From robert.kern at gmail.com Fri Jun 20 22:41:29 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 20 Jun 2008 21:41:29 -0500 Subject: [SciPy-user] Naive Question about Data Representations In-Reply-To: References: Message-ID: <3d375d730806201941r6f148834ubfc10a3c3fc3710d@mail.gmail.com> On Fri, Jun 20, 2008 at 20:48, Brian Lewis wrote: > Sorry for my naivety. > > Can someone explain how it is possible to store more than 1 32-bit integer > on a 32-bit system? I hope this question makes my confusion obvious. *That* you're confused, yes. *How* you are confused, no. Can you give us a little more context? What are you trying to do? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
From dwf at cs.toronto.edu Fri Jun 20 23:10:39 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 20 Jun 2008 23:10:39 -0400 Subject: [SciPy-user] Naive Question about Data Representations In-Reply-To: References: Message-ID: On 20-Jun-08, at 9:48 PM, Brian Lewis wrote: > Sorry for my naivety. > > Can someone explain how it is possible to store more than 1 32-bit > integer on a 32-bit system? I hope this question makes my > confusion obvious. If by "32-bit system" you mean "32-bit processor/operating system", this refers to the number of bits in a memory address, and by extension that the system can deal with 2^32 distinct memory addresses / locations (4 GB on a 32-bit system, though you can't usually use a full 4GB of physical memory since lots of addresses are reserved for use by the system for things like I/O devices). A 32-bit integer will occupy 4 bytes of system memory (8 bits per byte), and any valid memory address will have to be such a 32-bit integer, but you could store much, much bigger numbers by just occupying more bytes of memory. For example, the long long type in C occupies 8 bytes = 64 bits.
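A quick way to see these sizes from Python (an illustrative aside; the printed values assume a typical 32-bit build):

    import struct

    print struct.calcsize('i')   # C int: 4 bytes
    print struct.calcsize('q')   # C long long: 8 bytes
    print struct.calcsize('P')   # pointer: 4 bytes on a 32-bit build
    print 2**32 // 2**30         # 4, i.e. 4 GiB of distinct byte addresses
    print 2**32 // 4             # 2**30, the most 4-byte integers that fit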
While any given programming language may not support it directly, you could store an integer as big as your system's memory would allow, for example you could allocate 4 megabytes and use it to store a 33554432 bit number if you wanted, or 2 gigabytes and store a 17179869184-bit number. Working with it in your programs wouldn't be pleasant, but it would be possible. David From brian.lewis17 at gmail.com Sat Jun 21 00:47:07 2008 From: brian.lewis17 at gmail.com (Brian Lewis) Date: Fri, 20 Jun 2008 21:47:07 -0700 Subject: [SciPy-user] Naive Question about Data Representations In-Reply-To: References: Message-ID: On Fri, Jun 20, 2008 at 8:10 PM, David Warde-Farley wrote: > On 20-Jun-08, at 9:48 PM, Brian Lewis wrote: > > > Sorry for my naivety. > > > > Can someone explain how it is possible to store more than 1 32-bit > > integer on a 32-bit system? I hope this questions makes my > > confusion obvious. > > If by "32-bit system" you mean "32-bit processor/operating system", > this refers to the number of bits in a memory address, and by > extension that the system can deal with 2^32 distinct memory > addresses / locations (4 GB on a 32-bit system, though you can't > usually use a full 4GB of physical memory since lots of addresses are > reserved for use by the system for things like I/O devices). > How do we go from 2^32 addresses to 4 GiB? To make this jump, it seems we associate each address with 1 B (8 bits). Then 2^32 = 4 * 2^30 = 4 Gibi-addresses = 4 GibiBytes. I understand that now, but if we associated each address with something larger than 1 B, then the upper limit would be larger. So why does 1 address == 1 B? It seems that having a 32-bit processor/operating system, alone, is not the only constraint on why there is a 4 GiB upper bound. > > A 32-bit integer will occupy 4 bytes of system memory (8 bits per > byte), and any valid memory address will have to be such a 32-bit > integer, but you could store much, much bigger numbers by just > occupying more bytes of memory. For example, the long long type in C > occupies 8 bytes = 64 bits. > Now I understand that we can store 2^32 Bytes / 4 Bytes = 2^30 integers (upper bound). Previously, I would have said: "We have 2^32 locations, which is just 32 bits....each integer requires 32 bits....so we should only be able to store one integer". Obviously, I know this not to be the case, and the missing link was that each location corresponded to 1 B. But why? If we could associate each address with 2 Bytes, shouldn't the upper bound for a 32-bit system be 8 GiB instead? Relevance: I'm trying to understand the largest array (upper bounds) I can make on my system, and I am not so familiar with these things. >>> import struct; print struct.calcsize('P') 4 This means each pointer will take up 4 Bytes. So it is the size of an integer, and I should be able to store 2^30, 32-bit integers (on a sunny day). Approx: 4 GiB of RAM. Thanks again for your patience. Please correct my mistakes and if possible, shed light on why each address represents 1 B. -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Sat Jun 21 01:23:48 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 21 Jun 2008 01:23:48 -0400 Subject: [SciPy-user] Naive Question about Data Representations In-Reply-To: References: Message-ID: 2008/6/21 Brian Lewis : > How do we go from 2^32 addresses to 4 GiB? To make this jump, it seems we > associate each address with 1 B (8 bits). 
Then 2^32 = 4 * 2^30 = 4 > Gibi-addresses = 4 GibiBytes. I understand that now, but if we associated > each address with something larger than 1 B, then the upper limit would be > larger. So why does 1 address == 1 B? It seems that having a 32-bit > processor/operating system, alone, is not the only constraint on why there > is a 4 GiB upper bound. > Now I understand that we can store 2^32 Bytes / 4 Bytes = 2^30 integers > (upper bound). Previously, I would have said: "We have 2^32 locations, > which is just 32 bits....each integer requires 32 bits....so we should only > be able to store one integer". Obviously, I know this not to be the case, > and the missing link was that each location corresponded to 1 B. But why? > If we could associate each address with 2 Bytes, shouldn't the upper bound > for a 32-bit system be 8 GiB instead? > > Relevance: I'm trying to understand the largest array (upper bounds) I can > make on my system, and I am not so familiar with these things. > >>>> import struct; print struct.calcsize('P') > 4 > > This means each pointer will take up 4 Bytes. So it is the size of an > integer, and I should be able to store 2^30, 32-bit integers (on a sunny > day). Approx: 4 GiB of RAM. > > Thanks again for your patience. Please correct my mistakes and if > possible, shed light on why each address represents 1 B. It's a design decision. It's cumbersome to address memory on a finer granularity than your addresses allow (though it is possible), so many years ago computer designers settled on 8-bit bytes being the basic unit of memory (for PCs). To get at memory in units smaller than a byte you have to do bit-fiddling, which is slow and painful. Since bytes are a reasonably common unit - for strings, for example - having to do bit-fiddling to get at them would be a nuisance. That said, I think some specialized architectures, for example some DSPs, do exactly this, for efficiency. But since the only benefit would have been a factor of four in address space, it didn't seem worth it - especially as the decision was being made when a whole gigabyte of RAM was barely imaginable. Since any change would have meant breaking backward compatibility, nobody bothered until 64-bit addresses became available, at which point it seemed moot again. Anne From dwf at cs.toronto.edu Sat Jun 21 03:58:16 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sat, 21 Jun 2008 03:58:16 -0400 Subject: [SciPy-user] Naive Question about Data Representations In-Reply-To: References: Message-ID: On 21-Jun-08, at 12:47 AM, Brian Lewis wrote: > Now I understand that we can store 2^32 Bytes / 4 Bytes = 2^30 > integers (upper bound). Previously, I would have said: "We have > 2^32 locations, which is just 32 bits....each integer requires 32 > bits....so we should only be able to store one integer". Obviously, > I know this not to be the case, and the missing link was that each > location corresponded to 1 B. But why? If we could associate each > address with 2 Bytes, shouldn't the upper bound for a 32-bit system > be 8 GiB instead? Anne's reply is far more astute than I could ever manage; suffice it to say, for a variety of reasons (ASCII being one of them, I think), it's just The Way It Is on all modern CPUs, and changing it would require fabricating new chips and updating everything from the OS upward to work with the new addressing scheme. > Relevance: I'm trying to understand the largest array (upper bounds) > I can make on my system, and I am not so familiar with these things. 
> > >>> import struct; print struct.calcsize('P') > 4 > > This means each pointer will take up 4 Bytes. So it is the size of > an integer, and I should be able to store 2^30, 32-bit integers (on > a sunny day). Approx: 4 GiB of RAM. Yes, though keep in mind that on modern multi-tasking OSes, you've got some overhead from the kernel and the always-running services. Also there's virtual memory to think about, where all programs are tricked into thinking there's more memory than actually available, and the deficit is just handled by swapping out unused portions to disk (all transparent to the user). The address space limitation still holds though, in terms of the upper limit of what you can _address_ (even though the totality of the memory your program uses may not physically be in RAM all the time). David From contact at pythonxy.com Sat Jun 21 04:24:44 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 21 Jun 2008 10:24:44 +0200 Subject: [SciPy-user] Advanced Image Processing with Python In-Reply-To: References: Message-ID: <485CBACC.6050500@pythonxy.com> Hi all, I'm about to start a project at work and I would really appreciate to have your opinion. The purpose of this project is to develop a modular image and signal processing software (I would really like it to be open-source but it's far from easy at my work, so I'll see in time) with an "object-oriented GUI" (i.e. from the user point of view) and an interactive console. Image processing features will be quite advanced (not just filters and basic operations) and signal processing features will be basic (just what's needed to process image profiles: smoothing, curve fitting, and so on). I am sure that some of you are using advanced image processing tools in Python. So, according to your experience in this field, what would be the most complete, advanced and up-to-date Python module for this application? Apparently, VTK, OpenCV, and PIL (too basic?) are generally used in this field. Thanks Pierre From contact at pythonxy.com Sat Jun 21 06:09:36 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sat, 21 Jun 2008 12:09:36 +0200 Subject: [SciPy-user] [ Python(x,y) ] New release : 1.2.7 Message-ID: <485CD360.8000707@pythonxy.com> Hi all, Python(x,y) 1.2.7 is now available on http://www.pythonxy.com. From now on, two Python(x,y) versions will be released: > Python(x,y).exe : standard Windows installer (~250Mb) > Python(x,y)-Lite.exe : exactly the same installer but without Eclipse (~150Mb) - but please note that Eclipse (as other packages) is still optional in the standard installer Linux version: Thanks to a lot of discussions with some of you on Python(x,y) Google group or in private, we will distribute the linux version as a metapackage. Changes history 06 -21 -2008 - Version 1.2.7 : * Added: o VPython 4.beta26 : creating 3D interactive models of physical systems * Updated: o Pydev 1.3.18 o py2exe 0.6.8 (see release notes) o pywin32 2.11 (see release notes) o PySerial 2.3 (see release notes) Regards, Pierre Raybaut From zachary.pincus at yale.edu Sat Jun 21 09:53:28 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sat, 21 Jun 2008 09:53:28 -0400 Subject: [SciPy-user] Advanced Image Processing with Python In-Reply-To: <485CBACC.6050500@pythonxy.com> References: <485CBACC.6050500@pythonxy.com> Message-ID: Hello, > I am sure that some of you are using advanced image processing tools > in > Python. 
> So, according to your experience in this field, what would be the most > complete, advanced and up-to-date Python module for this application? > > Apparently, VTK, OpenCV, and PIL (too basic?) are generally used in > this > field. VTK is, as the name suggests, a visualization toolkit. It has some image-processing features, but by-and-large, they're the kind of thing you'd need for 2/3D image display (isocontouring, etc.). Its sister project, ITK, has more of the hard-core image-processing algorithms in general, and in particular, lots of tools for segmentation and registration. A while ago, I participated in getting some modern python wrappers for ITK working. I'm not sure what the state of this work is now, but hopefully there should be some decent wrappers. (It's a pain, because ITK is designed almost completely around C++ templates, which are instantiated at compile-time, so making a complete run-time library for a dynamic language is rather tricky.) There are some recent ctypes wrappers for OpenCV, but I haven't worked with those. OpenCV is sort of tricky because a lot of it was designed to be really low-level (so you could put it on stripped down or even embedded systems), but it's got good stuff. I wind up using scipy.ndimage a lot because it is a pretty convenient library and does a lot of the basics, and in particular, has nice (if slow-ish) resampling routines. Good luck, Zach From mconroy at uga.edu Sat Jun 21 12:54:17 2008 From: mconroy at uga.edu (Mike Conroy) Date: Sat, 21 Jun 2008 12:54:17 -0400 Subject: [SciPy-user] problems with python and numpy Message-ID: <485D3239.7050800@uga.edu> All- I am having some baffling conflicts between python and numpy that seem to have arisen in recent installation. These seem to center around the tempfile in the Python Lib directory. What happens is that when I open python in an directory other than c:\ or perhaps c:\python25, and then attempt to load numpy (or pymc which also loads numpy) I get: the following error from tempfile.py "from random import Random as _Random ImportError: cannot import name Random" Because things work fine in C:\ and some other directories, I am betting this is a path issue. Here is my path in case anyone sees the issue> PATH=C:\Python25\;c:\Python25\Lib;c:\program files\miktex2.5\miktex\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\system32\wbem;c:\python25;c:\python25\bin;c:\python24\enthought\mingw\bin;c:\python24\enthought\graphviz\bin;c:\program files\jedit;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Subversion\bin;C:\MinGW\bin;C:\Program Files\Intel\MKL\10.0.1.015\ia64\bin;C:\Program Files\Common Files\Intuit\QBPOSSDKRuntime I have installed the following version of python and packages: ActivePython-2.5.2.2-win32-x86 numpy-1.1.0-win32-superpack-python2.5 matplotlib-0.98.0.win32-py2.5. scipy-0.6.0.win32-p3-py2.5 and the latest svn-updated pymc2.0 Thanks for any help. Meantime things work but what a pain to be stuck in one director M -- Dr. Michael J. Conroy Adjunct Professor and Assistant Unit Leader Georgia Cooperative Fish and Wildlife Research Unit Warnell School of Forestry and Natural Resources University of Georgia Athens, GA 30602 USA Off. 3-427 Forestry Building tel. 
+706-542-1167 fax +706-542-8356 Unit web page http://coopunit.forestry.uga.edu/unit_homepage My web page http://coopunit.forestry.uga.edu/unit_homepage/Conroy
From stefan at sun.ac.za Sat Jun 21 13:01:21 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Sat, 21 Jun 2008 19:01:21 +0200 Subject: [SciPy-user] Advanced Image Processing with Python In-Reply-To: <485CBACC.6050500@pythonxy.com> References: <485CBACC.6050500@pythonxy.com> Message-ID: <9457e7c80806211001q7e9db52fs93ecf8dac31b5d4a@mail.gmail.com> 2008/6/21 Pierre Raybaut : > The purpose of this project is to develop a modular image and signal > processing software (I would really like it to be open-source but it's > far from easy at my work, so I'll see in time) with an "object-oriented > GUI" (i.e. from the user point of view) and an interactive console. Traits is a fantastic way to build an object-oriented GUI. > Image processing features will be quite advanced (not just filters and > basic operations) and signal processing features will be basic (just > what's needed to process image profiles: smoothing, curve fitting, and > so on). Do stay in contact with the list, because many of us have implemented some of these features on our own. If you release your work under a free license, I'm sure you'll have no trouble finding contributors. Enthought has set a stellar example of how to open software whilst running a business. Regards Stéfan
From Scott.Daniels at Acm.Org Sat Jun 21 15:54:06 2008 From: Scott.Daniels at Acm.Org (Scott David Daniels) Date: Sat, 21 Jun 2008 12:54:06 -0700 Subject: [SciPy-user] problems with python and numpy In-Reply-To: <485D3239.7050800@uga.edu> References: <485D3239.7050800@uga.edu> Message-ID: Mike Conroy wrote: > ... when I open python in an directory other than c:\ or perhaps c:\python25, > and then attempt to load numpy (or pymc which also loads numpy) I get: > the following error from tempfile.py > "from random import Random as _Random ImportError: cannot import name > Random" > ... Here's how to debug this: Run python -v (You'll get a lot of output -- -v shows every import). Then to the prompt type "import numpy". Somewhere near the end you will find enough information about where things started going wrong. -Scott
From haase at msg.ucsf.edu Sat Jun 21 16:38:40 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sat, 21 Jun 2008 22:38:40 +0200 Subject: [SciPy-user] Advanced Image Processing with Python In-Reply-To: <485CBACC.6050500@pythonxy.com> References: <485CBACC.6050500@pythonxy.com> Message-ID: On Sat, Jun 21, 2008 at 10:24 AM, Pierre Raybaut wrote: > Hi all, > > I'm about to start a project at work and I would really appreciate to > have your opinion. > > The purpose of this project is to develop a modular image and signal > processing software (I would really like it to be open-source but it's > far from easy at my work, so I'll see in time) with an "object-oriented > GUI" (i.e. from the user point of view) and an interactive console. > Image processing features will be quite advanced (not just filters and > basic operations) and signal processing features will be basic (just > what's needed to process image profiles: smoothing, curve fitting, and > so on). > > I am sure that some of you are using advanced image processing tools in > Python. > So, according to your experience in this field, what would be the most > complete, advanced and up-to-date Python module for this application? > > Apparently, VTK, OpenCV, and PIL (too basic?)
are generally used in this > field. > Hi Pierre, You failed to say what you mean by "image" -- 2D or 3D or maybe even sequences of the above ... Color or gray-scale or multi-wavelength !? How would your software compare to existing ones -- like ImageJ ? - Sebastian Haase From robert.kern at gmail.com Sat Jun 21 19:39:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 21 Jun 2008 18:39:30 -0500 Subject: [SciPy-user] problems with python and numpy In-Reply-To: <485D3239.7050800@uga.edu> References: <485D3239.7050800@uga.edu> Message-ID: <3d375d730806211639v6ad576ecn8d3126e11b3a1df7@mail.gmail.com> On Sat, Jun 21, 2008 at 11:54, Mike Conroy wrote: > All- I am having some baffling conflicts between python and numpy that > seem to have arisen in recent installation. These seem to center around > the tempfile in the Python Lib > directory. > > What happens is that when I open python in an directory other than c:\ > or perhaps c:\python25, and then attempt to load numpy (or pymc which > also loads numpy) I get: > the following error from tempfile.py > "from random import Random as _Random ImportError: cannot import name > Random" > > Because things work fine in C:\ and some other directories, I am betting > this is a path issue. Here is my path in case anyone sees the issue> You almost certainly have a module named random.py in the directories where you see the problem. Python looks in the current directory for modules to import before standard directories like c:\Python25\Lib where the desired random.py module is. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zachary.pincus at yale.edu Sat Jun 21 22:56:59 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sat, 21 Jun 2008 22:56:59 -0400 Subject: [SciPy-user] Image Morphology In-Reply-To: <20080620164312.re0odtu9fl08cg08@webmail.mit.edu> References: <20080620164312.re0odtu9fl08cg08@webmail.mit.edu> Message-ID: Hi Rebecca, > I am looking for python equivalents for the matlab functions > BWselect and > BWmorph(skel). Can the outcomes of these functions be accomplished > using > scipy.ndimage? Could you describe those operations in a bit more detail? ndimage probably doesn't have any direct equivalent, but since many of the more-complex morphological operations are built out of simpler ones (ndimage has the full complement of "building block" morphological operators, more or less), it might be possible to get the same functionality out of scipy pretty easily. Zach From zhangchipr at gmail.com Sun Jun 22 00:58:09 2008 From: zhangchipr at gmail.com (zhang chi) Date: Sun, 22 Jun 2008 12:58:09 +0800 Subject: [SciPy-user] why my fft2 can't work? Message-ID: <90c482ab0806212158p480316f4ne9263c849b4a9c77@mail.gmail.com> from numpy import * from scipy import fftpack a = ones((64,64)) b = fftpack.fft2(a) the system display: Process Python ??? (core dumped) why? thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangchipr at gmail.com Sun Jun 22 03:56:58 2008 From: zhangchipr at gmail.com (zhang chi) Date: Sun, 22 Jun 2008 15:56:58 +0800 Subject: [SciPy-user] How to realize inverse DCT transformation by using scipy Message-ID: <90c482ab0806220056l66986450wac3f0ee183703395@mail.gmail.com> hi How to realize inverse DCT transformation by using scipy? Is there a function to realize this transformation in scipy? 
Thank you very much -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at pythonxy.com Sun Jun 22 16:40:03 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sun, 22 Jun 2008 22:40:03 +0200 Subject: [SciPy-user] Advanced Image Processing with Python In-Reply-To: References: Message-ID: <485EB8A3.9040400@pythonxy.com> > >> > Hi all, >> > >> > I'm about to start a project at work and I would really appreciate to >> > have your opinion. >> > >> > The purpose of this project is to develop a modular image and signal >> > processing software (I would really like it to be open-source but it's >> > far from easy at my work, so I'll see in time) with an "object-oriented >> > GUI" (i.e. from the user point of view) and an interactive console. >> > Image processing features will be quite advanced (not just filters and >> > basic operations) and signal processing features will be basic (just >> > what's needed to process image profiles: smoothing, curve fitting, and >> > so on). >> > >> > I am sure that some of you are using advanced image processing tools in >> > Python. >> > So, according to your experience in this field, what would be the most >> > complete, advanced and up-to-date Python module for this application? >> > >> > Apparently, VTK, OpenCV, and PIL (too basic?) are generally used in this >> > field. >> > >> > > Hi Pierre, > You failed to say what you mean by "image" -- 2D or 3D or maybe even > sequences of the above ... > Color or gray-scale or multi-wavelength !? > > How would your software compare to existing ones -- like ImageJ ? > > - Sebastian Haase You're right, I've been intentionally vague, but I guess I could be more specific anyway. I used the word "advanced" to illustrate the fact that it would be a scientific image processing tool, and not just a simple software to apply basic operations (maybe it was an unnecessary precaution since this is the *Sci*Py user list...). In fact, the "advanced" image processing would be based on some of our own algorithms which I will not explain here (and could never be a part of the open-source project if there is one). The rest of what I'm looking for is basically what you can find in MATLAB image processing toolbox, and some shape recognition algorithms eventually. The main idea would be to develop a first software with commonly used image processing tools (basic ones: filters, geometric transformations, basic operations, histograms, profile extraction, fft, ...), and optional modules to extend its features. This software will be used to process mainly gray-scale (and less often color) images, that can be stored in 2D data arrays (as well as 1D signals, but that's not a problem with Numpy/Scipy). So what I'm looking for would be an image processing library (and one only if possible) to handle this, and of course I'd prefer it to be widely used, still developed and maintained, with a great future ahead... the perfect library! I didn't know ImageJ, it seems interesting, I'll have to take a closer look at it. 
Thanks for your help Pierre From rmay31 at gmail.com Sun Jun 22 17:45:38 2008 From: rmay31 at gmail.com (Ryan May) Date: Sun, 22 Jun 2008 17:45:38 -0400 Subject: [SciPy-user] Help with 2D interpolation Message-ID: <485EC802.3040105@gmail.com> Hi, Can anyone explain why this won't work: import numpy as np import scipy.interpolate as interp from scipy.interpolate.fitpack2 import SmoothBivariateSpline import matplotlib.pyplot as plt x = np.linspace(-10, 10, 25) y = np.linspace(-15, 15, 50) X,Y = np.meshgrid(x,y) Z = 1.5 * X**2 + 0.5 * Y**2 #tck = interp.bisplrep(X, Y, Z) #z = interp.bisplev(X.flatten(), Y.flatten(), tck) lut = SmoothBivariateSpline(X.ravel(), Y.ravel(), Z.ravel()) z = lut(X.ravel(), Y.ravel()) fig = plt.figure() ax1 = fig.add_subplot(1,2,1) ax1.pcolor(X,Y,Z) ax1.set_title('Original') ax2 = fig.add_subplot(1,2,2) ax2.pcolor(X,Y,z.reshape(Z.shape)) ax2.set_title('Interpolated') plt.show() I get this error with scipy 0.6: --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) /home/rmay/pypkg/scattering/test/test_interp.py in () 14 15 lut = SmoothBivariateSpline(X.ravel(), Y.ravel(), Z.ravel()) ---> 16 z = lut(X.ravel(), Y.ravel()) 17 18 fig = plt.figure() /usr/lib64/python2.5/site-packages/scipy/interpolate/fitpack2.py in __call__(self, x, y, mth) 350 kx,ky = self.degrees 351 z,ier = dfitpack.bispev(tx,ty,c,kx,ky,x,y) --> 352 assert ier==0,'Invalid input: ier='+`ier` 353 return z 354 raise NotImplementedError Any takers? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From zachary.pincus at yale.edu Sun Jun 22 18:10:14 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Sun, 22 Jun 2008 18:10:14 -0400 Subject: [SciPy-user] Advanced Image Processing with Python In-Reply-To: <485EB8A3.9040400@pythonxy.com> References: <485EB8A3.9040400@pythonxy.com> Message-ID: <66DC272E-5676-4C7A-9CC8-68CB6CC45B73@yale.edu> > The rest of what I'm looking for is basically what you > can find in MATLAB image processing toolbox, and some shape > recognition > algorithms eventually. > > The main idea would be to develop a first software with commonly used > image processing tools (basic ones: filters, geometric > transformations, > basic operations, histograms, profile extraction, fft, ...), and > optional modules to extend its features. > > This software will be used to process mainly gray-scale (and less > often > color) images, that can be stored in 2D data arrays (as well as 1D > signals, but that's not a problem with Numpy/Scipy). So what I'm > looking > for would be an image processing library (and one only if possible) to > handle this, and of course I'd prefer it to be widely used, still > developed and maintained, with a great future ahead... the perfect > library! I think that OpenCV (with the ctypes python bindings) and scipy.ndimage should probably be pretty good candidates here... Zach From jjh at 42quarks.com Mon Jun 23 01:59:53 2008 From: jjh at 42quarks.com (Jonathan Hunt) Date: Mon, 23 Jun 2008 15:59:53 +1000 Subject: [SciPy-user] Test failure on OS X Leopard Message-ID: Hi, I installed NumPy (1.1) and SciPy (0.6) both from source on OS X (Leopard 10.5.3) . I removed existing numpy/scipy libraries. Both install with no errors. And numpy.test() passes all tests. 
However, running: >>> import scipy >>> scipy.test(); fails with: ....................................................................................................................warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations .....warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ............................................................................................................................ ====================================================================== FAIL: check_dot (scipy.lib.blas.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Python/2.5/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 2.1796638438433325e-36j DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1574 tests in 5.679s FAILED (failures=1) Is this bad? A problem? It seems to be related to LAPACK/BLAS but on install SciPy said it detected LAPACK/BLAST. Any help appreciated. Thanks, Jonny Full test output is appended below: Failed importing scipy.linsolve.umfpack: 'module' object has no attribute 'umfpack' Found 9/9 tests for scipy.cluster.tests.test_vq Found 18/18 tests for scipy.fftpack.tests.test_basic Found 4/4 tests for scipy.fftpack.tests.test_helper Found 20/20 tests for scipy.fftpack.tests.test_pseudo_diffs Found 1/1 tests for scipy.integrate.tests.test_integrate Found 10/10 tests for scipy.integrate.tests.test_quadpack Found 3/3 tests for scipy.integrate.tests.test_quadrature Found 6/6 tests for scipy.tests.test_fitpack Found 6/6 tests for scipy.tests.test_interpolate Found 4/4 tests for scipy.io.tests.test_array_import Found 28/28 tests for scipy.io.tests.test_mio Found 13/13 tests for scipy.io.tests.test_mmio Found 5/5 tests for scipy.io.tests.test_npfile Found 4/4 tests for scipy.io.tests.test_recaster Found 16/16 tests for scipy.lib.blas.tests.test_blas Found 128/128 tests for scipy.lib.blas.tests.test_fblas **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. 
**************************************************************** Found 42/42 tests for scipy.lib.lapack.tests.test_lapack Found 41/41 tests for scipy.linalg.tests.test_basic Found 16/16 tests for scipy.linalg.tests.test_blas Found 72/72 tests for scipy.linalg.tests.test_decomp Found 128/128 tests for scipy.linalg.tests.test_fblas Found 6/6 tests for scipy.linalg.tests.test_iterative Found 4/4 tests for scipy.linalg.tests.test_lapack Found 7/7 tests for scipy.linalg.tests.test_matfuncs Failed importing /Library/Python/2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py: 'module' object has no attribute 'umfpack' Found 2/2 tests for scipy.maxentropy.tests.test_maxentropy Failed importing /Library/Python/2.5/site-packages/scipy/misc/tests/test_pilutil.py: No module named PIL.Image Found 399/399 tests for scipy.ndimage.tests.test_ndimage Found 5/5 tests for scipy.odr.tests.test_odr Found 1/1 tests for scipy.optimize.tests.test_cobyla Found 10/10 tests for scipy.optimize.tests.test_nonlin Found 8/8 tests for scipy.optimize.tests.test_optimize Found 4/4 tests for scipy.optimize.tests.test_zeros Found 5/5 tests for scipy.signal.tests.test_signaltools Found 4/4 tests for scipy.signal.tests.test_wavelets Found 152/152 tests for scipy.sparse.tests.test_sparse Found 342/342 tests for scipy.special.tests.test_basic Found 3/3 tests for scipy.special.tests.test_spfun_stats Found 73/73 tests for scipy.stats.tests.test_distributions Found 10/10 tests for scipy.stats.tests.test_morestats Found 107/107 tests for scipy.stats.tests.test_stats Found 1/1 tests for scipy.weave.tests.test_ast_tools Found 2/2 tests for scipy.weave.tests.test_blitz_tools Found 9/9 tests for scipy.weave.tests.test_build_tools Found 0/0 tests for scipy.weave.tests.test_c_spec Found 26/26 tests for scipy.weave.tests.test_catalog building extensions here: /Users/uqjhunt2/.python25_compiled/m5 Found 1/1 tests for scipy.weave.tests.test_ext_tools Found 0/0 tests for scipy.weave.tests.test_inline_tools Found 0/0 tests for scipy.weave.tests.test_scxx_dict Found 0/0 tests for scipy.weave.tests.test_scxx_object Found 0/0 tests for scipy.weave.tests.test_scxx_sequence Found 74/74 tests for scipy.weave.tests.test_size_check Found 16/16 tests for scipy.weave.tests.test_slice_handler Found 3/3 tests for scipy.weave.tests.test_standard_array_spec Found 0/0 tests for scipy.weave.tests.test_wx_spec .../Library/Python/2.5/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ................................................Residual: 1.05006987366e-07 ............../Library/Python/2.5/site-packages/scipy/interpolate/fitpack2.py:458: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ............ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. 
..............................................................F..........caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..................................................................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ...........................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .......... **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ...Result may be inaccurate, approximate err = 1.23518201169e-08 ...Result may be inaccurate, approximate err = 7.27595761418e-12 ............................................................................................................/Library/Python/2.5/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' .....................................................................................................................................................................................................................................................................................................................................................................Use minimum degree ordering on A'+A. .....................................Use minimum degree ordering on A'+A. .....................................Use minimum degree ordering on A'+A. ................................Use minimum degree ordering on A'+A. ....................................................................................................................................................................................................................................................................................................................................................0.2 0.2 0.2 ......0.2 ..0.2 0.2 0.2 0.2 0.2 ..............................................................................................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. 
......................./Library/Python/2.5/site-packages/numpy/lib/function_base.py:166: FutureWarning: The semantics of histogram will be modified in release 1.2 to improve outlier handling. The new behavior can be obtained using new=True. Note that the new version accepts/returns the bin edges instead of the left bin edges. Please read the docstring for more information. Please read the docstring for more information.""", FutureWarning) /Library/Python/2.5/site-packages/numpy/lib/function_base.py:181: FutureWarning: Outliers handling will change in version 1.2. Please read the docstring for details. Please read the docstring for details.""", FutureWarning) .............................................................................................warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations .....warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ............................................................................................................................ ====================================================================== FAIL: check_dot (scipy.lib.blas.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Python/2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Python/2.5/site-packages/numpy/testing/utils.py", line 158, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 8.5864323499206074e-37j DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1848 tests in 6.136s FAILED (failures=1) >>> -- Jonathan J Hunt Homepage: http://www.42quarks.net.nz/wiki/JJH (Further contact details there) "Physics isn't the most important thing. Love is." Richard Feynman From david at ar.media.kyoto-u.ac.jp Mon Jun 23 08:02:00 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 23 Jun 2008 21:02:00 +0900 Subject: [SciPy-user] Test failure on OS X Leopard In-Reply-To: References: Message-ID: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> Jonathan Hunt wrote: > Is this bad? A problem? It seems to be related to LAPACK/BLAS but on > install SciPy said it detected LAPACK/BLAST. > Yes, this is potentially bad. It is more than likely caused by a buggy interaction between C and Fortran. Which version of Mac OS X are you using, and which fortran compiler are you using ? A full build log (from scratch, that is after having removed build directory) would be useful, cheers, David From jjh at 42quarks.com Mon Jun 23 08:58:13 2008 From: jjh at 42quarks.com (Jonathan Hunt) Date: Mon, 23 Jun 2008 22:58:13 +1000 Subject: [SciPy-user] Test failure on OS X Leopard In-Reply-To: References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> Message-ID: Hi, Build log is attached. I am running OS X 10.5.3 with gfortran 4.2.1 and gcc 4.0.1 (and python 2.5). Thanks for any help. Jonny > On Mon, Jun 23, 2008 at 10:02 PM, David Cournapeau > wrote: >> Jonathan Hunt wrote: >>> Is this bad? A problem? It seems to be related to LAPACK/BLAS but on >>> install SciPy said it detected LAPACK/BLAST. >>> >> >> Yes, this is potentially bad. It is more than likely caused by a buggy >> interaction between C and Fortran. Which version of Mac OS X are you >> using, and which fortran compiler are you using ? 
A full build log (from >> scratch, that is after having removed build directory) would be useful, > > -- > Jonathan J Hunt > Homepage: http://www.42quarks.net.nz/wiki/JJH > (Further contact details there) > "Physics isn't the most important thing. Love is." Richard Feynman > -- Jonathan J Hunt Homepage: http://www.42quarks.net.nz/wiki/JJH (Further contact details there) "Physics isn't the most important thing. Love is." Richard Feynman -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log.bz2 Type: application/x-bzip2 Size: 25562 bytes Desc: not available URL:
From piscinero at gmail.com Mon Jun 23 09:29:16 2008 From: piscinero at gmail.com (AndresGM) Date: Mon, 23 Jun 2008 06:29:16 -0700 (PDT) Subject: [SciPy-user] Test failure on OS X Leopard In-Reply-To: References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> Message-ID: <5f89059f-2266-47d8-bb7b-bfc34259ae8e@w7g2000hsa.googlegroups.com> I'm getting a different error when I try to run the tests. I installed numpy 1.1.0 from source and tried SciPy, both 0.6 and the SVN trunk. Numpy builds without any problems and all the tests pass. Scipy builds but when trying to run the tests I get: import scipy >>> scipy.test(1,10) Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/testing/nosetester.py", line 133, in test nose = import_nose() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/site-packages/scipy/testing/nosetester.py", line 25, in import_nose raise ImportError('Need nose >=0.10 for tests - see ' ImportError: Need nose >=0.10 for tests - see http://somethingaboutorange.com/mrl/projects/nose >>> I'm running python 2.5.2 from python.org and gfortran 4.2.3. Any help is appreciated. Thanks, Andrés On 23 jun, 07:58, "Jonathan Hunt" wrote: > Hi, > > Build log is attached. I am running OS X 10.5.3 with gfortran 4.2.1 > and gcc 4.0.1 (and python 2.5). > > Thanks for any help. > Jonny > > > > > On Mon, Jun 23, 2008 at 10:02 PM, David Cournapeau > > wrote: > >> Jonathan Hunt wrote: > >>> Is this bad? A problem? It seems to be related to LAPACK/BLAS but on > >>> install SciPy said it detected LAPACK/BLAST. > > >> Yes, this is potentially bad. It is more than likely caused by a buggy > >> interaction between C and Fortran. Which version of Mac OS X are you > >> using, and which fortran compiler are you using ? A full build log (from > >> scratch, that is after having removed build directory) would be useful, > > > -- > > Jonathan J Hunt > > Homepage:http://www.42quarks.net.nz/wiki/JJH > > (Further contact details there) > > "Physics isn't the most important thing. Love is." Richard Feynman > > -- > Jonathan J Hunt > Homepage:http://www.42quarks.net.nz/wiki/JJH > (Further contact details there) > "Physics isn't the most important thing. Love is." Richard Feynman > > _______________________________________________ > SciPy-user mailing list > SciPy-user
at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user
From david at ar.media.kyoto-u.ac.jp Mon Jun 23 09:29:55 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 23 Jun 2008 22:29:55 +0900 Subject: [SciPy-user] Test failure on OS X Leopard In-Reply-To: <5f89059f-2266-47d8-bb7b-bfc34259ae8e@w7g2000hsa.googlegroups.com> References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> <5f89059f-2266-47d8-bb7b-bfc34259ae8e@w7g2000hsa.googlegroups.com> Message-ID: <485FA553.2040109@ar.media.kyoto-u.ac.jp> AndresGM wrote: > > ImportError: Need nose >=0.10 for tests - see http://somethingaboutorange.com/mrl/projects/nose > What about installing nose :) Starting from 0.7 for scipy and 1.2 for numpy (meaning svn trunk as well for both packages), nose is a hard dependency. cheers, David
From david at ar.media.kyoto-u.ac.jp Mon Jun 23 10:00:58 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 23 Jun 2008 23:00:58 +0900 Subject: [SciPy-user] Test failure on OS X Leopard In-Reply-To: References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> Message-ID: <485FAC9A.9060508@ar.media.kyoto-u.ac.jp> Jonathan Hunt wrote: > Hi, > > Build log is attached. I am running OS X 10.5.3 with gfortran 4.2.1 > and gcc 4.0.1 (and python 2.5). > Thanks, as I expected, a problem with fortran/C wrapping. There is a problem with the Accelerate framework, for which we have a workaround. For some reason, the workaround is not set up in your case; I have Leopard 10.5.3 too, and I don't have this problem (the workaround is used in my case). Even stranger, the workaround is used once in your case; it fails only for scipy.lib.blas. Since this module is not used anywhere in scipy yet, it is not so problematic (on the contrary, having this problem in scipy.linalg would have been problematic). If you are willing to help us track this down, could you tell us what info['extra_link_flags'] contains in the function needs_cblas_wrapper, in scipy/lib/blas/setup.py (line 30)? It should contain -Wl,Accelerate, as in the build.log, but I don't quite see how the regex could fail... thanks, David
From twaite at berkeley.edu Mon Jun 23 10:41:02 2008 From: twaite at berkeley.edu (Tom Waite) Date: Mon, 23 Jun 2008 07:41:02 -0700 Subject: [SciPy-user] Image Morphology In-Reply-To: <20080620164312.re0odtu9fl08cg08@webmail.mit.edu> References: <20080620164312.re0odtu9fl08cg08@webmail.mit.edu> Message-ID: I wrote a package called segmenter (under development in ndimage) that has a thinning method. It will operate on either a single binary image or with my connected components methods to build connected regions and then thin each region of interest, using the automatically derived bounding box of each ROI. I need to clean up the code to complete it. I typically pre-filter the images to prevent spurs and I had planned to add more advanced methods like Zhang-Suen that deal with this (which may be what Matlab does). On Fri, Jun 20, 2008 at 1:43 PM, wrote: > Hello, > I am looking for python equivalents for the MATLAB functions BWselect and > BWmorph(skel). Can the outcomes of these functions be accomplished using > scipy.ndimage? > Thanks, > Rebecca > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user
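On the ndimage side, both MATLAB calls can be approximated from the basic building blocks; a rough sketch (this is not the segmenter code Tom describes: the Lantuejoul skeleton below is a crude stand-in for BWmorph(skel), and the toy disc and seed pixel are invented for illustration):

    import numpy as np
    from scipy import ndimage

    def skeleton(img):
        # Lantuejoul morphological skeleton: union over n of the
        # n-fold erosion minus its opening
        img = img.astype(bool)
        skel = np.zeros(img.shape, bool)
        eroded = img
        while eroded.any():
            opened = ndimage.binary_opening(eroded)
            skel |= eroded & ~opened
            eroded = ndimage.binary_erosion(eroded)
        return skel

    disc = np.hypot(*(np.indices((41, 41)) - 20)) < 15   # toy binary disc

    # BWselect-like: keep the connected object overlapping a seed pixel
    lbl, n = ndimage.label(disc)
    seed = (20, 20)
    assert lbl[seed] != 0            # seed must lie on an object
    selected = (lbl == lbl[seed])

    thin = skeleton(disc)            # BWmorph(skel)-like result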
From rrothkop at MIT.EDU Mon Jun 23 11:41:49 2008 From: rrothkop at MIT.EDU (rrothkop at MIT.EDU) Date: Mon, 23 Jun 2008 11:41:49 -0400 Subject: [SciPy-user] Image Morphology Message-ID: <20080623114149.h22ww9augvksoog4@webmail.mit.edu> Hello, Thank you for your responses. To explain the functions further, BWselect returns a binary image containing objects that overlap a given pixel and BWmorph(skel) thins the edges of an image. It looks like Tom's package contains an equivalent. I understand the package is not complete, but is the thinning function working? Thanks, Rebecca
From nmb at wartburg.edu Mon Jun 23 13:32:26 2008 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Mon, 23 Jun 2008 17:32:26 +0000 (UTC) Subject: [SciPy-user] Test failure on OS X Leopard References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> <5f89059f-2266-47d8-bb7b-bfc34259ae8e@w7g2000hsa.googlegroups.com> <485FA553.2040109@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau <david at ar.media.kyoto-u.ac.jp> writes: > > AndresGM wrote: > > > > ImportError: Need nose >=0.10 for tests - see http://somethingaboutorange.com/mrl/projects/nose > > > > What about installing nose :) Starting from 0.7 for scipy and 1.2 for > numpy (meaning svn trunk as well for both packages), nose is a hard > dependency. To be crystal-clear, nose is *not* a dependency for running anything in scipy or numpy, nor will this ever be the case. It *is* a hard dependency for running the numpy and scipy test suites as of the current SVN versions of both scipy and numpy. (Scipy made the change before numpy but they will eventually converge in the next releases.) -Neil
From twaite at berkeley.edu Mon Jun 23 16:04:13 2008 From: twaite at berkeley.edu (Tom Waite) Date: Mon, 23 Jun 2008 13:04:13 -0700 Subject: [SciPy-user] Image Morphology In-Reply-To: <20080623114149.h22ww9augvksoog4@webmail.mit.edu> References: <20080623114149.h22ww9augvksoog4@webmail.mit.edu> Message-ID: Yes it is. I have a test_segmenter.py script that shows its use. I build 4 simple discs and get the Sobel edge "band". The thinning filter (I refer to the method as mat-filter for medial axis) returns the single-pixel wide edge and is shown to be comparable to the Canny edge filter which I also provide. This will use the ROI when generating the thin edges. On Mon, Jun 23, 2008 at 8:41 AM, wrote: > Hello, > Thank you for your responses. To explain the functions further, BWselect > returns > a binary image containing objects that overlap a given pixel and > BWmorph(skel) > thins the edges of an image. It looks like Tom's package contains an > equivalent. I understand the package is not complete, but is the thinning > function working? > Thanks, > Rebecca > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user
From anand.prabhakar.patil at gmail.com Mon Jun 23 18:12:56 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Mon, 23 Jun 2008 23:12:56 +0100 Subject: [SciPy-user] Are cf2py threadsafe and cf2py inplace incompatible? Message-ID: Hi all, I'm trying to write a multithreaded function where each thread fills in a portion of an array. I'm on a MacBook Pro using Python 2.5 and the numpy subversion head.
The following program produces intermittent seg faults:

---------------------
from f2py_test import test
from numpy import *
from threading import Thread

D = empty((60,60))

n_threads = 4
threads = []
for i in xrange(n_threads):
    new_thread = Thread(target=test, args=(D,))
    threads.append(new_thread)
    new_thread.start()

for thread in threads:
    thread.join()
-----------------

where f2py_test.f is just

-----------------
      SUBROUTINE test(D,nx)

cf2py intent(inplace) D
cf2py intent(hide) nx
cf2py threadsafe

      DOUBLE PRECISION D(nx,nx)
      INTEGER nx

!     Writing to D would happen here.
!     Each thread would get some of the columns.

      RETURN
      END
------------------

Is there any way I can make this safe without computing the pieces independently and copying them in serially?

Thanks very much,
Anand

The relevant bits of the crash report are

Process:         Python [6796]
Path:            /Library/Frameworks/Python.framework/Versions/2.5/Resources/Python.app/Contents/MacOS/Python
Identifier:      Python
Version:         ??? (???)
Code Type:       X86 (Native)
Parent Process:  bash [2950]
Date/Time:       2008-06-23 22:58:21.777 +0100
OS Version:      Mac OS X 10.5.3 (9D34)
Report Version:  6

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x00000000039cc148
Crashed Thread:  3

...

Thread 3 Crashed:
0   multiarray.so      0x011c135b _strided_byte_copy + 427 (arrayobject.c:350)
1   multiarray.so      0x011f071d _copy_from_same_shape + 525 (arrayobject.c:978)
2   multiarray.so      0x012086eb _array_copy_into + 1195 (arrayobject.c:1134)
3   distances.so       0x031d0453 array_from_pyobj + 2179 (fortranobject.c:660)
4   distances.so       0x031cd5aa f2py_rout_distances_test + 170 (distancesmodule.c:233)
5   distances.so       0x031cee54 fortran_call + 68 (fortranobject.c:323)
6   org.python.python  0x0019b5d2 PyObject_Call + 50
7   org.python.python  0x0022a9e4 PyEval_EvalFrameEx + 15492
8   org.python.python  0x0022d445 PyEval_EvalFrameEx + 26341
9   org.python.python  0x0022d445 PyEval_EvalFrameEx + 26341
10  org.python.python  0x0022dba5 PyEval_EvalCodeEx + 1845
11  org.python.python  0x001bf4ce function_call + 446
12  org.python.python  0x0019b5d2 PyObject_Call + 50
13  org.python.python  0x001a3732 instancemethod_call + 354
14  org.python.python  0x0019b5d2 PyObject_Call + 50
15  org.python.python  0x00225c26 PyEval_CallObjectWithKeywords + 118
16  org.python.python  0x0026277f t_bootstrap + 63
17  libSystem.B.dylib  0x93ff06f5 _pthread_start + 321
18  libSystem.B.dylib  0x93ff05b2 thread_start + 34

...

Thread 3 crashed with X86 Thread State (32-bit):
  eax: 0x00000000  ebx: 0x011c11bb  ecx: 0x039cc148  edx: 0x00000000
  edi: 0x00000021  esi: 0x03c39008  ebp: 0xb0184098  esp: 0xb0184080
   ss: 0x0000001f  efl: 0x00010293  eip: 0x011c135b   cs: 0x00000017
   ds: 0x0000001f   es: 0x0000001f   fs: 0x0000001f   gs: 0x00000037
  cr2: 0x039cc148

From anand.prabhakar.patil at gmail.com  Mon Jun 23 18:25:30 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Mon, 23 Jun 2008 23:25:30 +0100
Subject: [SciPy-user] Are cf2py threadsafe and cf2py inplace incompatible?
In-Reply-To:
References:
Message-ID: <2bc7a5a50806231525t241042c9s3533b5e66607ae9f@mail.gmail.com>

Sorry about the five copies of the last message I apparently just sent. I'll stop using Apple Mail.
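[Editor's note: one detail worth checking with intent(inplace) crashes like the one above (a guess at this point in the thread; Anand reaches a similar diagnosis a few messages later): f2py can hand a buffer straight to Fortran only if the array is already in Fortran (column-major) order, otherwise it has to copy. A minimal sketch of allocating the shared array so no copy is needed:

import numpy as np

# Allocate in Fortran (column-major) order so f2py can pass the buffer
# to the wrapped subroutine without making a hidden temporary copy.
D = np.empty((60, 60), order='F')
assert D.flags['F_CONTIGUOUS']
]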
From haase at msg.ucsf.edu  Mon Jun 23 18:51:19 2008
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Tue, 24 Jun 2008 00:51:19 +0200
Subject: [SciPy-user] Image Morphology
In-Reply-To:
References: <20080623114149.h22ww9augvksoog4@webmail.mit.edu>
Message-ID:

Hey Tom,
when you say """a package called segmenter (under development in ndimage)""", what do you mean by """under development in ndimage"""?
Is your code already available (for public download)? Under a free license?

Thanks,
Sebastian Haase

On Mon, Jun 23, 2008 at 10:04 PM, Tom Waite wrote:
> Yes, it is. I have a test_segmenter.py script that shows its use. I build 4
> simple discs and get the Sobel edge "band". The thinning filter (I refer to
> the method as mat-filter, for medial axis) returns the single-pixel-wide
> edge and is shown to be comparable to the Canny edge filter, which I also
> provide. It will use the ROI when generating the thin edges.
>
> On Mon, Jun 23, 2008 at 8:41 AM, wrote:
>>
>> Hello,
>> Thank you for your responses. To explain the functions further: BWselect
>> returns a binary image containing the objects that overlap a given pixel,
>> and BWmorph(skel) thins the edges of an image. It looks like Tom's package
>> contains an equivalent. I understand the package is not complete, but is
>> the thinning function working?
>> Thanks,
>> Rebecca

From robert.kern at gmail.com  Mon Jun 23 18:57:51 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 23 Jun 2008 17:57:51 -0500
Subject: [SciPy-user] Image Morphology
In-Reply-To:
References: <20080623114149.h22ww9augvksoog4@webmail.mit.edu>
Message-ID: <3d375d730806231557r3afd01ddse760e4a8fe3b9e6@mail.gmail.com>

On Mon, Jun 23, 2008 at 17:51, Sebastian Haase wrote:
> Hey Tom,
> when you say """a package called segmenter (under development in ndimage)"""
> what do you mean by """under development in ndimage"""?

from scipy.ndimage import _segmenter

> Is your code already available (for public download)?

scipy SVN.

> Under a free license?

scipy license.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From jjh at 42quarks.com  Mon Jun 23 19:59:55 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Tue, 24 Jun 2008 09:59:55 +1000
Subject: [SciPy-user] Test failure on OS X Leopard
In-Reply-To: <485FAC9A.9060508@ar.media.kyoto-u.ac.jp>
References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> <485FAC9A.9060508@ar.media.kyoto-u.ac.jp>
Message-ID:

Hi David,

I'm happy to help; SciPy/NumPy are excellent software that I'm pleased to be able to support (in a small way). Btw, this problem occurred on both my laptop (MacBook Pro, Core 2 Duo) and my desktop (Mac Pro, 4 cores), which are running 10.5.3 and 10.5.2 respectively.

I tried to answer your question, so I opened /Library/Python/2.5/site-packages/scipy/lib/blas/setup.py, but I couldn't find the function you were referring to. I have attached the file in case I'm missing something. Am I looking in the wrong place? Would you like me to update to the SVN version of SciPy and see if that fixes the problem?

Also, if I understand your message, the errors in this test function won't cause problems when I actually use SciPy? Is that correct?

Thanks,
Jonny

On Tue, Jun 24, 2008 at 12:00 AM, David Cournapeau wrote:
> Jonathan Hunt wrote:
>> Hi,
>>
>> Build log is attached. I am running OS X 10.5.3 with gfortran 4.2.1
>> and gcc 4.0.1 (and python 2.5).
>>
>
> Thanks. As I expected, this is a problem with the Fortran/C wrapping. There is a
> problem with the Accelerate framework, for which we have a workaround.
> For some reason, the workaround is not set up in your case; I have
> Leopard 10.5.3 too, and I don't have this problem (the workaround is
> used in my case).
>
> Even stranger, the workaround is applied once in your build; it just does
> not work for scipy.lib.blas. Since this module is not used anywhere in
> scipy yet, it is not much of a problem (having the same issue in
> scipy.linalg, on the other hand, would have been).
>
> If you are willing to help us solve this bug, could you tell us what
> info['extra_link_flags'] contains in the function
> needs_cblas_wrapper, in scipy.lib.blas.setup.py (line 30)? It should
> contain -Wl,Accelerate, as in the build.log, but I don't quite see how
> the regex could fail...
>
> thanks,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman

-------------- next part --------------
A non-text attachment was scrubbed...
Name: setup.py
Type: application/octet-stream
Size: 3383 bytes
Desc: not available

From david at ar.media.kyoto-u.ac.jp  Mon Jun 23 22:28:18 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 24 Jun 2008 11:28:18 +0900
Subject: [SciPy-user] Test failure on OS X Leopard
In-Reply-To:
References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> <5f89059f-2266-47d8-bb7b-bfc34259ae8e@w7g2000hsa.googlegroups.com> <485FA553.2040109@ar.media.kyoto-u.ac.jp>
Message-ID: <48605BC2.5030900@ar.media.kyoto-u.ac.jp>

Neil Martinsen-Burrell wrote:
>
> To be crystal-clear, nose is *not* a dependency for running anything in scipy or
> numpy, nor will this ever be the case. It *is* a hard dependency for running
> the numpy and scipy test suites as of the current SVN versions of both
> packages. (Scipy made the change before numpy, but they will converge in
> the next releases.)

Ah yes, sorry for the confusion: I should have said it is a hard dependency for *tests* only.

cheers,

David

From david at ar.media.kyoto-u.ac.jp  Mon Jun 23 22:48:23 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 24 Jun 2008 11:48:23 +0900
Subject: [SciPy-user] Test failure on OS X Leopard
In-Reply-To:
References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> <485FAC9A.9060508@ar.media.kyoto-u.ac.jp>
Message-ID: <48606077.8040007@ar.media.kyoto-u.ac.jp>

Jonathan Hunt wrote:
> I tried to answer your question, so I opened
> /Library/Python/2.5/site-packages/scipy/lib/blas/setup.py, but I
> couldn't find the function you were referring to. I have attached the
> file in case I'm missing something. Am I looking in the wrong place?

Hi Jonathan,

You solved the problem :) I forgot that you were using 0.6, in which the fix for this problem was not applied everywhere. I should have thought of this...

> Also, if I understand your message, the errors in this test function
> won't cause problems when I actually use SciPy? Is that correct?

Yes.
Unless you use scipy.lib.blas directly, no code inside scipy uses it; scipy will always use the code from scipy.linalg, which does have the fix even in 0.6.

cheers,

David

From jjh at 42quarks.com  Mon Jun 23 23:07:03 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Tue, 24 Jun 2008 13:07:03 +1000
Subject: [SciPy-user] Test failure on OS X Leopard
In-Reply-To: <48606077.8040007@ar.media.kyoto-u.ac.jp>
References: <485F90B8.7080701@ar.media.kyoto-u.ac.jp> <485FAC9A.9060508@ar.media.kyoto-u.ac.jp> <48606077.8040007@ar.media.kyoto-u.ac.jp>
Message-ID:

Hi,

Sorry for the hassle. Thanks for that.

Jonny

On Tue, Jun 24, 2008 at 12:48 PM, David Cournapeau wrote:
> Jonathan Hunt wrote:
>> I tried to answer your question, so I opened
>> /Library/Python/2.5/site-packages/scipy/lib/blas/setup.py, but I
>> couldn't find the function you were referring to. I have attached the
>> file in case I'm missing something. Am I looking in the wrong place?
>
> Hi Jonathan,
>
> You solved the problem :) I forgot that you were using 0.6, in which
> the fix for this problem was not applied everywhere. I should have
> thought of this...
>> Also, if I understand your message, the errors in this test function
>> won't cause problems when I actually use SciPy? Is that correct?
>
> Yes. Unless you use scipy.lib.blas directly, no code inside scipy uses
> it; scipy will always use the code from scipy.linalg, which does have
> the fix even in 0.6.
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman

From jjh at 42quarks.com  Mon Jun 23 23:56:42 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Tue, 24 Jun 2008 13:56:42 +1000
Subject: [SciPy-user] Reading MATLAB 7.4 (HDF5 based) .mat files
Message-ID:

Hi,

The current SciPy libraries for reading MATLAB .mat files only support the v6 and v7.1 formats. Just wondering if anyone is currently working on adding support for 7.4 (HDF5-based) .mat files.

Thanks,
Jonny

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman

From peridot.faceted at gmail.com  Tue Jun 24 00:01:56 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 24 Jun 2008 00:01:56 -0400
Subject: [SciPy-user] Reading MATLAB 7.4 (HDF5 based) .mat files
In-Reply-To:
References:
Message-ID:

2008/6/23 Jonathan Hunt :
> The current SciPy libraries for reading MATLAB .mat files only support
> the v6 and v7.1 formats. Just wondering if anyone is currently working on
> adding support for 7.4 (HDF5-based) .mat files.

My last use of MATLAB was some ten years ago, so I haven't tried it, but "pytables" is a general-purpose HDF5 reader, so presumably it should be a fairly convenient way to get at MATLAB files.

Anne

From david at ar.media.kyoto-u.ac.jp  Mon Jun 23 23:47:38 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 24 Jun 2008 12:47:38 +0900
Subject: [SciPy-user] Reading MATLAB 7.4 (HDF5 based) .mat files
In-Reply-To:
References:
Message-ID: <48606E5A.1070307@ar.media.kyoto-u.ac.jp>

Jonathan Hunt wrote:
> Hi,
>
> The current SciPy libraries for reading MATLAB .mat files only support
> the v6 and v7.1 formats.
> Just wondering if anyone is currently working on
> adding support for 7.4 (HDF5-based) .mat files.

For HDF5 files, pytables is your best bet, I think: it uses HDF5 as its implementation file format, and I have used it to exchange data between matlab and python (at a time when matlab support for HDF5 was really poor; I would guess that since they now base their new format on it, it is much better).

http://www.pytables.org/moin

cheers,

David

From grs2103 at columbia.edu  Tue Jun 24 00:36:40 2008
From: grs2103 at columbia.edu (Gideon Simpson)
Date: Tue, 24 Jun 2008 00:36:40 -0400
Subject: [SciPy-user] sinc function
Message-ID: <98DBB071-77BF-44E6-A50A-CE375D4F0C40@columbia.edu>

Why is sinc not in scipy.special? Is it implemented in another python package?

-gideon

From robert.kern at gmail.com  Tue Jun 24 00:42:55 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 23 Jun 2008 23:42:55 -0500
Subject: [SciPy-user] sinc function
In-Reply-To: <98DBB071-77BF-44E6-A50A-CE375D4F0C40@columbia.edu>
References: <98DBB071-77BF-44E6-A50A-CE375D4F0C40@columbia.edu>
Message-ID: <3d375d730806232142i3ad51a0nf17c9cd51c2fe5c0@mail.gmail.com>

On Mon, Jun 23, 2008 at 23:36, Gideon Simpson wrote:
> Why is sinc not in scipy.special? Is it implemented in another python
> package?

numpy.sinc()

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From anand.prabhakar.patil at gmail.com  Tue Jun 24 04:33:07 2008
From: anand.prabhakar.patil at gmail.com (Anand Patil)
Date: Tue, 24 Jun 2008 09:33:07 +0100
Subject: [SciPy-user] Are cf2py threadsafe and cf2py inplace incompatible?
In-Reply-To:
References:
Message-ID: <2bc7a5a50806240133j699f2606k659e4470f2985191@mail.gmail.com>

On Mon, Jun 23, 2008 at 11:12 PM, Anand Patil <anand.prabhakar.patil at gmail.com> wrote:

> Hi all,
>
> I'm trying to write a multithreaded function where each thread fills in a
> portion of an array. I'm on a MacBook Pro using Python 2.5 and the numpy
> subversion head. The following program produces intermittent seg faults:

Well, good: it looks like only one copy got through. Also, I realized what the problem is: the new empty array is row-major, so f2py has to make a column-major copy of it to pass into Fortran, then copy the results into the original array to make it seem like the Fortran function is changing the array in place.

Cheers,
Anand

From lorenzo.isella at gmail.com  Tue Jun 24 09:12:34 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Tue, 24 Jun 2008 15:12:34 +0200
Subject: [SciPy-user] Again on array manipulation
Message-ID:

Dear All,
This should be quick: say that you have e.g. the following two scipy arrays:

a=[15 23 44 78|11 77| 33 45 89| 56 99| 12 654 81]

b=[10 10 10 10|40 40| 60 60 60| 22 22| 17 17 17]

(the vertical bar is just a guide for the eye).
No entry appears twice in a, and no entry appears only once in b. I would like to chop off all the elements appearing more than twice in b and get rid of the corresponding elements in a, thus ending up with:

a=[|11 77| 56 99|]

b=[|40 40| 22 22|]

What is the easiest way to achieve this?
Cheers,
Lorenzo

From contact at pythonxy.com  Tue Jun 24 09:51:07 2008
From: contact at pythonxy.com (Pierre Raybaut)
Date: Tue, 24 Jun 2008 15:51:07 +0200
Subject: [SciPy-user] Advanced Image Processing with Python
Message-ID: <629b08a40806240651n49f6d208v3b7bbc4024d4dfa5@mail.gmail.com>

> Date: Sun, 22 Jun 2008 18:10:14 -0400
> From: Zachary Pincus
> Subject: Re: [SciPy-user] Advanced Image Processing with Python
> To: SciPy Users List
> Message-ID: <66DC272E-5676-4C7A-9CC8-68CB6CC45B73 at yale.edu>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
>> The rest of what I'm looking for is basically what you can find in the
>> MATLAB image processing toolbox, and eventually some shape recognition
>> algorithms.
>>
>> The main idea would be to develop a first piece of software with commonly
>> used image processing tools (basic ones: filters, geometric
>> transformations, basic operations, histograms, profile extraction,
>> fft, ...), and optional modules to extend its features.
>>
>> This software will be used to process mainly gray-scale (and less often
>> color) images that can be stored in 2D data arrays (as well as 1D signals,
>> but that's not a problem with Numpy/Scipy). So what I'm looking for would
>> be an image processing library (and one only, if possible) to handle this,
>> and of course I'd prefer it to be widely used, still developed and
>> maintained, with a great future ahead... the perfect library!
>
> I think that OpenCV (with the ctypes python bindings) and
> scipy.ndimage should probably be pretty good candidates here...
>
> Zach

Hi Zach,

Among all Python image processing libraries, ITK appears to be the most promising (thanks to WrapITK - as you know, Zach, of course: http://voxel.jouy.inra.fr/darcs/contrib-itk/WrapITK/WrapITK_-_Enhanced_languages_support_for_the_Insight_Toolkit.pdf). But for Windows users (and I am one, unfortunately), ITK seems difficult to build compared to VTK, for instance (and it's a very long process: 3 hours on my machine, which is quite powerful). I've already tried and failed with MinGW and VSC2003.

It is really weird for such an interesting library to be almost closed to Windows users: there are no binaries available for Python 2.5 (actually there are none for VTK either, but at least it's easy to build). And when I see what it can do (at least on paper), it surprises me that no one has built it on Windows before (and shared the binaries with the community), except for Python 2.4 (http://cpbotha.net/2007/08/02/python-enabled-vtk-51-and-itk-32-windows-binaries/). Is there any reason for that? (Is this library not as good as it seems?)

Anyway, if I succeed, I will be glad to share the result with the community.

Pierre

From wnbell at gmail.com  Tue Jun 24 10:06:41 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 24 Jun 2008 09:06:41 -0500
Subject: [SciPy-user] Again on array manipulation
In-Reply-To:
References:
Message-ID:

On Tue, Jun 24, 2008 at 8:12 AM, Lorenzo Isella wrote:
>
> a=[15 23 44 78|11 77| 33 45 89| 56 99| 12 654 81]
>
> b=[10 10 10 10|40 40| 60 60 60| 22 22| 17 17 17]
>
> (the vertical bar is just a guide for the eye).
> No entry appears twice in a, and no entry appears only once in b.
> I would like to chop off all the elements appearing more than twice in b
> and get rid of the corresponding elements in a, thus ending up with:

If b consists of only small integers, then I'd use bincount():

In [15]: from scipy import *
In [16]: a = array([15, 23, 44, 78, 11, 77, 33, 45, 89, 56, 99, 12, 654, 81])
In [17]: b = array([10, 10, 10, 10, 40, 40, 60, 60, 60, 22, 22, 17, 17, 17])
In [18]: mask = (bincount(b) <= 2)[b]
In [19]: print a[mask]
[11 77 56 99]
In [20]: print b[mask]
[40 40 22 22]

Otherwise, it depends on whether the repeated elements of b are contiguous or not.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From zachary.pincus at yale.edu  Tue Jun 24 10:11:16 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 24 Jun 2008 10:11:16 -0400
Subject: [SciPy-user] Advanced Image Processing with Python
In-Reply-To: <629b08a40806240651n49f6d208v3b7bbc4024d4dfa5@mail.gmail.com>
References: <629b08a40806240651n49f6d208v3b7bbc4024d4dfa5@mail.gmail.com>
Message-ID:

> Hi Zach,
>
> Among all Python image processing libraries, ITK appears to be the
> most promising (thanks to WrapITK - as you know, Zach, of course:
> http://voxel.jouy.inra.fr/darcs/contrib-itk/WrapITK/WrapITK_-_Enhanced_languages_support_for_the_Insight_Toolkit.pdf).
> But for Windows users (and I am one, unfortunately), ITK seems
> difficult to build compared to VTK, for instance (and it's a very
> long process: 3 hours on my machine, which is quite powerful). I've
> already tried and failed with MinGW and VSC2003.
> It is really weird for such an interesting library to be almost
> closed to Windows users: there are no binaries available for Python
> 2.5 (actually there are none for VTK either, but at least it's easy to
> build). And when I see what it can do (at least on paper), it
> surprises me that no one has built it on Windows before (and
> shared the binaries with the community), except for Python 2.4
> (http://cpbotha.net/2007/08/02/python-enabled-vtk-51-and-itk-32-windows-binaries/).
> Is there any reason for that? (Is this library not as good as it seems?)
>
> Anyway, if I succeed, I will be glad to share the result with the
> community.

Hi Pierre,

I'd advise asking for help on the ITK list -- they take quite seriously their commitment to cross-platform build-ability (to the extent of having built that CMake tool precisely to that end).

As for build time, that's another issue. Basically, ITK is (to a frightening degree) driven by C++ template programming, which really strains compilers. You can speed the build time for WrapITK (when you get it working) by turning off the template instantiations for all but a few pixel types (say, uint8, uint16, and float32) and image dimensions (only 2D, say). Build time will be nearly linear in the product of pixel types and image dimensions...

Now, it's been a while since I was involved with WrapITK, but last I checked, the ITK people were pretty enthusiastic about getting it integrated into ITK and fully supported. Hopefully this is still the case! (Though coaxing the CMake build system into performing the magic required to run everything was non-trivial, due to limitations in the CMake language, so perhaps it has been difficult to maintain.)
Good luck,
Zach

From contact at pythonxy.com  Tue Jun 24 14:26:18 2008
From: contact at pythonxy.com (Pierre Raybaut)
Date: Tue, 24 Jun 2008 20:26:18 +0200
Subject: [SciPy-user] Advanced Image Processing with Python
In-Reply-To:
References:
Message-ID: <48613C4A.7070905@pythonxy.com>

>> Hi Zach,
>>
>> Among all Python image processing libraries, ITK appears to be the
>> most promising (thanks to WrapITK - as you know, Zach, of course:
>> http://voxel.jouy.inra.fr/darcs/contrib-itk/WrapITK/WrapITK_-_Enhanced_languages_support_for_the_Insight_Toolkit.pdf).
>> But for Windows users (and I am one, unfortunately), ITK seems
>> difficult to build compared to VTK, for instance (and it's a very
>> long process: 3 hours on my machine, which is quite powerful). I've
>> already tried and failed with MinGW and VSC2003.
>> It is really weird for such an interesting library to be almost
>> closed to Windows users: there are no binaries available for Python
>> 2.5 (actually there are none for VTK either, but at least it's easy to
>> build). And when I see what it can do (at least on paper), it
>> surprises me that no one has built it on Windows before (and
>> shared the binaries with the community), except for Python 2.4
>> (http://cpbotha.net/2007/08/02/python-enabled-vtk-51-and-itk-32-windows-binaries/).
>> Is there any reason for that? (Is this library not as good as it seems?)
>>
>> Anyway, if I succeed, I will be glad to share the result with the
>> community.
>
> Hi Pierre,
>
> I'd advise asking for help on the ITK list -- they take quite
> seriously their commitment to cross-platform build-ability (to the
> extent of having built that CMake tool precisely to that end).

I'm giving it a try with VSC2005 Express, and if it goes wrong I'll switch to the ITK list for support; good suggestion indeed.

> Now, it's been a while since I was involved with WrapITK, but last I
> checked, the ITK people were pretty enthusiastic about getting it
> integrated into ITK and fully supported. Hopefully this is still the
> case! (Though coaxing the CMake build system into performing the magic
> required to run everything was non-trivial, due to limitations in the
> CMake language, so perhaps it has been difficult to maintain.)

Apparently, WrapITK is now a part of the ITK package, but there is still a warning in CMake when you enable it (something like "WrapITK is still experimental [...] some problems have been reported when building on Windows platforms [...]"). I had problems which were in fact limitations of MinGW and VSC2003, so at this stage I can't really say that the CMake configuration does not work. I'll see tomorrow morning, when I get back to work, whether it succeeded with VSC2005...

> Good luck,

Hope I won't need it ;)

Thanks,
Pierre

From zachary.pincus at yale.edu  Tue Jun 24 15:33:41 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 24 Jun 2008 15:33:41 -0400
Subject: [SciPy-user] simple PDE tools?
Message-ID: <2E41AD1B-5EF8-426A-99F3-B99ECCAB4EA3@yale.edu>

Hello all,

I'm trying to solve an optimization problem that might be best approached by formulating it as a "physical" system of interacting particles (certain of which have attractions to and repulsions from certain others). I could then write an energy term for the system in a given state, and a term for the forces on each particle in that state.

Are there any simple PDE tools available that I could use to time-step something like this until convergence?
I've looked into FiPy, which looks like it's more set up for continuum modeling (like level sets, heat flow, etc.) than just time-integrating the forces on a set of particles. I could also write an integrator myself, but there are of course sufficiently many gotchas there that I'd rather not.

Thanks,
Zach

From robert.kern at gmail.com  Tue Jun 24 15:38:50 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 24 Jun 2008 14:38:50 -0500
Subject: [SciPy-user] simple PDE tools?
In-Reply-To: <2E41AD1B-5EF8-426A-99F3-B99ECCAB4EA3@yale.edu>
References: <2E41AD1B-5EF8-426A-99F3-B99ECCAB4EA3@yale.edu>
Message-ID: <3d375d730806241238k69136397gb875bf7038eb8c06@mail.gmail.com>

On Tue, Jun 24, 2008 at 14:33, Zachary Pincus wrote:
> Hello all,
>
> I'm trying to solve an optimization problem that might be best
> approached by formulating it as a "physical" system of interacting
> particles (certain of which have attractions to and repulsions from
> certain others). I could then write an energy term for the system in a
> given state, and a term for the forces on each particle in that state.
>
> Are there any simple PDE tools available that I could use to time-step
> something like this until convergence? I've looked into FiPy, which
> looks like it's more set up for continuum modeling (like level sets,
> heat flow, etc.) than just time-integrating the forces on a set of
> particles.

Wouldn't that be an ODE, then?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From zachary.pincus at yale.edu  Tue Jun 24 16:13:46 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 24 Jun 2008 16:13:46 -0400
Subject: [SciPy-user] simple PDE tools?
In-Reply-To: <3d375d730806241238k69136397gb875bf7038eb8c06@mail.gmail.com>
References: <2E41AD1B-5EF8-426A-99F3-B99ECCAB4EA3@yale.edu> <3d375d730806241238k69136397gb875bf7038eb8c06@mail.gmail.com>
Message-ID: <80CABC80-D50B-4B7D-A631-C126531852EA@yale.edu>

Robert Kern:
> Wouldn't that be an ODE, then?

Hmm, it would appear that you are correct. (As usual!) I had somehow gotten myself confused into thinking that the problem wasn't an ODE one...

Sorry for the noise, and thanks,
Zach

From zachary.pincus at yale.edu  Tue Jun 24 19:45:10 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 24 Jun 2008 19:45:10 -0400
Subject: [SciPy-user] integrate ODE to steady-state?
Message-ID: <042BF501-C3C2-451D-A0D3-0A3B762E3930@yale.edu>

Hi all,

So, after a brief bout of the stupids (thanks Robert), I have formulated my optimization problem as a physical system governed by an ODE, and I wish to find the equilibrium configuration of the system.

Any thoughts on what the easiest way to do this with scipy.integrate is? Ideally, I'd just want the solver to take as large steps as possible until things converge, and so I don't really care about the "time" values. One option would be to use odeint and just tell it to integrate to a distant time-point when I'm sure things will be in equilibrium, but that seems dorky, wasteful, and potentially incorrect.

Alternatively, I could use the ode class and keep asking it to integrate small time-steps until the RMS change drops below a threshold. There, still, I'd need to choose a reasonable time-step, and also the inner loop would be in python instead of fortran.

Any recommendations? (Or am I again being daft?
I never really took a class in numerical methods, so sorry for the dim-bulb questions!)

Zach

From wnbell at gmail.com  Tue Jun 24 20:51:57 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 24 Jun 2008 19:51:57 -0500
Subject: [SciPy-user] integrate ODE to steady-state?
In-Reply-To: <042BF501-C3C2-451D-A0D3-0A3B762E3930@yale.edu>
References: <042BF501-C3C2-451D-A0D3-0A3B762E3930@yale.edu>
Message-ID:

On Tue, Jun 24, 2008 at 6:45 PM, Zachary Pincus wrote:
>
> Any thoughts on what the easiest way to do this with scipy.integrate
> is? Ideally, I'd just want the solver to take as large steps as
> possible until things converge, and so I don't really care about the
> "time" values. One option would be to use odeint and just tell it to
> integrate to a distant time-point when I'm sure things will be in
> equilibrium, but that seems dorky, wasteful, and potentially incorrect.

Regarding numerical methods, you should read about the "Forward Euler" integrator, which is very simple and intuitive. For applications where accuracy is unimportant and a reasonable time step is known a priori, Forward Euler is sufficient. Also, explicit ODE integrators (like Forward Euler and Runge-Kutta) are generally cheap, so the dominant cost of the integration is usually computing the time derivatives for each variable.

You are right that you'll want to take as-large-as-possible timesteps when integrating through fictitious time. However, even though the timescale is arbitrary, the allowable timestep is limited by both the accuracy of the trajectory (probably not a concern in your case) and the stability of the system. In short, there's no free lunch :)

I don't know what interface scipy's ode solvers support, but I would suggest setting t_final to be a large value and instructing the solver to use at most N iterations. I would use an "adaptive" method like RK45 (or whatever scipy supports), which automatically finds a timestep that achieves numerical stability and the desired accuracy.

Alternatively, you could use your second proposal, and terminate relaxation when the functional is sufficiently small.

BTW, are you using a Lennard-Jones-like potential to distribute points? I did something similar to distribute points evenly over a mesh surface using a simple integrator:
http://graphics.cs.uiuc.edu/~wnbell/publications/2005-05-SCA-Granular/BeYiMu2005.pdf

An interesting alternative is the CVT:
http://www.math.psu.edu/qdu/Res/Pic/gallery3.html

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From rob.clewley at gmail.com  Tue Jun 24 20:55:42 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Tue, 24 Jun 2008 20:55:42 -0400
Subject: [SciPy-user] integrate ODE to steady-state?
In-Reply-To: <042BF501-C3C2-451D-A0D3-0A3B762E3930@yale.edu>
References: <042BF501-C3C2-451D-A0D3-0A3B762E3930@yale.edu>
Message-ID:

Zach,

It somewhat depends on your equations, but in principle you don't need to actually integrate the ODEs at all. An equilibrium corresponds to no motion, i.e. when the right-hand sides of all the ODEs are zero-valued. So for a system dx/dt = f(x), where x is a vector, you just need to solve f(x) = 0. This may have multiple solutions, of course, if f is nonlinear. There's no scipy code that will do all of this for you, but fsolve will do in most cases. It's best to have a good initial guess as a starting condition for the search. So, integration can help you find that.
If f is strongly nonlinear, there could be many equilibria, and your task will be to identify which initial conditions lead to which equilibria. This is not a trivial task - you may need an exhaustive search of initial conditions (e.g. based on sampling your phase space of x) to get started. I have some naive code that pre-samples the phase space and then uses fsolve; it works OK on some nonlinear problems. It's the find_fixedpoints function in PyDSTool, but it's extremely easy to remove the PyDSTool dependence :)

If this approach turns out to be too numerically problematic, you'll just have to go back to integrating for long times...

Rob

On Tue, Jun 24, 2008 at 7:45 PM, Zachary Pincus wrote:
> Hi all,
>
> So, after a brief bout of the stupids (thanks Robert), I have
> formulated my optimization problem as a physical system governed by an
> ODE, and I wish to find the equilibrium configuration of the system.
>
> Any thoughts on what the easiest way to do this with scipy.integrate
> is? Ideally, I'd just want the solver to take as large steps as
> possible until things converge, and so I don't really care about the
> "time" values. One option would be to use odeint and just tell it to
> integrate to a distant time-point when I'm sure things will be in
> equilibrium, but that seems dorky, wasteful, and potentially incorrect.
>
> Alternatively, I could use the ode class and keep asking it to integrate
> small time-steps until the RMS change drops below a threshold. There,
> still, I'd need to choose a reasonable time-step, and also the inner
> loop would be in python instead of fortran.
>
> Any recommendations? (Or am I again being daft? I never really took a
> class in numerical methods, so sorry for the dim-bulb questions!)
>
> Zach
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From lou_boog2000 at yahoo.com  Tue Jun 24 21:05:23 2008
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Tue, 24 Jun 2008 18:05:23 -0700 (PDT)
Subject: [SciPy-user] integrate ODE to steady-state?
In-Reply-To:
Message-ID: <714288.82822.qm@web34408.mail.mud.yahoo.com>

I agree with Rob Clewley's approach. I suspect that finding the zeros of the vector field (f(x)) is the best way to go. Integration is harder and still leaves the problem of multiple equilibria. This is often expressed as finding multiple basins of attraction (each initial condition of which goes to a different equilibrium point).

You may have an additional problem in that you want to find the stability of the equilibrium points. If you want a point that the system will stay at even under the influence of some local noise or perturbations, then once you find the equilibrium points, you want to determine their stability. This is done quite easily by evaluating the Jacobian of the vector field (the matrix which is the "gradient" of the vector field: d f(x)/dx, where f and x are vectors of the same dimension). You then find the eigenvalues of the Jacobian. If all of them have negative real parts, you have a stable equilibrium. If any has a positive real part, you have an unstable equilibrium, and the actual system will probably *not* ever end up there.

I hope that's clear and that helps. Your problem is only as hard as f(x) is complex and nonlinear. It is not an ODE integration problem, really.

-- Lou Pecora, my views are my own.
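[Editor's note: a minimal sketch of the recipe Rob and Lou describe, using scipy.optimize.fsolve plus a finite-difference Jacobian; the toy vector field f below is invented purely for illustration:

import numpy as np
from scipy.optimize import fsolve

def f(x):
    # Toy 2D vector field with several equilibria, e.g. at (0.5, 0.5).
    return np.array([x[0] * (1.0 - x[0]) - x[0] * x[1],
                     x[1] * (x[0] - 0.5)])

def jacobian(f, x, eps=1e-6):
    # Forward-difference approximation of df/dx, column by column.
    n = len(x)
    J = np.empty((n, n))
    fx = f(x)
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx) - fx) / eps
    return J

x_eq = fsolve(f, np.array([0.9, 0.4]))      # equilibrium near the initial guess
eigs = np.linalg.eigvals(jacobian(f, x_eq))
stable = np.all(eigs.real < 0)              # stable iff all real parts negative

As Rob notes, a nonlinear f can have many equilibria, so in practice the initial guess (here (0.9, 0.4)) would be swept over a sampling of the phase space.]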
From wnbell at gmail.com  Tue Jun 24 21:21:59 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 24 Jun 2008 20:21:59 -0500
Subject: [SciPy-user] integrate ODE to steady-state?
In-Reply-To: <714288.82822.qm@web34408.mail.mud.yahoo.com>
References: <714288.82822.qm@web34408.mail.mud.yahoo.com>
Message-ID:

On Tue, Jun 24, 2008 at 8:05 PM, Lou Pecora wrote:
> It is not an ODE integration problem, really.

Casting the minimization of a functional as a system of ODEs is a completely reasonable thing to do.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From zachary.pincus at yale.edu  Tue Jun 24 22:50:20 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 24 Jun 2008 22:50:20 -0400
Subject: [SciPy-user] integrate ODE to steady-state?
In-Reply-To:
References: <714288.82822.qm@web34408.mail.mud.yahoo.com>
Message-ID: <1CF00348-252B-4F68-A3DE-5AEA8A8ABBD1@yale.edu>

Hi all,

Thanks for the help!

Nathan:
> I don't know what interface scipy's ode solvers support, but I would
> suggest setting t_final to be a large value and instructing the solver
> to use at most N iterations. I would use an "adaptive" method like
> RK45 (or whatever scipy supports), which automatically finds a timestep
> that achieves numerical stability and the desired accuracy.
>
> Alternatively, you could use your second proposal, and terminate
> relaxation when the functional is sufficiently small.

I'll try both of these -- I hadn't thought to use the solver's limit on iterations, which is a good idea...

Nathan:
> BTW, are you using a Lennard-Jones-like potential to distribute points?
> I did something similar to distribute points evenly over a mesh
> surface using a simple integrator:
> http://graphics.cs.uiuc.edu/~wnbell/publications/2005-05-SCA-Granular/BeYiMu2005.pdf
>
> An interesting alternative is the CVT:
> http://www.math.psu.edu/qdu/Res/Pic/gallery3.html

My problem is similar, actually, but in 2D. I have the outline of a bacterium as a 2D polygon and want to lay down a coordinate system on it, with a midline running from pole to pole ("x-axis") and well-defined lines normal to the midline ("y-axis"). The classic solution would be to use the medial axis transform, which I had been doing (using Martin Held's excellent if quirky VRONI code). However, the medial axis is only a partial midline, it may branch, and there's no guarantee that it is possible to find reasonable straight-line normals to the axis (you get a clearance radius at each point instead). So I formulated the problem as trying to find pairs of positions along the outline of the shape where the distance between the pairs is minimized, but adjacent points don't "clump" too much, which naturally led to this sort of spring-system picture. (This is a very brief explanation -- sorry if unclear.) It doesn't seem particularly elegant to solve it like this, but I wanted to see if it worked at all...

Also, thanks for the references -- interesting indeed.

Rob and Lou:
>> [try solving for dy/dt=0]

Good idea too! I'll try that one as well, and see which seems to work better...

Thanks again, everyone,
Zach

From peridot.faceted at gmail.com  Tue Jun 24 23:57:56 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 24 Jun 2008 23:57:56 -0400
Subject: [SciPy-user] integrate ODE to steady-state?
In-Reply-To: <1CF00348-252B-4F68-A3DE-5AEA8A8ABBD1@yale.edu>
References: <714288.82822.qm@web34408.mail.mud.yahoo.com> <1CF00348-252B-4F68-A3DE-5AEA8A8ABBD1@yale.edu>
Message-ID:

2008/6/24 Zachary Pincus :
> My problem is similar, actually, but in 2D. I have the outline of a
> bacterium as a 2D polygon and want to lay down a coordinate system on
> it, with a midline running from pole to pole ("x-axis") and
> well-defined lines normal to the midline ("y-axis"). The classic
> solution would be to use the medial axis transform, which I had been
> doing (using Martin Held's excellent if quirky VRONI code). However,
> the medial axis is only a partial midline, it may branch, and there's
> no guarantee that it is possible to find reasonable straight-line
> normals to the axis (you get a clearance radius at each point instead).
> So I formulated the problem as trying to find pairs of positions along
> the outline of the shape where the distance between the pairs is
> minimized, but adjacent points don't "clump" too much, which naturally
> led to this sort of spring-system picture. (This is a very brief
> explanation -- sorry if unclear.) It doesn't seem particularly elegant
> to solve it like this, but I wanted to see if it worked at all...
>
> Also, thanks for the references -- interesting indeed.
>
> Rob and Lou:
>> [try solving for dy/dt=0]
>
> Good idea too! I'll try that one as well, and see which seems to work
> better...

I'd set it up as a numerical minimization procedure, minimizing the "potential energy" of all those springs. If you give it a good initial guess and are lucky, the minimizer will follow a path down to the optimum not too different from the one your ODEs would have followed; but since the optimizer knows you don't care how it gets there, it can be much more efficient.

The danger with both approaches is that there may be some rather bad local minimum they can get trapped in. The best approach is to choose a decent initial guess (maybe using the medial axis transform, since you already have that code working) so that the nearest, most obvious solution is the real optimum. Failing that, you can look at some of the global optimizers in OpenOpt. Better, of course, is to find an algorithm specifically adapted to your particular problem, but I gather you've already done considerable searching.

Anne

From fredmfp at gmail.com  Wed Jun 25 13:28:31 2008
From: fredmfp at gmail.com (fred)
Date: Wed, 25 Jun 2008 19:28:31 +0200
Subject: [SciPy-user] masked array & histogram...
In-Reply-To: <485BB463.1020600@gmail.com>
References: <485BB0F5.90204@gmail.com> <485BB463.1020600@gmail.com>
Message-ID: <4862803F.2020604@gmail.com>

fred wrote:
> fred wrote:
>
>> Is there some workaround?
> Yes.
> Convert them to NaN.

No. It does not work at all (say for 10% of NaNs) for masked arrays or arrays with NaN.

Any suggestions? I really need some help for this issue.

TIA.

Cheers,

--
Fred

From pgmdevlist at gmail.com  Wed Jun 25 13:49:59 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 25 Jun 2008 13:49:59 -0400
Subject: [SciPy-user] masked array & histogram...
In-Reply-To: <4862803F.2020604@gmail.com>
References: <485BB0F5.90204@gmail.com> <485BB463.1020600@gmail.com> <4862803F.2020604@gmail.com>
Message-ID: <200806251350.00021.pgmdevlist@gmail.com>

Fred,
If you don't need to process the missing data and are using a 1D array or an array that can be flattened, just use the `compressed()` method/function on your MaskedArray and send the output to histogram, e.g.
numpy.histogram(your_masked_array.compressed())

If you need to keep track of the number of missing data, or qualify them as one particular category, just fill your masked array with a convenient value.

From phaustin at gmail.com  Wed Jun 25 13:54:39 2008
From: phaustin at gmail.com (Phil Austin)
Date: Wed, 25 Jun 2008 10:54:39 -0700
Subject: [SciPy-user] masked array & histogram...
In-Reply-To: <4862803F.2020604@gmail.com>
References: <485BB0F5.90204@gmail.com> <485BB463.1020600@gmail.com> <4862803F.2020604@gmail.com>
Message-ID: <4862865F.7050903@gmail.com>

fred wrote:
> fred wrote:
>> fred wrote:
>>
>>> Is there some workaround?
>> Yes.
>> Convert them to NaN.
> No.
>
> It does not work at all (say for 10% of NaNs)
> for masked arrays or arrays with NaN.
>
> Any suggestions?

ma.compressed?

import numpy.ma as ma

In [8]: help ma.compressed
------> help(ma.compressed)
Help on function compressed in module numpy.ma.core:

compressed(x)
    Return a 1-D array of all the non-masked data.

From lists at vrbka.net  Wed Jun 25 14:20:07 2008
From: lists at vrbka.net (Lubos Vrbka)
Date: Wed, 25 Jun 2008 20:20:07 +0200
Subject: [SciPy-user] How to realize inverse DCT transformation by using scipy
In-Reply-To: <90c482ab0806220056l66986450wac3f0ee183703395@mail.gmail.com>
References: <90c482ab0806220056l66986450wac3f0ee183703395@mail.gmail.com>
Message-ID: <48628C57.9080507@vrbka.net>

hi,

> How to realize inverse DCT transformation by using scipy?

if by iDCT you mean the inverse discrete cosine transform, you can use this code:

http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sandbox/image/transforms.py

I asked a similar question (regarding the DST) a few days ago, and the following reply from Travis might be the right thing for you as well:

http://article.gmane.org/gmane.comp.python.scientific.user/16607/match=fourier+sine+transform+scipy

best,

--
Lubos _@_"
http://www.lubos.vrbka.net

From fredmfp at gmail.com  Wed Jun 25 14:46:33 2008
From: fredmfp at gmail.com (fred)
Date: Wed, 25 Jun 2008 20:46:33 +0200
Subject: [SciPy-user] masked array & histogram...
In-Reply-To: <4862865F.7050903@gmail.com>
References: <485BB0F5.90204@gmail.com> <485BB463.1020600@gmail.com> <4862803F.2020604@gmail.com> <4862865F.7050903@gmail.com>
Message-ID: <48629289.20900@gmail.com>

Phil Austin wrote:

> ma.compressed?

Great!! :-)))

Thanks to you two, Pierre & Phil.

Cheers,

--
Fred

From williamhpurcell at gmail.com  Thu Jun 26 08:28:59 2008
From: williamhpurcell at gmail.com (William Purcell)
Date: Thu, 26 Jun 2008 07:28:59 -0500
Subject: [SciPy-user] import scipy - py2exe
Message-ID:

Sorry for posting in two places, for those of you on the py2exe list. The py2exe list doesn't seem to be as active as this one.

After compiling a small wx gui with py2exe, the following is the traceback I get when running the resulting .exe:

Traceback (most recent call last):
  File "simple.py", line 2, in
  File "zipextimporter.pyo", line 82, in load_module
  File "scipy\__init__.pyo", line 48, in
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'str'

operating sys. --> XP
scipy.__version__ --> 0.6.0
wx.__version__ --> 2.8.7.1
py2exe.__version__ --> 0.6.8

The gui script is simple....

import wx, scipy
app = wx.App()
frame = wx.Frame(None, -1, 'simple.py')
frame.Show()
app.MainLoop()

... and runs fine from the command line. I was having this problem with a larger gui, and I think I have narrowed the problem down to this.

Thanks in advance,
Bill
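[Editor's note: the scipy\__init__.pyo line in the traceback above suggests one plausible culprit (an assumption, not a confirmed diagnosis): py2exe compiled the modules at optimize level 2, which strips docstrings, so a `__doc__ += ...` statement in scipy's __init__.py sees None. A minimal sketch of a py2exe setup script that keeps docstrings:

# setup.py -- a sketch, assuming the TypeError comes from docstring
# stripping: with optimize=2 (the equivalent of python -OO), __doc__
# is None and scipy's "__doc__ += ..." fails.
from distutils.core import setup
import py2exe  # registers the py2exe command

setup(
    windows=['simple.py'],
    options={'py2exe': {'optimize': 1}},  # -O keeps docstrings; -OO does not
)
]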
From williamhpurcell at gmail.com  Thu Jun 26 09:36:40 2008
From: williamhpurcell at gmail.com (William Purcell)
Date: Thu, 26 Jun 2008 08:36:40 -0500
Subject: [SciPy-user] import scipy - py2exe
In-Reply-To:
References:
Message-ID:

FYI: if I remove 'import scipy', the gui will compile with py2exe and run fine.

On Thu, Jun 26, 2008 at 7:28 AM, William Purcell wrote:
> Sorry for posting in two places, for those of you on the py2exe list. The
> py2exe list doesn't seem to be as active as this one.
>
> After compiling a small wx gui with py2exe, the following is the traceback
> I get when running the resulting .exe:
>
> Traceback (most recent call last):
>   File "simple.py", line 2, in
>   File "zipextimporter.pyo", line 82, in load_module
>   File "scipy\__init__.pyo", line 48, in
> TypeError: unsupported operand type(s) for +=: 'NoneType' and 'str'
>
> operating sys. --> XP
> scipy.__version__ --> 0.6.0
> wx.__version__ --> 2.8.7.1
> py2exe.__version__ --> 0.6.8
>
> The gui script is simple....
>
> import wx, scipy
> app = wx.App()
> frame = wx.Frame(None, -1, 'simple.py')
> frame.Show()
> app.MainLoop()
>
> ... and runs fine from the command line. I was having this problem with a
> larger gui, and I think I have narrowed the problem down to this.
>
> Thanks in advance,
> Bill

From grs2103 at columbia.edu  Thu Jun 26 11:14:46 2008
From: grs2103 at columbia.edu (Gideon Simpson)
Date: Thu, 26 Jun 2008 11:14:46 -0400
Subject: [SciPy-user] optimize.fsolve output
Message-ID: <89F274AC-C1BB-4FB4-AA04-C52668995987@columbia.edu>

Is there a flag to pass to fsolve to see the convergence properties?

-gideon

From nwagner at iam.uni-stuttgart.de  Thu Jun 26 12:28:56 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 26 Jun 2008 18:28:56 +0200
Subject: [SciPy-user] optimize.fsolve output
In-Reply-To: <89F274AC-C1BB-4FB4-AA04-C52668995987@columbia.edu>
References: <89F274AC-C1BB-4FB4-AA04-C52668995987@columbia.edu>
Message-ID:

On Thu, 26 Jun 2008 11:14:46 -0400
Gideon Simpson wrote:
> Is there a flag to pass to fsolve to see the convergence properties?
>
> -gideon

AFAIK fsolve has no callback argument like

cg(A, b, x0=None, tol=1.0000000000000001e-05, maxiter=None, xtype=None, M=None, callback=None)

has, so I guess you cannot monitor the convergence behavior of fsolve. Please correct me if I am missing something.

Nils

From gael.varoquaux at normalesup.org  Fri Jun 27 01:18:47 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 27 Jun 2008 07:18:47 +0200
Subject: [SciPy-user] Student sponsorship for the SciPy08 conference
Message-ID: <20080627051847.GM11323@phare.normalesup.org>

We are delighted to announce that the Python Software Foundation has answered our call and is providing sponsorship for the SciPy08 conference. We will use this money to sponsor the registration fees and travel for up to 10 college or graduate students to attend the conference.

The PSF did not provide all the funds required for all 10 students, and once again Enthought Inc. (http://www.enthought.com) is stepping up to fill the gap.

To apply, please send a short description of what you are studying and why you'd like to attend to info at enthought.com. Please include telephone contact information.

Thanks a lot to Travis Vaught from Enthought for making this project a success.

Please don't hesitate to forward this announcement to anybody who might be interested.
Gaël, on behalf of the Scipy08 organisation committee

SciPy conference site: http://conference.scipy.org

From soren.skou.nielsen at gmail.com  Fri Jun 27 09:49:45 2008
From: soren.skou.nielsen at gmail.com (Søren Nielsen)
Date: Fri, 27 Jun 2008 15:49:45 +0200
Subject: [SciPy-user] weave problems, weave_imp.o no such file or directory
Message-ID:

Hi,

I've done a fresh install of MinGW 5.14, Python 2.4.4, Scipy 0.6.0, Numpy 1.1.0 and numarray 1.5.2. When I try to use weave, I get this:

C:\>test_weave.py
running build_ext
running build_src
building extension "sc_552cccf5dbf4f6eadd273cdcbd5860523" sources
customize Mingw32CCompiler
customize Mingw32CCompiler using build_ext
customize Mingw32CCompiler
customize Mingw32CCompiler using build_ext
building 'sc_552cccf5dbf4f6eadd273cdcbd5860523' extension
compiling C++ sources
C compiler: g++ -mno-cygwin -O2 -Wall

compile options: '-IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c'
g++ -mno-cygwin -O2 -Wall -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c C:\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\lisear~1\lokale~1\temp\Lise Arleth\python24_intermediate\compiler_08edc7e348e1c33f63a33ab500aef08e\Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o
Found executable C:\MinGw\bin\g++.exe
g++.exe: Arleth\python24_intermediate\compiler_08edc7e348e1c33f63a33ab500aef08e\Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o: No such file or directory
Traceback (most recent call last):
  File "C:\test_weave.py", line 341, in ?
    main()
  File "C:\test_weave.py", line 224, in main
    weave.inline('printf("%d\\n",a);',['a'], verbose=2, type_converters=converters.blitz) #, compiler = 'msvc', verbpse=2, type_converters=converters.blitz, auto_downcast=0) #'msvc' or 'gcc' or 'mingw32'
  File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line 338, in inline
    auto_downcast = auto_downcast,
  File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line 447, in compile_function
    verbose=verbose, **kw)
  File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line 365, in compile
    verbose = verbose, **kw)
  File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", line 269, in build_extension
    setup(name = module_name, ext_modules = [ext],verbose=verb)
  File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 184, in setup
    return old_setup(**new_attr)
  File "C:\Python24\lib\distutils\core.py", line 166, in setup
    raise SystemExit, "error: " + str(msg)
distutils.errors.CompileError: error: Command "g++ -mno-cygwin -O2 -Wall -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c C:\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\lisear~1\lokale~1\temp\Lise Arleth\python24_intermediate\compiler_08edc7e348e1c33f63a33ab500aef08e\Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o" failed with exit status 1

The test_weave file was something I found on the scipy dev wiki... I also tried some of my older files that use weave, and they all give the same error...

Can anyone help me with this?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_weave.py
Type: text/x-python
Size: 11334 bytes
Desc: not available

From lbolla at gmail.com  Fri Jun 27 10:54:13 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Fri, 27 Jun 2008 16:54:13 +0200
Subject: [SciPy-user] Radial Basis Function Interpolation
Message-ID: <80c99e790806270754r6b7d8680ndf052020f23bb9ba@mail.gmail.com>

Using scipy.interpolate.Rbf, I found two bugs. One is a typo in one of the possible values of the 'function' keyword: 'gausian' instead of 'gaussian'. The other is the missing import of numpy.exp, required by 'gaussian'. The following diff fixes these bugs.

=============================
$ diff rbf_orig.py rbf.py
45c45
< from numpy import sqrt, log, asarray, newaxis, all, dot, float64, eye
---
> from numpy import sqrt, exp, log, asarray, newaxis, all, dot, float64, eye
61c61
<         elif self.function.lower() == 'gausian':
---
>         elif self.function.lower() == 'gaussian':
87c87
<                 'gausian': exp(-(self.epsilon*r)**2)
---
>                 'gaussian': exp(-(self.epsilon*r)**2)
=============================

Lorenzo.

--
"Whereof one cannot speak, thereof one must be silent." -- Ludwig Wittgenstein

From rmay31 at gmail.com  Fri Jun 27 14:13:22 2008
From: rmay31 at gmail.com (Ryan May)
Date: Fri, 27 Jun 2008 14:13:22 -0400
Subject: [SciPy-user] 2D Interpolation
Message-ID: <48652DC2.2070807@gmail.com>

Hi,

Can anyone help me use scipy.interpolate correctly? Here's my problem: I'm trying to make a 2D lookup table to save some calculations. The two parameters over which the lookup table is generated are independent, and I have complete control over how I divide up the domain. Using this lookup table, I'd like to then calculate values over an unstructured set of parameter values (i.e. a list of pairs of parameter values). Is there a function in scipy.interpolate that can help here? What I'd really like to be able to do is generate an interpolator object from my 2D array, and then pass a pair of 1D arrays to the object and have it return a 1D array of values.

Thanks in advance,

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From gael.varoquaux at normalesup.org  Fri Jun 27 14:24:29 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 27 Jun 2008 20:24:29 +0200
Subject: [SciPy-user] Scipy08 Paper submission deadline extension to Monday 30th
Message-ID: <20080627182429.GH5103@phare.normalesup.org>

The deadline for submitting abstracts to the Scipy conference was tonight. In order to give you more time to submit excellent abstracts, the review committee is extending the deadline to Monday (June 30th), and will work quickly to get all of them reviewed in time for the program announcement on Thursday, July 3rd.

----

The SciPy 2008 Conference will be held 21-22 August 2008 at the California Institute of Technology, Pasadena, California. SciPy is a scientific computing package, written in the Python language. It is widely used in research, industry and academia.

The program features tutorials, contributed papers, lightning talks, and bird-of-a-feather sessions. We are soliciting talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics which center around scientific computing using Python.
These include applications, teaching, future development directions, and research. A collection of peer-reviewed articles will be published as part of the proceedings.

Proposals for talks are submitted as extended abstracts. There are two categories of talks:

Lightning talks: These talks are 10 minutes in duration. An abstract of between 300 and 700 words should describe the topic and motivate its relevance to scientific computing. Lightning talks do not require an accompanying article (although, if submitted, these will still be published).

Paper presentations: These talks are 35 minutes in duration (including questions). A one-page abstract of no less than 500 words (excluding figures and references) should give an outline of the final paper. Papers are due two weeks before the conference, and may be in a formal academic style or in a more relaxed magazine-style format.

If you wish to present a talk at the conference, please create an account on the website http://conference.scipy.org. You may then submit an abstract by logging in, clicking on your profile and following the "Submit an abstract" link.

Gaël, on behalf of the SciPy08 organizing committee.

From jtravs at gmail.com Fri Jun 27 14:37:47 2008 From: jtravs at gmail.com (John Travers) Date: Fri, 27 Jun 2008 19:37:47 +0100 Subject: [SciPy-user] 2D Interpolation In-Reply-To: <48652DC2.2070807@gmail.com> References: <48652DC2.2070807@gmail.com> Message-ID: <3a1077e70806271137k4776e229ifd2850e9b460e79a@mail.gmail.com>

On Fri, Jun 27, 2008 at 7:13 PM, Ryan May wrote:
> Hi,
>
> Can anyone help me use scipy.interpolate correctly? Here's my problem:
> I'm trying to make a 2D lookup table to save some calculations. The two
> parameters over which the lookup table is generated are independent, and
> I have complete control over how I divide up the domain. Using this
> lookup table, I'd like to then calculate values over an unstructured set
> of parameter values (i.e. a list of pairs of parameter values). Is there
> a function in scipy.interpolate that can help here? What I'd really like
> to be able to do is generate an interpolator object from my 2D array,
> and then pass a pair of 1D arrays to the object and have it return a 1D
> array of values.

This should get you started:

import scipy
import scipy.interpolate

# the two axes
x = scipy.linspace(-1.0, 1.0, 10)
y = x

# make some pretend data
gridx, gridy = scipy.meshgrid(x, y)
z = scipy.sin(gridx) * scipy.sin(gridy)

# create a spline interpolator
spl = scipy.interpolate.RectBivariateSpline(x, y, z)

# make some new axes to interpolate to
nx = scipy.linspace(-1.0, 1.0, 100)
ny = nx

# evaluate
nz = spl(nx, ny)

# with matplotlib, compare:
import pylab
pylab.matshow(z)
pylab.matshow(nz)

Cheers,
John

From jtravs at gmail.com Fri Jun 27 14:40:25 2008 From: jtravs at gmail.com (John Travers) Date: Fri, 27 Jun 2008 19:40:25 +0100 Subject: [SciPy-user] 2D Interpolation In-Reply-To: <3a1077e70806271137k4776e229ifd2850e9b460e79a@mail.gmail.com> References: <48652DC2.2070807@gmail.com> <3a1077e70806271137k4776e229ifd2850e9b460e79a@mail.gmail.com> Message-ID: <3a1077e70806271140y6dc8f6d4qf1f565a8b41f7e93@mail.gmail.com>

On Fri, Jun 27, 2008 at 7:37 PM, John Travers wrote:
> On Fri, Jun 27, 2008 at 7:13 PM, Ryan May wrote:
>> Hi,
>>
>> Can anyone help me use scipy.interpolate correctly? Here's my problem:
>> I'm trying to make a 2D lookup table to save some calculations. The two
>> parameters over which the lookup table is generated are independent, and
>> I have complete control over how I divide up the domain.
>> Using this lookup table, I'd like to then calculate values over an
>> unstructured set of parameter values (i.e. a list of pairs of parameter
>> values). Is there a function in scipy.interpolate that can help here?
>> What I'd really like to be able to do is generate an interpolator object
>> from my 2D array, and then pass a pair of 1D arrays to the object and
>> have it return a 1D array of values.
>
> This should get you started:

Ahhh, I misread your question and answered something else! Not sure how to answer your actual question... Sorry.

John

From robert.kern at gmail.com Fri Jun 27 15:38:11 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Jun 2008 14:38:11 -0500 Subject: [SciPy-user] Radial Basis Function Interpolation In-Reply-To: <80c99e790806270754r6b7d8680ndf052020f23bb9ba@mail.gmail.com> References: <80c99e790806270754r6b7d8680ndf052020f23bb9ba@mail.gmail.com> Message-ID: <3d375d730806271238r17f61699r199d67e9002b696a@mail.gmail.com>

On Fri, Jun 27, 2008 at 09:54, lorenzo bolla wrote:
> Using scipy.interpolate.Rbf, I found two bugs.
> One is a typo in one of the possible values of the 'function' keyword:
> 'gausian' instead of 'gaussian'.
> The other is a missing import of numpy.exp, which 'gaussian' requires.

Fixed in r4486. Thank you!

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From pav at iki.fi Fri Jun 27 16:13:21 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 27 Jun 2008 20:13:21 +0000 (UTC) Subject: [SciPy-user] 2D Interpolation References: <48652DC2.2070807@gmail.com> Message-ID:

Hi,

Fri, 27 Jun 2008 14:13:22 -0400, Ryan May wrote:
> Can anyone help me use scipy.interpolate correctly? Here's my problem:
> I'm trying to make a 2D lookup table to save some calculations. The two
> parameters over which the lookup table is generated are independent, and
> I have complete control over how I divide up the domain. Using this
> lookup table, I'd like to then calculate values over an unstructured set
> of parameter values (i.e. a list of pairs of parameter values). Is there
> a function in scipy.interpolate that can help here? What I'd really like
> to be able to do is generate an interpolator object from my 2D array,
> and then pass a pair of 1D arrays to the object and have it return a 1D
> array of values.

I don't think there are currently any functions that do that, but certainly we'd like to have them.

I created a Scipy enhancement ticket for this feature:

http://scipy.org/scipy/scipy/ticket/693

Attached to it is a quick patch that implements the necessary loop on the Fortran side. I suspect the patch needs further work, as there are possibly faster ways to vectorise this piece of computation than simply calling fpbisp at each point separately. (If someone more familiar with the spline code wants to bless it, I can commit it, though...)

Currently, you can use

spl = scipy.interpolate.RectBivariateSpline(xi, yi, zi)
z = scipy.array([spl(xp, yp)[0,0] for xp, yp in zip(x, y)])

The Python overhead isn't as bad as it looks; moving the loop to the Fortran side gains only a factor of 5 in speed.
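What Ryan asked for - an interpolator object that takes a pair of 1D arrays and returns a 1D array - is easy to build around the loop above. A minimal sketch (the class name GridLookup is made up for illustration):

import numpy as np
import scipy.interpolate

class GridLookup(object):
    """Build a spline lookup table on a regular grid once, then
    evaluate it at arbitrary (x, y) pairs."""
    def __init__(self, xi, yi, zi):
        # zi[i, j] corresponds to the point (xi[i], yi[j])
        self._spl = scipy.interpolate.RectBivariateSpline(xi, yi, zi)

    def __call__(self, x, y):
        # point-by-point evaluation; the Python loop overhead is modest
        return np.array([self._spl(xp, yp)[0, 0] for xp, yp in zip(x, y)])

# usage:
xi = yi = np.linspace(-1.0, 1.0, 10)
zi = np.sin(xi)[:, None] * np.sin(yi)[None, :]
lookup = GridLookup(xi, yi, zi)
values = lookup(np.array([0.1, -0.5]), np.array([0.3, 0.7]))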
--
Pauli Virtanen

From rmay31 at gmail.com Fri Jun 27 16:46:30 2008 From: rmay31 at gmail.com (Ryan May) Date: Fri, 27 Jun 2008 16:46:30 -0400 Subject: [SciPy-user] 2D Interpolation In-Reply-To: References: <48652DC2.2070807@gmail.com> Message-ID: <486551A6.60203@gmail.com>

Pauli Virtanen wrote:
> Hi,
>
> Fri, 27 Jun 2008 14:13:22 -0400, Ryan May wrote:
>> Can anyone help me use scipy.interpolate correctly? Here's my problem:
>> I'm trying to make a 2D lookup table to save some calculations. The two
>> parameters over which the lookup table is generated are independent, and
>> I have complete control over how I divide up the domain. Using this
>> lookup table, I'd like to then calculate values over an unstructured set
>> of parameter values (i.e. a list of pairs of parameter values). Is there
>> a function in scipy.interpolate that can help here? What I'd really like
>> to be able to do is generate an interpolator object from my 2D array,
>> and then pass a pair of 1D arrays to the object and have it return a 1D
>> array of values.
>
> I don't think there are currently any functions that do that, but
> certainly we'd like to have them.
>
> I created a Scipy enhancement ticket for this feature:
>
> http://scipy.org/scipy/scipy/ticket/693
>
> Attached to it is a quick patch that implements the necessary loop on the
> Fortran side. I suspect the patch needs further work, as there are possibly
> faster ways to vectorise this piece of computation than simply
> calling fpbisp at each point separately. (If someone more familiar with
> the spline code wants to bless it, I can commit it, though...)
>
> Currently, you can use
>
> spl = scipy.interpolate.RectBivariateSpline(xi, yi, zi)
> z = scipy.array([spl(xp, yp)[0,0] for xp, yp in zip(x, y)])
>
> The Python overhead isn't as bad as it looks; moving the loop to
> the Fortran side gains only a factor of 5 in speed.

Thanks for that. I'm surprised that the Python looping overhead isn't worse. I had just automatically assumed that looping in Python meant the death of my code. I'll see what I get by just manually looping.

Thanks

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From pav at iki.fi Fri Jun 27 17:38:15 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 27 Jun 2008 21:38:15 +0000 (UTC) Subject: [SciPy-user] 2D Interpolation References: <48652DC2.2070807@gmail.com> <486551A6.60203@gmail.com> Message-ID:

Fri, 27 Jun 2008 16:46:30 -0400, Ryan May wrote:
> Pauli Virtanen wrote:
>> Hi,
>>
>> Fri, 27 Jun 2008 14:13:22 -0400, Ryan May wrote:
>>> Can anyone help me use scipy.interpolate correctly? Here's my
>>> problem: I'm trying to make a 2D lookup table to save some
>>> calculations. The two parameters over which the lookup table is
>>> generated are independent, and I have complete control over how I
>>> divide up the domain. Using this lookup table, I'd like to then
>>> calculate values over an unstructured set of parameter values (i.e. a
>>> list of pairs of parameter values). Is there a function in
>>> scipy.interpolate that can help here? What I'd really like to be able
>>> to do is generate an interpolator object from my 2D array, and then
>>> pass a pair of 1D arrays to the object and have it return a 1D array
>>> of values.
[clip]

Another hint: looking at

scipy.ndimage.map_coordinates

may turn out to be useful: it seems to be able to interpolate from a regular grid to a vector of coordinates.
--
Pauli Virtanen

From travis at enthought.com Fri Jun 27 19:04:01 2008 From: travis at enthought.com (Travis Vaught) Date: Fri, 27 Jun 2008 18:04:01 -0500 Subject: [SciPy-user] SciPy Conference Updates Message-ID: <66889DEE-AFE8-484C-9C15-582457DFD3DB@enthought.com>

Greetings,

The SciPy Conference is not too far away. I thought I'd summarize some recent news about the conference in case some of you missed it:

- Accommodations (news!): We've negotiated a group rate with a nearby Marriott hotel, for those who would like to take advantage of it. The hotel has set up a web site for our event here:

http://cwp.marriott.com/laxot/scipyworkshop/

- Student Sponsorship: As you may have seen, the Python Software Foundation has agreed to partner with Enthought to sponsor 10 students' travel, registration, and accommodation for the tutorials, conference and (most importantly) sprints. If you're in college or a graduate program, please check out the details here:

http://conference.scipy.org/sponsoring

- Abstract Submission Deadline Extended: The review committee is extending the deadline to Monday, June 30th. Please see the Call for Papers for instructions on abstract submission here:

http://conference.scipy.org/call_for_papers

Please drop me an email if you have any questions or comments.

Best,

Travis

From emanuele at relativita.com Sat Jun 28 09:46:50 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sat, 28 Jun 2008 15:46:50 +0200 Subject: [SciPy-user] [OpenOpt] lb issue Message-ID: <486640CA.8000303@relativita.com>

Dear all and dear Dmitrey,

I experience problems when setting lower bounds (lb) on a non-linear problem. I have a simple bound for x: being bigger than zero.

Here follows a simple example that shows the issue: even though I set "lb=N.zeros(dimensions)", the "ralg" solver tries to compute f() when x<0. Why?

----
import numpy as N
from scikits.openopt import NLP

size = 100
dimensions = 2
data = N.random.rand(size, dimensions) - 0.5

def f(x):
    global data
    if (x < 0).sum() > 0:
        print "WARNING! Lower bound exceeded, x =", x
        pass
    return N.dot(data**2, x.T)

x0 = N.ones(dimensions)
p = NLP(f, x0, lb=N.zeros(dimensions), ftol=1.0e-3)
p.solve("ralg")
print p.ff, p.xf
----

I'm wondering if I've understood correctly how to use p.lb. Any explanation will be very much appreciated.

Emanuele

P.S.: OpenOpt updated from SVN, NumPy v1.0.3 and SciPy v0.5.2 provided by Ubuntu Gutsy 7.10.

From dmitrey.kroshko at scipy.org Sat Jun 28 11:11:43 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 28 Jun 2008 18:11:43 +0300 Subject: [SciPy-user] [OpenOpt] lb issue In-Reply-To: <486640CA.8000303@relativita.com> References: <486640CA.8000303@relativita.com> Message-ID: <486654AF.8060905@scipy.org>

Hi Emanuele,

Some solvers (ALGENCAN, scipy_lbfgsb) never have to evaluate objFunc(x) outside the lb-ub region (for box-bound problems), but some others, including the current ralg implementation, do. Maybe I could try to enhance ralg's handling of problems of this type during this GSoC, but it requires a sufficient amount of time and I'm currently busy with other chapters of my GSoC schedule.

Regards, D.

Emanuele Olivetti wrote:
> Dear all and dear Dmitrey,
>
> I experience problems when setting lower bounds (lb) on
> a non-linear problem. I have a simple bound for x: being
> bigger than zero.
>
> Here follows a simple example that shows the issue: even though
> I set "lb=N.zeros(dimensions)", the "ralg" solver tries to compute
> f() when x<0. Why?
> ----
> import numpy as N
> from scikits.openopt import NLP
>
> size = 100
> dimensions = 2
> data = N.random.rand(size, dimensions) - 0.5
>
> def f(x):
>     global data
>     if (x < 0).sum() > 0:
>         print "WARNING! Lower bound exceeded, x =", x
>         pass
>     return N.dot(data**2, x.T)
>
> x0 = N.ones(dimensions)
> p = NLP(f, x0, lb=N.zeros(dimensions), ftol=1.0e-3)
> p.solve("ralg")
> print p.ff, p.xf
> ----
>
> I'm wondering if I've understood correctly how to use
> p.lb. Any explanation will be very much appreciated.
>
> Emanuele
>
> P.S.: OpenOpt updated from SVN, NumPy v1.0.3 and SciPy v0.5.2
> provided by Ubuntu Gutsy 7.10.

From dmitrey.kroshko at scipy.org Sat Jun 28 14:49:10 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 28 Jun 2008 21:49:10 +0300 Subject: [SciPy-user] [OpenOpt] lb issue In-Reply-To: <486654AF.8060905@scipy.org> References: <486640CA.8000303@relativita.com> <486654AF.8060905@scipy.org> Message-ID: <486687A6.5020703@scipy.org>

Hi Emanuele,

I can propose a temporary solution (see below); this one doesn't require updating OpenOpt from SVN. However, ALGENCAN, ipopt and scipy_slsqp usually work much better than the current ralg implementation for box-bound constrained problems (without other constraints).

D.

import numpy as N
from scikits.openopt import NLP
from numpy import any, inf

size = 100
dimensions = 2
data = N.random.rand(size, dimensions) - 0.5

contol = 1e-6
lb = N.zeros(dimensions) + contol

def f(x):
    global data
    if any(x < 0):
        # The objective function is not defined here, so use inf instead.
        # Some iterations will then show objFunVal = inf in the text output,
        # and graphical output is currently unavailable for this case.
        return inf
    return N.dot(data**2, x.T)

x0 = N.ones(dimensions)
p = NLP(f, x0, lb=lb, contol=contol)
p.solve('ralg')
print p.ff, p.xf

From tomo.bbe at gmail.com Sat Jun 28 17:09:55 2008 From: tomo.bbe at gmail.com (James) Date: Sat, 28 Jun 2008 22:09:55 +0100 Subject: [SciPy-user] The other solvers in ODEPACK Message-ID: <5a757d050806281409y6db27abck7d223f17c89480df@mail.gmail.com>

Hi all,

Recently I have been using ODEPACK (direct from Netlib) in a pure Fortran project and have noticed dramatically different execution times between two of the solvers (LSODA and LSODES).

I have also noticed that the source for LSODES is in scipy's repository - so is it possible to access the other solvers through scipy at the moment?

Cheers,
James

From robert.kern at gmail.com Sat Jun 28 17:13:02 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Jun 2008 16:13:02 -0500 Subject: [SciPy-user] The other solvers in ODEPACK In-Reply-To: <5a757d050806281409y6db27abck7d223f17c89480df@mail.gmail.com> References: <5a757d050806281409y6db27abck7d223f17c89480df@mail.gmail.com> Message-ID: <3d375d730806281413o702f6c85k432d41c7228fcddc@mail.gmail.com>

On Sat, Jun 28, 2008 at 16:09, James wrote:
> Hi all,
>
> Recently I have been using ODEPACK (direct from Netlib) in a pure Fortran
> project and have noticed dramatically different execution times between two
> of the solvers (LSODA and LSODES).
>
> I have also noticed that the source for LSODES is in scipy's repository - so
> is it possible to access the other solvers through scipy at the moment?

Someone needs to write the f2py wrappers for them.
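For reference, the ODEPACK solver that scipy already wraps is LSODA, exposed through scipy.integrate.odeint; a minimal usage sketch (the test problem below is made up for illustration):

import numpy as np
from scipy.integrate import odeint

def rhs(y, t):
    # moderately stiff linear test problem: y' = -50 * (y - cos(t))
    return -50.0 * (y - np.cos(t))

t = np.linspace(0.0, 2.0, 101)
y = odeint(rhs, 1.0, t)  # LSODA switches automatically between stiff and non-stiff methods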
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From emanuele at relativita.com Sat Jun 28 18:12:24 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sun, 29 Jun 2008 00:12:24 +0200 Subject: [SciPy-user] [OpenOpt] lb issue In-Reply-To: <486654AF.8060905@scipy.org> References: <486640CA.8000303@relativita.com> <486654AF.8060905@scipy.org> Message-ID: <4866B748.3060805@relativita.com>

I'm trying scipy_lbfgsb, because ALGENCAN requires extra libraries, which is not desirable in my case. I hope not to run into the noise instabilities which I know are handled well only by "ralg". If you can improve ralg in the near future I'll be very happy :)

Emanuele

dmitrey wrote:
> Hi Emanuele,
> Some solvers (ALGENCAN, scipy_lbfgsb) never have to evaluate objFunc(x)
> outside the lb-ub region (for box-bound problems), but some others,
> including the current ralg implementation, do. Maybe I could try to enhance
> ralg's handling of problems of this type during this GSoC, but it
> requires a sufficient amount of time and I'm currently busy with other
> chapters of my GSoC schedule.
> Regards, D.
>
> Emanuele Olivetti wrote:
>> Dear all and dear Dmitrey,
>>
>> I experience problems when setting lower bounds (lb) on
>> a non-linear problem. I have a simple bound for x: being
>> bigger than zero.
>>
>> ...

From emanuele at relativita.com Sat Jun 28 18:17:42 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sun, 29 Jun 2008 00:17:42 +0200 Subject: [SciPy-user] [OpenOpt] lb issue In-Reply-To: <486687A6.5020703@scipy.org> References: <486640CA.8000303@relativita.com> <486654AF.8060905@scipy.org> <486687A6.5020703@scipy.org> Message-ID: <4866B886.2050205@relativita.com>

Unfortunately it is not so simple to map this advice onto my real situation, which is more complex than the proof-of-concept example in my initial message. Returning a big positive value when x is outside the bounds is an option I considered some time ago but then discarded. But I'll think more about it now.

Ciao,

Emanuele

dmitrey wrote:
> Hi Emanuele,
>
> I can propose a temporary solution (see below); this one doesn't
> require updating OpenOpt from SVN. However, ALGENCAN, ipopt and
> scipy_slsqp usually work much better than the current ralg implementation
> for box-bound constrained problems (without other constraints).
> D.
>
> import numpy as N
> from scikits.openopt import NLP
> from numpy import any, inf
> size = 100
> dimensions = 2
> data = N.random.rand(size, dimensions) - 0.5
>
> contol = 1e-6
> lb = N.zeros(dimensions) + contol
>
> def f(x):
>     global data
>     if any(x < 0):
>         # The objective function is not defined here, so use inf instead.
>         # Some iterations will then show objFunVal = inf in the text output,
>         # and graphical output is currently unavailable for this case.
>         return inf
>     return N.dot(data**2, x.T)
>
> x0 = N.ones(dimensions)
> p = NLP(f, x0, lb=lb, contol=contol)
> p.solve('ralg')
> print p.ff, p.xf

From kdsudac at yahoo.com Sat Jun 28 18:39:10 2008 From: kdsudac at yahoo.com (Keith Suda-Cederquist) Date: Sat, 28 Jun 2008 15:39:10 -0700 (PDT) Subject: [SciPy-user] Fastest Way to element-by-element operations in SciPy/NumPy Message-ID: <886240.18505.qm@web54302.mail.re2.yahoo.com>

Hi All,

I'm a relatively new Python/SciPy/NumPy user who migrated from Matlab. I've managed to put together some code that does some image processing on 2000x2000 sized images.
It all works well, but one of the image processing steps takes a long time and I'd like to try to speed it up. I can think of a few ways that *might* speed it up, but I figured it'd be good to ask the experts how they would recommend doing it.

As is, I import the image into a NumPy 2D array using PIL. For each row, I do some signal processing to locate the zero crossing in between a local maximum and a local minimum (an edge detection algorithm). So roughly my code is structured like this:

--Start Code
imarray = im2array(filename)  # reads file into array
steparray = scipy.zeros(shape(imarray))  # initialize array that will contain
                                         # information on edge locations

for row in xrange(0, shape(imarray)[0]):
    imrow = imarray[row, :]
    # some basic code that identifies the local minima and maxima, then
    # identifies columns that are first-guess zero crossings
    for col in first_guesses:
        window = imrow[col-3:col+4]  # window of data around first-guess zero crossing
        # code that does a scipy.polyfit operation to identify the
        # zero-crossing more precisely; the zero-crossing from the fit is fit_zcross
        steparray[row, col] = fit_zcross
--End Code

As I said, I'm new, so my code is definitely not very 'pythonic', but I'm trying to learn how to do things better.

My two guesses for how to speed things up are:

1) Initializing the step array and then assigning different values to different rows and columns of that array is probably a slow way of doing things.

2) Write a function that takes as input a single row of the imarray and outputs an array giving the edge crossings, then use list comprehension to build the 2D array with my function.

Am I on the right track, or would you suggest a different approach to speeding things up?

As is, this part of my code takes about 150 seconds to run, so for the 2000 rows that means 75 ms per row. So maybe my array is just too big and will take a while to process.

Thanks in advance for your help.

Sincerely,
Keith

From ted.sandler at gmail.com Sat Jun 28 22:18:00 2008 From: ted.sandler at gmail.com (Ted Sandler) Date: Sat, 28 Jun 2008 22:18:00 -0400 Subject: [SciPy-user] accessing only the nonzero elements of a sparse matrix Message-ID:

It doesn't seem like anything like the numpy array method ``nonzero'' is implemented for scipy's sparse matrices. What I am looking for is something like:

A = sparse.spidentity(3); A.nonzero()
==> (array([0, 1, 2]), array([0, 1, 2]))

I guess I could convert to coordinate form 'COO' and then access the "row" and "col" fields, but this seems a little evil as it relies on properties of the 'COO' matrix which could change in the future. It would be nicer to have a uniform method on all sparse matrix types.

Thanks for any info on this,
-Ted

From peridot.faceted at gmail.com Sun Jun 29 02:01:31 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 29 Jun 2008 02:01:31 -0400 Subject: [SciPy-user] The other solvers in ODEPACK In-Reply-To: <5a757d050806281409y6db27abck7d223f17c89480df@mail.gmail.com> References: <5a757d050806281409y6db27abck7d223f17c89480df@mail.gmail.com> Message-ID:

2008/6/28 James:
> Recently I have been using ODEPACK (direct from Netlib) in a pure Fortran
> project and have noticed dramatically different execution times between two
> of the solvers (LSODA and LSODES).
>
> I have also noticed that the source for LSODES is in scipy's repository - so
> is it possible to access the other solvers through scipy at the moment?

No, it's not, as Robert said. I wrote a (partial) wrapper for one of them (LSODAR) before deciding it wasn't very useful. I do have a few comments:

* The FORTRAN interface is a colossal pain to use in a Python program, so you'd want to come up with some sort of Python wrapper. There are two very different examples in scipy already, odeint() and the ode class, the latter of which was intended to support multiple backends (but only one was implemented). Neither interface did what I wanted, though, which was part of my motivation for trying to wrap things myself.

* If your concern is straight runtime, you should be aware that for ODE solvers implemented in an ODEPACK-like way, where the right-hand side is an arbitrary Python function, there is a very substantial overhead associated with calling back into Python for each RHS evaluation. I realize that different algorithms may conceivably make a bigger difference than this, but do be aware that solving ODEs in Python this way is quite a slow process. For this reason I would be much more interested in wrapping an ODE solver that is more powerful - that is, one that allows the solution of more problems - than one that is more efficient.

* If you want a more full-featured ODE solver, you should look at PyDSTool. As its name suggests, it is oriented towards the study of dynamical systems, but it provides many useful tools for simply solving ODEs. High on the list is a system for providing symbolic RHSs which can be compiled to C and (I think) automatically differentiated. It also includes some more modern ODE solvers (though I seem to recall none of them support automatic stiff/non-stiff switching). There was some discussion of integrating some of this into scipy.

Anne

From dmitrey.kroshko at scipy.org Sun Jun 29 02:20:03 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 29 Jun 2008 09:20:03 +0300 Subject: [SciPy-user] [OpenOpt] lb issue Message-ID: <48672993.1040001@scipy.org>

I don't see any difficulty in mapping my advice to more complex cases (just don't forget to shift your inequality constraints by contol, so that they are compared with zero rather than with contol, and use inf, not the huge value you mentioned). The current ralg implementation doesn't need the objective function value when a point is outside the feasible region (i.e. when *any* constraint exceeds p.contol, not only the lb-ub constraints). OO calls objFunc outside the feasible region only to check some stop criteria and to produce the per-iteration text and possible graphics output.

Regards, D.

Emanuele Olivetti wrote:
> Unfortunately it is not so simple to map this advice onto my
> real situation, which is more complex than the
> proof-of-concept example in my initial message. Returning
> a big positive value when x is outside the bounds is an option
> I considered some time ago but then discarded. But I'll think
> more about it now.
>
> Ciao,
>
> Emanuele
>
> dmitrey wrote:
>> Hi Emanuele,
>>
>> I can propose a temporary solution (see below); this one doesn't
>> require updating OpenOpt from SVN. However, ALGENCAN, ipopt and
>> scipy_slsqp usually work much better than the current ralg
>> implementation for box-bound constrained problems (without other
>> constraints).
>> D.
>>
>> import numpy as N
>> from scikits.openopt import NLP
>> from numpy import any, inf
>> size = 100
>> dimensions = 2
>> data = N.random.rand(size, dimensions) - 0.5
>>
>> contol = 1e-6
>> lb = N.zeros(dimensions) + contol
>>
>> def f(x):
>>     global data
>>     if any(x < 0):
>>         # The objective function is not defined here, so use inf instead.
>>         return inf
>>     return N.dot(data**2, x.T)
>>
>> x0 = N.ones(dimensions)
>> p = NLP(f, x0, lb=lb, contol=contol)
>> p.solve('ralg')
>> print p.ff, p.xf

From rmay31 at gmail.com Sun Jun 29 14:00:06 2008 From: rmay31 at gmail.com (Ryan May) Date: Sun, 29 Jun 2008 14:00:06 -0400 Subject: [SciPy-user] 2D Interpolation In-Reply-To: References: <48652DC2.2070807@gmail.com> <486551A6.60203@gmail.com> Message-ID: <4867CDA6.8070108@gmail.com>

Pauli Virtanen wrote:
> Fri, 27 Jun 2008 16:46:30 -0400, Ryan May wrote:
>> Pauli Virtanen wrote:
>>> Hi,
>>>
>>> Fri, 27 Jun 2008 14:13:22 -0400, Ryan May wrote:
>>>> Can anyone help me use scipy.interpolate correctly? Here's my
>>>> problem: I'm trying to make a 2D lookup table to save some
>>>> calculations. The two parameters over which the lookup table is
>>>> generated are independent, and I have complete control over how I
>>>> divide up the domain. Using this lookup table, I'd like to then
>>>> calculate values over an unstructured set of parameter values (i.e. a
>>>> list of pairs of parameter values). Is there a function in
>>>> scipy.interpolate that can help here? What I'd really like to be able
>>>> to do is generate an interpolator object from my 2D array, and then
>>>> pass a pair of 1D arrays to the object and have it return a 1D array
>>>> of values.
> [clip]
>
> Another hint: looking at
>
> scipy.ndimage.map_coordinates
>
> may turn out to be useful: it seems to be able to interpolate from a
> regular grid to a vector of coordinates.

Jackpot! This works really well. There's a good example of using it here:

http://www.scipy.org/Cookbook/Interpolation

which is important, because I had to read the docstring a dozen times to understand what was going on. It should also be noted that while the example interpolates to a regular grid, there's nothing precluding interpolating to an irregular collection of points. What's weird is that you need to manually scale the points to which you're interpolating to be floating point indices within the original grid. This is probably due to the ndimage-focused nature of map_coordinates.

Moving some of the functionality of map_coordinates, in a more generic fashion, into scipy.interpolate wouldn't be the worst idea in the world. Then again, I don't know if anyone else is planning on improving scipy.interpolate to gain this functionality (interpolation to an array of irregular points) in another way. (I also can't volunteer to step up and do it at this time.)
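A minimal sketch of the manual scaling Ryan describes, for a lookup table on a uniformly spaced grid (all names and values here are illustrative):

import numpy as np
from scipy import ndimage

# lookup table on a regular grid: z[i, j] = f(x0 + i*dx, y0 + j*dy)
x0, dx = -1.0, 0.02
y0, dy = -1.0, 0.02
z = np.sin(x0 + dx * np.arange(101))[:, None] * np.sin(y0 + dy * np.arange(101))[None, :]

# unstructured query points (pairs of parameter values)
xq = np.array([-0.35, 0.12, 0.78])
yq = np.array([0.50, -0.61, 0.03])

# map_coordinates wants fractional array *indices*, so rescale by hand
coords = np.array([(xq - x0) / dx, (yq - y0) / dy])
zq = ndimage.map_coordinates(z, coords, order=3)  # 1D array of interpolated values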
Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From travis at enthought.com Sun Jun 29 14:18:20 2008 From: travis at enthought.com (Travis Vaught) Date: Sun, 29 Jun 2008 13:18:20 -0500 Subject: [SciPy-user] 2D Interpolation In-Reply-To: <4867CDA6.8070108@gmail.com> References: <48652DC2.2070807@gmail.com> <486551A6.60203@gmail.com> <4867CDA6.8070108@gmail.com> Message-ID:

On Jun 29, 2008, at 1:00 PM, Ryan May wrote:
> Pauli Virtanen wrote:
>> Fri, 27 Jun 2008 16:46:30 -0400, Ryan May wrote:
>>> Pauli Virtanen wrote:
>>>> Hi,
>>>>
>>>> Fri, 27 Jun 2008 14:13:22 -0400, Ryan May wrote:
>>>>> Can anyone help me use scipy.interpolate correctly? Here's my
>>>>> problem: I'm trying to make a 2D lookup table to save some
>>>>> calculations. The two parameters over which the lookup table is
>>>>> generated are independent, and I have complete control over how I
>>>>> divide up the domain. Using this lookup table, I'd like to then
>>>>> calculate values over an unstructured set of parameter values (i.e. a
>>>>> list of pairs of parameter values). Is there a function in
>>>>> scipy.interpolate that can help here? What I'd really like to be able
>>>>> to do is generate an interpolator object from my 2D array, and then
>>>>> pass a pair of 1D arrays to the object and have it return a 1D array
>>>>> of values.
>> [clip]
>>
>> Another hint: looking at
>>
>> scipy.ndimage.map_coordinates
>>
>> may turn out to be useful: it seems to be able to interpolate from a
>> regular grid to a vector of coordinates.
>
> Jackpot! This works really well. There's a good example of using it here:
>
> http://www.scipy.org/Cookbook/Interpolation
>
> which is important, because I had to read the docstring a dozen times to
> understand what was going on. It should also be noted that while the
> example interpolates to a regular grid, there's nothing precluding
> interpolating to an irregular collection of points. What's weird is
> that you need to manually scale the points to which you're interpolating
> to be floating point indices within the original grid. This is
> probably due to the ndimage-focused nature of map_coordinates.
>
> Moving some of the functionality of map_coordinates, in a more generic
> fashion, into scipy.interpolate wouldn't be the worst idea in the world.
> Then again, I don't know if anyone else is planning on improving
> scipy.interpolate to gain this functionality (interpolation to an array
> of irregular points) in another way. (I also can't volunteer to
> step up and do it at this time.)
>
> Ryan
>
> --
> Ryan May
> Graduate Research Assistant
> School of Meteorology
> University of Oklahoma

Ryan,

It looks like you were bitten by the same organizational problem that Shane mentions here (4th paragraph, about where to find interpolation):

http://www.vetta.org/2008/05/scipy-some-more-thoughts/

Seems like we should address this both in the documentation and in the actual organization/location of the routines. Thoughts?
Travis

From gael.varoquaux at normalesup.org Sun Jun 29 14:29:48 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 29 Jun 2008 20:29:48 +0200 Subject: [SciPy-user] 2D Interpolation In-Reply-To: References: <48652DC2.2070807@gmail.com> <486551A6.60203@gmail.com> <4867CDA6.8070108@gmail.com> Message-ID: <20080629182948.GA4828@phare.normalesup.org>

On Sun, Jun 29, 2008 at 01:18:20PM -0500, Travis Vaught wrote:
> Seems like we should address this both in the documentation and in the
> actual organization/location of the routines. Thoughts?

We need to get the numpy doc server [1] running for scipy too. But before that we need to move it to scipy.org; otherwise I'll be in trouble when I get back home.

Gaël

[1] http://sd-2116.dedibox.fr/pydocweb/wiki/Front%20Page/

From J.Anderson at hull.ac.uk Sun Jun 29 14:45:48 2008 From: J.Anderson at hull.ac.uk (Joseph Anderson) Date: Sun, 29 Jun 2008 19:45:48 +0100 Subject: [SciPy-user] advice on inner product, element by element References: Message-ID:

Hello All,

I'm looking for advice on speeding up what looks to be a simple problem to me, but has proved elusive due to my numpy naivety. I'm attempting to evaluate the inner product of n frames of a time-varying signal with n frames of a time-varying matrix. The code below does this using two approaches: list comprehension and Python's map function. Neither approach is particularly fast, which is problematic, as I usually need to do this many times at once - bringing things to a grinding halt. I'm expecting/hoping there is a much faster, fancy numpy way (clever indexing?) to do the trick.

I'm considering having a go at the problem with weave.blitz or weave.inline, but this seems somewhat fussy for solving something that I expect is actually easy, but I'm just missing.

Thanks in advance for the help!

Example code:

# parms
cs = 4  # number of channels
fs = 60 * 44100  # number of frames (a minute at sr = 44100)

# sigs
a = reshape(arange(fs * cs), (fs, cs))
b = reshape(arange(fs * cs * cs), (fs, cs, cs))

print "a = ", a  # the 'signal'
print "b = ", b  # the 'matrices'

print "list comp = ", array([inner(val_a, val_b) for val_a, val_b in zip(a, b)])
print "mapping = ", asarray(map(inner, a, b))

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr Joseph Anderson
Lecturer in Music

School of Arts and New Media
University of Hull, Scarborough Campus,
Scarborough, North Yorkshire, YO11 3AZ, UK

T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From pav at iki.fi Sun Jun 29 15:27:35 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 29 Jun 2008 19:27:35 +0000 (UTC) Subject: [SciPy-user] advice on inner product, element by element References: Message-ID:

Sun, 29 Jun 2008 19:45:48 +0100, Joseph Anderson wrote:
[clip]
> Example code:
>
> # parms
> cs = 4
> fs = 60 * 44100
>
> a = reshape(arange(fs * cs), (fs, cs))
> b = reshape(arange(fs * cs * cs), (fs, cs, cs))
>
> c1 = array([inner(val_a, val_b) for val_a, val_b in zip(a, b)])
> c2 = asarray(map(inner, a, b))

Something like this:

c3 = (a[:,None,:]*b).sum(axis=2)

Note that "None" is the same as "newaxis".
(Also note that inner does an inner product along the last dimension by default, but you probably knew that.)

Timings:

In [49]: %time c1 = array([inner(val_a, val_b) for val_a, val_b in zip(a, b)])
CPU times: user 1.07 s, sys: 0.04 s, total: 1.10 s
Wall time: 1.34 s

In [51]: %time c2 = asarray(map(inner, a, b))
CPU times: user 0.65 s, sys: 0.01 s, total: 0.66 s
Wall time: 0.82 s

In [53]: %time c3 = (a[:,None,:]*b).sum(axis=2)
CPU times: user 0.07 s, sys: 0.00 s, total: 0.07 s
Wall time: 0.13 s

In [60]: a[:,None,:].shape
Out[60]: (88200, 1, 4)

In [61]: b.shape
Out[61]: (88200, 4, 4)

In [55]: allclose(c1, c2)
Out[55]: True

In [56]: allclose(c1, c3)
Out[56]: True

From rmay31 at gmail.com Sun Jun 29 15:30:51 2008 From: rmay31 at gmail.com (Ryan May) Date: Sun, 29 Jun 2008 15:30:51 -0400 Subject: [SciPy-user] 2D Interpolation In-Reply-To: References: <48652DC2.2070807@gmail.com> <486551A6.60203@gmail.com> <4867CDA6.8070108@gmail.com> Message-ID: <4867E2EB.40404@gmail.com>

>> Jackpot! This works really well. There's a good example of using it here:
>>
>> http://www.scipy.org/Cookbook/Interpolation
>>
>> which is important, because I had to read the docstring a dozen times to
>> understand what was going on. It should also be noted that while the
>> example interpolates to a regular grid, there's nothing precluding
>> interpolating to an irregular collection of points. What's weird is
>> that you need to manually scale the points to which you're interpolating
>> to be floating point indices within the original grid. This is
>> probably due to the ndimage-focused nature of map_coordinates.
>>
>> Moving some of the functionality of map_coordinates, in a more generic
>> fashion, into scipy.interpolate wouldn't be the worst idea in the world.
>> Then again, I don't know if anyone else is planning on improving
>> scipy.interpolate to gain this functionality (interpolation to an array
>> of irregular points) in another way. (I also can't volunteer to
>> step up and do it at this time.)
>
> Ryan,
>
> It looks like you were bitten by the same organizational problem that
> Shane mentions here (4th paragraph, about where to find interpolation):
>
> http://www.vetta.org/2008/05/scipy-some-more-thoughts/
>
> Seems like we should address this both in the documentation and in the
> actual organization/location of the routines. Thoughts?
>
> Travis

I think docs would be a good first step. ndimage definitely wasn't an area that I thought of, and even if I had looked there, "map_coordinates" is not a name I associate with interpolation.

As far as a reorganization is concerned, I think a refactoring is more likely what is needed. Even though it manages to do what I need, it still feels like I'm abusing the function a bit rather than it being a smooth solution to my problem. I think what would be nice would be to take the functionality available in map_coordinates and wrap it with the existing APIs in scipy.interpolate (either OO-based or procedural). Obviously, this is a bit of work, and since I don't have the time ATM, I can't push too hard here. :)

I think a good interim solution would be to at least add a "see also" link to map_coordinates from some of the relevant scipy.interpolate docstrings. I think cleaning up the map_coordinates docstring and maybe adding some better examples would also help.

My 0.02.
Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From wnbell at gmail.com Mon Jun 30 04:28:11 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 30 Jun 2008 03:28:11 -0500 Subject: [SciPy-user] accessing only the nonzero elements of a sparse matrix In-Reply-To: References: Message-ID:

On Sat, Jun 28, 2008 at 9:18 PM, Ted Sandler wrote:
> I guess I could convert to coordinate form 'COO' and then access the
> "row" and "col" fields, but this seems a little evil as it relies on
> properties of the 'COO' matrix which could change in the future. It
> would be nicer to have a uniform method on all sparse matrix types.
>
> Thanks for any info on this,

Ted, you are right that we should have a nonzero() member function. Currently the best you can do is, as you say, to convert to COO and take the .row, .col, and .data members. I don't see the names of these attributes changing, but we'll certainly add .nonzero() in the future anyway.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From J.Anderson at hull.ac.uk Mon Jun 30 11:38:56 2008 From: J.Anderson at hull.ac.uk (Joseph Anderson) Date: Mon, 30 Jun 2008 16:38:56 +0100 Subject: [SciPy-user] advice on inner product, element by element References: Message-ID:

Ah ha... thanks Pauli, this was just the sort of thing I was hoping to see. The trouble is that I'm still tending to think in terms of loops.

Thanks for the help!!

My best,

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr Joseph Anderson
Lecturer in Music

School of Arts and New Media
University of Hull, Scarborough Campus,
Scarborough, North Yorkshire, YO11 3AZ, UK

T: +44.(0)1723.357341 T: +44.(0)1723.357370 F: +44.(0)1723.350815
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-----Original Message-----
From: scipy-user-bounces at scipy.org on behalf of Pauli Virtanen
Sent: Sun 06/29/2008 8:27 PM
To: scipy-user at scipy.org
Subject: Re: [SciPy-user] advice on inner product, element by element

Sun, 29 Jun 2008 19:45:48 +0100, Joseph Anderson wrote:
[clip]
> Example code:
>
> # parms
> cs = 4
> fs = 60 * 44100
>
> a = reshape(arange(fs * cs), (fs, cs))
> b = reshape(arange(fs * cs * cs), (fs, cs, cs))
>
> c1 = array([inner(val_a, val_b) for val_a, val_b in zip(a, b)])
> c2 = asarray(map(inner, a, b))

Something like this:

c3 = (a[:,None,:]*b).sum(axis=2)

Note that "None" is the same as "newaxis". (Also note that inner does an inner product along the last dimension by default, but you probably knew that.)

Timings:

In [49]: %time c1 = array([inner(val_a, val_b) for val_a, val_b in zip(a, b)])
CPU times: user 1.07 s, sys: 0.04 s, total: 1.10 s
Wall time: 1.34 s

In [51]: %time c2 = asarray(map(inner, a, b))
CPU times: user 0.65 s, sys: 0.01 s, total: 0.66 s
Wall time: 0.82 s

In [53]: %time c3 = (a[:,None,:]*b).sum(axis=2)
CPU times: user 0.07 s, sys: 0.00 s, total: 0.07 s
Wall time: 0.13 s

In [60]: a[:,None,:].shape
Out[60]: (88200, 1, 4)

In [61]: b.shape
Out[61]: (88200, 4, 4)

In [55]: allclose(c1, c2)
Out[55]: True

In [56]: allclose(c1, c3)
Out[56]: True

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user
From grs2103 at columbia.edu Mon Jun 30 13:45:29 2008 From: grs2103 at columbia.edu (Gideon Simpson) Date: Mon, 30 Jun 2008 13:45:29 -0400 Subject: [SciPy-user] single elements and arrays Message-ID:

Suppose I have a function foo

def foo(x):
    blah

and I have written foo in a vectorized fashion such that x can be a numpy array of numbers. But suppose I also want to be able to do

foo(5.5)

where I am interested in the value at a single point.

Is there an easy way to handle both cases? Should I just not permit this sort of ambiguity (force everything to be an array)?

-gideon

From robert.kern at gmail.com Mon Jun 30 13:52:20 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 Jun 2008 12:52:20 -0500 Subject: [SciPy-user] single elements and arrays In-Reply-To: References: Message-ID: <3d375d730806301052k703db955iac318c6bcfa7f200@mail.gmail.com>

On Mon, Jun 30, 2008 at 12:45, Gideon Simpson wrote:
> Suppose I have a function foo
>
> def foo(x):
>     blah
>
> and I have written foo in a vectorized fashion such that x can be a
> numpy array of numbers. But suppose I also want to be able to do
>
> foo(5.5)
>
> where I am interested in the value at a single point.
>
> Is there an easy way to handle both cases? Should I just not permit
> this sort of ambiguity (force everything to be an array)?

It depends. If you are just using ufuncs inside, you don't have to do anything: your function will work just fine with both arrays and scalars. If you're doing something else, like looking at .shape or using .sum() or something, then you might need to actually do something. Exactly what depends on the details, though. Can you provide more details about what your function does?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From grs2103 at columbia.edu Mon Jun 30 14:14:06 2008 From: grs2103 at columbia.edu (Gideon Simpson) Date: Mon, 30 Jun 2008 14:14:06 -0400 Subject: [SciPy-user] single elements and arrays In-Reply-To: <3d375d730806301052k703db955iac318c6bcfa7f200@mail.gmail.com> References: <3d375d730806301052k703db955iac318c6bcfa7f200@mail.gmail.com> Message-ID: <02AA0899-7A57-4CD3-8DF3-388135BD5D0A@columbia.edu>

foo finds a root where x is a parameter in the equation to be solved. If x is an array, I iterate through the elements of the array.

-gideon

On Jun 30, 2008, at 1:52 PM, Robert Kern wrote:
> It depends. If you are just using ufuncs inside, you don't have to do
> anything: your function will work just fine with both arrays and scalars.
> If you're doing something else, like looking at .shape or using .sum()
> or something, then you might need to actually do something. Exactly
> what depends on the details, though. Can you provide more details
> about what your function does?
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From robert.kern at gmail.com Mon Jun 30 14:57:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 Jun 2008 13:57:18 -0500 Subject: [SciPy-user] single elements and arrays In-Reply-To: <02AA0899-7A57-4CD3-8DF3-388135BD5D0A@columbia.edu> References: <3d375d730806301052k703db955iac318c6bcfa7f200@mail.gmail.com> <02AA0899-7A57-4CD3-8DF3-388135BD5D0A@columbia.edu> Message-ID: <3d375d730806301157t13d8d3eejc62b1f9ced005ed7@mail.gmail.com>

On Mon, Jun 30, 2008 at 13:14, Gideon Simpson wrote:
> foo finds a root where x is a parameter in the equation to be solved.
> If x is an array, I iterate through the elements of the array.

In that case, just special-case it. Use numpy.isscalar() to do the test.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
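A minimal sketch of that pattern (foo, _solve_one, and the cubic test equation are all made up for illustration):

import numpy as np
from scipy.optimize import brentq

def _solve_one(xi):
    # hypothetical per-parameter root solve: find y such that y**3 - xi = 0
    return brentq(lambda y: y**3 - xi, -10.0, 10.0)

def foo(x):
    # special-case scalars, as suggested above
    if np.isscalar(x):
        return _solve_one(x)
    x = np.asarray(x)
    return np.array([_solve_one(xi) for xi in x.flat]).reshape(x.shape)

# foo(5.5) returns a float; foo(np.array([1.0, 8.0])) returns array([1., 2.])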