From jba at SDF.LONESTAR.ORG Mon May 2 08:03:06 2011
From: jba at SDF.LONESTAR.ORG (Jeffrey Armstrong)
Date: Mon, 2 May 2011 12:03:06 +0000 (UTC)
Subject: [SciPy-Dev] Discrete-time additions to scipy.signal
In-Reply-To:
References:
Message-ID:

On Sat, 30 Apr 2011, Ralf Gommers wrote:
>
> That was a good start. I've taken your discrete-time-signal branch,
> and did some more reorganizing to only have a few commits with no
> renames and deletions:
> https://github.com/rgommers/scipy/tree/armstrong-discrete. I then did
> some cleanups of cont2discrete in the last commit, to make the code
> cleaner.

I'll pull from here for any future work. The cont2discrete function
needs some updates. I have to admit I don't remember why I chose to use
*args on this function, but, for consistency with other LTI functions,
it should probably be combined into a single function again that accepts
a tuple as the system. The tuple's length would then be checked to see
whether it's a state-space model or a transfer function.

In regards to the permissions, I've been working on Windows, so the
tests were in fact executing on my end. After reading your email, it did
occur to me that git was marking new files as "755", which was
unexpected. I'll make sure it doesn't happen again.

As far as styling goes, I'll do my best to stick with best practices.
Sorry for any inconvenience.

-Jeff

Jeff Armstrong - jba at sdf.lonestar.org
SDF Public Access UNIX System - http://sdf.lonestar.org

From ralf.gommers at googlemail.com Mon May 2 14:22:23 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 2 May 2011 20:22:23 +0200
Subject: [SciPy-Dev] Discrete-time additions to scipy.signal
In-Reply-To:
References:
Message-ID:

On Mon, May 2, 2011 at 2:03 PM, Jeffrey Armstrong wrote:
> On Sat, 30 Apr 2011, Ralf Gommers wrote:
>>
>> That was a good start. I've taken your discrete-time-signal branch,
>> and did some more reorganizing to only have a few commits with no
>> renames and deletions:
>> https://github.com/rgommers/scipy/tree/armstrong-discrete. I then did
>> some cleanups of cont2discrete in the last commit, to make the code
>> cleaner.
>
> I'll pull from here for any future work. The cont2discrete function
> needs some updates. I have to admit I don't remember why I chose to use
> *args on this function, but, for consistency with other LTI functions,
> it should probably be combined into a single function again that
> accepts a tuple as the system. The tuple's length would then be checked
> to see whether it's a state-space model or a transfer function.

Not sure I understand that reasoning - converting a transfer function or
a state-space model looks like two separate operations to me.

>
> In regards to the permissions, I've been working on Windows, so the
> tests were in fact executing on my end. After reading your email, it
> did occur to me that git was marking new files as "755", which was
> unexpected. I'll make sure it doesn't happen again.
>
> As far as styling goes, I'll do my best to stick with best practices.
> Sorry for any inconvenience.

No problem at all.

Cheers,
Ralf

From jba at SDF.LONESTAR.ORG Tue May 3 09:41:33 2011
From: jba at SDF.LONESTAR.ORG (Jeffrey Armstrong)
Date: Tue, 3 May 2011 13:41:33 +0000 (UTC)
Subject: [SciPy-Dev] Discrete-time additions to scipy.signal
In-Reply-To:
References:
Message-ID:

On Mon, 2 May 2011, Ralf Gommers wrote:
>>
>> I'll pull from here for any future work. The cont2discrete function
>> needs some updates. I have to admit I don't remember why I chose to
>> use *args on this function, but, for consistency with other LTI
>> functions, it should probably be combined into a single function
>> again that accepts a tuple as the system. The tuple's length would
>> then be checked to see whether it's a state-space model or a
>> transfer function.
>
> Not sure I understand that reasoning - converting a transfer function
> or a state-space model looks like two separate operations to me.

While the two operations are mildly different, most of the LTI functions
in scipy.signal accept a tuple and determine whether they are dealing
with a transfer function or a state-space model internally. If you look
at scipy.signal.lsim, for example, you'll see the first input is the
"system," which can be one of three types. For consistency, I would say
that the cont2discrete function should behave similarly. I'm not
implying it's the correct route to take, only that it seems that it was
handled this way in the past.

-Jeff

Jeff Armstrong - jba at sdf.lonestar.org
SDF Public Access UNIX System - http://sdf.lonestar.org
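A minimal sketch of the tuple-dispatch idea discussed in this thread,
mirroring how scipy.signal.lsim interprets its "system" argument (the
helper names _c2d_tf, _c2d_zpk and _c2d_ss are hypothetical
placeholders, not actual scipy internals):

    def cont2discrete(system, dt):
        # system is a tuple, interpreted as for scipy.signal.lsim:
        #   (num, den)           -> transfer function
        #   (zeros, poles, gain) -> zero-pole-gain form
        #   (A, B, C, D)         -> state-space model
        if len(system) == 2:
            return _c2d_tf(system, dt)
        elif len(system) == 3:
            return _c2d_zpk(system, dt)
        elif len(system) == 4:
            return _c2d_ss(system, dt)
        raise ValueError("system must be a tuple of length 2, 3 or 4")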
From opossumnano at gmail.com Wed May 4 06:30:08 2011
From: opossumnano at gmail.com (Tiziano Zito)
Date: Wed, 4 May 2011 12:30:08 +0200
Subject: [SciPy-Dev] [ANN] EuroScipy 2011 - deadline approaching
Message-ID: <20110504103008.GB820@tulpenbaum.cognition.tu-berlin.de>

=====================================
EuroScipy 2011 - Deadline Approaching
=====================================

Beware: the talk submission deadline is approaching. You can submit your
contribution until Sunday May 8.

---------------------------------------------
The 4th European meeting on Python in Science
---------------------------------------------

**Paris, Ecole Normale Supérieure, August 25-28 2011**

We are happy to announce the 4th EuroScipy meeting, in Paris, August
2011. The EuroSciPy meeting is a cross-disciplinary gathering focused on
the use and development of the Python language in scientific research.
This event strives to bring together both users and developers of
scientific tools, as well as academic research and state-of-the-art
industry.

Main topics
===========

- Presentations of scientific tools and libraries using the Python
  language, including but not limited to:

  - vector and array manipulation
  - parallel computing
  - scientific visualization
  - scientific data flow and persistence
  - algorithms implemented or exposed in Python
  - web applications and portals for science and engineering.

- Reports on the use of Python in scientific achievements or ongoing
  projects.

- General-purpose Python tools that can be of special interest to the
  scientific community.

Tutorials
=========

There will be two tutorial tracks at the conference: an introductory
one, to bring attendees up to speed with the Python language as a
scientific tool, and an advanced track, during which experts of the
field will lecture on specific advanced topics such as advanced use of
numpy, scientific visualization, software engineering...

Keynote Speaker: Fernando Perez
===============================

We are excited to welcome Fernando Perez (UC Berkeley, Helen Wills
Neuroscience Institute, USA) as our keynote speaker. Fernando Perez is
the original author of the enhanced interactive python shell IPython and
a very active contributor to the Python for Science ecosystem.
Important dates
===============

Talk submission deadline: Sunday May 8
Program announced: Sunday May 29
Tutorials tracks: Thursday August 25 - Friday August 26
Conference track: Saturday August 27 - Sunday August 28

Call for papers
===============

We are soliciting talks that discuss topics related to scientific
computing using Python. These include applications, teaching, future
development directions, and research. We welcome contributions from the
industry as well as the academic world. Indeed, industrial research and
development as well as academic research face the challenge of mastering
IT tools for exploration, modeling and analysis. We look forward to
hearing your recent breakthroughs using Python!

Submission guidelines
=====================

- We solicit talk proposals in the form of a one-page long abstract.
- Submissions whose main purpose is to promote a commercial product or
  service will be refused.
- All accepted proposals must be presented at the EuroSciPy conference
  by at least one author.

The one-page long abstracts are for conference planning and selection
purposes only. We will later select papers for publication of
post-proceedings in a peer-reviewed journal.

How to submit an abstract
=========================

To submit a talk to the EuroScipy conference follow the instructions
here: http://www.euroscipy.org/card/euroscipy2011_call_for_papers

Organizers
==========

Chairs:

- Gaël Varoquaux (INSERM, Unicog team, and INRIA, Parietal team)
- Nicolas Chauvat (Logilab)

Local organization committee:

- Emmanuelle Gouillart (Saint-Gobain Recherche)
- Jean-Philippe Chauvat (Logilab)

Tutorial chair:

- Valentin Haenel (MKP, Technische Universität Berlin)

Program committee:

- Chair: Tiziano Zito (MKP, Technische Universität Berlin)
- Romain Brette (ENS Paris, DEC)
- Emmanuelle Gouillart (Saint-Gobain Recherche)
- Eric Lebigot (Laboratoire Kastler Brossel, Université Pierre et Marie Curie)
- Konrad Hinsen (Soleil Synchrotron, CNRS)
- Hans Petter Langtangen (Simula laboratories)
- Jarrod Millman (UC Berkeley, Helen Wills NeuroScience institute)
- Mike Müller (Python Academy)
- Didrik Pinte (Enthought Inc)
- Marc Poinot (ONERA)
- Christophe Pradal (CIRAD/INRIA, Virtual Plantes team)
- Andreas Schreiber (DLR)
- Stéfan van der Walt (University of Stellenbosch)

Website
=======

http://www.euroscipy.org/conference/euroscipy_2011

From fabian.pedregosa at inria.fr Thu May 5 08:34:19 2011
From: fabian.pedregosa at inria.fr (Fabian Pedregosa)
Date: Thu, 5 May 2011 14:34:19 +0200
Subject: [SciPy-Dev] w prefix in fortran bindings
Message-ID:

Hi all.

I've been hacking lately on the LAPACK bindings and there's something I
don't understand. Sometimes the fortranname statement is prefixed with
`w`, like in

    fortranname wdotc

Could someone tell me why the w is needed?

Thanks,

Fabian

From pav at iki.fi Thu May 5 08:43:56 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 5 May 2011 12:43:56 +0000 (UTC)
Subject: [SciPy-Dev] w prefix in fortran bindings
References:
Message-ID:

Thu, 05 May 2011 14:34:19 +0200, Fabian Pedregosa wrote:
> I've been hacking lately on the LAPACK bindings and there's something I
> don't understand. Sometimes the fortranname statement is prefixed with
> `w`, like in
>
>     fortranname wdotc
>
> Could someone tell me why the w is needed?

It's because on some platforms there are difficulties in calling Fortran
functions (as opposed to subroutines) from C, so we need subroutine
wrappers for the functions. You can find the w* subroutines implemented
in some *.f file.
Pauli

From xperroni at gmail.com Thu May 5 14:28:25 2011
From: xperroni at gmail.com (Helio Perroni Filho)
Date: Thu, 5 May 2011 15:28:25 -0300
Subject: [SciPy-Dev] The state of weave.accelerate?
In-Reply-To:
References:
Message-ID:

On Mon, Apr 18, 2011 at 10:16 PM, Jarrod Millman wrote:
> As Ralf already mentioned, you should probably take a look at Cython:
> http://cython.org/
>
> Cython is very widely used and is improving rapidly. The Cython
> developers added functionality similar to weave.inline last December:
> http://wiki.cython.org/enhancements/inline

Thank you. I have taken a look at Cython and yes, it's pretty much what
I was looking for API-wise -- though performance still falls short of my
requirements. They may well get there eventually, but for the time being
I think I'm going with SWIG and C++.

--
Ja ne,
Helio Perroni Filho
http://machineawakening.blogspot.com/

From lists at onerussian.com Fri May 6 17:04:45 2011
From: lists at onerussian.com (Yaroslav Halchenko)
Date: Fri, 6 May 2011 17:04:45 -0400
Subject: [SciPy-Dev] distributions.ncf.fit -- never converges?
In-Reply-To:
References: <20110413143435.GD7764@onerussian.com> <20110413170845.GE7764@onerussian.com> <4DA5E440.2060803@gmail.com>
Message-ID: <20110506210445.GX16547@onerussian.com>

Hi Josef,

Sorry -- I have missed the reply/question

On Fri, 15 Apr 2011, josef.pktd at gmail.com wrote:
> Yaroslav,
> Did you ever get a good estimate in your example with older versions?
> With numpy 1.3, scipy 0.7.2 and with fully specified starting values in
> scipy 0.9, the estimation results are the same as the starting values.
> So the estimation doesn't move and does not produce any useful
> results. (I don't have scipy 0.8 right now.)

not sure -- those distributions were not of particular interest for me,
so in my evil match_distribution script, their absence among the winners
didn't trigger my interest ;) But before it wasn't getting stuck for
sure

--
=------------------------------------------------------------------=
Keep in touch                                  www.onerussian.com
Yaroslav Halchenko                 www.ohloh.net/accounts/yarikoptic

From npkuin at gmail.com Fri May 13 19:08:24 2011
From: npkuin at gmail.com (Paul Kuin)
Date: Sat, 14 May 2011 00:08:24 +0100
Subject: [SciPy-Dev] Fwd: For certain arrays scipy.interpolate.fitpack.bisplrep fails,
In-Reply-To:
References:
Message-ID:

From: Paul Kuin
To: scipy-dev at scipy.org
Date: Tue, 10 May 2011 22:23:34 +0100
Subject: For certain arrays scipy.interpolate.fitpack.bisplrep fails,

For certain arrays scipy.interpolate.fitpack.bisplrep fails, since the
fortran code called expects an integer for the dimension size of certain
arrays. Lines 760-761 of fitpack.py should be modified to

    if nxest is None: nxest=int(kx+sqrt(m/2))
    if nyest is None: nyest=int(ky+sqrt(m/2))

to make sure nxest and nyest are integers. Please make a patch.

From joshua.m.grant at gmail.com Fri May 13 19:40:44 2011
From: joshua.m.grant at gmail.com (Joshua Grant)
Date: Fri, 13 May 2011 19:40:44 -0400
Subject: [SciPy-Dev] User Acceptance Testing
Message-ID:

Hi everyone. So I notice that for Scipy, there are some fairly extensive
unit tests, which are great for the actual development of the library
code. Are there any forms of user acceptance testing, automated or
otherwise? Ways to simulate user behaviour and larger blocks of code,
beyond single methods/functions?
From nathan.faggian at gmail.com Sat May 14 04:50:43 2011
From: nathan.faggian at gmail.com (Nathan Faggian)
Date: Sat, 14 May 2011 18:50:43 +1000
Subject: [SciPy-Dev] scikit-morph
Message-ID: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com>

Greetings!

I would like to start a project working on image registration algorithms
in python. I am proposing a new scikit called scikit-morph because I
believe registration is too specific for inclusion in scipy directly.

My short term goals are to:

1) Implement a basic image registration framework - perhaps similar to
   the design used by ITK.
2) Implement image resampling methods for use in the optimisation (c++).
3) Use py.test to thoroughly test all code.

I have started by setting up a github project, here:

http://github.com/nfaggian/scikit-morph

How can I get this going in a way that meshes well with existing scikit
structures? I have looked at both the image and learn scikits; is it a
matter of simply using the same directory structures?

Any help on getting a skeleton project going would be great - also very
open to people joining in - please email me directly if you would like
to help out!

Kind Regards,

Nathan Faggian

From pav at iki.fi Sat May 14 04:52:37 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 14 May 2011 08:52:37 +0000 (UTC)
Subject: [SciPy-Dev] Fwd: For certain arrays scipy.interpolate.fitpack.bisplrep fails,
References:
Message-ID:

On Sat, 14 May 2011 00:08:24 +0100, Paul Kuin wrote:
> For certain arrays scipy.interpolate.fitpack.bisplrep fails, since the
> fortran code called expects an integer for the dimension size of
> certain arrays:

Floats will be automatically cast to integers, so this by itself cannot
cause any errors. Please give example code that fails (how does it
fail?) -- otherwise it will be difficult to determine the actual cause.

> Lines 760-761 of fitpack.py should be modified to
>
>     if nxest is None: nxest=int(kx+sqrt(m/2))
>     if nyest is None: nyest=int(ky+sqrt(m/2))
>
> to make sure nxest and nyest are integers.

That was already changed to be so in Scipy 0.9.

--
Pauli Virtanen

From pav at iki.fi Sat May 14 05:05:07 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 14 May 2011 09:05:07 +0000 (UTC)
Subject: [SciPy-Dev] User Acceptance Testing
References:
Message-ID:

On Fri, 13 May 2011 19:40:44 -0400, Joshua Grant wrote:
> Hi everyone. So I notice that for Scipy, there are some fairly
> extensive unit tests, which are great for the actual development of
> the library code. Are there any forms of user acceptance testing,
> automated or otherwise? Ways to simulate user behaviour and larger
> blocks of code, beyond single methods/functions?

You can write a given functional test in the form of a unit test --
just solve a problem as you would do normally, and put asserts at the
end to check that the result is correct. In fact, this is what a
majority of the unit tests in Scipy do; they're (fixed) functional tests
for high-level routines.

I'm not sure how you would generate tests automatically for library code
(do you have some ideas :)? With UIs and web pages I can understand that
one can do fuzz testing to simulate user input, and check that whatever
is done does not cause totally unexpected results. But when the user
input is scientific code, the specifications are so complex that I don't
see a way to generate that automatically --- it's easier to write out
given cases manually.

--
Pauli Virtanen
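One possible shape of such a fixed functional test, written as a unit
test (a sketch only -- the routine and tolerances here are arbitrary
choices, not taken from the scipy test suite):

    from numpy.testing import assert_allclose
    from scipy import optimize

    def test_rosenbrock_minimum():
        # solve a problem end to end as a user would,
        # then assert on the final result
        x0 = [-1.2, 1.0]
        xopt = optimize.fmin(optimize.rosen, x0, disp=False)
        assert_allclose(xopt, [1.0, 1.0], atol=1e-3)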
From gael.varoquaux at normalesup.org Sat May 14 05:17:00 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 14 May 2011 11:17:00 +0200
Subject: [SciPy-Dev] scikit-morph
In-Reply-To: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com>
References: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com>
Message-ID: <20110514091700.GB19021@phare.normalesup.org>

On Sat, May 14, 2011 at 06:50:43PM +1000, Nathan Faggian wrote:
> I would like to start a project working on image registration
> algorithms in python. I am proposing a new scikit called scikit-morph
> because I believe registration is too specific for inclusion in scipy
> directly.

You might want to coordinate with Alexis Roche, who has been doing
similar work in Nipy.

Cheers,

Gael

From ralf.gommers at googlemail.com Sat May 14 05:57:41 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sat, 14 May 2011 11:57:41 +0200
Subject: [SciPy-Dev] scikit-morph
In-Reply-To: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com>
References: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com>
Message-ID:

On Sat, May 14, 2011 at 10:50 AM, Nathan Faggian wrote:
> Greetings!
>
> I would like to start a project working on image registration
> algorithms in python. I am proposing a new scikit called scikit-morph
> because I believe registration is too specific for inclusion in scipy
> directly.

Is there a reason not to include this in scikits.image instead? It fits
well there, you'd probably get more help and you wouldn't have to worry
about setting up a new project structure.

Cheers,
Ralf

> My short term goals are to:
>
> 1) Implement a basic image registration framework - perhaps similar to
>    the design used by ITK.
> 2) Implement image resampling methods for use in the optimisation (c++).
> 3) Use py.test to thoroughly test all code.
>
> I have started by setting up a github project, here:
>
> http://github.com/nfaggian/scikit-morph
>
> How can I get this going in a way that meshes well with existing scikit
> structures? I have looked at both the image and learn scikits; is it a
> matter of simply using the same directory structures?
>
> Any help on getting a skeleton project going would be great - also very
> open to people joining in - please email me directly if you would like
> to help out!
>
> Kind Regards,
>
> Nathan Faggian

From josef.pktd at gmail.com Sat May 14 09:02:20 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 14 May 2011 09:02:20 -0400
Subject: [SciPy-Dev] User Acceptance Testing
In-Reply-To:
References:
Message-ID:

On Sat, May 14, 2011 at 5:05 AM, Pauli Virtanen wrote:
> On Fri, 13 May 2011 19:40:44 -0400, Joshua Grant wrote:
>> Hi everyone. So I notice that for Scipy, there are some fairly
>> extensive unit tests, which are great for the actual development of
>> the library code. Are there any forms of user acceptance testing,
>> automated or otherwise? Ways to simulate user behaviour and larger
>> blocks of code, beyond single methods/functions?
>
> You can write a given functional test in the form of a unit test --
> just solve a problem as you would do normally, and put asserts at
> the end to check that the result is correct.
> In fact, this is what a majority of the unit tests in Scipy do;
> they're (fixed) functional tests for high-level routines.
>
> I'm not sure how you would generate tests automatically for library
> code (do you have some ideas :)? With UIs and web pages I can
> understand that one can do fuzz testing to simulate user input, and
> check that whatever is done does not cause totally unexpected
> results. But when the user input is scientific code, the
> specifications are so complex that I don't see a way to generate that
> automatically --- it's easier to write out given cases manually.

Since scipy is a library of individual functions and classes, the unit
tests are functional tests, and acceptance tests, since that's what the
user is using. E.g. a function in scipy.special or stats or linalg
either produces a correct result or not; scipy.optimize manages to
optimize or not. But it's all for some predesigned cases, and users can
come up with unusual edge cases, which are slow to catch.

For scipy.stats I started out with fuzz tests, throwing many different
random parameters at the distributions, but doing this on a regular
basis as part of the test suite takes too much testing time.

However, numpy and scipy are also tested against "larger blocks of code,
beyond single methods/functions", but not as part of the scipy test
suite. In the release preparation everyone is encouraged to test against
their own code. And for example tests of (at least) scikits.statsmodels
and matplotlib against the new numpy 1.6 found some bugs. The tests of
bottleneck found some strange or inconsistent edge cases. numpy has
scipy as a big user to be tested against.

But for example in the case of one bug in scikits.statsmodels with
incorrect casting, it took Skipper a while to figure out why the
precision of some test results dropped in one case. The problem is
better handled with a unit test than trying to figure out why some
results after lots of different calculations have only 2 significant
digits precision instead of 4 decimal precision (or something like
this).

So, I think functional unit tests are fine, but coverage for some parts
of scipy is not great. In some cases of failures in scipy.optimize, I'm
not sure whether it's bugs or inherent to the optimization problems. And
as an acceptance test, I have never heard of anyone running numpy.random
through the randomness test suites
http://en.wikipedia.org/wiki/Diehard_tests .

Josef

> --
> Pauli Virtanen
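A sketch of the kind of fuzz test described above -- throw random
parameters at a distribution and check a round-trip invariant (the
distribution, parameter ranges and tolerance here are arbitrary):

    import numpy as np
    from scipy import stats

    rng = np.random.RandomState(0)
    for _ in range(100):
        a = rng.uniform(0.5, 5.0)             # random shape parameter
        q = rng.uniform(0.01, 0.99, size=10)  # random probabilities
        x = stats.gamma.ppf(q, a)
        # the cdf should invert the ppf for any valid shape parameter
        np.testing.assert_allclose(stats.gamma.cdf(x, a), q, rtol=1e-8)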
From nathan.faggian at gmail.com Sat May 14 09:35:30 2011
From: nathan.faggian at gmail.com (Nathan Faggian)
Date: Sat, 14 May 2011 23:35:30 +1000
Subject: [SciPy-Dev] scikit-morph
In-Reply-To:
References: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com>
Message-ID: <2BE13CD2-1A46-4070-8DE1-28CEFAF4C20F@gmail.com>

Hi,

I am really glad to get some responses so quickly.

Gael: I will follow up with Alexis - that's a great idea!

Ralf: I would still like to push the idea of having a separate scikit
for this work. What I would like the scikit to become is a collection of
non-linear image registration algorithms, which is different to the
warping that is described by the scikit image library. Particularly, I
would also like the scikit to support the design and evaluation of
registration algorithms. Actually, maybe it is better that the scikit is
called scikit-register? Something a bit different...

Does anyone else see some duplication between scipy.ndimage and
scikits.image? Seems like morphological operators are defined in both?

-N

On 14/05/2011, at 7:57 PM, Ralf Gommers wrote:

> On Sat, May 14, 2011 at 10:50 AM, Nathan Faggian wrote:
>> I would like to start a project working on image registration
>> algorithms in python. I am proposing a new scikit called scikit-morph
>> because I believe registration is too specific for inclusion in scipy
>> directly.
>
> Is there a reason not to include this in scikits.image instead? It
> fits well there, you'd probably get more help and you wouldn't have to
> worry about setting up a new project structure.
>
> Cheers,
> Ralf
> [...]

From npkuin at gmail.com Sat May 14 09:54:39 2011
From: npkuin at gmail.com (Paul Kuin)
Date: Sat, 14 May 2011 14:54:39 +0100
Subject: [SciPy-Dev] Fwd: For certain arrays scipy.interpolate.fitpack.bisplrep fails,
In-Reply-To:
References:
Message-ID:

On Sat, May 14, 2011 at 9:52 AM, Pauli Virtanen wrote:
> On Sat, 14 May 2011 00:08:24 +0100, Paul Kuin wrote:
>> For certain arrays scipy.interpolate.fitpack.bisplrep fails, since
>> the fortran code called expects an integer for the dimension size of
>> certain arrays:
>
> Floats will be automatically cast to integers, so this by itself
> cannot cause any errors.

It does when the parameter is not cast to an integer and passed to the
fortran library.

> Please give example code that fails (how does it fail?) -- otherwise
> it will be difficult to determine the actual cause.

The failure was an error message from the fortran code. I recognized it
since I have been working with it outside python. Unfortunately I cannot
give an easy snippet of code and I have time limitations now.

My old Macbook crashed and when migrating, I decided to go with the
STScI python package. When I ran tests this was the only bug my code
stumbled upon, which I fixed. So I wanted to share the fix. You say it
was already patched, so I don't think more is needed.

Thanks for checking this out though. All your efforts help more than you
think!

>> Lines 760-761 of fitpack.py should be modified to
>>
>>     if nxest is None: nxest=int(kx+sqrt(m/2))
>>     if nyest is None: nyest=int(ky+sqrt(m/2))
>>
>> to make sure nxest and nyest are integers.
>
> That was already changed to be so in Scipy 0.9.
>
> --
> Pauli Virtanen

--
* * * * * * * * http://www.mssl.ucl.ac.uk/~npmk/ * * * *
Dr. N.P.M. Kuin      (npmk at mssl.ucl.ac.uk)
Mullard Space Science Laboratory
University College London
Holmbury St Mary, Dorking, Surrey RH5 6NT, U.K.

From robert.kern at gmail.com Sat May 14 11:16:27 2011
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 14 May 2011 10:16:27 -0500
Subject: [SciPy-Dev] User Acceptance Testing
In-Reply-To:
References:
Message-ID:

On Sat, May 14, 2011 at 08:02, wrote:
> And as an acceptance
> test, I have never heard of anyone running numpy.random through the
> randomness test suites http://en.wikipedia.org/wiki/Diehard_tests .

The DIEHARD tests are for uniform RNGs. For that, we use the original
Mersenne Twister code directly, and that has already been thoroughly
tested by DIEHARD and other such suites.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From josef.pktd at gmail.com Sat May 14 11:45:45 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 14 May 2011 11:45:45 -0400
Subject: [SciPy-Dev] User Acceptance Testing
In-Reply-To:
References:
Message-ID:

On Sat, May 14, 2011 at 11:16 AM, Robert Kern wrote:
> On Sat, May 14, 2011 at 08:02, wrote:
>
>> And as an acceptance
>> test, I have never heard of anyone running numpy.random through the
>> randomness test suites http://en.wikipedia.org/wiki/Diehard_tests .
>
> The DIEHARD tests are for uniform RNGs. For that, we use the original
> Mersenne Twister code directly, and that has already been thoroughly
> tested by DIEHARD and other such suites.

Thanks, good to hear, I didn't know the implementation in numpy has been
tested.

I'm pretty confident about the transformed random variables that are
used in scipy.stats.distributions, since I have tortured them enough, I
think.

Josef

From joshua.m.grant at gmail.com Sat May 14 23:06:35 2011
From: joshua.m.grant at gmail.com (Joshua Grant)
Date: Sat, 14 May 2011 23:06:35 -0400
Subject: [SciPy-Dev] User Acceptance Testing
In-Reply-To:
References:
Message-ID:

Well, first off, let me say that if there are any packages missing unit
tests or lacking coverage, I would definitely look into making that a
project for myself :)

As for the functional testing, that's more or less what I was thinking
of doing. Basically, a test could be a block of code that does some
nontrivial task (e.g. computing and plotting results from an SDE) using
different parts of the numpy/scipy libraries. The idea is to see how
different functions from different modules interact with each other, and
how they work "in the wild". However, I'm fairly confident that if the
unit testing returns positive, the functions would work correctly.

I'm open to more discussion about testing / QA aspects of numpy/scipy
though.
On Sat, May 14, 2011 at 11:45 AM, wrote:
> Thanks, good to hear, I didn't know the implementation in numpy has
> been tested.
>
> I'm pretty confident about the transformed random variables that are
> used in scipy.stats.distributions, since I have tortured them enough,
> I think.
>
> Josef
> [...]

From gael.varoquaux at normalesup.org Sun May 15 04:53:05 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 15 May 2011 10:53:05 +0200
Subject: [SciPy-Dev] scikit-morph
In-Reply-To: <2BE13CD2-1A46-4070-8DE1-28CEFAF4C20F@gmail.com>
References: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com> <2BE13CD2-1A46-4070-8DE1-28CEFAF4C20F@gmail.com>
Message-ID: <20110515085305.GA19718@phare.normalesup.org>

On Sat, May 14, 2011 at 11:35:30PM +1000, Nathan Faggian wrote:
> Ralf: I would still like to push the idea of having a separate scikit
> for this work.

I agree with you: scikits.image is more 2D related. In addition,
registration is a fairly big endeavour, and tackling it by itself will
probably increase the chances of success.

G

From ralf.gommers at googlemail.com Sun May 15 06:09:45 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 15 May 2011 12:09:45 +0200
Subject: [SciPy-Dev] User Acceptance Testing
In-Reply-To:
References:
Message-ID:

On Sun, May 15, 2011 at 5:06 AM, Joshua Grant wrote:
> Well, first off, let me say that if there are any packages missing
> unit tests or lacking coverage, I would definitely look into making
> that a project for myself :)

That would be very useful. Here's one:
http://projects.scipy.org/numpy/ticket/763. I'm sure there are more, but
we don't have a good overview of not-well-tested functions.

Ralf

From luca.penasa at gmail.com Tue May 17 11:12:07 2011
From: luca.penasa at gmail.com (Luca Penasa)
Date: Tue, 17 May 2011 17:12:07 +0200
Subject: [SciPy-Dev] maybe a bug into scipy.interpolate.Rbf?
Message-ID: <1305645127.5022.8.camel@greenstar>

scipy version 0.9.0-r1

I am trying to use scipy.interpolate.Rbf to interpolate a
multidimensional function.
I see the Rbf class permits the use of a callable function as the radial
basis function, but it gives me an error; see below.

My self-defined rbf:

    import numpy as np

    def ElasticDeformationRBF(this, r):
        ratio = ((r**2) / this.epsilon)
        out = ratio * np.log(ratio) + 1 - ratio
        return out

    interpolator = scipy.interpolate.Rbf(somex, somey,
                                         function=ElasticDeformationRBF)

The error is:

    /usr/lib64/python2.6/site-packages/scipy/interpolate/rbf.pyc in
    __init__(self, *args, **kwargs)
        195             setattr(self, item, value)
        196
    --> 197         self.A = self._init_function(r) - eye(self.N)*self.smooth
        198         self.nodes = linalg.solve(self.A, self.di)
        199

    /usr/lib64/python2.6/site-packages/scipy/interpolate/rbf.pyc in
    _init_function(self, r)
        159                 self._function = self.function
        160             elif argcount == 2:
    --> 161                 if sys.version_info[0] >= 3:
        162                     self._function = function.__get__(self, Rbf)
        163                 else:

    NameError: global name 'sys' is not defined

Maybe a missing import sys? test_function_is_callable() from rbf's tests
does not give any error. The help docs say, about a callable function:
"If callable, then it must take 2 arguments (self, r). The epsilon
parameter will be available as self.epsilon." So I cannot understand if
it is my fault.

Any idea? Thank you

--
---------------------------
Luca Penasa
Student at Geosciences Dpt.
University of Padua (IT)
luca.penasa at email.it
---------------------------

From scott.sinclair.za at gmail.com Tue May 17 10:19:45 2011
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Tue, 17 May 2011 16:19:45 +0200
Subject: [SciPy-Dev] maybe a bug into scipy.interpolate.Rbf?
In-Reply-To: <1305645127.5022.8.camel@greenstar>
References: <1305645127.5022.8.camel@greenstar>
Message-ID:

On 17 May 2011 17:12, Luca Penasa wrote:
> I see the Rbf class permits the use of a callable function as the
> radial basis function, but it gives me an error; see below.
> [...]
> NameError: global name 'sys' is not defined

Looks like a bug accidentally introduced here
https://github.com/scipy/scipy/commit/4e8e22983786de5150a510af6d2e12c8081898db.

You can work around the bug by adding an 'import sys' to your
/usr/lib64/python2.6/site-packages/scipy/interpolate/rbf.py

There's a pull request with a fix here
https://github.com/scipy/scipy/pull/22

Cheers,
Scott
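Until the fix lands, the missing name can also be injected at runtime
instead of editing the installed file (a sketch of the same workaround
Scott describes, done from user code):

    import sys
    import scipy.interpolate.rbf as rbf_module

    # supply the module-level name that the buggy code looks up,
    # so the NameError on 'sys' goes away
    rbf_module.sys = sys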
From james.bergstra at gmail.com Fri May 20 16:56:09 2011
From: james.bergstra at gmail.com (James Bergstra)
Date: Fri, 20 May 2011 16:56:09 -0400
Subject: [SciPy-Dev] ttest_1samp strange behaviour
Message-ID:

I was trying to make sure that ttest_1samp did what I thought and came
across the following...

    In [41]: for i in range(10):
       ....:     print scipy.stats.ttest_1samp([0]*i, 0)
       ....:
       ....:
    (nan, nan)
    (1.0, nan)
    (1.0, 0.49999999999999956)
    (1.0, 0.42264973081037427)
    (1.0, 0.39100221895577053)
    (1.0, 0.37390096630005898)
    (1.0, 0.36321746764912255)
    (1.0, 0.35591768374958205)
    (1.0, 0.35061666282020748)
    (1.0, 0.34659350708733416)

Am I interpreting this correctly that the null hypothesis is decreasing
in probability as we actually observe it more times?

James

--
http://www-etud.iro.umontreal.ca/~bergstrj

From josef.pktd at gmail.com Fri May 20 17:04:47 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 20 May 2011 17:04:47 -0400
Subject: [SciPy-Dev] ttest_1samp strange behaviour
In-Reply-To:
References:
Message-ID:

On Fri, May 20, 2011 at 4:56 PM, James Bergstra wrote:
> I was trying to make sure that ttest_1samp did what I thought and came
> across the following...
> In [41]: for i in range(10):
>    ....:     print scipy.stats.ttest_1samp([0]*i, 0)

I don't know if this makes sense -- the variance is zero in your case.

Try something like

    for i in range(2,1000,10):
        print(stats.ttest_1samp(1e-10*np.random.randn(i), 0))

Josef

> [...]
> Am I interpreting this correctly that the null hypothesis is
> decreasing in probability as we actually observe it more times?
>
> James

From james.bergstra at gmail.com Fri May 20 17:11:28 2011
From: james.bergstra at gmail.com (James Bergstra)
Date: Fri, 20 May 2011 17:11:28 -0400
Subject: [SciPy-Dev] ttest_1samp strange behaviour
In-Reply-To:
References:
Message-ID:

I appreciate that I presented a corner case, but your example gives
something even more bizarre: the two-tailed p-value jumps erratically
almost all the way between 0 and 1 on adjacent calls! Shouldn't it hold
steady and approach 1?

On Fri, May 20, 2011 at 5:04 PM, wrote:
> On Fri, May 20, 2011 at 4:56 PM, James Bergstra wrote:
> > I was trying to make sure that ttest_1samp did what I thought and
> > came across the following...
> > In [41]: for i in range(10):
> >    ....:     print scipy.stats.ttest_1samp([0]*i, 0)
>
> I don't know if this makes sense -- the variance is zero in your case.
>
> Try something like
>
> for i in range(2,1000,10):
>     print(stats.ttest_1samp(1e-10*np.random.randn(i), 0))
>
> Josef
>
> > [...]
> > Am I interpreting this correctly that the null hypothesis is
> > decreasing in probability as we actually observe it more times?
> >
> > James
> > --
> > http://www-etud.iro.umontreal.ca/~bergstrj

--
http://www-etud.iro.umontreal.ca/~bergstrj

From josef.pktd at gmail.com Fri May 20 17:34:52 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 20 May 2011 17:34:52 -0400
Subject: [SciPy-Dev] ttest_1samp strange behaviour
In-Reply-To:
References:
Message-ID:

On Fri, May 20, 2011 at 5:11 PM, James Bergstra wrote:
> I appreciate that I presented a corner case, but your example gives
> something even more bizarre: the two-tailed p-value jumps erratically
> almost all the way between 0 and 1 on adjacent calls! Shouldn't it
> hold steady and approach 1?

(I might have to check, but I'm pretty sure.)

Under the null the p-values are uniformly distributed.

Try for power instead, and you should see the monotonic decrease:

    >>> for i in range(2,60,2):
    ...     print(stats.ttest_1samp(1e-10+1e-10*np.random.randn(i), 0))

Josef
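A quick way to see that point numerically: under the null the p-values
should be uniform on [0, 1], which a KS test against the uniform
distribution can check (a sketch; the sample sizes are arbitrary):

    import numpy as np
    from scipy import stats

    nrep = 10000
    # nrep independent null samples of size 20, tested along axis 0
    pvals = stats.ttest_1samp(np.random.randn(20, nrep), 0)[1]
    print(stats.kstest(pvals, 'uniform'))  # expect a large p-value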
From cgohlke at uci.edu Fri May 20 19:05:28 2011
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Fri, 20 May 2011 16:05:28 -0700
Subject: [SciPy-Dev] scipy.special.expi complex test failure with scipy 0.9, numpy 1.6.1dev
Message-ID: <4DD6F3B8.4000709@uci.edu>

Hello,

I am getting the following scipy.special.expi failure when testing scipy
0.9, which was built against numpy 1.6.1dev (these are Windows/msvc9/MKL
builds).

E.g. scipy.special.expi(1e-99+0j) returns -227.3787085+6.2831855j, not
-227.3787085+0j.

This is not Python 2/3 version or 32/64 bit specific.

Scipy 0.9 built against numpy 1.5.1 passes the test.

Christoph

======================================================================
FAIL: test_mpmath.test_expi_complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python32\lib\site-packages\nose\case.py", line 188, in runTest
    self.test(*self.arg)
  File "C:\Python32\lib\site-packages\numpy\testing\decorators.py", line 147, in skipper_func
    return f(*args, **kwargs)
  File "C:\Python32\lib\site-packages\scipy\special\tests\test_mpmath.py", line 48, in test_expi_complex
    FuncData(sc.expi, dataset, 0, 1).check()
  File "C:\Python32\lib\site-packages\scipy\special\tests\testutils.py", line 224, in check
    assert_(False, "\n".join(msg))
  File "C:\Python32\lib\site-packages\numpy\testing\utils.py", line 34, in assert_
    raise AssertionError(msg)
AssertionError:
Max |adiff|: 3.86856e+25
Max |rdiff|: 0.304157
Bad results for the following points (in output 0):
(1e-99+0j) => (-227.37870854150898+6.283185307179586j) != (-227.37870854150898+0j) (rdiff 0.02763312953742352)
(1.668100537200083e-88+0j) => (-201.5385869423536+6.283185307179586j) != (-201.53858694235356+0j) (rdiff 0.03117609090400528)
(2.7825594022071145e-77+0j) => (-175.6984653431982+6.283185307179586j) != (-175.6984653431982+0j) (rdiff 0.03576118490794107)
(4.6415888336126776e-66+0j) => (-149.8583437440428+6.283185307179586j) != (-149.8583437440428+0j) (rdiff 0.041927497329819895)
(7.742636826811214e-55+0j) => (-124.01822214488739+6.283185307179586j) != (-124.01822214488739+0j) (rdiff 0.050663404123299706)
(1.2915496650148721e-43+0j) => (-98.178100545732+6.283185307179586j) != (-98.178100545732+0j) (rdiff 0.06399782917222806)
(2.1544346900318602e-32+0j) => (-72.3379789465766+6.283185307179586j) != (-72.3379789465766+0j) (rdiff 0.08685873449436396)
(3.5938136638045226e-21+0j) => (-46.49785734742121+6.283185307179586j) != (-46.49785734742121+0j) (rdiff 0.13512849119547773)
(5.994842503189323e-10+0j) => (-20.657735747666305+6.283185307179586j) != (-20.657735747666308+0j) (rdiff 0.30415653409107984)

From josef.pktd at gmail.com Fri May 20 19:06:19 2011
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 20 May 2011 19:06:19 -0400
Subject: [SciPy-Dev] ttest_1samp strange behaviour
In-Reply-To:
References:
Message-ID:

On Fri, May 20, 2011 at 5:34 PM, wrote:
> On Fri, May 20, 2011 at 5:11 PM, James Bergstra wrote:
>> I appreciate that I presented a corner case, but your example gives
>> something even more bizarre: the two-tailed p-value jumps erratically
>> almost all the way between 0 and 1 on adjacent calls! Shouldn't it
>> hold steady and approach 1?
>
> (I might have to check, but I'm pretty sure.)
>
> Under the null the p-values are uniformly distributed.
>
> Try for power instead, and you should see the monotonic decrease:
>
>>>> for i in range(2,60,2): print(stats.ttest_1samp(1e-10+1e-10*np.random.randn(i), 0))

(Now that I had a bit more time:) the standard way to check tests is
that they are correctly sized under the null, and have power under the
alternative. Correctly sized looks good, see below.

Zero variance has to be handled specially. I set the test statistic for
stats.ttest_1samp in this case to one. I remember I asked on the mailing
list before changing this. (Essentially, what is 0/0 in this case?)

    t = np.where((d==0)*(v==0), 1.0, t)

I never looked at what pattern that would imply for the p-value.
Some quick Monte Carlo tests of whether ttest_1samp is correctly sized
under the null hypothesis:

    >>> nrep = 10000
    >>>
    >>> for i in range(5,50,2): print((stats.ttest_1samp(1e-10*np.random.randn(i,nrep), 0)[1]<0.1).sum()/float(nrep))
    0.0981
    0.1042
    0.099
    0.0996
    0.1033
    0.096
    0.0951
    0.1045
    0.0992
    0.1028
    0.101
    0.0948
    0.1
    0.0949
    0.1003
    0.0996
    0.1033
    0.1049
    0.1057
    0.098
    0.1027
    0.1008
    0.1022
    >>> for i in range(5,50,2): print((stats.ttest_1samp(1e-10*np.random.randn(i,nrep), 0)[1]<0.05).sum()/float(nrep))
    0.0459
    0.0495
    0.0475
    0.0509
    0.0501
    0.0501
    0.0535
    0.0496
    0.0505
    0.0525
    0.0524
    0.0494
    0.0506
    0.0495
    0.0503
    0.0515
    0.0518
    0.0512
    0.0531
    0.0511
    0.0502
    0.0515
    0.0504

Looks fine,

Josef

From charlesr.harris at gmail.com Fri May 20 19:43:54 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 20 May 2011 17:43:54 -0600
Subject: [SciPy-Dev] scipy.special.expi complex test failure with scipy 0.9, numpy 1.6.1dev
In-Reply-To: <4DD6F3B8.4000709@uci.edu>
References: <4DD6F3B8.4000709@uci.edu>
Message-ID:

On Fri, May 20, 2011 at 5:05 PM, Christoph Gohlke wrote:
> Hello,
>
> I am getting the following scipy.special.expi failure when testing
> scipy 0.9, which was built against numpy 1.6.1dev (these are
> Windows/msvc9/MKL builds).
>
> E.g. scipy.special.expi(1e-99+0j) returns -227.3787085+6.2831855j, not
> -227.3787085+0j.
>
> This is not Python 2/3 version or 32/64 bit specific.
>
> Scipy 0.9 built against numpy 1.5.1 passes the test.
>
> Christoph
>
> ======================================================================
> FAIL: test_mpmath.test_expi_complex
> ----------------------------------------------------------------------
> [...]

Curious. I don't see this on Ubuntu; it looks like a factor of 2*pi*1j
is getting added on. Looks like a corner case involving complex log or
some such.

Chuck

From pav at iki.fi Fri May 20 20:33:05 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 21 May 2011 00:33:05 +0000 (UTC)
Subject: [SciPy-Dev] scipy.special.expi complex test failure with scipy 0.9, numpy 1.6.1dev
References: <4DD6F3B8.4000709@uci.edu>
Message-ID:

On Fri, 20 May 2011 17:43:54 -0600, Charles R Harris wrote:
[clip: expi]
> Curious. I don't see this on Ubuntu; it looks like a factor of 2*pi*1j
> is getting added on. Looks like a corner case involving complex log or
> some such.

The implementation is based on calculating E1(-z), and cancelling out
the i*pis with a logarithm. It's a bit of a bad form on the real axis,
and apparently the branch cuts are not consistently chosen with MKL. I'd
guess the values will be OK away from the real axis, though, but some
rewriting probably needs to be done.

Pauli
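The failing test compares against mpmath, so the behaviour is easy to
probe directly; a sketch for checking points on and just off the real
axis (the sample points are arbitrary):

    import mpmath
    from scipy import special

    for z in [1e-10 + 0j, 1e-10 + 1e-12j, 1e-10 - 1e-12j]:
        print(z, special.expi(z), complex(mpmath.ei(z)))
    # on a good build the imaginary parts agree; the report above shows
    # a spurious +2*pi*1j exactly on the real axis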
From charlesr.harris at gmail.com Fri May 20 22:39:40 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 20 May 2011 20:39:40 -0600
Subject: [SciPy-Dev] scipy.special.expi complex test failure with scipy 0.9, numpy 1.6.1dev
In-Reply-To:
References: <4DD6F3B8.4000709@uci.edu>
Message-ID:

On Fri, May 20, 2011 at 6:33 PM, Pauli Virtanen wrote:
> On Fri, 20 May 2011 17:43:54 -0600, Charles R Harris wrote:
> [clip: expi]
> > Curious. I don't see this on Ubuntu; it looks like a factor of
> > 2*pi*1j is getting added on. Looks like a corner case involving
> > complex log or some such.
>
> The implementation is based on calculating E1(-z), and cancelling out
> the i*pis with a logarithm. It's a bit of a bad form on the real axis,
> and apparently the branch cuts are not consistently chosen with MKL.
> I'd guess the values will be OK away from the real axis, though,
> but some rewriting probably needs to be done.

IIRC, the default flags for compiling with MKL on Windows changed
between 1.5 and 1.6. I wonder if that has anything to do with this?

Chuck

From cgohlke at uci.edu Fri May 20 23:11:55 2011
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Fri, 20 May 2011 20:11:55 -0700
Subject: [SciPy-Dev] scipy.special.expi complex test failure with scipy 0.9, numpy 1.6.1dev
In-Reply-To:
References: <4DD6F3B8.4000709@uci.edu>
Message-ID: <4DD72D7B.5050906@uci.edu>

On 5/20/2011 7:39 PM, Charles R Harris wrote:
> IIRC, the default flags for compiling with MKL on Windows changed
> between 1.5 and 1.6. I wonder if that has anything to do with this?
>
> Chuck

The ifort flags changed from '/O1 /Qip' to '/O1'. But this wasn't the
problem. Turns out that in between the numpy 1.5.1 and 1.6.x builds I
changed the processor from a Core2 Quad to a Core i7. If I force
'/QaxSSE3' instead of using numpy's default option '/QaxM', the test
passes and expi() returns correct results.

Thank you and Pauli for your help!

Christoph

From stefan at sun.ac.za Sat May 21 09:35:10 2011
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 21 May 2011 13:35:10 +0000 (UTC)
Subject: [SciPy-Dev] scikit-morph
References: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com> <2BE13CD2-1A46-4070-8DE1-28CEFAF4C20F@gmail.com>
Message-ID:

Hi Nathan

Nathan Faggian <nathan.faggian at gmail.com> writes:

> What I would like the scikit to become is a collection of non-linear
> image registration algorithms, which is different to the warping that
> is described by the scikit image library. Particularly, I would also
> like the scikit to support the design and evaluation of registration
> algorithms. Actually, maybe it is better that the scikit is called
> scikit-register?

Image registration is definitely on the cards for scikits.image.
I already have some rigid registration algorithms implemented at

http://github.com/stefanv/supreme

but I'd like to round those off and make them more robust. Currently, there are
feature-based and dense methods.

> Does anyone else see some duplication between the scipy.ndimage and
scikit.image? Seems like morphological operators are defined in both?

We try not to duplicate functionality unless necessary. Unfortunately, it is
hard to maintain scipy.ndimage, so sometimes we rewrite or wrap algorithms as
appropriate.

The buffered approach ndimage uses to do filtering is actually very efficient,
so we build on that until we have a better replacement (we're working on
different backends now, such as OpenCL, Theano etc.)

Regards
Stéfan

From gokhansever at gmail.com Sat May 21 18:24:12 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Sat, 21 May 2011 16:24:12 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
Message-ID:

Hello,

Could we add a "tolf" argument to the newton function signature?

I am guessing this should match the tolf argument in IDL's newton
(http://star.pst.qub.ac.uk/idl/NEWTON.html):

TOLF: Set the convergence criterion on the function values. The default
value is 1.0 x 10^-4.

def newton(func, x0, fprime=None, args=(), tol=1.48e-8, maxiter=50, tolf=1.e-4)

In the secant method part of the newton function (from
https://github.com/scipy/scipy/blob/master/scipy/optimize/zeros.py):

# Secant method
p0 = x0
if x0 >= 0:
    p1 = x0*(1 + 1e-4) + 1e-4
else:
    p1 = x0*(1 + 1e-4) - 1e-4

Without increasing tolf (the 1e-4 in these statements), I can't get a
proper root for my function.

The reason I am experimenting with newton is that fsolve seems slower
than newton for scalar root finding for a given function.

Consider this example, run in a Sage v4.6.1 notebook:

%cython
cpdef double myfunc(double x):
    return x**3 + 2*x - 1

timeit('scipy.optimize.newton(myfunc,1)')
625 loops, best of 3: 22.1 µs per loop

timeit('scipy.optimize.fsolve(myfunc,1)')
625 loops, best of 3: 86.5 µs per loop

I am also experimenting with Cythonizing the Newton secant method; the
Cython version shows significant speed-ups compared to the Python version.
Without going any further, I would like to know if there is any Cythonized
code around for newton, or any other approach to make fsolve faster?

Thanks.

--
Gökhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From charlesr.harris at gmail.com Sat May 21 19:25:19 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 21 May 2011 17:25:19 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sat, May 21, 2011 at 4:24 PM, Gökhan Sever wrote:

> Hello,
>
> Could we add a "tolf" argument to the newton function signature?
>
> I am guessing this should match the tolf argument in IDL's newton
> (http://star.pst.qub.ac.uk/idl/NEWTON.html):
>
> TOLF: Set the convergence criterion on the function values. The default
> value is 1.0 x 10^-4.
>
> def newton(func, x0, fprime=None, args=(), tol=1.48e-8, maxiter=50, tolf=1.e-4)
>
> In the secant method part of the newton function (from
> https://github.com/scipy/scipy/blob/master/scipy/optimize/zeros.py)
>
> # Secant method
> p0 = x0
> if x0 >= 0:
>     p1 = x0*(1 + 1e-4) + 1e-4
> else:
>     p1 = x0*(1 + 1e-4) - 1e-4
>
> Without increasing tolf (the 1e-4 in these statements), I can't get a
> proper root for my function.
>
> The reason I am experimenting with newton is that fsolve seems slower
> than newton for scalar root finding for a given function.
>
> Consider this example, run in a Sage v4.6.1 notebook:
>
> %cython
> cpdef double myfunc(double x):
>     return x**3 + 2*x - 1
>
> timeit('scipy.optimize.newton(myfunc,1)')
> 625 loops, best of 3: 22.1 µs per loop
>
> timeit('scipy.optimize.fsolve(myfunc,1)')
> 625 loops, best of 3: 86.5 µs per loop
>
> I am also experimenting with Cythonizing the Newton secant method; the
> Cython version shows significant speed-ups compared to the Python version.
> Without going any further, I would like to know if there is any Cythonized
> code around for newton, or any other approach to make fsolve faster?
>
> Thanks.
>

You could probably adapt one of the other 1d zero finders, say ridder, and
just ignore all the fancy stuff for the bounding interval and such. I don't
much like the stopping criterion in newton either, and ftol would probably
help, but it might be worth thinking about overstepping and looking for a
sign change, or something like that, which would give more assurance that a
zero was at hand. Note that there is also a pull request for using the
second derivative as well as the first.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From nathan.faggian at gmail.com Sat May 21 20:10:20 2011
From: nathan.faggian at gmail.com (Nathan Faggian)
Date: Sun, 22 May 2011 10:10:20 +1000
Subject: [SciPy-Dev] scikit-morph
In-Reply-To:
References: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com>
	<2BE13CD2-1A46-4070-8DE1-28CEFAF4C20F@gmail.com>
Message-ID: <2D16773A-83EC-42ED-B60C-D0250331F264@gmail.com>

Hi Stefan,

Your supreme project looks great! A couple of years ago I dabbled with
image registration for video stabilisation, and the RANSAC algorithm and
linear registration you have implemented are also suitable for that
application. Do you also know about the X84 rejection rule?

I hope I didn't sound too critical when I pointed out the duplication; I
really think the image scikit is great, and I am keen to see how you go
with different back ends to speed up computation.

In scikit-morph I plan to implement dense non-linear image registration
methods, for example: http://www.fmrib.ox.ac.uk/fsl/fnirt/index.html

I am part way through implementing the approach used by FNIRT, and once I
get it working in 2D I will make some noise on the mailing list. So far I
have had a bit of fun looking into Cython to speed up image sampling.

-N

On 21/05/2011, at 11:35 PM, Stefan van der Walt wrote:

> Hi Nathan
>
> Nathan Faggian <nathan.faggian at gmail.com> writes:
>
>> What I would like the scikit to become is a collection of non-linear image
> registration algorithms, which is different to the warping that is described by
> the scikit image library. Particularly, I would also like the scikit to support
> the design and evaluation of registration algorithms. Actually, maybe it is
> better that the scikit is called scikit-register?
>
> Image registration is definitely on the cards for scikits.image. I already have
> some rigid registration algorithms implemented at
>
> http://github.com/stefanv/supreme
>
> but I'd like to round those off and make them more robust. Currently, there are
> feature-based and dense methods.
>
>> Does anyone else see some duplication between the scipy.ndimage and
> scikit.image? Seems like morphological operators are defined in both?
>
> We try not to duplicate functionality unless necessary. Unfortunately, it is
> hard to maintain scipy.ndimage, so sometimes we rewrite or wrap algorithms as
> appropriate.
>
> The buffered approach ndimage uses to do filtering is actually very efficient,
> so we build on that until we have a better replacement (we're working on
> different backends now, such as OpenCL, Theano etc.)
>
> Regards
> Stéfan
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev

From gokhansever at gmail.com Sat May 21 21:07:21 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Sat, 21 May 2011 19:07:21 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sat, May 21, 2011 at 5:25 PM, Charles R Harris wrote:
>
> You could probably adapt one of the other 1d zero finders, say ridder, and
> just ignore all the fancy stuff for the bounding interval and such.

I have tried to use those solvers (listed under scalar functions at
http://docs.scipy.org/doc/scipy/reference/optimize.html) but fail to
make them work for my function. I am not sure how to automatically
find an interval for the solution and make sure that f(a) and f(b) will
have opposite signs.

>
> I don't much like the stopping criterion in newton either, and ftol would
> probably help, but it might be worth thinking about overstepping and
> looking for a sign change, or something like that, which would give more
> assurance that a zero was at hand. Note that there is also a pull request
> for using the second derivative as well as the first.

Now I see an issue with the different versions of scipy used for
newton. See my work at -> http://www.sagenb.org/home/pub/2801/
The version of SciPy at sagenb.org is 0.7, my local Sage v4.6.1
uses v0.8, and my local SciPy is '0.10.0dev' -- a source build from a
couple weeks ago.

SciPy v0.8 and above use these definitions in the secant method:

# Secant method
if x0 >= 0:
    p1 = x0*(1 + 1e-4) + 1e-4
else:
    p1 = x0*(1 + 1e-4) - 1e-4

whereas v0.7 at sagenb.org uses

p1 = x0*(1+1e-4)

A full diff shows a bit more changes -- style corrections and warnings
added in 0.8 and above.

For the cythonized function petters_solve_for_rw, "fsolve" can
successfully find the root, and likewise the newton at sagenb using the
tol=1.e-10 argument (newton being 5 times faster).

Later I copied the newton function from
https://github.com/scipy/scipy/blob/master/scipy/optimize/zeros.py and
just used the secant method parts to make it work locally. This supports
the idea of adding an ftol argument and making the appropriate change of
1.e-4 to 1.e-10 (at least this works well for my petters_solve_for_rw
function case). This seems slower compared to the SciPy newton -- might
be due to the internals of Sage.

In the next step, I tried Cython-compiling the pnewton function.
This is about 10-15X faster compared to fsolve, and about 8-9X faster
than the Python version (pnewton). For some reason newton at
sagenb.org is fast -- but 2-3X slower than Cython.

One solution for myself is to go ahead and define a local
cythonized newton in my code library, unless I find another fast
method for root solving of a scalar function. Indeed I use fsolve
for finding two roots, passing an array rather than a scalar, to
make some estimations in another part of the code. However, those
calculations are called only a few times and don't add any overhead to the
final computation.
The newton that I am trying to accelerate is called anywhere from one to
many millions of times, depending on the simulation case, thus the need
for a fast root solver.

I haven't tried the fprime option yet. How should I approach this one?
Use a symbolic package to compute the derivative of the function, and
call the solver with fprime set?

>
> Chuck
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

--
Gökhan

From charlesr.harris at gmail.com Sat May 21 21:50:01 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 21 May 2011 19:50:01 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sat, May 21, 2011 at 7:07 PM, Gökhan Sever wrote:

> On Sat, May 21, 2011 at 5:25 PM, Charles R Harris
> wrote:
> >
> > You could probably adapt one of the other 1d zero finders, say ridder, and
> > just ignore all the fancy stuff for the bounding interval and such.
>
> I have tried to use those solvers (listed under scalar functions at
> http://docs.scipy.org/doc/scipy/reference/optimize.html) but fail to
> make them work for my function. I am not sure how to automatically
> find an interval for the solution and make sure that f(a) and f(b) will
> have opposite signs.
>

Yeah, that left me thinking that we could really use a bracketing function.
The brent optimizer must have something like that which it uses to bracket
minima, and perhaps that could be adapted. I know there are bracketing
methods out there; IIRC there is one in NR.

> >
> > I don't much like the stopping criterion in newton either, and ftol would
> > probably help, but it might be worth thinking about overstepping and
> > looking for a sign change, or something like that, which would give more
> > assurance that a zero was at hand. Note that there is also a pull request
> > for using the second derivative as well as the first.
>
> Now I see an issue with the different versions of scipy used for
> newton. See my work at -> http://www.sagenb.org/home/pub/2801/
> The version of SciPy at sagenb.org is 0.7, my local Sage v4.6.1
> uses v0.8, and my local SciPy is '0.10.0dev' -- a source build from a
> couple weeks ago.
>
> SciPy v0.8 and above use these definitions in the secant method:
>
> # Secant method
> if x0 >= 0:
>     p1 = x0*(1 + 1e-4) + 1e-4
> else:
>     p1 = x0*(1 + 1e-4) - 1e-4
>
> whereas v0.7 at sagenb.org uses
>
> p1 = x0*(1+1e-4)
>

IIRC, the old version blew up when the root was at zero; the problem was
posted on the list.

> A full diff shows a bit more changes -- style corrections and warnings
> added in 0.8 and above.
>
> For the cythonized function petters_solve_for_rw, "fsolve" can
> successfully find the root, and likewise the newton at sagenb using the
> tol=1.e-10 argument (newton being 5 times faster).
>
> Later I copied the newton function from
> https://github.com/scipy/scipy/blob/master/scipy/optimize/zeros.py and
> just used the secant method parts to make it work locally. This supports
> the idea of adding an ftol argument and making the appropriate change of
> 1.e-4 to 1.e-10 (at least this works well for my petters_solve_for_rw
> function case). This seems slower compared to the SciPy newton -- might
> be due to the internals of Sage.
>

I believe there is only one newton method in scipy, but we moved it into
the zeros module and deprecated the version at the old location. It has
since been removed.

> In the next step, I tried Cython-compiling the pnewton function.
> This is about 10-15X faster compared to fsolve, and about 8-9X faster
> than the Python version (pnewton). For some reason newton at
> sagenb.org is fast -- but 2-3X slower than Cython.
>
> One solution for myself is to go ahead and define a local
> cythonized newton in my code library, unless I find another fast
> method for root solving of a scalar function. Indeed I use fsolve
> for finding two roots, passing an array rather than a scalar, to
> make some estimations in another part of the code. However, those
> calculations are called only a few times and don't add any overhead to the
> final computation. The newton that I am trying to accelerate is called
> anywhere from one to many millions of times, depending on the simulation
> case, thus the need for a fast root solver.
>
> I haven't tried the fprime option yet. How should I approach this one?
> Use a symbolic package to compute the derivative of the function, and
> call the solver with fprime set?
>

I would just stick to the secant method in the general case unless the
derivative is easy to come by. Note that newton is one of the 'original'
functions in scipy; the other 1d zero finders came later. So if you can make
an improved cythonized version I don't see any reason not to use it. If you
do make cythonized versions it might be worth implementing the Newton and
secant parts separately and making the current newton a driver function.

How many function evaluations are you seeing in the root finding?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From warren.weckesser at enthought.com Sun May 22 01:51:29 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Sun, 22 May 2011 00:51:29 -0500
Subject: [SciPy-Dev] Used "Automatic Merge" on my github pull request...
Message-ID:

I used the "Automatic merge" button on my own pull request on github, but
afterwards discovered that it uses --no-ff, so my single commit also
resulted in a "merge" commit:

https://github.com/scipy/scipy/commit/cf04a2b8dd4cf258413687ec146883ea5ab197cb

Should I try to get rid of that merge? If so, how? (The git book is by my
side, but I suspect an answer will show up here faster than I can find it.)

Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gokhansever at gmail.com Sun May 22 01:55:02 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Sat, 21 May 2011 23:55:02 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sat, May 21, 2011 at 7:50 PM, Charles R Harris wrote:
> Yeah, that left me thinking that we could really use a bracketing function.
> The brent optimizer must have something like that which it uses to bracket
> minima, and perhaps that could be adapted. I know there are bracketing
> methods out there; IIRC there is one in NR.

I have managed to get those bracketed solvers working after plotting
my function. brenth seems to converge the fastest, but providing the
search interval is still a problem for me, so I am probably skipping
these solvers.

>
> IIRC, the old version blew up when the root was at zero; the problem was
> posted on the list.
>
> I believe there is only one newton method in scipy, but we moved it into
> the zeros module and deprecated the version at the old location. It has
> since been removed.

Yes, you are right.
At the current source repository, there is only one newton in
scipy.optimize, and it resides in zeros.py.

>
> I would just stick to the secant method in the general case unless the
> derivative is easy to come by. Note that newton is one of the 'original'
> functions in scipy; the other 1d zero finders came later. So if you can make
> an improved cythonized version I don't see any reason not to use it. If you
> do make cythonized versions it might be worth implementing the Newton and
> secant parts separately and making the current newton a driver function.

I have figured out the derivative option and tested fsolve and newton
with the fprime arg provided. Still slower compared to the secant method.
Most likely the derivative function requires a fair bit of calculation
to evaluate. As you can see, the function is:

cpdef double petters_solve_for_rw(double x, double rd, double rh):
    return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa))

but the derivative is quite complex compared to the function:

cpdef double pprime(double x, double rd, double rh):
    return -3*(rd**3 - x**3)*x**2*exp(kelvin/x)/((kappa - 1.0)*rd**3 + x**3)**2 \
           - 3*x**2*exp(kelvin/x)/((kappa - 1.0)*rd**3 + x**3) \
           - (rd**3 - x**3)*kelvin*exp(kelvin/x)/(((kappa - 1.0)*rd**3 + x**3)*x**2)

Skipping the fprime option, I focus on the secant method, which works
the fastest and is quite robust for my case. Below you can see the latest
version of the cythonized secant (probably I should update the name)
that I use:

cpdef double cnewton(func, double x0, args=(), double tol=1e-10, int maxiter=50):
    # Secant method
    p0 = x0
    # seed the second point just above x0
    p1 = x0*(1 + 1e-10) + 1e-10
    #p1 = x0*(1+1e-4)
    q0 = func(*((p0,) + args))
    q1 = func(*((p1,) + args))
    for it in range(maxiter):
        # secant update
        p = p1 - q1*(p1 - p0)/(q1 - q0)
        if abs(p - p1) < tol:
            return p
        p0 = p1
        q0 = q1
        p1 = p
        q1 = func(*((p1,) + args))
    # fell through the loop without meeting tol, same failure branch
    # as the scipy function this was copied from
    raise RuntimeError("Failed to converge after %d iterations, value is %s"
                       % (maxiter, p))

I simplified this block

if x0 >= 0:
    p1 = x0*(1 + 1e-4) + 1e-4
else:
    p1 = x0*(1 + 1e-4) - 1e-4

to just one line, since I am not interested in a negative initial point.
In other words, real drops can never have a radius below 0 meters.

p1 = x0*(1 + 1e-10) + 1e-10
#p1 = x0*(1+1e-4)

I also skipped this block:

if q1 == q0:
    if p1 != p0:
        msg = "Tolerance of %s reached" % (p1 - p0)
        warnings.warn(msg, RuntimeWarning)
    return (p1 + p0)/2.0

because this part is almost never executed.

>
> How many function evaluations are you seeing in the root finding?
>

With this version of the newton, I see about 9-10 function evaluations
to converge to a meaningful and reasonable root.

> Chuck

From ralf.gommers at googlemail.com Sun May 22 05:49:44 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 22 May 2011 11:49:44 +0200
Subject: [SciPy-Dev] Used "Automatic Merge" on my github pull request...
In-Reply-To: References: Message-ID:

On Sun, May 22, 2011 at 7:51 AM, Warren Weckesser <
warren.weckesser at enthought.com> wrote:

> I used the "Automatic merge" button on my own pull request on github, but
> afterwards discovered that it uses --no-ff, so my single commit also
> resulted in a "merge" commit:
>
> https://github.com/scipy/scipy/commit/cf04a2b8dd4cf258413687ec146883ea5ab197cb
>
> Should I try to get rid of that merge? If so, how? (The git book is by my
> side, but I suspect an answer will show up here faster than I can find it.)
>

No, it's public now so you shouldn't touch it anymore. That button is very
pointless - best to ignore it.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From josef.pktd at gmail.com Sun May 22 08:22:04 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 22 May 2011 08:22:04 -0400 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 1:55 AM, G?khan Sever wrote: > On Sat, May 21, 2011 at 7:50 PM, Charles R Harris > wrote: >> Yeah, that left me thinking that we could really use a bracketing function. >> The brent optimizer must have something like that it uses to bracket >> minimums and perhaps that could be adapted. I know there are bracketing >> methods out there, IIRC there is one in NR. > > I have managed to get those bracketed solvers working after plotting > my function. brenth seems converging the fastest but still providing > the search interval is a problem for me. I am probably skipping these > solvers. >> >> IIRC, the old version blew up when the root was at zero, the problem was >> posted on the list. >> >> ? I believe there is only one newton method in scipy, but we moved it into >> the zeros module and deprecated the version at the old location. It has >> since been removed. > > Yes, you are right. At the current source repository, there is only > one newton in scipy.optimize which resides in zeros.py > >> >> I would just stick to the secant method in the general case unless the >> derivative is easy to come by. Note that newton is one of the 'original' >> functions in scipy, the other 1d zero finders came later. So if you can make >> an improved cythonized version I don't see any reason not to use it. If you >> do make cythonized versions it might be worth implementing the Newton and >> secant parts separately and make the current newton a driver function. > > I have figured out the derivative option and tested fsolve and newton > with fprime arg provided. Still slower comparing to the secant method. > Most likely, that the derivative function requires a bit calculation > to be evaluated. As you can see, the function is: > > cpdef double petters_solve_for_rw(double x, double rd, double rh): > ? ?return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa)) Wouldn't this be easier (for derivatives), and maybe be more stable, taking logs np.log(rh) - kelvin/x + np.log(..) ... ? (independently of any improvement to the solvers) Josef From warren.weckesser at enthought.com Sun May 22 12:12:44 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 22 May 2011 11:12:44 -0500 Subject: [SciPy-Dev] Building the docs: need "plot_directive" extension Message-ID: I'm trying to build the HTML docs. Running 'make html' in the doc/ directory first resulted in: Extension error: Could not import extension numpydoc (exception: No module named numpydoc) so I installed the numpydoc package from pypi. Now I get this: Extension error: Could not import extension plot_directive (exception: No module named plot_directive) How do I install the 'plot_directive' extension? I'm using python 2.7, numpy 1.5.1, matplotlib 1.0.1. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun May 22 12:21:01 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 22 May 2011 11:21:01 -0500 Subject: [SciPy-Dev] Building the docs: need "plot_directive" extension In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 11:12 AM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > I'm trying to build the HTML docs. 
Running 'make html' in the doc/ > directory first resulted in: > > Extension error: > Could not import extension numpydoc (exception: No module named > numpydoc) > > so I installed the numpydoc package from pypi. Now I get this: > > Extension error: > Could not import extension plot_directive (exception: No module named > plot_directive) > > How do I install the 'plot_directive' extension? > > I just compared an old svn checkout that I have to the current trunk; looks like the directory doc/sphinxext/ has been removed, which is where plot_directory.py lived. Now to figure out why... Warren > I'm using python 2.7, numpy 1.5.1, matplotlib 1.0.1. > > > Warren > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun May 22 12:42:19 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 22 May 2011 12:42:19 -0400 Subject: [SciPy-Dev] Building the docs: need "plot_directive" extension In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 12:21 PM, Warren Weckesser wrote: > > > On Sun, May 22, 2011 at 11:12 AM, Warren Weckesser > wrote: >> >> I'm trying to build the HTML docs.? Running 'make html' in the doc/ >> directory first resulted in: >> >> ??? Extension error: >> ??? Could not import extension numpydoc (exception: No module named >> numpydoc) >> >> so I installed the numpydoc package from pypi.? Now I get this: >> >> ??? Extension error: >> ??? Could not import extension plot_directive (exception: No module named >> plot_directive) >> >> How do I install the 'plot_directive' extension? >> > > > I just compared an old svn checkout that I have to the current trunk; looks > like the directory doc/sphinxext/ has been removed, which is where > plot_directory.py lived.? Now to figure out why... could also be that your sphinx version is too old, but I don't know what the current requirement is. Josef > > > Warren > > >> >> I'm using python 2.7, numpy 1.5.1, matplotlib 1.0.1. >> >> >> Warren >> > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From ralf.gommers at googlemail.com Sun May 22 12:46:53 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 22 May 2011 18:46:53 +0200 Subject: [SciPy-Dev] Building the docs: need "plot_directive" extension In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 6:21 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Sun, May 22, 2011 at 11:12 AM, Warren Weckesser < > warren.weckesser at enthought.com> wrote: > >> I'm trying to build the HTML docs. Running 'make html' in the doc/ >> directory first resulted in: >> >> Extension error: >> Could not import extension numpydoc (exception: No module named >> numpydoc) >> >> so I installed the numpydoc package from pypi. Now I get this: >> >> Extension error: >> Could not import extension plot_directive (exception: No module named >> plot_directive) >> >> How do I install the 'plot_directive' extension? >> >> > > I just compared an old svn checkout that I have to the current trunk; looks > like the directory doc/sphinxext/ has been removed, which is where > plot_directory.py lived. Now to figure out why... > > It lives in the numpy source tree, just copy everything under numpy/doc/sphinxext/ to scipy/doc/sphinxext/. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From warren.weckesser at enthought.com Sun May 22 12:48:48 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 22 May 2011 11:48:48 -0500 Subject: [SciPy-Dev] Building the docs: need "plot_directive" extension In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 11:42 AM, wrote: > On Sun, May 22, 2011 at 12:21 PM, Warren Weckesser > wrote: > > > > > > On Sun, May 22, 2011 at 11:12 AM, Warren Weckesser > > wrote: > >> > >> I'm trying to build the HTML docs. Running 'make html' in the doc/ > >> directory first resulted in: > >> > >> Extension error: > >> Could not import extension numpydoc (exception: No module named > >> numpydoc) > >> > >> so I installed the numpydoc package from pypi. Now I get this: > >> > >> Extension error: > >> Could not import extension plot_directive (exception: No module > named > >> plot_directive) > >> > >> How do I install the 'plot_directive' extension? > >> > > > > > > I just compared an old svn checkout that I have to the current trunk; > looks > > like the directory doc/sphinxext/ has been removed, which is where > > plot_directory.py lived. Now to figure out why... > > could also be that your sphinx version is too old, but I don't know > what the current requirement is. > I found a thread from last June where Skipper ran into the same problem: http://mail.scipy.org/pipermail/scipy-dev/2010-June/014647.html I also realized that the 'numpydoc' package includes the plot_directive extension. I just had to change doc/source/conf.py to refer to the extensions as 'numpydoc.numpydoc' and 'numpydoc.plot_directive', and the docs built--with only 4531 warnings! Warren > > Josef > > > > > > > Warren > > > > > >> > >> I'm using python 2.7, numpy 1.5.1, matplotlib 1.0.1. > >> > >> > >> Warren > >> > > > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Sun May 22 12:59:41 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 22 May 2011 10:59:41 -0600 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 6:22 AM, wrote: >> cpdef double petters_solve_for_rw(double x, double rd, double rh): >> ? ?return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa)) > > Wouldn't this be easier (for derivatives), and maybe be more stable, taking logs > > np.log(rh) - ?kelvin/x + np.log(..) ... ?? 
> > (independently of any improvement to the solvers) > Seems like this produces more terms in derivatives (tested below in Sage v.4.6.1 via notebook): myfunc (rd^3 - x^3)*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3) + rh myfunc.derivative(x).simplify() -3*(rd^3 - x^3)*x^2*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3)^2 - 3*x^2*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3) - (rd^3 - x^3)*kelvin*e^(kelvin/x)/(((kappa - 1.0)*rd^3 + x^3)*x^2) p = myfunc.log() p.derivative(x).simplify() -(3*(rd^3 - x^3)*x^2*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3)^2 + 3*x^2*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3) + (rd^3 - x^3)*kelvin*e^(kelvin/x)/(((kappa - 1.0)*rd^3 + x^3)*x^2))/((rd^3 - x^3)*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3) + rh) -- G?khan From josef.pktd at gmail.com Sun May 22 13:12:44 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 22 May 2011 13:12:44 -0400 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 12:59 PM, G?khan Sever wrote: > On Sun, May 22, 2011 at 6:22 AM, ? wrote: >>> cpdef double petters_solve_for_rw(double x, double rd, double rh): >>> ? ?return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa)) >> >> Wouldn't this be easier (for derivatives), and maybe be more stable, taking logs >> >> np.log(rh) - ?kelvin/x + np.log(..) ... ?? >> >> (independently of any improvement to the solvers) >> > > Seems like this produces more terms in derivatives (tested below in > Sage v.4.6.1 via notebook): > > myfunc > (rd^3 - x^3)*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3) + rh > > myfunc.derivative(x).simplify() > -3*(rd^3 - x^3)*x^2*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3)^2 - > 3*x^2*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3) - (rd^3 - > x^3)*kelvin*e^(kelvin/x)/(((kappa - 1.0)*rd^3 + x^3)*x^2) > > p = myfunc.log() I proposed taking logs of left hand side and right hand side separately, since you are just looking for a zero, with myfunc.log(), it is not simplified (I don't have a quick way to do the symbolic derivative, but there shouldn'd be any exp left in the expression) Josef > p.derivative(x).simplify() > -(3*(rd^3 - x^3)*x^2*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3)^2 + > 3*x^2*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3) + (rd^3 - > x^3)*kelvin*e^(kelvin/x)/(((kappa - 1.0)*rd^3 + x^3)*x^2))/((rd^3 - > x^3)*e^(kelvin/x)/((kappa - 1.0)*rd^3 + x^3) + rh) > > -- > G?khan > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From charlesr.harris at gmail.com Sun May 22 14:00:15 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 22 May 2011 12:00:15 -0600 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sat, May 21, 2011 at 11:55 PM, G?khan Sever wrote: > On Sat, May 21, 2011 at 7:50 PM, Charles R Harris > wrote: > > Yeah, that left me thinking that we could really use a bracketing > function. > > The brent optimizer must have something like that it uses to bracket > > minimums and perhaps that could be adapted. I know there are bracketing > > methods out there, IIRC there is one in NR. > > I have managed to get those bracketed solvers working after plotting > my function. brenth seems converging the fastest but still providing > the search interval is a problem for me. I am probably skipping these > solvers. > > > > IIRC, the old version blew up when the root was at zero, the problem was > > posted on the list. > > > > ? 
I believe there is only one newton method in scipy, but we moved it > into > > the zeros module and deprecated the version at the old location. It has > > since been removed. > > Yes, you are right. At the current source repository, there is only > one newton in scipy.optimize which resides in zeros.py > > > > > I would just stick to the secant method in the general case unless the > > derivative is easy to come by. Note that newton is one of the 'original' > > functions in scipy, the other 1d zero finders came later. So if you can > make > > an improved cythonized version I don't see any reason not to use it. If > you > > do make cythonized versions it might be worth implementing the Newton and > > secant parts separately and make the current newton a driver function. > > I have figured out the derivative option and tested fsolve and newton > with fprime arg provided. Still slower comparing to the secant method. > Most likely, that the derivative function requires a bit calculation > to be evaluated. As you can see, the function is: > > cpdef double petters_solve_for_rw(double x, double rd, double rh): > return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - > kappa)) > > You could also try rewriting this in various ways. For instance cpdef double petters_solve_for_rw(double x, double rd, double rh): return rh*( x**3 - rd**3 * (1.0 - kappa)) - exp(kelvin/x) * (x**3 - rd**3) or cpdef double petters_solve_for_rw(double x, double rd, double rh): return rh*(1 - (1 - kappa)*y**3) - exp(y*kelvin/rd) * (1 - y**3) or cpdef double petters_solve_for_rw(double x, double rd, double rh): return rh*kappa*y**3 - (exp(y*kelvin/rd) - rh) * (1 - y**3) where x = rd/y. The last might allow you to bracket things fairly easily, i.e., (exp(y*kelvin/rd) - rh) * (1 - y**3) has to be >0 if you expect y>0 Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Sun May 22 16:24:43 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 22 May 2011 14:24:43 -0600 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 11:12 AM, wrote: > I proposed taking logs of left hand side and right hand side > separately, since you are just looking for a zero, > with myfunc.log(), ?it is not simplified OK, I have gotten this right this time: myfunc = rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa)) myfunc2 = log(myfunc) = log(rh) - (kelvin/x) + log(x**3 - rd**3) - log(x**3 - rd**3*(1.0 - kappa)) myfunc2_prime = -3*x**2/(rd**3 - x**3) - 3*x**2/((kappa - 1.0)*rd**3 + x**3) + kelvin/x**2 How can I proceed this point onwards? > > (I don't have a quick way to do the symbolic derivative, but there > shouldn'd be any exp left in the expression) From gokhansever at gmail.com Sun May 22 16:46:38 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 22 May 2011 14:46:38 -0600 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 12:00 PM, Charles R Harris > cpdef double petters_solve_for_rw(double x, double rd, double rh): > ? ?return rh*kappa*y**3 - (exp(y*kelvin/rd) - rh) * (1 - y**3) This last modification is converging faster than the original version, but readability of the function is reduced now. > > > > where x = rd/y. 
The last might allow you to bracket things fairly easily, > i.e., (exp(y*kelvin/rd) - rh) * (1 - y**3) has to be >0 if you expect y>0 "rh" also plays role in determining the sign of the right portion of this equation. Throughout the model rh usually goes from 0.95 and pass beyond 1.0. This causes a sign change. > > > > Chuck > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -- G?khan From josef.pktd at gmail.com Sun May 22 16:50:42 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 22 May 2011 16:50:42 -0400 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 4:24 PM, G?khan Sever wrote: > On Sun, May 22, 2011 at 11:12 AM, ? wrote: >> I proposed taking logs of left hand side and right hand side >> separately, since you are just looking for a zero, >> with myfunc.log(), ?it is not simplified > > OK, I have gotten this right this time: > > myfunc = ?rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa)) > myfunc2 = log(myfunc) = log(rh) - (kelvin/x) + log(x**3 - rd**3) - > log(x**3 - rd**3*(1.0 - kappa)) > > myfunc2_prime = -3*x**2/(rd**3 - x**3) - 3*x**2/((kappa - 1.0)*rd**3 + > x**3) + kelvin/x**2 > > How can I proceed this point onwards? try newton with fprime. My initial suggestion was in response to your statement that newton with fprime is too slow because the expression for the derivative is too complicated and slow. Trying to get the function in a nicer form might help quite a bit, but it won't be a solution if you have a large set of functions that might show up in different simulations. It's just an aside for the main topic of the thread, improving the solvers. Josef > > >> >> (I don't have a quick way to do the symbolic derivative, but there >> shouldn'd be any exp left in the expression) > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From charlesr.harris at gmail.com Sun May 22 16:51:10 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 22 May 2011 14:51:10 -0600 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 2:46 PM, G?khan Sever wrote: > On Sun, May 22, 2011 at 12:00 PM, Charles R Harris < > charlesr.harris at gmail.com> > > cpdef double petters_solve_for_rw(double x, double rd, double rh): > > return rh*kappa*y**3 - (exp(y*kelvin/rd) - rh) * (1 - y**3) > > This last modification is converging faster than the original version, > but readability of the function is reduced now. > > > > > > > > > where x = rd/y. The last might allow you to bracket things fairly easily, > > i.e., (exp(y*kelvin/rd) - rh) * (1 - y**3) has to be >0 if you expect y>0 > > "rh" also plays role in determining the sign of the right portion of > this equation. Throughout the model rh usually goes from 0.95 and pass > beyond 1.0. This causes a sign change. > > I think the zeros of this function can be bracketed by inspection. What sort of values do rd, rh, and kappa have? What is kelvin? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gokhansever at gmail.com Sun May 22 17:02:49 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Sun, 22 May 2011 15:02:49 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sun, May 22, 2011 at 2:50 PM, wrote:
> try newton with fprime. My initial suggestion was in response to your
> statement that newton with fprime is too slow because the expression
> for the derivative is too complicated and slow.
>

This does not seem to work for my case:

cpdef double myfunc2(double x, double rd, double rh):
    return log(rh) - (kelvin/x) + log(x**3 - rd**3) - log(x**3 - rd**3*(1.0 - kappa))

cpdef double myfunc2_prime(double x, double rd, double rh):
    -3*x**2/(rd**3 - x**3) - 3*x**2/((kappa - 1.0)*rd**3 + x**3) + kelvin/x**2

rd = 5.75e-08; rh = 0.95

I[4]: newton(myfunc2, rd, args=(rd, rh), fprime=myfunc2_prime)
/usr/lib64/python2.7/site-packages/scipy/optimize/zeros.py:106:
RuntimeWarning: derivative was zero.
  warnings.warn(msg, RuntimeWarning)
O[4]: 5.75e-08

# eliminating the zero derivative.
I[5]: newton(myfunc2, rd, args=(rd, rh), fprime=myfunc2_prime, tol=1.e-10)
O[5]: 5.75e-08

The correct result is the one below; setting tol to different accuracies
makes a difference, and in this case tol=1.e-20 yields the exact solution.

I[7]: cnewton(petters_solve_for_rw, rd, args=(rd, rh), tol=1.e-20)
O[7]: 1.4972782377152967e-07

> Trying to get the function in a nicer form might help quite a bit,

Yes, I can confirm this from Charles Harris' suggestion, but as I said
the speed gain isn't that significant in this case, and readability
still counts.

> but
> it won't be a solution if you have a large set of functions that might
> show up in different simulations.
>
> It's just an aside for the main topic of the thread, improving the solvers.
>
> Josef

--
Gökhan

From gokhansever at gmail.com Sun May 22 17:21:08 2011
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Sun, 22 May 2011 15:21:08 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sun, May 22, 2011 at 2:51 PM, Charles R Harris wrote:
> I think the zeros of this function can be bracketed by inspection. What sort
> of values do rd, rh, and kappa have? What is kelvin?

This is the original function:

cpdef double petters_solve_for_rw(double x, double rd, double rh):
    return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa))

"kelvin" is a constant: 1.04962912337e-09. It stays constant throughout
all of the simulations.

"kappa" is a constant, but its value is set before the simulation. The
default is 1, but it can range from 0.001 to 2 depending on the
simulation.

"rd" is initialized differently. For one simulation a roughly 20k-element
rd array is created -- this number changes depending on the simulation --
by solving a set of five ODE equations. For one case:

I[3]: rd.min()
O[3]: 1.1926858018899999e-08

I[4]: rd.max()
O[4]: 1.3455000000000001e-06

"rh" is the relative humidity. It starts at rh=0.95 and evolves like "rd"
within the simulation, and it differs from simulation to simulation
depending on the initial conditions.

I[9]: rh.max()
O[9]: 1.0050122345200001

I[10]: rh.min()
O[10]: 0.95017287164200004

With these numbers, I still think it is hard to bracket this function
within which a root is searched.
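(The only generic approach I can think of with these ranges is brute
force -- an untested sketch, where find_bracket is a made-up helper that
scans the physically plausible radius range for a sign change and hands
the interval to brentq:

import numpy as np
from scipy.optimize import brentq

def find_bracket(f, rd, rh, hi=1e-3, n=200):
    # log-spaced grid from the dry radius up to an assumed 1 mm cap
    xs = np.logspace(np.log10(rd), np.log10(hi), n)
    fs = np.array([f(x, rd, rh) for x in xs])
    idx = np.where(np.sign(fs[:-1]) * np.sign(fs[1:]) < 0)[0]
    if len(idx) == 0:
        raise ValueError("no sign change found on the grid")
    return xs[idx[0]], xs[idx[0] + 1]

a, b = find_bracket(petters_solve_for_rw, 5.75e-08, 0.95)
root = brentq(petters_solve_for_rw, a, b, args=(5.75e-08, 0.95))

but the cost of the scan is exactly what makes bracketing unattractive
when the solver is called millions of times.)
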
From charlesr.harris at gmail.com Sun May 22 20:04:02 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 22 May 2011 18:04:02 -0600 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 3:21 PM, G?khan Sever wrote: > On Sun, May 22, 2011 at 2:51 PM, Charles R Harris > wrote: > > I think the zeros of this function can be bracketed by inspection. What > sort > > of values do rd, rh, and kappa have? What is kelvin? > > This is the original function: > > cpdef double petters_solve_for_rw(double x, double rd, double rh): > return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - > kappa)) > > "kelvin" is a constant: 1.04962912337e-09 and stays constant > throughout all of the simulations. > > "kappa" is a constant, but its value set before the simulation. > Default is 1, but can range from 0.001 to 2 depend on the simulation > > "rd" is initialized differently. For one simulation about 20k element > rd array created --this number changes depends on the simulation > --solving a set of five ODE equations. For one case: > > I[3]: rd.min() > O[3]: 1.1926858018899999e-08 > > I[4]: rd.max() > O[4]: 1.3455000000000001e-06 > > "rh" is the relative humidity. Starts at rh=0.95, and evolves like > "rd" within the simulation, and differs from simulation to simulation > depends on the initial conditions. > > I[9]: rh.max() > O[9]: 1.0050122345200001 > > I[10]: rh.min() > O[10]: 0.95017287164200004 > > With these numbers, I still think it is hard to bracket this function > within which a root is searched. Solve rh*exp(-kelvin/x) = (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa)) The lhs increases from 0 to rh as x -> inf, hence is <= rh. The rhs looks sort like a hyperbola with a horizontal asymptote at y = 1, and a vertical asymptote at rd*(1 - kappa)**1/3. If rh = 1, there is no solution unless kappa = 0 and x = +/- inf. I suspect that might be a problem and a hint that the model might be a bit off. If rh < 1, solve rh = (b**3 - rd**3) / (b**3 - rd**3 * (1.0 - kappa)) for b, which you can do algebraically, and the root will lie in the interval [rd, b]. If rh > 1, things are a mess, but x < 0 and also to the left of the vertical asymptote, and to the right of b solved for previously from above. Is the negative x a problem? There is no (real) solution in this case if 1/(1 - kappa) < rh, and more generally, if the bracket doesn't contain any values. Using the reciprical of x can help as it makes the exponential continuous for x = +/- inf. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun May 22 20:24:04 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 22 May 2011 19:24:04 -0500 Subject: [SciPy-Dev] Tests not running for scipy.constants? Message-ID: It appears that the tests for scipy.constants are not running. Here's what I get with trunk: $ python -c "import scipy.constants; scipy.constants.test('full')" Running unit tests for scipy.constants NumPy version 1.5.1 NumPy is installed in /Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/site-packages/numpy SciPy version 0.10.0.dev SciPy is installed in /Users/warren/local_tmp/lib/python2.7/site-packages/scipy Python version 2.7.1 |EPD 7.0-1 (32-bit)| (r271:86832, Dec 3 2010, 15:41:32) [GCC 4.0.1 (Apple Inc. 
build 5488)] nose version 1.0.0 ---------------------------------------------------------------------- Ran 0 tests in 0.000s OK I get the same with 0.9.0rc2, so the problem may have been around for awhile. The tests for other packages run as expected, and if I run nosetests in the constants/tests directory, the test run: $ nosetests .............. ---------------------------------------------------------------------- Ran 14 tests in 0.451s OK Does anyone else see this? Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun May 22 20:51:41 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 22 May 2011 20:51:41 -0400 Subject: [SciPy-Dev] Tests not running for scipy.constants? In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 8:24 PM, Warren Weckesser wrote: > It appears that the tests for scipy.constants are not running.? Here's what > I get with trunk: > > $ python -c "import scipy.constants; scipy.constants.test('full')" > Running unit tests for scipy.constants > NumPy version 1.5.1 > NumPy is installed in > /Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/site-packages/numpy > SciPy version 0.10.0.dev > SciPy is installed in > /Users/warren/local_tmp/lib/python2.7/site-packages/scipy > Python version 2.7.1 |EPD 7.0-1 (32-bit)| (r271:86832, Dec? 3 2010, > 15:41:32) [GCC 4.0.1 (Apple Inc. build 5488)] > nose version 1.0.0 > > ---------------------------------------------------------------------- > Ran 0 tests in 0.000s > > OK > > I get the same with 0.9.0rc2, so the problem may have been around for > awhile. > > The tests for other packages run as expected, and if I run nosetests in the > constants/tests directory, the test run: > > $ nosetests > .............. > ---------------------------------------------------------------------- > Ran 14 tests in 0.451s > > OK > > > Does anyone else see this? Same here with 0.9, but I have no idea what might be going on, looks the same as other subpackages. (Is "constants" are reserved word somewhere?) Josef > > Warren > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From cgohlke at uci.edu Sun May 22 21:15:47 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sun, 22 May 2011 18:15:47 -0700 Subject: [SciPy-Dev] Tests not running for scipy.constants? In-Reply-To: References: Message-ID: <4DD9B543.3050608@uci.edu> On 5/22/2011 5:51 PM, josef.pktd at gmail.com wrote: > On Sun, May 22, 2011 at 8:24 PM, Warren Weckesser > wrote: >> It appears that the tests for scipy.constants are not running. Here's what >> I get with trunk: >> >> $ python -c "import scipy.constants; scipy.constants.test('full')" >> Running unit tests for scipy.constants >> NumPy version 1.5.1 >> NumPy is installed in >> /Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/site-packages/numpy >> SciPy version 0.10.0.dev >> SciPy is installed in >> /Users/warren/local_tmp/lib/python2.7/site-packages/scipy >> Python version 2.7.1 |EPD 7.0-1 (32-bit)| (r271:86832, Dec 3 2010, >> 15:41:32) [GCC 4.0.1 (Apple Inc. build 5488)] >> nose version 1.0.0 >> >> ---------------------------------------------------------------------- >> Ran 0 tests in 0.000s >> >> OK >> >> I get the same with 0.9.0rc2, so the problem may have been around for >> awhile. 
>> >> The tests for other packages run as expected, and if I run nosetests in the >> constants/tests directory, the test run: >> >> $ nosetests >> .............. >> ---------------------------------------------------------------------- >> Ran 14 tests in 0.451s >> >> OK >> >> >> Does anyone else see this? > > Same here with 0.9, but I have no idea what might be going on, looks > the same as other subpackages. (Is "constants" are reserved word > somewhere?) > > Josef > >> >> Warren >> >> On my system the scipy\constants\tests directory is not installed. Seems there is a `config.add_data_dir('tests')` missing in scipy\constants\setup.py. Christoph From warren.weckesser at enthought.com Sun May 22 21:59:01 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 22 May 2011 20:59:01 -0500 Subject: [SciPy-Dev] Tests not running for scipy.constants? In-Reply-To: <4DD9B543.3050608@uci.edu> References: <4DD9B543.3050608@uci.edu> Message-ID: On Sun, May 22, 2011 at 8:15 PM, Christoph Gohlke wrote: > > > On 5/22/2011 5:51 PM, josef.pktd at gmail.com wrote: > > On Sun, May 22, 2011 at 8:24 PM, Warren Weckesser > > wrote: > >> It appears that the tests for scipy.constants are not running. Here's > what > >> I get with trunk: > >> > >> $ python -c "import scipy.constants; scipy.constants.test('full')" > >> Running unit tests for scipy.constants > >> NumPy version 1.5.1 > >> NumPy is installed in > >> > /Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/site-packages/numpy > >> SciPy version 0.10.0.dev > >> SciPy is installed in > >> /Users/warren/local_tmp/lib/python2.7/site-packages/scipy > >> Python version 2.7.1 |EPD 7.0-1 (32-bit)| (r271:86832, Dec 3 2010, > >> 15:41:32) [GCC 4.0.1 (Apple Inc. build 5488)] > >> nose version 1.0.0 > >> > >> ---------------------------------------------------------------------- > >> Ran 0 tests in 0.000s > >> > >> OK > >> > >> I get the same with 0.9.0rc2, so the problem may have been around for > >> awhile. > >> > >> The tests for other packages run as expected, and if I run nosetests in > the > >> constants/tests directory, the test run: > >> > >> $ nosetests > >> .............. > >> ---------------------------------------------------------------------- > >> Ran 14 tests in 0.451s > >> > >> OK > >> > >> > >> Does anyone else see this? > > > > Same here with 0.9, but I have no idea what might be going on, looks > > the same as other subpackages. (Is "constants" are reserved word > > somewhere?) > > > > Josef > > > >> > >> Warren > >> > >> > > On my system the scipy\constants\tests directory is not installed. Seems > there is a `config.add_data_dir('tests')` missing in > scipy\constants\setup.py. > > Yes, that's the problem. Apparently config.add_subpackage('*') is not working as expected. All the other packages use config.add_data_dir('tests') to include the tests. I'll push the fix shortly. Warren Christoph > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun May 22 22:03:52 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 22 May 2011 21:03:52 -0500 Subject: [SciPy-Dev] Tests not running for scipy.constants? 
In-Reply-To: References: <4DD9B543.3050608@uci.edu> Message-ID: On Sun, May 22, 2011 at 8:59 PM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Sun, May 22, 2011 at 8:15 PM, Christoph Gohlke wrote: > >> >> >> On 5/22/2011 5:51 PM, josef.pktd at gmail.com wrote: >> > On Sun, May 22, 2011 at 8:24 PM, Warren Weckesser >> > wrote: >> >> It appears that the tests for scipy.constants are not running. Here's >> what >> >> I get with trunk: >> >> >> >> $ python -c "import scipy.constants; scipy.constants.test('full')" >> >> Running unit tests for scipy.constants >> >> NumPy version 1.5.1 >> >> NumPy is installed in >> >> >> /Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/site-packages/numpy >> >> SciPy version 0.10.0.dev >> >> SciPy is installed in >> >> /Users/warren/local_tmp/lib/python2.7/site-packages/scipy >> >> Python version 2.7.1 |EPD 7.0-1 (32-bit)| (r271:86832, Dec 3 2010, >> >> 15:41:32) [GCC 4.0.1 (Apple Inc. build 5488)] >> >> nose version 1.0.0 >> >> >> >> ---------------------------------------------------------------------- >> >> Ran 0 tests in 0.000s >> >> >> >> OK >> >> >> >> I get the same with 0.9.0rc2, so the problem may have been around for >> >> awhile. >> >> >> >> The tests for other packages run as expected, and if I run nosetests in >> the >> >> constants/tests directory, the test run: >> >> >> >> $ nosetests >> >> .............. >> >> ---------------------------------------------------------------------- >> >> Ran 14 tests in 0.451s >> >> >> >> OK >> >> >> >> >> >> Does anyone else see this? >> > >> > Same here with 0.9, but I have no idea what might be going on, looks >> > the same as other subpackages. (Is "constants" are reserved word >> > somewhere?) >> > >> > Josef >> > >> >> >> >> Warren >> >> >> >> >> >> On my system the scipy\constants\tests directory is not installed. Seems >> there is a `config.add_data_dir('tests')` missing in >> scipy\constants\setup.py. >> >> > > Yes, that's the problem. Apparently config.add_subpackage('*') is not > working as expected. All the other packages use > config.add_data_dir('tests') to include the tests. I'll push the fix > shortly. > Done. Thanks, Christoph! Warren > > Warren > > > Christoph >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Sun May 22 22:05:53 2011 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Sun, 22 May 2011 20:05:53 -0600 Subject: [SciPy-Dev] Comments on optimize.newton function In-Reply-To: References: Message-ID: On Sun, May 22, 2011 at 6:04 PM, Charles R Harris wrote: > Solve > > rh*exp(-kelvin/x) = (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa)) > > The lhs increases from 0 to rh as x -> inf, hence is <= rh. > The rhs looks sort like a hyperbola with a horizontal asymptote at y = 1, > and a vertical asymptote at rd*(1 - kappa)**1/3. > > If rh = 1, there is no solution unless kappa = 0 and x = +/- inf. I suspect > that might be a problem and a hint that the model might be a bit off. rh = 1 is a special case, right when the supersaturation is reached within the parcel model. I highly suspect that we get an exact rh=1 throughout the simulations. This value is usually rh=1+-small number. However I might need to verify this further. 
Soon, I will work on separating the model thermodynamics for the rh<1
and rh>1 cases, for which I will have to estimate the closest rh=1
point, where saturation starts occurring.

> If rh < 1, solve rh = (b**3 - rd**3) / (b**3 - rd**3 * (1.0 - kappa))
> for b, which you can do algebraically, and the root will lie in the
> interval [rd, b].

How did you get this one for rh<1? What happened to the exponential
term? Even so, how am I going to ensure that f(rd) and f(b) give results
of opposite sign on the [rd, b] interval?

> If rh > 1, things are a mess, but x < 0 and also to the left of the
> vertical asymptote, and to the right of b solved for previously from
> above. Is the negative x a problem? There is no (real) solution in this
> case if 1/(1 - kappa) < rh, and more generally, if the bracket doesn't
> contain any values.

x is >= 0. I don't use a negative x as an initial estimate. Neither the
function nor the root solver should yield a negative result.

> Using the reciprocal of x can help, as it makes the exponential
> continuous for x = +/- inf.
>
> Chuck

--
Gökhan

From charlesr.harris at gmail.com  Sun May 22 22:07:39 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 22 May 2011 20:07:39 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sun, May 22, 2011 at 6:04 PM, Charles R Harris wrote:

> On Sun, May 22, 2011 at 3:21 PM, Gökhan Sever wrote:
>> On Sun, May 22, 2011 at 2:51 PM, Charles R Harris wrote:
>> > I think the zeros of this function can be bracketed by inspection.
>> > What sort of values do rd, rh, and kappa have? What is kelvin?
>>
>> This is the original function:
>>
>> cpdef double petters_solve_for_rw(double x, double rd, double rh):
>>     return rh - exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa))
>>
>> "kelvin" is a constant, 1.04962912337e-09, and stays constant
>> throughout all of the simulations.
>>
>> "kappa" is a constant, but its value is set before the simulation. The
>> default is 1, but it can range from 0.001 to 2 depending on the
>> simulation.
>>
>> "rd" is initialized differently. For one simulation an rd array of
>> about 20k elements is created --this number changes depending on the
>> simulation-- by solving a set of five ODE equations. For one case:
>>
>> I[3]: rd.min()
>> O[3]: 1.1926858018899999e-08
>>
>> I[4]: rd.max()
>> O[4]: 1.3455000000000001e-06
>>
>> "rh" is the relative humidity. It starts at rh=0.95 and evolves like
>> "rd" within the simulation, and it differs from simulation to
>> simulation depending on the initial conditions.
>>
>> I[9]: rh.max()
>> O[9]: 1.0050122345200001
>>
>> I[10]: rh.min()
>> O[10]: 0.95017287164200004
>>
>> With these numbers, I still think it is hard to bracket this function
>> within which a root is searched.
>
> Solve
>
> rh*exp(-kelvin/x) = (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa))
>
> The lhs increases from 0 to rh as x -> inf, hence is <= rh.
> The rhs looks sort of like a hyperbola with a horizontal asymptote at
> y = 1, and a vertical asymptote at rd*(1 - kappa)**(1/3).
>
> If rh = 1, there is no solution unless kappa = 0 and x = +/- inf. I
> suspect that might be a problem and a hint that the model might be a
> bit off.

This isn't quite right, it seems.
> If rh < 1, solve rh = (b**3 - rd**3) / (b**3 - rd**3 * (1.0 - kappa))
> for b, which you can do algebraically, and the root will lie in the
> interval [rd, b].
>
> If rh > 1, things are a mess, but x < 0 and also to the left of the
> vertical asymptote, and to the right of b solved for previously from
> above. Is the negative x a problem? There is no (real) solution in this
> case if 1/(1 - kappa) < rh, and more generally, if the bracket doesn't
> contain any values.
>
> Using the reciprocal of x can help, as it makes the exponential
> continuous for x = +/- inf.
>
> Chuck

From charlesr.harris at gmail.com  Sun May 22 23:13:47 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 22 May 2011 21:13:47 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sun, May 22, 2011 at 8:05 PM, Gökhan Sever wrote:

> On Sun, May 22, 2011 at 6:04 PM, Charles R Harris wrote:
> > Solve
> >
> > rh*exp(-kelvin/x) = (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa))
> >
> > The lhs increases from 0 to rh as x -> inf, hence is <= rh.
> > The rhs looks sort of like a hyperbola with a horizontal asymptote at
> > y = 1, and a vertical asymptote at rd*(1 - kappa)**(1/3).
> >
> > If rh = 1, there is no solution unless kappa = 0 and x = +/- inf. I
> > suspect that might be a problem and a hint that the model might be a
> > bit off.
>
> rh = 1 is a special case, occurring right when supersaturation is
> reached within the parcel model. I highly doubt that we get an exact
> rh=1 throughout the simulations; this value is usually rh = 1 +/- a
> small number. However, I might need to verify this further. Soon, I
> will work on separating the model thermodynamics for the rh<1 and rh>1
> cases, for which I will have to estimate the closest rh=1 point, where
> saturation starts occurring.
>
> > If rh < 1, solve rh = (b**3 - rd**3) / (b**3 - rd**3 * (1.0 - kappa))
> > for b, which you can do algebraically, and the root will lie in the
> > interval [rd, b].
>
> How did you get this one for rh<1? What happened to the exponential
> term? Even so, how am I going to ensure that f(rd) and f(b) give
> results of opposite sign on the [rd, b] interval?
>
> > If rh > 1, things are a mess, but x < 0 and also to the left of the
> > vertical asymptote, and to the right of b solved for previously from
> > above. Is the negative x a problem? There is no (real) solution in
> > this case if 1/(1 - kappa) < rh, and more generally, if the bracket
> > doesn't contain any values.
>
> x is >= 0. I don't use a negative x as an initial estimate. Neither the
> function nor the root solver should yield a negative result.

I'm not so confident about the rh >= 1 case, but I've attached an
example for rh = .95, rd=1e-8. The light blue line is the lhs from
above; the labeled lines are the rhs for different values of kappa. The
heavy horizontal line is rh, and the bracket I was suggesting was
between the zero of the rhs at x=rd and its crossing with the rh line.
There are corner cases here depending on the parameter values, so this
probably needs more exploration; there might be cases with two zeros.

Also the same thing with rd=1.5e-6. Note that the zero is very near the
upper limit.

I suspect there will be either no zeros or two zeros in the
supersaturated case.
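For the rh < 1 case, the bracketing recipe above can be sketched in code
roughly as follows (not a final implementation; petters_f is a
pure-Python version of the quoted Cython function, with kelvin and kappa
as module-level constants as in Gökhan's snippet):

import numpy as np
from scipy.optimize import brentq

kelvin = 1.04962912337e-09
kappa = 1.0

def petters_f(x, rd, rh):
    return rh - np.exp(kelvin/x) * (x**3 - rd**3) / (x**3 - rd**3 * (1.0 - kappa))

def solve_rw(rd, rh):
    # Algebraic solution of rh = (b**3 - rd**3)/(b**3 - rd**3*(1 - kappa));
    # only valid for rh < 1 (b blows up as rh -> 1).
    b = rd * ((1.0 - rh * (1.0 - kappa)) / (1.0 - rh))**(1.0/3.0)
    # f(rd) = rh > 0, while f(b) = rh*(1 - exp(kelvin/b)) < 0 because
    # exp(kelvin/b) > 1, so [rd, b] is guaranteed to bracket a sign change.
    return brentq(petters_f, rd, b, args=(rd, rh))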
Chuck
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gokhan1.png
Type: image/png
Size: 58846 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gokhan2.png
Type: image/png
Size: 60297 bytes
Desc: not available
URL:

From gokhansever at gmail.com  Mon May 23 02:21:09 2011
From: gokhansever at gmail.com (Gökhan Sever)
Date: Mon, 23 May 2011 00:21:09 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Sun, May 22, 2011 at 9:13 PM, Charles R Harris wrote:
> I'm not so confident about the rh >= 1 case, but I've attached an
> example for rh = .95, rd=1e-8. The light blue line is the lhs from
> above; the labeled lines are the rhs for different values of kappa. The
> heavy horizontal line is rh, and the bracket I was suggesting was
> between the zero of the rhs at x=rd and its crossing with the rh line.
> There are corner cases here depending on the parameter values, so this
> probably needs more exploration; there might be cases with two zeros.
>
> Also the same thing with rd=1.5e-6. Note that the zero is very near the
> upper limit.
>
> I suspect there will be either no zeros or two zeros in the
> supersaturated case.
>
> Chuck

Thanks for spending your time and producing those plots. Could you
please provide the code that you used to create the figures? It might
help me to better understand some of the points you made in your latest
reply.

Your comment on having no zeros or two zeros worries me a bit. I can't
easily see how far apart these two zeros would be from each other, if
they ever exist. One of them could be unrealistic enough to disregard
easily, but I have yet to verify this claim.

I went ahead and tested the secant and fsolve solvers to see if they
produce any significant differences in terms of the root they return. I
assume fsolve is a more robust solver. You can see this comparison in
the attached figure. I use a tolerance value of 1.e-20 for both solvers.
Again, this comparison is based on the estimate of about 20k different
values. Most of the difference is zero. I focused in on a more
interesting part of the figure; still, the difference is about 1.e-17,
which is quite insignificant.
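(The comparison itself can be scripted along these lines -- a sketch,
where rd_values and rh_values stand for the ~20k simulated pairs,
petters_f is as before, and starting each solver from x0 = rd is just
one plausible choice:)

import numpy as np
from scipy.optimize import newton, fsolve

# newton() without fprime falls back to the secant method
secant_roots = np.array([newton(petters_f, rd, args=(rd, rh), tol=1e-20)
                         for rd, rh in zip(rd_values, rh_values)])
fsolve_roots = np.array([fsolve(petters_f, rd, args=(rd, rh), xtol=1e-20)[0]
                         for rd, rh in zip(rd_values, rh_values)])

print(np.abs(secant_roots - fsolve_roots).max())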
-------------- next part --------------
A non-text attachment was scrubbed...
Name: solver_comparison.png
Type: image/png
Size: 78615 bytes
Desc: not available
URL:

From stefan at sun.ac.za  Mon May 23 07:10:02 2011
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 23 May 2011 13:10:02 +0200
Subject: [SciPy-Dev] scikit-morph
In-Reply-To: <2D16773A-83EC-42ED-B60C-D0250331F264@gmail.com>
References: <2B9BD974-BEE7-4835-BF17-C5BAB828CDB6@gmail.com>
 <2BE13CD2-1A46-4070-8DE1-28CEFAF4C20F@gmail.com>
 <2D16773A-83EC-42ED-B60C-D0250331F264@gmail.com>
Message-ID:

Hi Nathan

On Sun, May 22, 2011 at 2:10 AM, Nathan Faggian wrote:
> Your supreme project looks great! A couple of years ago I dabbled with
> image registration for video stabilisation, and the work you have done
> by implementing the RANSAC algorithm and linear registration is also
> suitable for that application. Do you also know about the X84 rejection
> rule?

It vaguely rings a bell when thinking back to KLT's "Good Features to
Track", but if you could give me a better pointer that'd be great. I
implemented a couple of early termination rules for RANSAC, such as
LO-RANSAC, etc.

> I hope that I didn't sound too critical when I pointed at duplication;
> I really think the image scikit is great and I am keen to see how you
> go with different back ends to speed up computation.

Not at all; we welcome good criticism--especially if it leads to more
conversations (and hopefully more contributors)!

> In scikit-morph I plan to implement dense non-linear image registration
> methods, for example:
>
>    http://www.fmrib.ox.ac.uk/fsl/fnirt/index.html
>
> I am part way through implementing the approach used by FNIRT, and once
> I get this working in 2D I will make some noise on the mailing list. So
> far I have had a bit of fun looking into cython to speed up image
> sampling.

Since I'd also like to extend the registration capabilities in
scikits.image, would you be open to having your work included there as
well? One advantage would be that your code is then distributed with EPD
and Python(x,y), both packages that reach a large audience.

From my side, the super-resolution toolbox is released under a BSD
license and you are more than welcome to use any of the code in it as
you see fit.

Regards
Stéfan

From charlesr.harris at gmail.com  Mon May 23 10:37:33 2011
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 23 May 2011 08:37:33 -0600
Subject: [SciPy-Dev] Comments on optimize.newton function
In-Reply-To: References: Message-ID:

On Mon, May 23, 2011 at 12:21 AM, Gökhan Sever wrote:

> On Sun, May 22, 2011 at 9:13 PM, Charles R Harris wrote:
> > I'm not so confident about the rh >= 1 case, but I've attached an
> > example for rh = .95, rd=1e-8. The light blue line is the lhs from
> > above; the labeled lines are the rhs for different values of kappa.
> > The heavy horizontal line is rh, and the bracket I was suggesting was
> > between the zero of the rhs at x=rd and its crossing with the rh
> > line. There are corner cases here depending on the parameter values,
> > so this probably needs more exploration; there might be cases with
> > two zeros.
> >
> > Also the same thing with rd=1.5e-6. Note that the zero is very near
> > the upper limit.
> >
> > I suspect there will be either no zeros or two zeros in the
> > supersaturated case.
> >
> > Chuck
>
> Thanks for spending your time and producing those plots. Could you
> please provide the code that you used to create the figures? It might
> help me to better understand some of the points you made in your latest
> reply.

I've attached the module with the lhs, rhs functions. I hope I got them
right ;) The plots were done using x = linspace(small number > 0, 5*rd,
500) and a loop over the values of kappa. They might actually look
better as a semilogx plot.

> Your comment on having no zeros or two zeros worries me a bit. I can't
> easily see how far apart these two zeros would be from each other, if
> they ever exist. One of them could be unrealistic enough to disregard
> easily, but I have yet to verify this claim.

The reason I think there will be two zeros is that the upper branch of
the hyperbolic rhs is concave up while the lhs is concave down, so if
they intersect it will be at two points or a tangent (double zero). At
least the supersaturated case shows up as being a bit squirrelly, which
is probably a good sign for the model ;) Also note that the zeros go off
to +inf as rh -> 1, which might be a good argument for using rd/x as the
independent variable.
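(That change of variable can be written down directly; a sketch, with
kelvin and kappa as before -- substituting u = rd/x and multiplying the
numerator and denominator by u**3/rd**3 removes the 1/x singularity:)

import numpy as np

def petters_g(u, rd, rh):
    # u = rd/x; exp(kelvin*u/rd) is smooth through u = 0 (x = +/- inf),
    # and physical roots satisfy 0 < u <= 1 since x >= rd
    return rh - np.exp(kelvin * u / rd) * (1.0 - u**3) / (1.0 - (1.0 - kappa) * u**3)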
> I went ahead and tested the secant and fsolve solvers to see if they
> produce any significant differences in terms of the root they return. I
> assume fsolve is a more robust solver. You can see this comparison in
> the attached figure. I use a tolerance value of 1.e-20 for both
> solvers. Again, this comparison is based on the estimate of about 20k
> different values. Most of the difference is zero. I focused in on a
> more interesting part of the figure; still, the difference is about
> 1.e-17, which is quite insignificant.

I think you are right that the secant solver needs a user input for the
initial step size, although working near singularities is always going
to be a problem, hence variable changes. In fact, the whole newton thing
could probably use a think-through.
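(For illustration, a secant iteration with a caller-chosen first step
might look like the sketch below; scipy.optimize.newton currently
hard-wires a perturbation of roughly x0*(1 + 1e-4) when no derivative is
given:)

def secant(f, x0, dx, args=(), tol=1e-12, maxiter=50):
    # dx is the user-supplied initial step instead of the hard-wired one
    x1 = x0 + dx
    f0, f1 = f(x0, *args), f(x1, *args)
    for _ in range(maxiter):
        if f1 == f0:
            raise RuntimeError("flat secant step; try a different dx")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2, *args)
    raise RuntimeError("no convergence in %d iterations" % maxiter)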
Chuck
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gokhan.py
Type: text/x-python
Size: 157 bytes
Desc: not available
URL:

From ralf.gommers at googlemail.com  Mon May 23 15:18:58 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 23 May 2011 21:18:58 +0200
Subject: [SciPy-Dev] Question about subpackage/submodule API
In-Reply-To: References: Message-ID:

On Wed, Mar 9, 2011 at 6:17 AM, Ralf Gommers wrote:
> On Sun, Feb 20, 2011 at 3:33 PM, Ralf Gommers wrote:
> > On Tue, Feb 15, 2011 at 7:53 AM, Warren Weckesser wrote:
> >> On Sat, Feb 12, 2011 at 7:28 PM, Ralf Gommers
> >> <ralf.gommers at googlemail.com> wrote:
> >>> On Sun, Feb 13, 2011 at 5:05 AM, Pauli Virtanen wrote:
> >>>> One wild idea to make this clearer could be to prefix all internal
> >>>> sub-package names with the usual '_'. In the long run, it probably
> >>>> wouldn't be as bad as it initially sounds.
> >>>
> >>> This is not a wild idea at all; I think it should be done. I
> >>> considered all modules without a '_' prefix public API.
> >>
> >> Agreed (despite what I said in my initial post).
> >>
> >> To actually do this, we'll need to check which packages have modules
> >> that should be private. These can be renamed in 0.10 to have an
> >> underscore, and new public versions created that contain a
> >> deprecation warning and that import everything from the private
> >> version. The deprecated public modules can be removed in 0.11.
> >>
> >> Some modules will require almost no changes. For example,
> >> scipy.cluster *only* exposes two modules, vq and hierarchy, so no
> >> changes are needed. (Well, there is also the module info.py that all
> >> packages have. That should become _info.py--there's no need for that
> >> to be public, is there?)
> >
> > Agreed, rename to _info.py
>
> This can't actually be done very easily, because the info.py name is
> hardcoded in PackageLoader in numpy._import_tools.py. So if desired, it
> first has to be done in numpy.
>
> >> Other packages will probably require some discussion about what
> >> modules should be public.
> >>
> >> Consider the above a proposed change for 0.10 and 0.11--what do you
> >> think?
> >
> > Sounds good. Attached is a file that goes through scipy sub-packages
> > and checks their __all__ for modules. Those are public by definition
> > (but this doesn't give you the whole API). It's pretty messy, for
> > example:
> >
> > signal
> > ======
> > bsplines
> > filter_design
> > fir_filter_design
> > integrate
> > interpolate
> > linalg
> > ltisys
> > np
> > numpy
> > optimize
> > scipy
> > signaltools
> > sigtools
> > special
> > spline
> > types
> > warnings
> > waveforms
> > wavelets
> > windows
> >
> > That should be cleaned up. Then there are also public modules that
> > don't show up, of course (for example odr.models).
> >
> > How about doing the following?:
> > 1. Start a doc, perhaps on the wiki, with a full list of public
> >    modules.
> > 2. Put that doc at the beginning of the reference guide, as well as
> >    the relevant part in the docstring for each sub-package.
> > 3. Clean up existing __all__, and add __all__ to sub-packages that
> >    don't have them yet.
> > 4. Rename private modules, with a suitable deprecation warning.
>
> I've done the uncontroversial part (3):
> https://github.com/rgommers/scipy/tree/refactor-private-modules
> This cleans up the sub-package namespaces quite a bit, which will also
> make, for example, tab-completion in IPython easier to use. For
> example, sp.signal. now gives 147 results instead of 262.

After Warren reviewed it (thanks!), I've just pushed this branch. If
anyone notices any functions that look like they've gone missing, then
that's probably my fault. Easy to fix anyway, and it should get test
coverage up.

> There is one thing I wasn't sure about: should
> arccos/arccosh/arcsinh/... stay exposed in the scipy.special namespace,
> even though they are numpy functions?

I left these out of the scipy.special namespace; they're numpy functions
after all. If anyone thinks this needs a deprecation, let me know.

The next step is to actually add underscores to non-public modules; I'll
probably start on that after next week.

Cheers,
Ralf

> Here is a complete list of modules that I think are part (or should be
> part) of the public API. I added modules because they are documented as
> being public in docs, or contain useful functions/objects that are not
> exposed one level up, or because the sub-package namespace is very
> large and could benefit from a subdivision.
>
> cluster
> =======
> vq
> hierarchy
>
> constants
> =========
>
> fftpack
> =======
>
> integrate
> =========
> vode
>
> interpolate
> ===========
> dfitpack
>
> io
> ==
> arff
> idl
> matlab
> mmio
> netcdf
> wavfile
>
> linalg
> ======
> calc_lwork
> cblas
> clapack
> fblas
> flapack
> flinalg
> lapack
> special_matrices
>
> maxentropy
> ==========
>
> misc
> ====
> doccer
> pilutil
>
> ndimage
> =======
> filters
> fourier
> interpolation
> io
> measurements
> morphology
>
> odr
> ===
> models
> odrpack
>
> optimize
> ========
>
> signal
> ======
> bsplines
> filter_design
> fir_filter_design
> ltisys
> spectral
> spline
> waveforms
> wavelets
> windows
>
> sparse
> ======
>
> sparse.linalg
> =============
> umfpack
>
> spatial
> =======
> distance
>
> special
> =======
>
> stats
> =====
> distributions
> mstats
> <it's too large and slow to import.>
>
> weave
> =====

Cheers,
Ralf

From gael.varoquaux at normalesup.org  Wed May 25 17:35:33 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 25 May 2011 23:35:33 +0200
Subject: [SciPy-Dev] Entropy from empirical high-dimensional data
Message-ID: <20110525213533.GC9388@phare.normalesup.org>

Hi list,

I am looking at estimating entropy and conditional entropy from data for
which I only have access to observations, and not the underlying
probabilistic laws.

With low-dimensional data, I would simply use an empirical estimate of
the probabilities by converting each observation to its quantile, and
then apply the standard formula for entropy (for instance using
scipy.stats.entropy).

However, I have high-dimensional data (~100 features and 30000
observations). Not only is it harder to convert observations to
probabilities in the empirical law, but I am also worried about
curse-of-dimensionality effects: density estimation in high dimension is
a difficult problem.
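(For the low-dimensional case, I mean something like this 1-d sketch;
the number of bins is arbitrary:)

import numpy as np
from scipy import stats

def empirical_entropy(x, bins=50):
    # plug-in estimate: histogram counts -> probabilities -> entropy;
    # stats.entropy normalizes the counts to a distribution itself
    counts, _ = np.histogram(x, bins=bins)
    return stats.entropy(counts)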
Does anybody have advice, or Python code to point to, for this task?

Cheers,
Gaël

From gael.varoquaux at normalesup.org  Wed May 25 17:40:03 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 25 May 2011 23:40:03 +0200
Subject: [SciPy-Dev] Entropy from empirical high-dimensional data
In-Reply-To: <20110525213533.GC9388@phare.normalesup.org>
References: <20110525213533.GC9388@phare.normalesup.org>
Message-ID: <20110525214003.GD9388@phare.normalesup.org>

Sorry for the noise; I sent this to the dev list, while it belongs on
the user list.

Hi list,

I am looking at estimating entropy and conditional entropy from data for
which I only have access to observations, and not the underlying
probabilistic laws.

With low-dimensional data, I would simply use an empirical estimate of
the probabilities by converting each observation to its quantile, and
then apply the standard formula for entropy (for instance using
scipy.stats.entropy).

However, I have high-dimensional data (~100 features and 30000
observations). Not only is it harder to convert observations to
probabilities in the empirical law, but I am also worried about
curse-of-dimensionality effects: density estimation in high dimension is
a difficult problem.

Does anybody have advice, or Python code to point to, for this task?

Cheers,
Gaël

From fperez.net at gmail.com  Thu May 26 02:08:25 2011
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 25 May 2011 23:08:25 -0700
Subject: [SciPy-Dev] Used "Automatic Merge" on my github pull request...
In-Reply-To: References: Message-ID:

On Sun, May 22, 2011 at 2:49 AM, Ralf Gommers wrote:
> That button is very pointless - best to ignore it.

For those who might not have noticed (like me -
https://github.com/blog/843-the-merge-button#comment-12116), the little
'i' icon on the left still has the old 3-step copy/paste instructions,
so you can do the merge locally with proper testing without having to
manually type all the proper git commands.

That auto-merge button may be useful for multi-commit requests (where
--no-ff is typically what you want, to keep them grouped) that happen to
be all documentation, so you're fine not running the test suite locally.
But the notion of auto-merging stuff without testing it at all isn't
very nice. And if you have it merged locally and ran the tests, then you
can just push.

So in summary, follow Ralf's advice :)

f

From emanuele at relativita.com  Thu May 26 04:17:16 2011
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Thu, 26 May 2011 10:17:16 +0200
Subject: [SciPy-Dev] Entropy from empirical high-dimensional data
In-Reply-To: <20110525213533.GC9388@phare.normalesup.org>
References: <20110525213533.GC9388@phare.normalesup.org>
Message-ID: <4DDE0C8C.4030002@relativita.com>

Hi Gael,

I recently played with a related problem which you might find of
interest:

(short paper)
http://nilab.cimec.unitn.it/people/olivetti/work/prni2011/olivetti_bayes_error.pdf
(slides)
http://nilab.cimec.unitn.it/people/olivetti/work/prni2011/olivetti_prni2011_bayesian.pdf

The proposed model can be used to estimate the posterior probability of
information given observations, using classifiers. Note that these are
just preliminary results.
If this is of some help to you, just let me know :-)

I've recently talked to Stephen Strother about this topic, and he
pointed me to this paper: http://www.ncbi.nlm.nih.gov/pubmed/20533565

HTH,
Emanuele

On 05/25/2011 11:35 PM, Gael Varoquaux wrote:
> Hi list,
>
> I am looking at estimating entropy and conditional entropy from data
> for which I only have access to observations, and not the underlying
> probabilistic laws.
>
> With low-dimensional data, I would simply use an empirical estimate of
> the probabilities by converting each observation to its quantile, and
> then apply the standard formula for entropy (for instance using
> scipy.stats.entropy).
>
> However, I have high-dimensional data (~100 features and 30000
> observations). Not only is it harder to convert observations to
> probabilities in the empirical law, but I am also worried about
> curse-of-dimensionality effects: density estimation in high dimension
> is a difficult problem.
>
> Does anybody have advice, or Python code to point to, for this task?
>
> Cheers,
>
> Gaël

From gael.varoquaux at normalesup.org  Thu May 26 04:23:22 2011
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 26 May 2011 10:23:22 +0200
Subject: [SciPy-Dev] Entropy from empirical high-dimensional data
In-Reply-To: <4DDE0C8C.4030002@relativita.com>
References: <20110525213533.GC9388@phare.normalesup.org>
 <4DDE0C8C.4030002@relativita.com>
Message-ID: <20110526082322.GA24376@phare.normalesup.org>

On Thu, May 26, 2011 at 10:17:16AM +0200, Emanuele Olivetti wrote:
> I recently played with a related problem which you might find of
> interest:
> (slides)
> http://nilab.cimec.unitn.it/people/olivetti/work/prni2011/olivetti_prni2011_bayesian.pdf

Very interesting. This is quite unrelated to what I am doing right now,
but it is very interesting in general.

> I've recently talked to Stephen Strother about this topic, and he
> pointed me to this paper: http://www.ncbi.nlm.nih.gov/pubmed/20533565

I saw Stephen at NIPS and we did discuss these matters. All this is
indeed promising.

Thanks for the pointers,

Gaël

From zw4131 at gmail.com  Sun May 29 11:14:41 2011
From: zw4131 at gmail.com (江大伟)
Date: Sun, 29 May 2011 23:14:41 +0800
Subject: [SciPy-Dev] where and how I can submit the translation.
Message-ID:

Hi guys,

I have done some translation of the first part of the SciPy Reference
Guide, but I do not know where and how I can submit it.

From ralf.gommers at googlemail.com  Sun May 29 11:36:21 2011
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 29 May 2011 17:36:21 +0200
Subject: [SciPy-Dev] where and how I can submit the translation.
In-Reply-To: References: Message-ID:

Hi,

2011/5/29 江大伟:
> Hi guys
>
> I have done some translation of the first part of the SciPy Reference
> Guide, but I do not know where and how I can submit it.

That sounds promising. Are you able to put this up on github, or online
in some other form? You can also open a ticket
(http://projects.scipy.org/scipy/newticket; you need to register first).
At the moment we don't have any translations, so after seeing what you
have done so far we can discuss how to proceed.

Cheers,
Ralf
From warren.weckesser at enthought.com  Mon May 30 18:13:29 2011
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Mon, 30 May 2011 17:13:29 -0500
Subject: [SciPy-Dev] Platform-dependent tests?
Message-ID:

Jeff Armstrong has enhanced the Schur decomposition function
scipy.linalg.schur to expose the ability to sort the eigenvalues (more
accurately, to group them according to a boolean function); see
https://github.com/scipy/scipy/pull/23.

schur() is a wrapper for the DGEES Fortran library function. According
to its documentation, if the matrix is poorly scaled, the eigenvalues
can fail to satisfy the sorting condition *after* they've been sorted,
because of rounding errors. The function returns an error code when this
occurs. Jeff's code checks for this condition, and he also added a
couple of unit tests for it. Unfortunately, the tests fail on my
computer--that is, the error condition does not happen. This is not
surprising, as the error condition relies on the behavior of rounding
errors.

So, to the question: is there a way to make these tests
platform-dependent, so that they only run on a
platform/architecture/whatever where the tests are known to trigger the
desired condition? Or is the result likely to depend on so many other
factors (compiler and optimization settings, third-party math library,
phase of the moon, etc.) that it is hopeless to try?
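(One possible pattern, sketched with the decorators in numpy.testing;
the platform condition here is purely illustrative -- it would have to
be whatever combination is actually known to trigger the DGEES error:)

import sys
from numpy.testing import dec

@dec.skipif(not sys.platform.startswith('win'),
            "DGEES sort-failure condition only reproduced on this platform")
def test_schur_sort_error():
    # the poorly scaled matrix test from the pull request would go here
    pass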
Warren

From akshar.bhosale at gmail.com  Sun May 29 00:33:53 2011
From: akshar.bhosale at gmail.com (akshar bhosale)
Date: Sun, 29 May 2011 10:03:53 +0530
Subject: [SciPy-Dev] numpy,scipy installation using mkl
Message-ID:

Hi,

Is this the right forum for questions about numpy/scipy installation?
Please find our issue below.

We have a machine with Intel Xeon X7350 processors (8 of them) running
RHEL 5.2 x86_64 with kernel 2.6.18-92.el5. We have the following
configuration: /opt/intel/Compiler/11.0/069/mkl/lib/em64t

Now we want to install numpy and scipy as a user in my home directory.
These are the libraries built inside MKL:

libfftw2x_cdft_DOUBLE.a libmkl_blacs_sgimpt_ilp64.a libmkl_intel_ilp64.a libmkl_pgi_thread.so libmkl_vml_mc2.so
libfftw2xc_intel.a libmkl_blacs_sgimpt_lp64.a libmkl_intel_ilp64.so libmkl_scalapack.a libmkl_vml_mc3.so
libfftw2xf_intel.a libmkl_blas95.a libmkl_intel_lp64.a libmkl_scalapack_ilp64.a libmkl_vml_mc.so
libfftw3xc_intel.a libmkl_cdft.a libmkl_intel_lp64.so libmkl_scalapack_ilp64.so libmkl_vml_p4n.so
libfftw3xf_intel.a libmkl_cdft_core.a libmkl_intel_sp2dp.a libmkl_scalapack_lp64.a locale
libmkl_blacs_ilp64.a libmkl_core.a libmkl_intel_sp2dp.so libmkl_scalapack_lp64.so mkl77_blas.mod
libmkl_blacs_intelmpi20_ilp64.a libmkl_core.so libmkl_intel_thread.a libmkl_sequential.a mkl77_lapack1.mod
libmkl_blacs_intelmpi20_lp64.a libmkl_def.so libmkl_intel_thread.so libmkl_sequential.so mkl77_lapack.mod
libmkl_blacs_intelmpi_ilp64.a libmkl_em64t.a libmkl_lapack95.a libmkl.so mkl95_blas.mod
libmkl_blacs_intelmpi_ilp64.so libmkl_gf_ilp64.a libmkl_lapack.a libmkl_solver.a mkl95_lapack.mod
libmkl_blacs_intelmpi_lp64.a libmkl_gf_ilp64.so libmkl_lapack.so libmkl_solver_ilp64.a mkl95_precision.mod
libmkl_blacs_intelmpi_lp64.so libmkl_gf_lp64.a libmkl_mc3.so libmkl_solver_ilp64_sequential.a
libmkl_blacs_lp64.a libmkl_gf_lp64.so libmkl_mc.so libmkl_solver_lp64.a
libmkl_blacs_openmpi_ilp64.a libmkl_gnu_thread.a libmkl_p4n.so libmkl_solver_lp64_sequential.a
libmkl_blacs_openmpi_lp64.a libmkl_gnu_thread.so libmkl_pgi_thread.a libmkl_vml_def.so

The versions we are trying to build are numpy 1.6.0b2 and scipy 0.9.0.
We have configured Python 2.6.6 as a user in my home directory; the
machine's system Python is 2.4.3.

We want to know the exact procedure for installing these using MKL, as
we are facing a lot of issues while installing them. Please help us.

From cimrman3 at ntc.zcu.cz  Tue May 31 04:04:55 2011
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 31 May 2011 10:04:55 +0200
Subject: [SciPy-Dev] ANN: SfePy 2011.2
Message-ID: <4DE4A127.9070006@ntc.zcu.cz>

I am pleased to announce release 2011.2 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems
of coupled partial differential equations by the finite element method.
The code is based on the NumPy and SciPy packages. It is distributed
under the new BSD license.

Home page: http://sfepy.org
Mailing lists, issue tracking: http://code.google.com/p/sfepy/
Git (source) repository: http://github.com/sfepy
Documentation: http://docs.sfepy.org/doc

Highlights of this release
--------------------------

- experimental implementation of terms aiming at easier usage and
  definition of new terms
- Mooney-Rivlin membrane term
- update of the build system to use setup.py exclusively
- allow switching boundary conditions on/off depending on time
- support for variable-time-step solvers

For more information on this release, see
http://sfepy.googlecode.com/svn/web/releases/2011.2_RELEASE_NOTES.txt
(full release notes, rather long and technical).

Best regards,
Robert Cimrman and Vladimír Lukeš