From cournape at gmail.com Fri Jan 1 21:32:00 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 2 Jan 2010 11:32:00 +0900 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912300816s12c934adh4abdd6d703f8928f@mail.gmail.com> Message-ID: <5b8d13221001011832p5ca4875dj13f04ba7ee61dd41@mail.gmail.com> On Thu, Dec 31, 2009 at 6:06 AM, Darren Dale wrote: > > I should defer to the description of extras in the setuptools > documentation. It is only a few paragraphs long: > > http://peak.telecommunity.com/DevCenter/setuptools#declaring-extras-optional-features-with-their-own-dependencies Ok, so there are two issues related to this feature: - supporting variant at the build stage - supporting different variants of the same package in the dependency graph at install time The first issue is definitely supported - I fixed a bug in toydist to support this correctly, and this will be used when converting setuptools-based setup.py which use the features argument. The second issue is more challenging. It complicates the dependency handling quite a bit, and may cause difficult situations to happen at dependency resolution time. This becomes particularly messy if you mix packages you build yourself with packages grabbed from a repository. I wonder if there is a simpler solution which would give a similar feature set. cheers, David From gael.varoquaux at normalesup.org Sat Jan 2 02:58:48 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 2 Jan 2010 08:58:48 +0100 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <5b8d13221001011832p5ca4875dj13f04ba7ee61dd41@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912300816s12c934adh4abdd6d703f8928f@mail.gmail.com> <5b8d13221001011832p5ca4875dj13f04ba7ee61dd41@mail.gmail.com> Message-ID: <20100102075848.GA17293@phare.normalesup.org> On Sat, Jan 02, 2010 at 11:32:00AM +0900, David Cournapeau wrote: > [snip] > - supporting different variants of the same package in the > dependency graph at install time > [snip] > The second issue is more challenging. It complicates the dependency > handling quite a bit, and may cause difficult situations to happen at > dependency resolution time. This becomes particularly messy if you mix > packages you build yourself with packages grabbed from a repository. I > wonder if there is a simpler solution which would give a similar > feature set. AFAICT, in Debian, the same feature is given via virtual packages: you would have: python-matplotlib python-matplotlib-basemap for instance. It is interesting to note that the same source package may be used to generate both binary, end-user, packages. And happy new year! Ga?l From cournape at gmail.com Sat Jan 2 05:18:34 2010 From: cournape at gmail.com (David Cournapeau) Date: Sat, 2 Jan 2010 19:18:34 +0900 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <20100102075848.GA17293@phare.normalesup.org> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912300816s12c934adh4abdd6d703f8928f@mail.gmail.com> <5b8d13221001011832p5ca4875dj13f04ba7ee61dd41@mail.gmail.com> <20100102075848.GA17293@phare.normalesup.org> Message-ID: <5b8d13221001020218j32eaff29v2c777c513373a43c@mail.gmail.com> On Sat, Jan 2, 2010 at 4:58 PM, Gael Varoquaux wrote: > On Sat, Jan 02, 2010 at 11:32:00AM +0900, David Cournapeau wrote: >> [snip] >> ? - supporting different variants of the same package in the >> dependency graph at install time > >> [snip] > >> The second issue is more challenging. It complicates the dependency >> handling quite a bit, and may cause difficult situations to happen at >> dependency resolution time. This becomes particularly messy if you mix >> packages you build yourself with packages grabbed from a repository. I >> wonder if there is a simpler solution which would give a similar >> feature set. > > > AFAICT, in Debian, the same feature is given via virtual packages: you > would have: I don't think virtual-packages entirely fix the issue. AFAIK, virtual packages have two uses: - handle dependencies where several packages may resolve one particular dependency in an equivalent way (one good example is LAPACK: both liblapack and libatlas provides the lapack feature) - closer to this discussion, you can build several variants of the same package, and each variant would resolve the dependency on a virtual package handling the commonalities. For example, say we have two numpy packages, one built with lapack (python-numpy-full), the other without (python-numpy-core). What happens when a package foo depends on numpy-full, but numpy-core is installed ? AFAICS, this can only work as long as the set containing every variant can be ordered (in the conventional set ordering sense), and the dependency can be satisfied by the smallest one. cheers, David From dwf at cs.toronto.edu Sun Jan 3 04:14:46 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 3 Jan 2010 04:14:46 -0500 Subject: [SciPy-dev] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <5b8d13220912290622m2f0ec2c3x5a26e63118cb29a0@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290622m2f0ec2c3x5a26e63118cb29a0@mail.gmail.com> Message-ID: <37CD5035-07A6-4C1B-A3EF-869BE91197E6@cs.toronto.edu> On 29-Dec-09, at 9:22 AM, David Cournapeau wrote: > I think Pypi is mostly useless. The lack of enforced metadata is a big > no-no IMHO. The fact that Pypi is miles beyond CRAN for example is > quite significant. I want CRAN for scientific python, and I don't see > Pypi becoming it in the near future. Agreed. The amount of times I have seen packages on the RSS feed that have no package description, no external URL, no author contact info... Not to mention the huge preponderance of plone.basketweaving.underwater packages or other plugins that really don't interest me but I have no good way to filter out (not that there's anything wrong with plone, but it's a general problem with a lack of namespace-awareness). David, the description of toydist sounds fantastic, and there's nobody I'd trust more to have thought through the design of such a beast. Happy New Year to you and yours. :) David From njs at pobox.com Sun Jan 3 06:05:54 2010 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 3 Jan 2010 03:05:54 -0800 Subject: [SciPy-dev] [matplotlib-devel] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> Message-ID: <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> On Tue, Dec 29, 2009 at 6:34 AM, David Cournapeau wrote: > Buildout, virtualenv all work by sandboxing from the system python: > each of them do not see each other, which may be useful for > development, but as a deployment solution to the casual user who may > not be familiar with python, it is useless. A scientist who installs > numpy, scipy, etc... to try things out want to have everything > available in one python interpreter, and does not want to jump to > different virtualenvs and whatnot to try different packages. What I do -- and documented for people in my lab to do -- is set up one virtualenv in my user account, and use it as my default python. (I 'activate' it from my login scripts.) The advantage of this is that easy_install (or pip) just works, without any hassle about permissions etc. This should be easier, but I think the basic approach is sound. "Integration with the package system" is useless; the advantage of distribution packages is that distributions can provide a single coherent system with consistent version numbers across all packages, etc., and the only way to "integrate" with that is to, well, get the packages into the distribution. On another note, I hope toydist will provide a "source prepare" step, that allows arbitrary code to be run on the source tree. (For, e.g., cython->C conversion, ad-hoc template languages, etc.) IME this is a very common pain point with distutils; there is just no good way to do it, and it has to be supported in the distribution utility in order to get everything right. In particular: -- Generated files should never be written to the source tree itself, but only the build directory -- Building from a source checkout should run the "source prepare" step automatically -- Building a source distribution should also run the "source prepare" step, and stash the results in such a way that when later building the source distribution, this step can be skipped. This is a common requirement for user convenience, and necessary if you want to avoid arbitrary code execution during builds. And if you just set up the distribution util so that the only place you can specify arbitrary code execution is in the "source prepare" step, then even people who know nothing about packaging will automatically get all of the above right. Cheers, -- Nathaniel From gael.varoquaux at normalesup.org Sun Jan 3 06:11:53 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 3 Jan 2010 12:11:53 +0100 Subject: [SciPy-dev] [Numpy-discussion] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation In-Reply-To: <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> Message-ID: <20100103111153.GB24770@phare.normalesup.org> On Sun, Jan 03, 2010 at 03:05:54AM -0800, Nathaniel Smith wrote: > What I do -- and documented for people in my lab to do -- is set up > one virtualenv in my user account, and use it as my default python. (I > 'activate' it from my login scripts.) The advantage of this is that > easy_install (or pip) just works, without any hassle about permissions > etc. This should be easier, but I think the basic approach is sound. > "Integration with the package system" is useless; the advantage of > distribution packages is that distributions can provide a single > coherent system with consistent version numbers across all packages, > etc., and the only way to "integrate" with that is to, well, get the > packages into the distribution. That works because either you use packages that don't have much hard-core compiled dependencies, or these are already installed. Think about installing VTK or ITK this way, even something simpler such as umfpack. I think that you would loose most of your users. In my lab, I do lose users on such packages actually. Beside, what you are describing is possible without package isolation, it is simply the use of a per-user local site-packages, which now semi automatic in python2.6 using the '.local' directory. I do agree that, in a research lab, this is a best practice. Ga?l From cournape at gmail.com Sun Jan 3 06:24:59 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 3 Jan 2010 20:24:59 +0900 Subject: [SciPy-dev] [matplotlib-devel] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <4B3F8FF6.7010800@astraw.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <5b8d13220912300816s12c934adh4abdd6d703f8928f@mail.gmail.com> <5b8d13221001011832p5ca4875dj13f04ba7ee61dd41@mail.gmail.com> <20100102075848.GA17293@phare.normalesup.org> <5b8d13221001020218j32eaff29v2c777c513373a43c@mail.gmail.com> <4B3F8FF6.7010800@astraw.com> Message-ID: <5b8d13221001030324i67c3125dh8efa68c04a539659@mail.gmail.com> On Sun, Jan 3, 2010 at 3:27 AM, Andrew Straw wrote: >> > Typically, the dependencies only depend on the smallest subset of what > they require (if they don't need lapack, they'd only depend on > python-numpy-core in your example), but yes, if there's an unsatisfiable > condition, then apt-get will raise an error and abort. In practice, this > system seems to work quite well, IMO. Yes, but: - debian dependency resolution is complex. I think many people don't realize how complex the problem really is (AFAIK, any correct scheme to resolve dependencies in debian requires an algorithm which is NP-complete ) - introducing a lot of variants significantly slow down the whole thing. I think it worths thinking whether our problems warrant such a complexity. > > Anyhow, here's the full Debian documentation: > http://www.debian.org/doc/debian-policy/ch-relationships.html This is not the part I am afraid of. This is: http://people.debian.org/~dburrows/model.pdf cheers, David From cournape at gmail.com Sun Jan 3 07:23:14 2010 From: cournape at gmail.com (David Cournapeau) Date: Sun, 3 Jan 2010 21:23:14 +0900 Subject: [SciPy-dev] [matplotlib-devel] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> Message-ID: <5b8d13221001030423j96fdb72l832964f6c5df7f97@mail.gmail.com> On Sun, Jan 3, 2010 at 8:05 PM, Nathaniel Smith wrote: > On Tue, Dec 29, 2009 at 6:34 AM, David Cournapeau wrote: >> Buildout, virtualenv all work by sandboxing from the system python: >> each of them do not see each other, which may be useful for >> development, but as a deployment solution to the casual user who may >> not be familiar with python, it is useless. A scientist who installs >> numpy, scipy, etc... to try things out want to have everything >> available in one python interpreter, and does not want to jump to >> different virtualenvs and whatnot to try different packages. > > What I do -- and documented for people in my lab to do -- is set up > one virtualenv in my user account, and use it as my default python. (I > 'activate' it from my login scripts.) The advantage of this is that > easy_install (or pip) just works, without any hassle about permissions > etc. It just works if you happen to be able to build everything from sources. That alone means you ignore the majority of users I intend to target. No other community (except maybe Ruby) push those isolated install solutions as a general deployment solutions. If it were such a great idea, other people would have picked up those solutions. > This should be easier, but I think the basic approach is sound. > "Integration with the package system" is useless; the advantage of > distribution packages is that distributions can provide a single > coherent system with consistent version numbers across all packages, > etc., and the only way to "integrate" with that is to, well, get the > packages into the distribution. Another way is to provide our own repository for a few major distributions, with automatically built packages. This is how most open source providers work. Miguel de Icaza explains this well: http://tirania.org/blog/archive/2007/Jan-26.html I hope we will be able to reuse much of the opensuse build service infrastructure. > > On another note, I hope toydist will provide a "source prepare" step, > that allows arbitrary code to be run on the source tree. (For, e.g., > cython->C conversion, ad-hoc template languages, etc.) IME this is a > very common pain point with distutils; there is just no good way to do > it, and it has to be supported in the distribution utility in order to > get everything right. In particular: > ?-- Generated files should never be written to the source tree > itself, but only the build directory > ?-- Building from a source checkout should run the "source prepare" > step automatically > ?-- Building a source distribution should also run the "source > prepare" step, and stash the results in such a way that when later > building the source distribution, this step can be skipped. This is a > common requirement for user convenience, and necessary if you want to > avoid arbitrary code execution during builds. Build directories are hard to implement right. I don't think toydist will support this directly. IMO, those advanced builds warrant a real build tool - one main goal of toydist is to make integration with waf or scons much easier. Both waf and scons have the concept of a build directory, which should do everything you described. David From josef.pktd at gmail.com Sun Jan 3 10:40:53 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 3 Jan 2010 10:40:53 -0500 Subject: [SciPy-dev] namespace recommendation in scipy tutorial and docs for np.emath Message-ID: <1cd32cbb1001030740q13a8c27cu77bd88b71c29f3ce@mail.gmail.com> I'm going through some scipy tickets and I found http://projects.scipy.org/scipy/ticket/882 I don't know much about the old usage of mixing (mixed up) namespaces. But I think it would be good if someone could check the namespace convention/recommendation in the basic scipy tutorial http://docs.scipy.org/doc/scipy/reference/tutorial/basic.html When I tried to figure out the difference between scipy.* and numpy.*, I also found that the functions in numpy.emath (numpy.scipy) have good docstrings, but they are not included in the docs (at least I don't find them), because http://docs.scipy.org/numpy/docs/numpy-docs/reference/routines.emath.rst/#numpy-docs-reference-routines-emath-rst is still a stub page without function links. http://docs.scipy.org/doc/numpy/reference/routines.emath.html Josef From arkapravobhaumik at gmail.com Sun Jan 3 14:19:19 2010 From: arkapravobhaumik at gmail.com (Arkapravo Bhaumik) Date: Sun, 3 Jan 2010 19:19:19 +0000 Subject: [SciPy-dev] Module for test for convergence ? Message-ID: Hello everyone ! Wish you all a very happy new year ! I had a question ! Is there any scipy module for confirming *convergence of a series*, given the generic term of tn ? If not, have any of you tried to write such a module ? I have been searching MATLAB, I doubt that there is any such facility ? Regards Arkapravo -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sun Jan 3 18:42:32 2010 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 3 Jan 2010 15:42:32 -0800 Subject: [SciPy-dev] [matplotlib-devel] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <5b8d13221001030423j96fdb72l832964f6c5df7f97@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> <5b8d13221001030423j96fdb72l832964f6c5df7f97@mail.gmail.com> Message-ID: <961fa2b41001031542t203e8ef6mee8f590095e54d18@mail.gmail.com> On Sun, Jan 3, 2010 at 4:23 AM, David Cournapeau wrote: > On Sun, Jan 3, 2010 at 8:05 PM, Nathaniel Smith wrote: >> What I do -- and documented for people in my lab to do -- is set up >> one virtualenv in my user account, and use it as my default python. (I >> 'activate' it from my login scripts.) The advantage of this is that >> easy_install (or pip) just works, without any hassle about permissions >> etc. > > It just works if you happen to be able to build everything from > sources. That alone means you ignore the majority of users I intend to > target. > > No other community (except maybe Ruby) push those isolated install > solutions as a general deployment solutions. If it were such a great > idea, other people would have picked up those solutions. AFAICT, R works more-or-less identically (once I convinced it to use a per-user library directory); install.packages() builds from source, and doesn't automatically pull in and build random C library dependencies. I'm not advocating the 'every app in its own world' model that virtualenv's designers had min mind, but virtualenv is very useful to give each user their own world. Normally I only use a fraction of virtualenv's power this way, but sometimes it's handy that they've solved the more general problem -- I can easily move my environment out of the way and rebuild if I've done something stupid, or experiment with new python versions in isolation, or whatever. And when you *do* have to reproduce some old environment -- if only to test that the new improved environment gives the same results -- then it's *really* handy. >> This should be easier, but I think the basic approach is sound. >> "Integration with the package system" is useless; the advantage of >> distribution packages is that distributions can provide a single >> coherent system with consistent version numbers across all packages, >> etc., and the only way to "integrate" with that is to, well, get the >> packages into the distribution. > > Another way is to provide our own repository for a few major > distributions, with automatically built packages. This is how most > open source providers work. Miguel de Icaza explains this well: > > http://tirania.org/blog/archive/2007/Jan-26.html > > I hope we will be able to reuse much of the opensuse build service > infrastructure. Sure, I'm aware of the opensuse build service, have built third-party packages for my projects, etc. It's a good attempt, but also has a lot of problems, and when talking about scientific software it's totally useless to me :-). First, I don't have root on our compute cluster. Second, even if I did I'd be very leery about installing third-party packages because there is no guarantee that the version numbering will be consistent between the third-party repo and the real distro repo -- suppose that the distro packages 0.1, then the third party packages 0.2, then the distro packages 0.3, will upgrades be seamless? What if the third party screws up the version numbering at some point? Debian has "epochs" to deal with this, but third-parties can't use them and maintain compatibility. What if the person making the third party packages is not an expert on these random distros that they don't even use? Will bug reporting tools work properly? Distros are complicated. Third, while we shouldn't advocate that people screw up backwards compatibility, version skew is a real issue. If I need one version of a package and my lab-mate needs another and we have submissions due tomorrow, then filing bugs is a great idea but not a solution. Fourth, even if we had expert maintainers taking care of all these third-party packages and all my concerns were answered, I couldn't convince our sysadmin of that; he's the one who'd have to clean up if something went wrong we don't have a big budget for overtime. Let's be honest -- scientists, on the whole, suck at IT infrastructure, and small individual packages are not going to be very expertly put together. IMHO any real solution should take this into account, keep them sandboxed from the rest of the system, and focus on providing the most friendly and seamless sandbox possible. >> On another note, I hope toydist will provide a "source prepare" step, >> that allows arbitrary code to be run on the source tree. (For, e.g., >> cython->C conversion, ad-hoc template languages, etc.) IME this is a >> very common pain point with distutils; there is just no good way to do >> it, and it has to be supported in the distribution utility in order to >> get everything right. In particular: >> ?-- Generated files should never be written to the source tree >> itself, but only the build directory >> ?-- Building from a source checkout should run the "source prepare" >> step automatically >> ?-- Building a source distribution should also run the "source >> prepare" step, and stash the results in such a way that when later >> building the source distribution, this step can be skipped. This is a >> common requirement for user convenience, and necessary if you want to >> avoid arbitrary code execution during builds. > > Build directories are hard to implement right. I don't think toydist > will support this directly. IMO, those advanced builds warrant a real > build tool - one main goal of toydist is to make integration with waf > or scons much easier. Both waf and scons have the concept of a build > directory, which should do everything you described. Maybe I was unclear -- proper build directory handling is nice, Cython/Pyrex's distutils integration get it wrong (not their fault, distutils is just impossible to do anything sensible with, as you've said), and I've never found build directories hard to implement (perhaps I'm missing something). But what I'm really talking about is having a "pre-build" step that integrates properly with the source and binary packaging stages, and that's not something waf or scons have any particular support for, AFAIK. -- Nathaniel From dwf at cs.toronto.edu Sun Jan 3 19:18:55 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Sun, 3 Jan 2010 19:18:55 -0500 Subject: [SciPy-dev] Module for test for convergence ? In-Reply-To: References: Message-ID: <195B4BAE-5D4A-4374-9CF2-AB26D7DED075@cs.toronto.edu> On 3-Jan-10, at 2:19 PM, Arkapravo Bhaumik wrote: > I had a question ! Is there any scipy module for confirming > convergence of a series, given the generic term of tn ? If not, have > any of you tried to write such a module ? This would be more a task for SymPy. David From arkapravobhaumik at gmail.com Sun Jan 3 20:42:03 2010 From: arkapravobhaumik at gmail.com (Arkapravo Bhaumik) Date: Mon, 4 Jan 2010 01:42:03 +0000 Subject: [SciPy-dev] Zeta Functions ! Message-ID: Dear Friends I found some serious discrepancies with the zeta function modules. I have discussed them in my blogspot : http://3chevrons.blogspot.com/2010/01/zeta-in-scipy.html . Please feel free to comment on it, any other suggestion to improve the module is welcome. Regards Arkapravo -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun Jan 3 21:38:15 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 03 Jan 2010 20:38:15 -0600 Subject: [SciPy-dev] Zeta Functions ! In-Reply-To: References: Message-ID: <4B415497.4090008@enthought.com> I don't know the reason for the names of the functions, but it appears that the function that you should use to compute the zeta function for an arbitrary argument is zetac(x) + 1: In [1]: from scipy.special import zeta, zetac In [2]: zetac(0) + 1 Out[2]: -0.5 In [3]: zetac(-1) + 1 Out[3]: -0.083333333333333259 In [4]: zetac(0.5) + 1 Out[4]: -1.4603545088095866 In [5]: zetac(1) + 1 Out[5]: 1.7014118346046923e+38 In [6]: zetac(2) + 1 Out[6]: 1.6449340668482264 In [7]: zetac(-2) + 1 Out[7]: 0.0 zeta() and zetac() are wrappers for functions from the cephes library; you can find the source code at the end of this list: http://projects.scipy.org/scipy/browser/trunk/scipy/special/cephes In the C file zeta.c, you can see that zeta(x,q) returns NaN if x < 1.0. Of course, a SciPy user should not have to dig into the C code to find this out. Clearly the docstrings for zeta() and zetac() should be improved. Warren P.S. Be careful: don't forget the 1/2 is zero, unless you are using Python 3. Arkapravo Bhaumik wrote: > Dear Friends > > I found some serious discrepancies with the zeta function modules. I > have discussed them in my blogspot > : http://3chevrons.blogspot.com/2010/01/zeta-in-scipy.html . Please > feel free to comment on it, any other suggestion to improve the module > is welcome. > > Regards > > Arkapravo > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Sun Jan 3 22:07:56 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 3 Jan 2010 22:07:56 -0500 Subject: [SciPy-dev] Zeta Functions ! In-Reply-To: <4B415497.4090008@enthought.com> References: <4B415497.4090008@enthought.com> Message-ID: <1cd32cbb1001031907p704193e2qc4e6a6329129cb1@mail.gmail.com> On Sun, Jan 3, 2010 at 9:38 PM, Warren Weckesser wrote: > I don't know the reason for the names of the functions, but it appears > that the function that you should use to compute the zeta function for > an arbitrary argument is zetac(x) + 1: > > In [1]: from scipy.special import zeta, zetac > > In [2]: zetac(0) + 1 > Out[2]: -0.5 > > In [3]: zetac(-1) + 1 > Out[3]: -0.083333333333333259 > > In [4]: zetac(0.5) + 1 > Out[4]: -1.4603545088095866 > > In [5]: zetac(1) + 1 > Out[5]: 1.7014118346046923e+38 > > In [6]: zetac(2) + 1 > Out[6]: 1.6449340668482264 > > In [7]: zetac(-2) + 1 > Out[7]: 0.0 > > > zeta() and zetac() are wrappers for functions from the cephes library; > you can find the source code at the end of this list: > > ? ?http://projects.scipy.org/scipy/browser/trunk/scipy/special/cephes > > In the C file zeta.c, you can see that zeta(x,q) returns NaN if x < 1.0. > > Of course, a SciPy user should not have to dig into the C code to find > this out. ?Clearly the docstrings for zeta() and zetac() should be improved. c at end of name usually stands for complement, only in this case the docstring for zetac gets it wrong. y=zetac(x) returns 1.0 - the Riemann zeta function: sum(k**(-x), k=2..inf) In the c source you linked to, it says 25 * is related to the Riemann zeta function by 26 * 27 * Riemann zeta(x) = zetac(x) + 1. for scipy.special (and lapack) the docs is usually better (or only exist) in the c or fortran files. Josef > > > Warren > > P.S. Be careful: don't forget the 1/2 is zero, unless you are using > Python 3. > > > Arkapravo Bhaumik wrote: >> Dear Friends >> >> I found some serious discrepancies with the zeta function modules. I >> have discussed them in my blogspot >> : http://3chevrons.blogspot.com/2010/01/zeta-in-scipy.html . Please >> feel free to comment on it, any other suggestion to improve the module >> is welcome. >> >> Regards >> >> Arkapravo >> ------------------------------------------------------------------------ >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From keithcu at gmail.com Mon Jan 4 00:32:02 2010 From: keithcu at gmail.com (Keith Curtis) Date: Sun, 3 Jan 2010 21:32:02 -0800 Subject: [SciPy-dev] SciPy and vision Message-ID: <8db43c661001032132p3a6f1687h25f79d250fc0db53@mail.gmail.com> Hi; I have become convinced that SciPy is the best software community for people doing scientific computing, and hope more people join up. But one thing I see is not much computer vision code in SciPy. The image processing seems powerful but very basic right now. People can use OpenCV or one of its 3 Python wrappers, but it has defined its own Matrix class, etc. Researchers have told me that they don't like to use OpenCV because it is a monotholic monstrosity. The community seems quite unhealthy (e-mail traffic is quite low and much about build issues [1]) because I think it is so tempting for people to roll their own ( http://www.cs.cmu.edu/~cil/v-source.html) rather than spending months learning all of its eccentricities. I think you guys should keep working on your image processing support and build a better place for vision researchers to easily jump into. Is this in progress? Regards, -Keith [1] http://sourceforge.net/mailarchive/forum.php?forum_name=opencvlibrary-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From renato.francisco.amaral at gmail.com Mon Jan 4 02:14:04 2010 From: renato.francisco.amaral at gmail.com (Renato Francisco G. Amaral) Date: Mon, 4 Jan 2010 05:14:04 -0200 Subject: [SciPy-dev] SciPy and vision In-Reply-To: <8db43c661001032132p3a6f1687h25f79d250fc0db53@mail.gmail.com> References: <8db43c661001032132p3a6f1687h25f79d250fc0db53@mail.gmail.com> Message-ID: Hey, Keith, your book is a treasure of knowledge. I'm reading it again and learning a lot in the process. The book "After the Software Wars" is one of the best I've seen in ages. All the best, Renato Amaral On Mon, Jan 4, 2010 at 3:32 AM, Keith Curtis wrote: > Hi; > > I have become convinced that SciPy is the best software community for > people doing scientific computing, and hope more people join up. > > But one thing I see is not much computer vision code in SciPy. The image > processing seems powerful but very basic right now. People can use OpenCV or > one of its 3 Python wrappers, but it has defined its own Matrix class, etc. > Researchers have told me that they don't like to use OpenCV because it is a > monotholic monstrosity. The community seems quite unhealthy (e-mail traffic > is quite low and much about build issues [1]) because I think it is so > tempting for people to roll their own ( > http://www.cs.cmu.edu/~cil/v-source.html) > rather than spending months learning all of its eccentricities. > > I think you guys should keep working on your image processing support and > build a better place for vision researchers to easily jump into. Is this in > progress? > > Regards, > > -Keith > > [1] > http://sourceforge.net/mailarchive/forum.php?forum_name=opencvlibrary-devel > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Mon Jan 4 02:25:44 2010 From: cournape at gmail.com (David Cournapeau) Date: Mon, 4 Jan 2010 16:25:44 +0900 Subject: [SciPy-dev] [matplotlib-devel] [Numpy-discussion] Announcing toydist, improving distribution and packaging situation In-Reply-To: <961fa2b41001031542t203e8ef6mee8f590095e54d18@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> <5b8d13221001030423j96fdb72l832964f6c5df7f97@mail.gmail.com> <961fa2b41001031542t203e8ef6mee8f590095e54d18@mail.gmail.com> Message-ID: <5b8d13221001032325o250e4d5ao5f476b384ad6dd17@mail.gmail.com> On Mon, Jan 4, 2010 at 8:42 AM, Nathaniel Smith wrote: > On Sun, Jan 3, 2010 at 4:23 AM, David Cournapeau wrote: >> On Sun, Jan 3, 2010 at 8:05 PM, Nathaniel Smith wrote: >>> What I do -- and documented for people in my lab to do -- is set up >>> one virtualenv in my user account, and use it as my default python. (I >>> 'activate' it from my login scripts.) The advantage of this is that >>> easy_install (or pip) just works, without any hassle about permissions >>> etc. >> >> It just works if you happen to be able to build everything from >> sources. That alone means you ignore the majority of users I intend to >> target. >> >> No other community (except maybe Ruby) push those isolated install >> solutions as a general deployment solutions. If it were such a great >> idea, other people would have picked up those solutions. > > AFAICT, R works more-or-less identically (once I convinced it to use a > per-user library directory); install.packages() builds from source, > and doesn't automatically pull in and build random C library > dependencies. As mentioned by Robert, this is different from the usual virtualenv approach. Per-user app installation is certainly a useful (and uncontroversial) feature. And R does support automatically-built binary installers. > > Sure, I'm aware of the opensuse build service, have built third-party > packages for my projects, etc. It's a good attempt, but also has a lot > of problems, and when talking about scientific software it's totally > useless to me :-). First, I don't have root on our compute cluster. True, non-root install is a problem. Nothing *prevents* dpkg to run in non root environment in principle if the packages itself does not require it, but it is not really supported by the tools ATM. > Second, even if I did I'd be very leery about installing third-party > packages because there is no guarantee that the version numbering will > be consistent between the third-party repo and the real distro repo -- > suppose that the distro packages 0.1, then the third party packages > 0.2, then the distro packages 0.3, will upgrades be seamless? What if > the third party screws up the version numbering at some point? Debian > has "epochs" to deal with this, but third-parties can't use them and > maintain compatibility. Actually, at least with .deb-based distributions, this issue has a solution. As packages has their own version in addition to the upstream version, PPA-built packages have their own versions. https://help.launchpad.net/Packaging/PPA/BuildingASourcePackage Of course, this assumes a simple versioning scheme in the first place, instead of the cluster-fck that versioning has became within python packages (again, the scheme used in python is much more complicated than everyone else, and it seems that nobody has ever stopped and thought 5 minutes about the consequences, and whether this complexity was a good idea in the first place). > What if the person making the third party > packages is not an expert on these random distros that they don't even > use? I think simple rules/conventions + build farms would solve most issues. The problem is if you allow total flexibility as input, then automatic and simple solutions become impossible. Certainly, PPA and the build service provides for a much better experience than anything pypi has ever given to me. > Third, while we shouldn't advocate that people screw up backwards > compatibility, version skew is a real issue. If I need one version of > a package and my lab-mate needs another and we have submissions due > tomorrow, then filing bugs is a great idea but not a solution. Nothing prevents you from using virtualenv in that case (I may sound dismissive of those tools, but I am really not. I use them myselves. What I strongly react to is when those are pushed as the de-facto, standard method). > Fourth, > even if we had expert maintainers taking care of all these third-party > packages and all my concerns were answered, I couldn't convince our > sysadmin of that; he's the one who'd have to clean up if something > went wrong we don't have a big budget for overtime. I am not advocating using only packaged, binary installers. I am advocating using them as much as possible where it makes sense - on windows and mac os x in particular. Toydist also aims at making it easier to build, customize installs. Although not yet implemented, --user-like scheme would be quite simple to implement, because toydist installer internally uses autoconf-like directories description (of which --user is a special case). If you need sandboxed installs, customized installs, toydist will not prevent it. It is certainly my intention to make it possible to use virtualenv and co (you already can by building eggs, actually). I hope that by having our own "SciPi", we can actually have a more reliable approach. For example, the static dependency description + mandated metadata would make this much easier and more robust, as there would not be a need to run a setup.py to get the dependencies. If you look at hackageDB (http://hackage.haskell.org/packages/hackage.html), they have a very simple index structure, which makes it easy to download it entirely, and reuse this locally to avoid any internet access. > Let's be honest -- scientists, on the whole, suck at IT > infrastructure, and small individual packages are not going to be very > expertly put together. IMHO any real solution should take this into > account, keep them sandboxed from the rest of the system, and focus on > providing the most friendly and seamless sandbox possible. I agree packages will not always be well put together - but I don't see why this would be worse than the current situation. I also strongly disagree about the sandboxing as the solution of choice. For most users, having only one install of most packages is the typical use-case. Once you start sandboxing, you create artificial barriers between the sandboxes, and this becomes too complicated for most users IMHO. > > Maybe I was unclear -- proper build directory handling is nice, > Cython/Pyrex's distutils integration get it wrong (not their fault, > distutils is just impossible to do anything sensible with, as you've > said), and I've never found build directories hard to implement It is simple if you have a good infrastructure in place (node abstraction, etc...), but that infrastructure is hard to get right. > But what I'm really talking about is > having a "pre-build" step that integrates properly with the source and > binary packaging stages, and that's not something waf or scons have > any particular support for, AFAIK. Could you explain with a concrete example what a pre-build stage would look like ? I don't think I understand what you want, cheers, David From dagss at student.matnat.uio.no Mon Jan 4 03:48:43 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Mon, 4 Jan 2010 09:48:43 +0100 Subject: [SciPy-dev] [Numpy-discussion] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation In-Reply-To: <961fa2b41001031542t203e8ef6mee8f590095e54d18@mail.gmail.com> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> <5b8d13221001030423j96fdb72l832964f6c5df7f97@mail.gmail.com> <961fa2b41001031542t203e8ef6mee8f590095e54d18@mail.gmail.com> Message-ID: <89f11d872447395b3271dbea1fd5cd9d.squirrel@webmail.uio.no> Nathaniel Smith wrote: > On Sun, Jan 3, 2010 at 4:23 AM, David Cournapeau > wrote: >> Another way is to provide our own repository for a few major >> distributions, with automatically built packages. This is how most >> open source providers work. Miguel de Icaza explains this well: >> >> http://tirania.org/blog/archive/2007/Jan-26.html >> >> I hope we will be able to reuse much of the opensuse build service >> infrastructure. > > Sure, I'm aware of the opensuse build service, have built third-party > packages for my projects, etc. It's a good attempt, but also has a lot > of problems, and when talking about scientific software it's totally > useless to me :-). First, I don't have root on our compute cluster. I use Sage for this very reason, and others use EPD or FEMHub or Python(x,y) for the same reasons. Rolling this into the Python package distribution scheme seems backwards though, since a lot of binary packages that have nothing to do with Python are used as well -- the Python stuff is simply thin wrappers around what should ideally be located in /usr/lib or similar (but are nowadays compiled into the Python extension .so because of distribution problems). To solve the exact problem you (and me) have I think the best solution is to integrate the tools mentioned above with what David is planning (SciPI etc.). Or if that isn't good enough, find generic "userland package manager" that has nothing to do with Python (I'm sure a dozen half-finished ones must have been written but didn't look), finish it, and connect it to SciPI. What David does (I think) is seperate the concerns. This makes the task feasible, and also has the advantage of convenience for the people that *do* want to use Ubuntu, Red Hat or whatever to roll out scientific software on hundreds of clients. Dag Sverre From cournape at gmail.com Mon Jan 4 04:11:13 2010 From: cournape at gmail.com (David Cournapeau) Date: Mon, 4 Jan 2010 18:11:13 +0900 Subject: [SciPy-dev] [Numpy-discussion] [matplotlib-devel] Announcing toydist, improving distribution and packaging situation In-Reply-To: <89f11d872447395b3271dbea1fd5cd9d.squirrel@webmail.uio.no> References: <5b8d13220912280603p7221a264o875b0d5e74a5404@mail.gmail.com> <64ddb72c0912290527s1143efc7g3efe93936ca5de5@mail.gmail.com> <5b8d13220912290634u5902a6bag33ddb8a15a93406b@mail.gmail.com> <961fa2b41001030305mddd301fp416a2fe23fc11568@mail.gmail.com> <5b8d13221001030423j96fdb72l832964f6c5df7f97@mail.gmail.com> <961fa2b41001031542t203e8ef6mee8f590095e54d18@mail.gmail.com> <89f11d872447395b3271dbea1fd5cd9d.squirrel@webmail.uio.no> Message-ID: <5b8d13221001040111x29cd463al7f9559ff93655508@mail.gmail.com> On Mon, Jan 4, 2010 at 5:48 PM, Dag Sverre Seljebotn wrote: > > Rolling this into the Python package distribution scheme seems backwards > though, since a lot of binary packages that have nothing to do with Python > are used as well Yep, exactly. > > To solve the exact problem you (and me) have I think the best solution is > to integrate the tools mentioned above with what David is planning (SciPI > etc.). Or if that isn't good enough, find generic "userland package > manager" that has nothing to do with Python (I'm sure a dozen > half-finished ones must have been written but didn't look), finish it, and > connect it to SciPI. You have 0install, autopackage and klik, to cite the ones I know about. I wish people had looked at those before rolling toy solutions to complex problems. > > What David does (I think) is seperate the concerns. Exactly - you've describe this better than I did David From dwf at cs.toronto.edu Mon Jan 4 08:02:49 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Mon, 4 Jan 2010 08:02:49 -0500 Subject: [SciPy-dev] SciPy and vision In-Reply-To: <8db43c661001032132p3a6f1687h25f79d250fc0db53@mail.gmail.com> References: <8db43c661001032132p3a6f1687h25f79d250fc0db53@mail.gmail.com> Message-ID: <6C1F2E50-0542-44AE-BAB7-055090BE605F@cs.toronto.edu> On 4-Jan-10, at 12:32 AM, Keith Curtis wrote: > I think you guys should keep working on your image processing > support and build a better place for vision researchers to easily > jump into. Is this in progress? An image processing SciKit was started back in August and has made leaps and bounds since then. The idea with SciKits is that they may eventually become part of SciPy. http://stefanv.github.com/scikits.image/ There's also scipy.ndimage which does very simple stuff, and I think the CellProfiler guys may have some more sophisticated stuff (GPL, though they have expressed in the past a desire to relicense bits for upstream integration with SciPy). David From keithcu at gmail.com Mon Jan 4 17:33:55 2010 From: keithcu at gmail.com (Keith Curtis) Date: Mon, 4 Jan 2010 14:33:55 -0800 Subject: [SciPy-dev] SciPy and vision In-Reply-To: <6C1F2E50-0542-44AE-BAB7-055090BE605F@cs.toronto.edu> References: <8db43c661001032132p3a6f1687h25f79d250fc0db53@mail.gmail.com> <6C1F2E50-0542-44AE-BAB7-055090BE605F@cs.toronto.edu> Message-ID: <8db43c661001041433h718a3bd3vd527d6376f62f855@mail.gmail.com> Hi David; On Mon, Jan 4, 2010 at 5:02 AM, David Warde-Farley wrote: > On 4-Jan-10, at 12:32 AM, Keith Curtis wrote: > > > I think you guys should keep working on your image processing > > support and build a better place for vision researchers to easily > > jump into. Is this in progress? > > An image processing SciKit was started back in August and has made > leaps and bounds since then. The idea with SciKits is that they may > eventually become part of SciPy. > > http://stefanv.github.com/scikits.image/ > That is great to hear. As a commercial programmer, spending years learning the ins and out of a language or a component was not a problem, but for researchers and other dabblers with shorter attention spans, being able to jump in and be productive in hours rather than days is an important consideration. The layers of a computer vision stack are several and complicated, so let's get going. There must be 1000s of vision PhDs out there re-inventing the wheel. I hope the image processing SciKit is easy to install and use once SciPy is setup. The dependency on OpenCV is a bit of a hassle because then you have to deal with the OpenCV build issues, but I think as an incremental solution that is fine. But big chunks of OpenCV is code already implemented by SciPy so the native / natural port would be a lot smaller. Unlike JPEG converters or other things that make sense to just call from Python, vision code is not mature like this yet. In fact, I don't understand why there are so many slightly different algorithms which appear to do the same things. (Anyone know?) I wonder if it is like "sort" where there are many different ways to do it, but in reality just a handful are enough for an entire industry. Or maybe not, but today it does require a native Python stack to let people easily experiment. With OpenCV, you can use the Python wrappers, but if you want to change what the code under the wrappers is doing, your task just got a lot harder. I don't have any great ways on growing your community, but I hope outreach is also an important part of your efforts. -Keith -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Mon Jan 4 18:35:48 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Mon, 4 Jan 2010 15:35:48 -0800 Subject: [SciPy-dev] Does the doc wiki have a format for internal links... Message-ID: <45d1ab481001041535h10db5cc3y7ff2ec9b37d3b842@mail.gmail.com> ...similar to the external link format? For example, at a certain place I want the word "broadcasting" to be a link to numpy.doc.broadcasting - do I have to use the external format `broadcasting < http://docs.scipy.org/numpy/docs/numpy.doc.broadcasting/>`_ (which does work) or is there some variant of `numpy.doc.broadcasting` (which does correctly point there, but w/out the desired text replacement) I can use? (I tried the obvious `broadcasting numpy.doc.broadcasting` both with and without a trailing "_", but neither worked, (and beyond that, the number of variants to "just try" is excessive) and I looked at the rst quickref, but it wasn't helpful and in any event, linking w/in our Wiki is a specialization enabled by pydocweb, yes?) Thanks, DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Tue Jan 5 01:09:28 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 5 Jan 2010 01:09:28 -0500 Subject: [SciPy-dev] SciPy and vision In-Reply-To: <8db43c661001041433h718a3bd3vd527d6376f62f855@mail.gmail.com> References: <8db43c661001032132p3a6f1687h25f79d250fc0db53@mail.gmail.com> <6C1F2E50-0542-44AE-BAB7-055090BE605F@cs.toronto.edu> <8db43c661001041433h718a3bd3vd527d6376f62f855@mail.gmail.com> Message-ID: <127ADC3A-903E-4014-8DB0-F64613E683D5@cs.toronto.edu> On 4-Jan-10, at 5:33 PM, Keith Curtis wrote: > I hope the image processing SciKit is easy to install and use once > SciPy is setup. The dependency on OpenCV is a bit of a hassle > because then you have to deal with the OpenCV build issues, but I > think as an incremental solution that is fine. I've been a bit out of the loop w.r.t. the scikit but as I understand it the OpenCV dependency is strictly optional to get the scikits.image.opencv subpackage. And I imagine it is seen as a stopgap solution. > But big chunks of OpenCV is code already implemented by SciPy so the > native / natural port would be a lot smaller. Unlike JPEG converters > or other things that make sense to just call from Python, vision > code is not mature like this yet. > > In fact, I don't understand why there are so many slightly different > algorithms which appear to do the same things. (Anyone know?) I > wonder if it is like "sort" where there are many different ways to > do it, but in reality just a handful are enough for an entire > industry. Or maybe not, but today it does require a native Python > stack to let people easily experiment. With OpenCV, you can use the > Python wrappers, but if you want to change what the code under the > wrappers is doing, your task just got a lot harder. IMO Cython can help a lot here with the algorithmic heavy-lifting. It's nearly as easy as writing Python code and can be incrementally optimized i.e. only optimize the stuff that your profiling tells you is a bottleneck. David From ralf.gommers at googlemail.com Tue Jan 5 06:19:15 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 5 Jan 2010 19:19:15 +0800 Subject: [SciPy-dev] Does the doc wiki have a format for internal links... In-Reply-To: <45d1ab481001041535h10db5cc3y7ff2ec9b37d3b842@mail.gmail.com> References: <45d1ab481001041535h10db5cc3y7ff2ec9b37d3b842@mail.gmail.com> Message-ID: On Tue, Jan 5, 2010 at 7:35 AM, David Goldsmith wrote: > ...similar to the external link format? For example, at a certain place I > want the word "broadcasting" to be a link to numpy.doc.broadcasting - do I > have to use the external format `broadcasting < > http://docs.scipy.org/numpy/docs/numpy.doc.broadcasting/>`_ (which does > work) or is there some variant of `numpy.doc.broadcasting` (which does > correctly point there, but w/out the desired text replacement) I can use? Not that I know of. This can go on the wishlist, next to anchor links. Please don't use hardcoded links like the example you give above, that will break if the docs are restructured. For now I'd use something like: broadcasting (see `numpy.doc.broadcasting`) > (I tried the obvious `broadcasting numpy.doc.broadcasting` both with and > without a trailing "_", but neither worked, (and beyond that, the number of > variants to "just try" is excessive) and I looked at the rst quickref, but > it wasn't helpful and in any event, linking w/in our Wiki is a > specialization enabled by pydocweb, yes?) > Yes. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Tue Jan 5 11:13:56 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Tue, 5 Jan 2010 08:13:56 -0800 Subject: [SciPy-dev] Does the doc wiki have a format for internal links... In-Reply-To: References: <45d1ab481001041535h10db5cc3y7ff2ec9b37d3b842@mail.gmail.com> Message-ID: <45d1ab481001050813g867fa6ej357b2548a7894c0f@mail.gmail.com> On Tue, Jan 5, 2010 at 3:19 AM, Ralf Gommers wrote: > > > On Tue, Jan 5, 2010 at 7:35 AM, David Goldsmith wrote: > >> ...similar to the external link format? For example, at a certain place I >> want the word "broadcasting" to be a link to numpy.doc.broadcasting - do I >> have to use the external format `broadcasting < >> http://docs.scipy.org/numpy/docs/numpy.doc.broadcasting/>`_ (which does >> work) or is there some variant of `numpy.doc.broadcasting` (which does >> correctly point there, but w/out the desired text replacement) I can use? > > > Not that I know of. This can go on the wishlist, next to anchor links. > Please don't use hardcoded links like the example you give above, that will > break if the docs are restructured. For now I'd use something like: > broadcasting (see `numpy.doc.broadcasting`) > Gotchya, thanks, Ralf! > > >> (I tried the obvious `broadcasting numpy.doc.broadcasting` both with and >> without a trailing "_", but neither worked, (and beyond that, the number of >> variants to "just try" is excessive) and I looked at the rst quickref, but >> it wasn't helpful and in any event, linking w/in our Wiki is a >> specialization enabled by pydocweb, yes?) >> > > Yes. > > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilanschnell at gmail.com Tue Jan 5 11:38:04 2010 From: ilanschnell at gmail.com (Ilan Schnell) Date: Tue, 5 Jan 2010 10:38:04 -0600 Subject: [SciPy-dev] EPD 6.0 released Message-ID: <2fbe16301001050838r5424d5f4sff9897fbeacad4f3@mail.gmail.com> Hello, I am pleased to announce that EPD (Enthought Python Distribution) version 6.0 has been released. This is the first EPD release which is based on Python 2.6, and 64-bit Windows and MacOSX support is also available now. You may find more information about EPD, as well as download a 30 day free trial, here: http://www.enthought.com/products/epd.php You can find a complete list of updates in the change log: http://www.enthought.com/EPDChangelog.html About EPD --------- The Enthought Python Distribution (EPD) is a "kitchen-sink-included" distribution of the Python Programming Language, including over 75 additional tools and libraries. The EPD bundle includes NumPy, SciPy, IPython, 2D and 3D visualization, and many other tools. http://www.enthought.com/products/epdlibraries.php It is currently available as a single-click installer for Windows XP, Vista and 7, MacOS (10.5 and 10.6), RedHat 3, 4 and 5, as well as Solaris 10 (x86 and x86_64/amd64 on all platforms). EPD is free for academic use. An annual subscription including installation support is available for individual and commercial use. Additional support options, including customization, bug fixes and training classes are also available: http://www.enthought.com/products/support_level_table.php - Ilan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecarlson at eng.ua.edu Tue Jan 5 17:29:28 2010 From: ecarlson at eng.ua.edu (Eric Carlson) Date: Tue, 05 Jan 2010 16:29:28 -0600 Subject: [SciPy-dev] EPD 6.0 released In-Reply-To: <2fbe16301001050838r5424d5f4sff9897fbeacad4f3@mail.gmail.com> References: <2fbe16301001050838r5424d5f4sff9897fbeacad4f3@mail.gmail.com> Message-ID: Are the numpy and scipy versions 64-bit as well? From ilanschnell at gmail.com Tue Jan 5 20:01:00 2010 From: ilanschnell at gmail.com (Ilan Schnell) Date: Tue, 5 Jan 2010 19:01:00 -0600 Subject: [SciPy-dev] EPD 6.0 released In-Reply-To: References: <2fbe16301001050838r5424d5f4sff9897fbeacad4f3@mail.gmail.com> Message-ID: <2fbe16301001051701v619b2038o782065b1cf9b7904@mail.gmail.com> Hello Eric, all extension modules have to be 64-bit as well, if you want to import them from a 64-bit Python interpreter. So the answer is yes, NumPy, SciPy, matplotlib, etc. are all 64-bit. - Ilan On Tue, Jan 5, 2010 at 4:29 PM, Eric Carlson wrote: > Are the numpy and scipy versions 64-bit as well? > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Tue Jan 5 20:12:22 2010 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Tue, 5 Jan 2010 20:12:22 -0500 Subject: [SciPy-dev] EPD 6.0 released In-Reply-To: <2fbe16301001051701v619b2038o782065b1cf9b7904@mail.gmail.com> References: <2fbe16301001050838r5424d5f4sff9897fbeacad4f3@mail.gmail.com> <2fbe16301001051701v619b2038o782065b1cf9b7904@mail.gmail.com> Message-ID: On 5-Jan-10, at 8:01 PM, Ilan Schnell wrote: > Hello Eric, > > all extension modules have to be 64-bit as well, if you want > to import them from a 64-bit Python interpreter. So the > answer is yes, NumPy, SciPy, matplotlib, etc. are all 64-bit. Hi Ilan, I guess this means that in the 64-bit version wxPython isn't available? David From matthew.brett at gmail.com Tue Jan 5 21:00:42 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 6 Jan 2010 02:00:42 +0000 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader Message-ID: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> Hi, For those of you interested in the matlab reader in scipy - a bout of 'flu left me with only enough energy to try and work out the last undocumented bits of the matlab file format. I _believe_ that the current SVN version of the reader: import scipy.io as sio a = sio.loadmat('your_matfile_here.mat') should successfully read any non-HDF matlab-written matfile - and this is just to ask if those of you, with mat files lying around, could try out the latest SVN version, and let me know that I am wrong, most usefully with some way of me being able to reproduce the problem. I also have the hope / belief that the scipy reader should be performing at round about the same speed as matlab to read the same file - please let me know if this is way off too. Thanks a lot, Matthew From ilanschnell at gmail.com Tue Jan 5 21:39:53 2010 From: ilanschnell at gmail.com (Ilan Schnell) Date: Tue, 5 Jan 2010 20:39:53 -0600 Subject: [SciPy-dev] EPD 6.0 released In-Reply-To: References: <2fbe16301001050838r5424d5f4sff9897fbeacad4f3@mail.gmail.com> <2fbe16301001051701v619b2038o782065b1cf9b7904@mail.gmail.com> Message-ID: <2fbe16301001051839r7b6f32f6g95965f4f97c56e37@mail.gmail.com> Hello David, wxPython is not available for MacOSX 64-bit, since wxPython links against Carbon (which only a 32-bit build). See: http://www.enthought.com/products/roadmap.php But on all other 64-bit platform, i.e. Windows, RH5, RH3, Solaris, the EPD 6.0 includes wxPython and ETS. - Ilan On Tue, Jan 5, 2010 at 7:12 PM, David Warde-Farley wrote: > > On 5-Jan-10, at 8:01 PM, Ilan Schnell wrote: > > > Hello Eric, > > > > all extension modules have to be 64-bit as well, if you want > > to import them from a 64-bit Python interpreter. So the > > answer is yes, NumPy, SciPy, matplotlib, etc. are all 64-bit. > > Hi Ilan, > > I guess this means that in the 64-bit version wxPython isn't available? > > David > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecarlson at eng.ua.edu Tue Jan 5 22:02:24 2010 From: ecarlson at eng.ua.edu (Eric Carlson) Date: Tue, 05 Jan 2010 21:02:24 -0600 Subject: [SciPy-dev] EPD 6.0 released In-Reply-To: <2fbe16301001051701v619b2038o782065b1cf9b7904@mail.gmail.com> References: <2fbe16301001050838r5424d5f4sff9897fbeacad4f3@mail.gmail.com> <2fbe16301001051701v619b2038o782065b1cf9b7904@mail.gmail.com> Message-ID: Hello Ilan, Did the new 64-bit build use MKL or other optimized library? This issue seemed to be quite difficult according to various previous discussions related to building 64-bit Windows versions. Cheers, Eric > Hello Eric, > > all extension modules have to be 64-bit as well, if you want > to import them from a 64-bit Python interpreter. So the > answer is yes, NumPy, SciPy, matplotlib, etc. are all 64-bit. > > - Ilan > > > On Tue, Jan 5, 2010 at 4:29 PM, Eric Carlson > wrote: > > Are the numpy and scipy versions 64-bit as well? > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From cournape at gmail.com Tue Jan 5 22:07:39 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 6 Jan 2010 12:07:39 +0900 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> Message-ID: <5b8d13221001051907h437bb114gb90581f2437d997e@mail.gmail.com> Hi Matthew, On Wed, Jan 6, 2010 at 11:00 AM, Matthew Brett wrote: > Hi, > > For those of you interested in the matlab reader in scipy - a bout of > 'flu left me with only enough energy to try and work out the last > undocumented bits of the matlab file format. ?I _believe_ that the > current ?SVN version of the reader: > > import scipy.io as sio > a = sio.loadmat('your_matfile_here.mat') > > should successfully read any non-HDF matlab-written matfile - and this > is just to ask if those of you, with mat files lying around, could try > out the latest SVN version, and let me know that I am wrong, most > usefully with some way of me being able to reproduce the problem. That's great news. I have just added the numscons scripts for the new files. Funnily enough, I was just toying with sparse matrices, and needed to load some sparse matrices. Here is the problem I got with some sparse files: from scipy.io import loadmat loadmat("odepa400.mat") This raises an exception: Traceback (most recent call last): File "read.py", line 3, in loadmat("odepa400.mat") File "/home/david/local/lib/python2.6/site-packages/scipy/io/matlab/mio.py", line 134, in loadmat matfile_dict = MR.get_variables() File "/home/david/local/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", line 347, in get_variables res = self.read_var_array(hdr) File "/home/david/local/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", line 321, in read_var_array return self._matrix_reader.array_from_header(header, process) File "mio5_utils.pyx", line 584, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy/io/matlab/mio5_utils.c:4721) File "mio5_utils.pyx", line 629, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy/io/matlab/mio5_utils.c:4370) File "mio5_utils.pyx", line 835, in scipy.io.matlab.mio5_utils.VarReader5.read_struct (scipy/io/matlab/mio5_utils.c:6774) File "mio5_utils.pyx", line 582, in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix (scipy/io/matlab/mio5_utils.c:3999) TypeError: Cannot convert csc_matrix to numpy.ndarray (I needed to fix a missing import first) The matrix may be found here: http://www.cise.ufl.edu/research/sparse/mat/Bai/odepa400.mat (small file) David From ilanschnell at gmail.com Tue Jan 5 22:28:37 2010 From: ilanschnell at gmail.com (Ilan Schnell) Date: Tue, 5 Jan 2010 21:28:37 -0600 Subject: [SciPy-dev] EPD 6.0 released In-Reply-To: References: <2fbe16301001050838r5424d5f4sff9897fbeacad4f3@mail.gmail.com> <2fbe16301001051701v619b2038o782065b1cf9b7904@mail.gmail.com> Message-ID: <2fbe16301001051928i517a2a58pca578be420198556@mail.gmail.com> Hello Eric, yes, on Windows things are linked against MKL. In fact this is currently the only way to get numpy and scipy working on 64-bit Windows. On OSX, EPD links aginast optimized system libs, and on Linux against Atlas. - Ilan On Tue, Jan 5, 2010 at 9:02 PM, Eric Carlson wrote: > Hello Ilan, > Did the new 64-bit build use MKL or other optimized library? This issue > seemed to be quite difficult according to various previous discussions > related to building 64-bit Windows versions. > > Cheers, > Eric > > > > Hello Eric, > > > > all extension modules have to be 64-bit as well, if you want > > to import them from a 64-bit Python interpreter. So the > > answer is yes, NumPy, SciPy, matplotlib, etc. are all 64-bit. > > > > - Ilan > > > > > > On Tue, Jan 5, 2010 at 4:29 PM, Eric Carlson > > wrote: > > > > Are the numpy and scipy versions 64-bit as well? > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Wed Jan 6 00:45:37 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 6 Jan 2010 14:45:37 +0900 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: <5b8d13221001051907h437bb114gb90581f2437d997e@mail.gmail.com> References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> <5b8d13221001051907h437bb114gb90581f2437d997e@mail.gmail.com> Message-ID: <5b8d13221001052145p6cb6f75fp689c4613a39c799d@mail.gmail.com> On Wed, Jan 6, 2010 at 12:07 PM, David Cournapeau wrote: > Hi Matthew, > > On Wed, Jan 6, 2010 at 11:00 AM, Matthew Brett wrote: >> Hi, >> >> For those of you interested in the matlab reader in scipy - a bout of >> 'flu left me with only enough energy to try and work out the last >> undocumented bits of the matlab file format. ?I _believe_ that the >> current ?SVN version of the reader: >> >> import scipy.io as sio >> a = sio.loadmat('your_matfile_here.mat') >> >> should successfully read any non-HDF matlab-written matfile - and this >> is just to ask if those of you, with mat files lying around, could try >> out the latest SVN version, and let me know that I am wrong, most >> usefully with some way of me being able to reproduce the problem. > > That's great news. I have just added the numscons scripts for the new files. > > Funnily enough, I was just toying with sparse matrices, and needed to > load some sparse matrices. Here is the problem I got with some sparse > files: > > from scipy.io import loadmat > loadmat("odepa400.mat") > > This raises an exception: > > Traceback (most recent call last): > ?File "read.py", line 3, in > ? ?loadmat("odepa400.mat") > ?File "/home/david/local/lib/python2.6/site-packages/scipy/io/matlab/mio.py", > line 134, in loadmat > ? ?matfile_dict = MR.get_variables() > ?File "/home/david/local/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", > line 347, in get_variables > ? ?res = self.read_var_array(hdr) > ?File "/home/david/local/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", > line 321, in read_var_array > ? ?return self._matrix_reader.array_from_header(header, process) > ?File "mio5_utils.pyx", line 584, in > scipy.io.matlab.mio5_utils.VarReader5.array_from_header > (scipy/io/matlab/mio5_utils.c:4721) > ?File "mio5_utils.pyx", line 629, in > scipy.io.matlab.mio5_utils.VarReader5.array_from_header > (scipy/io/matlab/mio5_utils.c:4370) > ?File "mio5_utils.pyx", line 835, in > scipy.io.matlab.mio5_utils.VarReader5.read_struct > (scipy/io/matlab/mio5_utils.c:6774) > ?File "mio5_utils.pyx", line 582, in > scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix > (scipy/io/matlab/mio5_utils.c:3999) > TypeError: Cannot convert csc_matrix to numpy.ndarray > > (I needed to fix a missing import first) > > The matrix may be found here: > > http://www.cise.ufl.edu/research/sparse/mat/Bai/odepa400.mat (small file) Ok, I took a look at it, and the problem is caused by the sparse matrix to be in a struct. The fix is easy, and just consists in telling read_mi_matrix function not to return a cnp.ndarray (as array_from_header does not necessarily return a cnp.ndarray). I don't have matlab handy to add a unit test, but adding it should be easy, cheers, David From matthew.brett at gmail.com Wed Jan 6 05:25:03 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 6 Jan 2010 10:25:03 +0000 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: <5b8d13221001052145p6cb6f75fp689c4613a39c799d@mail.gmail.com> References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> <5b8d13221001051907h437bb114gb90581f2437d997e@mail.gmail.com> <5b8d13221001052145p6cb6f75fp689c4613a39c799d@mail.gmail.com> Message-ID: <1e2af89e1001060225l79223dbcg9388e2b903757227@mail.gmail.com> Hi, >>> import scipy.io as sio >>> a = sio.loadmat('your_matfile_here.mat') >>> >>> should successfully read any non-HDF matlab-written matfile - and this >>> is just to ask if those of you, with mat files lying around, could try >>> out the latest SVN version, and let me know that I am wrong, most >>> usefully with some way of me being able to reproduce the problem. >> >> That's great news. I have just added the numscons scripts for the new files. >> >> Funnily enough, I was just toying with sparse matrices, and needed to >> load some sparse matrices. Here is the problem I got with some sparse >> files: >> >> from scipy.io import loadmat >> loadmat("odepa400.mat") >> >> This raises an exception: Thank you - just what I was hoping for - and thank you for tracking it down as well - fixed in trunk, with test. See you, Matthew From jakevdp at gmail.com Wed Jan 6 17:48:22 2010 From: jakevdp at gmail.com (Jake VanderPlas) Date: Wed, 6 Jan 2010 14:48:22 -0800 Subject: [SciPy-dev] Ball Tree code updated (ticket 1048) Message-ID: <58df6dc21001061448l4dc9ef05x478437a7837d3c32@mail.gmail.com> Hello, I have had comments from a few people over the last two months on the Ball Tree code that I submitted (ticket 1048). I cleaned up the code a bit and posted the changes on the tracker. Any other comments would be appreciated! -Jake From lists at onerussian.com Wed Jan 6 22:56:53 2010 From: lists at onerussian.com (Yaroslav Halchenko) Date: Wed, 6 Jan 2010 22:56:53 -0500 Subject: [SciPy-dev] tiny issue in numpy online doc Message-ID: <20100107035653.GC18216@onerussian.com> probably this is a wrong list... but since at scipy.org: it seems that merge wasn't successful ;) I don't see those conflict lines in git repo though -- so may be it was built from some local svn checkout or smth like that: http://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html#mine now it looks like .... <<<<<<< .mineP: +---------------------------------------------------------------------------+ | Expected type | Default | |------------------------------------+--------------------------------------| | bool | False | |------------------------------------+--------------------------------------| | int | -1 | |------------------------------------+--------------------------------------| | float | np.nan | |------------------------------------+--------------------------------------| | complex | np.nan+0j | |------------------------------------+--------------------------------------| | string | '???' | +---------------------------------------------------------------------------+ >>>>>>> .r7751 We can get a finer control on the conversion of missing values with the filling_values optional argument. Like missing_values, this argument accepts different kind of values: .... -- .-. =------------------------------ /v\ ----------------------------= Keep in touch // \\ (yoh@|www.)onerussian.com Yaroslav Halchenko /( )\ ICQ#: 60653192 Linux User ^^-^^ [175555] From d.l.goldsmith at gmail.com Thu Jan 7 04:33:31 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Thu, 7 Jan 2010 01:33:31 -0800 Subject: [SciPy-dev] Requesting "expert eye" to look at c-api.generalized-ufuncs.rst Message-ID: <45d1ab481001070133n7f5a225dw6833c3e16ecec881@mail.gmail.com> Hi, folks.? I've been working on http://docs.scipy.org/numpy/docs/numpy-docs/reference/c-api.generalized-ufuncs.rst/, having gotten up to the "C-API for implementing Elementary Functions" section. As a result, I've added quite a few questions/comments to its Discussion section; I'm hoping someone with expertise in this area might find/make the time to look over what I've done and my questions and opine on the former and, hopefully, answer the latter. :-) Thanks, DG From gael.varoquaux at normalesup.org Thu Jan 7 07:19:12 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 7 Jan 2010 13:19:12 +0100 Subject: [SciPy-dev] tiny issue in numpy online doc In-Reply-To: <20100107035653.GC18216@onerussian.com> References: <20100107035653.GC18216@onerussian.com> Message-ID: <20100107121912.GB18133@phare.normalesup.org> On Wed, Jan 06, 2010 at 10:56:53PM -0500, Yaroslav Halchenko wrote: > http://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html#mine > now it looks like > .... > <<<<<<< .mineP: Hey Yaroslav, thanks for pointing this out. I'll fix that. FYI, you can very easily fix these problems by creating an account on docs.scipy.org and clicking on the 'edit page' on the left of the page you mention. Ga?l From gael.varoquaux at normalesup.org Thu Jan 7 07:21:04 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 7 Jan 2010 13:21:04 +0100 Subject: [SciPy-dev] tiny issue in numpy online doc In-Reply-To: <20100107121912.GB18133@phare.normalesup.org> References: <20100107035653.GC18216@onerussian.com> <20100107121912.GB18133@phare.normalesup.org> Message-ID: <20100107122104.GC18133@phare.normalesup.org> On Thu, Jan 07, 2010 at 01:19:12PM +0100, Gael Varoquaux wrote: > Hey Yaroslav, thanks for pointing this out. I'll fix that. Acutally, it's been fixed. It just needs to propagate to the main document. Ga?l From arokem at berkeley.edu Thu Jan 7 22:58:55 2010 From: arokem at berkeley.edu (Ariel Rokem) Date: Thu, 7 Jan 2010 19:58:55 -0800 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: <43958ee61001071926j73e5e932w8eef9e9b6f5034b4@mail.gmail.com> References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> <5b8d13221001051907h437bb114gb90581f2437d997e@mail.gmail.com> <5b8d13221001052145p6cb6f75fp689c4613a39c799d@mail.gmail.com> <1e2af89e1001060225l79223dbcg9388e2b903757227@mail.gmail.com> <43958ee61001071926j73e5e932w8eef9e9b6f5034b4@mail.gmail.com> Message-ID: <43958ee61001071958g38bed811l4437cb81df54a227@mail.gmail.com> Hi Matthew, further - if I set both squeeze_me and struct_as_record to True (does that make sense?), I get into some rather nasty situations, where I can't use the variables in the matfile: For example: In [40]: m = sio.loadmat('C-RA_Adapt.mat',squeeze_me=True,struct_as_record=True) In [41]: h = m['history'] In [42]: h1 = h[0] In [43]: q = h1['q'] Yes - I know it's a bit convoluted, but that's how it is. Now - I can't do anything with 'q': In [44]: q[0] --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /Users/arokem/Projects/SchizoSpread/OrientationTuningBehavior/DATA/ in () IndexError: 0-d arrays can't be indexed In [45]: q.dtype Out[45]: dtype('object') There's supposed to be an array called 'x' in there: In [46]: q['x'] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /Users/arokem/Projects/SchizoSpread/OrientationTuningBehavior/DATA/ in () ValueError: field named x not found. Am I doing something unreasonable? Cheers, Ariel On Thu, Jan 7, 2010 at 7:26 PM, Ariel Rokem wrote: > Hi Matthew, > > I updated my scipy from the repo and I am still getting the following > behavior (also reported a couple of days ago on scipy-user), when dealing > with the attached file: > > In [6]: m = > sio.loadmat('/Users/arokem/Projects/SchizoSpread/OrientationTuningBehavior/DATA/C-RA_Adapt.mat') > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio.py:99: > FutureWarning: Using struct_as_record default value (False) This will change > to True in future versions > return MatFile5Reader(byte_stream, **kwargs) > > So far, so good - this is to be expected. But, for obvious reasons, I want > to set the kwarg squeeze_me to True: > > In [7]: m = > sio.loadmat('/Users/arokem/Projects/SchizoSpread/OrientationTuningBehavior/DATA/C-RA_Adapt.mat',squeeze_me=True) > --------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > > /Users/arokem/ in () > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio.pyc > in loadmat(file_name, mdict, appendmat, **kwargs) > 132 ''' > 133 MR = mat_reader_factory(file_name, appendmat, **kwargs) > --> 134 matfile_dict = MR.get_variables() > 135 if mdict is not None: > 136 mdict.update(matfile_dict) > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio5.pyc > in get_variables(self, variable_names) > 411 continue > 412 try: > --> 413 res = self.read_var_array(hdr, process) > 414 except MatReadError, err: > 415 warnings.warn( > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio5.pyc > in read_var_array(self, header, process) > 380 `process`. > 381 ''' > --> 382 return self._matrix_reader.array_from_header(header, > process) > 383 > 384 def get_variables(self, variable_names=None): > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so > in scipy.io.matlab.mio5_utils.VarReader5.array_from_header > (scipy/io/matlab/mio5_utils.c:4718)() > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so > in scipy.io.matlab.mio5_utils.VarReader5.array_from_header > (scipy/io/matlab/mio5_utils.c:4367)() > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so > in scipy.io.matlab.mio5_utils.VarReader5.read_struct > (scipy/io/matlab/mio5_utils.c:6771)() > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so > in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix > (scipy/io/matlab/mio5_utils.c:3995)() > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so > in scipy.io.matlab.mio5_utils.VarReader5.array_from_header > (scipy/io/matlab/mio5_utils.c:4610)() > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio_utils.so > in scipy.io.matlab.mio_utils.process_element > (scipy/io/matlab/mio_utils.c:1272)() > > /Library/Frameworks/Python.framework/Versions/6.0.0/lib/python2.6/site-packages/scipy/io/matlab/mio_utils.so > in scipy.io.matlab.mio_utils.process_element > (scipy/io/matlab/mio_utils.c:1149)() > > TypeError: Cannot convert mat_struct to numpy.ndarray > > > This doesn't happen when I set struct_as_record to True as well, but then I > need to deal with all these record arrays that I get. And yes - it does seem > to work faster, when it works, though I haven't done any thorough testing of > that. > > Any ideas? > > Cheers, > > Ariel > > > > > -- Ariel Rokem Helen Wills Neuroscience Institute University of California, Berkeley http://argentum.ucbso.berkeley.edu/ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From skorpio11 at gmail.com Fri Jan 8 11:56:59 2010 From: skorpio11 at gmail.com (Leon Adams) Date: Fri, 8 Jan 2010 11:56:59 -0500 Subject: [SciPy-dev] bug in scipy.stats package Message-ID: <9e3090c1001080856t3ce4bb36ia6974ea409b9515a@mail.gmail.com> Hi all, I am currently doing some work involving distribution fitting and it looks like a came across a strange bug in the ppf function of the gamma distribution. Can some one else verify and if so suggest possible resolution. Below is a sample of my python script: import scipy.stats as st import numpy as np nbins = 12 binProb = np.zeros(nbins) + 1.0/nbins binSumProb = np.add.accumulate(binProb) print binSumProb print st.gamma.ppf(binSumProb,0.6379,loc=1.6,scale=39.555) With the following output: [ 0.08333333 0.16666667 0.25 0.33333333 0.41666667 0.5 0.58333333 0.66666667 0.75 0.83333333 0.91666667 1. ] [ 2.28713122 3.68072087 1.6 8.20183111 11.42062087 15.44977132 20.5371578 27.11627185 36.01838095 49.13633397 72.58240868 Inf] The problem occurs at an input value of 0.25, the ppf returns the location parameter (in this case 1.6) Thanks in advance -- Leon Adams -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Jan 8 12:40:59 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 8 Jan 2010 12:40:59 -0500 Subject: [SciPy-dev] bug in scipy.stats package In-Reply-To: <9e3090c1001080856t3ce4bb36ia6974ea409b9515a@mail.gmail.com> References: <9e3090c1001080856t3ce4bb36ia6974ea409b9515a@mail.gmail.com> Message-ID: <1cd32cbb1001080940i29a3bb85ofac109560abf8979@mail.gmail.com> On Fri, Jan 8, 2010 at 11:56 AM, Leon Adams wrote: > > Hi all, > I am currently doing some work involving distribution fitting and it looks > like a came across a strange bug in the ppf function of the gamma > distribution. Can some one else verify and if so suggest possible > resolution. Below is a sample of my python script: > > import scipy.stats as st > import numpy as np > nbins = 12 > binProb = np.zeros(nbins) + 1.0/nbins > binSumProb = np.add.accumulate(binProb) > print binSumProb > print st.gamma.ppf(binSumProb,0.6379,loc=1.6,scale=39.555) > > With the following output: > [ 0.08333333? 0.16666667? 0.25??????? 0.33333333? 0.41666667? 0.5 > ? 0.58333333? 0.66666667? 0.75??????? 0.83333333? 0.91666667? 1.??????? ] > [? 2.28713122?? 3.68072087?? 1.6????????? 8.20183111? 11.42062087 > ? 15.44977132? 20.5371578?? 27.11627185? 36.01838095? 49.13633397 > ? 72.58240868????????? Inf] My result look correct, see below. Which version of scipy are you using? I think this was http://projects.scipy.org/scipy/ticket/975 which would mean the fix is only in trunk Thanks for reporting it Josef >>> scipy.version.version '0.8.0.dev6156' >>> import scipy.stats as stats >>> import numpy as np >>> nbins = 12 >>> binProb = np.zeros(nbins) + 1.0/nbins >>> binSumProb = np.add.accumulate(binProb) >>> print binSumProb [ 0.08333333 0.16666667 0.25 0.33333333 0.41666667 0.5 0.58333333 0.66666667 0.75 0.83333333 0.91666667 1. ] >>> print stats.gamma.ppf(binSumProb,0.6379,loc=1.6,scale=39.555) [ 2.28713122 3.68072087 5.64774079 8.20183111 11.42062087 15.44977132 20.5371578 27.11627185 36.01838095 49.13633397 72.58240868 Inf] > > The problem occurs at an input value of 0.25, the ppf returns the location > parameter (in this case 1.6) > > Thanks in advance > -- > Leon Adams > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From matthew.brett at gmail.com Sat Jan 9 12:33:16 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 9 Jan 2010 17:33:16 +0000 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: <43958ee61001071958g38bed811l4437cb81df54a227@mail.gmail.com> References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> <5b8d13221001051907h437bb114gb90581f2437d997e@mail.gmail.com> <5b8d13221001052145p6cb6f75fp689c4613a39c799d@mail.gmail.com> <1e2af89e1001060225l79223dbcg9388e2b903757227@mail.gmail.com> <43958ee61001071926j73e5e932w8eef9e9b6f5034b4@mail.gmail.com> <43958ee61001071958g38bed811l4437cb81df54a227@mail.gmail.com> Message-ID: <1e2af89e1001090933q70ac17f3hb857edf3bf5f86e1@mail.gmail.com> Hi > further - if I set both squeeze_me and struct_as_record to True (does that > make sense?), I get into some rather nasty situations, where I can't use the > variables in the matfile: Sorry - I think I missed your first email, and can't see the attachment - can you send by private mail? I've fixed your first error I think - thanks for the report - please let me know if current SVN does work for mat_struct problem... See you, Matthew From efiring at hawaii.edu Sat Jan 9 14:49:21 2010 From: efiring at hawaii.edu (Eric Firing) Date: Sat, 09 Jan 2010 09:49:21 -1000 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: <1e2af89e1001090933q70ac17f3hb857edf3bf5f86e1@mail.gmail.com> References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> <5b8d13221001051907h437bb114gb90581f2437d997e@mail.gmail.com> <5b8d13221001052145p6cb6f75fp689c4613a39c799d@mail.gmail.com> <1e2af89e1001060225l79223dbcg9388e2b903757227@mail.gmail.com> <43958ee61001071926j73e5e932w8eef9e9b6f5034b4@mail.gmail.com> <43958ee61001071958g38bed811l4437cb81df54a227@mail.gmail.com> <1e2af89e1001090933q70ac17f3hb857edf3bf5f86e1@mail.gmail.com> Message-ID: <4B48DDC1.3040808@hawaii.edu> Matthew Brett wrote: > Hi > >> further - if I set both squeeze_me and struct_as_record to True (does that >> make sense?), I get into some rather nasty situations, where I can't use the >> variables in the matfile: > > Sorry - I think I missed your first email, and can't see the > attachment - can you send by private mail? > > I've fixed your first error I think - thanks for the report - please > let me know if current SVN does work for mat_struct problem... > > See you, > > Matthew Matthew, I updated from svn, built, and installed, and I am having a similar problem. A sample file is http://currents.soest.hawaii.edu/clivar/ladcp/I5S_2009/002.mat Here is what happens: In [16]:from scipy.io import loadmat In [17]:a = loadmat('002.mat', struct_as_record=True, squeeze_me=True) In [18]:a['p']['lon'] Out[18]:array(array(30.355230445182066), dtype=object) n [20]:scipy.version.version Out[20]:'0.8.0.dev6182' I think this is the same problem that Ariel noted. If I don't use squeeze_me but explicitly de-reference each of the two levels of arrays of shape (1,1), then I can get at the underlying variables. Eric > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From matthew.brett at gmail.com Sat Jan 9 22:39:46 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 10 Jan 2010 03:39:46 +0000 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: <43958ee61001071958g38bed811l4437cb81df54a227@mail.gmail.com> References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> <5b8d13221001051907h437bb114gb90581f2437d997e@mail.gmail.com> <5b8d13221001052145p6cb6f75fp689c4613a39c799d@mail.gmail.com> <1e2af89e1001060225l79223dbcg9388e2b903757227@mail.gmail.com> <43958ee61001071926j73e5e932w8eef9e9b6f5034b4@mail.gmail.com> <43958ee61001071958g38bed811l4437cb81df54a227@mail.gmail.com> Message-ID: <1e2af89e1001091939g7f469d56s37279ec6b68651c5@mail.gmail.com> Hi Ariel, > further - if I set both squeeze_me and struct_as_record to True (does that > make sense?), I get into some rather nasty situations, where I can't use the > variables in the matfile: > > For example: > > In [40]: m = > sio.loadmat('C-RA_Adapt.mat',squeeze_me=True,struct_as_record=True) > > In [41]: h = m['history'] > > In [42]: h1 = h[0] > > In [43]: q = h1['q'] ... > There's supposed to be an array called 'x' in there: > > In [46]: q['x'] > --------------------------------------------------------------------------- > ValueError??????????????????????????????? Traceback (most recent call last) Looking further into this - rather late at night - so I might be wrong - I think this is structural (forgive the pun) to structured arrays. We have to load the arrays as structured arrays with named fields and object dtype per field, because matlab allows any contents of the fields (fields need not have the same class of contents for each struct in the struct array). When we squeeze structured arrays, we get 0d structured arrays, with object fields. We can't now index these, but It is only indexing into the structured array, that dereferences the object: In [3]: st = np.zeros((1,), dtype=[('f0', 'O'), ('f1', 'O')]) In [4]: st[0]['f0'] Out[4]: 0 In [5]: st['f0'] Out[5]: array([0], dtype=object) Now, when we squeeze to 0d, we can't index any more: In [6]: sst = np.squeeze(st) In [7]: sst.shape Out[7]: () In [8]: sst['f0'] Out[8]: array(0, dtype=object) So we can't dereference, except explicitly like this: In [9]: sst['f0'].item() Out[9]: 0 I think. Well, more, I hope that someone who knows better than I do can think of a way round this... Best, Matthew From david at silveregg.co.jp Tue Jan 12 03:01:14 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Tue, 12 Jan 2010 17:01:14 +0900 Subject: [SciPy-dev] Is "Creative Commons: Attribution" an acceptable license for datasets included in scipy ? Message-ID: <4B4C2C4A.4020801@silveregg.co.jp> Hi, Everything is in the title - I have some new IO code for scipy.sparse I would like to include in scipy, and the tests include some dataset under this license. Should I remove them before inclusion ? cheers, David From gael.varoquaux at normalesup.org Tue Jan 12 03:07:31 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 12 Jan 2010 09:07:31 +0100 Subject: [SciPy-dev] Is "Creative Commons: Attribution" an acceptable license for datasets included in scipy ? In-Reply-To: <4B4C2C4A.4020801@silveregg.co.jp> References: <4B4C2C4A.4020801@silveregg.co.jp> Message-ID: <20100112080731.GA31896@phare.normalesup.org> On Tue, Jan 12, 2010 at 05:01:14PM +0900, David Cournapeau wrote: > Hi, > Everything is in the title - I have some new IO code for > scipy.sparse I would like to include in scipy, and the tests include > some dataset under this license. Should I remove them before inclusion ? I believe you must: the attribution clause is not free by OSI definition. In addition, I am pretty sure that none of the CC licenses are DFSG-free up to version 3.0 (don't ask me why). Ga?l From josef.pktd at gmail.com Tue Jan 12 10:12:34 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 12 Jan 2010 10:12:34 -0500 Subject: [SciPy-dev] Is "Creative Commons: Attribution" an acceptable license for datasets included in scipy ? In-Reply-To: <20100112080731.GA31896@phare.normalesup.org> References: <4B4C2C4A.4020801@silveregg.co.jp> <20100112080731.GA31896@phare.normalesup.org> Message-ID: <1cd32cbb1001120712i55f22516w866d8a049d0bbd30@mail.gmail.com> On Tue, Jan 12, 2010 at 3:07 AM, Gael Varoquaux wrote: > On Tue, Jan 12, 2010 at 05:01:14PM +0900, David Cournapeau wrote: >> Hi, > >> ? ? ?Everything is in the title - I have some new IO code for >> scipy.sparse I would like to include in scipy, and the tests include >> some dataset under this license. Should I remove them before inclusion ? > > I believe you must: the attribution clause is not free by OSI definition. > In addition, I am pretty sure that none of the CC licenses are DFSG-free > up to version 3.0 (don't ask me why). cc-by looks pretty innocent for bundling with a package, especially if it's only used for tests and for examples and not part of the main program (like icons or sound). Bundling doesn't look infectious and the user is free to make use of them or not. Attribution for bundling doesn't look more restrictive than including the copyright statement for BSD lisencend code. In statsmodels, we have several datasets, some public domain, some with authorization by the author, but sometimes it is not very clear whether a dataset is copyrightable or not. Although, I haven't seen any cc-by datasets in econometrics that I remember, and cc-by-nc looks clearly inconsistent. Are there some guidelines somewhere what would be consistent with this kind of bundling of datasets (tests and examples)? US government data is nice because it's all public domain. But there are a lot of efforts to make data more widely available, e.g. http://www.ckan.net http://opendefinition.org/licenses Sorry, if this expands too much on the original question, but this is bugging me for a while. Josef > > Ga?l > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Tue Jan 12 10:15:08 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Jan 2010 09:15:08 -0600 Subject: [SciPy-dev] Is "Creative Commons: Attribution" an acceptable license for datasets included in scipy ? In-Reply-To: <4B4C2C4A.4020801@silveregg.co.jp> References: <4B4C2C4A.4020801@silveregg.co.jp> Message-ID: <3d375d731001120715w54f85033s79c5c9673bdb9fc7@mail.gmail.com> On Tue, Jan 12, 2010 at 02:01, David Cournapeau wrote: > Hi, > > ? ? Everything is in the title - I have some new IO code for > scipy.sparse I would like to include in scipy, and the tests include > some dataset under this license. Should I remove them before inclusion ? Yes. Just generate some random data. There is no need to bother with licensed data for tests. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From skorpio11 at gmail.com Tue Jan 12 10:33:20 2010 From: skorpio11 at gmail.com (Leon Adams) Date: Tue, 12 Jan 2010 10:33:20 -0500 Subject: [SciPy-dev] bug in scipy.stats package In-Reply-To: <1cd32cbb1001080940i29a3bb85ofac109560abf8979@mail.gmail.com> References: <9e3090c1001080856t3ce4bb36ia6974ea409b9515a@mail.gmail.com> <1cd32cbb1001080940i29a3bb85ofac109560abf8979@mail.gmail.com> Message-ID: <9e3090c1001120733h6bb79f48me6da5caea2d5561d@mail.gmail.com> Thanks for the update. Currently I am running scipy 0.7.1 (from python XY distribution) on vista and version 0.7.0-2 on ubuntu with both exhibiting this behavior. I will look into upgrading. Thanks again On Fri, Jan 8, 2010 at 12:40 PM, wrote: > On Fri, Jan 8, 2010 at 11:56 AM, Leon Adams wrote: > > > > Hi all, > > I am currently doing some work involving distribution fitting and it > looks > > like a came across a strange bug in the ppf function of the gamma > > distribution. Can some one else verify and if so suggest possible > > resolution. Below is a sample of my python script: > > > > import scipy.stats as st > > import numpy as np > > nbins = 12 > > binProb = np.zeros(nbins) + 1.0/nbins > > binSumProb = np.add.accumulate(binProb) > > print binSumProb > > print st.gamma.ppf(binSumProb,0.6379,loc=1.6,scale=39.555) > > > > With the following output: > > [ 0.08333333 0.16666667 0.25 0.33333333 0.41666667 0.5 > > 0.58333333 0.66666667 0.75 0.83333333 0.91666667 1. ] > > [ 2.28713122 3.68072087 1.6 8.20183111 11.42062087 > > 15.44977132 20.5371578 27.11627185 36.01838095 49.13633397 > > 72.58240868 Inf] > > My result look correct, see below. Which version of scipy are you using? > I think this was http://projects.scipy.org/scipy/ticket/975 > which would mean the fix is only in trunk > > Thanks for reporting it > > Josef > > > >>> scipy.version.version > '0.8.0.dev6156' > > >>> import scipy.stats as stats > >>> import numpy as np > >>> nbins = 12 > >>> binProb = np.zeros(nbins) + 1.0/nbins > >>> binSumProb = np.add.accumulate(binProb) > >>> print binSumProb > [ 0.08333333 0.16666667 0.25 0.33333333 0.41666667 0.5 > 0.58333333 0.66666667 0.75 0.83333333 0.91666667 1. ] > >>> print stats.gamma.ppf(binSumProb,0.6379,loc=1.6,scale=39.555) > [ 2.28713122 3.68072087 5.64774079 8.20183111 11.42062087 > 15.44977132 20.5371578 27.11627185 36.01838095 49.13633397 > 72.58240868 Inf] > > > > > > The problem occurs at an input value of 0.25, the ppf returns the > location > > parameter (in this case 1.6) > > > > Thanks in advance > > -- > > Leon Adams > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -- Leon Adams -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsouthey at gmail.com Tue Jan 12 10:52:39 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 12 Jan 2010 09:52:39 -0600 Subject: [SciPy-dev] Is "Creative Commons: Attribution" an acceptable license for datasets included in scipy ? In-Reply-To: <1cd32cbb1001120712i55f22516w866d8a049d0bbd30@mail.gmail.com> References: <4B4C2C4A.4020801@silveregg.co.jp> <20100112080731.GA31896@phare.normalesup.org> <1cd32cbb1001120712i55f22516w866d8a049d0bbd30@mail.gmail.com> Message-ID: <4B4C9AC7.30306@gmail.com> On 01/12/2010 09:12 AM, josef.pktd at gmail.com wrote: > On Tue, Jan 12, 2010 at 3:07 AM, Gael Varoquaux > wrote: > >> On Tue, Jan 12, 2010 at 05:01:14PM +0900, David Cournapeau wrote: >> >>> Hi, >>> >> >>> Everything is in the title - I have some new IO code for >>> scipy.sparse I would like to include in scipy, and the tests include >>> some dataset under this license. Should I remove them before inclusion ? >>> >> I believe you must: the attribution clause is not free by OSI definition. >> In addition, I am pretty sure that none of the CC licenses are DFSG-free >> up to version 3.0 (don't ask me why). >> > cc-by looks pretty innocent for bundling with a package, especially if > it's only used for tests and for examples and not part of the main > program (like icons or sound). > > Bundling doesn't look infectious and the user is free to make use of > them or not. Attribution for bundling doesn't look more restrictive > than including the copyright statement for BSD lisencend code. > > In statsmodels, we have several datasets, some public domain, some > with authorization by the author, but sometimes it is not very clear > whether a dataset is copyrightable or not. > > Although, I haven't seen any cc-by datasets in econometrics that I > remember, and cc-by-nc looks clearly inconsistent. > > Are there some guidelines somewhere what would be consistent with this > kind of bundling of datasets (tests and examples)? > > US government data is nice because it's all public domain. > > But there are a lot of efforts to make data more widely available, e.g. > http://www.ckan.net > http://opendefinition.org/licenses > > Sorry, if this expands too much on the original question, but this is > bugging me for a while. > > Josef > > A little off topic, but search google for 'is data copyrightable'. For example: http://answers.google.com/answers/threadview/id/778789.html http://scienceblogs.com/commonknowledge/2009/01/data_copyrights_and_slogans_oh.php http://sciencecommons.org/resources/faq/databases#dbcopyright The important case that is referred to is Feist vs Rural: http://en.wikipedia.org/wiki/Feist_Publications_v._Rural_Telephone_Service The answer really depends on what country, what the data is ('facts' are not copyrightable), how (and when) it was collected and who collected it. I agree with Robert with regards to data with tests. As for examples, it depends on the point you want to make as I would suggest simulated data or well-known datasets that are most likely in public domain. Bruce From josef.pktd at gmail.com Tue Jan 12 11:34:25 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 12 Jan 2010 11:34:25 -0500 Subject: [SciPy-dev] Is "Creative Commons: Attribution" an acceptable license for datasets included in scipy ? In-Reply-To: <4B4C9AC7.30306@gmail.com> References: <4B4C2C4A.4020801@silveregg.co.jp> <20100112080731.GA31896@phare.normalesup.org> <1cd32cbb1001120712i55f22516w866d8a049d0bbd30@mail.gmail.com> <4B4C9AC7.30306@gmail.com> Message-ID: <1cd32cbb1001120834y7f563d0dnef39709fb37d1d04@mail.gmail.com> On Tue, Jan 12, 2010 at 10:52 AM, Bruce Southey wrote: > On 01/12/2010 09:12 AM, josef.pktd at gmail.com wrote: >> On Tue, Jan 12, 2010 at 3:07 AM, Gael Varoquaux >> ?wrote: >> >>> On Tue, Jan 12, 2010 at 05:01:14PM +0900, David Cournapeau wrote: >>> >>>> Hi, >>>> >>> >>>> ? ? ? Everything is in the title - I have some new IO code for >>>> scipy.sparse I would like to include in scipy, and the tests include >>>> some dataset under this license. Should I remove them before inclusion ? >>>> >>> I believe you must: the attribution clause is not free by OSI definition. >>> In addition, I am pretty sure that none of the CC licenses are DFSG-free >>> up to version 3.0 (don't ask me why). >>> >> cc-by looks pretty innocent for bundling with a package, especially if >> it's only used for tests and for examples and not part of the main >> program (like icons or sound). >> >> Bundling doesn't look infectious and the user is free to make use of >> them or not. Attribution for bundling doesn't look more restrictive >> than including the copyright statement for BSD lisencend code. >> >> In statsmodels, we have several datasets, ?some public domain, some >> with authorization by the author, but sometimes it is not very clear >> whether a dataset is copyrightable or not. >> >> Although, I haven't seen any cc-by datasets in econometrics that I >> remember, ?and cc-by-nc looks clearly inconsistent. >> >> Are there some guidelines somewhere what would be consistent with this >> kind of bundling of datasets (tests and examples)? >> >> US government data is nice because it's all public domain. >> >> But there are a lot of efforts to make data more widely available, e.g. >> http://www.ckan.net >> http://opendefinition.org/licenses >> >> Sorry, if this expands too much on the original question, but this is >> bugging me for a while. >> >> Josef >> >> > A little off topic, but search google for 'is data copyrightable'. > For example: > http://answers.google.com/answers/threadview/id/778789.html > http://scienceblogs.com/commonknowledge/2009/01/data_copyrights_and_slogans_oh.php > http://sciencecommons.org/resources/faq/databases#dbcopyright > > The important case that is referred to is Feist vs Rural: > http://en.wikipedia.org/wiki/Feist_Publications_v._Rural_Telephone_Service Thanks, for the links, especially the wikipedia article is pretty clear. It looks like datasets in R (based on published or publicly available information) is pretty much free game, since we only use the facts and not the R code. > > The answer really depends on what country, what the data is ('facts' are > not copyrightable), how (and when) it was collected and who ?collected it. Do we need a disclaimer, don't look at the data if you are in Australia? > > I agree with Robert with regards to data with tests. As for examples, it > depends on the point you want to make as I would suggest simulated data > or well-known datasets that are most likely in public domain. In statsmodels, we don't just want to have tests, we also want to verify that we can replicate known results and for illustration as part of the documentation. (maybe it's functional/acceptance tests versus unit tests) Thanks, Josef > > Bruce > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Tue Jan 12 11:38:18 2010 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Jan 2010 10:38:18 -0600 Subject: [SciPy-dev] Is "Creative Commons: Attribution" an acceptable license for datasets included in scipy ? In-Reply-To: <1cd32cbb1001120834y7f563d0dnef39709fb37d1d04@mail.gmail.com> References: <4B4C2C4A.4020801@silveregg.co.jp> <20100112080731.GA31896@phare.normalesup.org> <1cd32cbb1001120712i55f22516w866d8a049d0bbd30@mail.gmail.com> <4B4C9AC7.30306@gmail.com> <1cd32cbb1001120834y7f563d0dnef39709fb37d1d04@mail.gmail.com> Message-ID: <3d375d731001120838i306f7399i7c901624ecfbf03@mail.gmail.com> On Tue, Jan 12, 2010 at 10:34, wrote: > On Tue, Jan 12, 2010 at 10:52 AM, Bruce Southey wrote: >> The answer really depends on what country, what the data is ('facts' are >> not copyrightable), how (and when) it was collected and who ?collected it. > > Do we need a disclaimer, don't look at the data if you are in Australia? No, we would simply not distribute such data at all since those terms are inconsistent with our licensing policy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mtrumpis at berkeley.edu Tue Jan 12 15:30:15 2010 From: mtrumpis at berkeley.edu (M Trumpis) Date: Tue, 12 Jan 2010 12:30:15 -0800 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> Message-ID: Hi Matthew.. here's something I've been seeing which is confusing, concerning the dtypes. I'm sorry if anyone has pointed this out before related to record arrays. I haven't looked at it too closely myself yet In [13]: mri = sio.loadmat('mri.mat', struct_as_record=True)['mri'] In [14]: affine = mri[0,0]['transform'] In [15]: affine.dtype Out[15]: dtype('float64') In [16]: affine.dtype.isbuiltin Out[16]: 0 In [17]: affine.dtype.str Out[17]: ' wrote: > Hi, > > For those of you interested in the matlab reader in scipy - a bout of > 'flu left me with only enough energy to try and work out the last > undocumented bits of the matlab file format. ?I _believe_ that the > current ?SVN version of the reader: > > import scipy.io as sio > a = sio.loadmat('your_matfile_here.mat') > > should successfully read any non-HDF matlab-written matfile - and this > is just to ask if those of you, with mat files lying around, could try > out the latest SVN version, and let me know that I am wrong, most > usefully with some way of me being able to reproduce the problem. > > I also have the hope / belief that the scipy reader should be > performing at round about the same speed as matlab to read the same > file - please let me know if this is way off too. > > Thanks a lot, > > Matthew > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From david at silveregg.co.jp Tue Jan 12 23:33:40 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Wed, 13 Jan 2010 13:33:40 +0900 Subject: [SciPy-dev] Is "Creative Commons: Attribution" an acceptable license for datasets included in scipy ? In-Reply-To: <3d375d731001120715w54f85033s79c5c9673bdb9fc7@mail.gmail.com> References: <4B4C2C4A.4020801@silveregg.co.jp> <3d375d731001120715w54f85033s79c5c9673bdb9fc7@mail.gmail.com> Message-ID: <4B4D4D24.3030803@silveregg.co.jp> Robert Kern wrote: > On Tue, Jan 12, 2010 at 02:01, David Cournapeau wrote: >> Hi, >> >> Everything is in the title - I have some new IO code for >> scipy.sparse I would like to include in scipy, and the tests include >> some dataset under this license. Should I remove them before inclusion ? > > Yes. Just generate some random data. There is no need to bother with > licensed data for tests. Ok, thanks, David From njs at pobox.com Tue Jan 12 23:43:42 2010 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 12 Jan 2010 20:43:42 -0800 Subject: [SciPy-dev] build system for scikit Message-ID: <961fa2b41001122043p529d4b48g684454b33db8354@mail.gmail.com> So I've had some reports of problems with the scikits.sparse build system; I'm sure the general thrust will be familiar to many people here: -- Even on Linux, different distributions put the CHOLMOD headers in different places (argh wft people) -- Apparently numpy headers are not always placed in the Python include path? This seems broken to me, but okay... -- In principle, it might be nice to be able to build just some of the scikit, e.g. if the person has the right libraries installed for scikits.sparse.cholmod but not scikits.sparse.umfpack. I know that Python building is completely broken in many ways, but there must be some kind of standard solutions (or at least standard hacks!) for these. I did check numpy.distutils, but the docs I can find are all about how to piece together a build for a giant project with many pieces, which is not very relevant, and I couldn't figure out how to use it and Cython together (they both want to replace build_ext). Help! -- Nathaniel From cournape at gmail.com Wed Jan 13 01:40:51 2010 From: cournape at gmail.com (David Cournapeau) Date: Wed, 13 Jan 2010 15:40:51 +0900 Subject: [SciPy-dev] build system for scikit In-Reply-To: <961fa2b41001122043p529d4b48g684454b33db8354@mail.gmail.com> References: <961fa2b41001122043p529d4b48g684454b33db8354@mail.gmail.com> Message-ID: <5b8d13221001122240h8ac397rfecd5e0727705b41@mail.gmail.com> On Wed, Jan 13, 2010 at 1:43 PM, Nathaniel Smith wrote: > So I've had some reports of problems with the scikits.sparse build > system; I'm sure the general thrust will be familiar to many people > here: > ?-- Even on Linux, different distributions put the CHOLMOD headers in > different places (argh wft people) Yes, that's annoying, OTOH, dumping a whole bunch of headers in /usr/include is not scalable either. > ?-- Apparently numpy headers are not always placed in the Python > include path? This seems broken to me, but okay... This should not matter if you use the function to get numpy headers from numpy.distutils,misc_util (get_numpy_include_headers). > ?-- In principle, it might be nice to be able to build just some of > the scikit, e.g. if the person has the right libraries installed for > scikits.sparse.cholmod but not scikits.sparse.umfpack. This sounds like a good idea but it is almost always wrong in my opinion, because you introduce more possible different configurations. It increases more than decreases the distribution burden in my experience. > > I know that Python building is completely broken in many ways, but > there must be some kind of standard solutions (or at least standard > hacks!) for these. The only way that works if you care about being cross platform is doing the autotools way: check for features instead of versions. So for example, for CHOLMOD headers, you would test into different paths until you found the one you want by compiling some small code snippets. numpy.distutils can help you for that, and you can find a extensive (but complicated) example in numpy/core/setup.py. There is no general solution to your problem with python - autotools and cmake are the best changes ATM, but interfacing distutils with them is a big PITA, cheers, David From dagss at student.matnat.uio.no Wed Jan 13 04:47:50 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Wed, 13 Jan 2010 10:47:50 +0100 Subject: [SciPy-dev] build system for scikit In-Reply-To: <961fa2b41001122043p529d4b48g684454b33db8354@mail.gmail.com> References: <961fa2b41001122043p529d4b48g684454b33db8354@mail.gmail.com> Message-ID: <4B4D96C6.9020107@student.matnat.uio.no> Nathaniel Smith wrote: > So I've had some reports of problems with the scikits.sparse build > system; I'm sure the general thrust will be familiar to many people > here: > -- Even on Linux, different distributions put the CHOLMOD headers in > different places (argh wft people) > -- Apparently numpy headers are not always placed in the Python > include path? This seems broken to me, but okay... > -- In principle, it might be nice to be able to build just some of > the scikit, e.g. if the person has the right libraries installed for > scikits.sparse.cholmod but not scikits.sparse.umfpack. Most other libraries seem to "fix" this by statically linking all the .c files directly into the extension .so. That's the state we're currently in with Python building. (Please, plese don't do this.) It seems to me that the best solution for now is to make things injectable from a configuration file, so that one has to explicitly set CHOLMOD header directory location in a configuration file. This is no different I think from NumPy and SciPy, which will get their paths to MKL etc. (if used) from a configuration file. (The Cython files should probably be changed to use include "cholmod.h" rather than "suitesparse/cholmod.h", and then rely on being passed -I flags). That should give a baseline that's buildable with the Python tools with some manual configuration. Then other people would automate this as appropriate for their platform (e.g. to create a Linux package you'd hook it up with autotools as David suggested -- in my Sage package I am already autogenerating a setup.cfg for scikits.sparse with the Sage-specific NumPy include path in the package build process). Creating something new which is "automatic" and tries to guess the location of CHOLMOD across different platforms is in my opinion a waste of time and makes things worse. What if you want to build your own SuiteSparse? Etc.etc. Explicit is better than implicit! -- Dag Sverre From david at silveregg.co.jp Wed Jan 13 20:14:04 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Thu, 14 Jan 2010 10:14:04 +0900 Subject: [SciPy-dev] build system for scikit In-Reply-To: <4B4D96C6.9020107@student.matnat.uio.no> References: <961fa2b41001122043p529d4b48g684454b33db8354@mail.gmail.com> <4B4D96C6.9020107@student.matnat.uio.no> Message-ID: <4B4E6FDC.5010802@silveregg.co.jp> Dag Sverre Seljebotn wrote: > > Creating something new which is "automatic" and tries to guess the > location of CHOLMOD across different platforms is in my opinion a waste > of time and makes things worse. It depends on what you mean by automatic: autoconf/cmake configuration schemes are automatic, and they work very well for packages much more complex than python will ever need (e.g. KDE and Gnome). Ideally, there should be a way to do: configure --with-libfoo=/usr/local or configure --with-libfoo-includedir=/usr/local/include --with-libfoo-libdir=/usr/local/lib So that you can configure foo how you want, and inside your setup.py, there would be something like def check_foo(env): try: env.check_header("amd_foo.h") except CompileError: old_env = env.Append({"CPPPATH": ["$prefix/suitesparse"]}) try: env.check_header("amd_foo.h") finally: env = old_env With abstracted machinery to deal with the options given at the command line. But that's typically the kind of things made difficult with distutils commands scheme. David From matthew.brett at gmail.com Thu Jan 14 07:58:05 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 14 Jan 2010 12:58:05 +0000 Subject: [SciPy-dev] Please stress-test SVN version pf matlab reader In-Reply-To: References: <1e2af89e1001051800o2f3ffa1ai6ba858611f87f977@mail.gmail.com> Message-ID: <1e2af89e1001140458p649ddd2fif4b7c2e1ed44e6f2@mail.gmail.com> Hi Mike, On Tue, Jan 12, 2010 at 8:30 PM, M Trumpis wrote: > Hi Matthew.. here's something I've been seeing which is confusing, > concerning the dtypes. I'm sorry if anyone has pointed this out before > related to record arrays. I haven't looked at it too closely myself > yet > > In [13]: mri = sio.loadmat('mri.mat', struct_as_record=True)['mri'] > > In [14]: affine = mri[0,0]['transform'] > > In [15]: affine.dtype > Out[15]: dtype('float64') > > In [16]: affine.dtype.isbuiltin > Out[16]: 0 You mean, this last line? That you were expecting: In [16]: affine.dtype.isbuiltin Out[16]: 1 ? Yes - that's odd. It comes from this behaviour in numpy: In [20]: dt = np.dtype('f8') In [21]: dt.isbuiltin Out[21]: 1 In [22]: ndt = dt.newbyteorder('<') In [23]: ndt.isbuiltin Out[23]: 0 and that looks a bit six-legged to me... I'll post on the numpy list for enlightenment. Thanks for sharing... Matthew From arokem at berkeley.edu Thu Jan 14 22:24:39 2010 From: arokem at berkeley.edu (Ariel Rokem) Date: Thu, 14 Jan 2010 19:24:39 -0800 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? Message-ID: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> Hi everyone, I have been trying to use scipy.signal.hilbert and I got the following puzzling result: In [22]: import scipy In [23]: scipy.__version__ #I have r6182 Out[23]: '0.8.0.dev' In [24]: import scipy.signal as signal In [25]: a = np.random.rand(100,100) In [26]: np.abs(signal.hilbert(a[-1])) Out[26]: array([ 0.57567681, 0.25918624, 0.50207097, 0.51834052, 0.24293389, 0.5779464 , 0.6515758 , 0.89973173, 1.00275444, 0.37352935, 0.62332717, 0.93599749, 0.40651376, 0.65088756, 0.8332281 , 0.5770101 , 0.9288512 , 0.46671906, 0.41536055, 0.71418068, 0.81250913, 0.07652627, 0.72939072, 0.26755626, 0.36396146, 0.59725999, 1.02264694, 0.41227986, 0.98122853, 0.71906675, 0.58582611, 0.77288117, 0.3217015 , 0.65261394, 0.11947618, 0.75632703, 0.43432935, 0.52182485, 1.0277177 , 1.01104986, 0.3023265 , 0.6024772 , 0.69257548, 0.55418735, 0.46259052, 0.25832231, 0.38278355, 0.45508532, 0.26215872, 0.34207947, 0.80704729, 0.80755477, 0.95317178, 0.97458885, 0.58762294, 0.82540618, 0.62005585, 0.82494646, 1.04221293, 0.14983027, 1.01571579, 0.99381328, 0.24158714, 0.84256569, 0.53418924, 0.24067628, 0.90489883, 1.02217747, 0.34988034, 0.5310065 , 0.48135002, 1.03020269, 0.6013679 , 0.46062485, 0.3918485 , 0.21554545, 0.31704519, 0.04868385, 0.1787766 , 0.37361852, 0.21977912, 0.7649772 , 0.77867281, 0.37684278, 0.64432638, 0.77494951, 0.87106309, 0.77611484, 0.52666801, 0.88683667, 0.69164967, 0.98618191, 0.84811375, 0.35934198, 0.32650478, 0.1752677 , 0.60574454, 0.5109132 , 0.52332287, 0.99777805]) In [27]: np.abs(signal.hilbert(a))[-1] Out[27]: array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) ---------------------------------------------------------------------- I was expecting both of these to have the same values - am I missing something? I think that the following solves this issue, but now I am not that sure whether it does what it is supposed to do and I couldn't find a test for this in test_signaltools.py. Does anyone know of a good test-case for the analytic signal, that I could create for this? Index: scipy/signal/signaltools.py =================================================================== --- scipy/signal/signaltools.py (revision 6182) +++ scipy/signal/signaltools.py (working copy) @@ -1062,13 +1062,13 @@ """ x = asarray(x) if N is None: - N = len(x) + N = x.shape[-1] if N <=0: raise ValueError, "N must be positive." if iscomplexobj(x): print "Warning: imaginary part of x ignored." x = real(x) - Xf = fft(x,N,axis=0) + Xf = fft(x,N,axis=-1) h = zeros(N) if N % 2 == 0: h[0] = h[N/2] = 1 @@ -1078,7 +1078,7 @@ h[1:(N+1)/2] = 2 if len(x.shape) > 1: - h = h[:, newaxis] + h = h[newaxis,:] x = ifft(Xf*h) return x Cheers, Ariel -- Ariel Rokem Helen Wills Neuroscience Institute University of California, Berkeley http://argentum.ucbso.berkeley.edu/ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Thu Jan 14 22:54:12 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 14 Jan 2010 22:54:12 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> Message-ID: <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem wrote: > Hi everyone, > > I have been trying to use scipy.signal.hilbert and I got the following > puzzling result: > > In [22]: import scipy > > In [23]: scipy.__version__ #I have r6182 > Out[23]: '0.8.0.dev' > > In [24]: import scipy.signal as signal > > In [25]: a = np.random.rand(100,100) > > In [26]: np.abs(signal.hilbert(a[-1])) > Out[26]: > array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052,? 0.24293389, > ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444,? 0.37352935, > ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756,? 0.8332281 , > ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055,? 0.71418068, > ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626,? 0.36396146, > ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853,? 0.71906675, > ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394,? 0.11947618, > ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 ,? 1.01104986, > ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735,? 0.46259052, > ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872,? 0.34207947, > ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885,? 0.58762294, > ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293,? 0.14983027, > ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569,? 0.53418924, > ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034,? 0.5310065 , > ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485,? 0.3918485 , > ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 ,? 0.37361852, > ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278,? 0.64432638, > ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801,? 0.88683667, > ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198,? 0.32650478, > ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287,? 0.99777805]) > > In [27]: np.abs(signal.hilbert(a))[-1] > Out[27]: > array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., > ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., > ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., > ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., > ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., > ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., > ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., > ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) > > > ---------------------------------------------------------------------- > > I was expecting both of these to have the same values - am I missing > something? > > I think that the following solves this issue, but now I am not that sure > whether it does what it is supposed to do and I couldn't find a test for > this in test_signaltools.py. Does anyone know of a good test-case for the > analytic signal, that I could create for this? > > Index: scipy/signal/signaltools.py > =================================================================== > --- scipy/signal/signaltools.py??? (revision 6182) > +++ scipy/signal/signaltools.py??? (working copy) > @@ -1062,13 +1062,13 @@ > ???? """ > ???? x = asarray(x) > ???? if N is None: > -??????? N = len(x) > +??????? N = x.shape[-1] > ???? if N <=0: > ???????? raise ValueError, "N must be positive." > ???? if iscomplexobj(x): > ???????? print "Warning: imaginary part of x ignored." > ???????? x = real(x) > -??? Xf = fft(x,N,axis=0) > +??? Xf = fft(x,N,axis=-1) > ???? h = zeros(N) > ???? if N % 2 == 0: > ???????? h[0] = h[N/2] = 1 > @@ -1078,7 +1078,7 @@ > ???????? h[1:(N+1)/2] = 2 > > ???? if len(x.shape) > 1: > -??????? h = h[:, newaxis] > +??????? h = h[newaxis,:] > ???? x = ifft(Xf*h) > ???? return x I think your change would break the currently advertised behavior, axis=0 (The transformation is done along the first axis) but fft and ifft have default axis=-1 fft in hilbert uses axis=0 as in docstring but ifft uses default axis=-1 so, I would think the fix should be x = ifft(Xf*h, axis=0) But as it currently looks like the axis argument doesn't work anyway, there wouldn't be much breakage if the axis would be included as an argument and default to -1. However, I don't know what the "standard" for scipy.signal is for default axis. Josef > > > Cheers, > > Ariel > -- > Ariel Rokem > Helen Wills Neuroscience Institute > University of California, Berkeley > http://argentum.ucbso.berkeley.edu/ariel > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From josef.pktd at gmail.com Thu Jan 14 23:02:38 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 14 Jan 2010 23:02:38 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> Message-ID: <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> On Thu, Jan 14, 2010 at 10:54 PM, wrote: > On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem wrote: >> Hi everyone, >> >> I have been trying to use scipy.signal.hilbert and I got the following >> puzzling result: >> >> In [22]: import scipy >> >> In [23]: scipy.__version__ #I have r6182 >> Out[23]: '0.8.0.dev' >> >> In [24]: import scipy.signal as signal >> >> In [25]: a = np.random.rand(100,100) >> >> In [26]: np.abs(signal.hilbert(a[-1])) >> Out[26]: >> array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052,? 0.24293389, >> ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444,? 0.37352935, >> ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756,? 0.8332281 , >> ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055,? 0.71418068, >> ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626,? 0.36396146, >> ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853,? 0.71906675, >> ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394,? 0.11947618, >> ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 ,? 1.01104986, >> ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735,? 0.46259052, >> ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872,? 0.34207947, >> ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885,? 0.58762294, >> ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293,? 0.14983027, >> ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569,? 0.53418924, >> ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034,? 0.5310065 , >> ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485,? 0.3918485 , >> ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 ,? 0.37361852, >> ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278,? 0.64432638, >> ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801,? 0.88683667, >> ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198,? 0.32650478, >> ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287,? 0.99777805]) >> >> In [27]: np.abs(signal.hilbert(a))[-1] >> Out[27]: >> array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) >> >> >> ---------------------------------------------------------------------- >> >> I was expecting both of these to have the same values - am I missing >> something? >> >> I think that the following solves this issue, but now I am not that sure >> whether it does what it is supposed to do and I couldn't find a test for >> this in test_signaltools.py. Does anyone know of a good test-case for the >> analytic signal, that I could create for this? >> >> Index: scipy/signal/signaltools.py >> =================================================================== >> --- scipy/signal/signaltools.py??? (revision 6182) >> +++ scipy/signal/signaltools.py??? (working copy) >> @@ -1062,13 +1062,13 @@ >> ???? """ >> ???? x = asarray(x) >> ???? if N is None: >> -??????? N = len(x) >> +??????? N = x.shape[-1] >> ???? if N <=0: >> ???????? raise ValueError, "N must be positive." >> ???? if iscomplexobj(x): >> ???????? print "Warning: imaginary part of x ignored." >> ???????? x = real(x) >> -??? Xf = fft(x,N,axis=0) >> +??? Xf = fft(x,N,axis=-1) >> ???? h = zeros(N) >> ???? if N % 2 == 0: >> ???????? h[0] = h[N/2] = 1 >> @@ -1078,7 +1078,7 @@ >> ???????? h[1:(N+1)/2] = 2 >> >> ???? if len(x.shape) > 1: >> -??????? h = h[:, newaxis] >> +??????? h = h[newaxis,:] >> ???? x = ifft(Xf*h) >> ???? return x > > I think your change would break the currently advertised behavior, > axis=0 (The transformation is done along the first axis) > > but fft and ifft have default axis=-1 > > fft in hilbert uses axis=0 as in docstring > but ifft uses default axis=-1 > > so, I would think the fix should be ?x = ifft(Xf*h, axis=0) > > But as it currently looks like the axis argument doesn't work anyway, > there wouldn't be much breakage if the axis would be included as an > argument and default to -1. > However, I don't know what the "standard" for scipy.signal is for default axis. > > Josef after adding axis to ifft: >>> print hilbert(aa).real [[ 0.82584851 0.15215031 0.14767381] [ 0.95021675 0.16803995 0.43562964] [ 0.13033881 0.06198952 0.70729614] [ 0.69409563 0.06962778 0.72552601] [ 0.34297612 0.50579001 0.86463304] [ 0.28355261 0.21626889 0.85165102] [ 0.49481491 0.21290645 0.71416814] [ 0.2645843 0.95783096 0.77514016] [ 0.38735994 0.14274852 0.56344808] [ 0.88084015 0.39879649 0.64949951]] >>> print hilbert(aa[:,:1]).real [[ 0.82584851] [ 0.95021675] [ 0.13033881] [ 0.69409563] [ 0.34297612] [ 0.28355261] [ 0.49481491] [ 0.2645843 ] [ 0.38735994] [ 0.88084015]] but it treats a 1d array as row vector and transforms along zero axis of length 1, and not along the length of the array. so another fix to handle 1d arrays correctly should be done >>> print hilbert(aa[:,1]).real [ 0.15215031 0.16803995 0.06198952 0.06962778 0.50579001 0.21626889 0.21290645 0.95783096 0.14274852 0.39879649] >>> aa[:,1] array([ 0.15215031, 0.16803995, 0.06198952, 0.06962778, 0.50579001, 0.21626889, 0.21290645, 0.95783096, 0.14274852, 0.39879649]) >>> Josef > >> >> >> Cheers, >> >> Ariel >> -- >> Ariel Rokem >> Helen Wills Neuroscience Institute >> University of California, Berkeley >> http://argentum.ucbso.berkeley.edu/ariel >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > From josef.pktd at gmail.com Thu Jan 14 23:27:34 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 14 Jan 2010 23:27:34 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> Message-ID: <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> On Thu, Jan 14, 2010 at 11:02 PM, wrote: > On Thu, Jan 14, 2010 at 10:54 PM, ? wrote: >> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem wrote: >>> Hi everyone, >>> >>> I have been trying to use scipy.signal.hilbert and I got the following >>> puzzling result: >>> >>> In [22]: import scipy >>> >>> In [23]: scipy.__version__ #I have r6182 >>> Out[23]: '0.8.0.dev' >>> >>> In [24]: import scipy.signal as signal >>> >>> In [25]: a = np.random.rand(100,100) >>> >>> In [26]: np.abs(signal.hilbert(a[-1])) >>> Out[26]: >>> array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052,? 0.24293389, >>> ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444,? 0.37352935, >>> ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756,? 0.8332281 , >>> ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055,? 0.71418068, >>> ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626,? 0.36396146, >>> ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853,? 0.71906675, >>> ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394,? 0.11947618, >>> ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 ,? 1.01104986, >>> ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735,? 0.46259052, >>> ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872,? 0.34207947, >>> ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885,? 0.58762294, >>> ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293,? 0.14983027, >>> ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569,? 0.53418924, >>> ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034,? 0.5310065 , >>> ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485,? 0.3918485 , >>> ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 ,? 0.37361852, >>> ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278,? 0.64432638, >>> ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801,? 0.88683667, >>> ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198,? 0.32650478, >>> ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287,? 0.99777805]) >>> >>> In [27]: np.abs(signal.hilbert(a))[-1] >>> Out[27]: >>> array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) >>> >>> >>> ---------------------------------------------------------------------- >>> >>> I was expecting both of these to have the same values - am I missing >>> something? >>> >>> I think that the following solves this issue, but now I am not that sure >>> whether it does what it is supposed to do and I couldn't find a test for >>> this in test_signaltools.py. Does anyone know of a good test-case for the >>> analytic signal, that I could create for this? >>> >>> Index: scipy/signal/signaltools.py >>> =================================================================== >>> --- scipy/signal/signaltools.py??? (revision 6182) >>> +++ scipy/signal/signaltools.py??? (working copy) >>> @@ -1062,13 +1062,13 @@ >>> ???? """ >>> ???? x = asarray(x) >>> ???? if N is None: >>> -??????? N = len(x) >>> +??????? N = x.shape[-1] >>> ???? if N <=0: >>> ???????? raise ValueError, "N must be positive." >>> ???? if iscomplexobj(x): >>> ???????? print "Warning: imaginary part of x ignored." >>> ???????? x = real(x) >>> -??? Xf = fft(x,N,axis=0) >>> +??? Xf = fft(x,N,axis=-1) >>> ???? h = zeros(N) >>> ???? if N % 2 == 0: >>> ???????? h[0] = h[N/2] = 1 >>> @@ -1078,7 +1078,7 @@ >>> ???????? h[1:(N+1)/2] = 2 >>> >>> ???? if len(x.shape) > 1: >>> -??????? h = h[:, newaxis] >>> +??????? h = h[newaxis,:] >>> ???? x = ifft(Xf*h) >>> ???? return x >> >> I think your change would break the currently advertised behavior, >> axis=0 (The transformation is done along the first axis) >> >> but fft and ifft have default axis=-1 >> >> fft in hilbert uses axis=0 as in docstring >> but ifft uses default axis=-1 >> >> so, I would think the fix should be ?x = ifft(Xf*h, axis=0) >> >> But as it currently looks like the axis argument doesn't work anyway, >> there wouldn't be much breakage if the axis would be included as an >> argument and default to -1. >> However, I don't know what the "standard" for scipy.signal is for default axis. >> >> Josef > > after adding axis to ifft: >>>> print hilbert(aa).real > [[ 0.82584851 ?0.15215031 ?0.14767381] > ?[ 0.95021675 ?0.16803995 ?0.43562964] > ?[ 0.13033881 ?0.06198952 ?0.70729614] > ?[ 0.69409563 ?0.06962778 ?0.72552601] > ?[ 0.34297612 ?0.50579001 ?0.86463304] > ?[ 0.28355261 ?0.21626889 ?0.85165102] > ?[ 0.49481491 ?0.21290645 ?0.71416814] > ?[ 0.2645843 ? 0.95783096 ?0.77514016] > ?[ 0.38735994 ?0.14274852 ?0.56344808] > ?[ 0.88084015 ?0.39879649 ?0.64949951]] >>>> print hilbert(aa[:,:1]).real > [[ 0.82584851] > ?[ 0.95021675] > ?[ 0.13033881] > ?[ 0.69409563] > ?[ 0.34297612] > ?[ 0.28355261] > ?[ 0.49481491] > ?[ 0.2645843 ] > ?[ 0.38735994] > ?[ 0.88084015]] > > but it treats a 1d array as row vector and transforms along zero axis > of length 1, and not along the length of the array. > so another fix to handle 1d arrays correctly should be done > >>>> print hilbert(aa[:,1]).real > [ 0.15215031 ?0.16803995 ?0.06198952 ?0.06962778 ?0.50579001 ?0.21626889 > ?0.21290645 ?0.95783096 ?0.14274852 ?0.39879649] >>>> aa[:,1] > array([ 0.15215031, ?0.16803995, ?0.06198952, ?0.06962778, ?0.50579001, > ? ? ? ?0.21626889, ?0.21290645, ?0.95783096, ?0.14274852, ?0.39879649]) >>>> there's something wrong with my example, the real part is the same which confused me it works correctly with 1d >>> np.abs(hilbert(aa[:,0])) array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, 0.49170197, 0.31227114, 0.49505637, 0.26461488, 0.61385196, 0.90716272]) >>> np.abs(hilbert(aa[:,:1])).T array([[ 0.83251128, 1.04487091, 0.27702083, 0.69901499, 0.49170197, 0.31227114, 0.49505637, 0.26461488, 0.61385196, 0.90716272]]) >>> np.abs(hilbert(aa))[:,0] array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, 0.49170197, 0.31227114, 0.49505637, 0.26461488, 0.61385196, 0.90716272]) besides reading the docstring, I don't know what hilbert is supposed to be good for. Josef > Josef > > >> >>> >>> >>> Cheers, >>> >>> Ariel >>> -- >>> Ariel Rokem >>> Helen Wills Neuroscience Institute >>> University of California, Berkeley >>> http://argentum.ucbso.berkeley.edu/ariel >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> > From josef.pktd at gmail.com Thu Jan 14 23:53:19 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 14 Jan 2010 23:53:19 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> Message-ID: <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> On Thu, Jan 14, 2010 at 11:27 PM, wrote: > On Thu, Jan 14, 2010 at 11:02 PM, ? wrote: >> On Thu, Jan 14, 2010 at 10:54 PM, ? wrote: >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem wrote: >>>> Hi everyone, >>>> >>>> I have been trying to use scipy.signal.hilbert and I got the following >>>> puzzling result: >>>> >>>> In [22]: import scipy >>>> >>>> In [23]: scipy.__version__ #I have r6182 >>>> Out[23]: '0.8.0.dev' >>>> >>>> In [24]: import scipy.signal as signal >>>> >>>> In [25]: a = np.random.rand(100,100) >>>> >>>> In [26]: np.abs(signal.hilbert(a[-1])) >>>> Out[26]: >>>> array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052,? 0.24293389, >>>> ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444,? 0.37352935, >>>> ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756,? 0.8332281 , >>>> ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055,? 0.71418068, >>>> ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626,? 0.36396146, >>>> ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853,? 0.71906675, >>>> ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394,? 0.11947618, >>>> ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 ,? 1.01104986, >>>> ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735,? 0.46259052, >>>> ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872,? 0.34207947, >>>> ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885,? 0.58762294, >>>> ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293,? 0.14983027, >>>> ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569,? 0.53418924, >>>> ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034,? 0.5310065 , >>>> ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485,? 0.3918485 , >>>> ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 ,? 0.37361852, >>>> ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278,? 0.64432638, >>>> ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801,? 0.88683667, >>>> ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198,? 0.32650478, >>>> ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287,? 0.99777805]) >>>> >>>> In [27]: np.abs(signal.hilbert(a))[-1] >>>> Out[27]: >>>> array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) >>>> >>>> >>>> ---------------------------------------------------------------------- >>>> >>>> I was expecting both of these to have the same values - am I missing >>>> something? >>>> >>>> I think that the following solves this issue, but now I am not that sure >>>> whether it does what it is supposed to do and I couldn't find a test for >>>> this in test_signaltools.py. Does anyone know of a good test-case for the >>>> analytic signal, that I could create for this? >>>> >>>> Index: scipy/signal/signaltools.py >>>> =================================================================== >>>> --- scipy/signal/signaltools.py??? (revision 6182) >>>> +++ scipy/signal/signaltools.py??? (working copy) >>>> @@ -1062,13 +1062,13 @@ >>>> ???? """ >>>> ???? x = asarray(x) >>>> ???? if N is None: >>>> -??????? N = len(x) >>>> +??????? N = x.shape[-1] >>>> ???? if N <=0: >>>> ???????? raise ValueError, "N must be positive." >>>> ???? if iscomplexobj(x): >>>> ???????? print "Warning: imaginary part of x ignored." >>>> ???????? x = real(x) >>>> -??? Xf = fft(x,N,axis=0) >>>> +??? Xf = fft(x,N,axis=-1) >>>> ???? h = zeros(N) >>>> ???? if N % 2 == 0: >>>> ???????? h[0] = h[N/2] = 1 >>>> @@ -1078,7 +1078,7 @@ >>>> ???????? h[1:(N+1)/2] = 2 >>>> >>>> ???? if len(x.shape) > 1: >>>> -??????? h = h[:, newaxis] >>>> +??????? h = h[newaxis,:] >>>> ???? x = ifft(Xf*h) >>>> ???? return x >>> >>> I think your change would break the currently advertised behavior, >>> axis=0 (The transformation is done along the first axis) >>> >>> but fft and ifft have default axis=-1 >>> >>> fft in hilbert uses axis=0 as in docstring >>> but ifft uses default axis=-1 >>> >>> so, I would think the fix should be ?x = ifft(Xf*h, axis=0) >>> >>> But as it currently looks like the axis argument doesn't work anyway, >>> there wouldn't be much breakage if the axis would be included as an >>> argument and default to -1. >>> However, I don't know what the "standard" for scipy.signal is for default axis. >>> >>> Josef >> >> after adding axis to ifft: >>>>> print hilbert(aa).real >> [[ 0.82584851 ?0.15215031 ?0.14767381] >> ?[ 0.95021675 ?0.16803995 ?0.43562964] >> ?[ 0.13033881 ?0.06198952 ?0.70729614] >> ?[ 0.69409563 ?0.06962778 ?0.72552601] >> ?[ 0.34297612 ?0.50579001 ?0.86463304] >> ?[ 0.28355261 ?0.21626889 ?0.85165102] >> ?[ 0.49481491 ?0.21290645 ?0.71416814] >> ?[ 0.2645843 ? 0.95783096 ?0.77514016] >> ?[ 0.38735994 ?0.14274852 ?0.56344808] >> ?[ 0.88084015 ?0.39879649 ?0.64949951]] >>>>> print hilbert(aa[:,:1]).real >> [[ 0.82584851] >> ?[ 0.95021675] >> ?[ 0.13033881] >> ?[ 0.69409563] >> ?[ 0.34297612] >> ?[ 0.28355261] >> ?[ 0.49481491] >> ?[ 0.2645843 ] >> ?[ 0.38735994] >> ?[ 0.88084015]] >> >> but it treats a 1d array as row vector and transforms along zero axis >> of length 1, and not along the length of the array. >> so another fix to handle 1d arrays correctly should be done >> >>>>> print hilbert(aa[:,1]).real >> [ 0.15215031 ?0.16803995 ?0.06198952 ?0.06962778 ?0.50579001 ?0.21626889 >> ?0.21290645 ?0.95783096 ?0.14274852 ?0.39879649] >>>>> aa[:,1] >> array([ 0.15215031, ?0.16803995, ?0.06198952, ?0.06962778, ?0.50579001, >> ? ? ? ?0.21626889, ?0.21290645, ?0.95783096, ?0.14274852, ?0.39879649]) >>>>> > > there's something wrong with my example, the real part is the same > which confused me > > it works correctly with 1d > >>>> np.abs(hilbert(aa[:,0])) > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, ?0.49170197, > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, ?0.90716272]) > >>>> np.abs(hilbert(aa[:,:1])).T > array([[ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, ?0.49170197, > ? ? ? ? 0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, ?0.90716272]]) > >>>> np.abs(hilbert(aa))[:,0] > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, ?0.49170197, > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, ?0.90716272]) > > besides reading the docstring, I don't know what hilbert is supposed > to be good for. Would something like the function in the attachment do ? > Josef > > >> Josef >> >> >>> >>>> >>>> >>>> Cheers, >>>> >>>> Ariel >>>> -- >>>> Ariel Rokem >>>> Helen Wills Neuroscience Institute >>>> University of California, Berkeley >>>> http://argentum.ucbso.berkeley.edu/ariel >>>> >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >>>> >>> >> > -------------- next part -------------- # -*- coding: utf-8 -*- """ scipy.signal.hilbert bugfix Created on Thu Jan 14 22:54:53 2010 changes by josef-pktd """ import numpy as np from numpy import asarray, zeros, newaxis, real from scipy import fftpack def hilbert(x, N=None, axis=0): """Compute the analytic signal. The transformation is done along the first axis. Parameters ---------- x : array-like Signal data N : int, optional Number of Fourier components. Default: ``x.shape[0]`` axis : int, optional Returns ------- xa : ndarray, shape (N,) + x.shape[1:] Analytic signal of `x` Notes ----- The analytic signal `x_a(t)` of `x(t)` is:: x_a = F^{-1}(F(x) 2U) = x + i y where ``F`` is the Fourier transform, ``U`` the unit step function, and ``y`` the Hilbert transform of ``x``. [1] References ---------- .. [1] Wikipedia, "Analytic signal". http://en.wikipedia.org/wiki/Analytic_signal """ x = asarray(x) if N is None: N = x.shape[axis] if N <=0: raise ValueError, "N must be positive." if np.iscomplexobj(x): print "Warning: imaginary part of x ignored." x = real(x) Xf = fftpack.fft(x,N,axis=axis) h = zeros(N) if N % 2 == 0: h[0] = h[N/2] = 1 h[1:N/2] = 2 else: h[0] = 1 h[1:(N+1)/2] = 2 if len(x.shape) > 1: ind = [newaxis]*x.ndim ind[axis] = slice(None) h = h[ind] x = fftpack.ifft(Xf*h, axis=axis) return x aa = np.random.randn(10,3) print np.abs(hilbert(aa))[:,0] print np.abs(hilbert(aa[:,:1])).T print np.abs(hilbert(aa,axis=1))[0] print np.abs(hilbert(aa[0])) From arokem at berkeley.edu Fri Jan 15 01:44:54 2010 From: arokem at berkeley.edu (Ariel Rokem) Date: Thu, 14 Jan 2010 22:44:54 -0800 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> Message-ID: <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> Yes - looks good. Except I would prefer to eventually set the axis to default to -1, to be consistent with signal.fft (and also np.fft.fft) which has axis=-1. As for whether it's doing what it's supposed to do, for what it's worth - it seems to do similar things to what Matlab's 'hilbert' function does on a few simple examples I tried out. Cheers, Ariel On Thu, Jan 14, 2010 at 8:53 PM, wrote: > On Thu, Jan 14, 2010 at 11:27 PM, wrote: > > On Thu, Jan 14, 2010 at 11:02 PM, wrote: > >> On Thu, Jan 14, 2010 at 10:54 PM, wrote: > >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem > wrote: > >>>> Hi everyone, > >>>> > >>>> I have been trying to use scipy.signal.hilbert and I got the following > >>>> puzzling result: > >>>> > >>>> In [22]: import scipy > >>>> > >>>> In [23]: scipy.__version__ #I have r6182 > >>>> Out[23]: '0.8.0.dev' > >>>> > >>>> In [24]: import scipy.signal as signal > >>>> > >>>> In [25]: a = np.random.rand(100,100) > >>>> > >>>> In [26]: np.abs(signal.hilbert(a[-1])) > >>>> Out[26]: > >>>> array([ 0.57567681, 0.25918624, 0.50207097, 0.51834052, > 0.24293389, > >>>> 0.5779464 , 0.6515758 , 0.89973173, 1.00275444, > 0.37352935, > >>>> 0.62332717, 0.93599749, 0.40651376, 0.65088756, 0.8332281 > , > >>>> 0.5770101 , 0.9288512 , 0.46671906, 0.41536055, > 0.71418068, > >>>> 0.81250913, 0.07652627, 0.72939072, 0.26755626, > 0.36396146, > >>>> 0.59725999, 1.02264694, 0.41227986, 0.98122853, > 0.71906675, > >>>> 0.58582611, 0.77288117, 0.3217015 , 0.65261394, > 0.11947618, > >>>> 0.75632703, 0.43432935, 0.52182485, 1.0277177 , > 1.01104986, > >>>> 0.3023265 , 0.6024772 , 0.69257548, 0.55418735, > 0.46259052, > >>>> 0.25832231, 0.38278355, 0.45508532, 0.26215872, > 0.34207947, > >>>> 0.80704729, 0.80755477, 0.95317178, 0.97458885, > 0.58762294, > >>>> 0.82540618, 0.62005585, 0.82494646, 1.04221293, > 0.14983027, > >>>> 1.01571579, 0.99381328, 0.24158714, 0.84256569, > 0.53418924, > >>>> 0.24067628, 0.90489883, 1.02217747, 0.34988034, 0.5310065 > , > >>>> 0.48135002, 1.03020269, 0.6013679 , 0.46062485, 0.3918485 > , > >>>> 0.21554545, 0.31704519, 0.04868385, 0.1787766 , > 0.37361852, > >>>> 0.21977912, 0.7649772 , 0.77867281, 0.37684278, > 0.64432638, > >>>> 0.77494951, 0.87106309, 0.77611484, 0.52666801, > 0.88683667, > >>>> 0.69164967, 0.98618191, 0.84811375, 0.35934198, > 0.32650478, > >>>> 0.1752677 , 0.60574454, 0.5109132 , 0.52332287, > 0.99777805]) > >>>> > >>>> In [27]: np.abs(signal.hilbert(a))[-1] > >>>> Out[27]: > >>>> array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > >>>> > >>>> > >>>> ---------------------------------------------------------------------- > >>>> > >>>> I was expecting both of these to have the same values - am I missing > >>>> something? > >>>> > >>>> I think that the following solves this issue, but now I am not that > sure > >>>> whether it does what it is supposed to do and I couldn't find a test > for > >>>> this in test_signaltools.py. Does anyone know of a good test-case for > the > >>>> analytic signal, that I could create for this? > >>>> > >>>> Index: scipy/signal/signaltools.py > >>>> =================================================================== > >>>> --- scipy/signal/signaltools.py (revision 6182) > >>>> +++ scipy/signal/signaltools.py (working copy) > >>>> @@ -1062,13 +1062,13 @@ > >>>> """ > >>>> x = asarray(x) > >>>> if N is None: > >>>> - N = len(x) > >>>> + N = x.shape[-1] > >>>> if N <=0: > >>>> raise ValueError, "N must be positive." > >>>> if iscomplexobj(x): > >>>> print "Warning: imaginary part of x ignored." > >>>> x = real(x) > >>>> - Xf = fft(x,N,axis=0) > >>>> + Xf = fft(x,N,axis=-1) > >>>> h = zeros(N) > >>>> if N % 2 == 0: > >>>> h[0] = h[N/2] = 1 > >>>> @@ -1078,7 +1078,7 @@ > >>>> h[1:(N+1)/2] = 2 > >>>> > >>>> if len(x.shape) > 1: > >>>> - h = h[:, newaxis] > >>>> + h = h[newaxis,:] > >>>> x = ifft(Xf*h) > >>>> return x > >>> > >>> I think your change would break the currently advertised behavior, > >>> axis=0 (The transformation is done along the first axis) > >>> > >>> but fft and ifft have default axis=-1 > >>> > >>> fft in hilbert uses axis=0 as in docstring > >>> but ifft uses default axis=-1 > >>> > >>> so, I would think the fix should be x = ifft(Xf*h, axis=0) > >>> > >>> But as it currently looks like the axis argument doesn't work anyway, > >>> there wouldn't be much breakage if the axis would be included as an > >>> argument and default to -1. > >>> However, I don't know what the "standard" for scipy.signal is for > default axis. > >>> > >>> Josef > >> > >> after adding axis to ifft: > >>>>> print hilbert(aa).real > >> [[ 0.82584851 0.15215031 0.14767381] > >> [ 0.95021675 0.16803995 0.43562964] > >> [ 0.13033881 0.06198952 0.70729614] > >> [ 0.69409563 0.06962778 0.72552601] > >> [ 0.34297612 0.50579001 0.86463304] > >> [ 0.28355261 0.21626889 0.85165102] > >> [ 0.49481491 0.21290645 0.71416814] > >> [ 0.2645843 0.95783096 0.77514016] > >> [ 0.38735994 0.14274852 0.56344808] > >> [ 0.88084015 0.39879649 0.64949951]] > >>>>> print hilbert(aa[:,:1]).real > >> [[ 0.82584851] > >> [ 0.95021675] > >> [ 0.13033881] > >> [ 0.69409563] > >> [ 0.34297612] > >> [ 0.28355261] > >> [ 0.49481491] > >> [ 0.2645843 ] > >> [ 0.38735994] > >> [ 0.88084015]] > >> > >> but it treats a 1d array as row vector and transforms along zero axis > >> of length 1, and not along the length of the array. > >> so another fix to handle 1d arrays correctly should be done > >> > >>>>> print hilbert(aa[:,1]).real > >> [ 0.15215031 0.16803995 0.06198952 0.06962778 0.50579001 0.21626889 > >> 0.21290645 0.95783096 0.14274852 0.39879649] > >>>>> aa[:,1] > >> array([ 0.15215031, 0.16803995, 0.06198952, 0.06962778, 0.50579001, > >> 0.21626889, 0.21290645, 0.95783096, 0.14274852, 0.39879649]) > >>>>> > > > > there's something wrong with my example, the real part is the same > > which confused me > > > > it works correctly with 1d > > > >>>> np.abs(hilbert(aa[:,0])) > > array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, 0.49170197, > > 0.31227114, 0.49505637, 0.26461488, 0.61385196, 0.90716272]) > > > >>>> np.abs(hilbert(aa[:,:1])).T > > array([[ 0.83251128, 1.04487091, 0.27702083, 0.69901499, 0.49170197, > > 0.31227114, 0.49505637, 0.26461488, 0.61385196, 0.90716272]]) > > > >>>> np.abs(hilbert(aa))[:,0] > > array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, 0.49170197, > > 0.31227114, 0.49505637, 0.26461488, 0.61385196, 0.90716272]) > > > > besides reading the docstring, I don't know what hilbert is supposed > > to be good for. > > Would something like the function in the attachment do ? > > > > > Josef > > > > > >> Josef > >> > >> > >>> > >>>> > >>>> > >>>> Cheers, > >>>> > >>>> Ariel > >>>> -- > >>>> Ariel Rokem > >>>> Helen Wills Neuroscience Institute > >>>> University of California, Berkeley > >>>> http://argentum.ucbso.berkeley.edu/ariel > >>>> > >>>> _______________________________________________ > >>>> SciPy-Dev mailing list > >>>> SciPy-Dev at scipy.org > >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>>> > >>>> > >>> > >> > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -- Ariel Rokem Helen Wills Neuroscience Institute University of California, Berkeley http://argentum.ucbso.berkeley.edu/ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Jan 15 02:10:47 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 15 Jan 2010 02:10:47 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> Message-ID: <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> On Fri, Jan 15, 2010 at 1:44 AM, Ariel Rokem wrote: > Yes - looks good. Except I would prefer to eventually set the axis to > default to -1, to be consistent with signal.fft (and also np.fft.fft) which > has axis=-1. I'm indifferent to the default axis, from a quick look and my experience there are not many functions with axis arguments in signal. So I'm fine with switching to axis=-1. We should do it with this bugfix, since until now the function wasn't correct anyway for 2d. > > As for whether it's doing what it's supposed to do, for what it's worth - it > seems to do similar things to what Matlab's 'hilbert' function does on a few > simple examples I tried out. I was reading briefly on wikipedia, and checked with fftpack.hilbert, which returns the same array as signal.hilbert(a).imag, but I didn't manage to figure out why fftpack.hilbert only allows 1d (i got lost starting at convolve.pyf) Could you write a simple test case compared to matlab, e.g. 10by3 as in my example, for both axis, or 10by6 if 10by3 doesn't make sense? If nobody objects, I can commit the change with axis=-1. Josef > > Cheers, > > Ariel > > > > On Thu, Jan 14, 2010 at 8:53 PM, wrote: >> >> On Thu, Jan 14, 2010 at 11:27 PM, ? wrote: >> > On Thu, Jan 14, 2010 at 11:02 PM, ? wrote: >> >> On Thu, Jan 14, 2010 at 10:54 PM, ? wrote: >> >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem >> >>> wrote: >> >>>> Hi everyone, >> >>>> >> >>>> I have been trying to use scipy.signal.hilbert and I got the >> >>>> following >> >>>> puzzling result: >> >>>> >> >>>> In [22]: import scipy >> >>>> >> >>>> In [23]: scipy.__version__ #I have r6182 >> >>>> Out[23]: '0.8.0.dev' >> >>>> >> >>>> In [24]: import scipy.signal as signal >> >>>> >> >>>> In [25]: a = np.random.rand(100,100) >> >>>> >> >>>> In [26]: np.abs(signal.hilbert(a[-1])) >> >>>> Out[26]: >> >>>> array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052, >> >>>> 0.24293389, >> >>>> ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444, >> >>>> 0.37352935, >> >>>> ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756,? 0.8332281 >> >>>> , >> >>>> ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055, >> >>>> 0.71418068, >> >>>> ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626, >> >>>> 0.36396146, >> >>>> ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853, >> >>>> 0.71906675, >> >>>> ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394, >> >>>> 0.11947618, >> >>>> ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 , >> >>>> 1.01104986, >> >>>> ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735, >> >>>> 0.46259052, >> >>>> ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872, >> >>>> 0.34207947, >> >>>> ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885, >> >>>> 0.58762294, >> >>>> ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293, >> >>>> 0.14983027, >> >>>> ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569, >> >>>> 0.53418924, >> >>>> ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034,? 0.5310065 >> >>>> , >> >>>> ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485,? 0.3918485 >> >>>> , >> >>>> ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 , >> >>>> 0.37361852, >> >>>> ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278, >> >>>> 0.64432638, >> >>>> ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801, >> >>>> 0.88683667, >> >>>> ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198, >> >>>> 0.32650478, >> >>>> ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287, >> >>>> 0.99777805]) >> >>>> >> >>>> In [27]: np.abs(signal.hilbert(a))[-1] >> >>>> Out[27]: >> >>>> array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >>>> 0., >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >>>> 0., >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >>>> 0., >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >>>> 0., >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >>>> 0., >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >>>> 0., >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >>>> 0., >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) >> >>>> >> >>>> >> >>>> >> >>>> ---------------------------------------------------------------------- >> >>>> >> >>>> I was expecting both of these to have the same values - am I missing >> >>>> something? >> >>>> >> >>>> I think that the following solves this issue, but now I am not that >> >>>> sure >> >>>> whether it does what it is supposed to do and I couldn't find a test >> >>>> for >> >>>> this in test_signaltools.py. Does anyone know of a good test-case for >> >>>> the >> >>>> analytic signal, that I could create for this? >> >>>> >> >>>> Index: scipy/signal/signaltools.py >> >>>> =================================================================== >> >>>> --- scipy/signal/signaltools.py??? (revision 6182) >> >>>> +++ scipy/signal/signaltools.py??? (working copy) >> >>>> @@ -1062,13 +1062,13 @@ >> >>>> ???? """ >> >>>> ???? x = asarray(x) >> >>>> ???? if N is None: >> >>>> -??????? N = len(x) >> >>>> +??????? N = x.shape[-1] >> >>>> ???? if N <=0: >> >>>> ???????? raise ValueError, "N must be positive." >> >>>> ???? if iscomplexobj(x): >> >>>> ???????? print "Warning: imaginary part of x ignored." >> >>>> ???????? x = real(x) >> >>>> -??? Xf = fft(x,N,axis=0) >> >>>> +??? Xf = fft(x,N,axis=-1) >> >>>> ???? h = zeros(N) >> >>>> ???? if N % 2 == 0: >> >>>> ???????? h[0] = h[N/2] = 1 >> >>>> @@ -1078,7 +1078,7 @@ >> >>>> ???????? h[1:(N+1)/2] = 2 >> >>>> >> >>>> ???? if len(x.shape) > 1: >> >>>> -??????? h = h[:, newaxis] >> >>>> +??????? h = h[newaxis,:] >> >>>> ???? x = ifft(Xf*h) >> >>>> ???? return x >> >>> >> >>> I think your change would break the currently advertised behavior, >> >>> axis=0 (The transformation is done along the first axis) >> >>> >> >>> but fft and ifft have default axis=-1 >> >>> >> >>> fft in hilbert uses axis=0 as in docstring >> >>> but ifft uses default axis=-1 >> >>> >> >>> so, I would think the fix should be ?x = ifft(Xf*h, axis=0) >> >>> >> >>> But as it currently looks like the axis argument doesn't work anyway, >> >>> there wouldn't be much breakage if the axis would be included as an >> >>> argument and default to -1. >> >>> However, I don't know what the "standard" for scipy.signal is for >> >>> default axis. >> >>> >> >>> Josef >> >> >> >> after adding axis to ifft: >> >>>>> print hilbert(aa).real >> >> [[ 0.82584851 ?0.15215031 ?0.14767381] >> >> ?[ 0.95021675 ?0.16803995 ?0.43562964] >> >> ?[ 0.13033881 ?0.06198952 ?0.70729614] >> >> ?[ 0.69409563 ?0.06962778 ?0.72552601] >> >> ?[ 0.34297612 ?0.50579001 ?0.86463304] >> >> ?[ 0.28355261 ?0.21626889 ?0.85165102] >> >> ?[ 0.49481491 ?0.21290645 ?0.71416814] >> >> ?[ 0.2645843 ? 0.95783096 ?0.77514016] >> >> ?[ 0.38735994 ?0.14274852 ?0.56344808] >> >> ?[ 0.88084015 ?0.39879649 ?0.64949951]] >> >>>>> print hilbert(aa[:,:1]).real >> >> [[ 0.82584851] >> >> ?[ 0.95021675] >> >> ?[ 0.13033881] >> >> ?[ 0.69409563] >> >> ?[ 0.34297612] >> >> ?[ 0.28355261] >> >> ?[ 0.49481491] >> >> ?[ 0.2645843 ] >> >> ?[ 0.38735994] >> >> ?[ 0.88084015]] >> >> >> >> but it treats a 1d array as row vector and transforms along zero axis >> >> of length 1, and not along the length of the array. >> >> so another fix to handle 1d arrays correctly should be done >> >> >> >>>>> print hilbert(aa[:,1]).real >> >> [ 0.15215031 ?0.16803995 ?0.06198952 ?0.06962778 ?0.50579001 >> >> ?0.21626889 >> >> ?0.21290645 ?0.95783096 ?0.14274852 ?0.39879649] >> >>>>> aa[:,1] >> >> array([ 0.15215031, ?0.16803995, ?0.06198952, ?0.06962778, ?0.50579001, >> >> ? ? ? ?0.21626889, ?0.21290645, ?0.95783096, ?0.14274852, ?0.39879649]) >> >>>>> >> > >> > there's something wrong with my example, the real part is the same >> > which confused me >> > >> > it works correctly with 1d >> > >> >>>> np.abs(hilbert(aa[:,0])) >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, ?0.49170197, >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, ?0.90716272]) >> > >> >>>> np.abs(hilbert(aa[:,:1])).T >> > array([[ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, ?0.49170197, >> > ? ? ? ? 0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >> > ?0.90716272]]) >> > >> >>>> np.abs(hilbert(aa))[:,0] >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, ?0.49170197, >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, ?0.90716272]) >> > >> > besides reading the docstring, I don't know what hilbert is supposed >> > to be good for. >> >> Would something like the function in the attachment do ? >> >> >> >> > Josef >> > >> > >> >> Josef >> >> >> >> >> >>> >> >>>> >> >>>> >> >>>> Cheers, >> >>>> >> >>>> Ariel >> >>>> -- >> >>>> Ariel Rokem >> >>>> Helen Wills Neuroscience Institute >> >>>> University of California, Berkeley >> >>>> http://argentum.ucbso.berkeley.edu/ariel >> >>>> >> >>>> _______________________________________________ >> >>>> SciPy-Dev mailing list >> >>>> SciPy-Dev at scipy.org >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >>>> >> >>>> >> >>> >> >> >> > >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > > > > -- > Ariel Rokem > Helen Wills Neuroscience Institute > University of California, Berkeley > http://argentum.ucbso.berkeley.edu/ariel > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From arokem at berkeley.edu Fri Jan 15 14:34:34 2010 From: arokem at berkeley.edu (Ariel Rokem) Date: Fri, 15 Jan 2010 11:34:34 -0800 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> Message-ID: <43958ee61001151134v43e4e6a4sf08fa63bf97ec08a@mail.gmail.com> Hi - attached is a file with a couple of tests. I am not sure this tests the issues we were dealing with previously (the axis issues, etc.), but it has some sensible test-cases, which compare to what Matlab would give you (not quite 10by3 or 10by6, but as you can see, they make sense). Also - all the assertions are assert_almost_equal. Do you think that's OK? I think there are float-precision issues here, which would make assert_equal fail, but I am not sure - I would be happy to get any general comments on these tests, in case I am doing this all wrong. Cheers, Ariel On Thu, Jan 14, 2010 at 11:10 PM, wrote: > On Fri, Jan 15, 2010 at 1:44 AM, Ariel Rokem wrote: > > Yes - looks good. Except I would prefer to eventually set the axis to > > default to -1, to be consistent with signal.fft (and also np.fft.fft) > which > > has axis=-1. > > I'm indifferent to the default axis, from a quick look and my > experience there are not many functions with axis arguments in signal. > So I'm fine with switching to axis=-1. We should do it with this > bugfix, since until now the function wasn't correct anyway for 2d. > > > > > As for whether it's doing what it's supposed to do, for what it's worth - > it > > seems to do similar things to what Matlab's 'hilbert' function does on a > few > > simple examples I tried out. > > I was reading briefly on wikipedia, and checked with fftpack.hilbert, > which returns the same array as signal.hilbert(a).imag, but I didn't > manage to figure out why fftpack.hilbert only allows 1d (i got lost > starting at convolve.pyf) > > Could you write a simple test case compared to matlab, e.g. 10by3 as > in my example, for both axis, or 10by6 if 10by3 doesn't make sense? > > If nobody objects, I can commit the change with axis=-1. > > Josef > > > > > Cheers, > > > > Ariel > > > > > > > > On Thu, Jan 14, 2010 at 8:53 PM, wrote: > >> > >> On Thu, Jan 14, 2010 at 11:27 PM, wrote: > >> > On Thu, Jan 14, 2010 at 11:02 PM, wrote: > >> >> On Thu, Jan 14, 2010 at 10:54 PM, wrote: > >> >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem > >> >>> wrote: > >> >>>> Hi everyone, > >> >>>> > >> >>>> I have been trying to use scipy.signal.hilbert and I got the > >> >>>> following > >> >>>> puzzling result: > >> >>>> > >> >>>> In [22]: import scipy > >> >>>> > >> >>>> In [23]: scipy.__version__ #I have r6182 > >> >>>> Out[23]: '0.8.0.dev' > >> >>>> > >> >>>> In [24]: import scipy.signal as signal > >> >>>> > >> >>>> In [25]: a = np.random.rand(100,100) > >> >>>> > >> >>>> In [26]: np.abs(signal.hilbert(a[-1])) > >> >>>> Out[26]: > >> >>>> array([ 0.57567681, 0.25918624, 0.50207097, 0.51834052, > >> >>>> 0.24293389, > >> >>>> 0.5779464 , 0.6515758 , 0.89973173, 1.00275444, > >> >>>> 0.37352935, > >> >>>> 0.62332717, 0.93599749, 0.40651376, 0.65088756, > 0.8332281 > >> >>>> , > >> >>>> 0.5770101 , 0.9288512 , 0.46671906, 0.41536055, > >> >>>> 0.71418068, > >> >>>> 0.81250913, 0.07652627, 0.72939072, 0.26755626, > >> >>>> 0.36396146, > >> >>>> 0.59725999, 1.02264694, 0.41227986, 0.98122853, > >> >>>> 0.71906675, > >> >>>> 0.58582611, 0.77288117, 0.3217015 , 0.65261394, > >> >>>> 0.11947618, > >> >>>> 0.75632703, 0.43432935, 0.52182485, 1.0277177 , > >> >>>> 1.01104986, > >> >>>> 0.3023265 , 0.6024772 , 0.69257548, 0.55418735, > >> >>>> 0.46259052, > >> >>>> 0.25832231, 0.38278355, 0.45508532, 0.26215872, > >> >>>> 0.34207947, > >> >>>> 0.80704729, 0.80755477, 0.95317178, 0.97458885, > >> >>>> 0.58762294, > >> >>>> 0.82540618, 0.62005585, 0.82494646, 1.04221293, > >> >>>> 0.14983027, > >> >>>> 1.01571579, 0.99381328, 0.24158714, 0.84256569, > >> >>>> 0.53418924, > >> >>>> 0.24067628, 0.90489883, 1.02217747, 0.34988034, > 0.5310065 > >> >>>> , > >> >>>> 0.48135002, 1.03020269, 0.6013679 , 0.46062485, > 0.3918485 > >> >>>> , > >> >>>> 0.21554545, 0.31704519, 0.04868385, 0.1787766 , > >> >>>> 0.37361852, > >> >>>> 0.21977912, 0.7649772 , 0.77867281, 0.37684278, > >> >>>> 0.64432638, > >> >>>> 0.77494951, 0.87106309, 0.77611484, 0.52666801, > >> >>>> 0.88683667, > >> >>>> 0.69164967, 0.98618191, 0.84811375, 0.35934198, > >> >>>> 0.32650478, > >> >>>> 0.1752677 , 0.60574454, 0.5109132 , 0.52332287, > >> >>>> 0.99777805]) > >> >>>> > >> >>>> In [27]: np.abs(signal.hilbert(a))[-1] > >> >>>> Out[27]: > >> >>>> array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > >> >>>> 0., > >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > >> >>>> 0., > >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > >> >>>> 0., > >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > >> >>>> 0., > >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > >> >>>> 0., > >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > >> >>>> 0., > >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > >> >>>> 0., > >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > >> >>>> > >> >>>> > >> >>>> > >> >>>> > ---------------------------------------------------------------------- > >> >>>> > >> >>>> I was expecting both of these to have the same values - am I > missing > >> >>>> something? > >> >>>> > >> >>>> I think that the following solves this issue, but now I am not that > >> >>>> sure > >> >>>> whether it does what it is supposed to do and I couldn't find a > test > >> >>>> for > >> >>>> this in test_signaltools.py. Does anyone know of a good test-case > for > >> >>>> the > >> >>>> analytic signal, that I could create for this? > >> >>>> > >> >>>> Index: scipy/signal/signaltools.py > >> >>>> =================================================================== > >> >>>> --- scipy/signal/signaltools.py (revision 6182) > >> >>>> +++ scipy/signal/signaltools.py (working copy) > >> >>>> @@ -1062,13 +1062,13 @@ > >> >>>> """ > >> >>>> x = asarray(x) > >> >>>> if N is None: > >> >>>> - N = len(x) > >> >>>> + N = x.shape[-1] > >> >>>> if N <=0: > >> >>>> raise ValueError, "N must be positive." > >> >>>> if iscomplexobj(x): > >> >>>> print "Warning: imaginary part of x ignored." > >> >>>> x = real(x) > >> >>>> - Xf = fft(x,N,axis=0) > >> >>>> + Xf = fft(x,N,axis=-1) > >> >>>> h = zeros(N) > >> >>>> if N % 2 == 0: > >> >>>> h[0] = h[N/2] = 1 > >> >>>> @@ -1078,7 +1078,7 @@ > >> >>>> h[1:(N+1)/2] = 2 > >> >>>> > >> >>>> if len(x.shape) > 1: > >> >>>> - h = h[:, newaxis] > >> >>>> + h = h[newaxis,:] > >> >>>> x = ifft(Xf*h) > >> >>>> return x > >> >>> > >> >>> I think your change would break the currently advertised behavior, > >> >>> axis=0 (The transformation is done along the first axis) > >> >>> > >> >>> but fft and ifft have default axis=-1 > >> >>> > >> >>> fft in hilbert uses axis=0 as in docstring > >> >>> but ifft uses default axis=-1 > >> >>> > >> >>> so, I would think the fix should be x = ifft(Xf*h, axis=0) > >> >>> > >> >>> But as it currently looks like the axis argument doesn't work > anyway, > >> >>> there wouldn't be much breakage if the axis would be included as an > >> >>> argument and default to -1. > >> >>> However, I don't know what the "standard" for scipy.signal is for > >> >>> default axis. > >> >>> > >> >>> Josef > >> >> > >> >> after adding axis to ifft: > >> >>>>> print hilbert(aa).real > >> >> [[ 0.82584851 0.15215031 0.14767381] > >> >> [ 0.95021675 0.16803995 0.43562964] > >> >> [ 0.13033881 0.06198952 0.70729614] > >> >> [ 0.69409563 0.06962778 0.72552601] > >> >> [ 0.34297612 0.50579001 0.86463304] > >> >> [ 0.28355261 0.21626889 0.85165102] > >> >> [ 0.49481491 0.21290645 0.71416814] > >> >> [ 0.2645843 0.95783096 0.77514016] > >> >> [ 0.38735994 0.14274852 0.56344808] > >> >> [ 0.88084015 0.39879649 0.64949951]] > >> >>>>> print hilbert(aa[:,:1]).real > >> >> [[ 0.82584851] > >> >> [ 0.95021675] > >> >> [ 0.13033881] > >> >> [ 0.69409563] > >> >> [ 0.34297612] > >> >> [ 0.28355261] > >> >> [ 0.49481491] > >> >> [ 0.2645843 ] > >> >> [ 0.38735994] > >> >> [ 0.88084015]] > >> >> > >> >> but it treats a 1d array as row vector and transforms along zero axis > >> >> of length 1, and not along the length of the array. > >> >> so another fix to handle 1d arrays correctly should be done > >> >> > >> >>>>> print hilbert(aa[:,1]).real > >> >> [ 0.15215031 0.16803995 0.06198952 0.06962778 0.50579001 > >> >> 0.21626889 > >> >> 0.21290645 0.95783096 0.14274852 0.39879649] > >> >>>>> aa[:,1] > >> >> array([ 0.15215031, 0.16803995, 0.06198952, 0.06962778, > 0.50579001, > >> >> 0.21626889, 0.21290645, 0.95783096, 0.14274852, > 0.39879649]) > >> >>>>> > >> > > >> > there's something wrong with my example, the real part is the same > >> > which confused me > >> > > >> > it works correctly with 1d > >> > > >> >>>> np.abs(hilbert(aa[:,0])) > >> > array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > 0.49170197, > >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > 0.90716272]) > >> > > >> >>>> np.abs(hilbert(aa[:,:1])).T > >> > array([[ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > 0.49170197, > >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > >> > 0.90716272]]) > >> > > >> >>>> np.abs(hilbert(aa))[:,0] > >> > array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > 0.49170197, > >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > 0.90716272]) > >> > > >> > besides reading the docstring, I don't know what hilbert is supposed > >> > to be good for. > >> > >> Would something like the function in the attachment do ? > >> > >> > >> > >> > Josef > >> > > >> > > >> >> Josef > >> >> > >> >> > >> >>> > >> >>>> > >> >>>> > >> >>>> Cheers, > >> >>>> > >> >>>> Ariel > >> >>>> -- > >> >>>> Ariel Rokem > >> >>>> Helen Wills Neuroscience Institute > >> >>>> University of California, Berkeley > >> >>>> http://argentum.ucbso.berkeley.edu/ariel > >> >>>> > >> >>>> _______________________________________________ > >> >>>> SciPy-Dev mailing list > >> >>>> SciPy-Dev at scipy.org > >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >> >>>> > >> >>>> > >> >>> > >> >> > >> > > >> > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-dev > >> > > > > > > > > -- > > Ariel Rokem > > Helen Wills Neuroscience Institute > > University of California, Berkeley > > http://argentum.ucbso.berkeley.edu/ariel > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -- Ariel Rokem Helen Wills Neuroscience Institute University of California, Berkeley http://argentum.ucbso.berkeley.edu/ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_hilbert.py Type: application/octet-stream Size: 1028 bytes Desc: not available URL: From josef.pktd at gmail.com Fri Jan 15 14:48:51 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 15 Jan 2010 14:48:51 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <43958ee61001151134v43e4e6a4sf08fa63bf97ec08a@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> <43958ee61001151134v43e4e6a4sf08fa63bf97ec08a@mail.gmail.com> Message-ID: <1cd32cbb1001151148p617c838dp8fda69f3f4c9a009@mail.gmail.com> On Fri, Jan 15, 2010 at 2:34 PM, Ariel Rokem wrote: > Hi - > > attached is a file with a couple of tests. I am not sure this tests the > issues we were dealing with previously (the axis issues, etc.), but it has > some sensible test-cases, which compare to what Matlab would give you (not > quite 10by3 or 10by6, but as you can see, they make sense). Also - all the > assertions are assert_almost_equal. Do you think that's OK? I think there > are float-precision issues here, which would make assert_equal fail, but I > am not sure - I would be happy to get any general comments on these tests, > in case I am doing this all wrong. nice test cases, I like theoretical tests even better than verified numbers from other packages. Besides some cosmetic changes to get them into a test function, the only part to add is the precision of the tests. The default precision of assert_almost_equal is only 6 decimals. For these kind of cases, I usually go to 12 to 15 depending on the numerical precision of the algorithm. Usually, I go by trial and error until the test breaks, or calculate max abs of the error. I can add some tests for the axis argument. Can you open a ticket for the record or shall I ? Josef > > Cheers, > > Ariel > > On Thu, Jan 14, 2010 at 11:10 PM, wrote: >> >> On Fri, Jan 15, 2010 at 1:44 AM, Ariel Rokem wrote: >> > Yes - looks good. Except I would prefer to eventually set the axis to >> > default to -1, to be consistent with signal.fft (and also np.fft.fft) >> > which >> > has axis=-1. >> >> I'm indifferent to the default axis, from a quick look and my >> experience there are not many functions with axis arguments in signal. >> So I'm fine with switching to axis=-1. We should do it with this >> bugfix, since until now the function wasn't correct anyway for 2d. >> >> > >> > As for whether it's doing what it's supposed to do, for what it's worth >> > - it >> > seems to do similar things to what Matlab's 'hilbert' function does on a >> > few >> > simple examples I tried out. >> >> I was reading briefly on wikipedia, and checked with fftpack.hilbert, >> which returns the same array as signal.hilbert(a).imag, but I didn't >> manage to figure out why fftpack.hilbert only allows 1d (i got lost >> starting at convolve.pyf) >> >> Could you write a simple test case compared to matlab, e.g. 10by3 as >> in my example, for both axis, or 10by6 if 10by3 doesn't make sense? >> >> If nobody objects, I can commit the change with axis=-1. >> >> Josef >> >> > >> > Cheers, >> > >> > Ariel >> > >> > >> > >> > On Thu, Jan 14, 2010 at 8:53 PM, wrote: >> >> >> >> On Thu, Jan 14, 2010 at 11:27 PM, ? wrote: >> >> > On Thu, Jan 14, 2010 at 11:02 PM, ? wrote: >> >> >> On Thu, Jan 14, 2010 at 10:54 PM, ? wrote: >> >> >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem >> >> >>> wrote: >> >> >>>> Hi everyone, >> >> >>>> >> >> >>>> I have been trying to use scipy.signal.hilbert and I got the >> >> >>>> following >> >> >>>> puzzling result: >> >> >>>> >> >> >>>> In [22]: import scipy >> >> >>>> >> >> >>>> In [23]: scipy.__version__ #I have r6182 >> >> >>>> Out[23]: '0.8.0.dev' >> >> >>>> >> >> >>>> In [24]: import scipy.signal as signal >> >> >>>> >> >> >>>> In [25]: a = np.random.rand(100,100) >> >> >>>> >> >> >>>> In [26]: np.abs(signal.hilbert(a[-1])) >> >> >>>> Out[26]: >> >> >>>> array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052, >> >> >>>> 0.24293389, >> >> >>>> ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444, >> >> >>>> 0.37352935, >> >> >>>> ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756, >> >> >>>> 0.8332281 >> >> >>>> , >> >> >>>> ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055, >> >> >>>> 0.71418068, >> >> >>>> ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626, >> >> >>>> 0.36396146, >> >> >>>> ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853, >> >> >>>> 0.71906675, >> >> >>>> ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394, >> >> >>>> 0.11947618, >> >> >>>> ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 , >> >> >>>> 1.01104986, >> >> >>>> ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735, >> >> >>>> 0.46259052, >> >> >>>> ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872, >> >> >>>> 0.34207947, >> >> >>>> ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885, >> >> >>>> 0.58762294, >> >> >>>> ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293, >> >> >>>> 0.14983027, >> >> >>>> ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569, >> >> >>>> 0.53418924, >> >> >>>> ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034, >> >> >>>> 0.5310065 >> >> >>>> , >> >> >>>> ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485, >> >> >>>> 0.3918485 >> >> >>>> , >> >> >>>> ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 , >> >> >>>> 0.37361852, >> >> >>>> ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278, >> >> >>>> 0.64432638, >> >> >>>> ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801, >> >> >>>> 0.88683667, >> >> >>>> ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198, >> >> >>>> 0.32650478, >> >> >>>> ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287, >> >> >>>> 0.99777805]) >> >> >>>> >> >> >>>> In [27]: np.abs(signal.hilbert(a))[-1] >> >> >>>> Out[27]: >> >> >>>> array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >>>> 0., >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >>>> 0., >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >>>> 0., >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >>>> 0., >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >>>> 0., >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >>>> 0., >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >>>> 0., >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> ---------------------------------------------------------------------- >> >> >>>> >> >> >>>> I was expecting both of these to have the same values - am I >> >> >>>> missing >> >> >>>> something? >> >> >>>> >> >> >>>> I think that the following solves this issue, but now I am not >> >> >>>> that >> >> >>>> sure >> >> >>>> whether it does what it is supposed to do and I couldn't find a >> >> >>>> test >> >> >>>> for >> >> >>>> this in test_signaltools.py. Does anyone know of a good test-case >> >> >>>> for >> >> >>>> the >> >> >>>> analytic signal, that I could create for this? >> >> >>>> >> >> >>>> Index: scipy/signal/signaltools.py >> >> >>>> >> >> >>>> =================================================================== >> >> >>>> --- scipy/signal/signaltools.py??? (revision 6182) >> >> >>>> +++ scipy/signal/signaltools.py??? (working copy) >> >> >>>> @@ -1062,13 +1062,13 @@ >> >> >>>> ???? """ >> >> >>>> ???? x = asarray(x) >> >> >>>> ???? if N is None: >> >> >>>> -??????? N = len(x) >> >> >>>> +??????? N = x.shape[-1] >> >> >>>> ???? if N <=0: >> >> >>>> ???????? raise ValueError, "N must be positive." >> >> >>>> ???? if iscomplexobj(x): >> >> >>>> ???????? print "Warning: imaginary part of x ignored." >> >> >>>> ???????? x = real(x) >> >> >>>> -??? Xf = fft(x,N,axis=0) >> >> >>>> +??? Xf = fft(x,N,axis=-1) >> >> >>>> ???? h = zeros(N) >> >> >>>> ???? if N % 2 == 0: >> >> >>>> ???????? h[0] = h[N/2] = 1 >> >> >>>> @@ -1078,7 +1078,7 @@ >> >> >>>> ???????? h[1:(N+1)/2] = 2 >> >> >>>> >> >> >>>> ???? if len(x.shape) > 1: >> >> >>>> -??????? h = h[:, newaxis] >> >> >>>> +??????? h = h[newaxis,:] >> >> >>>> ???? x = ifft(Xf*h) >> >> >>>> ???? return x >> >> >>> >> >> >>> I think your change would break the currently advertised behavior, >> >> >>> axis=0 (The transformation is done along the first axis) >> >> >>> >> >> >>> but fft and ifft have default axis=-1 >> >> >>> >> >> >>> fft in hilbert uses axis=0 as in docstring >> >> >>> but ifft uses default axis=-1 >> >> >>> >> >> >>> so, I would think the fix should be ?x = ifft(Xf*h, axis=0) >> >> >>> >> >> >>> But as it currently looks like the axis argument doesn't work >> >> >>> anyway, >> >> >>> there wouldn't be much breakage if the axis would be included as an >> >> >>> argument and default to -1. >> >> >>> However, I don't know what the "standard" for scipy.signal is for >> >> >>> default axis. >> >> >>> >> >> >>> Josef >> >> >> >> >> >> after adding axis to ifft: >> >> >>>>> print hilbert(aa).real >> >> >> [[ 0.82584851 ?0.15215031 ?0.14767381] >> >> >> ?[ 0.95021675 ?0.16803995 ?0.43562964] >> >> >> ?[ 0.13033881 ?0.06198952 ?0.70729614] >> >> >> ?[ 0.69409563 ?0.06962778 ?0.72552601] >> >> >> ?[ 0.34297612 ?0.50579001 ?0.86463304] >> >> >> ?[ 0.28355261 ?0.21626889 ?0.85165102] >> >> >> ?[ 0.49481491 ?0.21290645 ?0.71416814] >> >> >> ?[ 0.2645843 ? 0.95783096 ?0.77514016] >> >> >> ?[ 0.38735994 ?0.14274852 ?0.56344808] >> >> >> ?[ 0.88084015 ?0.39879649 ?0.64949951]] >> >> >>>>> print hilbert(aa[:,:1]).real >> >> >> [[ 0.82584851] >> >> >> ?[ 0.95021675] >> >> >> ?[ 0.13033881] >> >> >> ?[ 0.69409563] >> >> >> ?[ 0.34297612] >> >> >> ?[ 0.28355261] >> >> >> ?[ 0.49481491] >> >> >> ?[ 0.2645843 ] >> >> >> ?[ 0.38735994] >> >> >> ?[ 0.88084015]] >> >> >> >> >> >> but it treats a 1d array as row vector and transforms along zero >> >> >> axis >> >> >> of length 1, and not along the length of the array. >> >> >> so another fix to handle 1d arrays correctly should be done >> >> >> >> >> >>>>> print hilbert(aa[:,1]).real >> >> >> [ 0.15215031 ?0.16803995 ?0.06198952 ?0.06962778 ?0.50579001 >> >> >> ?0.21626889 >> >> >> ?0.21290645 ?0.95783096 ?0.14274852 ?0.39879649] >> >> >>>>> aa[:,1] >> >> >> array([ 0.15215031, ?0.16803995, ?0.06198952, ?0.06962778, >> >> >> ?0.50579001, >> >> >> ? ? ? ?0.21626889, ?0.21290645, ?0.95783096, ?0.14274852, >> >> >> ?0.39879649]) >> >> >>>>> >> >> > >> >> > there's something wrong with my example, the real part is the same >> >> > which confused me >> >> > >> >> > it works correctly with 1d >> >> > >> >> >>>> np.abs(hilbert(aa[:,0])) >> >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >> >> > ?0.49170197, >> >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >> >> > ?0.90716272]) >> >> > >> >> >>>> np.abs(hilbert(aa[:,:1])).T >> >> > array([[ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >> >> > ?0.49170197, >> >> > ? ? ? ? 0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >> >> > ?0.90716272]]) >> >> > >> >> >>>> np.abs(hilbert(aa))[:,0] >> >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >> >> > ?0.49170197, >> >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >> >> > ?0.90716272]) >> >> > >> >> > besides reading the docstring, I don't know what hilbert is supposed >> >> > to be good for. >> >> >> >> Would something like the function in the attachment do ? >> >> >> >> >> >> >> >> > Josef >> >> > >> >> > >> >> >> Josef >> >> >> >> >> >> >> >> >>> >> >> >>>> >> >> >>>> >> >> >>>> Cheers, >> >> >>>> >> >> >>>> Ariel >> >> >>>> -- >> >> >>>> Ariel Rokem >> >> >>>> Helen Wills Neuroscience Institute >> >> >>>> University of California, Berkeley >> >> >>>> http://argentum.ucbso.berkeley.edu/ariel >> >> >>>> >> >> >>>> _______________________________________________ >> >> >>>> SciPy-Dev mailing list >> >> >>>> SciPy-Dev at scipy.org >> >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> >>>> >> >> >>>> >> >> >>> >> >> >> >> >> > >> >> >> >> _______________________________________________ >> >> SciPy-Dev mailing list >> >> SciPy-Dev at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> >> > >> > >> > >> > -- >> > Ariel Rokem >> > Helen Wills Neuroscience Institute >> > University of California, Berkeley >> > http://argentum.ucbso.berkeley.edu/ariel >> > >> > _______________________________________________ >> > SciPy-Dev mailing list >> > SciPy-Dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >> > >> > >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > -- > Ariel Rokem > Helen Wills Neuroscience Institute > University of California, Berkeley > http://argentum.ucbso.berkeley.edu/ariel > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From arokem at berkeley.edu Fri Jan 15 16:20:31 2010 From: arokem at berkeley.edu (Ariel Rokem) Date: Fri, 15 Jan 2010 13:20:31 -0800 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001151148p617c838dp8fda69f3f4c9a009@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> <43958ee61001151134v43e4e6a4sf08fa63bf97ec08a@mail.gmail.com> <1cd32cbb1001151148p617c838dp8fda69f3f4c9a009@mail.gmail.com> Message-ID: <43958ee61001151320l1f0e985xbbb330e90536ccf8@mail.gmail.com> Hi - I've never done this before, so it would be great if I could 'look over your shoulder' (in the sense that I know how this ticket came about :D), as you submit a ticket on this. Thanks -- Ariel On Fri, Jan 15, 2010 at 11:48 AM, wrote: > On Fri, Jan 15, 2010 at 2:34 PM, Ariel Rokem wrote: > > Hi - > > > > attached is a file with a couple of tests. I am not sure this tests the > > issues we were dealing with previously (the axis issues, etc.), but it > has > > some sensible test-cases, which compare to what Matlab would give you > (not > > quite 10by3 or 10by6, but as you can see, they make sense). Also - all > the > > assertions are assert_almost_equal. Do you think that's OK? I think there > > are float-precision issues here, which would make assert_equal fail, but > I > > am not sure - I would be happy to get any general comments on these > tests, > > in case I am doing this all wrong. > > nice test cases, I like theoretical tests even better than verified > numbers from other packages. > > Besides some cosmetic changes to get them into a test function, the > only part to add is the precision of the tests. > The default precision of assert_almost_equal is only 6 decimals. > > For these kind of cases, I usually go to 12 to 15 depending on the > numerical precision of the algorithm. Usually, I go by trial and error > until the test breaks, or calculate max abs of the error. > > I can add some tests for the axis argument. > > Can you open a ticket for the record or shall I ? > > Josef > > > > > > Cheers, > > > > Ariel > > > > On Thu, Jan 14, 2010 at 11:10 PM, wrote: > >> > >> On Fri, Jan 15, 2010 at 1:44 AM, Ariel Rokem > wrote: > >> > Yes - looks good. Except I would prefer to eventually set the axis to > >> > default to -1, to be consistent with signal.fft (and also np.fft.fft) > >> > which > >> > has axis=-1. > >> > >> I'm indifferent to the default axis, from a quick look and my > >> experience there are not many functions with axis arguments in signal. > >> So I'm fine with switching to axis=-1. We should do it with this > >> bugfix, since until now the function wasn't correct anyway for 2d. > >> > >> > > >> > As for whether it's doing what it's supposed to do, for what it's > worth > >> > - it > >> > seems to do similar things to what Matlab's 'hilbert' function does on > a > >> > few > >> > simple examples I tried out. > >> > >> I was reading briefly on wikipedia, and checked with fftpack.hilbert, > >> which returns the same array as signal.hilbert(a).imag, but I didn't > >> manage to figure out why fftpack.hilbert only allows 1d (i got lost > >> starting at convolve.pyf) > >> > >> Could you write a simple test case compared to matlab, e.g. 10by3 as > >> in my example, for both axis, or 10by6 if 10by3 doesn't make sense? > >> > >> If nobody objects, I can commit the change with axis=-1. > >> > >> Josef > >> > >> > > >> > Cheers, > >> > > >> > Ariel > >> > > >> > > >> > > >> > On Thu, Jan 14, 2010 at 8:53 PM, wrote: > >> >> > >> >> On Thu, Jan 14, 2010 at 11:27 PM, wrote: > >> >> > On Thu, Jan 14, 2010 at 11:02 PM, wrote: > >> >> >> On Thu, Jan 14, 2010 at 10:54 PM, wrote: > >> >> >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem < > arokem at berkeley.edu> > >> >> >>> wrote: > >> >> >>>> Hi everyone, > >> >> >>>> > >> >> >>>> I have been trying to use scipy.signal.hilbert and I got the > >> >> >>>> following > >> >> >>>> puzzling result: > >> >> >>>> > >> >> >>>> In [22]: import scipy > >> >> >>>> > >> >> >>>> In [23]: scipy.__version__ #I have r6182 > >> >> >>>> Out[23]: '0.8.0.dev' > >> >> >>>> > >> >> >>>> In [24]: import scipy.signal as signal > >> >> >>>> > >> >> >>>> In [25]: a = np.random.rand(100,100) > >> >> >>>> > >> >> >>>> In [26]: np.abs(signal.hilbert(a[-1])) > >> >> >>>> Out[26]: > >> >> >>>> array([ 0.57567681, 0.25918624, 0.50207097, 0.51834052, > >> >> >>>> 0.24293389, > >> >> >>>> 0.5779464 , 0.6515758 , 0.89973173, 1.00275444, > >> >> >>>> 0.37352935, > >> >> >>>> 0.62332717, 0.93599749, 0.40651376, 0.65088756, > >> >> >>>> 0.8332281 > >> >> >>>> , > >> >> >>>> 0.5770101 , 0.9288512 , 0.46671906, 0.41536055, > >> >> >>>> 0.71418068, > >> >> >>>> 0.81250913, 0.07652627, 0.72939072, 0.26755626, > >> >> >>>> 0.36396146, > >> >> >>>> 0.59725999, 1.02264694, 0.41227986, 0.98122853, > >> >> >>>> 0.71906675, > >> >> >>>> 0.58582611, 0.77288117, 0.3217015 , 0.65261394, > >> >> >>>> 0.11947618, > >> >> >>>> 0.75632703, 0.43432935, 0.52182485, 1.0277177 , > >> >> >>>> 1.01104986, > >> >> >>>> 0.3023265 , 0.6024772 , 0.69257548, 0.55418735, > >> >> >>>> 0.46259052, > >> >> >>>> 0.25832231, 0.38278355, 0.45508532, 0.26215872, > >> >> >>>> 0.34207947, > >> >> >>>> 0.80704729, 0.80755477, 0.95317178, 0.97458885, > >> >> >>>> 0.58762294, > >> >> >>>> 0.82540618, 0.62005585, 0.82494646, 1.04221293, > >> >> >>>> 0.14983027, > >> >> >>>> 1.01571579, 0.99381328, 0.24158714, 0.84256569, > >> >> >>>> 0.53418924, > >> >> >>>> 0.24067628, 0.90489883, 1.02217747, 0.34988034, > >> >> >>>> 0.5310065 > >> >> >>>> , > >> >> >>>> 0.48135002, 1.03020269, 0.6013679 , 0.46062485, > >> >> >>>> 0.3918485 > >> >> >>>> , > >> >> >>>> 0.21554545, 0.31704519, 0.04868385, 0.1787766 , > >> >> >>>> 0.37361852, > >> >> >>>> 0.21977912, 0.7649772 , 0.77867281, 0.37684278, > >> >> >>>> 0.64432638, > >> >> >>>> 0.77494951, 0.87106309, 0.77611484, 0.52666801, > >> >> >>>> 0.88683667, > >> >> >>>> 0.69164967, 0.98618191, 0.84811375, 0.35934198, > >> >> >>>> 0.32650478, > >> >> >>>> 0.1752677 , 0.60574454, 0.5109132 , 0.52332287, > >> >> >>>> 0.99777805]) > >> >> >>>> > >> >> >>>> In [27]: np.abs(signal.hilbert(a))[-1] > >> >> >>>> Out[27]: > >> >> >>>> array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >> >> >>>> 0., > >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >> >> >>>> 0., > >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >> >> >>>> 0., > >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >> >> >>>> 0., > >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >> >> >>>> 0., > >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >> >> >>>> 0., > >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >> >> >>>> 0., > >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > ---------------------------------------------------------------------- > >> >> >>>> > >> >> >>>> I was expecting both of these to have the same values - am I > >> >> >>>> missing > >> >> >>>> something? > >> >> >>>> > >> >> >>>> I think that the following solves this issue, but now I am not > >> >> >>>> that > >> >> >>>> sure > >> >> >>>> whether it does what it is supposed to do and I couldn't find a > >> >> >>>> test > >> >> >>>> for > >> >> >>>> this in test_signaltools.py. Does anyone know of a good > test-case > >> >> >>>> for > >> >> >>>> the > >> >> >>>> analytic signal, that I could create for this? > >> >> >>>> > >> >> >>>> Index: scipy/signal/signaltools.py > >> >> >>>> > >> >> >>>> > =================================================================== > >> >> >>>> --- scipy/signal/signaltools.py (revision 6182) > >> >> >>>> +++ scipy/signal/signaltools.py (working copy) > >> >> >>>> @@ -1062,13 +1062,13 @@ > >> >> >>>> """ > >> >> >>>> x = asarray(x) > >> >> >>>> if N is None: > >> >> >>>> - N = len(x) > >> >> >>>> + N = x.shape[-1] > >> >> >>>> if N <=0: > >> >> >>>> raise ValueError, "N must be positive." > >> >> >>>> if iscomplexobj(x): > >> >> >>>> print "Warning: imaginary part of x ignored." > >> >> >>>> x = real(x) > >> >> >>>> - Xf = fft(x,N,axis=0) > >> >> >>>> + Xf = fft(x,N,axis=-1) > >> >> >>>> h = zeros(N) > >> >> >>>> if N % 2 == 0: > >> >> >>>> h[0] = h[N/2] = 1 > >> >> >>>> @@ -1078,7 +1078,7 @@ > >> >> >>>> h[1:(N+1)/2] = 2 > >> >> >>>> > >> >> >>>> if len(x.shape) > 1: > >> >> >>>> - h = h[:, newaxis] > >> >> >>>> + h = h[newaxis,:] > >> >> >>>> x = ifft(Xf*h) > >> >> >>>> return x > >> >> >>> > >> >> >>> I think your change would break the currently advertised > behavior, > >> >> >>> axis=0 (The transformation is done along the first axis) > >> >> >>> > >> >> >>> but fft and ifft have default axis=-1 > >> >> >>> > >> >> >>> fft in hilbert uses axis=0 as in docstring > >> >> >>> but ifft uses default axis=-1 > >> >> >>> > >> >> >>> so, I would think the fix should be x = ifft(Xf*h, axis=0) > >> >> >>> > >> >> >>> But as it currently looks like the axis argument doesn't work > >> >> >>> anyway, > >> >> >>> there wouldn't be much breakage if the axis would be included as > an > >> >> >>> argument and default to -1. > >> >> >>> However, I don't know what the "standard" for scipy.signal is for > >> >> >>> default axis. > >> >> >>> > >> >> >>> Josef > >> >> >> > >> >> >> after adding axis to ifft: > >> >> >>>>> print hilbert(aa).real > >> >> >> [[ 0.82584851 0.15215031 0.14767381] > >> >> >> [ 0.95021675 0.16803995 0.43562964] > >> >> >> [ 0.13033881 0.06198952 0.70729614] > >> >> >> [ 0.69409563 0.06962778 0.72552601] > >> >> >> [ 0.34297612 0.50579001 0.86463304] > >> >> >> [ 0.28355261 0.21626889 0.85165102] > >> >> >> [ 0.49481491 0.21290645 0.71416814] > >> >> >> [ 0.2645843 0.95783096 0.77514016] > >> >> >> [ 0.38735994 0.14274852 0.56344808] > >> >> >> [ 0.88084015 0.39879649 0.64949951]] > >> >> >>>>> print hilbert(aa[:,:1]).real > >> >> >> [[ 0.82584851] > >> >> >> [ 0.95021675] > >> >> >> [ 0.13033881] > >> >> >> [ 0.69409563] > >> >> >> [ 0.34297612] > >> >> >> [ 0.28355261] > >> >> >> [ 0.49481491] > >> >> >> [ 0.2645843 ] > >> >> >> [ 0.38735994] > >> >> >> [ 0.88084015]] > >> >> >> > >> >> >> but it treats a 1d array as row vector and transforms along zero > >> >> >> axis > >> >> >> of length 1, and not along the length of the array. > >> >> >> so another fix to handle 1d arrays correctly should be done > >> >> >> > >> >> >>>>> print hilbert(aa[:,1]).real > >> >> >> [ 0.15215031 0.16803995 0.06198952 0.06962778 0.50579001 > >> >> >> 0.21626889 > >> >> >> 0.21290645 0.95783096 0.14274852 0.39879649] > >> >> >>>>> aa[:,1] > >> >> >> array([ 0.15215031, 0.16803995, 0.06198952, 0.06962778, > >> >> >> 0.50579001, > >> >> >> 0.21626889, 0.21290645, 0.95783096, 0.14274852, > >> >> >> 0.39879649]) > >> >> >>>>> > >> >> > > >> >> > there's something wrong with my example, the real part is the same > >> >> > which confused me > >> >> > > >> >> > it works correctly with 1d > >> >> > > >> >> >>>> np.abs(hilbert(aa[:,0])) > >> >> > array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > >> >> > 0.49170197, > >> >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > >> >> > 0.90716272]) > >> >> > > >> >> >>>> np.abs(hilbert(aa[:,:1])).T > >> >> > array([[ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > >> >> > 0.49170197, > >> >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > >> >> > 0.90716272]]) > >> >> > > >> >> >>>> np.abs(hilbert(aa))[:,0] > >> >> > array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > >> >> > 0.49170197, > >> >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > >> >> > 0.90716272]) > >> >> > > >> >> > besides reading the docstring, I don't know what hilbert is > supposed > >> >> > to be good for. > >> >> > >> >> Would something like the function in the attachment do ? > >> >> > >> >> > >> >> > >> >> > Josef > >> >> > > >> >> > > >> >> >> Josef > >> >> >> > >> >> >> > >> >> >>> > >> >> >>>> > >> >> >>>> > >> >> >>>> Cheers, > >> >> >>>> > >> >> >>>> Ariel > >> >> >>>> -- > >> >> >>>> Ariel Rokem > >> >> >>>> Helen Wills Neuroscience Institute > >> >> >>>> University of California, Berkeley > >> >> >>>> http://argentum.ucbso.berkeley.edu/ariel > >> >> >>>> > >> >> >>>> _______________________________________________ > >> >> >>>> SciPy-Dev mailing list > >> >> >>>> SciPy-Dev at scipy.org > >> >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >> >> >>>> > >> >> >>>> > >> >> >>> > >> >> >> > >> >> > > >> >> > >> >> _______________________________________________ > >> >> SciPy-Dev mailing list > >> >> SciPy-Dev at scipy.org > >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev > >> >> > >> > > >> > > >> > > >> > -- > >> > Ariel Rokem > >> > Helen Wills Neuroscience Institute > >> > University of California, Berkeley > >> > http://argentum.ucbso.berkeley.edu/ariel > >> > > >> > _______________________________________________ > >> > SciPy-Dev mailing list > >> > SciPy-Dev at scipy.org > >> > http://mail.scipy.org/mailman/listinfo/scipy-dev > >> > > >> > > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > -- > > Ariel Rokem > > Helen Wills Neuroscience Institute > > University of California, Berkeley > > http://argentum.ucbso.berkeley.edu/ariel > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -- Ariel Rokem Helen Wills Neuroscience Institute University of California, Berkeley http://argentum.ucbso.berkeley.edu/ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From resurgo at gmail.com Sat Jan 16 09:37:13 2010 From: resurgo at gmail.com (Peter Clarke) Date: Sat, 16 Jan 2010 14:37:13 +0000 Subject: [SciPy-dev] Python coders for Haiti disaster relief Message-ID: Apologies for off topic posting but I think this in an important project. Python programmers are required immediately for assistance in coding a disaster management framework for the Earthquake in Haiti. >From http://wiki.python.org/moin/VolunteerOpportunities: ----------------- URGENT REQUEST, Sahana Disaster Management System, Haiti Earthquake *Job Description*:This is an urgent call for experienced Python programmers to help in the Sahana Disaster Management System immediately - knowledge of Web2Py platform would be best. The Sahana Disaster Management System is used to coordinate relief efforts. Please recruit any available programmers for the Haiti effort as quickly as possible and have them contact me immediately so that I can put them in touch with the correct people. Thank you kindly and I do hope that we can quickly identify some contributors for this monumental effort - they are needed ASAP. http://sahanapy.org/ is the developer site and the demo is http://demo.sahanapy.org/ - *Contact*: Connie White, PhD, Institute for Emergency Preparedness, Jacksonville State University - *E-mail contact*: connie.m.white at gmail.com - *Web*: http://sahanapy.org/ ----------------------------- Please help if you can. -Peter Clarke -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Sat Jan 16 14:09:48 2010 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 16 Jan 2010 20:09:48 +0100 Subject: [SciPy-dev] Ticket 1070 Message-ID: Hi all, Ticket http://projects.scipy.org/scipy/ticket/1070 can be closed. Nils From josef.pktd at gmail.com Sun Jan 17 00:31:16 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 17 Jan 2010 00:31:16 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <43958ee61001151320l1f0e985xbbb330e90536ccf8@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001141954u72d7199cm936be6f3d95866c4@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> <43958ee61001151134v43e4e6a4sf08fa63bf97ec08a@mail.gmail.com> <1cd32cbb1001151148p617c838dp8fda69f3f4c9a009@mail.gmail.com> <43958ee61001151320l1f0e985xbbb330e90536ccf8@mail.gmail.com> Message-ID: <1cd32cbb1001162131t7e04c92ekd4507c8ddab9c127@mail.gmail.com> On Fri, Jan 15, 2010 at 4:20 PM, Ariel Rokem wrote: > Hi - I've never done this before, so it would be great if I could 'look over > your shoulder' (in the sense that I know how this ticket came about :D), as > you submit a ticket on this. Done in http://projects.scipy.org/scipy/ticket/1093 tests pass at 14 decimals Josef > > Thanks -- > > Ariel > > On Fri, Jan 15, 2010 at 11:48 AM, wrote: >> >> On Fri, Jan 15, 2010 at 2:34 PM, Ariel Rokem wrote: >> > Hi - >> > >> > attached is a file with a couple of tests. I am not sure this tests the >> > issues we were dealing with previously (the axis issues, etc.), but it >> > has >> > some sensible test-cases, which compare to what Matlab would give you >> > (not >> > quite 10by3 or 10by6, but as you can see, they make sense). Also - all >> > the >> > assertions are assert_almost_equal. Do you think that's OK? I think >> > there >> > are float-precision issues here, which would make assert_equal fail, but >> > I >> > am not sure - I would be happy to get any general comments on these >> > tests, >> > in case I am doing this all wrong. >> >> nice test cases, I like theoretical tests even better than verified >> numbers from other packages. >> >> Besides some cosmetic changes to get them into a test function, the >> only part to add is the precision of the tests. >> The default precision of assert_almost_equal is only 6 decimals. >> >> For these kind of cases, I usually go to 12 to 15 depending on the >> numerical precision of the algorithm. Usually, I go by trial and error >> until the test breaks, or calculate max abs of the error. >> >> I can add some tests for the axis argument. >> >> Can you open a ticket for the record or shall I ? >> >> Josef >> >> >> > >> > Cheers, >> > >> > Ariel >> > >> > On Thu, Jan 14, 2010 at 11:10 PM, wrote: >> >> >> >> On Fri, Jan 15, 2010 at 1:44 AM, Ariel Rokem >> >> wrote: >> >> > Yes - looks good. Except I would prefer to eventually set the axis to >> >> > default to -1, to be consistent with signal.fft (and also np.fft.fft) >> >> > which >> >> > has axis=-1. >> >> >> >> I'm indifferent to the default axis, from a quick look and my >> >> experience there are not many functions with axis arguments in signal. >> >> So I'm fine with switching to axis=-1. We should do it with this >> >> bugfix, since until now the function wasn't correct anyway for 2d. >> >> >> >> > >> >> > As for whether it's doing what it's supposed to do, for what it's >> >> > worth >> >> > - it >> >> > seems to do similar things to what Matlab's 'hilbert' function does >> >> > on a >> >> > few >> >> > simple examples I tried out. >> >> >> >> I was reading briefly on wikipedia, and checked with fftpack.hilbert, >> >> which returns the same array as signal.hilbert(a).imag, but I didn't >> >> manage to figure out why fftpack.hilbert only allows 1d (i got lost >> >> starting at convolve.pyf) >> >> >> >> Could you write a simple test case compared to matlab, e.g. 10by3 as >> >> in my example, for both axis, or 10by6 if 10by3 doesn't make sense? >> >> >> >> If nobody objects, I can commit the change with axis=-1. >> >> >> >> Josef >> >> >> >> > >> >> > Cheers, >> >> > >> >> > Ariel >> >> > >> >> > >> >> > >> >> > On Thu, Jan 14, 2010 at 8:53 PM, wrote: >> >> >> >> >> >> On Thu, Jan 14, 2010 at 11:27 PM, ? wrote: >> >> >> > On Thu, Jan 14, 2010 at 11:02 PM, ? wrote: >> >> >> >> On Thu, Jan 14, 2010 at 10:54 PM, ? wrote: >> >> >> >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem >> >> >> >>> >> >> >> >>> wrote: >> >> >> >>>> Hi everyone, >> >> >> >>>> >> >> >> >>>> I have been trying to use scipy.signal.hilbert and I got the >> >> >> >>>> following >> >> >> >>>> puzzling result: >> >> >> >>>> >> >> >> >>>> In [22]: import scipy >> >> >> >>>> >> >> >> >>>> In [23]: scipy.__version__ #I have r6182 >> >> >> >>>> Out[23]: '0.8.0.dev' >> >> >> >>>> >> >> >> >>>> In [24]: import scipy.signal as signal >> >> >> >>>> >> >> >> >>>> In [25]: a = np.random.rand(100,100) >> >> >> >>>> >> >> >> >>>> In [26]: np.abs(signal.hilbert(a[-1])) >> >> >> >>>> Out[26]: >> >> >> >>>> array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052, >> >> >> >>>> 0.24293389, >> >> >> >>>> ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444, >> >> >> >>>> 0.37352935, >> >> >> >>>> ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756, >> >> >> >>>> 0.8332281 >> >> >> >>>> , >> >> >> >>>> ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055, >> >> >> >>>> 0.71418068, >> >> >> >>>> ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626, >> >> >> >>>> 0.36396146, >> >> >> >>>> ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853, >> >> >> >>>> 0.71906675, >> >> >> >>>> ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394, >> >> >> >>>> 0.11947618, >> >> >> >>>> ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 , >> >> >> >>>> 1.01104986, >> >> >> >>>> ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735, >> >> >> >>>> 0.46259052, >> >> >> >>>> ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872, >> >> >> >>>> 0.34207947, >> >> >> >>>> ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885, >> >> >> >>>> 0.58762294, >> >> >> >>>> ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293, >> >> >> >>>> 0.14983027, >> >> >> >>>> ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569, >> >> >> >>>> 0.53418924, >> >> >> >>>> ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034, >> >> >> >>>> 0.5310065 >> >> >> >>>> , >> >> >> >>>> ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485, >> >> >> >>>> 0.3918485 >> >> >> >>>> , >> >> >> >>>> ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 , >> >> >> >>>> 0.37361852, >> >> >> >>>> ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278, >> >> >> >>>> 0.64432638, >> >> >> >>>> ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801, >> >> >> >>>> 0.88683667, >> >> >> >>>> ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198, >> >> >> >>>> 0.32650478, >> >> >> >>>> ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287, >> >> >> >>>> 0.99777805]) >> >> >> >>>> >> >> >> >>>> In [27]: np.abs(signal.hilbert(a))[-1] >> >> >> >>>> Out[27]: >> >> >> >>>> array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >> >>>> 0., >> >> >> >>>> 0., >> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >> >>>> 0., >> >> >> >>>> 0., >> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >> >>>> 0., >> >> >> >>>> 0., >> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >> >>>> 0., >> >> >> >>>> 0., >> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >> >>>> 0., >> >> >> >>>> 0., >> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >> >>>> 0., >> >> >> >>>> 0., >> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >> >> >> >>>> 0., >> >> >> >>>> 0., >> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) >> >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> >> >> >>>> ---------------------------------------------------------------------- >> >> >> >>>> >> >> >> >>>> I was expecting both of these to have the same values - am I >> >> >> >>>> missing >> >> >> >>>> something? >> >> >> >>>> >> >> >> >>>> I think that the following solves this issue, but now I am not >> >> >> >>>> that >> >> >> >>>> sure >> >> >> >>>> whether it does what it is supposed to do and I couldn't find a >> >> >> >>>> test >> >> >> >>>> for >> >> >> >>>> this in test_signaltools.py. Does anyone know of a good >> >> >> >>>> test-case >> >> >> >>>> for >> >> >> >>>> the >> >> >> >>>> analytic signal, that I could create for this? >> >> >> >>>> >> >> >> >>>> Index: scipy/signal/signaltools.py >> >> >> >>>> >> >> >> >>>> >> >> >> >>>> =================================================================== >> >> >> >>>> --- scipy/signal/signaltools.py??? (revision 6182) >> >> >> >>>> +++ scipy/signal/signaltools.py??? (working copy) >> >> >> >>>> @@ -1062,13 +1062,13 @@ >> >> >> >>>> ???? """ >> >> >> >>>> ???? x = asarray(x) >> >> >> >>>> ???? if N is None: >> >> >> >>>> -??????? N = len(x) >> >> >> >>>> +??????? N = x.shape[-1] >> >> >> >>>> ???? if N <=0: >> >> >> >>>> ???????? raise ValueError, "N must be positive." >> >> >> >>>> ???? if iscomplexobj(x): >> >> >> >>>> ???????? print "Warning: imaginary part of x ignored." >> >> >> >>>> ???????? x = real(x) >> >> >> >>>> -??? Xf = fft(x,N,axis=0) >> >> >> >>>> +??? Xf = fft(x,N,axis=-1) >> >> >> >>>> ???? h = zeros(N) >> >> >> >>>> ???? if N % 2 == 0: >> >> >> >>>> ???????? h[0] = h[N/2] = 1 >> >> >> >>>> @@ -1078,7 +1078,7 @@ >> >> >> >>>> ???????? h[1:(N+1)/2] = 2 >> >> >> >>>> >> >> >> >>>> ???? if len(x.shape) > 1: >> >> >> >>>> -??????? h = h[:, newaxis] >> >> >> >>>> +??????? h = h[newaxis,:] >> >> >> >>>> ???? x = ifft(Xf*h) >> >> >> >>>> ???? return x >> >> >> >>> >> >> >> >>> I think your change would break the currently advertised >> >> >> >>> behavior, >> >> >> >>> axis=0 (The transformation is done along the first axis) >> >> >> >>> >> >> >> >>> but fft and ifft have default axis=-1 >> >> >> >>> >> >> >> >>> fft in hilbert uses axis=0 as in docstring >> >> >> >>> but ifft uses default axis=-1 >> >> >> >>> >> >> >> >>> so, I would think the fix should be ?x = ifft(Xf*h, axis=0) >> >> >> >>> >> >> >> >>> But as it currently looks like the axis argument doesn't work >> >> >> >>> anyway, >> >> >> >>> there wouldn't be much breakage if the axis would be included as >> >> >> >>> an >> >> >> >>> argument and default to -1. >> >> >> >>> However, I don't know what the "standard" for scipy.signal is >> >> >> >>> for >> >> >> >>> default axis. >> >> >> >>> >> >> >> >>> Josef >> >> >> >> >> >> >> >> after adding axis to ifft: >> >> >> >>>>> print hilbert(aa).real >> >> >> >> [[ 0.82584851 ?0.15215031 ?0.14767381] >> >> >> >> ?[ 0.95021675 ?0.16803995 ?0.43562964] >> >> >> >> ?[ 0.13033881 ?0.06198952 ?0.70729614] >> >> >> >> ?[ 0.69409563 ?0.06962778 ?0.72552601] >> >> >> >> ?[ 0.34297612 ?0.50579001 ?0.86463304] >> >> >> >> ?[ 0.28355261 ?0.21626889 ?0.85165102] >> >> >> >> ?[ 0.49481491 ?0.21290645 ?0.71416814] >> >> >> >> ?[ 0.2645843 ? 0.95783096 ?0.77514016] >> >> >> >> ?[ 0.38735994 ?0.14274852 ?0.56344808] >> >> >> >> ?[ 0.88084015 ?0.39879649 ?0.64949951]] >> >> >> >>>>> print hilbert(aa[:,:1]).real >> >> >> >> [[ 0.82584851] >> >> >> >> ?[ 0.95021675] >> >> >> >> ?[ 0.13033881] >> >> >> >> ?[ 0.69409563] >> >> >> >> ?[ 0.34297612] >> >> >> >> ?[ 0.28355261] >> >> >> >> ?[ 0.49481491] >> >> >> >> ?[ 0.2645843 ] >> >> >> >> ?[ 0.38735994] >> >> >> >> ?[ 0.88084015]] >> >> >> >> >> >> >> >> but it treats a 1d array as row vector and transforms along zero >> >> >> >> axis >> >> >> >> of length 1, and not along the length of the array. >> >> >> >> so another fix to handle 1d arrays correctly should be done >> >> >> >> >> >> >> >>>>> print hilbert(aa[:,1]).real >> >> >> >> [ 0.15215031 ?0.16803995 ?0.06198952 ?0.06962778 ?0.50579001 >> >> >> >> ?0.21626889 >> >> >> >> ?0.21290645 ?0.95783096 ?0.14274852 ?0.39879649] >> >> >> >>>>> aa[:,1] >> >> >> >> array([ 0.15215031, ?0.16803995, ?0.06198952, ?0.06962778, >> >> >> >> ?0.50579001, >> >> >> >> ? ? ? ?0.21626889, ?0.21290645, ?0.95783096, ?0.14274852, >> >> >> >> ?0.39879649]) >> >> >> >>>>> >> >> >> > >> >> >> > there's something wrong with my example, the real part is the same >> >> >> > which confused me >> >> >> > >> >> >> > it works correctly with 1d >> >> >> > >> >> >> >>>> np.abs(hilbert(aa[:,0])) >> >> >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >> >> >> > ?0.49170197, >> >> >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >> >> >> > ?0.90716272]) >> >> >> > >> >> >> >>>> np.abs(hilbert(aa[:,:1])).T >> >> >> > array([[ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >> >> >> > ?0.49170197, >> >> >> > ? ? ? ? 0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >> >> >> > ?0.90716272]]) >> >> >> > >> >> >> >>>> np.abs(hilbert(aa))[:,0] >> >> >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >> >> >> > ?0.49170197, >> >> >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >> >> >> > ?0.90716272]) >> >> >> > >> >> >> > besides reading the docstring, I don't know what hilbert is >> >> >> > supposed >> >> >> > to be good for. >> >> >> >> >> >> Would something like the function in the attachment do ? >> >> >> >> >> >> >> >> >> >> >> >> > Josef >> >> >> > >> >> >> > >> >> >> >> Josef >> >> >> >> >> >> >> >> >> >> >> >>> >> >> >> >>>> >> >> >> >>>> >> >> >> >>>> Cheers, >> >> >> >>>> >> >> >> >>>> Ariel >> >> >> >>>> -- >> >> >> >>>> Ariel Rokem >> >> >> >>>> Helen Wills Neuroscience Institute >> >> >> >>>> University of California, Berkeley >> >> >> >>>> http://argentum.ucbso.berkeley.edu/ariel >> >> >> >>>> >> >> >> >>>> _______________________________________________ >> >> >> >>>> SciPy-Dev mailing list >> >> >> >>>> SciPy-Dev at scipy.org >> >> >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> >> >>>> >> >> >> >>>> >> >> >> >>> >> >> >> >> >> >> >> > >> >> >> >> >> >> _______________________________________________ >> >> >> SciPy-Dev mailing list >> >> >> SciPy-Dev at scipy.org >> >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> >> >> >> > >> >> > >> >> > >> >> > -- >> >> > Ariel Rokem >> >> > Helen Wills Neuroscience Institute >> >> > University of California, Berkeley >> >> > http://argentum.ucbso.berkeley.edu/ariel >> >> > >> >> > _______________________________________________ >> >> > SciPy-Dev mailing list >> >> > SciPy-Dev at scipy.org >> >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > >> >> > >> >> _______________________________________________ >> >> SciPy-Dev mailing list >> >> SciPy-Dev at scipy.org >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > >> > >> > >> > -- >> > Ariel Rokem >> > Helen Wills Neuroscience Institute >> > University of California, Berkeley >> > http://argentum.ucbso.berkeley.edu/ariel >> > >> > _______________________________________________ >> > SciPy-Dev mailing list >> > SciPy-Dev at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >> > >> > >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > -- > Ariel Rokem > Helen Wills Neuroscience Institute > University of California, Berkeley > http://argentum.ucbso.berkeley.edu/ariel > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From josef.pktd at gmail.com Sun Jan 17 22:49:18 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 17 Jan 2010 22:49:18 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001162131t7e04c92ekd4507c8ddab9c127@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001142002xc3f62b7n8ca45eedf94da884@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> <43958ee61001151134v43e4e6a4sf08fa63bf97ec08a@mail.gmail.com> <1cd32cbb1001151148p617c838dp8fda69f3f4c9a009@mail.gmail.com> <43958ee61001151320l1f0e985xbbb330e90536ccf8@mail.gmail.com> <1cd32cbb1001162131t7e04c92ekd4507c8ddab9c127@mail.gmail.com> Message-ID: <1cd32cbb1001171949p2516c463s276a247d64188483@mail.gmail.com> On Sun, Jan 17, 2010 at 12:31 AM, wrote: > On Fri, Jan 15, 2010 at 4:20 PM, Ariel Rokem wrote: >> Hi - I've never done this before, so it would be great if I could 'look over >> your shoulder' (in the sense that I know how this ticket came about :D), as >> you submit a ticket on this. > > Done in http://projects.scipy.org/scipy/ticket/1093 > tests pass at 14 decimals While adding some tests, I got one more question According to Wikipedia http://en.wikipedia.org/wiki/Analytic_signal#Definition the imaginary part of the analytical signal is equal to the Hilbert transform however, fftpack.hilbert has the opposite sign. >From the examples signal.hilbert looks correct, so does fftpack.hilbert have a sign mistake or is it based on a different definition? >>> r = np.random.randn(20) >>> fftpack.hilbert(r) array([-0.27285468, -1.39747965, 1.7991044 , -0.16609304, -1.84459577, 0.48696479, -0.33190553, 0.59383033, 2.15361055, -0.89341275, -0.13730369, 0.84046658, 1.38110384, -1.7595949 , -0.04869402, 0.59871558, -1.09627219, 0.59375139, -1.6021929 , 1.10285168]) >>> hilbert(r).imag array([ 0.27285468, 1.39747965, -1.7991044 , 0.16609304, 1.84459577, -0.48696479, 0.33190553, -0.59383033, -2.15361055, 0.89341275, 0.13730369, -0.84046658, -1.38110384, 1.7595949 , 0.04869402, -0.59871558, 1.09627219, -0.59375139, 1.6021929 , -1.10285168]) I'm just checking definitions for the tests, Josef > > Josef > >> >> Thanks -- >> >> Ariel >> >> On Fri, Jan 15, 2010 at 11:48 AM, wrote: >>> >>> On Fri, Jan 15, 2010 at 2:34 PM, Ariel Rokem wrote: >>> > Hi - >>> > >>> > attached is a file with a couple of tests. I am not sure this tests the >>> > issues we were dealing with previously (the axis issues, etc.), but it >>> > has >>> > some sensible test-cases, which compare to what Matlab would give you >>> > (not >>> > quite 10by3 or 10by6, but as you can see, they make sense). Also - all >>> > the >>> > assertions are assert_almost_equal. Do you think that's OK? I think >>> > there >>> > are float-precision issues here, which would make assert_equal fail, but >>> > I >>> > am not sure - I would be happy to get any general comments on these >>> > tests, >>> > in case I am doing this all wrong. >>> >>> nice test cases, I like theoretical tests even better than verified >>> numbers from other packages. >>> >>> Besides some cosmetic changes to get them into a test function, the >>> only part to add is the precision of the tests. >>> The default precision of assert_almost_equal is only 6 decimals. >>> >>> For these kind of cases, I usually go to 12 to 15 depending on the >>> numerical precision of the algorithm. Usually, I go by trial and error >>> until the test breaks, or calculate max abs of the error. >>> >>> I can add some tests for the axis argument. >>> >>> Can you open a ticket for the record or shall I ? >>> >>> Josef >>> >>> >>> > >>> > Cheers, >>> > >>> > Ariel >>> > >>> > On Thu, Jan 14, 2010 at 11:10 PM, wrote: >>> >> >>> >> On Fri, Jan 15, 2010 at 1:44 AM, Ariel Rokem >>> >> wrote: >>> >> > Yes - looks good. Except I would prefer to eventually set the axis to >>> >> > default to -1, to be consistent with signal.fft (and also np.fft.fft) >>> >> > which >>> >> > has axis=-1. >>> >> >>> >> I'm indifferent to the default axis, from a quick look and my >>> >> experience there are not many functions with axis arguments in signal. >>> >> So I'm fine with switching to axis=-1. We should do it with this >>> >> bugfix, since until now the function wasn't correct anyway for 2d. >>> >> >>> >> > >>> >> > As for whether it's doing what it's supposed to do, for what it's >>> >> > worth >>> >> > - it >>> >> > seems to do similar things to what Matlab's 'hilbert' function does >>> >> > on a >>> >> > few >>> >> > simple examples I tried out. >>> >> >>> >> I was reading briefly on wikipedia, and checked with fftpack.hilbert, >>> >> which returns the same array as signal.hilbert(a).imag, but I didn't >>> >> manage to figure out why fftpack.hilbert only allows 1d (i got lost >>> >> starting at convolve.pyf) >>> >> >>> >> Could you write a simple test case compared to matlab, e.g. 10by3 as >>> >> in my example, for both axis, or 10by6 if 10by3 doesn't make sense? >>> >> >>> >> If nobody objects, I can commit the change with axis=-1. >>> >> >>> >> Josef >>> >> >>> >> > >>> >> > Cheers, >>> >> > >>> >> > Ariel >>> >> > >>> >> > >>> >> > >>> >> > On Thu, Jan 14, 2010 at 8:53 PM, wrote: >>> >> >> >>> >> >> On Thu, Jan 14, 2010 at 11:27 PM, ? wrote: >>> >> >> > On Thu, Jan 14, 2010 at 11:02 PM, ? wrote: >>> >> >> >> On Thu, Jan 14, 2010 at 10:54 PM, ? wrote: >>> >> >> >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem >>> >> >> >>> >>> >> >> >>> wrote: >>> >> >> >>>> Hi everyone, >>> >> >> >>>> >>> >> >> >>>> I have been trying to use scipy.signal.hilbert and I got the >>> >> >> >>>> following >>> >> >> >>>> puzzling result: >>> >> >> >>>> >>> >> >> >>>> In [22]: import scipy >>> >> >> >>>> >>> >> >> >>>> In [23]: scipy.__version__ #I have r6182 >>> >> >> >>>> Out[23]: '0.8.0.dev' >>> >> >> >>>> >>> >> >> >>>> In [24]: import scipy.signal as signal >>> >> >> >>>> >>> >> >> >>>> In [25]: a = np.random.rand(100,100) >>> >> >> >>>> >>> >> >> >>>> In [26]: np.abs(signal.hilbert(a[-1])) >>> >> >> >>>> Out[26]: >>> >> >> >>>> array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052, >>> >> >> >>>> 0.24293389, >>> >> >> >>>> ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444, >>> >> >> >>>> 0.37352935, >>> >> >> >>>> ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756, >>> >> >> >>>> 0.8332281 >>> >> >> >>>> , >>> >> >> >>>> ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055, >>> >> >> >>>> 0.71418068, >>> >> >> >>>> ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626, >>> >> >> >>>> 0.36396146, >>> >> >> >>>> ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853, >>> >> >> >>>> 0.71906675, >>> >> >> >>>> ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394, >>> >> >> >>>> 0.11947618, >>> >> >> >>>> ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 , >>> >> >> >>>> 1.01104986, >>> >> >> >>>> ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735, >>> >> >> >>>> 0.46259052, >>> >> >> >>>> ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872, >>> >> >> >>>> 0.34207947, >>> >> >> >>>> ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885, >>> >> >> >>>> 0.58762294, >>> >> >> >>>> ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293, >>> >> >> >>>> 0.14983027, >>> >> >> >>>> ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569, >>> >> >> >>>> 0.53418924, >>> >> >> >>>> ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034, >>> >> >> >>>> 0.5310065 >>> >> >> >>>> , >>> >> >> >>>> ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485, >>> >> >> >>>> 0.3918485 >>> >> >> >>>> , >>> >> >> >>>> ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 , >>> >> >> >>>> 0.37361852, >>> >> >> >>>> ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278, >>> >> >> >>>> 0.64432638, >>> >> >> >>>> ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801, >>> >> >> >>>> 0.88683667, >>> >> >> >>>> ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198, >>> >> >> >>>> 0.32650478, >>> >> >> >>>> ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287, >>> >> >> >>>> 0.99777805]) >>> >> >> >>>> >>> >> >> >>>> In [27]: np.abs(signal.hilbert(a))[-1] >>> >> >> >>>> Out[27]: >>> >> >> >>>> array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> >> >> >>>> 0., >>> >> >> >>>> 0., >>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> >> >> >>>> 0., >>> >> >> >>>> 0., >>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> >> >> >>>> 0., >>> >> >> >>>> 0., >>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> >> >> >>>> 0., >>> >> >> >>>> 0., >>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> >> >> >>>> 0., >>> >> >> >>>> 0., >>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> >> >> >>>> 0., >>> >> >> >>>> 0., >>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>> >> >> >>>> 0., >>> >> >> >>>> 0., >>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>>> ---------------------------------------------------------------------- >>> >> >> >>>> >>> >> >> >>>> I was expecting both of these to have the same values - am I >>> >> >> >>>> missing >>> >> >> >>>> something? >>> >> >> >>>> >>> >> >> >>>> I think that the following solves this issue, but now I am not >>> >> >> >>>> that >>> >> >> >>>> sure >>> >> >> >>>> whether it does what it is supposed to do and I couldn't find a >>> >> >> >>>> test >>> >> >> >>>> for >>> >> >> >>>> this in test_signaltools.py. Does anyone know of a good >>> >> >> >>>> test-case >>> >> >> >>>> for >>> >> >> >>>> the >>> >> >> >>>> analytic signal, that I could create for this? >>> >> >> >>>> >>> >> >> >>>> Index: scipy/signal/signaltools.py >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>>> =================================================================== >>> >> >> >>>> --- scipy/signal/signaltools.py??? (revision 6182) >>> >> >> >>>> +++ scipy/signal/signaltools.py??? (working copy) >>> >> >> >>>> @@ -1062,13 +1062,13 @@ >>> >> >> >>>> ???? """ >>> >> >> >>>> ???? x = asarray(x) >>> >> >> >>>> ???? if N is None: >>> >> >> >>>> -??????? N = len(x) >>> >> >> >>>> +??????? N = x.shape[-1] >>> >> >> >>>> ???? if N <=0: >>> >> >> >>>> ???????? raise ValueError, "N must be positive." >>> >> >> >>>> ???? if iscomplexobj(x): >>> >> >> >>>> ???????? print "Warning: imaginary part of x ignored." >>> >> >> >>>> ???????? x = real(x) >>> >> >> >>>> -??? Xf = fft(x,N,axis=0) >>> >> >> >>>> +??? Xf = fft(x,N,axis=-1) >>> >> >> >>>> ???? h = zeros(N) >>> >> >> >>>> ???? if N % 2 == 0: >>> >> >> >>>> ???????? h[0] = h[N/2] = 1 >>> >> >> >>>> @@ -1078,7 +1078,7 @@ >>> >> >> >>>> ???????? h[1:(N+1)/2] = 2 >>> >> >> >>>> >>> >> >> >>>> ???? if len(x.shape) > 1: >>> >> >> >>>> -??????? h = h[:, newaxis] >>> >> >> >>>> +??????? h = h[newaxis,:] >>> >> >> >>>> ???? x = ifft(Xf*h) >>> >> >> >>>> ???? return x >>> >> >> >>> >>> >> >> >>> I think your change would break the currently advertised >>> >> >> >>> behavior, >>> >> >> >>> axis=0 (The transformation is done along the first axis) >>> >> >> >>> >>> >> >> >>> but fft and ifft have default axis=-1 >>> >> >> >>> >>> >> >> >>> fft in hilbert uses axis=0 as in docstring >>> >> >> >>> but ifft uses default axis=-1 >>> >> >> >>> >>> >> >> >>> so, I would think the fix should be ?x = ifft(Xf*h, axis=0) >>> >> >> >>> >>> >> >> >>> But as it currently looks like the axis argument doesn't work >>> >> >> >>> anyway, >>> >> >> >>> there wouldn't be much breakage if the axis would be included as >>> >> >> >>> an >>> >> >> >>> argument and default to -1. >>> >> >> >>> However, I don't know what the "standard" for scipy.signal is >>> >> >> >>> for >>> >> >> >>> default axis. >>> >> >> >>> >>> >> >> >>> Josef >>> >> >> >> >>> >> >> >> after adding axis to ifft: >>> >> >> >>>>> print hilbert(aa).real >>> >> >> >> [[ 0.82584851 ?0.15215031 ?0.14767381] >>> >> >> >> ?[ 0.95021675 ?0.16803995 ?0.43562964] >>> >> >> >> ?[ 0.13033881 ?0.06198952 ?0.70729614] >>> >> >> >> ?[ 0.69409563 ?0.06962778 ?0.72552601] >>> >> >> >> ?[ 0.34297612 ?0.50579001 ?0.86463304] >>> >> >> >> ?[ 0.28355261 ?0.21626889 ?0.85165102] >>> >> >> >> ?[ 0.49481491 ?0.21290645 ?0.71416814] >>> >> >> >> ?[ 0.2645843 ? 0.95783096 ?0.77514016] >>> >> >> >> ?[ 0.38735994 ?0.14274852 ?0.56344808] >>> >> >> >> ?[ 0.88084015 ?0.39879649 ?0.64949951]] >>> >> >> >>>>> print hilbert(aa[:,:1]).real >>> >> >> >> [[ 0.82584851] >>> >> >> >> ?[ 0.95021675] >>> >> >> >> ?[ 0.13033881] >>> >> >> >> ?[ 0.69409563] >>> >> >> >> ?[ 0.34297612] >>> >> >> >> ?[ 0.28355261] >>> >> >> >> ?[ 0.49481491] >>> >> >> >> ?[ 0.2645843 ] >>> >> >> >> ?[ 0.38735994] >>> >> >> >> ?[ 0.88084015]] >>> >> >> >> >>> >> >> >> but it treats a 1d array as row vector and transforms along zero >>> >> >> >> axis >>> >> >> >> of length 1, and not along the length of the array. >>> >> >> >> so another fix to handle 1d arrays correctly should be done >>> >> >> >> >>> >> >> >>>>> print hilbert(aa[:,1]).real >>> >> >> >> [ 0.15215031 ?0.16803995 ?0.06198952 ?0.06962778 ?0.50579001 >>> >> >> >> ?0.21626889 >>> >> >> >> ?0.21290645 ?0.95783096 ?0.14274852 ?0.39879649] >>> >> >> >>>>> aa[:,1] >>> >> >> >> array([ 0.15215031, ?0.16803995, ?0.06198952, ?0.06962778, >>> >> >> >> ?0.50579001, >>> >> >> >> ? ? ? ?0.21626889, ?0.21290645, ?0.95783096, ?0.14274852, >>> >> >> >> ?0.39879649]) >>> >> >> >>>>> >>> >> >> > >>> >> >> > there's something wrong with my example, the real part is the same >>> >> >> > which confused me >>> >> >> > >>> >> >> > it works correctly with 1d >>> >> >> > >>> >> >> >>>> np.abs(hilbert(aa[:,0])) >>> >> >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >>> >> >> > ?0.49170197, >>> >> >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >>> >> >> > ?0.90716272]) >>> >> >> > >>> >> >> >>>> np.abs(hilbert(aa[:,:1])).T >>> >> >> > array([[ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >>> >> >> > ?0.49170197, >>> >> >> > ? ? ? ? 0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >>> >> >> > ?0.90716272]]) >>> >> >> > >>> >> >> >>>> np.abs(hilbert(aa))[:,0] >>> >> >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >>> >> >> > ?0.49170197, >>> >> >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >>> >> >> > ?0.90716272]) >>> >> >> > >>> >> >> > besides reading the docstring, I don't know what hilbert is >>> >> >> > supposed >>> >> >> > to be good for. >>> >> >> >>> >> >> Would something like the function in the attachment do ? >>> >> >> >>> >> >> >>> >> >> >>> >> >> > Josef >>> >> >> > >>> >> >> > >>> >> >> >> Josef >>> >> >> >> >>> >> >> >> >>> >> >> >>> >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>>> Cheers, >>> >> >> >>>> >>> >> >> >>>> Ariel >>> >> >> >>>> -- >>> >> >> >>>> Ariel Rokem >>> >> >> >>>> Helen Wills Neuroscience Institute >>> >> >> >>>> University of California, Berkeley >>> >> >> >>>> http://argentum.ucbso.berkeley.edu/ariel >>> >> >> >>>> >>> >> >> >>>> _______________________________________________ >>> >> >> >>>> SciPy-Dev mailing list >>> >> >> >>>> SciPy-Dev at scipy.org >>> >> >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> >> >>>> >>> >> >> >>>> >>> >> >> >>> >>> >> >> >> >>> >> >> > >>> >> >> >>> >> >> _______________________________________________ >>> >> >> SciPy-Dev mailing list >>> >> >> SciPy-Dev at scipy.org >>> >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> >> >>> >> > >>> >> > >>> >> > >>> >> > -- >>> >> > Ariel Rokem >>> >> > Helen Wills Neuroscience Institute >>> >> > University of California, Berkeley >>> >> > http://argentum.ucbso.berkeley.edu/ariel >>> >> > >>> >> > _______________________________________________ >>> >> > SciPy-Dev mailing list >>> >> > SciPy-Dev at scipy.org >>> >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >> > >>> >> > >>> >> _______________________________________________ >>> >> SciPy-Dev mailing list >>> >> SciPy-Dev at scipy.org >>> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> > >>> > >>> > >>> > -- >>> > Ariel Rokem >>> > Helen Wills Neuroscience Institute >>> > University of California, Berkeley >>> > http://argentum.ucbso.berkeley.edu/ariel >>> > >>> > _______________________________________________ >>> > SciPy-Dev mailing list >>> > SciPy-Dev at scipy.org >>> > http://mail.scipy.org/mailman/listinfo/scipy-dev >>> > >>> > >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> >> >> -- >> Ariel Rokem >> Helen Wills Neuroscience Institute >> University of California, Berkeley >> http://argentum.ucbso.berkeley.edu/ariel >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> >> > From david at silveregg.co.jp Mon Jan 18 00:28:11 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Mon, 18 Jan 2010 14:28:11 +0900 Subject: [SciPy-dev] RFC: initial support for Harwell-Boeing files Message-ID: <4B53F16B.4070009@silveregg.co.jp> Hi, I have added basic support for Harwell-Boeing file format support, for sparse matrices. I would be happy to receive to get some reviews on the API before pushing this into the trunk, as well as bug reports from people who have matrices in such a format: http://github.com/cournape/scipy3/tree/hb_io The code is in scipy.sparse.io. Two API are supported: - a simple, function-based one: read_hb(filename) and write_hb(filename, m) - a more complete API based on HBFile and HBInfo, to control the exact format, get the metadata when reading. The simple API is just sugar on top on this one For now, only read/write is supported for Real, Unsymmetric, Assembled matrices, as that's what I need. There is also some code to deal with fortran-formatted field: maybe this can be useful outside HB IO, although I only parse a limited subset of fortran format for now (exponential format and integer format). cheers, David From josef.pktd at gmail.com Mon Jan 18 00:54:05 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 18 Jan 2010 00:54:05 -0500 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001171949p2516c463s276a247d64188483@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001142027j3be923qedadae0149f49f0a@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> <43958ee61001151134v43e4e6a4sf08fa63bf97ec08a@mail.gmail.com> <1cd32cbb1001151148p617c838dp8fda69f3f4c9a009@mail.gmail.com> <43958ee61001151320l1f0e985xbbb330e90536ccf8@mail.gmail.com> <1cd32cbb1001162131t7e04c92ekd4507c8ddab9c127@mail.gmail.com> <1cd32cbb1001171949p2516c463s276a247d64188483@mail.gmail.com> Message-ID: <1cd32cbb1001172154j55843e9ckd0d5c6cdc9d5dacc@mail.gmail.com> On Sun, Jan 17, 2010 at 10:49 PM, wrote: > On Sun, Jan 17, 2010 at 12:31 AM, ? wrote: >> On Fri, Jan 15, 2010 at 4:20 PM, Ariel Rokem wrote: >>> Hi - I've never done this before, so it would be great if I could 'look over >>> your shoulder' (in the sense that I know how this ticket came about :D), as >>> you submit a ticket on this. >> >> Done in http://projects.scipy.org/scipy/ticket/1093 >> tests pass at 14 decimals > > > While adding some tests, I got one more question > > According to Wikipedia http://en.wikipedia.org/wiki/Analytic_signal#Definition > the imaginary part of the analytical signal is equal to the Hilbert transform > > however, fftpack.hilbert has the opposite sign. > > From the examples signal.hilbert looks correct, so does > fftpack.hilbert have a sign mistake or is it based on a different > definition? > > >>>> r = np.random.randn(20) >>>> fftpack.hilbert(r) > array([-0.27285468, -1.39747965, ?1.7991044 , -0.16609304, -1.84459577, > ? ? ? ?0.48696479, -0.33190553, ?0.59383033, ?2.15361055, -0.89341275, > ? ? ? -0.13730369, ?0.84046658, ?1.38110384, -1.7595949 , -0.04869402, > ? ? ? ?0.59871558, -1.09627219, ?0.59375139, -1.6021929 , ?1.10285168]) >>>> hilbert(r).imag > array([ 0.27285468, ?1.39747965, -1.7991044 , ?0.16609304, ?1.84459577, > ? ? ? -0.48696479, ?0.33190553, -0.59383033, -2.15361055, ?0.89341275, > ? ? ? ?0.13730369, -0.84046658, -1.38110384, ?1.7595949 , ?0.04869402, > ? ? ? -0.59871558, ?1.09627219, -0.59375139, ?1.6021929 , -1.10285168]) > > > I'm just checking definitions for the tests, committed in http://projects.scipy.org/scipy/changeset/6205 There is also a hilbert2 for 2d convolution. It is not in the documentation (at least in not my old ones), and I didn't try whether it is correct. Josef > > Josef >> >> Josef >> >>> >>> Thanks -- >>> >>> Ariel >>> >>> On Fri, Jan 15, 2010 at 11:48 AM, wrote: >>>> >>>> On Fri, Jan 15, 2010 at 2:34 PM, Ariel Rokem wrote: >>>> > Hi - >>>> > >>>> > attached is a file with a couple of tests. I am not sure this tests the >>>> > issues we were dealing with previously (the axis issues, etc.), but it >>>> > has >>>> > some sensible test-cases, which compare to what Matlab would give you >>>> > (not >>>> > quite 10by3 or 10by6, but as you can see, they make sense). Also - all >>>> > the >>>> > assertions are assert_almost_equal. Do you think that's OK? I think >>>> > there >>>> > are float-precision issues here, which would make assert_equal fail, but >>>> > I >>>> > am not sure - I would be happy to get any general comments on these >>>> > tests, >>>> > in case I am doing this all wrong. >>>> >>>> nice test cases, I like theoretical tests even better than verified >>>> numbers from other packages. >>>> >>>> Besides some cosmetic changes to get them into a test function, the >>>> only part to add is the precision of the tests. >>>> The default precision of assert_almost_equal is only 6 decimals. >>>> >>>> For these kind of cases, I usually go to 12 to 15 depending on the >>>> numerical precision of the algorithm. Usually, I go by trial and error >>>> until the test breaks, or calculate max abs of the error. >>>> >>>> I can add some tests for the axis argument. >>>> >>>> Can you open a ticket for the record or shall I ? >>>> >>>> Josef >>>> >>>> >>>> > >>>> > Cheers, >>>> > >>>> > Ariel >>>> > >>>> > On Thu, Jan 14, 2010 at 11:10 PM, wrote: >>>> >> >>>> >> On Fri, Jan 15, 2010 at 1:44 AM, Ariel Rokem >>>> >> wrote: >>>> >> > Yes - looks good. Except I would prefer to eventually set the axis to >>>> >> > default to -1, to be consistent with signal.fft (and also np.fft.fft) >>>> >> > which >>>> >> > has axis=-1. >>>> >> >>>> >> I'm indifferent to the default axis, from a quick look and my >>>> >> experience there are not many functions with axis arguments in signal. >>>> >> So I'm fine with switching to axis=-1. We should do it with this >>>> >> bugfix, since until now the function wasn't correct anyway for 2d. >>>> >> >>>> >> > >>>> >> > As for whether it's doing what it's supposed to do, for what it's >>>> >> > worth >>>> >> > - it >>>> >> > seems to do similar things to what Matlab's 'hilbert' function does >>>> >> > on a >>>> >> > few >>>> >> > simple examples I tried out. >>>> >> >>>> >> I was reading briefly on wikipedia, and checked with fftpack.hilbert, >>>> >> which returns the same array as signal.hilbert(a).imag, but I didn't >>>> >> manage to figure out why fftpack.hilbert only allows 1d (i got lost >>>> >> starting at convolve.pyf) >>>> >> >>>> >> Could you write a simple test case compared to matlab, e.g. 10by3 as >>>> >> in my example, for both axis, or 10by6 if 10by3 doesn't make sense? >>>> >> >>>> >> If nobody objects, I can commit the change with axis=-1. >>>> >> >>>> >> Josef >>>> >> >>>> >> > >>>> >> > Cheers, >>>> >> > >>>> >> > Ariel >>>> >> > >>>> >> > >>>> >> > >>>> >> > On Thu, Jan 14, 2010 at 8:53 PM, wrote: >>>> >> >> >>>> >> >> On Thu, Jan 14, 2010 at 11:27 PM, ? wrote: >>>> >> >> > On Thu, Jan 14, 2010 at 11:02 PM, ? wrote: >>>> >> >> >> On Thu, Jan 14, 2010 at 10:54 PM, ? wrote: >>>> >> >> >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem >>>> >> >> >>> >>>> >> >> >>> wrote: >>>> >> >> >>>> Hi everyone, >>>> >> >> >>>> >>>> >> >> >>>> I have been trying to use scipy.signal.hilbert and I got the >>>> >> >> >>>> following >>>> >> >> >>>> puzzling result: >>>> >> >> >>>> >>>> >> >> >>>> In [22]: import scipy >>>> >> >> >>>> >>>> >> >> >>>> In [23]: scipy.__version__ #I have r6182 >>>> >> >> >>>> Out[23]: '0.8.0.dev' >>>> >> >> >>>> >>>> >> >> >>>> In [24]: import scipy.signal as signal >>>> >> >> >>>> >>>> >> >> >>>> In [25]: a = np.random.rand(100,100) >>>> >> >> >>>> >>>> >> >> >>>> In [26]: np.abs(signal.hilbert(a[-1])) >>>> >> >> >>>> Out[26]: >>>> >> >> >>>> array([ 0.57567681,? 0.25918624,? 0.50207097,? 0.51834052, >>>> >> >> >>>> 0.24293389, >>>> >> >> >>>> ??????? 0.5779464 ,? 0.6515758 ,? 0.89973173,? 1.00275444, >>>> >> >> >>>> 0.37352935, >>>> >> >> >>>> ??????? 0.62332717,? 0.93599749,? 0.40651376,? 0.65088756, >>>> >> >> >>>> 0.8332281 >>>> >> >> >>>> , >>>> >> >> >>>> ??????? 0.5770101 ,? 0.9288512 ,? 0.46671906,? 0.41536055, >>>> >> >> >>>> 0.71418068, >>>> >> >> >>>> ??????? 0.81250913,? 0.07652627,? 0.72939072,? 0.26755626, >>>> >> >> >>>> 0.36396146, >>>> >> >> >>>> ??????? 0.59725999,? 1.02264694,? 0.41227986,? 0.98122853, >>>> >> >> >>>> 0.71906675, >>>> >> >> >>>> ??????? 0.58582611,? 0.77288117,? 0.3217015 ,? 0.65261394, >>>> >> >> >>>> 0.11947618, >>>> >> >> >>>> ??????? 0.75632703,? 0.43432935,? 0.52182485,? 1.0277177 , >>>> >> >> >>>> 1.01104986, >>>> >> >> >>>> ??????? 0.3023265 ,? 0.6024772 ,? 0.69257548,? 0.55418735, >>>> >> >> >>>> 0.46259052, >>>> >> >> >>>> ??????? 0.25832231,? 0.38278355,? 0.45508532,? 0.26215872, >>>> >> >> >>>> 0.34207947, >>>> >> >> >>>> ??????? 0.80704729,? 0.80755477,? 0.95317178,? 0.97458885, >>>> >> >> >>>> 0.58762294, >>>> >> >> >>>> ??????? 0.82540618,? 0.62005585,? 0.82494646,? 1.04221293, >>>> >> >> >>>> 0.14983027, >>>> >> >> >>>> ??????? 1.01571579,? 0.99381328,? 0.24158714,? 0.84256569, >>>> >> >> >>>> 0.53418924, >>>> >> >> >>>> ??????? 0.24067628,? 0.90489883,? 1.02217747,? 0.34988034, >>>> >> >> >>>> 0.5310065 >>>> >> >> >>>> , >>>> >> >> >>>> ??????? 0.48135002,? 1.03020269,? 0.6013679 ,? 0.46062485, >>>> >> >> >>>> 0.3918485 >>>> >> >> >>>> , >>>> >> >> >>>> ??????? 0.21554545,? 0.31704519,? 0.04868385,? 0.1787766 , >>>> >> >> >>>> 0.37361852, >>>> >> >> >>>> ??????? 0.21977912,? 0.7649772 ,? 0.77867281,? 0.37684278, >>>> >> >> >>>> 0.64432638, >>>> >> >> >>>> ??????? 0.77494951,? 0.87106309,? 0.77611484,? 0.52666801, >>>> >> >> >>>> 0.88683667, >>>> >> >> >>>> ??????? 0.69164967,? 0.98618191,? 0.84811375,? 0.35934198, >>>> >> >> >>>> 0.32650478, >>>> >> >> >>>> ??????? 0.1752677 ,? 0.60574454,? 0.5109132 ,? 0.52332287, >>>> >> >> >>>> 0.99777805]) >>>> >> >> >>>> >>>> >> >> >>>> In [27]: np.abs(signal.hilbert(a))[-1] >>>> >> >> >>>> Out[27]: >>>> >> >> >>>> array([ 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> 0., >>>> >> >> >>>> ??????? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.,? 0.]) >>>> >> >> >>>> >>>> >> >> >>>> >>>> >> >> >>>> >>>> >> >> >>>> >>>> >> >> >>>> >>>> >> >> >>>> ---------------------------------------------------------------------- >>>> >> >> >>>> >>>> >> >> >>>> I was expecting both of these to have the same values - am I >>>> >> >> >>>> missing >>>> >> >> >>>> something? >>>> >> >> >>>> >>>> >> >> >>>> I think that the following solves this issue, but now I am not >>>> >> >> >>>> that >>>> >> >> >>>> sure >>>> >> >> >>>> whether it does what it is supposed to do and I couldn't find a >>>> >> >> >>>> test >>>> >> >> >>>> for >>>> >> >> >>>> this in test_signaltools.py. Does anyone know of a good >>>> >> >> >>>> test-case >>>> >> >> >>>> for >>>> >> >> >>>> the >>>> >> >> >>>> analytic signal, that I could create for this? >>>> >> >> >>>> >>>> >> >> >>>> Index: scipy/signal/signaltools.py >>>> >> >> >>>> >>>> >> >> >>>> >>>> >> >> >>>> =================================================================== >>>> >> >> >>>> --- scipy/signal/signaltools.py??? (revision 6182) >>>> >> >> >>>> +++ scipy/signal/signaltools.py??? (working copy) >>>> >> >> >>>> @@ -1062,13 +1062,13 @@ >>>> >> >> >>>> ???? """ >>>> >> >> >>>> ???? x = asarray(x) >>>> >> >> >>>> ???? if N is None: >>>> >> >> >>>> -??????? N = len(x) >>>> >> >> >>>> +??????? N = x.shape[-1] >>>> >> >> >>>> ???? if N <=0: >>>> >> >> >>>> ???????? raise ValueError, "N must be positive." >>>> >> >> >>>> ???? if iscomplexobj(x): >>>> >> >> >>>> ???????? print "Warning: imaginary part of x ignored." >>>> >> >> >>>> ???????? x = real(x) >>>> >> >> >>>> -??? Xf = fft(x,N,axis=0) >>>> >> >> >>>> +??? Xf = fft(x,N,axis=-1) >>>> >> >> >>>> ???? h = zeros(N) >>>> >> >> >>>> ???? if N % 2 == 0: >>>> >> >> >>>> ???????? h[0] = h[N/2] = 1 >>>> >> >> >>>> @@ -1078,7 +1078,7 @@ >>>> >> >> >>>> ???????? h[1:(N+1)/2] = 2 >>>> >> >> >>>> >>>> >> >> >>>> ???? if len(x.shape) > 1: >>>> >> >> >>>> -??????? h = h[:, newaxis] >>>> >> >> >>>> +??????? h = h[newaxis,:] >>>> >> >> >>>> ???? x = ifft(Xf*h) >>>> >> >> >>>> ???? return x >>>> >> >> >>> >>>> >> >> >>> I think your change would break the currently advertised >>>> >> >> >>> behavior, >>>> >> >> >>> axis=0 (The transformation is done along the first axis) >>>> >> >> >>> >>>> >> >> >>> but fft and ifft have default axis=-1 >>>> >> >> >>> >>>> >> >> >>> fft in hilbert uses axis=0 as in docstring >>>> >> >> >>> but ifft uses default axis=-1 >>>> >> >> >>> >>>> >> >> >>> so, I would think the fix should be ?x = ifft(Xf*h, axis=0) >>>> >> >> >>> >>>> >> >> >>> But as it currently looks like the axis argument doesn't work >>>> >> >> >>> anyway, >>>> >> >> >>> there wouldn't be much breakage if the axis would be included as >>>> >> >> >>> an >>>> >> >> >>> argument and default to -1. >>>> >> >> >>> However, I don't know what the "standard" for scipy.signal is >>>> >> >> >>> for >>>> >> >> >>> default axis. >>>> >> >> >>> >>>> >> >> >>> Josef >>>> >> >> >> >>>> >> >> >> after adding axis to ifft: >>>> >> >> >>>>> print hilbert(aa).real >>>> >> >> >> [[ 0.82584851 ?0.15215031 ?0.14767381] >>>> >> >> >> ?[ 0.95021675 ?0.16803995 ?0.43562964] >>>> >> >> >> ?[ 0.13033881 ?0.06198952 ?0.70729614] >>>> >> >> >> ?[ 0.69409563 ?0.06962778 ?0.72552601] >>>> >> >> >> ?[ 0.34297612 ?0.50579001 ?0.86463304] >>>> >> >> >> ?[ 0.28355261 ?0.21626889 ?0.85165102] >>>> >> >> >> ?[ 0.49481491 ?0.21290645 ?0.71416814] >>>> >> >> >> ?[ 0.2645843 ? 0.95783096 ?0.77514016] >>>> >> >> >> ?[ 0.38735994 ?0.14274852 ?0.56344808] >>>> >> >> >> ?[ 0.88084015 ?0.39879649 ?0.64949951]] >>>> >> >> >>>>> print hilbert(aa[:,:1]).real >>>> >> >> >> [[ 0.82584851] >>>> >> >> >> ?[ 0.95021675] >>>> >> >> >> ?[ 0.13033881] >>>> >> >> >> ?[ 0.69409563] >>>> >> >> >> ?[ 0.34297612] >>>> >> >> >> ?[ 0.28355261] >>>> >> >> >> ?[ 0.49481491] >>>> >> >> >> ?[ 0.2645843 ] >>>> >> >> >> ?[ 0.38735994] >>>> >> >> >> ?[ 0.88084015]] >>>> >> >> >> >>>> >> >> >> but it treats a 1d array as row vector and transforms along zero >>>> >> >> >> axis >>>> >> >> >> of length 1, and not along the length of the array. >>>> >> >> >> so another fix to handle 1d arrays correctly should be done >>>> >> >> >> >>>> >> >> >>>>> print hilbert(aa[:,1]).real >>>> >> >> >> [ 0.15215031 ?0.16803995 ?0.06198952 ?0.06962778 ?0.50579001 >>>> >> >> >> ?0.21626889 >>>> >> >> >> ?0.21290645 ?0.95783096 ?0.14274852 ?0.39879649] >>>> >> >> >>>>> aa[:,1] >>>> >> >> >> array([ 0.15215031, ?0.16803995, ?0.06198952, ?0.06962778, >>>> >> >> >> ?0.50579001, >>>> >> >> >> ? ? ? ?0.21626889, ?0.21290645, ?0.95783096, ?0.14274852, >>>> >> >> >> ?0.39879649]) >>>> >> >> >>>>> >>>> >> >> > >>>> >> >> > there's something wrong with my example, the real part is the same >>>> >> >> > which confused me >>>> >> >> > >>>> >> >> > it works correctly with 1d >>>> >> >> > >>>> >> >> >>>> np.abs(hilbert(aa[:,0])) >>>> >> >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >>>> >> >> > ?0.49170197, >>>> >> >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >>>> >> >> > ?0.90716272]) >>>> >> >> > >>>> >> >> >>>> np.abs(hilbert(aa[:,:1])).T >>>> >> >> > array([[ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >>>> >> >> > ?0.49170197, >>>> >> >> > ? ? ? ? 0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >>>> >> >> > ?0.90716272]]) >>>> >> >> > >>>> >> >> >>>> np.abs(hilbert(aa))[:,0] >>>> >> >> > array([ 0.83251128, ?1.04487091, ?0.27702083, ?0.69901499, >>>> >> >> > ?0.49170197, >>>> >> >> > ? ? ? ?0.31227114, ?0.49505637, ?0.26461488, ?0.61385196, >>>> >> >> > ?0.90716272]) >>>> >> >> > >>>> >> >> > besides reading the docstring, I don't know what hilbert is >>>> >> >> > supposed >>>> >> >> > to be good for. >>>> >> >> >>>> >> >> Would something like the function in the attachment do ? >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> > Josef >>>> >> >> > >>>> >> >> > >>>> >> >> >> Josef >>>> >> >> >> >>>> >> >> >> >>>> >> >> >>> >>>> >> >> >>>> >>>> >> >> >>>> >>>> >> >> >>>> Cheers, >>>> >> >> >>>> >>>> >> >> >>>> Ariel >>>> >> >> >>>> -- >>>> >> >> >>>> Ariel Rokem >>>> >> >> >>>> Helen Wills Neuroscience Institute >>>> >> >> >>>> University of California, Berkeley >>>> >> >> >>>> http://argentum.ucbso.berkeley.edu/ariel >>>> >> >> >>>> >>>> >> >> >>>> _______________________________________________ >>>> >> >> >>>> SciPy-Dev mailing list >>>> >> >> >>>> SciPy-Dev at scipy.org >>>> >> >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >> >> >>>> >>>> >> >> >>>> >>>> >> >> >>> >>>> >> >> >> >>>> >> >> > >>>> >> >> >>>> >> >> _______________________________________________ >>>> >> >> SciPy-Dev mailing list >>>> >> >> SciPy-Dev at scipy.org >>>> >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >> >> >>>> >> > >>>> >> > >>>> >> > >>>> >> > -- >>>> >> > Ariel Rokem >>>> >> > Helen Wills Neuroscience Institute >>>> >> > University of California, Berkeley >>>> >> > http://argentum.ucbso.berkeley.edu/ariel >>>> >> > >>>> >> > _______________________________________________ >>>> >> > SciPy-Dev mailing list >>>> >> > SciPy-Dev at scipy.org >>>> >> > http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> >> > >>>> >> > >>>> >> _______________________________________________ >>>> >> SciPy-Dev mailing list >>>> >> SciPy-Dev at scipy.org >>>> >> http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> > >>>> > >>>> > >>>> > -- >>>> > Ariel Rokem >>>> > Helen Wills Neuroscience Institute >>>> > University of California, Berkeley >>>> > http://argentum.ucbso.berkeley.edu/ariel >>>> > >>>> > _______________________________________________ >>>> > SciPy-Dev mailing list >>>> > SciPy-Dev at scipy.org >>>> > http://mail.scipy.org/mailman/listinfo/scipy-dev >>>> > >>>> > >>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >>> >>> -- >>> Ariel Rokem >>> Helen Wills Neuroscience Institute >>> University of California, Berkeley >>> http://argentum.ucbso.berkeley.edu/ariel >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-dev >>> >>> >> > From arokem at berkeley.edu Mon Jan 18 18:59:03 2010 From: arokem at berkeley.edu (Ariel Rokem) Date: Mon, 18 Jan 2010 15:59:03 -0800 Subject: [SciPy-dev] Is this a bugfix for scipy.hilbert? In-Reply-To: <1cd32cbb1001172154j55843e9ckd0d5c6cdc9d5dacc@mail.gmail.com> References: <43958ee61001141924n1663f379m77bafbbbb4c88ede@mail.gmail.com> <1cd32cbb1001142053yc2a37c2i1ff3b651c25ad3de@mail.gmail.com> <43958ee61001142244y37b31feel9b9c0232d9c9e430@mail.gmail.com> <1cd32cbb1001142310t65f1492cne1736987b389c49@mail.gmail.com> <43958ee61001151134v43e4e6a4sf08fa63bf97ec08a@mail.gmail.com> <1cd32cbb1001151148p617c838dp8fda69f3f4c9a009@mail.gmail.com> <43958ee61001151320l1f0e985xbbb330e90536ccf8@mail.gmail.com> <1cd32cbb1001162131t7e04c92ekd4507c8ddab9c127@mail.gmail.com> <1cd32cbb1001171949p2516c463s276a247d64188483@mail.gmail.com> <1cd32cbb1001172154j55843e9ckd0d5c6cdc9d5dacc@mail.gmail.com> Message-ID: <43958ee61001181559o5ffed607o709496f03d387de3@mail.gmail.com> Hi Josef - thanks! I have to say that I don't quite understand what is happening in either hilbert2 or in fftpack.hilbert, but just from looking at it, at least hilbert2 doesn't have the same problem of inconsistency among the 'axis' arguments that hilbert had. Maybe someone else understands these two things better than me? Cheers, Ariel On Sun, Jan 17, 2010 at 9:54 PM, wrote: > On Sun, Jan 17, 2010 at 10:49 PM, wrote: > > On Sun, Jan 17, 2010 at 12:31 AM, wrote: > >> On Fri, Jan 15, 2010 at 4:20 PM, Ariel Rokem > wrote: > >>> Hi - I've never done this before, so it would be great if I could 'look > over > >>> your shoulder' (in the sense that I know how this ticket came about > :D), as > >>> you submit a ticket on this. > >> > >> Done in http://projects.scipy.org/scipy/ticket/1093 > >> tests pass at 14 decimals > > > > > > While adding some tests, I got one more question > > > > According to Wikipedia > http://en.wikipedia.org/wiki/Analytic_signal#Definition > > the imaginary part of the analytical signal is equal to the Hilbert > transform > > > > however, fftpack.hilbert has the opposite sign. > > > > From the examples signal.hilbert looks correct, so does > > fftpack.hilbert have a sign mistake or is it based on a different > > definition? > > > > > >>>> r = np.random.randn(20) > >>>> fftpack.hilbert(r) > > array([-0.27285468, -1.39747965, 1.7991044 , -0.16609304, -1.84459577, > > 0.48696479, -0.33190553, 0.59383033, 2.15361055, -0.89341275, > > -0.13730369, 0.84046658, 1.38110384, -1.7595949 , -0.04869402, > > 0.59871558, -1.09627219, 0.59375139, -1.6021929 , 1.10285168]) > >>>> hilbert(r).imag > > array([ 0.27285468, 1.39747965, -1.7991044 , 0.16609304, 1.84459577, > > -0.48696479, 0.33190553, -0.59383033, -2.15361055, 0.89341275, > > 0.13730369, -0.84046658, -1.38110384, 1.7595949 , 0.04869402, > > -0.59871558, 1.09627219, -0.59375139, 1.6021929 , -1.10285168]) > > > > > > I'm just checking definitions for the tests, > > committed in http://projects.scipy.org/scipy/changeset/6205 > > There is also a hilbert2 for 2d convolution. It is not in the > documentation (at least in not my old ones), and I didn't try whether > it is correct. > > Josef > > > > > > Josef > >> > >> Josef > >> > >>> > >>> Thanks -- > >>> > >>> Ariel > >>> > >>> On Fri, Jan 15, 2010 at 11:48 AM, wrote: > >>>> > >>>> On Fri, Jan 15, 2010 at 2:34 PM, Ariel Rokem > wrote: > >>>> > Hi - > >>>> > > >>>> > attached is a file with a couple of tests. I am not sure this tests > the > >>>> > issues we were dealing with previously (the axis issues, etc.), but > it > >>>> > has > >>>> > some sensible test-cases, which compare to what Matlab would give > you > >>>> > (not > >>>> > quite 10by3 or 10by6, but as you can see, they make sense). Also - > all > >>>> > the > >>>> > assertions are assert_almost_equal. Do you think that's OK? I think > >>>> > there > >>>> > are float-precision issues here, which would make assert_equal fail, > but > >>>> > I > >>>> > am not sure - I would be happy to get any general comments on these > >>>> > tests, > >>>> > in case I am doing this all wrong. > >>>> > >>>> nice test cases, I like theoretical tests even better than verified > >>>> numbers from other packages. > >>>> > >>>> Besides some cosmetic changes to get them into a test function, the > >>>> only part to add is the precision of the tests. > >>>> The default precision of assert_almost_equal is only 6 decimals. > >>>> > >>>> For these kind of cases, I usually go to 12 to 15 depending on the > >>>> numerical precision of the algorithm. Usually, I go by trial and error > >>>> until the test breaks, or calculate max abs of the error. > >>>> > >>>> I can add some tests for the axis argument. > >>>> > >>>> Can you open a ticket for the record or shall I ? > >>>> > >>>> Josef > >>>> > >>>> > >>>> > > >>>> > Cheers, > >>>> > > >>>> > Ariel > >>>> > > >>>> > On Thu, Jan 14, 2010 at 11:10 PM, wrote: > >>>> >> > >>>> >> On Fri, Jan 15, 2010 at 1:44 AM, Ariel Rokem > >>>> >> wrote: > >>>> >> > Yes - looks good. Except I would prefer to eventually set the > axis to > >>>> >> > default to -1, to be consistent with signal.fft (and also > np.fft.fft) > >>>> >> > which > >>>> >> > has axis=-1. > >>>> >> > >>>> >> I'm indifferent to the default axis, from a quick look and my > >>>> >> experience there are not many functions with axis arguments in > signal. > >>>> >> So I'm fine with switching to axis=-1. We should do it with this > >>>> >> bugfix, since until now the function wasn't correct anyway for 2d. > >>>> >> > >>>> >> > > >>>> >> > As for whether it's doing what it's supposed to do, for what it's > >>>> >> > worth > >>>> >> > - it > >>>> >> > seems to do similar things to what Matlab's 'hilbert' function > does > >>>> >> > on a > >>>> >> > few > >>>> >> > simple examples I tried out. > >>>> >> > >>>> >> I was reading briefly on wikipedia, and checked with > fftpack.hilbert, > >>>> >> which returns the same array as signal.hilbert(a).imag, but I > didn't > >>>> >> manage to figure out why fftpack.hilbert only allows 1d (i got lost > >>>> >> starting at convolve.pyf) > >>>> >> > >>>> >> Could you write a simple test case compared to matlab, e.g. 10by3 > as > >>>> >> in my example, for both axis, or 10by6 if 10by3 doesn't make sense? > >>>> >> > >>>> >> If nobody objects, I can commit the change with axis=-1. > >>>> >> > >>>> >> Josef > >>>> >> > >>>> >> > > >>>> >> > Cheers, > >>>> >> > > >>>> >> > Ariel > >>>> >> > > >>>> >> > > >>>> >> > > >>>> >> > On Thu, Jan 14, 2010 at 8:53 PM, wrote: > >>>> >> >> > >>>> >> >> On Thu, Jan 14, 2010 at 11:27 PM, > wrote: > >>>> >> >> > On Thu, Jan 14, 2010 at 11:02 PM, > wrote: > >>>> >> >> >> On Thu, Jan 14, 2010 at 10:54 PM, > wrote: > >>>> >> >> >>> On Thu, Jan 14, 2010 at 10:24 PM, Ariel Rokem > >>>> >> >> >>> > >>>> >> >> >>> wrote: > >>>> >> >> >>>> Hi everyone, > >>>> >> >> >>>> > >>>> >> >> >>>> I have been trying to use scipy.signal.hilbert and I got > the > >>>> >> >> >>>> following > >>>> >> >> >>>> puzzling result: > >>>> >> >> >>>> > >>>> >> >> >>>> In [22]: import scipy > >>>> >> >> >>>> > >>>> >> >> >>>> In [23]: scipy.__version__ #I have r6182 > >>>> >> >> >>>> Out[23]: '0.8.0.dev' > >>>> >> >> >>>> > >>>> >> >> >>>> In [24]: import scipy.signal as signal > >>>> >> >> >>>> > >>>> >> >> >>>> In [25]: a = np.random.rand(100,100) > >>>> >> >> >>>> > >>>> >> >> >>>> In [26]: np.abs(signal.hilbert(a[-1])) > >>>> >> >> >>>> Out[26]: > >>>> >> >> >>>> array([ 0.57567681, 0.25918624, 0.50207097, 0.51834052, > >>>> >> >> >>>> 0.24293389, > >>>> >> >> >>>> 0.5779464 , 0.6515758 , 0.89973173, 1.00275444, > >>>> >> >> >>>> 0.37352935, > >>>> >> >> >>>> 0.62332717, 0.93599749, 0.40651376, 0.65088756, > >>>> >> >> >>>> 0.8332281 > >>>> >> >> >>>> , > >>>> >> >> >>>> 0.5770101 , 0.9288512 , 0.46671906, 0.41536055, > >>>> >> >> >>>> 0.71418068, > >>>> >> >> >>>> 0.81250913, 0.07652627, 0.72939072, 0.26755626, > >>>> >> >> >>>> 0.36396146, > >>>> >> >> >>>> 0.59725999, 1.02264694, 0.41227986, 0.98122853, > >>>> >> >> >>>> 0.71906675, > >>>> >> >> >>>> 0.58582611, 0.77288117, 0.3217015 , 0.65261394, > >>>> >> >> >>>> 0.11947618, > >>>> >> >> >>>> 0.75632703, 0.43432935, 0.52182485, 1.0277177 , > >>>> >> >> >>>> 1.01104986, > >>>> >> >> >>>> 0.3023265 , 0.6024772 , 0.69257548, 0.55418735, > >>>> >> >> >>>> 0.46259052, > >>>> >> >> >>>> 0.25832231, 0.38278355, 0.45508532, 0.26215872, > >>>> >> >> >>>> 0.34207947, > >>>> >> >> >>>> 0.80704729, 0.80755477, 0.95317178, 0.97458885, > >>>> >> >> >>>> 0.58762294, > >>>> >> >> >>>> 0.82540618, 0.62005585, 0.82494646, 1.04221293, > >>>> >> >> >>>> 0.14983027, > >>>> >> >> >>>> 1.01571579, 0.99381328, 0.24158714, 0.84256569, > >>>> >> >> >>>> 0.53418924, > >>>> >> >> >>>> 0.24067628, 0.90489883, 1.02217747, 0.34988034, > >>>> >> >> >>>> 0.5310065 > >>>> >> >> >>>> , > >>>> >> >> >>>> 0.48135002, 1.03020269, 0.6013679 , 0.46062485, > >>>> >> >> >>>> 0.3918485 > >>>> >> >> >>>> , > >>>> >> >> >>>> 0.21554545, 0.31704519, 0.04868385, 0.1787766 , > >>>> >> >> >>>> 0.37361852, > >>>> >> >> >>>> 0.21977912, 0.7649772 , 0.77867281, 0.37684278, > >>>> >> >> >>>> 0.64432638, > >>>> >> >> >>>> 0.77494951, 0.87106309, 0.77611484, 0.52666801, > >>>> >> >> >>>> 0.88683667, > >>>> >> >> >>>> 0.69164967, 0.98618191, 0.84811375, 0.35934198, > >>>> >> >> >>>> 0.32650478, > >>>> >> >> >>>> 0.1752677 , 0.60574454, 0.5109132 , 0.52332287, > >>>> >> >> >>>> 0.99777805]) > >>>> >> >> >>>> > >>>> >> >> >>>> In [27]: np.abs(signal.hilbert(a))[-1] > >>>> >> >> >>>> Out[27]: > >>>> >> >> >>>> array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., > 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., > >>>> >> >> >>>> 0., 0., 0., 0., 0., 0., 0., 0., 0.]) > >>>> >> >> >>>> > >>>> >> >> >>>> > >>>> >> >> >>>> > >>>> >> >> >>>> > >>>> >> >> >>>> > >>>> >> >> >>>> > ---------------------------------------------------------------------- > >>>> >> >> >>>> > >>>> >> >> >>>> I was expecting both of these to have the same values - am > I > >>>> >> >> >>>> missing > >>>> >> >> >>>> something? > >>>> >> >> >>>> > >>>> >> >> >>>> I think that the following solves this issue, but now I am > not > >>>> >> >> >>>> that > >>>> >> >> >>>> sure > >>>> >> >> >>>> whether it does what it is supposed to do and I couldn't > find a > >>>> >> >> >>>> test > >>>> >> >> >>>> for > >>>> >> >> >>>> this in test_signaltools.py. Does anyone know of a good > >>>> >> >> >>>> test-case > >>>> >> >> >>>> for > >>>> >> >> >>>> the > >>>> >> >> >>>> analytic signal, that I could create for this? > >>>> >> >> >>>> > >>>> >> >> >>>> Index: scipy/signal/signaltools.py > >>>> >> >> >>>> > >>>> >> >> >>>> > >>>> >> >> >>>> > =================================================================== > >>>> >> >> >>>> --- scipy/signal/signaltools.py (revision 6182) > >>>> >> >> >>>> +++ scipy/signal/signaltools.py (working copy) > >>>> >> >> >>>> @@ -1062,13 +1062,13 @@ > >>>> >> >> >>>> """ > >>>> >> >> >>>> x = asarray(x) > >>>> >> >> >>>> if N is None: > >>>> >> >> >>>> - N = len(x) > >>>> >> >> >>>> + N = x.shape[-1] > >>>> >> >> >>>> if N <=0: > >>>> >> >> >>>> raise ValueError, "N must be positive." > >>>> >> >> >>>> if iscomplexobj(x): > >>>> >> >> >>>> print "Warning: imaginary part of x ignored." > >>>> >> >> >>>> x = real(x) > >>>> >> >> >>>> - Xf = fft(x,N,axis=0) > >>>> >> >> >>>> + Xf = fft(x,N,axis=-1) > >>>> >> >> >>>> h = zeros(N) > >>>> >> >> >>>> if N % 2 == 0: > >>>> >> >> >>>> h[0] = h[N/2] = 1 > >>>> >> >> >>>> @@ -1078,7 +1078,7 @@ > >>>> >> >> >>>> h[1:(N+1)/2] = 2 > >>>> >> >> >>>> > >>>> >> >> >>>> if len(x.shape) > 1: > >>>> >> >> >>>> - h = h[:, newaxis] > >>>> >> >> >>>> + h = h[newaxis,:] > >>>> >> >> >>>> x = ifft(Xf*h) > >>>> >> >> >>>> return x > >>>> >> >> >>> > >>>> >> >> >>> I think your change would break the currently advertised > >>>> >> >> >>> behavior, > >>>> >> >> >>> axis=0 (The transformation is done along the first axis) > >>>> >> >> >>> > >>>> >> >> >>> but fft and ifft have default axis=-1 > >>>> >> >> >>> > >>>> >> >> >>> fft in hilbert uses axis=0 as in docstring > >>>> >> >> >>> but ifft uses default axis=-1 > >>>> >> >> >>> > >>>> >> >> >>> so, I would think the fix should be x = ifft(Xf*h, axis=0) > >>>> >> >> >>> > >>>> >> >> >>> But as it currently looks like the axis argument doesn't > work > >>>> >> >> >>> anyway, > >>>> >> >> >>> there wouldn't be much breakage if the axis would be > included as > >>>> >> >> >>> an > >>>> >> >> >>> argument and default to -1. > >>>> >> >> >>> However, I don't know what the "standard" for scipy.signal > is > >>>> >> >> >>> for > >>>> >> >> >>> default axis. > >>>> >> >> >>> > >>>> >> >> >>> Josef > >>>> >> >> >> > >>>> >> >> >> after adding axis to ifft: > >>>> >> >> >>>>> print hilbert(aa).real > >>>> >> >> >> [[ 0.82584851 0.15215031 0.14767381] > >>>> >> >> >> [ 0.95021675 0.16803995 0.43562964] > >>>> >> >> >> [ 0.13033881 0.06198952 0.70729614] > >>>> >> >> >> [ 0.69409563 0.06962778 0.72552601] > >>>> >> >> >> [ 0.34297612 0.50579001 0.86463304] > >>>> >> >> >> [ 0.28355261 0.21626889 0.85165102] > >>>> >> >> >> [ 0.49481491 0.21290645 0.71416814] > >>>> >> >> >> [ 0.2645843 0.95783096 0.77514016] > >>>> >> >> >> [ 0.38735994 0.14274852 0.56344808] > >>>> >> >> >> [ 0.88084015 0.39879649 0.64949951]] > >>>> >> >> >>>>> print hilbert(aa[:,:1]).real > >>>> >> >> >> [[ 0.82584851] > >>>> >> >> >> [ 0.95021675] > >>>> >> >> >> [ 0.13033881] > >>>> >> >> >> [ 0.69409563] > >>>> >> >> >> [ 0.34297612] > >>>> >> >> >> [ 0.28355261] > >>>> >> >> >> [ 0.49481491] > >>>> >> >> >> [ 0.2645843 ] > >>>> >> >> >> [ 0.38735994] > >>>> >> >> >> [ 0.88084015]] > >>>> >> >> >> > >>>> >> >> >> but it treats a 1d array as row vector and transforms along > zero > >>>> >> >> >> axis > >>>> >> >> >> of length 1, and not along the length of the array. > >>>> >> >> >> so another fix to handle 1d arrays correctly should be done > >>>> >> >> >> > >>>> >> >> >>>>> print hilbert(aa[:,1]).real > >>>> >> >> >> [ 0.15215031 0.16803995 0.06198952 0.06962778 0.50579001 > >>>> >> >> >> 0.21626889 > >>>> >> >> >> 0.21290645 0.95783096 0.14274852 0.39879649] > >>>> >> >> >>>>> aa[:,1] > >>>> >> >> >> array([ 0.15215031, 0.16803995, 0.06198952, 0.06962778, > >>>> >> >> >> 0.50579001, > >>>> >> >> >> 0.21626889, 0.21290645, 0.95783096, 0.14274852, > >>>> >> >> >> 0.39879649]) > >>>> >> >> >>>>> > >>>> >> >> > > >>>> >> >> > there's something wrong with my example, the real part is the > same > >>>> >> >> > which confused me > >>>> >> >> > > >>>> >> >> > it works correctly with 1d > >>>> >> >> > > >>>> >> >> >>>> np.abs(hilbert(aa[:,0])) > >>>> >> >> > array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > >>>> >> >> > 0.49170197, > >>>> >> >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > >>>> >> >> > 0.90716272]) > >>>> >> >> > > >>>> >> >> >>>> np.abs(hilbert(aa[:,:1])).T > >>>> >> >> > array([[ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > >>>> >> >> > 0.49170197, > >>>> >> >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > >>>> >> >> > 0.90716272]]) > >>>> >> >> > > >>>> >> >> >>>> np.abs(hilbert(aa))[:,0] > >>>> >> >> > array([ 0.83251128, 1.04487091, 0.27702083, 0.69901499, > >>>> >> >> > 0.49170197, > >>>> >> >> > 0.31227114, 0.49505637, 0.26461488, 0.61385196, > >>>> >> >> > 0.90716272]) > >>>> >> >> > > >>>> >> >> > besides reading the docstring, I don't know what hilbert is > >>>> >> >> > supposed > >>>> >> >> > to be good for. > >>>> >> >> > >>>> >> >> Would something like the function in the attachment do ? > >>>> >> >> > >>>> >> >> > >>>> >> >> > >>>> >> >> > Josef > >>>> >> >> > > >>>> >> >> > > >>>> >> >> >> Josef > >>>> >> >> >> > >>>> >> >> >> > >>>> >> >> >>> > >>>> >> >> >>>> > >>>> >> >> >>>> > >>>> >> >> >>>> Cheers, > >>>> >> >> >>>> > >>>> >> >> >>>> Ariel > >>>> >> >> >>>> -- > >>>> >> >> >>>> Ariel Rokem > >>>> >> >> >>>> Helen Wills Neuroscience Institute > >>>> >> >> >>>> University of California, Berkeley > >>>> >> >> >>>> http://argentum.ucbso.berkeley.edu/ariel > >>>> >> >> >>>> > >>>> >> >> >>>> _______________________________________________ > >>>> >> >> >>>> SciPy-Dev mailing list > >>>> >> >> >>>> SciPy-Dev at scipy.org > >>>> >> >> >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>>> >> >> >>>> > >>>> >> >> >>>> > >>>> >> >> >>> > >>>> >> >> >> > >>>> >> >> > > >>>> >> >> > >>>> >> >> _______________________________________________ > >>>> >> >> SciPy-Dev mailing list > >>>> >> >> SciPy-Dev at scipy.org > >>>> >> >> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>>> >> >> > >>>> >> > > >>>> >> > > >>>> >> > > >>>> >> > -- > >>>> >> > Ariel Rokem > >>>> >> > Helen Wills Neuroscience Institute > >>>> >> > University of California, Berkeley > >>>> >> > http://argentum.ucbso.berkeley.edu/ariel > >>>> >> > > >>>> >> > _______________________________________________ > >>>> >> > SciPy-Dev mailing list > >>>> >> > SciPy-Dev at scipy.org > >>>> >> > http://mail.scipy.org/mailman/listinfo/scipy-dev > >>>> >> > > >>>> >> > > >>>> >> _______________________________________________ > >>>> >> SciPy-Dev mailing list > >>>> >> SciPy-Dev at scipy.org > >>>> >> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>>> > > >>>> > > >>>> > > >>>> > -- > >>>> > Ariel Rokem > >>>> > Helen Wills Neuroscience Institute > >>>> > University of California, Berkeley > >>>> > http://argentum.ucbso.berkeley.edu/ariel > >>>> > > >>>> > _______________________________________________ > >>>> > SciPy-Dev mailing list > >>>> > SciPy-Dev at scipy.org > >>>> > http://mail.scipy.org/mailman/listinfo/scipy-dev > >>>> > > >>>> > > >>>> _______________________________________________ > >>>> SciPy-Dev mailing list > >>>> SciPy-Dev at scipy.org > >>>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>> > >>> > >>> > >>> -- > >>> Ariel Rokem > >>> Helen Wills Neuroscience Institute > >>> University of California, Berkeley > >>> http://argentum.ucbso.berkeley.edu/ariel > >>> > >>> _______________________________________________ > >>> SciPy-Dev mailing list > >>> SciPy-Dev at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/scipy-dev > >>> > >>> > >> > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -- Ariel Rokem Helen Wills Neuroscience Institute University of California, Berkeley http://argentum.ucbso.berkeley.edu/ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.demarest at gmail.com Wed Jan 20 22:58:53 2010 From: peter.demarest at gmail.com (Peter Demarest) Date: Wed, 20 Jan 2010 21:58:53 -0600 Subject: [SciPy-dev] SciPy Documentation Message-ID: <953a60fc1001201958g3e4d29a6vcdd5d63885cb8b40@mail.gmail.com> Hello, I would like to start contributing to the SciPy documentation. My user name on the wiki is: pdemarest I thought I would start taking a look at the docstings in the optimize module. Thanks Peter Demarest demarest at alum.wpi.edu C: 281-954-4243 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Thu Jan 21 01:18:14 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 21 Jan 2010 07:18:14 +0100 Subject: [SciPy-dev] SciPy Documentation In-Reply-To: <953a60fc1001201958g3e4d29a6vcdd5d63885cb8b40@mail.gmail.com> References: <953a60fc1001201958g3e4d29a6vcdd5d63885cb8b40@mail.gmail.com> Message-ID: <20100121061814.GA21316@phare.normalesup.org> On Wed, Jan 20, 2010 at 09:58:53PM -0600, Peter Demarest wrote: > I would like to start contributing to the SciPy documentation. ?My user > name on the wiki is: pdemarest > I thought I would start taking a look at the docstings in the optimize > module. Hi, Thanks for your interest in the documentation. I can't find you in the list of users. How did you create your user? You must go to http://docs.scipy.org/numpy/Front%20Page/ and click on the 'log in' link on the top right. Cheers, Ga?l From fabian.pedregosa at inria.fr Thu Jan 21 04:55:11 2010 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Thu, 21 Jan 2010 10:55:11 +0100 Subject: [SciPy-dev] Genetic Algorithms module in scikits.learn Message-ID: <4B58247F.9090200@inria.fr> Hello. I'm trying to maintain and release scikits.learn[1], and for historical reasons there's a module for genetic algorithms in this package. As the module does not quite fit the package's thematic, has references to scipy.ga (which has moved to scipy.sandbox and is no longer maintained) and there are already some great genetic algorithm frameworks in Python, I'm rather inclined to remove this from scikits.learn. If there are no objections, I'll remove that module from scikits.learn, but just wanted to say that I've uploaded the code to github [2] in case someone finds it useful or is willing to maintain. Thanks, ~fabian [1] http://scikit-learn.sourceforge.net/ [2] http://github.com/fseoane/ga From peter.demarest at gmail.com Thu Jan 21 21:51:26 2010 From: peter.demarest at gmail.com (Peter Demarest) Date: Thu, 21 Jan 2010 20:51:26 -0600 Subject: [SciPy-dev] SciPy Documentation Message-ID: <953a60fc1001211851h5ed4abchea56085f3cfe1180@mail.gmail.com> Gael, I am able to log in to the front page of the wiki using the using name: pdemarest Peter Peter Demarest demarest at alum.wpi.edu C: 281-954-4243 Message: 2 > Date: Thu, 21 Jan 2010 07:18:14 +0100 > From: Gael Varoquaux > Subject: Re: [SciPy-dev] SciPy Documentation > To: SciPy Developers List > Message-ID: <20100121061814.GA21316 at phare.normalesup.org> > Content-Type: text/plain; charset=utf-8 > > On Wed, Jan 20, 2010 at 09:58:53PM -0600, Peter Demarest wrote: > > I would like to start contributing to the SciPy documentation. ?My > user > > name on the wiki is: pdemarest > > I thought I would start taking a look at the docstings in the optimize > > module. > > Hi, > > Thanks for your interest in the documentation. > > I can't find you in the list of users. How did you create your user? You > must go to http://docs.scipy.org/numpy/Front%20Page/ and click on the > 'log in' link on the top right. > > Cheers, > > Ga? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Fri Jan 22 03:07:39 2010 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 22 Jan 2010 09:07:39 +0100 Subject: [SciPy-dev] SciPy Documentation In-Reply-To: <953a60fc1001211851h5ed4abchea56085f3cfe1180@mail.gmail.com> References: <953a60fc1001211851h5ed4abchea56085f3cfe1180@mail.gmail.com> Message-ID: <20100122080739.GB27110@phare.normalesup.org> Hi Peter, Sorry, I was beeing stupid. I have added you to the editors list. Thanks for your interest. Ga?l On Thu, Jan 21, 2010 at 08:51:26PM -0600, Peter Demarest wrote: > Gael, > I am able to log in to the front page of the wiki using the using name: > pdemarest > Peter > Peter Demarest > [1]demarest at alum.wpi.edu > C: 281-954-4243 > Message: 2 > Date: Thu, 21 Jan 2010 07:18:14 +0100 > From: Gael Varoquaux <[2]gael.varoquaux at normalesup.org> > Subject: Re: [SciPy-dev] SciPy Documentation > To: SciPy Developers List <[3]scipy-dev at scipy.org> > Message-ID: <[4]20100121061814.GA21316 at phare.normalesup.org> > Content-Type: text/plain; charset=utf-8 > On Wed, Jan 20, 2010 at 09:58:53PM -0600, Peter Demarest wrote: > > ? ?I would like to start contributing to the SciPy documentation. ?My > user > > ? ?name on the wiki is: pdemarest > > ? ?I thought I would start taking a look at the docstings in the > optimize > > ? ?module. > Hi, > Thanks for your interest in the documentation. > I can't find you in the list of users. How did you create your user? You > must go to [5]http://docs.scipy.org/numpy/Front%20Page/ and click on the > 'log in' link on the top right. > Cheers, > Ga? > References > Visible links > 1. mailto:demarest at alum.wpi.edu > 2. mailto:gael.varoquaux at normalesup.org > 3. mailto:scipy-dev at scipy.org > 4. mailto:20100121061814.GA21316 at phare.normalesup.org > 5. http://docs.scipy.org/numpy/Front%20Page/ > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -- Gael Varoquaux Research Fellow, INRIA Laboratoire de Neuro-Imagerie Assistee par Ordinateur NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France ++ 33-1-69-08-78-35 http://gael-varoquaux.info From bsouthey at gmail.com Fri Jan 22 10:24:58 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 22 Jan 2010 09:24:58 -0600 Subject: [SciPy-dev] Genetic Algorithms module in scikits.learn In-Reply-To: <4B58247F.9090200@inria.fr> References: <4B58247F.9090200@inria.fr> Message-ID: <4B59C34A.9010102@gmail.com> On 01/21/2010 03:55 AM, Fabian Pedregosa wrote: > Hello. > > I'm trying to maintain and release scikits.learn[1], and for historical > reasons there's a module for genetic algorithms in this package. > > As the module does not quite fit the package's thematic, has references > to scipy.ga (which has moved to scipy.sandbox and is no longer > maintained) and there are already some great genetic algorithm > frameworks in Python, I'm rather inclined to remove this from scikits.learn. > > If there are no objections, I'll remove that module from scikits.learn, > but just wanted to say that I've uploaded the code to github [2] in case > someone finds it useful or is willing to maintain. > > > Thanks, > > ~fabian > > Hi, Great that you are doing this! I have no interest in genetic algorithms as such but know people do use it for machine learning. I would support just depreciating the genetic algorithm component in scikit.learn and then within a release or two you can remove it. While I know it is a scikit, numpy and scipy have a long history of providing adequate notification and learn has been available with it in for quite some time. Ultimately it is your decision not mine but please make it very clear that genetic algorithms have been removed. Regards Bruce From fabian.pedregosa at inria.fr Fri Jan 22 10:45:20 2010 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Fri, 22 Jan 2010 16:45:20 +0100 Subject: [SciPy-dev] Genetic Algorithms module in scikits.learn In-Reply-To: <4B59C34A.9010102@gmail.com> References: <4B58247F.9090200@inria.fr> <4B59C34A.9010102@gmail.com> Message-ID: <4B59C810.7060506@inria.fr> Bruce Southey wrote: > On 01/21/2010 03:55 AM, Fabian Pedregosa wrote: >> Hello. >> >> I'm trying to maintain and release scikits.learn[1], and for historical >> reasons there's a module for genetic algorithms in this package. >> >> As the module does not quite fit the package's thematic, has references >> to scipy.ga (which has moved to scipy.sandbox and is no longer >> maintained) and there are already some great genetic algorithm >> frameworks in Python, I'm rather inclined to remove this from scikits.learn. >> >> If there are no objections, I'll remove that module from scikits.learn, >> but just wanted to say that I've uploaded the code to github [2] in case >> someone finds it useful or is willing to maintain. >> >> >> Thanks, >> >> ~fabian >> >> > Hi, > Great that you are doing this! > > I have no interest in genetic algorithms as such but know people do use > it for machine learning. > > I would support just depreciating the genetic algorithm component in > scikit.learn and then within a release or two you can remove it. While I > know it is a scikit, numpy and scipy have a long history of providing > adequate notification and learn has been available with it in for quite > some time. > Hi Bruce!. Thanks for you interest. I would not mind shipping the genetic algorithm in this release (scheduled in two weeks), but main problem is that it uses scipy.ga, which not there anymore (at least in svn), so I do not want to ship something that completely fails to import and is largely untested ... > Ultimately it is your decision not mine but please make it very clear > that genetic algorithms have been removed. > > Regards > Bruce > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From fabian.pedregosa at inria.fr Fri Jan 22 10:49:38 2010 From: fabian.pedregosa at inria.fr (Fabian Pedregosa) Date: Fri, 22 Jan 2010 16:49:38 +0100 Subject: [SciPy-dev] Ball Tree code updated (ticket 1048) In-Reply-To: <58df6dc21001061448l4dc9ef05x478437a7837d3c32@mail.gmail.com> References: <58df6dc21001061448l4dc9ef05x478437a7837d3c32@mail.gmail.com> Message-ID: <4B59C912.30506@inria.fr> Jake VanderPlas wrote: > Hello, > I have had comments from a few people over the last two months on the > Ball Tree code that I submitted (ticket 1048). I cleaned up the code > a bit and posted the changes on the tracker. Any other comments would > be appreciated! > -Jake > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > Hi!. I'm not a scipy dev, but I'm very interested in your code. I'll take a look at it in the following days and give some feedback. Regards, ~fabian From bsouthey at gmail.com Fri Jan 22 11:25:48 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 22 Jan 2010 10:25:48 -0600 Subject: [SciPy-dev] Genetic Algorithms module in scikits.learn In-Reply-To: <4B59C810.7060506@inria.fr> References: <4B58247F.9090200@inria.fr> <4B59C34A.9010102@gmail.com> <4B59C810.7060506@inria.fr> Message-ID: <4B59D18C.4030607@gmail.com> On 01/22/2010 09:45 AM, Fabian Pedregosa wrote: > Bruce Southey wrote: > >> On 01/21/2010 03:55 AM, Fabian Pedregosa wrote: >> >>> Hello. >>> >>> I'm trying to maintain and release scikits.learn[1], and for historical >>> reasons there's a module for genetic algorithms in this package. >>> >>> As the module does not quite fit the package's thematic, has references >>> to scipy.ga (which has moved to scipy.sandbox and is no longer >>> maintained) and there are already some great genetic algorithm >>> frameworks in Python, I'm rather inclined to remove this from scikits.learn. >>> >>> If there are no objections, I'll remove that module from scikits.learn, >>> but just wanted to say that I've uploaded the code to github [2] in case >>> someone finds it useful or is willing to maintain. >>> >>> >>> Thanks, >>> >>> ~fabian >>> >>> >>> >> Hi, >> Great that you are doing this! >> >> I have no interest in genetic algorithms as such but know people do use >> it for machine learning. >> >> I would support just depreciating the genetic algorithm component in >> scikit.learn and then within a release or two you can remove it. While I >> know it is a scikit, numpy and scipy have a long history of providing >> adequate notification and learn has been available with it in for quite >> some time. >> >> > Hi Bruce!. Thanks for you interest. > > I would not mind shipping the genetic algorithm in this release > (scheduled in two weeks), but main problem is that it uses scipy.ga, > which not there anymore (at least in svn), so I do not want to ship > something that completely fails to import and is largely untested ... > Sorry, I do not realize the status of scipy.ga and that there is an 'import Numeric' as well. So sure just remove it! Bruce From d.l.goldsmith at gmail.com Fri Jan 22 12:40:36 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Fri, 22 Jan 2010 09:40:36 -0800 Subject: [SciPy-dev] SciPy Documentation In-Reply-To: <20100122080739.GB27110@phare.normalesup.org> References: <953a60fc1001211851h5ed4abchea56085f3cfe1180@mail.gmail.com> <20100122080739.GB27110@phare.normalesup.org> Message-ID: <45d1ab481001220940u172d7dbejae57c765fed5a337@mail.gmail.com> On Fri, Jan 22, 2010 at 12:07 AM, Gael Varoquaux < gael.varoquaux at normalesup.org> wrote: > Hi Peter, > > Sorry, I was beeing stupid. I have added you to the editors list. Thanks > for your interest. > > Ga?l > Yes, thanks Peter! David Goldsmith, Technical Editor Olympia, WA > > On Thu, Jan 21, 2010 at 08:51:26PM -0600, Peter Demarest wrote: > > Gael, > > I am able to log in to the front page of the wiki using the using > name: > > pdemarest > > Peter > > Peter Demarest > > [1]demarest at alum.wpi.edu > > C: 281-954-4243 > > > Message: 2 > > Date: Thu, 21 Jan 2010 07:18:14 +0100 > > From: Gael Varoquaux <[2]gael.varoquaux at normalesup.org> > > Subject: Re: [SciPy-dev] SciPy Documentation > > To: SciPy Developers List <[3]scipy-dev at scipy.org> > > Message-ID: <[4]20100121061814.GA21316 at phare.normalesup.org> > > Content-Type: text/plain; charset=utf-8 > > > On Wed, Jan 20, 2010 at 09:58:53PM -0600, Peter Demarest wrote: > > > I would like to start contributing to the SciPy documentation. > ?My > > user > > > name on the wiki is: pdemarest > > > I thought I would start taking a look at the docstings in the > > optimize > > > module. > > > Hi, > > > Thanks for your interest in the documentation. > > > I can't find you in the list of users. How did you create your user? > You > > must go to [5]http://docs.scipy.org/numpy/Front%20Page/ and click > on the > > 'log in' link on the top right. > > > Cheers, > > > Ga? > > > References > > > Visible links > > 1. mailto:demarest at alum.wpi.edu > > 2. mailto:gael.varoquaux at normalesup.org > > 3. mailto:scipy-dev at scipy.org > > 4. mailto:20100121061814.GA21316 at phare.normalesup.org > > 5. http://docs.scipy.org/numpy/Front%20Page/ > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > -- > Gael Varoquaux > Research Fellow, INRIA > Laboratoire de Neuro-Imagerie Assistee par Ordinateur > NeuroSpin/CEA Saclay , Bat 145, 91191 Gif-sur-Yvette France > ++ 33-1-69-08-78-35 > http://gael-varoquaux.info > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tanner at gmx.de Fri Jan 22 15:06:36 2010 From: tanner at gmx.de (Thomas Tanner) Date: Fri, 22 Jan 2010 21:06:36 +0100 Subject: [SciPy-dev] compilation with fort77 Message-ID: <4B5A054C.7060308@gmx.de> Hello, I'm trying to port scipy 0.7.1 to the Maemo platform (Linux/ARM). There is no Fortran compiler for this platform so I have to use fort77 (and f2c) to compile numpy and scipy. However, f2c does not seem to be 100% Fortran77 compatible (or may scipy is not?). I get (only :) two compilation errors. The first one is a trivial type conversion and fixed by my attached patch. the full build output can be found at https://garage.maemo.org/builder/fremantle/python-scipy_0.7.1-1maemo1/armel.build.log.OK.txt I don't know how to fix the second error: compiling Fortran sources Fortran f77 compiler: /usr/bin/f77 -g -Wall -fno-second-underscore -fPIC -O2 -funroll-loops compile options: '-Ibuild/src.linux-armv5tel-2.5 -I/usr/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c' f77:f77: scipy/stats/mvndst.f mvnun: Error on line 76: Declaration error for rho: adjustable dimension on non-argument Error on line 76: Declaration error for infin: adjustable dimension on non-argument Error on line 76: Declaration error for stdev: adjustable dimension on non-argument Error on line 76: Declaration error for nlower: adjustable dimension on non-argument Error on line 76: Declaration error for nupper: adjustable dimension on non-argument Error on line 24: wr_ardecls: nonconstant array size Error on line 24: wr_ardecls: nonconstant array size Error on line 24: wr_ardecls: nonconstant array size Error on line 24: wr_ardecls: nonconstant array size Error on line 24: wr_ardecls: nonconstant array size mvndst: mvndfn: entry mvndnt: mvnlms: covsrt: dkswap: rcswp: dkbvrc: dksmrc: mvnphi: phinvs: bvnmvn: bvu: mvnuni: the problem is that fort77 cannot deal with those variable sized input arguments: SUBROUTINE mvnun(d, n, lower, upper, means, covar, maxpts, & abseps, releps, value, inform) ... integer n, d, infin(d), maxpts, inform, tmpinf double precision lower(d), upper(d), releps, abseps, & error, value, stdev(d), rho(d*(d-1)/2), & covar(d,d), & nlower(d), nupper(d), means(d,n), tmpval integer i, j Could some Fortran expert please help me to make it fort77 compatible? Thanks for your help! best regards, -- Thomas Tanner ------ email: tanner at gmx.de GnuPG: 1024/5924D4DD -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: my.diff URL: From d.l.goldsmith at gmail.com Sat Jan 23 22:52:36 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 23 Jan 2010 19:52:36 -0800 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq Message-ID: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> Is the only difference that which is in the Notes of trimseq: "Do [sic?] not lose the type info if the sequence contains unknown objects"? DG -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Jan 24 01:00:10 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 23 Jan 2010 23:00:10 -0700 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> Message-ID: On Sat, Jan 23, 2010 at 8:52 PM, David Goldsmith wrote: > Is the only difference that which is in the Notes of trimseq: "Do [sic?] > not lose the type info if the sequence contains unknown objects"? > > trimseq works on sequence types and doesn't attempt to turn them into ndarrays, so does less work than trimcoef. trimseq is also used in as_series where trimcoef can't be used because the latter calls as_series itself to convert its arguments to ndarrays. I admit the distinction is not obvious at first glance. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sun Jan 24 01:08:52 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sat, 23 Jan 2010 22:08:52 -0800 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> Message-ID: <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> Do you think a typical user would ever use both? (Or is this an efficiency that most can live w/out? I'm just curious how much we should "explain ourselves" in their docstrings.) DG PS: If I were to use chebyshev as my "template," what would you say is the next most useful/algorithmically-studied polynomial basis to implement? On Sat, Jan 23, 2010 at 10:00 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sat, Jan 23, 2010 at 8:52 PM, David Goldsmith wrote: > >> Is the only difference that which is in the Notes of trimseq: "Do [sic?] >> not lose the type info if the sequence contains unknown objects"? >> >> > trimseq works on sequence types and doesn't attempt to turn them into > ndarrays, so does less work than trimcoef. trimseq is also used in as_series > where trimcoef can't be used because the latter calls as_series itself to > convert its arguments to ndarrays. I admit the distinction is not obvious at > first glance. > > Chuck > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Jan 24 01:42:50 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 23 Jan 2010 23:42:50 -0700 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> Message-ID: On Sat, Jan 23, 2010 at 11:08 PM, David Goldsmith wrote: > Do you think a typical user would ever use both? (Or is this an efficiency > that most can live w/out? I'm just curious how much we should "explain > ourselves" in their docstrings.) > > Hard to say ;) I wrote the docstrings for the helper funtions mostly for my own use and think of those helper functions as private. They are in the standard import just in case anyone wants to do their own stuff. > PS: If I were to use chebyshev as my "template," what would you say is the > next most useful/algorithmically-studied polynomial basis to implement? > > The power/Chebyshev series have the special property that it is easy to multiply/divide them, so the template needs to lose a few features to be useful for functions where that is far more difficult. Multiplication by x should be sufficient for most things, in particular evaluation and conversion to/from other series. Apart from that, I think Legendre polynomials would fit in well. There was a request for Hermite polynomials, which shouldn't be difficult in principle, but perhaps more so in practice because there are two versions that go under that name but have different scalings. It is also more difficult to assign a fixed domain for them because the domain essentially expands with the degree. But I don't think those difficulties are fundamental. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Sun Jan 24 02:44:20 2010 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 24 Jan 2010 02:44:20 -0500 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> Message-ID: 2010/1/24 David Goldsmith : > PS: If I were to use chebyshev as my "template," what would you say is the > next most useful/algorithmically-studied polynomial basis to implement? There was extensive (and occasionally heated) discussion of other polynomial representations around the time the Chebyshev routines were being introduced. My point of view in that discussion was that there should be a general framework for working with polynomials in many representations, but the representations I thought might be worth having were: (a) Power basis. (b) Chebyshev basis. (c) Bases of other families of orthogonal polynomials. (d) Lagrange basis (polynomials by value). (e) Spline basis. The need for polynomials expressed in terms of other families of orthogonal polynomials is to some degree alleviated by the improved orthogonal polynomial support that came in a little after the discussion. Polynomials by value are a useful tool; if you choose the right evaluation points they are competitive with Chebyshev polynomials for many purposes, and they can do other things as well. The spline basis would be nice, in that it would give people good tools for manipulating functions represented by splines, but the issues of numerical instability with degree raising and lowering suggest to me that they're not going to be that useful as a generic polynomial library. So I think my vote would be for polynomials by value. Not that I'm unbiased! I have a mostly-functional implementation: http://github.com/aarchiba/scikits.polynomial I can't vouch for its consistency with the current implementations, or its completeness; it's been a while since I worked on it. Anne From charlesr.harris at gmail.com Sun Jan 24 09:55:02 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 24 Jan 2010 07:55:02 -0700 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> Message-ID: On Sun, Jan 24, 2010 at 12:44 AM, Anne Archibald wrote: > 2010/1/24 David Goldsmith : > > > PS: If I were to use chebyshev as my "template," what would you say is > the > > next most useful/algorithmically-studied polynomial basis to implement? > > There was extensive (and occasionally heated) discussion of other > polynomial representations around the time the Chebyshev routines were > being introduced. My point of view in that discussion was that there > should be a general framework for working with polynomials in many > representations, but the representations I thought might be worth > having were: > > (a) Power basis. > (b) Chebyshev basis. > (c) Bases of other families of orthogonal polynomials. > (d) Lagrange basis (polynomials by value). > (e) Spline basis. > > The need for polynomials expressed in terms of other families of > orthogonal polynomials is to some degree alleviated by the improved > orthogonal polynomial support that came in a little after the > discussion. Polynomials by value are a useful tool; if you choose the > right evaluation points they are competitive with Chebyshev > polynomials for many purposes, and they can do other things as well. > Speaking of polynomials by value, I have some (cython) routines for barycentric interpolation of trigonometric polynomials I wanted to add to your barycentric work but it seemed that some reorganization of the interpolation folder with maybe some renaming might be in order. I was thinking of a separate barycentric folder. Also, I think the name polyint could maybe be changed to something more suggestive of the contents. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sun Jan 24 04:09:09 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 24 Jan 2010 01:09:09 -0800 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> Message-ID: <45d1ab481001240109o100b2f0eo78d6a5055e5487f6@mail.gmail.com> fromCharles R Harris On Sat, Jan 23, 2010 at 11:08 PM, David Goldsmith wrote: > Do you think a typical user would ever use both? (Or is this an efficiency > that most can live w/out? I'm just curious how much we should "explain > ourselves" in their docstrings.) > > > Hard to say ;) I wrote the docstrings for the helper funtions And in this case the "helper function" is the trimseq, correct? > mostly for my own use and think of those helper functions as private. They are in the standard import just in case anyone wants > to do their own stuff. > PS: If I were to use chebyshev as my "template," what would you say is the > next most useful/algorithmically-studied polynomial basis to implement? > > > The power/Chebyshev series have the special property that it is easy to multiply/divide them, so the template needs to lose a > few features to be useful for functions where that is far more difficult. Yeah, that's what I meant by "algorithmically-studied": AFAYK, numericists haven't derived/discovered nearly as efficient "tricks" for operating on the other orthos/classes as they have for the standard and Chebyshev bases? BTW: on the subject of "numerical tricks," are there such for trigonometric polynomials? > Multiplication by x should be sufficient for most things, in Which, in the standard basis at least, is just a prepending of a zero, of course... > particular evaluation and conversion to/from other series. > Apart from that, I think Legendre polynomials would fit in well. There That was my first guess as to what to do next. > was a request for Hermite polynomials, Hermite requester: if you're reading this, did you already "roll your own"? > which shouldn't be difficult in principle, but perhaps more so in practice because there > are two versions that go under that name but have different scalings. It is also more difficult to assign a fixed domain for them > because the domain essentially expands with the degree. But I don't think those difficulties are fundamental. No, I agree (I think): the issue is more one of "given a particular basis, what can one do to stay in the basis while only manipulating the coefficients" - if your multiplying Legendre polynomials, e.g., the natural default is that you want the result to also be Legendre. If good algorithms for this have been worked out for standard and Cheby, but no others...well, at the very least, it helps me see why you stopped where you did. ;-) > Chuck On Sat, Jan 23, 2010 at 11:44 PM, Anne Archibald wrote: > 2010/1/24 David Goldsmith : > > > PS: If I were to use chebyshev as my "template," what would you say is > the > > next most useful/algorithmically-studied polynomial basis to implement? > > There was extensive (and occasionally heated) discussion of other > polynomial representations around the time the Chebyshev routines were > being introduced. My point of view in that discussion was that there > should be a general framework for working with polynomials in many > representations, but the representations I thought might be worth > having were: > > (a) Power basis. > (b) Chebyshev basis. > (c) Bases of other families of orthogonal polynomials. > (d) Lagrange basis (polynomials by value). > (e) Spline basis. > > The need for polynomials expressed in terms of other families of > orthogonal polynomials is to some degree alleviated by the improved > orthogonal polynomial support that came in a little after the > discussion. Polynomials by value are a useful tool; This was my other leading candidate for "Next!" > if you choose the > right evaluation points they are competitive with Chebyshev > polynomials for many purposes, and they can do other things as well. > The spline basis would be nice, in that it would give people good > tools for manipulating functions represented by splines, but the > issues of numerical instability with degree raising and lowering > suggest to me that they're not going to be that useful as a generic > polynomial library. > > So I think my vote would be for polynomials by value. Not that I'm > unbiased! I have a mostly-functional implementation: > http://github.com/aarchiba/scikits.polynomial > I can't vouch for its consistency with the current implementations, or > its completeness; it's been a while since I worked on it. > Great, I'll give it a look-see! :-) DG > > Anne > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Jan 24 16:10:52 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 24 Jan 2010 14:10:52 -0700 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: <45d1ab481001240109o100b2f0eo78d6a5055e5487f6@mail.gmail.com> References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> <45d1ab481001240109o100b2f0eo78d6a5055e5487f6@mail.gmail.com> Message-ID: On Sun, Jan 24, 2010 at 2:09 AM, David Goldsmith wrote: > from > Charles R Harris > > On Sat, Jan 23, 2010 at 11:08 PM, David Goldsmith > wrote: > >> Do you think a typical user would ever use both? (Or is this an >> efficiency that most can live w/out? I'm just curious how much we should >> "explain ourselves" in their docstrings.) >> >> > > Hard to say ;) I wrote the docstrings for the helper funtions > > And in this case the "helper function" is the trimseq, correct? > > > Yes, and pretty much the rest of the functions in polyutils, but trimseq is sort of lower level than the others. > mostly for my own use and think of those helper functions as private. They > are in the standard import just in case anyone wants > > to do their own stuff. > > >> PS: If I were to use chebyshev as my "template," what would you say is the >> next most useful/algorithmically-studied polynomial basis to implement? >> >> > > The power/Chebyshev series have the special property that it is easy to > multiply/divide them, so the template needs to lose a > > few features to be useful for functions where that is far more difficult. > > > Yeah, that's what I meant by "algorithmically-studied": AFAYK, numericists > haven't derived/discovered nearly as efficient "tricks" for operating on the > other orthos/classes as they have for the standard and Chebyshev bases? > BTW: on the subject of "numerical tricks," are there such for trigonometric > polynomials? > > Trigonometric polynomials could pretty much follow the Chebyshev pattern, they are essentially the z-series. The trick is to decide how to represent the coefficients. The complex exponential form is easy to work with but not so easy to enter as data, the sin/cos version is easier in that respect but effectively requires two sets of coefficients. The main virtue of such a trigonometric series relative to using an fft is that the sample/interpolation points can be more general. The drawback is that the fft is much faster for large degree. > > Multiplication by x should be sufficient for most things, in > > Which, in the standard basis at least, is just a prepending of a zero, of > course... > > > > particular evaluation and conversion to/from other series. > > Apart from that, I think Legendre polynomials would fit in well. There > > That was my first guess as to what to do next. > > > > was a request for Hermite polynomials, > > Hermite requester: if you're reading this, did you already "roll your own"? > > > > which shouldn't be difficult in principle, but perhaps more so in > practice because there > > are two versions that go under that name but have different scalings. It > is also more difficult to assign a fixed domain for them > > because the domain essentially expands with the degree. But I don't think > those difficulties are fundamental. > > No, I agree (I think): the issue is more one of "given a particular basis, > what can one do to stay in the basis while only manipulating the > coefficients" - if your multiplying Legendre polynomials, e.g., the natural > default is that you want the result to also be Legendre. If good algorithms > for this have been worked out for standard and Cheby, but no others...well, > at the very least, it helps me see why you stopped where you did. ;-) > > > Chuck > > On Sat, Jan 23, 2010 at 11:44 PM, Anne Archibald < > peridot.faceted at gmail.com> wrote: > >> 2010/1/24 David Goldsmith : >> >> > PS: If I were to use chebyshev as my "template," what would you say is >> the >> > next most useful/algorithmically-studied polynomial basis to implement? >> >> There was extensive (and occasionally heated) discussion of other >> polynomial representations around the time the Chebyshev routines were >> being introduced. My point of view in that discussion was that there >> should be a general framework for working with polynomials in many >> representations, but the representations I thought might be worth >> having were: >> >> (a) Power basis. >> (b) Chebyshev basis. >> (c) Bases of other families of orthogonal polynomials. >> (d) Lagrange basis (polynomials by value). >> (e) Spline basis. >> >> The need for polynomials expressed in terms of other families of >> orthogonal polynomials is to some degree alleviated by the improved >> orthogonal polynomial support that came in a little after the >> discussion. Polynomials by value are a useful tool; > > > This was my other leading candidate for "Next!" > > >> if you choose the >> right evaluation points they are competitive with Chebyshev >> polynomials for many purposes, and they can do other things as well. >> The spline basis would be nice, in that it would give people good >> tools for manipulating functions represented by splines, but the >> issues of numerical instability with degree raising and lowering >> suggest to me that they're not going to be that useful as a generic >> polynomial library. >> >> So I think my vote would be for polynomials by value. Not that I'm >> unbiased! I have a mostly-functional implementation: >> http://github.com/aarchiba/scikits.polynomial >> I can't vouch for its consistency with the current implementations, or >> its completeness; it's been a while since I worked on it. >> > > Polynomials by value would be a valuable addition. But I'm thinking the framework should specific to that problem and not try to be more general. It's a tradeoff between simplicity and generality and I incline towards simplicity here along with numerical speed. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sun Jan 24 17:04:45 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 24 Jan 2010 14:04:45 -0800 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> <45d1ab481001240109o100b2f0eo78d6a5055e5487f6@mail.gmail.com> Message-ID: <45d1ab481001241404r32abeef8m1718fa2a75c016a0@mail.gmail.com> On Sun, Jan 24, 2010 at 1:10 PM, Charles R Harris wrote: > > On Sun, Jan 24, 2010 at 2:09 AM, David Goldsmith wrote: > >> from >> Charles R Harris >> >> On Sat, Jan 23, 2010 at 11:08 PM, David Goldsmith < >> d.l.goldsmith at gmail.com> wrote: >> >>> Do you think a typical user would ever use both? (Or is this an >>> efficiency that most can live w/out? I'm just curious how much we should >>> "explain ourselves" in their docstrings.) >>> >>> > Hard to say ;) I wrote the docstrings for the helper funtions >> >> And in this case the "helper function" is the trimseq, correct? >> >> Yes, and pretty much the rest of the functions in polyutils, but trimseq > is sort of lower level than the others. > OK, thanks for the clarification. > > mostly for my own use and think of those helper functions as private. >> They are in the standard import just in case anyone wants >> > to do their own stuff. >> >> >>> PS: If I were to use chebyshev as my "template," what would you say is >>> the next most useful/algorithmically-studied polynomial basis to implement? >> >> > The power/Chebyshev series have the special property that it is easy to >> multiply/divide them, so the template needs to lose a >> > few features to be useful for functions where that is far more >> difficult. >> >> Yeah, that's what I meant by "algorithmically-studied": AFAYK, numericists >> haven't derived/discovered nearly as efficient "tricks" for operating on the >> other orthos/classes as they have for the standard and Chebyshev bases? >> BTW: on the subject of "numerical tricks," are there such for trigonometric >> polynomials? > > > Trigonometric polynomials could pretty much follow the Chebyshev pattern, > they are essentially the z-series. The trick is to decide how to represent > the coefficients. The complex exponential form is easy to work with but not > so easy to enter as data, the sin/cos version is easier in that respect but > effectively requires two sets of coefficients. > Sounds like the ideal sit. would be to implement both w/ go between functions. > The main virtue of such a trigonometric series relative to using an fft is > that the sample/interpolation points can be more general. > That, and pedagogical purposes (trigonometric poly's are still taught in various contexts, aren't they? Plus instructors might like to have both implemented to illustrate the relationships and relative advantages. You can see where I'm coming from: part of what I consider to be my charge is to assure some suitability of NumPy/SciPy as an instructional tool, not just a research/professional tool.) > The drawback is that the fft is much faster for large degree. > Of course, thus the name. ;-) > Polynomials by value would be a valuable addition. But I'm thinking the > framework should specific to that problem and not try to be more general. > It's a tradeoff between simplicity and generality and I incline towards > simplicity here along with numerical speed. > Is this a case where that relationship doesn't lend itself to class/subclass? (E.g., because the implementation details are vastly different to achieve the speed gains of the specialized?) DG > > Chuck > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Jan 24 17:50:32 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 24 Jan 2010 15:50:32 -0700 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: <45d1ab481001241404r32abeef8m1718fa2a75c016a0@mail.gmail.com> References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> <45d1ab481001240109o100b2f0eo78d6a5055e5487f6@mail.gmail.com> <45d1ab481001241404r32abeef8m1718fa2a75c016a0@mail.gmail.com> Message-ID: On Sun, Jan 24, 2010 at 3:04 PM, David Goldsmith wrote: > On Sun, Jan 24, 2010 at 1:10 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> On Sun, Jan 24, 2010 at 2:09 AM, David Goldsmith > > wrote: >> >>> from >>> Charles R Harris >>> >>> On Sat, Jan 23, 2010 at 11:08 PM, David Goldsmith < >>> d.l.goldsmith at gmail.com> wrote: >>> >>>> Do you think a typical user would ever use both? (Or is this an >>>> efficiency that most can live w/out? I'm just curious how much we should >>>> "explain ourselves" in their docstrings.) >>>> >>>> > Hard to say ;) I wrote the docstrings for the helper funtions >>> >>> And in this case the "helper function" is the trimseq, correct? >>> >>> Yes, and pretty much the rest of the functions in polyutils, but trimseq >> is sort of lower level than the others. >> > > OK, thanks for the clarification. > > >> > mostly for my own use and think of those helper functions as private. >>> They are in the standard import just in case anyone wants >>> > to do their own stuff. >>> >>> >>>> PS: If I were to use chebyshev as my "template," what would you say is >>>> the next most useful/algorithmically-studied polynomial basis to implement? >>> >>> > The power/Chebyshev series have the special property that it is easy >>> to multiply/divide them, so the template needs to lose a >>> > few features to be useful for functions where that is far more >>> difficult. >>> >>> Yeah, that's what I meant by "algorithmically-studied": AFAYK, >>> numericists haven't derived/discovered nearly as efficient "tricks" for >>> operating on the other orthos/classes as they have for the standard and >>> Chebyshev bases? BTW: on the subject of "numerical tricks," are there such >>> for trigonometric polynomials? >> >> >> Trigonometric polynomials could pretty much follow the Chebyshev pattern, >> they are essentially the z-series. The trick is to decide how to represent >> the coefficients. The complex exponential form is easy to work with but not >> so easy to enter as data, the sin/cos version is easier in that respect but >> effectively requires two sets of coefficients. >> > > Sounds like the ideal sit. would be to implement both w/ go between > functions. > > Yeah, that could be done. The template approach is a bit of a stunt for just the two polynomial types, but maybe it will justify itself if the trigonometric polynomials are added. Hmm.... > The main virtue of such a trigonometric series relative to using an fft is >> that the sample/interpolation points can be more general. >> > > That, and pedagogical purposes (trigonometric poly's are still taught in > various contexts, aren't they? Plus instructors might like to have both > implemented to illustrate the relationships and relative advantages. You > can see where I'm coming from: part of what I consider to be my charge is to > assure some suitability of NumPy/SciPy as an instructional tool, not just a > research/professional tool.) > > >> The drawback is that the fft is much faster for large degree. >> > > Of course, thus the name. ;-) > > >> Polynomials by value would be a valuable addition. But I'm thinking the >> framework should specific to that problem and not try to be more general. >> It's a tradeoff between simplicity and generality and I incline towards >> simplicity here along with numerical speed. >> > > Is this a case where that relationship doesn't lend itself to > class/subclass? (E.g., because the implementation details are vastly > different to achieve the speed gains of the specialized?) > > Looking back on the discussion, I think we were shooting for too much generality in a single implementation. An more productive approach might be to treat each area -- graded polynomials, lagrange, Bernstein, and B-splines -- in a way most appropriate to each. The experience gained in such an approach could help us if at some point a generalized framework looks desirable. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.l.goldsmith at gmail.com Sun Jan 24 17:54:05 2010 From: d.l.goldsmith at gmail.com (David Goldsmith) Date: Sun, 24 Jan 2010 14:54:05 -0800 Subject: [SciPy-dev] Difference between polynomial.trimcoef and trimseq In-Reply-To: References: <45d1ab481001231952t274d5139g63c35113dd3993f4@mail.gmail.com> <45d1ab481001232208i3753ae19ma0c93f1b8f83861@mail.gmail.com> <45d1ab481001240109o100b2f0eo78d6a5055e5487f6@mail.gmail.com> <45d1ab481001241404r32abeef8m1718fa2a75c016a0@mail.gmail.com> Message-ID: <45d1ab481001241454i14b9b910y2d5c95c11485a59d@mail.gmail.com> On Sun, Jan 24, 2010 at 2:50 PM, Charles R Harris wrote: > > On Sun, Jan 24, 2010 at 3:04 PM, David Goldsmith wrote: > >> On Sun, Jan 24, 2010 at 1:10 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> On Sun, Jan 24, 2010 at 2:09 AM, David Goldsmith < >>> d.l.goldsmith at gmail.com> wrote: >>> >>>> from >>>> Charles R Harris >>>> >>>> On Sat, Jan 23, 2010 at 11:08 PM, David Goldsmith < >>>> d.l.goldsmith at gmail.com> wrote: >>>> >>>>> Do you think a typical user would ever use both? (Or is this an >>>>> efficiency that most can live w/out? I'm just curious how much we should >>>>> "explain ourselves" in their docstrings.) >>>>> >>>>> > Hard to say ;) I wrote the docstrings for the helper funtions >>>> >>>> And in this case the "helper function" is the trimseq, correct? >>>> >>>> Yes, and pretty much the rest of the functions in polyutils, but trimseq >>> is sort of lower level than the others. >>> >> >> OK, thanks for the clarification. >> >> >>> > mostly for my own use and think of those helper functions as private. >>>> They are in the standard import just in case anyone wants >>>> > to do their own stuff. >>>> >>>> >>>>> PS: If I were to use chebyshev as my "template," what would you say is >>>>> the next most useful/algorithmically-studied polynomial basis to implement? >>>> >>>> > The power/Chebyshev series have the special property that it is easy >>>> to multiply/divide them, so the template needs to lose a >>>> > few features to be useful for functions where that is far more >>>> difficult. >>>> >>>> Yeah, that's what I meant by "algorithmically-studied": AFAYK, >>>> numericists haven't derived/discovered nearly as efficient "tricks" for >>>> operating on the other orthos/classes as they have for the standard and >>>> Chebyshev bases? BTW: on the subject of "numerical tricks," are there such >>>> for trigonometric polynomials? >>> >>> >>> Trigonometric polynomials could pretty much follow the Chebyshev pattern, >>> they are essentially the z-series. The trick is to decide how to represent >>> the coefficients. The complex exponential form is easy to work with but not >>> so easy to enter as data, the sin/cos version is easier in that respect but >>> effectively requires two sets of coefficients. >>> >> >> Sounds like the ideal sit. would be to implement both w/ go between >> functions. >> >> > > Yeah, that could be done. The template approach is a bit of a stunt for > just the two polynomial types, but maybe it will justify itself if the > trigonometric polynomials are added. Hmm.... > > >> The main virtue of such a trigonometric series relative to using an fft is >>> that the sample/interpolation points can be more general. >>> >> >> That, and pedagogical purposes (trigonometric poly's are still taught in >> various contexts, aren't they? Plus instructors might like to have both >> implemented to illustrate the relationships and relative advantages. You >> can see where I'm coming from: part of what I consider to be my charge is to >> assure some suitability of NumPy/SciPy as an instructional tool, not just a >> research/professional tool.) >> >> >>> The drawback is that the fft is much faster for large degree. >>> >> >> Of course, thus the name. ;-) >> >> >>> Polynomials by value would be a valuable addition. But I'm thinking the >>> framework should specific to that problem and not try to be more general. >>> It's a tradeoff between simplicity and generality and I incline towards >>> simplicity here along with numerical speed. >>> >> >> Is this a case where that relationship doesn't lend itself to >> class/subclass? (E.g., because the implementation details are vastly >> different to achieve the speed gains of the specialized?) >> >> > Looking back on the discussion, I think we were shooting for too much > generality in a single implementation. An more productive approach might be > to treat each area -- graded polynomials, lagrange, Bernstein, and B-splines > -- in a way most appropriate to each. The experience gained in such an > approach could help us if at some point a generalized framework looks > desirable. > Gotchya, sounds reasonable. DG > > Chuck > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at silveregg.co.jp Sun Jan 24 19:48:09 2010 From: david at silveregg.co.jp (David Cournapeau) Date: Mon, 25 Jan 2010 09:48:09 +0900 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <4B5A054C.7060308@gmx.de> References: <4B5A054C.7060308@gmx.de> Message-ID: <4B5CEA49.3040604@silveregg.co.jp> Thomas Tanner wrote: > Hello, > > I'm trying to port scipy 0.7.1 to the Maemo platform (Linux/ARM). > There is no Fortran compiler for this platform so I have to use > fort77 (and f2c) to compile numpy and scipy. > However, f2c does not seem to be 100% Fortran77 compatible (or may scipy > is not?). I get (only :) two compilation errors. > The first one is a trivial type conversion and fixed by my attached patch. > the full build output can be found at > https://garage.maemo.org/builder/fremantle/python-scipy_0.7.1-1maemo1/armel.build.log.OK.txt > > I don't know how to fix the second error: > > compiling Fortran sources > Fortran f77 compiler: /usr/bin/f77 -g -Wall -fno-second-underscore -fPIC > -O2 -funroll-loops > compile options: '-Ibuild/src.linux-armv5tel-2.5 > -I/usr/lib/python2.5/site-packages/numpy/core/include > -I/usr/include/python2.5 -c' > f77:f77: scipy/stats/mvndst.f > mvnun: > Error on line 76: Declaration error for rho: adjustable dimension on > non-argument > Error on line 76: Declaration error for infin: adjustable dimension on > non-argument > Error on line 76: Declaration error for stdev: adjustable dimension on > non-argument > Error on line 76: Declaration error for nlower: adjustable dimension on > non-argument > Error on line 76: Declaration error for nupper: adjustable dimension on > non-argument > Error on line 24: wr_ardecls: nonconstant array size > Error on line 24: wr_ardecls: nonconstant array size > Error on line 24: wr_ardecls: nonconstant array size > Error on line 24: wr_ardecls: nonconstant array size > Error on line 24: wr_ardecls: nonconstant array size > mvndst: > mvndfn: > entry mvndnt: > mvnlms: > covsrt: > dkswap: > rcswp: > dkbvrc: > dksmrc: > mvnphi: > phinvs: > bvnmvn: > bvu: > mvnuni: > > the problem is that fort77 cannot deal with those variable sized > input arguments: > > SUBROUTINE mvnun(d, n, lower, upper, means, covar, maxpts, > & abseps, releps, value, inform) > ... > integer n, d, infin(d), maxpts, inform, tmpinf > double precision lower(d), upper(d), releps, abseps, > & error, value, stdev(d), rho(d*(d-1)/2), > & covar(d,d), > & nlower(d), nupper(d), means(d,n), tmpval > integer i, j > > Could some Fortran expert please help me to make it fort77 compatible? This file is using some F90/F95 - I am not sure what the policy is on this point, but don't think we want to guarantee that we will never use fortran > F77. Can't you get gfortran running on your platform ? Cross-compiling gfortran for your platform should not be difficult David From tanner at gmx.de Mon Jan 25 18:14:26 2010 From: tanner at gmx.de (Thomas Tanner) Date: Tue, 26 Jan 2010 00:14:26 +0100 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <4B5CEA49.3040604@silveregg.co.jp> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> Message-ID: <4B5E25D2.7010604@gmx.de> Hello, David Cournapeau wrote: >> Could some Fortran expert please help me to make it fort77 compatible? > This file is using some F90/F95 - I am not sure what the policy is on > this point, but don't think we want to guarantee that we will never use > fortran > F77. it's currently the only function in scipy that is not F77 compatible. It could be very simple to achieve F77 compatibility at least for the next release by fixing it. > Can't you get gfortran running on your platform ? Cross-compiling > gfortran for your platform should not be difficult Unfortunately, no. that's nearly impossible. I have to rely on a device specific gcc toolchain for which I don't have access to the sources or configuration. I guess fixing this one function should be much less effort. best, -- Thomas Tanner ------ email: tanner at gmx.de GnuPG: 1024/5924D4DD From cournape at gmail.com Mon Jan 25 20:04:11 2010 From: cournape at gmail.com (David Cournapeau) Date: Tue, 26 Jan 2010 10:04:11 +0900 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <4B5E25D2.7010604@gmx.de> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> Message-ID: <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> On Tue, Jan 26, 2010 at 8:14 AM, Thomas Tanner wrote: > Unfortunately, no. that's nearly impossible. > I have to rely on a device specific gcc toolchain for which I don't > have access to the sources or configuration. Maybe I misunderstand your configuration, but that sounds like it would violate the gcc license as it is. Compiling your own toolchain is not that difficult. I thought maemo was based on Debian ? Debian has a gfortran compiler for both X86 and ARMEL, which are the archs supported by maemo, right ? > I guess fixing this one function should be much less effort. the problem is not so much fixing this function as much as scipy to only depend on F77 code. Although much of the fortran legacy is in F77, a lot of more recent libraries use F90 or F95. cheers, David From tanner at gmx.de Tue Jan 26 02:30:52 2010 From: tanner at gmx.de (Thomas Tanner) Date: Tue, 26 Jan 2010 08:30:52 +0100 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> Message-ID: <4B5E9A2C.4060508@gmx.de> David Cournapeau wrote: >> Unfortunately, no. that's nearly impossible. >> I have to rely on a device specific gcc toolchain for which I don't >> have access to the sources or configuration. > Maybe I misunderstand your configuration, but that sounds like it > would violate the gcc license as it is. Compiling your own toolchain > is not that difficult. the sources for all components may be available but patching, fine-tuning, building and verifying that it works for on a specific platform it is not trivial. That's why companies like CodeSourcery exist. Also consider the effort and the risk of getting something wrong in a few lines for Fortran vs. millions lines of code of a toolchain. > I thought maemo was based on Debian ? Debian has a gfortran compiler > for both X86 and ARMEL, which are the archs supported by maemo, right The SDK and autobuilder for Maemo use the scratchbox environment for compilation, which ships a gcc toolchain for both X86 and ARMEL. You cannot modify those toolchains from within scratchbox. I'd have to build and upload a complete toolchain (and make sure that it does not conflict with the main toolchain) just to compile a single function!? Isn't that breaking a butterfly on a wheel? >> I guess fixing this one function should be much less effort. > the problem is not so much fixing this function as much as scipy to > only depend on F77 code. Although much of the fortran legacy is in > F77, a lot of more recent libraries use F90 or F95. ?? 100% of scipy's dependencies and 99% of scipy itself work fine with F77. there's no need to require scipy to be F77 compatible for all future releases but we are so close to that goal for the current version. Is there really no Fortran programmer who could fix that function? best, -- Thomas Tanner ------ email: tanner at gmx.de GnuPG: 1024/5924D4DD From josef.pktd at gmail.com Tue Jan 26 02:53:44 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Jan 2010 02:53:44 -0500 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <4B5E9A2C.4060508@gmx.de> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> Message-ID: <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> On Tue, Jan 26, 2010 at 2:30 AM, Thomas Tanner wrote: > David Cournapeau wrote: >>> Unfortunately, no. that's nearly impossible. >>> I have to rely on a device specific gcc toolchain for which I don't >>> have access to the sources or configuration. >> Maybe I misunderstand your configuration, but that sounds like it >> would violate the gcc license as it is. Compiling your own toolchain >> is not that difficult. > > the sources for all components may be available but patching, > fine-tuning, building and verifying that it works for on a specific > platform it is not trivial. That's why companies like > CodeSourcery exist. Also consider the effort and the risk of getting > something wrong in a few lines for Fortran vs. millions lines of code of > a toolchain. > >> I thought maemo was based on Debian ? Debian has a gfortran compiler >> for both X86 and ARMEL, which are the archs supported by maemo, right > > The SDK and autobuilder for Maemo use the scratchbox environment for > compilation, which ships a gcc toolchain for both X86 and ARMEL. > You cannot modify those toolchains from within scratchbox. > I'd have to build and upload a complete toolchain (and make sure that it > does not conflict with the main toolchain) just to compile a single > function!? Isn't that breaking a butterfly on a wheel? > >>> I guess fixing this one function should be much less effort. >> the problem is not so much fixing this function as much as scipy to >> only depend on F77 code. Although much of the fortran legacy is in >> F77, a lot of more recent libraries use F90 or F95. > > ?? 100% of scipy's dependencies and 99% of scipy itself work fine with > F77. there's no need to require scipy to be F77 compatible for all > future releases but we are so close to that goal for the current version. > > Is there really no Fortran programmer who could fix that function? I'm no help for the original question and changing fortran code. However, scipy still builds with g77 in MingW 3.4.5 and I thought mvndst is the fortran 77 version of http://www.math.wsu.edu/faculty/genz/homepage I also build his cdf for the multivariate t distribution ( fortran 77 version) without much problems with f2py and g77 To the original question, if mvndst is the only problem, one possibility is to remove (or not build) scipy.stats.kde. As far as I know, no other part of scipy is using mvndst yet. Josef > > best, > -- > Thomas Tanner ------ > email: tanner at gmx.de > GnuPG: 1024/5924D4DD > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From tanner at gmx.de Tue Jan 26 03:05:32 2010 From: tanner at gmx.de (Thomas Tanner) Date: Tue, 26 Jan 2010 09:05:32 +0100 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> Message-ID: <4B5EA24C.20200@gmx.de> Hi Josef, josef.pktd at gmail.com wrote: > However, scipy still builds with g77 in MingW 3.4.5 and I thought > mvndst is the fortran 77 version of > http://www.math.wsu.edu/faculty/genz/homepage the routine MVNUN is a modification by Enthought Inc. * Note: The test program has been removed and a utlity routine mvnun has been * added. RTK 2004-08-10 * * Copyright 2000 by Alan Genz. * Copyright 2004-2005 by Enthought, Inc. * * The subroutine MVNUN is copyrighted by Enthought, Inc. scipy ignores that error while building > To the original question, if mvndst is the only problem, one > possibility is to remove (or not build) scipy.stats.kde. As far as I > know, no other part of scipy is using mvndst yet. I'd like to use stats.kde :) best regards, -- Thomas Tanner ------ email: tanner at gmx.de GnuPG: 1024/5924D4DD From josef.pktd at gmail.com Tue Jan 26 03:16:37 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Jan 2010 03:16:37 -0500 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <4B5EA24C.20200@gmx.de> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> Message-ID: <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> On Tue, Jan 26, 2010 at 3:05 AM, Thomas Tanner wrote: > Hi Josef, > > josef.pktd at gmail.com wrote: >> However, scipy still builds with g77 in MingW 3.4.5 and I thought >> mvndst is the fortran 77 version of >> http://www.math.wsu.edu/faculty/genz/homepage > > the routine MVNUN is a modification by Enthought Inc. > > * Note: The test program has been removed and a utlity routine mvnun has > been > * added. ?RTK 2004-08-10 > * > * Copyright 2000 by Alan Genz. > * Copyright 2004-2005 by Enthought, Inc. > * > * The subroutine MVNUN is copyrighted by Enthought, Inc. Yes, I also just checked following your original line numbers. mvnun is just a short wrapper function that looks like it could be replaced by a python function. For http://projects.scipy.org/scipy/ticket/846 I access mvndst directly, I think. I don't remember how much I checked the usage of mvnun in kde. Josef > > scipy ignores that error while building > >> To the original question, if mvndst is the only problem, one >> possibility is to remove (or not build) scipy.stats.kde. As far as I >> know, no other part of scipy is using mvndst yet. > > I'd like to use stats.kde :) > > best regards, > -- > Thomas Tanner ------ > email: tanner at gmx.de > GnuPG: 1024/5924D4DD > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Tue Jan 26 03:54:20 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Jan 2010 03:54:20 -0500 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> Message-ID: <1cd32cbb1001260054u98430cch5bbe1be4cb804c1b@mail.gmail.com> On Tue, Jan 26, 2010 at 3:16 AM, wrote: > On Tue, Jan 26, 2010 at 3:05 AM, Thomas Tanner wrote: >> Hi Josef, >> >> josef.pktd at gmail.com wrote: >>> However, scipy still builds with g77 in MingW 3.4.5 and I thought >>> mvndst is the fortran 77 version of >>> http://www.math.wsu.edu/faculty/genz/homepage >> >> the routine MVNUN is a modification by Enthought Inc. >> >> * Note: The test program has been removed and a utlity routine mvnun has >> been >> * added. ?RTK 2004-08-10 >> * >> * Copyright 2000 by Alan Genz. >> * Copyright 2004-2005 by Enthought, Inc. >> * >> * The subroutine MVNUN is copyrighted by Enthought, Inc. > > Yes, I also just checked following your original line numbers. mvnun > is just a short wrapper function that looks like it could be replaced > by a python function. Or maybe not, the second part of mvnun is a heavy duty loop over all points in the data, so there is a strong reason to keep it in fortran. Since I don't know type details in fortran, I don't know how difficult it would be to move the creation of the temp variables of mvnun that includes the variable dimensions declarations into a python function. But looking at the fortran code, I have no idea, mvndist uses the similar variables CORREL(*) instead of rho(d*(d-1)/2) How are temp variables with flexible dimension defined in fortran77? (If I had to fix it, knowing as little fortran as I do, I would just add all offending temp variables into the argument list and create them in python. Since kde is the only user of mvnun, it shouldn't be to difficult to change) Josef " http://en.wikipedia.org/wiki/Infinite_monkey_theorem " > > For http://projects.scipy.org/scipy/ticket/846 I access mvndst > directly, I think. I don't remember how much I checked the usage of > mvnun in kde. > > Josef > > > >> >> scipy ignores that error while building >> >>> To the original question, if mvndst is the only problem, one >>> possibility is to remove (or not build) scipy.stats.kde. As far as I >>> know, no other part of scipy is using mvndst yet. >> >> I'd like to use stats.kde :) >> >> best regards, >> -- >> Thomas Tanner ------ >> email: tanner at gmx.de >> GnuPG: 1024/5924D4DD >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev >> > From pav+sp at iki.fi Tue Jan 26 04:22:21 2010 From: pav+sp at iki.fi (Pauli Virtanen) Date: Tue, 26 Jan 2010 09:22:21 +0000 (UTC) Subject: [SciPy-dev] compilation with fort77 References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> Message-ID: Tue, 26 Jan 2010 08:30:52 +0100, Thomas Tanner wrote: [clip] >>> I guess fixing this one function should be much less effort. >> the problem is not so much fixing this function as much as scipy to >> only depend on F77 code. Although much of the fortran legacy is in F77, >> a lot of more recent libraries use F90 or F95. > > ?? 100% of scipy's dependencies and 99% of scipy itself work fine with > F77. there's no need to require scipy to be F77 compatible for all > future releases but we are so close to that goal for the current > version. > > Is there really no Fortran programmer who could fix that function? There is, we're just short on time as usual. The temp variables should be added to the argument list, and made intent(cache,hide) variables or something in the .pyf file. -- Pauli Virtanen From david.kirkby at onetel.net Tue Jan 26 04:23:18 2010 From: david.kirkby at onetel.net (David Kirkby) Date: Tue, 26 Jan 2010 09:23:18 +0000 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <1cd32cbb1001260054u98430cch5bbe1be4cb804c1b@mail.gmail.com> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> <1cd32cbb1001260054u98430cch5bbe1be4cb804c1b@mail.gmail.com> Message-ID: <286f7bad1001260123o26c891e6x3e5bf00e66c072e0@mail.gmail.com> Just loking over this thread, would it not be worth posting the few lines of Fortran on a fortran newsgroup, and asking somene why it will not build with a Fortran 77 compiler? Dave From pav+sp at iki.fi Tue Jan 26 04:24:48 2010 From: pav+sp at iki.fi (Pauli Virtanen) Date: Tue, 26 Jan 2010 09:24:48 +0000 (UTC) Subject: [SciPy-dev] compilation with fort77 References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> <1cd32cbb1001260054u98430cch5bbe1be4cb804c1b@mail.gmail.com> <286f7bad1001260123o26c891e6x3e5bf00e66c072e0@mail.gmail.com> Message-ID: Tue, 26 Jan 2010 09:23:18 +0000, David Kirkby wrote: > Just loking over this thread, would it not be worth posting the few > lines of Fortran on a fortran newsgroup, and asking somene why it will > not build with a Fortran 77 compiler? The reason is that only dummy variables can be adjustable size variables in F77, as the error messages indicated. -- Pauli Virtanen From david.kirkby at onetel.net Tue Jan 26 04:52:02 2010 From: david.kirkby at onetel.net (Dr. David Kirkby) Date: Tue, 26 Jan 2010 09:52:02 +0000 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> <1cd32cbb1001260054u98430cch5bbe1be4cb804c1b@mail.gmail.com> <286f7bad1001260123o26c891e6x3e5bf00e66c072e0@mail.gmail.com> Message-ID: <4B5EBB42.4020905@onetel.net> Pauli Virtanen wrote: > Tue, 26 Jan 2010 09:23:18 +0000, David Kirkby wrote: >> Just loking over this thread, would it not be worth posting the few >> lines of Fortran on a fortran newsgroup, and asking somene why it will >> not build with a Fortran 77 compiler? > > The reason is that only dummy variables can be adjustable size variables > in F77, as the error messages indicated. > I suppose I should have added, "and ask for suggestions how one might change the the code to make it F77 compatible". If a minor change in the code would allow it to build with an older compiler, that seems a logical thing to do. From tanner at gmx.de Tue Jan 26 08:32:03 2010 From: tanner at gmx.de (Thomas Tanner) Date: Tue, 26 Jan 2010 14:32:03 +0100 Subject: [SciPy-dev] multivariate distributions In-Reply-To: <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> Message-ID: <4B5EEED3.1090100@gmx.de> josef.pktd at gmail.com wrote: > For http://projects.scipy.org/scipy/ticket/846 I access mvndst > directly, I think. I don't remember how much I checked the usage of > mvnun in kde. thank you, Josef! it may not fix my problem but it looks useful. are there any plans to add multivariate (or even matrix variate) distributions to SciPy? Alan Genz has a some code for mv normal and t http://www.math.wsu.edu/faculty/genz/software/software.html best regards, -- Thomas Tanner ------ email: tanner at gmx.de GnuPG: 1024/5924D4DD From tanner at gmx.de Tue Jan 26 08:32:27 2010 From: tanner at gmx.de (Thomas Tanner) Date: Tue, 26 Jan 2010 14:32:27 +0100 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: <286f7bad1001260123o26c891e6x3e5bf00e66c072e0@mail.gmail.com> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> <1cd32cbb1001260054u98430cch5bbe1be4cb804c1b@mail.gmail.com> <286f7bad1001260123o26c891e6x3e5bf00e66c072e0@mail.gmail.com> Message-ID: <4B5EEEEB.8010304@gmx.de> Thanks for the suggestion. That would be my next step. David Kirkby wrote: > Just loking over this thread, would it not be worth posting the few > lines of Fortran on a fortran newsgroup, and asking somene why it will > not build with a Fortran 77 compiler? -- Thomas Tanner ------ email: tanner at gmx.de GnuPG: 1024/5924D4DD From pav+sp at iki.fi Tue Jan 26 09:38:13 2010 From: pav+sp at iki.fi (Pauli Virtanen) Date: Tue, 26 Jan 2010 14:38:13 +0000 (UTC) Subject: [SciPy-dev] compilation with fort77 References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> <1cd32cbb1001260054u98430cch5bbe1be4cb804c1b@mail.gmail.com> <286f7bad1001260123o26c891e6x3e5bf00e66c072e0@mail.gmail.com> <4B5EBB42.4020905@onetel.net> Message-ID: Tue, 26 Jan 2010 09:52:02 +0000, Dr. David Kirkby wrote: [clip] > I suppose I should have added, > > "and ask for suggestions how one might change the the code to make it > F77 compatible". > > If a minor change in the code would allow it to build with an older > compiler, that seems a logical thing to do. The fix is most likely that the temp variables should be added to the argument list, and made intent(cache,hide) variables in the .pyf file. That should make it as per F77 spec. Testing the fix is probably more work than actually changing the source -- I'm not sure what the test coverage of that function is. -- Pauli Virtanen From josef.pktd at gmail.com Tue Jan 26 09:47:50 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Jan 2010 09:47:50 -0500 Subject: [SciPy-dev] compilation with fort77 In-Reply-To: References: <4B5A054C.7060308@gmx.de> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> <1cd32cbb1001260054u98430cch5bbe1be4cb804c1b@mail.gmail.com> <286f7bad1001260123o26c891e6x3e5bf00e66c072e0@mail.gmail.com> <4B5EBB42.4020905@onetel.net> Message-ID: <1cd32cbb1001260647j333840a7rd0ba1900bb67cd55@mail.gmail.com> On Tue, Jan 26, 2010 at 9:38 AM, Pauli Virtanen wrote: > Tue, 26 Jan 2010 09:52:02 +0000, Dr. David Kirkby wrote: > [clip] >> I suppose I should have added, >> >> "and ask for suggestions how one might change the the code to make it >> F77 compatible". >> >> If a minor change in the code would allow it to build with an older >> compiler, that seems a logical thing to do. > > The fix is most likely that the temp variables should be added to the > argument list, and made intent(cache,hide) variables in the .pyf file. > That should make it as per F77 spec. > > Testing the fix is probably more work than actually changing the source > -- I'm not sure what the test coverage of that function is. I added a test for the correctness of the results for the 1d case a while ago, so basic functionality is tested, but nothing for higher dimensions. Josef > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From josef.pktd at gmail.com Tue Jan 26 09:43:30 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Jan 2010 09:43:30 -0500 Subject: [SciPy-dev] multivariate distributions In-Reply-To: <4B5EEED3.1090100@gmx.de> References: <4B5A054C.7060308@gmx.de> <4B5CEA49.3040604@silveregg.co.jp> <4B5E25D2.7010604@gmx.de> <5b8d13221001251704g7f0832c0h7dc7fabde69db22c@mail.gmail.com> <4B5E9A2C.4060508@gmx.de> <1cd32cbb1001252353r494d0c9dxaf8eda8e0e344eaf@mail.gmail.com> <4B5EA24C.20200@gmx.de> <1cd32cbb1001260016o3e5e9f02u77a7016d6890ab60@mail.gmail.com> <4B5EEED3.1090100@gmx.de> Message-ID: <1cd32cbb1001260643j15a960fewea8176fb4808194d@mail.gmail.com> On Tue, Jan 26, 2010 at 8:32 AM, Thomas Tanner wrote: > josef.pktd at gmail.com wrote: >> For http://projects.scipy.org/scipy/ticket/846 I access mvndst >> directly, I think. I don't remember how much I checked the usage of >> mvnun in kde. > > thank you, Josef! it may not fix my problem but it looks useful. > > are there any plans to add multivariate (or even matrix variate) > distributions to SciPy? > Alan Genz has a some code for mv normal and t > http://www.math.wsu.edu/faculty/genz/software/software.html It's on my wishlist, but there are many things on it. Currently the best package for distributions (outside of the univariate in scipy.stats) is pymc which includes several multivariate and some matrix distributions, at least likelihood function and random number generation, largely in fortran. I think scikits.learn also has some functions for the multivariate normal. I wrapped mvtdst with f2py as an experiment to see whether I would be able to do it. The multivariate t cdf would be useful for t-copulas. I didn't see a license statement for the Genz code, so he would need to be specifically asked whether he gives the permission to include his t code in scipy. Given that I'm more interested in other things right now, and I haven't made up my mind what the design of a more general multivariate distribution class would look like, I don't think I will work much in this direction anytime soon. However, I would review and help with any contribution. As a side issue, since scipy doesn't have a sandbox, I'm spreading my own extensions to scipy.stats all over the place. I was thinking of parking them in statsmodels for maturing. Here is my previous, aborted, attempt to collect some of my functions, but there is nothing for multivariate distributions besides mvndst from the ticket http://bazaar.launchpad.net/~josef-pktd/scipy/scipytrunkwork2/annotate/head%3A/scipy/stats/distribution_extras.py http://bazaar.launchpad.net/~josef-pktd/scipy/scipytrunkwork2/annotate/head%3A/scipy/stats/extras.py http://bazaar.launchpad.net/~josef-pktd/scipy/scipytrunkwork2/annotate/head%3A/scipy/stats/examples/ex_distributionextras.py This was based on the last version of scipy that didn't require numpy 1.4 Other functions are hiding on my computer or on the mailinglists (semifrozen distributions would be my top priority when I get around to cleaning it up.) Josef > > best regards, > -- > Thomas Tanner ------ > email: tanner at gmx.de > GnuPG: 1024/5924D4DD > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From ryanlists at gmail.com Thu Jan 28 15:38:02 2010 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 28 Jan 2010 14:38:02 -0600 Subject: [SciPy-dev] bug in signal.lsim2 Message-ID: I believe I have discovered a bug in signal.lsim2. I believe the short attached script illustrates the problem. I was trying to predict the response of a transfer function with a pure integrator: g G = ------------- s(s+p) to a finite width pulse. lsim2 seems to handle the step response just fine, but says that the pulse response is exactly 0.0 for the entire time of the simulation. Obviously, this isn't the right answer. I am running scipy 0.7.0 and numpy 1.2.1 on Ubuntu 9.04, but I also have the same problem on Windows running 0.7.1 and 1.4.0. Thanks, Ryan -------------- next part -------------- A non-text attachment was scrubbed... Name: lsim2_problem.py Type: text/x-python Size: 360 bytes Desc: not available URL: From ryanlists at gmail.com Thu Jan 28 15:38:47 2010 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 28 Jan 2010 14:38:47 -0600 Subject: [SciPy-dev] bug in signal.lsim2 In-Reply-To: References: Message-ID: (sorry, I meant to send this to the user list. But feel free to reply.) On Thu, Jan 28, 2010 at 2:38 PM, Ryan Krauss wrote: > I believe I have discovered a bug in signal.lsim2. ?I believe the > short attached script illustrates the problem. ?I was trying to > predict the response of a transfer function with a pure integrator: > > ? ? ? ? ? ? ?g > G = ------------- > ? ? ? ? ?s(s+p) > > to a finite width pulse. ?lsim2 seems to handle the step response just > fine, but says that the pulse response is exactly 0.0 for the entire > time of the simulation. ?Obviously, this isn't the right answer. > > I am running scipy 0.7.0 and numpy 1.2.1 on Ubuntu 9.04, but I also > have the same problem on Windows running 0.7.1 and 1.4.0. > > Thanks, > > Ryan >