From ralf.gommers at gmail.com Sat Feb 1 11:04:52 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 1 Feb 2014 17:04:52 +0100 Subject: [SciPy-Dev] Add detrend=None for scipy.signal.welch et al. In-Reply-To: <1391204402.13427.9.camel@mws-deb> References: <1391204402.13427.9.camel@mws-deb> Message-ID: On Fri, Jan 31, 2014 at 10:40 PM, Maximilian Singh wrote: > Hey all, > > I suggest adding the possibility to turn off the 'detrend' functionality > in scipy.signal.welch and scipy.signal.periodogram, e.g. by giving > 'detrend=None' as a parameter. Although detrending is a useful option, > there are also use cases where DC should not be suppressed. > > Do you agree that this is common enough to be added? > Hard to judge how common it is, but if you have a use case then I don't see any problem in adding it. PR welcome. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ewm at redtetrahedron.org Sat Feb 1 11:09:40 2014 From: ewm at redtetrahedron.org (Eric Moore) Date: Sat, 1 Feb 2014 11:09:40 -0500 Subject: [SciPy-Dev] Add detrend=None for scipy.signal.welch et al. In-Reply-To: References: <1391204402.13427.9.camel@mws-deb> Message-ID: On Saturday, February 1, 2014, Ralf Gommers wrote: > > > > On Fri, Jan 31, 2014 at 10:40 PM, Maximilian Singh > > wrote: > >> Hey all, >> >> I suggest adding the possibility to turn off the 'detrend' functionality >> in scipy.signal.welch and scipy.signal.periodogram, e.g. by giving >> 'detrend=None' as a parameter. Although detrending is a useful option, >> there are also use cases where DC should not be suppressed. >> >> Do you agree that this is common enough to be added? >> > > Hard to judge how common it is, but if you have a use case then I don't > see any problem in adding it. PR welcome. > > Ralf > > Welch calls signal.detrend internally; I wonder if it would be better to add this to detrend instead. Eric -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mws at lionex.de Sat Feb 1 13:27:31 2014 From: mws at lionex.de (Maximilian Singh) Date: Sat, 01 Feb 2014 19:27:31 +0100 Subject: [SciPy-Dev] Add detrend=None for scipy.signal.welch et al. In-Reply-To: References: <1391204402.13427.9.camel@mws-deb> Message-ID: <1391279251.13427.14.camel@mws-deb> > > Welch calls signal.detrend internally; I wonder if it would be better > to add this to detrend instead. > I also thought about this, but it seemed strange to me to add a parameter to signal.detrend which actually makes it do nothing. But perhaps this is the better solution, as the change would then be in only one function rather than in every function that uses it internally. -Max -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From ozancag at gmail.com Sat Feb 1 18:01:57 2014 From: ozancag at gmail.com (Ozan Çağlayan) Date: Sun, 2 Feb 2014 01:01:57 +0200 Subject: [SciPy-Dev] Add detrend=None for scipy.signal.welch et al. In-Reply-To: <1391279251.13427.14.camel@mws-deb> References: <1391204402.13427.9.camel@mws-deb> <1391279251.13427.14.camel@mws-deb> Message-ID: > > Welch calls signal.detrend internally; I wonder if it would be better > > to add this to detrend instead. > > > > I also thought about this, but it seemed strange to me to add a > parameter to signal.detrend which actually makes it do nothing. But > perhaps this is the better solution, as the change would then be in only > one function rather than in every function that uses it internally. Wow, this is the strangest idea I think I've ever heard :) -------------- next part -------------- An HTML attachment was scrubbed... 
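The switch discussed in this thread did land in SciPy, spelled ``detrend=False`` rather than ``detrend=None``. A minimal sketch of the difference, assuming a SciPy release whose ``scipy.signal.welch`` accepts ``detrend=False``:

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = 5.0 + np.sin(2 * np.pi * 50 * t)  # 50 Hz tone on top of a DC offset of 5

# Default behaviour: each segment is detrended ('constant', i.e. mean
# removal), which suppresses the DC bin of the estimate.
f, pxx_detrended = signal.welch(x, fs, nperseg=256)

# detrend=False skips that step, so the DC component survives.
f, pxx_raw = signal.welch(x, fs, nperseg=256, detrend=False)

print(pxx_detrended[0], pxx_raw[0])
```

With detrending on, the power at f = 0 is essentially zero; with ``detrend=False`` the DC bin dominates the spectrum, which is the use case Maximilian describes.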
URL: From ralf.gommers at gmail.com Sun Feb 2 10:54:54 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 2 Feb 2014 16:54:54 +0100 Subject: [SciPy-Dev] Symbol not found: __ZNSt8ios_base4InitD1Ev for scipy.sparse In-Reply-To: References: Message-ID: On Fri, Jan 31, 2014 at 5:09 AM, Matthew Brett wrote: > Hi, > > On Wed, Jan 22, 2014 at 4:59 PM, Matthew Brett > wrote: > > Hi, > > > > On Wed, Jan 22, 2014 at 11:33 AM, Ralf Gommers > wrote: > >> > >> > >> > >> On Mon, Jan 20, 2014 at 1:24 PM, Matthew Brett > > >> wrote: > >>> > >>> Hi, > >>> > >>> I am trying to install scipy master on OSX 10.9. > >>> > >>> I'm using > >>> > >>> export CC=clang > >>> export CXX=clang > >>> export FFLAGS=--ff2c > >>> > >>> from http://www.scipy.org/scipylib/building/macosx.html > >>> > >>> I built and installed numpy with these flags, then scipy. > >>> > >>> scipy installs: > >>> > >>> >>> scipy.__version__ > >>> '0.14.0.dev-6b18a3b' > >>> > >>> but then: > >>> > >>> >>> import scipy.sparse > >>> Traceback (most recent call last): > >>> File "", line 1, in > >>> File > >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/__init__.py", > >>> line 206, in > >>> from .csr import * > >>> File > >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/csr.py", > >>> line 13, in > >>> from .sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \ > >>> File > >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/__init__.py", > >>> line 5, in > >>> from .csr import * > >>> File > >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", > >>> line 26, in > >>> _csr = swig_import_helper() > >>> File > >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", > >>> line 22, in swig_import_helper > >>> _mod = imp.load_module('_csr', fp, pathname, description) > >>> 
File "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/imp.py", > >>> line 183, in load_module > >>> return load_dynamic(name, filename, file) > >>> ImportError: > >>> > dlopen(/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so, > >>> 2): Symbol not found: __ZNSt8ios_base4InitD1Ev > >>> Referenced from: > >>> > >>> > /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so > >>> Expected in: flat namespace > >>> in > >>> > /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so > >>> > >>> Here's clang version info: > >>> > >>> $ clang -v > >>> Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) > >>> Target: x86_64-apple-darwin13.0.0 > >>> Thread model: posix > >>> > >>> I noticed a similar issue here: > >>> > >>> https://github.com/scipy/scipy/issues/3053 > >>> > >>> but I think I have a clean install (into a virtualenv). > >>> > >>> Any hints as to where to look next? > >> > >> > >> I can't reproduce this with the same OS and Clang versions, XCode 5.0.1 > and > >> Homebrew Python 2.7. > >> > >> Do you get the same without a virtualenv? Maybe try ``git clean -xdf`` > >> followed by an in-place build. Do you get this for Python 2.7 as well? > And > >> which XCode and Python (version + how installed)? > > > > Thanks for checking. > > > > I do get the same with an in-place python 2.7 build, after git clean > > -fxd, and I get the same in a virtualenv with python 2.7. > > > > For both python 3.3 and python 2.7 I'm using the python.org binaries > > installed from the binary dmg installers. > > > > [mb312 at Kerbin ~]$ python2.7 --version > > Python 2.7.6 > > [mb312 at Kerbin ~]$ python3.3 --version > > Python 3.3.2 > > [mb312 at Kerbin ~]$ xcodebuild -version > > Xcode 5.0.2 > > Build version 5A3005 > > Just as a data point - the build works fine for system python. 
> Probably the issue is that more recent XCode ignores __STDC_FORMAT_MACROS defined here: https://github.com/scipy/scipy/blob/master/scipy/sparse/sparsetools/setup.py#L15. At least that's what's claimed here: http://software.intel.com/en-us/forums/topic/498727. Workaround suggested there, can you try that out? That still doesn't explain why only the python.org binaries fail, but that should have something to do with the different build flags coming from distutils for the different Pythons. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Sun Feb 2 15:14:55 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 2 Feb 2014 12:14:55 -0800 Subject: [SciPy-Dev] Symbol not found: __ZNSt8ios_base4InitD1Ev for scipy.sparse In-Reply-To: References: Message-ID: Hi, On Sun, Feb 2, 2014 at 7:54 AM, Ralf Gommers wrote: > > > > On Fri, Jan 31, 2014 at 5:09 AM, Matthew Brett > wrote: >> >> Hi, >> >> On Wed, Jan 22, 2014 at 4:59 PM, Matthew Brett >> wrote: >> > Hi, >> > >> > On Wed, Jan 22, 2014 at 11:33 AM, Ralf Gommers >> > wrote: >> >> >> >> >> >> >> >> On Mon, Jan 20, 2014 at 1:24 PM, Matthew Brett >> >> >> >> wrote: >> >>> >> >>> Hi, >> >>> >> >>> I am trying to install scipy master on OSX 10.9. >> >>> >> >>> I'm using >> >>> >> >>> export CC=clang >> >>> export CXX=clang >> >>> export FFLAGS=--ff2c >> >>> >> >>> from http://www.scipy.org/scipylib/building/macosx.html >> >>> >> >>> I built and installed numpy with these flags, then scipy. 
>> >>> >> >>> scipy installs: >> >>> >> >>> >>> scipy.__version__ >> >>> '0.14.0.dev-6b18a3b' >> >>> >> >>> but then: >> >>> >> >>> >>> import scipy.sparse >> >>> Traceback (most recent call last): >> >>> File "", line 1, in >> >>> File >> >>> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/__init__.py", >> >>> line 206, in >> >>> from .csr import * >> >>> File >> >>> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/csr.py", >> >>> line 13, in >> >>> from .sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \ >> >>> File >> >>> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/__init__.py", >> >>> line 5, in >> >>> from .csr import * >> >>> File >> >>> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", >> >>> line 26, in >> >>> _csr = swig_import_helper() >> >>> File >> >>> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", >> >>> line 22, in swig_import_helper >> >>> _mod = imp.load_module('_csr', fp, pathname, description) >> >>> File "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/imp.py", >> >>> line 183, in load_module >> >>> return load_dynamic(name, filename, file) >> >>> ImportError: >> >>> >> >>> dlopen(/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so, >> >>> 2): Symbol not found: __ZNSt8ios_base4InitD1Ev >> >>> Referenced from: >> >>> >> >>> >> >>> /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so >> >>> Expected in: flat namespace >> >>> in >> >>> >> >>> /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so >> >>> >> >>> Here's clang version info: >> >>> >> >>> $ clang -v >> >>> Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) >> >>> 
Target: x86_64-apple-darwin13.0.0 >> >>> Thread model: posix >> >>> >> >>> I noticed a similar issue here: >> >>> >> >>> https://github.com/scipy/scipy/issues/3053 >> >>> >> >>> but I think I have clean install (into a virtualenv). >> >>> >> >>> Any hints as to where to look next? >> >> >> >> >> >> I can't reproduce this with the same OS and Clang versions, XCode 5.0.1 >> >> and >> >> Homebrew Python 2.7. >> >> >> >> Do you get the same without a virtualenv? Maybe try ``git clean -xdf`` >> >> followed by an in-place build. Do you get this for Python 2.7 as well? >> >> And >> >> which XCode and Python (version + how installed)? >> > >> > Thanks for checking. >> > >> > I do get the same with an in-place python 2.7 build, after git clean >> > -fxd, and I get the same in a virtualenv with python 2.7. >> > >> > For both python 3.3 and python 2.7 I'm using the python.org binaries >> > installed from the binary dmg installers. >> > >> > [mb312 at Kerbin ~]$ python2.7 --version >> > Python 2.7.6 >> > [mb312 at Kerbin ~]$ python3.3 --version >> > Python 3.3.2 >> > [mb312 at Kerbin ~]$ xcodebuild -version >> > Xcode 5.0.2 >> > Build version 5A3005 >> >> Just as a data point - the build works fine for system python. > > > Probably the issue is that more recent XCode ignores __STDC_FORMAT_MACROS > defined here: > https://github.com/scipy/scipy/blob/master/scipy/sparse/sparsetools/setup.py#L15. > At least that's what's claimed here: > http://software.intel.com/en-us/forums/topic/498727. Workaround suggested > there, can you try that out? > > That still doesn't explain why only the python.org binaries fail, but that > should have something to do with the different build flags coming from > distutils for the different Pythons. Sorry to be dumb - but I wasn't sure what the workaround should be, from that page. Is it: export CLANG_CXX_LIBRARY=libstdc++ before `python setup.py build` ? 
Thanks for keeping going on this one, Matthew From ralf.gommers at gmail.com Sun Feb 2 20:13:01 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 3 Feb 2014 02:13:01 +0100 Subject: [SciPy-Dev] Symbol not found: __ZNSt8ios_base4InitD1Ev for scipy.sparse In-Reply-To: References: Message-ID: On Sun, Feb 2, 2014 at 9:14 PM, Matthew Brett wrote: > Hi, > > On Sun, Feb 2, 2014 at 7:54 AM, Ralf Gommers > wrote: > > > > > > > > On Fri, Jan 31, 2014 at 5:09 AM, Matthew Brett > > wrote: > >> > >> Hi, > >> > >> On Wed, Jan 22, 2014 at 4:59 PM, Matthew Brett > > >> wrote: > >> > Hi, > >> > > >> > On Wed, Jan 22, 2014 at 11:33 AM, Ralf Gommers < > ralf.gommers at gmail.com> > >> > wrote: > >> >> > >> >> > >> >> > >> >> On Mon, Jan 20, 2014 at 1:24 PM, Matthew Brett > >> >> > >> >> wrote: > >> >>> > >> >>> Hi, > >> >>> > >> >>> I am trying to install scipy master on OSX 10.9. > >> >>> > >> >>> I'm using > >> >>> > >> >>> export CC=clang > >> >>> export CXX=clang > >> >>> export FFLAGS=--ff2c > >> >>> > >> >>> from http://www.scipy.org/scipylib/building/macosx.html > >> >>> > >> >>> I built and installed numpy with these flags, then scipy. 
> >> >>> > >> >>> scipy installs: > >> >>> > >> >>> >>> scipy.__version__ > >> >>> '0.14.0.dev-6b18a3b' > >> >>> > >> >>> but then: > >> >>> > >> >>> >>> import scipy.sparse > >> >>> Traceback (most recent call last): > >> >>> File "", line 1, in > >> >>> File > >> >>> > >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/__init__.py", > >> >>> line 206, in > >> >>> from .csr import * > >> >>> File > >> >>> > >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/csr.py", > >> >>> line 13, in > >> >>> from .sparsetools import csr_tocsc, csr_tobsr, > csr_count_blocks, \ > >> >>> File > >> >>> > >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/__init__.py", > >> >>> line 5, in > >> >>> from .csr import * > >> >>> File > >> >>> > >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", > >> >>> line 26, in > >> >>> _csr = swig_import_helper() > >> >>> File > >> >>> > >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", > >> >>> line 22, in swig_import_helper > >> >>> _mod = imp.load_module('_csr', fp, pathname, description) > >> >>> File > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/imp.py", > >> >>> line 183, in load_module > >> >>> return load_dynamic(name, filename, file) > >> >>> ImportError: > >> >>> > >> >>> > dlopen(/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so, > >> >>> 2): Symbol not found: __ZNSt8ios_base4InitD1Ev > >> >>> Referenced from: > >> >>> > >> >>> > >> >>> > /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so > >> >>> Expected in: flat namespace > >> >>> in > >> >>> > >> >>> > /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so > >> >>> > >> >>> 
Here's clang version info: > >> >>> > >> >>> $ clang -v > >> >>> Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) > >> >>> Target: x86_64-apple-darwin13.0.0 > >> >>> Thread model: posix > >> >>> > >> >>> I noticed a similar issue here: > >> >>> > >> >>> https://github.com/scipy/scipy/issues/3053 > >> >>> > >> >>> but I think I have clean install (into a virtualenv). > >> >>> > >> >>> Any hints as to where to look next? > >> >> > >> >> > >> >> I can't reproduce this with the same OS and Clang versions, XCode > 5.0.1 > >> >> and > >> >> Homebrew Python 2.7. > >> >> > >> >> Do you get the same without a virtualenv? Maybe try ``git clean > -xdf`` > >> >> followed by an in-place build. Do you get this for Python 2.7 as > well? > >> >> And > >> >> which XCode and Python (version + how installed)? > >> > > >> > Thanks for checking. > >> > > >> > I do get the same with an in-place python 2.7 build, after git clean > >> > -fxd, and I get the same in a virtualenv with python 2.7. > >> > > >> > For both python 3.3 and python 2.7 I'm using the python.org binaries > >> > installed from the binary dmg installers. > >> > > >> > [mb312 at Kerbin ~]$ python2.7 --version > >> > Python 2.7.6 > >> > [mb312 at Kerbin ~]$ python3.3 --version > >> > Python 3.3.2 > >> > [mb312 at Kerbin ~]$ xcodebuild -version > >> > Xcode 5.0.2 > >> > Build version 5A3005 > >> > >> Just as a data point - the build works fine for system python. > > > > > > Probably the issue is that more recent XCode ignores __STDC_FORMAT_MACROS > > defined here: > > > https://github.com/scipy/scipy/blob/master/scipy/sparse/sparsetools/setup.py#L15 > . > > At least that's what's claimed here: > > http://software.intel.com/en-us/forums/topic/498727. Workaround > suggested > > there, can you try that out? > > > > That still doesn't explain why only the python.org binaries fail, but > that > > should have something to do with the different build flags coming from > > distutils for the different Pythons. 
> > Sorry to be dumb - but I wasn't sure what the workaround should be, > from that page. Is it: > > export CLANG_CXX_LIBRARY=libstdc++ > Sorry, yes that should do it. Or otherwise somehow get distutils to link against /usr/lib/libstdc++.dylib. Or try if Bento works better for you. I don't have the python.org Python installed right now, so I don't have a more definite answer. Ralf > before `python setup.py build` ? > > Thanks for keeping going on this one, > > Matthew > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Feb 3 15:28:14 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 3 Feb 2014 12:28:14 -0800 Subject: [SciPy-Dev] Symbol not found: __ZNSt8ios_base4InitD1Ev for scipy.sparse In-Reply-To: References: Message-ID: Hi, On Sun, Feb 2, 2014 at 5:13 PM, Ralf Gommers wrote: > > > > On Sun, Feb 2, 2014 at 9:14 PM, Matthew Brett > wrote: >> >> Hi, >> >> On Sun, Feb 2, 2014 at 7:54 AM, Ralf Gommers >> wrote: >> > >> > >> > >> > On Fri, Jan 31, 2014 at 5:09 AM, Matthew Brett >> > wrote: >> >> >> >> Hi, >> >> >> >> On Wed, Jan 22, 2014 at 4:59 PM, Matthew Brett >> >> >> >> wrote: >> >> > Hi, >> >> > >> >> > On Wed, Jan 22, 2014 at 11:33 AM, Ralf Gommers >> >> > >> >> > wrote: >> >> >> >> >> >> >> >> >> >> >> >> On Mon, Jan 20, 2014 at 1:24 PM, Matthew Brett >> >> >> >> >> >> wrote: >> >> >>> >> >> >>> Hi, >> >> >>> >> >> >>> I am trying to install scipy master on OSX 10.9. >> >> >>> >> >> >>> I'm using >> >> >>> >> >> >>> export CC=clang >> >> >>> export CXX=clang >> >> >>> export FFLAGS=--ff2c >> >> >>> >> >> >>> from http://www.scipy.org/scipylib/building/macosx.html >> >> >>> >> >> >>> I built and installed numpy with these flags, then scipy. 
>> >> >>> >> >> >>> scipy installs: >> >> >>> >> >> >>> >>> scipy.__version__ >> >> >>> '0.14.0.dev-6b18a3b' >> >> >>> >> >> >>> but then: >> >> >>> >> >> >>> >>> import scipy.sparse >> >> >>> Traceback (most recent call last): >> >> >>> File "", line 1, in >> >> >>> File >> >> >>> >> >> >>> >> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/__init__.py", >> >> >>> line 206, in >> >> >>> from .csr import * >> >> >>> File >> >> >>> >> >> >>> >> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/csr.py", >> >> >>> line 13, in >> >> >>> from .sparsetools import csr_tocsc, csr_tobsr, >> >> >>> csr_count_blocks, \ >> >> >>> File >> >> >>> >> >> >>> >> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/__init__.py", >> >> >>> line 5, in >> >> >>> from .csr import * >> >> >>> File >> >> >>> >> >> >>> >> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", >> >> >>> line 26, in >> >> >>> _csr = swig_import_helper() >> >> >>> File >> >> >>> >> >> >>> >> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", >> >> >>> line 22, in swig_import_helper >> >> >>> _mod = imp.load_module('_csr', fp, pathname, description) >> >> >>> File >> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/imp.py", >> >> >>> line 183, in load_module >> >> >>> return load_dynamic(name, filename, file) >> >> >>> ImportError: >> >> >>> >> >> >>> >> >> >>> dlopen(/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so, >> >> >>> 2): Symbol not found: __ZNSt8ios_base4InitD1Ev >> >> >>> Referenced from: >> >> >>> >> >> >>> >> >> >>> >> >> >>> /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so >> >> >>> Expected in: flat namespace >> >> >>> in >> >> >>> >> >> >>> 
>> >> >>> /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so >> >> >>> >> >> >>> Here's clang version info: >> >> >>> >> >> >>> $ clang -v >> >> >>> Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) >> >> >>> Target: x86_64-apple-darwin13.0.0 >> >> >>> Thread model: posix >> >> >>> >> >> >>> I noticed a similar issue here: >> >> >>> >> >> >>> https://github.com/scipy/scipy/issues/3053 >> >> >>> >> >> >>> but I think I have clean install (into a virtualenv). >> >> >>> >> >> >>> Any hints as to where to look next? >> >> >> >> >> >> >> >> >> I can't reproduce this with the same OS and Clang versions, XCode >> >> >> 5.0.1 >> >> >> and >> >> >> Homebrew Python 2.7. >> >> >> >> >> >> Do you get the same without a virtualenv? Maybe try ``git clean >> >> >> -xdf`` >> >> >> followed by an in-place build. Do you get this for Python 2.7 as >> >> >> well? >> >> >> And >> >> >> which XCode and Python (version + how installed)? >> >> > >> >> > Thanks for checking. >> >> > >> >> > I do get the same with an in-place python 2.7 build, after git clean >> >> > -fxd, and I get the same in a virtualenv with python 2.7. >> >> > >> >> > For both python 3.3 and python 2.7 I'm using the python.org binaries >> >> > installed from the binary dmg installers. >> >> > >> >> > [mb312 at Kerbin ~]$ python2.7 --version >> >> > Python 2.7.6 >> >> > [mb312 at Kerbin ~]$ python3.3 --version >> >> > Python 3.3.2 >> >> > [mb312 at Kerbin ~]$ xcodebuild -version >> >> > Xcode 5.0.2 >> >> > Build version 5A3005 >> >> >> >> Just as a data point - the build works fine for system python. >> > >> > >> > Probably the issue is that more recent XCode ignores >> > __STDC_FORMAT_MACROS >> > defined here: >> > >> > https://github.com/scipy/scipy/blob/master/scipy/sparse/sparsetools/setup.py#L15. >> > At least that's what's claimed here: >> > http://software.intel.com/en-us/forums/topic/498727. Workaround >> > suggested >> > there, can you try that out? 
>> > >> > That still doesn't explain why only the python.org binaries fail, but >> > that >> > should have something to do with the different build flags coming from >> > distutils for the different Pythons. >> >> Sorry to be dumb - but I wasn't sure what the workaround should be, >> from that page. Is it: >> >> export CLANG_CXX_LIBRARY=libstdc++ > > > Sorry, yes that should do it. Short version: I needed export CXX=clang++ instead of export CXX=clang Long version: Hint here: http://stackoverflow.com/questions/4788721/how-can-i-resolve-missing-symbols-when-compiling-c-code-with-clang-2-8-on-mac Full flags for compile: export CC=clang export CXX=clang++ export FFLAGS=-ff2c I didn't set the CLANG_CXX_LIBRARY (and doing so didn't fix the compile). I think that should be correct for earlier xcodes, so I proposed the change in the docs here: https://github.com/scipy/scipy.org/pull/49 Best, Matthew From ralf.gommers at gmail.com Mon Feb 3 21:15:14 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 4 Feb 2014 03:15:14 +0100 Subject: [SciPy-Dev] Symbol not found: __ZNSt8ios_base4InitD1Ev for scipy.sparse In-Reply-To: References: Message-ID: On Mon, Feb 3, 2014 at 9:28 PM, Matthew Brett wrote: > Hi, > > On Sun, Feb 2, 2014 at 5:13 PM, Ralf Gommers > wrote: > > > > > > > > On Sun, Feb 2, 2014 at 9:14 PM, Matthew Brett > > wrote: > >> > >> Hi, > >> > >> On Sun, Feb 2, 2014 at 7:54 AM, Ralf Gommers > >> wrote: > >> > > >> > > >> > > >> > On Fri, Jan 31, 2014 at 5:09 AM, Matthew Brett < > matthew.brett at gmail.com> > >> > wrote: > >> >> > >> >> Hi, > >> >> > >> >> On Wed, Jan 22, 2014 at 4:59 PM, Matthew Brett > >> >> > >> >> wrote: > >> >> > Hi, > >> >> > > >> >> > On Wed, Jan 22, 2014 at 11:33 AM, Ralf Gommers > >> >> > > >> >> > wrote: > >> >> >> > >> >> >> > >> >> >> > >> >> >> On Mon, Jan 20, 2014 at 1:24 PM, Matthew Brett > >> >> >> > >> >> >> wrote: > >> >> >>> > >> >> >>> Hi, > >> >> >>> > >> >> >>> I am trying to install scipy master on OSX 10.9. 
> >> >> >>> > >> >> >>> I'm using > >> >> >>> > >> >> >>> export CC=clang > >> >> >>> export CXX=clang > >> >> >>> export FFLAGS=--ff2c > >> >> >>> > >> >> >>> from http://www.scipy.org/scipylib/building/macosx.html > >> >> >>> > >> >> >>> I built and installed numpy with these flags, then scipy. > >> >> >>> > >> >> >>> scipy installs: > >> >> >>> > >> >> >>> >>> scipy.__version__ > >> >> >>> '0.14.0.dev-6b18a3b' > >> >> >>> > >> >> >>> but then: > >> >> >>> > >> >> >>> >>> import scipy.sparse > >> >> >>> Traceback (most recent call last): > >> >> >>> File "", line 1, in > >> >> >>> File > >> >> >>> > >> >> >>> > >> >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/__init__.py", > >> >> >>> line 206, in > >> >> >>> from .csr import * > >> >> >>> File > >> >> >>> > >> >> >>> > >> >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/csr.py", > >> >> >>> line 13, in > >> >> >>> from .sparsetools import csr_tocsc, csr_tobsr, > >> >> >>> csr_count_blocks, \ > >> >> >>> File > >> >> >>> > >> >> >>> > >> >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/__init__.py", > >> >> >>> line 5, in > >> >> >>> from .csr import * > >> >> >>> File > >> >> >>> > >> >> >>> > >> >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", > >> >> >>> line 26, in > >> >> >>> _csr = swig_import_helper() > >> >> >>> File > >> >> >>> > >> >> >>> > >> >> >>> > "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/csr.py", > >> >> >>> line 22, in swig_import_helper > >> >> >>> _mod = imp.load_module('_csr', fp, pathname, description) > >> >> >>> File > >> >> >>> "/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/imp.py", > >> >> >>> line 183, in load_module > >> >> >>> return load_dynamic(name, filename, file) > >> >> >>> ImportError: > >> >> >>> > >> >> >>> > >> >> 
>>> > dlopen(/Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so, > >> >> >>> 2): Symbol not found: __ZNSt8ios_base4InitD1Ev > >> >> >>> Referenced from: > >> >> >>> > >> >> >>> > >> >> >>> > >> >> >>> > /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so > >> >> >>> Expected in: flat namespace > >> >> >>> in > >> >> >>> > >> >> >>> > >> >> >>> > /Users/mb312/.virtualenvs/py33-sp-devel/lib/python3.3/site-packages/scipy/sparse/sparsetools/_csr.so > >> >> >>> > >> >> >>> Here's clang version info: > >> >> >>> > >> >> >>> $ clang -v > >> >> >>> Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) > >> >> >>> Target: x86_64-apple-darwin13.0.0 > >> >> >>> Thread model: posix > >> >> >>> > >> >> >>> I noticed a similar issue here: > >> >> >>> > >> >> >>> https://github.com/scipy/scipy/issues/3053 > >> >> >>> > >> >> >>> but I think I have clean install (into a virtualenv). > >> >> >>> > >> >> >>> Any hints as to where to look next? > >> >> >> > >> >> >> > >> >> >> I can't reproduce this with the same OS and Clang versions, XCode > >> >> >> 5.0.1 > >> >> >> and > >> >> >> Homebrew Python 2.7. > >> >> >> > >> >> >> Do you get the same without a virtualenv? Maybe try ``git clean > >> >> >> -xdf`` > >> >> >> followed by an in-place build. Do you get this for Python 2.7 as > >> >> >> well? > >> >> >> And > >> >> >> which XCode and Python (version + how installed)? > >> >> > > >> >> > Thanks for checking. > >> >> > > >> >> > I do get the same with an in-place python 2.7 build, after git > clean > >> >> > -fxd, and I get the same in a virtualenv with python 2.7. > >> >> > > >> >> > For both python 3.3 and python 2.7 I'm using the python.orgbinaries > >> >> > installed from the binary dmg installers. 
> >> >> > > >> >> > [mb312 at Kerbin ~]$ python2.7 --version > >> >> > Python 2.7.6 > >> >> > [mb312 at Kerbin ~]$ python3.3 --version > >> >> > Python 3.3.2 > >> >> > [mb312 at Kerbin ~]$ xcodebuild -version > >> >> > Xcode 5.0.2 > >> >> > Build version 5A3005 > >> >> > >> >> Just as a data point - the build works fine for system python. > >> > > >> > > >> > Probably the issue is that more recent XCode ignores > >> > __STDC_FORMAT_MACROS > >> > defined here: > >> > > >> > > https://github.com/scipy/scipy/blob/master/scipy/sparse/sparsetools/setup.py#L15 > . > >> > At least that's what's claimed here: > >> > http://software.intel.com/en-us/forums/topic/498727. Workaround > >> > suggested > >> > there, can you try that out? > >> > > >> > That still doesn't explain why only the python.org binaries fail, but > >> > that > >> > should have something to do with the different build flags coming from > >> > distutils for the different Pythons. > >> > >> Sorry to be dumb - but I wasn't sure what the workaround should be, > >> from that page. Is it: > >> > >> export CLANG_CXX_LIBRARY=libstdc++ > > > > > > Sorry, yes that should do it. > > Short version: I needed > > export CXX=clang++ > > instead of > > export CXX=clang > > Long version: > > Hint here: > http://stackoverflow.com/questions/4788721/how-can-i-resolve-missing-symbols-when-compiling-c-code-with-clang-2-8-on-mac > > Full flags for compile: > > export CC=clang > export CXX=clang++ > export FFLAGS=-ff2c > Thanks for figuring this one out! Ralf > > I didn't set the CLANG_CXX_LIBRARY (and doing so didn't fix the compile). > > I think that should be correct for earlier xcodes, so I proposed the > change in the docs here: > > https://github.com/scipy/scipy.org/pull/49 > > Best, > > Matthew > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Tue Feb 4 02:16:39 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 4 Feb 2014 08:16:39 +0100 Subject: [SciPy-Dev] ANN: Scipy 0.13.3 release Message-ID: Hi, I'm happy to announce the availability of the scipy 0.13.3 release. This is a bugfix only release; it contains fixes for regressions in ndimage and weave. Source tarballs can be found at https://sourceforge.net/projects/scipy/files/scipy/0.13.3/ and on PyPi. Release notes copied below, binaries will follow later (the regular build machine is not available for the next two weeks). Cheers, Ralf ========================== SciPy 0.13.3 Release Notes ========================== SciPy 0.13.3 is a bug-fix release with no new features compared to 0.13.2. Both the weave and the ndimage.label bugs were severe regressions in 0.13.0, hence this release. Issues fixed ------------ - 3148: fix a memory leak in ``ndimage.label``. - 3216: fix weave issue with too long file names for MSVC. Other changes ------------- - Update Sphinx theme used for html docs so ``>>>`` in examples can be toggled. Checksums ========= 0547c1f8e8afad4009cc9b5ef17a2d4d release/installers/scipy-0.13.3.tar.gz 20ff3a867cc5925ef1d654aed2ff7e88 release/installers/scipy-0.13.3.zip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaime.frio at gmail.com Tue Feb 4 19:38:41 2014 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Tue, 4 Feb 2014 16:38:41 -0800 Subject: [SciPy-Dev] Distance functions Message-ID: I plan to have a first fully functional version of the new distance module for scipy.spatial during this week, see https://github.com/scipy/scipy/pull/3163. 
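The all-pairs behaviour the PR is built around can be approximated with plain NumPy broadcasting; the sketch below is illustrative only and does not use the proposed module:

```python
import numpy as np

u = np.random.rand(4, 3)
v = np.random.rand(5, 3)

# All pairwise euclidean distances between rows of u and rows of v:
# u[:, None, :] has shape (4, 1, 3) and v broadcasts as (1, 5, 3),
# so the difference has shape (4, 5, 3); reducing the last axis
# yields a (4, 5) distance matrix.
d = np.sqrt(((u[:, None, :] - v) ** 2).sum(axis=-1))
print(d.shape)   # (4, 5)
```

This is the same indexing pattern (`u[:, None, :]`) used in the examples below.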
The main feature of this new implementation is broadcasting support, so that you can compute multiple distances with a single function call, e.g.: >>> import scipy.spatial.umath_distance as udist >>> import numpy as np >>> u = np.random.rand(4, 3) >>> v = np.random.rand(4, 3) >>> udist.euclidean(u, v) array([ 0.77251733, 0.93810526, 0.81122198, 0.87690279]) Or even all pairwise distances: >>> v = np.random.rand(5, 3) >>> udist.euclidean(u[:, None, :], v) array([[ 1.14347901, 0.38593543, 0.83640822, 0.87086049, 1.04517368], [ 0.92483017, 0.71607629, 0.93401924, 0.5825294 , 0.92202074], [ 0.72273632, 0.81842944, 1.0669884 , 0.85308725, 0.83356081], [ 0.22143423, 0.81656032, 0.78084203, 0.90199059, 0.35111241]]) Although this is the main new feature, now is the right time to ask for that other feature of distance you have been missing all this time, but were afraid to ask for. I have thrown in a few of my own: 1. greatcircle, to compute distances on the surface of a sphere. 2. weighted hamming, city block, euclidean and sqeuclidean distances 3. pminkowski and pwminkowski, to compute the Minkowski and weighted Minkowski distances without the final raising to the power 1/p, i.e. what sqeuclidean is to euclidean, these are to minkowski. Any suggestions or comments are very welcome! Jaime -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From adityashah30 at gmail.com Mon Feb 10 06:51:40 2014 From: adityashah30 at gmail.com (Aditya Shah) Date: Mon, 10 Feb 2014 17:21:40 +0530 Subject: [SciPy-Dev] guidance Message-ID: Hi, I am Aditya Shah and I am currently pursuing computer science at BITS-Pilani university. I am in the third year and have covered the DSA course. I want to work on scipy for GSOC 2014. Can anyone please guide me on the process? 
Thanks, Aditya Shah -- Regards, Aditya Shah Undergraduate Student B.E.(Hons) Computer Science BITS Pilani K. K. Birla Goa Campus -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Feb 13 11:48:55 2014 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 13 Feb 2014 09:48:55 -0700 Subject: [SciPy-Dev] GSOC Message-ID: Thought I'd forward this to the lists in case we need to do something. Hi everyone, > > Just a friendly reminder that applications for mentoring organizations > close in about 24 hours. Please get your applications in soon, we will > not accept late applications for any reason! > > Thanks, > Carol Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard9404 at gmail.com Thu Feb 13 21:45:22 2014 From: richard9404 at gmail.com (Richard Tsai) Date: Fri, 14 Feb 2014 10:45:22 +0800 Subject: [SciPy-Dev] About Google Summer of Code Message-ID: <6760436.Dn19K7eRFh@linux-tp-laptop.sysu> Hi everyone, I've just noticed that neither NumPy nor SciPy is listed on Python's GSoC page (https://wiki.python.org/moin/SummerOfCode/2014 ). Is SciPy/NumPy going to apply for it? I'm a student and I want to participate in GSoC this year. Thanks! -- Richard Tsai From tom.grydeland at gmail.com Fri Feb 14 04:45:40 2014 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Fri, 14 Feb 2014 10:45:40 +0100 Subject: [SciPy-Dev] Hankel transforms, again Message-ID: <5F68ADA2-0DF2-43B4-B55F-45FE08A0A231@gmail.com> Hi developers, This is a repost of a message from December 2008 which gave no useful answers. Since then, I've had 4-5 requests for the code from people who had a need for it. It's not a massive demand, but enough that perhaps you'll consider my offer again. Since the previous posting, I've also included alternative filters thanks to Fan-Nian Kong that are shorter and more accurate when the function makes significant changes in more limited intervals. 
I'm not including the code (since it is mostly thousands of lines of tables), but I will provide the files to anyone who's interested. Cheers, Tom --- original message below --- When I recently needed a Hankel transform I was unable to find one in Scipy. I found one in MATLAB though[1], written by Prof. Brian Borchers at New Mexico Tech. The code is based on a previous FORTRAN implementation by W. L. Anderson [2], and the MATLAB code is not marked with any copyright statements. Hankel transforms of the first and second order can be computed through digital filtering. I have rewritten the code in Python/numpy, complete with tests and acknowledgements of the origins, and my employer has agreed that I can donate the code to Scipy. I believe this should be of interest. Hankel transforms arise often in acoustics and other systems with cylinder symmetry. If you want it, I would like a suggestion where it belongs (with other integral transforms, probably) and how I should shape the tests to make them suitable for inclusion. The tests I currently have use Hankel transforms of five different functions with known analytical transforms and compare the transformed values to the numerically evaluated analytical expressions. Currently plots are generated, but for automated testing I suppose something else would be better. Pointing me at an example is sufficient. [1] http://infohost.nmt.edu/~borchers/hankel.html [2] Anderson, W. L., 1979, Computer Program Numerical Integration of Related Hankel Transforms of Orders 0 and 1 by Adaptive Digital Filtering. Geophysics, 44(7):1287-1305. 
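The filter-based transform described above reduces to a weighted sum of function evaluations. A minimal sketch of that structure follows; the coefficients used in the demo are made-up placeholders, since the real Anderson and Kong filter tables run to hundreds or thousands of values and are not reproduced here:

```python
import numpy as np

def hankel_filter_transform(f, b, abscissae, weights):
    """Evaluate F(b) = sum_i f(y_i / b) * w_i / b for each offset b.

    `abscissae` (y) and `weights` (w) would come from a published
    digital-filter table (e.g. Anderson's 801-point filter or Kong's
    61/121/241-point filters); the values below are placeholders only.
    """
    b = np.atleast_1d(np.asarray(b, dtype=float))
    k = abscissae / b[:, None]       # evaluation grid, one row per offset b
    return (f(k) * weights).sum(axis=1) / b

# demo with a made-up 3-point "filter": shows the structure, not accuracy
y = np.array([0.1, 1.0, 10.0])
w = np.array([0.2, 0.6, 0.2])
F = hankel_filter_transform(lambda k: np.exp(-k), [1.0, 2.0, 4.0], y, w)
print(F.shape)   # (3,)
```

Note that because orders 0 and 1 share the abscissae and differ only in the weights, the same `f(k)` evaluations can be reused for both weight vectors.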
Best regards, -- Tom Grydeland From robert.kern at gmail.com Fri Feb 14 07:15:41 2014 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 14 Feb 2014 12:15:41 +0000 Subject: [SciPy-Dev] Hankel transforms, again In-Reply-To: <5F68ADA2-0DF2-43B4-B55F-45FE08A0A231@gmail.com> References: <5F68ADA2-0DF2-43B4-B55F-45FE08A0A231@gmail.com> Message-ID: On Fri, Feb 14, 2014 at 9:45 AM, Tom Grydeland wrote: > Hi developers, > > This is a repost of a message from December 2008 which gave no useful answers. Since then, I've had 4-5 requests for the code from people who had a need for it. It's not a massive demand, but enough that perhaps you'll consider my offer again. > > Since the previous posting, I've also included alternative filters thanks to Fan-Nian Kong that are shorter and more accurate when the function makes significant changes in more limited intervals. I'm not including the code (since it is mostly thousands of lines of tables), but I will provide the files to anyone who's interested. Yes, I think we'd be interested. Please do make a PR. Before you do, please double-check the licensing on the new code that you have added. It does look like Anderson's original code is in the public domain (the paper being published as part of Anderson's work at the USGS as a federal employee), so that part is in the clear. Just so we are clear, the lack of copyright statements (work by US federal employees aside) usually means that you have *no license* to redistribute the work, not that there are no restrictions on redistribution. Thanks! > --- original message below --- > > When I recently needed a Hankel transform I was unable to find one in > Scipy. I found one in MATLAB though[1], written by Prof. Brian > Borchers at New Mexico Tech. The code is based on a previous FORTRAN > implementation by W. L. Anderson [2], and the MATLAB code is not > marked with any copyright statements. Hankel transforms of the first > and second order can be computed through digital filtering. 
> > I have rewritten the code in Python/numpy, complete with tests and > acknowledgements of the origins, and my employer has agreed that I can > donate the code to Scipy. I believe this should be of interest. > Hankel transforms arise often in acoustics and other systems with > cylinder symmetry. If you want it, I would like a suggestion where it > belongs (with other integral transforms, probably) and how I should > shape the tests to make them suitable for inclusion. > > The tests I currently have use Hankel transforms of five different > functions with known analytical transforms and compares the > transformed values to the numerically evaluated analytical > expressions. Currently plots are generated, but for automated testing > I suppose something else would be better. Pointing me at an example > is sufficient. > > [1] > http://infohost.nmt.edu/~borchers/hankel.html > > [2] Anderson, W. L., 1979, Computer Program Numerical Integration of > Related Hankel Transforms of Orders 0 and 1 by Adaptive Digital > Filtering. Geophysics, 44(7):1287-1305. > > Best regards, > > -- > Tom Grydeland > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -- Robert Kern From yw5aj at virginia.edu Sat Feb 15 23:22:25 2014 From: yw5aj at virginia.edu (Yuxiang Wang) Date: Sat, 15 Feb 2014 23:22:25 -0500 Subject: [SciPy-Dev] An inconsistency in scipy.optimize.minimize In-Reply-To: References: Message-ID: Hi guys, Sorry about digging this out again... As well as not knowing how to contribute my effort to SciPy. Sorry about being ignorant in how to participate in an open-source project, but what is the suggested step that I should do next? Should I: 1) Report this issue on github 2) Fix the code I could do both, but the code would be only on my personal machine and not thoroughly tested... Could anyone please help and get me started on that? Thanks so much! 
-Shawn On Mon, Jan 27, 2014 at 4:55 PM, Yuxiang Wang wrote: > Aaron, > > Thanks for confirming! I agree that epsilon is better, as in the > following functions "epsilon" instead of "eps" are used: > > scipy.optimize.fmin_cg > scipy.optimize.fmin_ncg > scipy.optimize.fmin_tnc > scipy.optimize.fmin_bfgs > scipy.optimize.fmin_l_bfgs_b > > > -Shawn > > > > On Mon, Jan 27, 2014 at 4:46 PM, Aaron Webster wrote: >> On Mon, Jan 27, 2014 at 10:36 PM, Yuxiang Wang wrote: >>> However, by digging into the code, I realize that this value's name is >>> called "eps" instead of "epsilon" in the minimize wrapper. That is to >>> say, >> >> "epsilon" seems to be the consistent naming convention; I suggest it >> be changed to that. >> >> Good observation. >> >> -- >> Aaron Webster >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > -- > Yuxiang "Shawn" Wang > Gerling Research Lab > University of Virginia > yw5aj at virginia.edu > +1 (434) 284-0836 > https://sites.google.com/a/virginia.edu/yw5aj/ -- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/ From pmhobson at gmail.com Sun Feb 16 01:53:27 2014 From: pmhobson at gmail.com (Paul Hobson) Date: Sat, 15 Feb 2014 22:53:27 -0800 Subject: [SciPy-Dev] An inconsistency in scipy.optimize.minimize In-Reply-To: References: Message-ID: Hey Shawn, The main github repo has a document outlining how to most effectively contribute to the project. You can find that here: https://github.com/scipy/scipy/blob/master/HACKING.rst.txt Good luck! -paul On Sat, Feb 15, 2014 at 8:22 PM, Yuxiang Wang wrote: > Hi guys, > > Sorry about digging this out again... As well as not knowing how to > contribute my effort to SciPy. 
Sorry about being ignorant in how to > participate in an open-source project, but what is the suggested step > that I should do next? Should I: > > 1) Report this issue on github > > 2) Fix the code > > I could do both, but the code would be only on my personal machine and > not thoroughly tested... Could anyone please help and get me started > on that? > > Thanks so much! > > -Shawn > > On Mon, Jan 27, 2014 at 4:55 PM, Yuxiang Wang wrote: > > Aaron, > > > > Thanks for confirming! I agree that epsilon is better, as in the > > following functions "epsilon" instead of "eps" are used: > > > > scipy.optimize.fmin_cg > > scipy.optimize.fmin_ncg > > scipy.optimize.fmin_tnc > > scipy.optimize.fmin_bfgs > > scipy.optimize.fmin_l_bfgs_b > > > > > > -Shawn > > > > > > > > On Mon, Jan 27, 2014 at 4:46 PM, Aaron Webster > wrote: > >> On Mon, Jan 27, 2014 at 10:36 PM, Yuxiang Wang > wrote: > >>> However, by digging into the code, I realize that this value's name is > >>> called "eps" instead of "epsilon" in the minimize wrapper. That is to > >>> say, > >> > >> "epsilon" seems to be the consistent naming convention; I suggest it > >> be changed to that. > >> > >> Good observation. > >> > >> -- > >> Aaron Webster > >> _______________________________________________ > >> SciPy-Dev mailing list > >> SciPy-Dev at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > -- > > Yuxiang "Shawn" Wang > > Gerling Research Lab > > University of Virginia > > yw5aj at virginia.edu > > +1 (434) 284-0836 > > https://sites.google.com/a/virginia.edu/yw5aj/ > > > > -- > Yuxiang "Shawn" Wang > Gerling Research Lab > University of Virginia > yw5aj at virginia.edu > +1 (434) 284-0836 > https://sites.google.com/a/virginia.edu/yw5aj/ > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Sun Feb 16 07:59:09 2014 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 16 Feb 2014 14:59:09 +0200 Subject: [SciPy-Dev] An inconsistency in scipy.optimize.minimize In-Reply-To: References: Message-ID: Hi, 16.02.2014 06:22, Yuxiang Wang kirjoitti: > Sorry about digging this out again... As well as not knowing how to > contribute my effort to SciPy. Sorry about being ignorant in how to > participate in an open-source project, but what is the suggested step > that I should do next? Should I: > > 1) Report this issue on github > > 2) Fix the code > > I could do both, but the code would be only on my personal machine and > not thoroughly tested... Could anyone please help and get me started > on that? > > Thanks so much! > > -Shawn > > On Mon, Jan 27, 2014 at 4:55 PM, Yuxiang Wang wrote: >> Aaron, >> >> Thanks for confirming! I agree that epsilon is better, as in the >> following functions "epsilon" instead of "eps" are used: >> >> scipy.optimize.fmin_cg >> scipy.optimize.fmin_ncg >> scipy.optimize.fmin_tnc >> scipy.optimize.fmin_bfgs >> scipy.optimize.fmin_l_bfgs_b The overall situation is that `fmin_*` exist only for backward compatibility, and may be deprecated at some point if this seems sensible. Everything that they can do, also minimize() can do. minimize() was introduced to provide a standard interface with consistent parameter names, since the fmin_* were inconsistent between each other. It does not seem wise to me for us to start changing the parameter names in minimize() again, just for aesthetic reasons. This will break backward compatibility, and moreover, users of minimize() should not need to use the fmin_* functions. -- Pauli Virtanen From ralf.gommers at gmail.com Sun Feb 16 17:08:42 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 16 Feb 2014 23:08:42 +0100 Subject: [SciPy-Dev] ANN: Scipy 0.13.3 release In-Reply-To: References: Message-ID: Binaries are now available on SourceForge. 
Ralf On Tue, Feb 4, 2014 at 8:16 AM, Ralf Gommers wrote: > Hi, > > I'm happy to announce the availability of the scipy 0.13.3 release. This > is a bugfix only release; it contains fixes for regressions in ndimage and > weave. > > Source tarballs can be found at > https://sourceforge.net/projects/scipy/files/scipy/0.13.3/ and on PyPi. > Release notes copied below, binaries will follow later (the regular build > machine is not available for the next two weeks). > > Cheers, > Ralf > > > > ========================== > SciPy 0.13.3 Release Notes > ========================== > > SciPy 0.13.3 is a bug-fix release with no new features compared to 0.13.2. > Both the weave and the ndimage.label bugs were severe regressions in > 0.13.0, > hence this release. > > Issues fixed > ------------ > - 3148: fix a memory leak in ``ndimage.label``. > - 3216: fix weave issue with too long file names for MSVC. > > Other changes > ------------- > - Update Sphinx theme used for html docs so ``>>>`` in examples can be > toggled. > > Checksums > ========= > 0547c1f8e8afad4009cc9b5ef17a2d4d release/installers/scipy-0.13.3.tar.gz > 20ff3a867cc5925ef1d654aed2ff7e88 release/installers/scipy-0.13.3.zip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.grydeland at gmail.com Mon Feb 17 09:45:46 2014 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Mon, 17 Feb 2014 15:45:46 +0100 Subject: [SciPy-Dev] Hankel transforms, again In-Reply-To: References: <5F68ADA2-0DF2-43B4-B55F-45FE08A0A231@gmail.com> Message-ID: <004A17FA-F51A-4028-97B0-74E2AB958804@gmail.com> On 2014-02-14, at 13:15, Robert Kern wrote: > On Fri, Feb 14, 2014 at 9:45 AM, Tom Grydeland wrote: >> Hi developers, >> >> This is a repost of a message from December 2008 which gave no useful answers. Since then, I've had 4-5 requests for the code from people who had a need for it. It's not a massive demand, but enough that perhaps you'll consider my offer again. 
>> >> Since the previous posting, I've also included alternative filters thanks to Fan-Nian Kong that are shorter and more accurate when the function makes significant changes in more limited intervals. I'm not including the code (since it is mostly thousands of lines of tables), but I will provide the files to anyone who's interested. > > Yes, I think we'd be interested. Please do make a PR. Before you do, > please double-check the licensing on the new code that you have added. > It does look like Anderson's original code is in the public domain > (the paper being published as part of Anderson's work at the USGS as a > federal employee), so that part is in the clear. Just so we are clear, > the lack of copyright statements (work by US federal employees aside) > usually means that you have *no license* to redistribute the work, not > that there are no restrictions on redistribution. Hello again, To the last point first: I agree that Anderson's work is in the public domain. I contacted Fannian Kong regarding terms for his filters, whether he would be willing to put them in the public domain or release them under a BSD-style license. I explained that in either case others were free to use them, even for profit, without any compensation. I got the following reply: ---------- Copy right things are complicated things to me. Please treat those material as published results from an open journal. So, as long as the journal paper is quoted as the reference, everybody can use the journal results. ---------- Frankly, I don't know if that is enough that we can include these filters or not. Opinions? To the first point: I'll require a few hints and pointers. If the original function is f and its transform F, then F(b) = ([f(y/b)]^t * w)/b, where y and w are vectors of a certain length (801 for Anderson; 61, 121 or 241 for Kong -- 
Orders 0 and 1 transforms differ only in the weighting vectors w, so if both are needed, much is saved by computing both at once on the same grid. In my application I would typically transform a number of functions to the same offsets b, so I would call one method on a transform object to set up a ?k? grid ( = y/b), evaluate my function(s) on that grid, and then call a ?transform? method with these function evaluations to obtain the transformed quantities F(b). This is sufficiently different from what one would use for other types of integral transforms that I?m open to other suggestions when it comes to interfaces. Also, I don?t see an obvious place where this should live. I?m thinking SciPy rather than NumPy, but it is not obviously a fit for any of the existing top-level namespaces. The closest thing is fftpack, but this isn?t part of fftpack. Arguments could be made for ndimage or signal also, but not very convincingly. > Thanks! > Robert Kern Cheers, Tom Grydeland From awebster at falsecolour.com Mon Feb 17 10:14:54 2014 From: awebster at falsecolour.com (Aaron Webster) Date: Mon, 17 Feb 2014 16:14:54 +0100 Subject: [SciPy-Dev] Hankel transforms, again In-Reply-To: <004A17FA-F51A-4028-97B0-74E2AB958804@gmail.com> References: <5F68ADA2-0DF2-43B4-B55F-45FE08A0A231@gmail.com> <004A17FA-F51A-4028-97B0-74E2AB958804@gmail.com> Message-ID: I'm no lawyer, but in my experience it is enough to cite the paper describing the algorithm in your own implementation of it. I've seen this done thousands of times. It gets a little bit strange when software patents are concerned, but as far as copyrights go you should be fine. On Mon, Feb 17, 2014 at 3:45 PM, Tom Grydeland wrote: > > On 2014-02-14, at 13:15, Robert Kern wrote: > >> On Fri, Feb 14, 2014 at 9:45 AM, Tom Grydeland wrote: >>> Hi developers, >>> >>> This is a repost of a message from December 2008 which gave no useful answers. Since then, I?ve had 4-5 requests for the code from people who had a need for it. 
It's not a massive demand, but enough that perhaps you'll consider my offer again. >>> >>> Since the previous posting, I've also included alternative filters thanks to Fan-Nian Kong that are shorter and more accurate when the function makes significant changes in more limited intervals. I'm not including the code (since it is mostly thousands of lines of tables), but I will provide the files to anyone who's interested. >> >> Yes, I think we'd be interested. Please do make a PR. Before you do, >> please double-check the licensing on the new code that you have added. >> It does look like Anderson's original code is in the public domain >> (the paper being published as part of Anderson's work at the USGS as a >> federal employee), so that part is in the clear. Just so we are clear, >> the lack of copyright statements (work by US federal employees aside) >> usually means that you have *no license* to redistribute the work, not >> that there are no restrictions on redistribution. > > Hello again, > > To the last point first: I agree that Anderson's work is in the public domain. > > I contacted Fannian Kong regarding terms for his filters, whether he would be willing to put them in the public domain or release them under a BSD-style license. I explained that in either case others were free to use them, even for profit, without any compensation. > > I got the following reply: > > ---------- > Copy right things are complicated things to me. Please treat those material as published results from an open journal. So, as long as the journal paper is quoted as the reference, everybody can use the journal results. > ---------- > > Frankly, I don't know if that is enough that we can include these filters or not. Opinions? > > To the first point: I'll require a few hints and pointers. > > If the original function is f and its transform F, then F(b) = ([f(y/b)]^t * w)/b, where y and w are vectors of a certain length (801 for Anderson; 61, 121 or 241 for Kong -- 
these are the tables I mentioned previously). In other words, each transformed point requires a certain number of function evaluations. Orders 0 and 1 transforms differ only in the weighting vectors w, so if both are needed, much is saved by computing both at once on the same grid. > > In my application I would typically transform a number of functions to the same offsets b, so I would call one method on a transform object to set up a "k" grid ( = y/b), evaluate my function(s) on that grid, and then call a "transform" method with these function evaluations to obtain the transformed quantities F(b). This is sufficiently different from what one would use for other types of integral transforms that I'm open to other suggestions when it comes to interfaces. > > Also, I don't see an obvious place where this should live. I'm thinking SciPy rather than NumPy, but it is not obviously a fit for any of the existing top-level namespaces. The closest thing is fftpack, but this isn't part of fftpack. Arguments could be made for ndimage or signal also, but not very convincingly. > > >> Thanks! > >> Robert Kern > > Cheers, > > Tom Grydeland > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From robert.kern at gmail.com Mon Feb 17 10:54:28 2014 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 17 Feb 2014 15:54:28 +0000 Subject: [SciPy-Dev] Hankel transforms, again In-Reply-To: <004A17FA-F51A-4028-97B0-74E2AB958804@gmail.com> References: <5F68ADA2-0DF2-43B4-B55F-45FE08A0A231@gmail.com> <004A17FA-F51A-4028-97B0-74E2AB958804@gmail.com> Message-ID: On Mon, Feb 17, 2014 at 2:45 PM, Tom Grydeland wrote: > > On 2014-02-14, at 13:15, Robert Kern wrote: > >> On Fri, Feb 14, 2014 at 9:45 AM, Tom Grydeland wrote: >>> Hi developers, >>> >>> This is a repost of a message from December 2008 which gave no useful answers. 
Since then, I've had 4-5 requests for the code from people who had a need for it. It's not a massive demand, but enough that perhaps you'll consider my offer again. >>> >>> Since the previous posting, I've also included alternative filters thanks to Fan-Nian Kong that are shorter and more accurate when the function makes significant changes in more limited intervals. I'm not including the code (since it is mostly thousands of lines of tables), but I will provide the files to anyone who's interested. >> >> Yes, I think we'd be interested. Please do make a PR. Before you do, >> please double-check the licensing on the new code that you have added. >> It does look like Anderson's original code is in the public domain >> (the paper being published as part of Anderson's work at the USGS as a >> federal employee), so that part is in the clear. Just so we are clear, >> the lack of copyright statements (work by US federal employees aside) >> usually means that you have *no license* to redistribute the work, not >> that there are no restrictions on redistribution. > > Hello again, > > To the last point first: I agree that Anderson's work is in the public domain. > > I contacted Fannian Kong regarding terms for his filters, whether he would be willing to put them in the public domain or release them under a BSD-style license. I explained that in either case others were free to use them, even for profit, without any compensation. > > I got the following reply: > > ---------- > Copy right things are complicated things to me. Please treat those material as published results from an open journal. So, as long as the journal paper is quoted as the reference, everybody can use the journal results. > ---------- > > Frankly, I don't know if that is enough that we can include these filters or not. Opinions? Short answer: no, that's too vague of a statement, and he might be wanting more restrictions than we are prepared to place on code in scipy. 
The BSD license that we use for scipy does not require anyone to cite the journal article. We will include a citation in our code, certainly, but we can make no guarantee that any downstream users of these functions will include that citation in their code or papers that use this code. We do include *encouragement* for users of such functions to cite the journal article when the original author wishes it. The BSD license does require that the copyright notice remain attached to the code, though, and that may be enough for him. We would need a positive statement from him that we can redistribute his code under a BSD license. If he does not take the time to read and understand the consequences of the BSD license, I would not be comfortable taking that statement of his as assenting to it. That said, the bulk of this code appears to just be tables of constants with a few fairly trivial function calls. Under US law, these aren't really copyrightable, but under EU law, the tables might be (it seems Fan-Nian Kong works in Norway). Can these tables be recreated somehow besides copying the data files on his website? Is there an algorithm for doing so? If you can rewrite the code from just the description of the algorithm in the paper and not by translating the MATLAB code from his site, then we are in the clear, IMO. (IANAL. TINLA.) Since he does seem to want citations, we should include the citation in the docstrings and encourage users of those functions to cite it too. > To the first point: I'll require a few hints and pointers. > > If the original function is f and its transform F, then F(b) = ([f(y/b)]^t * w)/b, where y and w are vectors of a certain length (801 for Anderson; 61, 121 or 241 for Kong - these are the tables I mentioned previously). In other words, each transformed point requires a certain number of function evaluations.
Orders 0 and 1 transforms differ only in the weighting vectors w, so if both are needed, much is saved by computing both at once on the same grid. > > In my application I would typically transform a number of functions to the same offsets b, so I would call one method on a transform object to set up a 'k' grid ( = y/b), evaluate my function(s) on that grid, and then call a 'transform' method with these function evaluations to obtain the transformed quantities F(b). This is sufficiently different from what one would use for other types of integral transforms that I'm open to other suggestions when it comes to interfaces. > > Also, I don't see an obvious place where this should live. I'm thinking SciPy rather than NumPy, but it is not obviously a fit for any of the existing top-level namespaces. The closest thing is fftpack, but this isn't part of fftpack. Arguments could be made for ndimage or signal also, but not very convincingly. I would opt for scipy.signal. The interface can be whatever you find the most useful. If we can build on top of that an interface similar to other transforms, albeit inefficiently, so much the better, but let your actual use cases drive the interface. -- Robert Kern From nouiz at nouiz.org Mon Feb 17 15:39:03 2014 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Mon, 17 Feb 2014 15:39:03 -0500 Subject: [SciPy-Dev] About Google Summer of Code In-Reply-To: <6760436.Dn19K7eRFh@linux-tp-laptop.sysu> References: <6760436.Dn19K7eRFh@linux-tp-laptop.sysu> Message-ID: I saw some discussion about this on the numpy mailing list. Maybe you can check there. Otherwise Theano applied to GSoC this year :) (I'm a Theano developer...)
If you are interested: http://www.deeplearning.net/software/theano/ https://github.com/Theano/Theano/wiki/GSoC2014 Fred On Thu, Feb 13, 2014 at 9:45 PM, Richard Tsai wrote: > Hi everyone, I've just noticed that neither NumPy nor SciPy is listed on > Python's GSoC page (https://wiki.python.org/moin/SummerOfCode/2014 > ). Is SciPy/NumPy going to apply for it? I'm a student and I want to > participate in GSoC this year. Thanks! > > -- > Richard Tsai > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From richard9404 at gmail.com Mon Feb 17 22:14:27 2014 From: richard9404 at gmail.com (Richard Tsai) Date: Tue, 18 Feb 2014 11:14:27 +0800 Subject: [SciPy-Dev] About Google Summer of Code In-Reply-To: References: <6760436.Dn19K7eRFh@linux-tp-laptop.sysu> Message-ID: <4582681.rkXC7UH0Rr@linux-tp-laptop.sysu> Hi, Fred! I've never used Theano. But I'm interested in GPU programming and I'm attending the Heterogeneous Parallel Programming course on Coursera now. I'll be glad to try it. Thanks! Richard On Mon, 17 Feb 2014 15:39:03, Frédéric Bastien wrote: > I saw some discussion about this on the numpy mailing list. Maybe you > can check there. > > Otherwise Theano applied to GSoC this year :) (I'm a Theano > developer...) If you are interested: > > http://www.deeplearning.net/software/theano/ > https://github.com/Theano/Theano/wiki/GSoC2014 > > Fred From tom.grydeland at gmail.com Tue Feb 18 03:23:46 2014 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Tue, 18 Feb 2014 09:23:46 +0100 Subject: [SciPy-Dev] Hankel transforms, again In-Reply-To: References: <5F68ADA2-0DF2-43B4-B55F-45FE08A0A231@gmail.com> <004A17FA-F51A-4028-97B0-74E2AB958804@gmail.com> Message-ID: On 2014-02-17, at 16:54, Robert Kern wrote: > Short answer: no, that's too vague of a statement, and he might be > wanting more restrictions than we are prepared to place on code in > scipy.
The BSD license that we use for scipy does not require anyone > to cite the journal article. We will include a citation in our code, > certainly, but we can make no guarantee that any downstream users of > these functions will include that citation in their code or papers > that use this code. We do include *encouragement* for users of such > functions to cite the journal article when the original author wishes > it. The BSD license does require that the copyright notice remain > attached to the code, though, and that may be enough for him. Okay, I can ask again with these clarifications. > That said, the bulk of this code appears to just be tables of > constants with a few fairly trivial function calls. Under US law, > these aren't really copyrightable, but under EU law, the tables might > be (it seems Fan-Nian Kong works in Norway). Can these tables be > recreated somehow besides copying the data files on his website? Is > there an algorithm for doing so? The algorithm for doing so is what is described in the paper. (The paper is here: DOI:10.1111/j.1365-2478.2006.00585.x and I have a PDF that I won't spam the mailing list with.) > If you can rewrite the code from just > the description of the algorithm in the paper and not by translating > the MATLAB code from his site, then we are in the clear, IMO. (IANAL. > TINLA.) As you say, the mathematical machinery of the code is simple enough, once you have the tables. My Python code (apart from the tables) does not even vaguely resemble the Matlab code. I called it an 'adaptation', not a 'translation', but even 'adaptation' may be too specific. > Since he does seem to want citations, we should include the > citation in the docstrings and encourage users of those functions to > cite it too. Absolutely. >> Also, I don't see an obvious place where this should live. I'm thinking SciPy rather than NumPy, but it is not obviously a fit for any of the existing top-level namespaces.
The closest thing is fftpack, but this isn't part of fftpack. Arguments could be made for ndimage or signal also, but not very convincingly. > > I would opt for scipy.signal. The interface can be whatever you find > the most useful. If we can build on top of that an interface similar > to other transforms, albeit inefficiently, so much the better, but let > your actual use cases drive the interface. Fine. > Robert Kern Thank you for your feedback. I'll ask Dr. Kong again, and proceed with a PR using Anderson (with provisions for adding Kong later) -T From larson.eric.d at gmail.com Tue Feb 18 12:40:36 2014 From: larson.eric.d at gmail.com (Eric Larson) Date: Tue, 18 Feb 2014 09:40:36 -0800 Subject: [SciPy-Dev] Maximum length sequence Message-ID: Maximum length sequences (MLS) are useful in signal processing for finding impulse responses. I can't find a great implementation of MLS in Python, but maybe I've missed one somewhere. Would this be something good to put in scipy.signal, perhaps as scipy.signal.mls? Cheers, Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Wed Feb 19 08:37:20 2014 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 19 Feb 2014 08:37:20 -0500 Subject: [SciPy-Dev] Maximum length sequence References: Message-ID: Eric Larson wrote: > Maximum length sequences (MLS) are useful in signal processing for finding > impulse responses. I can't find a great implementation of MLS in Python, > but maybe I've missed one somewhere. Would this be something good to put in > scipy.signal, perhaps as scipy.signal.mls? > > Cheers, > Eric If you want a sequence of gaussian rv, just use numpy.random.
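[Editorial note: as background for the MLS thread above, a maximum length sequence is produced by a binary Fibonacci linear-feedback shift register (LFSR) whose feedback polynomial is primitive. The following is a minimal sketch; the function name and the degree-4 tap positions are illustrative assumptions taken from standard LFSR tables, not code posted in the thread.]

```python
import numpy as np

def mls(nbits, taps):
    """Maximum length sequence from a Fibonacci LFSR.

    `taps` gives the 0-based register positions XORed into the feedback
    bit; when they correspond to a primitive polynomial, the output has
    the full period 2**nbits - 1.
    """
    state = np.ones(nbits, dtype=int)   # any nonzero seed works
    length = 2**nbits - 1
    seq = np.empty(length, dtype=int)
    for i in range(length):
        seq[i] = state[-1]              # emit the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t]              # XOR the tapped bits
        state[1:] = state[:-1]          # shift the register
        state[0] = fb
    return seq

# x^4 + x^3 + 1 is primitive, so this gives the full period of 15.
s = mls(4, taps=[3, 2])
```

For nbits = 4 the sequence has period 15 with 8 ones and 7 zeros, and the circular autocorrelation of the +/-1-mapped sequence is -1 at every nonzero lag; that near-delta autocorrelation is what makes MLS useful for impulse-response measurement, and why Gaussian noise is not a substitute.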
From robert.kern at gmail.com Wed Feb 19 08:39:28 2014 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Feb 2014 13:39:28 +0000 Subject: [SciPy-Dev] Maximum length sequence In-Reply-To: References: Message-ID: On Wed, Feb 19, 2014 at 1:37 PM, Neal Becker wrote: > Eric Larson wrote: > >> Maximum length sequences (MLS) are useful in signal processing for finding >> impulse responses. I can't find a great implementation of MLS in Python, >> but maybe I've missed one somewhere. Would this be something good to put in >> scipy.signal, perhaps as scipy.signal.mls? >> >> Cheers, >> Eric > > If you want a sequence of gaussian rv, just use numpy.random. That does not appear to be what he wants: http://en.wikipedia.org/wiki/Maximum_length_sequence -- Robert Kern From pierre.haessig at crans.org Wed Feb 19 09:40:22 2014 From: pierre.haessig at crans.org (Pierre Haessig) Date: Wed, 19 Feb 2014 15:40:22 +0100 Subject: [SciPy-Dev] Maximum length sequence In-Reply-To: References: Message-ID: <5304C256.2080300@crans.org> Hi, On 18/02/2014 18:40, Eric Larson wrote: > Maximum length sequences (MLS) are useful in signal processing for > finding impulse responses. I can't find a great implementation of MLS > in Python, but maybe I've missed one somewhere. Would this be > something good to put in scipy.signal, perhaps as scipy.signal.mls? scipy.signal seems the right place to me. Just a name idea : scipy.signal.max_len_seq ? (to avoid yet another acronym) best, Pierre -------------- next part -------------- A non-text attachment was scrubbed... Name: pierre_haessig.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 880 bytes Desc: OpenPGP digital signature URL: From larson.eric.d at gmail.com Wed Feb 19 11:55:13 2014 From: larson.eric.d at gmail.com (Eric Larson) Date: Wed, 19 Feb 2014 08:55:13 -0800 Subject: [SciPy-Dev] Maximum length sequence In-Reply-To: <5304C256.2080300@crans.org> References: <5304C256.2080300@crans.org> Message-ID: Thanks for the ideas. I've opened a PR: https://github.com/scipy/scipy/pull/3351 Indeed I specifically want MLS as opposed to Gaussian noise as MLS has some properties that make it nice for finding impulse responses (e.g., in acoustics and neuroscience). I've changed the name in the PR to `max_len_seq` from `mls`, feel free to comment on the PR if people have other ideas. Cheers, Eric On Wed, Feb 19, 2014 at 6:40 AM, Pierre Haessig wrote: > Hi, > > On 18/02/2014 18:40, Eric Larson wrote: > > Maximum length sequences (MLS) are useful in signal processing for > > finding impulse responses. I can't find a great implementation of MLS > > in Python, but maybe I've missed one somewhere. Would this be > > something good to put in scipy.signal, perhaps as scipy.signal.mls? > scipy.signal seems the right place to me. > > Just a name idea : scipy.signal.max_len_seq ? > (to avoid yet another acronym) > > best, > Pierre > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Feb 19 14:28:41 2014 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 19 Feb 2014 21:28:41 +0200 Subject: [SciPy-Dev] Pull request labels Message-ID: Hi, I'm going to tag some unfinished pull requests with a "needs-work" label. This should help with the problem that it's not easy to see from the list which ones are ready to go and which are waiting for someone to do something.
Since the labels cannot be changed by the PR submitter, please shout in the comments if the label is out of date. Pauli From ralf.gommers at gmail.com Wed Feb 19 14:34:59 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 19 Feb 2014 20:34:59 +0100 Subject: [SciPy-Dev] Pull request labels In-Reply-To: References: Message-ID: On Wed, Feb 19, 2014 at 8:28 PM, Pauli Virtanen wrote: > Hi, > > I'm going to tag some unfinished pull requests with a "needs-work" label. > Good idea. I've also started labeling all PRs with the component, and with milestone when merging. If we all do this it helps a lot for getting an overview of what went into a release. Ralf > > This should help with the problem that it's not easy to see from the > list which ones are ready to go and which are waiting for someone to do > something. Since the labels cannot be changed by the PR submitter, > please shout in the comments if the label is out of date. > > Pauli > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Feb 19 14:45:14 2014 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 19 Feb 2014 21:45:14 +0200 Subject: [SciPy-Dev] Pull request labels In-Reply-To: References: Message-ID: 19.02.2014 21:34, Ralf Gommers wrote: > On Wed, Feb 19, 2014 at 8:28 PM, Pauli Virtanen wrote: >> I'm going to tag some unfinished pull requests with a "needs-work" label. > > Good idea. I've also started labeling all PRs with the component, and with > milestone when merging. If we all do this it helps a lot for getting an > overview of what went into a release. I think I'll add also a "PR" label: The problem is that the pull request list does not show labels, so I can't see at a glance what is what.
The issues view shows also the PRs, so it could be usable, but without a separate PR label, these are jumbled together with bug reports. Pauli From josef.pktd at gmail.com Wed Feb 19 15:43:31 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 19 Feb 2014 15:43:31 -0500 Subject: [SciPy-Dev] Pull request labels In-Reply-To: References: Message-ID: On Wed, Feb 19, 2014 at 2:45 PM, Pauli Virtanen wrote: > 19.02.2014 21:34, Ralf Gommers wrote: >> On Wed, Feb 19, 2014 at 8:28 PM, Pauli Virtanen wrote: >>> I'm going to tag some unfinished pull requests with a "needs-work" label. >> >> Good idea. I've also started labeling all PRs with the component, and with >> milestone when merging. If we all do this it helps a lot for getting an >> overview of what went into a release. > > I think I'll add also a "PR" label: The problem is that the pull request > list does not show labels, so I can't see at a glance what is what. The > issues view shows also the PRs, so it could be usable, but without a > separate PR label, these are jumbled together with bug reports. That's very useful (label == PR & stats).sum() = 11 Josef > > Pauli > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From pasky at ucw.cz Wed Feb 19 21:43:00 2014 From: pasky at ucw.cz (Petr Baudis) Date: Thu, 20 Feb 2014 03:43:00 +0100 Subject: [SciPy-Dev] Matlab's fminbnd vs. scipy.optimize Message-ID: <20140220024300.GK7903@machine.or.cz> Hi! I'd like to ask if anyone knows of any important differences between Matlab's fminbnd and scipy.optimize.fminbound (minimize_scalar method Bounded)? Can they be assumed to behave and perform in a roughly equivalent way?
Thanks, Petr "Pasky" Baudis From pelson.pub at gmail.com Thu Feb 20 09:00:14 2014 From: pelson.pub at gmail.com (Phil Elson) Date: Thu, 20 Feb 2014 14:00:14 +0000 Subject: [SciPy-Dev] Qhull Delaunay triangulation "equations" attribute Message-ID: I'm trying to manually construct a Delaunay triangulation for an orthogonal 2d grid as described in http://stackoverflow.com/questions/21888546 and wonder if anybody can help provide some interpretation of the "equations" values of a scipy.spatial.Delaunay instance. Essentially I'm working off the premise that it is possible to construct a Delaunay triangulation from a regular grid without going through the expensive triangulation stage, does anybody know if that is true or not? Many thanks for any advice you might have, Phil -------------- next part -------------- An HTML attachment was scrubbed... URL: From robfalck at gmail.com Thu Feb 20 09:54:43 2014 From: robfalck at gmail.com (Rob Falck) Date: Thu, 20 Feb 2014 09:54:43 -0500 Subject: [SciPy-Dev] ENH: linprog function for linear programming Message-ID: I've finally gotten around to putting a linear programming routine into a good state for inclusion into scipy. I've tested with somewhat large problems (400 variables, 40 constraints) and it solves within a second or two. Right now it only uses a two-phase dense-matrix based simplex solver, but the overall "linprog" routine is intended to function in much the way that minimize does, where different methods can be used as they are added in the future. The pull request has been submitted and passes all tests. I welcome people to put it to the test and find any problems I may have missed. -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pav at iki.fi Thu Feb 20 12:18:26 2014 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 20 Feb 2014 19:18:26 +0200 Subject: [SciPy-Dev] Qhull Delaunay triangulation "equations" attribute In-Reply-To: References: Message-ID: 20.02.2014 16:00, Phil Elson wrote: > I'm trying to manually construct a Delaunay triangulation for an orthogonal > 2d grid as described in > http://stackoverflow.com/questions/21888546 and > wonder if anybody can help provide some interpretation of the > "equations" values of a scipy.spatial.Delaunay instance. > Essentially I'm working off the premise that it is possible to construct a > Delaunay triangulation from a regular grid without going through the > expensive triangulation stage, does anybody know if that is true or not? Yes, it should be possible to construct the equations manually. For Delaunay, "equations" contains the hyperplane equation defining the convex hull facets in ndim+1 dimensions corresponding to the simplices of the triangulation. You get the ndim+1 dim coordinates for each simplex from the ndim coordinates by adding an additional last coordinate to the vertices of the simplices. The routine Delaunay.lift_points maps points in ndim dims onto the paraboloid in ndim+1. The hyperplane equations should be constructed for the so transformed coordinates, in the form sum([equations[j,k]*x[k] for k in range(ndim+1)]) + equations[j,ndim+1] == 0 Here, x is the coordinate "lifted" to ndim+1 dims. Geometrically, equations[j,:ndim+1] contains the normal vector of the facet j, and equations[j,ndim+1] the offset scalar. -- Pauli Virtanen From lists at hilboll.de Fri Feb 21 05:18:12 2014 From: lists at hilboll.de (Andreas Hilboll) Date: Fri, 21 Feb 2014 11:18:12 +0100 Subject: [SciPy-Dev] "Anonymous" bug reporting: useful? Message-ID: <530727E4.5010701@hilboll.de> I was wondering if it would be good if users could report bugs without having to sign up for a Github account.
From a technical viewpoint, this should be possible, using a dedicated email account, some server-side scripting, and the Github API. Do you have any feeling for how many bugs do not get reported because a user doesn't want to go through the hassle of creating an account just to report a bug? I have the feeling that it must be as simple as possible to report bugs. And having to create an account (be it at Github or by signing up for a mailing list) is an obstacle to that. Any thoughts / ideas? -- Andreas. From robert.kern at gmail.com Fri Feb 21 05:37:18 2014 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Feb 2014 10:37:18 +0000 Subject: [SciPy-Dev] "Anonymous" bug reporting: useful? In-Reply-To: <530727E4.5010701@hilboll.de> References: <530727E4.5010701@hilboll.de> Message-ID: On Fri, Feb 21, 2014 at 10:18 AM, Andreas Hilboll wrote: > I was wondering if it would be good if users could report bugs without > having to sign up for a Github account. From a technical viewpoint, > this should be possible, using a dedicated email account, some > server-side scripting, and the Github API. > > Do you have any feeling for how many bugs do not get reported because a > user doesn't want to go through the hassle of creating an account just > to report a bug? > > I have the feeling that it must be as simple as possible to report bugs. > And having to create an account (be it at Github or by signing up for a > mailing list) is an obstacle to that. > > Any thoughts / ideas? I didn't realize that we had fixed all of the bugs that have already been reported and our volunteers are pining away without any work to do. ;-) More seriously, the problem has always been spam. Removing filters to lower the barrier for honest bug reports also removes the filters for much more spam. There are always tradeoffs, and I think we've settled on a reasonable one. If we were getting no bug reports, that would be a problem, but we're getting a few every day. That's a healthy amount. 
-- Robert Kern From yw5aj at virginia.edu Fri Feb 21 08:25:54 2014 From: yw5aj at virginia.edu (Yuxiang Wang) Date: Fri, 21 Feb 2014 08:25:54 -0500 Subject: [SciPy-Dev] An inconsistency in scipy.optimize.minimize In-Reply-To: References: Message-ID: Thank you Pauli for your response! And sorry for replying late. I agree with you that we should not use fmin_* functions in the future, and neither should we break backward compatibility. However I do see a problem with the overall consistency and the documentation. Could we make both "epsilon" and "eps" work? Let me try to explain here. All functions except minimize use "eps" and have it documented; minimize with all other algorithms (e.g., "bfgs") uses "eps"; only minimize uses "epsilon" when "l_bfgs_b" is called, which is inconsistent, and what's worse... it is nowhere documented. There is nowhere in the minimize documentation saying that I should use the parameter "epsilon" for "l-bfgs-b". The only hint I could get is in fmin_l_bfgs_b, and it says "eps", not "epsilon". I use show_options and it still wouldn't tell me to use "epsilon". This is what I got: In [6]: from scipy.optimize import minimize, show_options In [7]: show_options('minimize', 'BFGS') BFGS options: gtol : float Gradient norm must be less than `gtol` before successful termination. norm : float Order of norm (Inf is max, -Inf is min). eps : float or ndarray If `jac` is approximated, use this value for the step size. In [8]: show_options('minimize', 'L-BFGS-B') L-BFGS-B options: ftol : float The iteration stops when ``(f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol``. gtol : float The iteration will stop when ``max{|proj g_i | i = 1, ..., n} <= gtol`` where ``pg_i`` is the i-th component of the projected gradient. maxcor : int The maximum number of variable metric corrections used to define the limited memory matrix. (The limited memory BFGS method does not store the full hessian but uses this many terms in an approximation to it.)
maxiter : int Maximum number of function evaluations. None of them made me realize the name "epsilon" is usable - the only hint is that I might try "eps", but it didn't work. -Shawn On Sun, Feb 16, 2014 at 7:59 AM, Pauli Virtanen wrote: > Hi, > > 16.02.2014 06:22, Yuxiang Wang wrote: >> Sorry about digging this out again... As well as not knowing how to >> contribute my effort to SciPy. Sorry about being ignorant in how to >> participate in an open-source project, but what is the suggested step >> that I should do next? Should I: >> >> 1) Report this issue on github >> >> 2) Fix the code >> >> I could do both, but the code would be only on my personal machine and >> not thoroughly tested... Could anyone please help and get me started >> on that? >> >> Thanks so much! >> >> -Shawn >> >> On Mon, Jan 27, 2014 at 4:55 PM, Yuxiang Wang wrote: >>> Aaron, >>> >>> Thanks for confirming! I agree that epsilon is better, as in the >>> following functions "epsilon" instead of "eps" are used: >>> >>> scipy.optimize.fmin_cg >>> scipy.optimize.fmin_ncg >>> scipy.optimize.fmin_tnc >>> scipy.optimize.fmin_bfgs >>> scipy.optimize.fmin_l_bfgs_b > > The overall situation is that `fmin_*` exist only for backward > compatibility, and may be deprecated at some point if this seems > sensible. Everything that they can do, also minimize() can do. > > minimize() was introduced to provide a standard interface with > consistent parameter names, since the fmin_* were inconsistent between > each other. > > It does not seem wise to me for us to start changing the parameter names > in minimize() again, just for aesthetic reasons. This will break > backward compatibility, and moreover, users of minimize() should not > need to use the fmin_* functions.
> > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev -- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/ From yw5aj at virginia.edu Fri Feb 21 08:34:09 2014 From: yw5aj at virginia.edu (Yuxiang Wang) Date: Fri, 21 Feb 2014 08:34:09 -0500 Subject: [SciPy-Dev] An inconsistency in scipy.optimize.minimize In-Reply-To: References: Message-ID: Dear all, Please ignore my last email - I got "epsilon" and "eps" reversed in my mind. I am really sorry for the spam. -Shawn On Fri, Feb 21, 2014 at 8:25 AM, Yuxiang Wang wrote: > Thank you Pauli for your response! And sorry for replying late. > > I agree with you that we should not use fmin_* functions in the > future, and neither should we break backward compatibility. However I > do see a problem the overall consistency and the document. Could we > make both "epsilon" and "eps" work? Let me try to explain here. > > All functions except minimize uses "eps" and have it documented; > minimize with all other algorithms (e.g., "bfgs") uses "eps"; only > minimize uses "epsilon" when "l_bfgs_b" is called, which is > inconsistent, and what's worse... it is nowhere documented. There is > nowhere in the minimize document saying that I should use the > parameter "epsilon" for "l-bfgs-b". The only hint I could get is in > fmin_l_bfgs_b, and it says "eps", not "epsilon". > > > > I use show_options and it still wouldn't tell me to use "epsilon". > This is what I got: > > In [6]: from scipy.optimize import minimize, show_options > In [7]: show_options('minimize', 'BFGS') > > BFGS options: > gtol : float > Gradient norm must be less than `gtol` before successful > termination. > norm : float > Order of norm (Inf is max, -Inf is min). > eps : float or ndarray > If `jac` is approximated, use this value for the step size. 
> > In [8]: show_options('minimize', 'L-BFGS-B') > > L-BFGS-B options: > ftol : float > The iteration stops when ``(f^k - > f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol``. > gtol : float > The iteration will stop when ``max{|proj g_i | i = 1, ..., n} > <= gtol`` where ``pg_i`` is the i-th component of the > projected gradient. > maxcor : int > The maximum number of variable metric corrections used to > define the limited memory matrix. (The limited memory BFGS > method does not store the full hessian but uses this many terms > in an approximation to it.) > maxiter : int > Maximum number of function evaluations. > > None of them made me realize the name "epsilon" is usable - the only > hint is that I might try "eps", but it didn't work. > > -Shawn > > On Sun, Feb 16, 2014 at 7:59 AM, Pauli Virtanen wrote: >> Hi, >> >> 16.02.2014 06:22, Yuxiang Wang wrote: >>> Sorry about digging this out again... As well as not knowing how to >>> contribute my effort to SciPy. Sorry about being ignorant in how to >>> participate in an open-source project, but what is the suggested step >>> that I should do next? Should I: >>> >>> 1) Report this issue on github >>> >>> 2) Fix the code >>> >>> I could do both, but the code would be only on my personal machine and >>> not thoroughly tested... Could anyone please help and get me started >>> on that? >>> >>> Thanks so much! >>> >>> -Shawn >>> >>> On Mon, Jan 27, 2014 at 4:55 PM, Yuxiang Wang wrote: >>>> Aaron, >>>> >>>> Thanks for confirming! I agree that epsilon is better, as in the >>>> following functions "epsilon" instead of "eps" are used: >>>> >>>> scipy.optimize.fmin_cg >>>> scipy.optimize.fmin_ncg >>>> scipy.optimize.fmin_tnc >>>> scipy.optimize.fmin_bfgs >>>> scipy.optimize.fmin_l_bfgs_b >> >> The overall situation is that `fmin_*` exist only for backward >> compatibility, and may be deprecated at some point if this seems >> sensible. Everything that they can do, also minimize() can do.
>> >> minimize() was introduced to provide a standard interface with >> consistent parameter names, since the fmin_* were inconsistent between >> each other. >> >> It does not seem wise to me for us to start changing the parameter names >> in minimize() again, just for aesthetic reasons. This will break >> backward compatibility, and moreover, users of minimize() should not >> need to use the fmin_* functions. >> >> -- >> Pauli Virtanen >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > -- > Yuxiang "Shawn" Wang > Gerling Research Lab > University of Virginia > yw5aj at virginia.edu > +1 (434) 284-0836 > https://sites.google.com/a/virginia.edu/yw5aj/ -- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/ From nouiz at nouiz.org Fri Feb 21 09:40:38 2014 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Fri, 21 Feb 2014 09:40:38 -0500 Subject: [SciPy-Dev] "Anonymous" bug reporting: useful? In-Reply-To: References: <530727E4.5010701@hilboll.de> Message-ID: On Fri, Feb 21, 2014 at 5:37 AM, Robert Kern wrote: > On Fri, Feb 21, 2014 at 10:18 AM, Andreas Hilboll wrote: >> I was wondering if it would be good if users could report bugs without >> having to sign up for a Github account. From a technical viewpoint, >> this should be possible, using a dedicated email account, some >> server-side scripting, and the Github API. >> >> Do you have any feeling for how many bugs do not get reported because a >> user doesn't want to go through the hassle of creating an account just >> to report a bug? >> >> I have the feeling that it must be as simple as possible to report bugs. >> And having to create an account (be it at Github or by signing up for a >> mailing list) is an obstacle to that. >> >> Any thoughts / ideas? 
> I didn't realize that we had fixed all of the bugs that have already > been reported and our volunteers are pining away without any work to > do. ;-) > > More seriously, the problem has always been spam. Removing filters to > lower the barrier for honest bug reports also removes the filters for > much more spam. There are always tradeoffs, and I think we've settled > on a reasonable one. If we were getting no bug reports, that would be > a problem, but we're getting a few every day. That's a healthy amount. It is the first time I read that having bug report every few days is healthy :) Fred From robert.kern at gmail.com Fri Feb 21 11:07:19 2014 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Feb 2014 16:07:19 +0000 Subject: [SciPy-Dev] "Anonymous" bug reporting: useful? In-Reply-To: References: <530727E4.5010701@hilboll.de> Message-ID: On Fri, Feb 21, 2014 at 2:40 PM, Frédéric Bastien wrote: > On Fri, Feb 21, 2014 at 5:37 AM, Robert Kern wrote: >> On Fri, Feb 21, 2014 at 10:18 AM, Andreas Hilboll wrote: >>> I was wondering if it would be good if users could report bugs without >>> having to sign up for a Github account. From a technical viewpoint, >>> this should be possible, using a dedicated email account, some >>> server-side scripting, and the Github API. >>> >>> Do you have any feeling for how many bugs do not get reported because a >>> user doesn't want to go through the hassle of creating an account just >>> to report a bug? >>> >>> I have the feeling that it must be as simple as possible to report bugs. >>> And having to create an account (be it at Github or by signing up for a >>> mailing list) is an obstacle to that. >>> >>> Any thoughts / ideas?
Removing filters to >> lower the barrier for honest bug reports also removes the filters for >> much more spam. There are always tradeoffs, and I think we've settled >> on a reasonable one. If we were getting no bug reports, that would be >> a problem, but we're getting a few every day. That's a healthy amount. > > It is the first time I read that having bug report every few days is healthy :) Bugs are inevitable. Quality bug *reports* are not. :-) -- Robert Kern From nouiz at nouiz.org Fri Feb 21 15:50:35 2014 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Fri, 21 Feb 2014 15:50:35 -0500 Subject: [SciPy-Dev] "Anonymous" bug reporting: useful? In-Reply-To: References: <530727E4.5010701@hilboll.de> Message-ID: On Fri, Feb 21, 2014 at 11:07 AM, Robert Kern wrote: > On Fri, Feb 21, 2014 at 2:40 PM, Frédéric Bastien wrote: >> On Fri, Feb 21, 2014 at 5:37 AM, Robert Kern wrote: >>> On Fri, Feb 21, 2014 at 10:18 AM, Andreas Hilboll wrote: >>>> I was wondering if it would be good if users could report bugs without >>>> having to sign up for a Github account. From a technical viewpoint, >>>> this should be possible, using a dedicated email account, some >>>> server-side scripting, and the Github API. >>>> >>>> Do you have any feeling for how many bugs do not get reported because a >>>> user doesn't want to go through the hassle of creating an account just >>>> to report a bug? >>>> >>>> I have the feeling that it must be as simple as possible to report bugs. >>>> And having to create an account (be it at Github or by signing up for a >>>> mailing list) is an obstacle to that. >>>> >>>> Any thoughts / ideas? >>> >>> I didn't realize that we had fixed all of the bugs that have already >>> been reported and our volunteers are pining away without any work to >>> do. ;-) >>> >>> More seriously, the problem has always been spam.
There are always tradeoffs, and I think we've settled >>> on a reasonable one. If we were getting no bug reports, that would be >>> a problem, but we're getting a few every day. That's a healthy amount. >> >> It is the first time I read that having bug report every few days is healthy :) > > Bugs are inevitable. Quality bug *reports* are not. :-) that is sure! It is just the way I read it, I found that funny. Fred From ndbecker2 at gmail.com Fri Feb 21 16:27:52 2014 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 21 Feb 2014 16:27:52 -0500 Subject: [SciPy-Dev] Maximum length sequence References: Message-ID: Robert Kern wrote: > On Wed, Feb 19, 2014 at 1:37 PM, Neal Becker wrote: >> Eric Larson wrote: >> >>> Maximum length sequences (MLS) are useful in signal processing for finding >>> impulse responses. I can't find a great implementation of MLS in Python, >>> but maybe I've missed one somewhere. Would this be something good to put in >>> scipy.signal, perhaps as scipy.signal.mls? >>> >>> Cheers, >>> Eric >> >> If you want a sequence of gaussian rv, just use numpy.random. > > That does not appear to be what he wants: > > http://en.wikipedia.org/wiki/Maximum_length_sequence > I'm assuming what's wanted here is white noise. To generate random bits of some specified bit width, I use the underlying mersenne twister to generate uniform integer r.v., then cache the bits (32 at a time) and output them with the desired width. Of course this is not identical to what the OP asked, but serves the same purpose. 
A snippet of the C++ code used for this looks like:

template <class Engine>
result_type operator() (Engine& eng) {
    if (count <= width-1) {
        cache = eng();
        count = std::min (std::numeric_limits<typename Engine::result_type>::digits,
                          std::numeric_limits<result_type>::digits);
    }
    result_type bits = cache & mask;
    cache >>= width;
    count -= width;
    return bits;
}

From larson.eric.d at gmail.com Fri Feb 21 16:50:32 2014 From: larson.eric.d at gmail.com (Eric Larson) Date: Fri, 21 Feb 2014 13:50:32 -0800 Subject: [SciPy-Dev] Maximum length sequence In-Reply-To: References: Message-ID: MLS has specific properties that make it especially well suited to finding impulse responses of systems. I appreciate the suggestion of using white noise, but some MLS properties (binary representation, systematic noise cancellation through repetitions, true rather than on average white spectrum) make it a preferred option in applications like acoustics (for speaker impulse responses) and even neuroscience (e.g., for finding auditory brainstem responses). I can try to track down relevant references if you're really interested. Eric On Feb 21, 2014 1:28 PM, "Neal Becker" wrote: > Robert Kern wrote: > > > On Wed, Feb 19, 2014 at 1:37 PM, Neal Becker > wrote: > >> Eric Larson wrote: > >> > >>> Maximum length sequences (MLS) are useful in signal processing for > finding > >>> impulse responses. I can't find a great implementation of MLS in > Python, > >>> but maybe I've missed one somewhere. Would this be something good to > put in > >>> scipy.signal, perhaps as scipy.signal.mls? > >>> > >>> Cheers, > >>> Eric > >> > >> If you want a sequence of gaussian rv, just use numpy.random. > > > > That does not appear to be what he wants: > > > > http://en.wikipedia.org/wiki/Maximum_length_sequence > > > > I'm assuming what's wanted here is white noise. > > To generate random bits of some specified bit width, I use the underlying > mersenne twister to generate uniform integer r.v., then cache the bits (32 > at a > time) and output them with the desired width.
> > Of course this is not identical to what the OP asked, but serves the same > purpose. > > A snippet of the C++ code used for this looks like: > > template <class Engine> > result_type operator() (Engine& eng) { > > > if (count <= width-1) { > cache = eng(); > count = std::min (std::numeric_limits<typename > Engine::result_type>::digits, std::numeric_limits<result_type>::digits); > } > result_type bits = cache & mask; > cache >>= width; > count -= width; > return bits; > } > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yw5aj at virginia.edu Fri Feb 21 17:28:07 2014 From: yw5aj at virginia.edu (Yuxiang Wang) Date: Fri, 21 Feb 2014 17:28:07 -0500 Subject: [SciPy-Dev] Fixing the scipy.optimize.show_options Message-ID: Dear all, I found this function missing the option 'eps' when I checked the L-BFGS-B algorithm. This is what I got. In [1]: from scipy.optimize import show_options In [2]: show_options('minimize', 'l-bfgs-b') L-BFGS-B options: ftol : float The iteration stops when ``(f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol``. gtol : float The iteration will stop when ``max{|proj g_i | i = 1, ..., n} <= gtol`` where ``pg_i`` is the i-th component of the projected gradient. maxcor : int The maximum number of variable metric corrections used to define the limited memory matrix. (The limited memory BFGS method does not store the full hessian but uses this many terms in an approximation to it.) maxiter : int Maximum number of function evaluations.
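For reference, options that do not appear in the listing above (such as 'eps', the step size used for the finite-difference gradient when no analytic gradient is supplied) can still be passed through the options dict of minimize(). A minimal sketch; the test objective and the option values here are arbitrary, chosen only for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# No gradient is supplied, so L-BFGS-B approximates it by finite
# differences; 'eps' (missing from the show_options listing) sets the
# step size used for that approximation.
def rosen(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

res = minimize(rosen, x0=[-1.0, 1.0], method='L-BFGS-B',
               options={'ftol': 1e-12, 'maxiter': 500, 'eps': 1e-8})
print(res.x)  # converges near [1, 1]
```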
-Shawn -- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/ From yw5aj at virginia.edu Fri Feb 21 17:37:29 2014 From: yw5aj at virginia.edu (Yuxiang Wang) Date: Fri, 21 Feb 2014 17:37:29 -0500 Subject: [SciPy-Dev] The scipy.optimize.minimize() documentation: highlight show_options? Message-ID: Dear all, I have a feeling that "show_options" is not highlighted enough in the documentation of scipy.optimize.minimize(), as I personally took quite a while to find it. I now know how to use it, but I was wondering whether anyone agrees with me on the following suggestions: 1) Place the show_options('minimize', method) in the "See also:", so it is more prominent. 2) Display scipy.optimize.show_options(), instead of show_options() because the latter confused me... well, that could also just be me :( 3) This is one that I haven't thought through yet: I don't think most people will read the parameter descriptions all the way to the end and see the "show_options" description. Rather, one tends to start reading the notes about the algorithm that they care about. Could we add a list (only the parameter names) in the notes following each method? It could be simple, like: "Method L-BFGS-B uses the L-BFGS-B algorithm [R98], [R99] for bound constrained minimization. Available options are ftol, gtol, maxcor and maxiter. Please use scipy.optimize.show_options('minimize', 'L-BFGS-B') for details." Please let me know what you think. I may well be wrong, but I'd like to bring this topic up, in case this issue gets overlooked. Thanks!
-Shawn -- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/ From nonhermitian at gmail.com Fri Feb 21 22:50:43 2014 From: nonhermitian at gmail.com (Paul Nation) Date: Sat, 22 Feb 2014 12:50:43 +0900 Subject: [SciPy-Dev] Reordering methods for Sparse matrices Message-ID: I have written two sparse matrix reordering functions in Cython and I am wondering if they would be worthy additions to the scipy.sparse module? These are currently being used in our QuTiP library (qutip.org). The first function finds the Reverse Cuthill-McKee (RCM) ordering [1] of a symmetric sparse matrix, similar to the Matlab 'symrcm' function. This is useful for reducing the bandwidth of the underlying matrix to help eliminate fill-in when using direct or iterative solvers. If the matrix is not symmetric, it looks at A+trans(A). Since the matrix is symmetric, this works for both CSR and CSC matrices. The second is a maximal transversal algorithm based on a breadth-first-search method [2]. This returns a set of row permutations that makes the diagonal zero-free. Since the permutation is not symmetric, this works only on CSC matrices. Like RCM this is useful for preordering in both direct and iterative solvers. [1] E. Cuthill and J. McKee, "Reducing the Bandwidth of Sparse Symmetric Matrices", ACM '69 Proceedings of the 1969 24th national conference, (1969). [2] I. S. Duff, K. Kaya, and B. Ucar, "Design, Implementation, and Analysis of Maximum Transversal Algorithms", ACM Trans. Math. Softw. 38, no. 2, (2011). Best regards, Paul From sturla.molden at gmail.com Sat Feb 22 04:56:37 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 22 Feb 2014 09:56:37 +0000 (UTC) Subject: [SciPy-Dev] "Anonymous" bug reporting: useful?
References: <530727E4.5010701@hilboll.de> Message-ID: <338275545414686993.828676sturla.molden-gmail.com@news.gmane.org> Frédéric Bastien wrote: > It is the first time I read that having bug report every few days is healthy :) > It is. There are bugs in all software because of the human factor. Most automatic bug reports are generated by errors on the user's side. That is a signal to noise problem. Also, one would want to prioritize fixing bugs that are critical or bugs that people actually care about. If it is too easy to report a bug, the important ones will wait forever in queue. Sturla From lists at hilboll.de Sat Feb 22 07:10:54 2014 From: lists at hilboll.de (Andreas Hilboll) Date: Sat, 22 Feb 2014 13:10:54 +0100 Subject: [SciPy-Dev] "Anonymous" bug reporting: useful? In-Reply-To: <338275545414686993.828676sturla.molden-gmail.com@news.gmane.org> References: <530727E4.5010701@hilboll.de> <338275545414686993.828676sturla.molden-gmail.com@news.gmane.org> Message-ID: <530893CE.7030807@hilboll.de> On 22.02.2014 10:56, Sturla Molden wrote: > Frédéric Bastien wrote: > >> It is the first time I read that having bug report every few days is healthy :) >> > > It is. There are bugs in all software because of the human factor. Most > automatic bug reports are generated by errors on the user's side. That is a > signal to noise problem. I agree. I hadn't thought about this yet. > Also, one would want to prioritize fixing bugs > that are critical or bugs that people actually care about. If it is too > easy to report a bug, the important ones will wait forever in queue. Almost all reported bugs are important, because except for corner cases, they usually prevent a user from using Scipy in her/his intended way. I agree that some bugs are more important than others, in that they affect more users more strongly. But I'm not sure that the effort a user has to go through to report a bug correlates with the bug's importance.
Especially people who are not familiar with the open source world could see their prejudices confirmed when they (a) encounter a bug and then (b) have to go through "too much hassle" to actually report it. Just my 2ct. Andreas. From pav at iki.fi Sat Feb 22 07:25:34 2014 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 22 Feb 2014 14:25:34 +0200 Subject: [SciPy-Dev] Reordering methods for Sparse matrices In-Reply-To: References: Message-ID: Hi, 22.02.2014 05:50, Paul Nation kirjoitti: > I have written two sparse matrix reordering functions in Cython and I > am wondering if they would be worthy additions to the scipy.sparse module? > These are currently being used in our QuTiP library (qutip.org). > > The first function finds the Reverse Cuthill-Mckee (RCM) ordering [1] > of a symmetric sparse matrix, similar to the Matlab ?symrcm' function. > This is useful for reducing the bandwidth of the underlying matrix to > help eliminate fill-in when using direct or iterative solvers. > If the matrix is not symmetric, it looks at A+trans(A). Since the > matrix is symmetric, this works for both CSR and CSC matrices. > > The second, is a maximal traversal algorithm based on a breadth-first-search > method [2]. This returns a set of row permutations that makes > the diagonal zero-free. Since the permutation is not symmetric, > this works only on CSC matrices. Like RCM this is useful > for preordering in both direct and iterative solvers. Sparse matrix permutation algorithms would certainly be a welcome addition. As the code is already written, integrating it in scipy.sparse is probably fairly simple. If you have questions, please ask. 
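A sketch of what the first of the two routines discussed above does, using the reverse_cuthill_mckee function as it later became available in scipy.sparse.csgraph (at the time of this thread the code still lived in QuTiP); the 5x5 matrix is a made-up toy example:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Toy symmetric sparse matrix with entries scattered far from the diagonal.
A = sparse.csr_matrix(np.array([
    [1, 0, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 0, 1],
], dtype=float))

def bandwidth(m):
    i, j = m.nonzero()
    return int(np.abs(i - j).max())

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]               # symmetrically permuted copy
print(bandwidth(A), bandwidth(B))  # the RCM ordering shrinks the bandwidth
```

The second routine corresponds to what later landed as scipy.sparse.csgraph.maximum_bipartite_matching.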
-- Pauli Virtanen From njs at pobox.com Sat Feb 22 17:00:47 2014 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 22 Feb 2014 17:00:47 -0500 Subject: [SciPy-Dev] A PEP for adding infix matrix multiply to Python Message-ID: [Apologies for wide distribution -- please direct followups to either the github PR linked below, or else numpy-discussion at scipy.org] After the numpy-discussion thread about np.matrix a week or so back, I got curious and read the old PEPs that attempted to add better matrix/elementwise operators to Python. http://legacy.python.org/dev/peps/pep-0211/ http://legacy.python.org/dev/peps/pep-0225/ And I was a bit surprised -- if I were BDFL I probably would have rejected these PEPs too. One is actually a proposal to make itertools.product into an infix operator, which no-one would consider seriously on its own merits. And the other adds a whole pile of weirdly spelled new operators with no clear idea about what they should do. But it seems to me that at this point, with the benefit of multiple years more experience, we know much better what we want -- basically, just a nice clean infix op for matrix multiplication. And that just asking for this directly, and explaining clearly why we want it, is something that hasn't been tried. So maybe we should try and see what happens. As a starting point for discussion, I wrote a draft. It can be read and commented on here: https://github.com/numpy/numpy/pull/4351 It's important that if we're going to do this at all, we do it right, and that means being able to honestly say that this document represents our consensus when going to python-dev. So if you think you might object please do so now :-) -n -- Nathaniel J. 
Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From ralf.gommers at gmail.com Sun Feb 23 04:38:48 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 23 Feb 2014 10:38:48 +0100 Subject: [SciPy-Dev] 0.13.3 and 0.14.0 releases In-Reply-To: References: Message-ID: On Thu, Jan 16, 2014 at 9:24 PM, Pauli Virtanen wrote: > 15.01.2014 22:40, Ralf Gommers kirjoitti: > > It looks to me like we should do an 0.13.3 bugfix release soon to fix > these > > two issues: > > - another memory leak in ndimage.label: > > https://github.com/scipy/scipy/issues/3148 > > - we broke weave.inline with Visual Studio: > > https://github.com/scipy/scipy/issues/3216 > > > > I propose to make this release within a week. > > > > For the 0.14.0 release I propose to branch this around Feb 23rd. That > gives > > us a month to work through a decent part of the backlog of PRs. (plus I'm > > on holiday until the 15th, so earlier wouldn't work for me). > > > > Does that schedule work for everyone? > > +1 sounds OK to me. > It's almost time to branch 0.14.x. There are lots of open PRs, but only a few left marked for 0.14.0. If there are other PRs that need to go in, please set the milestone and/or comment on them today. My proposal is to branch tomorrow evening (around 10pm GMT), and after that only backport important fixes to 0.14.x. If you have enhancements that you'd like to go in, please help merge the relevant PR by tomorrow. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jenny.stone125 at gmail.com Sun Feb 23 06:34:08 2014 From: jenny.stone125 at gmail.com (Jennifer stone) Date: Sun, 23 Feb 2014 17:04:08 +0530 Subject: [SciPy-Dev] unexpected result by hyp2f1 Message-ID: The hyp2f1 function at present, fails without warning for the following case: c<0 |c|>>|a| and |b| For example: sp.hyp2f1(10,5,-300.5,0.5) >>-6.5184949735e+156 while the answer is -3.8520770815e+32 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pasky at ucw.cz Sun Feb 23 06:44:28 2014 From: pasky at ucw.cz (Petr Baudis) Date: Sun, 23 Feb 2014 12:44:28 +0100 Subject: [SciPy-Dev] 0.13.3 and 0.14.0 releases In-Reply-To: References: Message-ID: <20140223114428.GH19214@machine.or.cz> Hi! On Sun, Feb 23, 2014 at 10:38:48AM +0100, Ralf Gommers wrote: > It's almost time to branch 0.14.x. There are lots of open PRs, but only a > few left marked for 0.14.0. If there are other PRs that need to go in, > please set the milestone and/or comment on them today. > > My proposal is to branch tomorrow evening (around 10pm GMT), and after that > only backport important fixes to 0.14.x. If you have enhancements that > you'd like to go in, please help merge the relevant PR by tomorrow. I'd just like to ask if anyone could take a look at https://github.com/scipy/scipy/pull/3369 as I'll need to depend on that in a soon-to-be-released software package related to function optimization. It's a tiny change code-wise (>90% of the diff is documentation and testcases), but it does change the public API, so I'll completely understand if you decide it needs more thorough review than there's time for. But I don't think it necessarily does, so I would appreciate if someone evaluated it before the deadline. 
Thanks, Petr "Pasky" Baudis From evgeny.burovskiy at gmail.com Sun Feb 23 08:08:09 2014 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Sun, 23 Feb 2014 13:08:09 +0000 Subject: [SciPy-Dev] unexpected result by hyp2f1 In-Reply-To: References: Message-ID: Dear Jennifer, Best file a ticket in the bug tracker github.com/scipy/scipy There are several tickets relating to hyp2f1 already. For example: https://github.com/scipy/scipy/issues/1561 Evgeni On Sun, Feb 23, 2014 at 11:34 AM, Jennifer stone wrote: > The hyp2f1 function at present, fails without warning for the following > case: > c<0 > |c|>>|a| and |b| > > For example: > sp.hyp2f1(10,5,-300.5,0.5) > >>-6.5184949735e+156 > while the answer is > -3.8520770815e+32 > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Feb 23 11:04:52 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 23 Feb 2014 17:04:52 +0100 Subject: [SciPy-Dev] About Google Summer of Code In-Reply-To: <6760436.Dn19K7eRFh@linux-tp-laptop.sysu> References: <6760436.Dn19K7eRFh@linux-tp-laptop.sysu> Message-ID: On Fri, Feb 14, 2014 at 3:45 AM, Richard Tsai wrote: > Hi everyone, I've just noticed that neither NumPy or SciPy is listed on > Python's GSoC page (https://wiki.python.org/moin/SummerOfCode/2014 > ). Is SciPy/NumPy going to apply for it? I'm a student and I want to > participate in GSoC this year. Thanks! > Hi Richard, this is fixed now. Sorry for the delay. I've also updated https://github.com/scipy/scipy/wiki/GSoC-project-ideas a bit. Looking forward to your idea/proposal. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Sun Feb 23 11:06:30 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 23 Feb 2014 17:06:30 +0100 Subject: [SciPy-Dev] guidance In-Reply-To: References: Message-ID: On Mon, Feb 10, 2014 at 12:51 PM, Aditya Shah wrote: > Hi, > I am Aditya Shah and I am currently pursuing computer science at > BITS-Pilani university. I am in the third year and have covered dsa course. > I want to work on scipy for GSOC 2014. Can anyone please guide me on the > process? > Hi Aditya, thanks for your interest and apologies for the slow reply. The guidelines and links with information on how to get started are at https://github.com/scipy/scipy/wiki/GSoC-project-ideas. If you have specific questions please let us know. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Feb 23 16:39:55 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 23 Feb 2014 22:39:55 +0100 Subject: [SciPy-Dev] request for review: access SuperLU L and U factors Message-ID: Hi, If there's an expert/user of SuperLU that would be interested to review this PR that would be useful: https://github.com/scipy/scipy/pull/3375. The PR adds access to the L and U factors that were previously hidden. This PR could still go into 0.14.x. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahaneshantanu at gmail.com Tue Feb 25 08:11:23 2014 From: shahaneshantanu at gmail.com (SHANTANU SHAHANE) Date: Tue, 25 Feb 2014 18:41:23 +0530 Subject: [SciPy-Dev] GSoC 2014 Scipy Message-ID: Hello, I am a pre-final year engineering student. I am interested in the project of "improving interpolation capabilities" as a part of GSoC 2014. I have experience of python programming and also, I have studied basic B-spline and other interpolation schemes. 
I would be grateful if someone could guide me as to how to proceed further, e.g. whether I need to read some references or code some small problems. Thanks in advance. Regards: Shantanu Shahane, Fourth Year Undergraduate, Dept. of Mechanical Engineering, Indian Institute of Technology Bombay, Powai, Mumbai-400076 (India). Mobile No.: 9967330927, 9370029097 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Tue Feb 25 08:57:11 2014 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 25 Feb 2014 14:57:11 +0100 Subject: [SciPy-Dev] ANN: SfePy 2014.1 Message-ID: <530CA137.8090606@ntc.zcu.cz> I am pleased to announce release 2014.1 of SfePy. Description ----------- SfePy (simple finite elements in Python) is a software for solving systems of coupled partial differential equations by the finite element method. The code is based on NumPy and SciPy packages. It is distributed under the new BSD license. Home page: http://sfepy.org Mailing list: http://groups.google.com/group/sfepy-devel Git (source) repository, issue tracker, wiki: http://github.com/sfepy Highlights of this release -------------------------- - sfepy.fem was split to separate FEM-specific and general modules - lower memory usage by creating active DOF connectivities directly from field connectivities - new handling of field and variable shapes - clean up: many obsolete modules were removed, all module names follow naming conventions For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical). Best regards, Robert Cimrman and Contributors (*) (*) Contributors to this release (alphabetical order): Vladimír Lukeš, Matyáš
Novák, Jaroslav Vondřejc From ndel314 at gmail.com Tue Feb 25 10:16:33 2014 From: ndel314 at gmail.com (Nico Del Piano) Date: Tue, 25 Feb 2014 12:16:33 -0300 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation Message-ID: Hi all, I am Nicolas Del Piano, and currently studying the last year of computer science career. I was searching for a GSoC project that involves mathematical concepts, and I found one very interesting. I have experience on python programming and I studied calculus, so I have a background of the problem context. I am interested on the implementation and testing of Levenberg-Marquardt / trust region nonlinear least squares algorithm. Here are some useful links, that I have searched to having a reference about the problem: http://ananth.in/docs/lmtut.pdf (Introduction) www.cs.nyu.edu/~roweis/notes/lm.pdf (Optimization) http://scribblethink.org/Computer/Javanumeric/index.html (java implementation) I would be glad if there is an interested mentor to discuss the issues, talk about the problem and its possible implementations, and provide me some support and guide. Thanks! Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Tue Feb 25 10:41:58 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Tue, 25 Feb 2014 15:41:58 +0000 (UTC) Subject: [SciPy-Dev] Levenberg-Marquardt Implementation References: Message-ID: <2074852030415035272.616873sturla.molden-gmail.com@news.gmane.org> scipy.optimize.leastsq uses a trust-region Levenberg-Marquardt solver from MINPACK. I think one that uses LAPACK subroutine DGELS could be made more efficient. MINPACK has an unoptimized QR factorization and is also not re-entrant (global variables). But the numerical quality and stability of the MINPACK version is undisputed. Sturla Nico Del Piano wrote: > Hi all, > > I am Nicolas Del Piano, and currently studying the last year of computer > science career.
I was searching for a GSoC project that involves > mathematical concepts, and I found one very interesting. I have > experience on python programming and I studied calculus, so I have a > background of the problem context. > > I am interested on the implementation and testing of Levenberg-Marquardt > / trust region nonlinear least squares algorithm. > > Here are some useful links, that I have searched to having a reference about the problem: > > http://ananth.in/docs/lmtut.pdf (Introduction) > www.cs.nyu.edu/~roweis/notes/lm.pdf (Optimization) > http://scribblethink.org/Computer/Javanumeric/index.html > (java implementation) > > I would be glad if there is an interested mentor to discuss the issues, > talk about the problem and its possible implementations, and provide me > some support and guide. > > Thanks! > > Regards. > > _______________________________________________ SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From benny.malengier at gmail.com Tue Feb 25 10:45:07 2014 From: benny.malengier at gmail.com (Benny Malengier) Date: Tue, 25 Feb 2014 16:45:07 +0100 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation In-Reply-To: References: Message-ID: This is present already, See http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html which wraps: http://www.math.utah.edu/software/minpack/minpack/lmder.html Benny 2014-02-25 16:16 GMT+01:00 Nico Del Piano : > Hi all, > > I am Nicolas Del Piano, and currently studying the last year of computer > science career.
> > I am interested on the implementation and testing of Levenberg-Marquardt / > trust region nonlinear least squares algorithm. > > Here are some useful links, that I have searched to having a reference > about the problem: > > http://ananth.in/docs/lmtut.pdf (Introduction) > www.cs.nyu.edu/~roweis/notes/lm.pdf (Optimization) > http://scribblethink.org/Computer/Javanumeric/index.html (java > implementation) > > I would be glad if there is an interested mentor to discuss the issues, > talk about the problem and its possible implementations, and provide me > some support and guide. > > Thanks! > > Regards. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Tue Feb 25 13:25:38 2014 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 25 Feb 2014 20:25:38 +0200 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation In-Reply-To: References: Message-ID: 25.02.2014 17:45, Benny Malengier kirjoitti: > This is present already, See > http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html > > which wraps: http://www.math.utah.edu/software/minpack/minpack > /lmder.html The current leastsq implementation does not support sparse Jacobians or constraints. MINPACK does dense QR factorizations, and this approach doesn't work well for problems where the number of variables is too big. This was one of our GSoC topic ideas [1] --- if you have suggestions on how to improve these, please speak up.
[1] https://github.com/scipy/scipy/wiki/GSoC-project-ideas -- Pauli Virtanen From benny.malengier at gmail.com Tue Feb 25 16:04:19 2014 From: benny.malengier at gmail.com (Benny Malengier) Date: Tue, 25 Feb 2014 22:04:19 +0100 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation In-Reply-To: References: Message-ID: 2014-02-25 19:25 GMT+01:00 Pauli Virtanen : > 25.02.2014 17:45, Benny Malengier kirjoitti: > > This is present already, See > > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html > > > > which wraps: http://www.math.utah.edu/software/minpack/minpack > > /lmder.html > > The current leastsq implementation does not support sparse Jacobians > or constraints. > Problems with sparse Jacobians could be a nice improvement, have not encountered such problems yet, my outputs being typically measurements depending on all parameters. As to constraints, typically I use: 1. if constrained on [c,inf], rescale the parameter to p=c+exp(q), with q new parameter 2. if constrained on an interval, rescale the parameter to p=arctan(q) with some offset and scaling for the interval size and start. Results of this are typically good. One just has to keep in mind that the covariance is then on the transformed parameters. So doing on the optimized variables a run to determine the covariance of the original parameters is a good idea. Doing such transformation in your own code for full control is probably best, but allowing transformations in the python layer is a nice idea. One can alternatively just return a continuous penalty which scales with the distance from the required interval, going to error of 1e15 or so out of the required interval, and then 1e15 + dist_to_interval * 1e15. Also then, constraining will normally work well. As to LM, it's actually a simple algorithm. I used to have a C implementation before using python on minpack. Adapting lmder or a cython implementation seems not too hard to add sparse.
The problem would be a good sparse implementation in fortran, not the LM
algorithm. No experience on that. Doing LM in cython should be fast too.

Benny

> MINPACK does dense QR factorizations, and this approach doesn't work
> well for problems where the number of variables is too big. This was one
> of our GSoC topic ideas [1] --- if you have suggestions on how to
> improve these, please speak up.
>
> [1] https://github.com/scipy/scipy/wiki/GSoC-project-ideas
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From evgeny.burovskiy at gmail.com Tue Feb 25 16:25:05 2014
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Tue, 25 Feb 2014 21:25:05 +0000
Subject: [SciPy-Dev] Levenberg-Marquardt Implementation
In-Reply-To: References: Message-ID:

FWIW, one other possibility for constrained minimization would be to
wrap BOBYQA by Powell.

(Disclaimer: All I know about it is from skimming this paper
www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf
and talking to people in PyData London last weekend.)

As far as I understand,

* it's a state-of-the-art minimizer
* the original Fortran code is in the public domain
* a wrapper is available in OpenOpt, which might be usable for scipy as
well (or we may just point users to it).

As a side note, we may also want to look into NEWUOA / LINCOA:
http://mat.uc.pt/~zhang/software.html#powell_software

From a brief reading of the author's comments, it seems that neither of
these deals with sparse problems though.
FWIW, Evgeni On Tue, Feb 25, 2014 at 9:04 PM, Benny Malengier wrote: > > > > 2014-02-25 19:25 GMT+01:00 Pauli Virtanen : > >> 25.02.2014 17:45, Benny Malengier kirjoitti: >> > This is present already, See >> > >> > http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html >> > >> > which wraps: http://www.math.utah.edu/software/minpack/minpack >> > /lmder.html >> >> The current leastsq implementation does not support sparse Jacobians >> or constraints. > > > Problems with sparse Jacobians could be nice improvement, have not > encountered such problems yet, my outputs being typically measurements > depending on all parameters. > > As to constrainst, typically I use: > 1. if contstrained on [c,inf], rescale the parameter to p=c+exp(q), with q > new parameter > 2. if constrained on an interval, rescale the parameter to p=arctan(q) with > some offset and scaling for the interval size and start. > > Results of this are typically good.. One just has to keep in mind that the > covariance is then on the transformed parameters. So doing on the optimized > variables a run to determine the covariance of the original parameters is a > good idea. Doing such transformation in your own code for full control is > probably best, but allowing transformations in the python layer is a nice > idea. > > One can alternatively just return a continuous penalty which scales with the > distance from the required interval, going to error of 1e15 or so out of the > requried interval, and then 1e15 + dist_to_interval e15. Also then, > constraining will happen normally good. > > As to LM, it's actually a simple algorithm. I used to have a C > implementation before using python on minpack. Adapting lmder.or a cython > implemetation seems not too hard to add sparse. The problem would be a good > sparse implementation in fortran, not the LM algorithm. No experience on > that. Doing LM in cython should be fast too. 
> > Benny > >> >> MINPACK does dense QR factorizations, and this approach doesn't work >> well for problems where the number of variables is too big. This was one >> of our the GSoC topic ideas [1] --- if you have suggestions on how to >> improve these, please speak up. >> >> [1] https://github.com/scipy/scipy/wiki/GSoC-project-ideas >> >> -- >> Pauli Virtanen >> >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Tue Feb 25 16:37:15 2014 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Feb 2014 21:37:15 +0000 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation In-Reply-To: References: Message-ID: On Tue, Feb 25, 2014 at 9:25 PM, Evgeni Burovski wrote: > FWIW, One other possiblity for constrained minimization would be to > wrap BOBYQA by Powell. > > (Disclaimer: All I know about it is from skimming this paper > www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf? > and talking to people in PyData London last weekend.) > > As far as I understand, > > * it's a state of the art minimizer > * the original Fortran code is in public domain I see no indication of this. -- Robert Kern From argriffi at ncsu.edu Tue Feb 25 16:44:57 2014 From: argriffi at ncsu.edu (alex) Date: Tue, 25 Feb 2014 16:44:57 -0500 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation In-Reply-To: References: Message-ID: On Tue, Feb 25, 2014 at 4:37 PM, Robert Kern wrote: > On Tue, Feb 25, 2014 at 9:25 PM, Evgeni Burovski > wrote: >> FWIW, One other possiblity for constrained minimization would be to >> wrap BOBYQA by Powell. 
>> >> (Disclaimer: All I know about it is from skimming this paper >> www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf >> and talking to people in PyData London last weekend.) >> >> As far as I understand, >> >> * it's a state of the art minimizer >> * the original Fortran code is in public domain > > I see no indication of this. The readme says "There are no restrictions on or charges for the use of the software." which may or may not be phrased using enough legal jargon to allow corporations to do whatever they want with it. From benny.malengier at gmail.com Tue Feb 25 16:47:10 2014 From: benny.malengier at gmail.com (Benny Malengier) Date: Tue, 25 Feb 2014 22:47:10 +0100 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation In-Reply-To: References: Message-ID: 2014-02-25 22:37 GMT+01:00 Robert Kern : > On Tue, Feb 25, 2014 at 9:25 PM, Evgeni Burovski > wrote: > > FWIW, One other possiblity for constrained minimization would be to > > wrap BOBYQA by Powell. > > > > (Disclaimer: All I know about it is from skimming this paper > > www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf? > > and talking to people in PyData London last weekend.) > > > > As far as I understand, > > > > * it's a state of the art minimizer > > * the original Fortran code is in public domain > > I see no indication of this. > :-) Apart from that, they are not LM, so if added, should be an extra name in optimize. Looking at it fast, I don't expect the benefits to be that great to warrant moving there myself. Benny > -- > Robert Kern > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From evgeny.burovskiy at gmail.com Tue Feb 25 16:48:27 2014 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Tue, 25 Feb 2014 21:48:27 +0000 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation In-Reply-To: References: Message-ID: On Tue, Feb 25, 2014 at 9:37 PM, Robert Kern wrote: > On Tue, Feb 25, 2014 at 9:25 PM, Evgeni Burovski > wrote: >> FWIW, One other possiblity for constrained minimization would be to >> wrap BOBYQA by Powell. >> >> (Disclaimer: All I know about it is from skimming this paper >> www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf? >> and talking to people in PyData London last weekend.) >> >> As far as I understand, >> >> * it's a state of the art minimizer >> * the original Fortran code is in public domain > > I see no indication of this. The very last paragraph of readme.txt in bobyqa.zip from http://mat.uc.pt/~zhang/software.html#bobyqa """ There are no restrictions on or charges for the use of the software. I hope that the time and effort I have spent on developing the package will be helpful to much research and to many applications. """ IANAL of course :-). At any rate, turns out there's a ticket in scipy tracker for this: https://github.com/scipy/scipy/issues/1477 Evgeni > > -- > Robert Kern > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From pav at iki.fi Tue Feb 25 17:13:47 2014 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 26 Feb 2014 00:13:47 +0200 Subject: [SciPy-Dev] Levenberg-Marquardt Implementation In-Reply-To: References: Message-ID: 25.02.2014 23:04, Benny Malengier kirjoitti: [clip] > As to constrainst, typically I use: > 1. if contstrained on [c,inf], rescale the parameter to p=c+exp(q), with q > new parameter > 2. if constrained on an interval, rescale the parameter to p=arctan(q) with > some offset and scaling for the interval size and start. 
Most optimization toolboxes I've seen use projection methods to deal
with boundaries in LM.

[clip]
> As to LM, it's actually a simple algorithm. I used to have a C
> implementation before using python on minpack. Adapting lmder or a cython
> implementation seems not too hard to add sparse. The problem would be a good
> sparse implementation in fortran, not the LM algorithm. No experience on
> that. Doing LM in cython should be fast too.

There's likely no real need to write it in a low-level language, as most
of the algorithm should decompose to some matrix algebra. If it doesn't,
then writing a sparse version wouldn't be possible...

Porting LMDER to Python has been done (several times), but as such, it
doesn't have much advantage over MINPACK.

--
Pauli Virtanen

From ralf.gommers at gmail.com Tue Feb 25 18:04:39 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 26 Feb 2014 00:04:39 +0100
Subject: [SciPy-Dev] 0.13.3 and 0.14.0 releases
In-Reply-To: References: Message-ID:

On Sun, Feb 23, 2014 at 10:38 AM, Ralf Gommers wrote:
>
> On Thu, Jan 16, 2014 at 9:24 PM, Pauli Virtanen wrote:
>
>> 15.01.2014 22:40, Ralf Gommers kirjoitti:
>> > It looks to me like we should do a 0.13.3 bugfix release soon to fix these
>> > two issues:
>> > - another memory leak in ndimage.label:
>> > https://github.com/scipy/scipy/issues/3148
>> > - we broke weave.inline with Visual Studio:
>> > https://github.com/scipy/scipy/issues/3216
>> >
>> > I propose to make this release within a week.
>> >
>> > For the 0.14.0 release I propose to branch this around Feb 23rd. That gives
>> > us a month to work through a decent part of the backlog of PRs. (plus I'm
>> > on holiday until the 15th, so earlier wouldn't work for me).
>> >
>> > Does that schedule work for everyone?
>>
>> +1 sounds OK to me.
>>
>
> It's almost time to branch 0.14.x. There are lots of open PRs, but only a
> few left marked for 0.14.0.
If there are other PRs that need to go in,
> please set the milestone and/or comment on them today.
>
> My proposal is to branch tomorrow evening (around 10pm GMT), and after
> that only backport important fixes to 0.14.x. If you have enhancements that
> you'd like to go in, please help merge the relevant PR by tomorrow.
>

Update: the issues in scipy.sparse were a little too many / severe (see the
many already-merged PRs and https://github.com/scipy/scipy/issues/3330), so
we had to postpone the branching. There are some more things to be fixed
for numpy 1.5.x and 1.6.x compat. Other than that we're good to go.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Tue Feb 25 18:27:02 2014
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 25 Feb 2014 23:27:02 +0000
Subject: [SciPy-Dev] Levenberg-Marquardt Implementation
In-Reply-To: References: Message-ID:

On Tue, Feb 25, 2014 at 9:48 PM, Evgeni Burovski wrote:
> On Tue, Feb 25, 2014 at 9:37 PM, Robert Kern wrote:
>> On Tue, Feb 25, 2014 at 9:25 PM, Evgeni Burovski
>> wrote:
>>> FWIW, one other possibility for constrained minimization would be to
>>> wrap BOBYQA by Powell.
>>>
>>> (Disclaimer: All I know about it is from skimming this paper
>>> www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf
>>> and talking to people in PyData London last weekend.)
>>>
>>> As far as I understand,
>>>
>>> * it's a state-of-the-art minimizer
>>> * the original Fortran code is in the public domain
>>
>> I see no indication of this.
>
> The very last paragraph of readme.txt in bobyqa.zip from
> http://mat.uc.pt/~zhang/software.html#bobyqa
> """
> There are no
> restrictions on or charges for the use of the software. I hope that the time
> and effort I have spent on developing the package will be helpful to much
> research and to many applications.
> """

Well, that's certainly not public domain.
It *might* be a license grant that is intended to be maximally free, but
I wouldn't risk it. "Use" is not really sufficient. It might refer to
simply running the program unmodified, not the additional rights to
modify and redistribute that scipy's BSD license explicitly calls for
(in addition to, and thus separate and distinct from, "use"). When in
doubt, ask the original author.

--
Robert Kern

From aaaagrawal at gmail.com Tue Feb 25 23:49:15 2014
From: aaaagrawal at gmail.com (Ankit Agrawal)
Date: Wed, 26 Feb 2014 10:19:15 +0530
Subject: [SciPy-Dev] GSoC 2014 : Discrete Wavelet Transforms in Scipy
Message-ID:

Hi everyone,

    I am Ankit Agrawal, a 4th year student enrolled in a Dual Degree
program (Bachelors + Masters) in Electrical Engineering at IIT Bombay. My
Masters specialization is in Communication and Signal Processing, with a
focus on Machine Learning and Computer Vision.

    I would like to work on integrating PyWavelets in scipy.signal and
then adding more features as mentioned on the ideas page. I participated
in GSoC 2013 with scikit-image, where I implemented some Feature Detectors
like FAST and Censure(STAR), and Binary Feature Descriptors like BRIEF,
ORB and FREAK (in progress). The relevant courses that I have taken in the
past are Image Processing, Machine Learning, Computer Vision, Speech
Processing, NLP, Wavelets and Filter Banks*, Probabilistic Graphical
Models*, Wireless & Mobile Communications* etc. (* - taken this semester).
My open source contributions can be seen here.

    I will get started by fixing some small issues in scipy by this
weekend. I haven't used PyWavelets before, but my group members and I are
going to use it in the coming weeks in an application assignment for the
Wavelets course, which I will take as an opportunity to look into its
codebase. I would also like to discuss features to be implemented and some
related tickets in detail once I start writing my proposal next week.
If there are other things that I should take a look at, please let me
know. Thanks.

Regards,
Ankit Agrawal,
Communication and Signal Processing,
IIT Bombay.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pelson.pub at gmail.com Wed Feb 26 06:23:22 2014
From: pelson.pub at gmail.com (Phil Elson)
Date: Wed, 26 Feb 2014 11:23:22 +0000
Subject: [SciPy-Dev] Qhull Delaunay triangulation "equations" attribute
In-Reply-To: References: Message-ID:

Thanks Pauli,

Based on what you said, I've simply used the Delaunay.lift_points method
to take my vertices to the paraboloid, though I arbitrarily picked a
paraboloid of scale 1 and shift of 0.

Is it worth submitting a PR to add a static method for the creation of a
Delaunay instance from 2 orthogonal 1D coordinates (I've only implemented
it for the 2D case) in this way? I've found that for reasonably large
regular grids (800, 1200), manually constructing the triangulation can
cut ~25s from the ~31s overall execution time.

Additionally, I've been using LinearNDInterpolator, which I would like to
be able to construct with an already computed triangulation instance;
would there be interest in me submitting a PR for that also?

Thanks,

Phil

On 20 February 2014 17:18, Pauli Virtanen wrote:

> 20.02.2014 16:00, Phil Elson kirjoitti:
> > I'm trying to manually construct a Delaunay triangulation for an orthogonal
> > 2d grid as described in
> > http://stackoverflow.com/questions/21888546 and
> > wonder if anybody can help provide some interpretation of the
> > "equations" values of a scipy.spatial.Delaunay instance.
> > Essentially I'm working off the premise that it is possible to construct a
> > Delaunay triangulation from a regular grid without going through the
> > expensive triangulation stage, does anybody know if that is true or not?
> > Yes, it should be possible to construct the equations manually. > > For Delaunay, "equations" contains the hyperplane equation defining the > convex hull facets in ndim+1 dimensions corresponding to the simplices > of the triangulation. > > You get the ndim+1 dim coordinates for each simplex from the ndim > coordinates by adding an additional last coordinate to the vertices of > the simplices. The routine Delaunay.lift_points maps points in ndim dims > onto the paraboloid in ndim+1. > > The hyperplane equations should be constructed for the so transformed > coordinates, in the form > > sum([equations[j,k]*x[k] for k in range(ndim+1)]) > + > equations[j,ndim+1] > == > 0 > > Here, x is the coordinate "lifted" to ndim+1 dims. > > Geometrically, equations[j,:ndim+1] contains the normal vector of the > facet j, and equations[j,ndim+1] the offset scalar. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at hilboll.de Wed Feb 26 06:45:16 2014 From: lists at hilboll.de (Andreas Hilboll) Date: Wed, 26 Feb 2014 12:45:16 +0100 Subject: [SciPy-Dev] Qhull Delaunay triangulation "equations" attribute In-Reply-To: References: Message-ID: <530DD3CC.4040806@hilboll.de> Phil, not directly answering your question, the upcoming 0.14 release will include a new class RegularGridInterpolator, which performs efficient interpolation on rectangular, possibly unevenly spaced, grids in arbitrary dimensions. The commit is https://github.com/scipy/scipy/commit/a90dc2804da21ba4c48c2615facd0ac5848ebe59. Cheers, Andreas. On 26.02.2014 12:23, Phil Elson wrote: > Thanks Pauli, > > Based on what you said, I've simply used the Delaunay.lift_points method > to take my vertices to the paraboloid. Though I arbitrarily picked a > paraboloid of scale 1 and shift of 0. 
> > Is it worth submitting a PR to add a static method for the creation of a > Delaunay instance from 2 orthogonal 1D coordinates (I've only > implemented it for the 2D case) in this way? I've found that for > reasonably large regular grids (800, 1200), manually constructing the > triangulation can cut ~25s from the ~31s overall execution time. > > Additionally, I've been using LinearNDInterplator which I would like to > be able to construct with an already computed triangulation instance, > would there be interest in me submitting a PR for that also? > > Thanks, > > Phil > > > > On 20 February 2014 17:18, Pauli Virtanen > wrote: > > 20.02.2014 16:00, Phil Elson kirjoitti: > > I'm trying to manually construct a Delaunay triangulation for an > orthogonal > > 2d grid as described in > > > http://stackoverflow.com/questions/21888546and > > wonder if anybody can help provide some interpretation of the > > "equations" values of a scipy.spatial.Delaunay instance. > > Essentially I'm working off the premise that it is possible to > construct a > > Delaunay triangulation from a regular grid without going through the > > expensive triangulation stage, does anybody know if that is true > or not? > > Yes, it should be possible to construct the equations manually. > > For Delaunay, "equations" contains the hyperplane equation defining the > convex hull facets in ndim+1 dimensions corresponding to the simplices > of the triangulation. > > You get the ndim+1 dim coordinates for each simplex from the ndim > coordinates by adding an additional last coordinate to the vertices of > the simplices. The routine Delaunay.lift_points maps points in ndim dims > onto the paraboloid in ndim+1. > > The hyperplane equations should be constructed for the so transformed > coordinates, in the form > > sum([equations[j,k]*x[k] for k in range(ndim+1)]) > + > equations[j,ndim+1] > == > 0 > > Here, x is the coordinate "lifted" to ndim+1 dims. 
> Geometrically, equations[j,:ndim+1] contains the normal vector of the
> facet j, and equations[j,ndim+1] the offset scalar.
>
> --
> Pauli Virtanen
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

--
-- Andreas.

From richard9404 at gmail.com Wed Feb 26 07:17:49 2014
From: richard9404 at gmail.com (Richard Tsai)
Date: Wed, 26 Feb 2014 20:17:49 +0800
Subject: [SciPy-Dev] GSoC: Cython-rewrite and improvement for scipy.cluster
Message-ID:

Hi all,

I am Richard Tsai, currently a second-year undergraduate student in
Computer Science at Sun Yat-sen University. I wish to take part in this
year's GSoC. I'm learning machine learning with scipy/sklearn now, and I
have contributed some code to SciPy since last term.

I've read Ralf's [Roadmap to Scipy 1.0][1] and I'm interested in the
`cluster` part. I want to help finish the cython-rewrite work and make
some improvements to it as my GSoC project. I noticed that there's a
`_vq_rewrite.pyx` in `scipy/cluster`, but I think it still needs further
work.

I want to start with some issues related to `cluster` as a warm-up, and
then try to re-implement the `cluster.vq` module in cython first and try
to do some optimizations. I'm familiar with it since I did a little SNS
text-mining research with it with my classmates in a contest.

As for the `cluster.hierarchy` module, I do not know a lot about
hierarchical clustering, for I haven't used it in practice. I may start
with reading some papers and writing some examples for the documentation.
Then I will start the cython-rewrite for the `hierarchy` module.

Finally, I plan to make some enhancements for the package. Maybe
automatically determining the number of clusters with the Elbow Method?
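As a rough, hypothetical sketch of that elbow idea (not an existing scipy.cluster API): given the within-cluster distortions for k = 1, 2, ..., one simple heuristic picks the k where the decrease in distortion slows down most sharply:

```python
def pick_k_elbow(distortions):
    # distortions[i] = within-cluster distortion (e.g. SSE) for k = i + 1;
    # pick the k with the largest second difference, i.e. where the
    # curve bends most ("the elbow").
    best_k, best_gain = 1, float("-inf")
    for i in range(1, len(distortions) - 1):
        gain = ((distortions[i - 1] - distortions[i])
                - (distortions[i] - distortions[i + 1]))
        if gain > best_gain:
            best_k, best_gain = i + 1, gain
    return best_k
```

In practice the distortions would come from running something like kmeans repeatedly with increasing k; this second-difference rule is only one of several ways to locate the elbow.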
I haven't had a detailed plan yet. Since this idea is not listed on the ideas page, I don't know if it is suitable to be a GSoC project. If you have any suggestions, please let me know. I'd appreciate it if you can provide any guidance/opinions/suggestions. Regards, Richard [1]: https://github.com/rgommers/scipy/blob/roadmap/doc/ROADMAP.rst.txt -------------- next part -------------- An HTML attachment was scrubbed... URL: From pelson.pub at gmail.com Wed Feb 26 08:22:41 2014 From: pelson.pub at gmail.com (Phil Elson) Date: Wed, 26 Feb 2014 13:22:41 +0000 Subject: [SciPy-Dev] Qhull Delaunay triangulation "equations" attribute In-Reply-To: <530DD3CC.4040806@hilboll.de> References: <530DD3CC.4040806@hilboll.de> Message-ID: Great news, thanks Andreas - this will definitely be useful in the future. In the meantime I'm writing software which has to target v0.10 upwards, so I've implemented the regular triangulation approach, which I'm reasonably happy with (good results and reasonable performance, though probably a lot slower than a direct bilinear interpolation). On 26 February 2014 11:45, Andreas Hilboll wrote: > Phil, > > not directly answering your question, the upcoming 0.14 release will > include a new class RegularGridInterpolator, which performs efficient > interpolation on rectangular, possibly unevenly spaced, grids in > arbitrary dimensions. The commit is > > https://github.com/scipy/scipy/commit/a90dc2804da21ba4c48c2615facd0ac5848ebe59 > . > > Cheers, Andreas. > > > On 26.02.2014 12:23, Phil Elson wrote: > > Thanks Pauli, > > > > Based on what you said, I've simply used the Delaunay.lift_points method > > to take my vertices to the paraboloid. Though I arbitrarily picked a > > paraboloid of scale 1 and shift of 0. > > > > Is it worth submitting a PR to add a static method for the creation of a > > Delaunay instance from 2 orthogonal 1D coordinates (I've only > > implemented it for the 2D case) in this way? 
I've found that for > > reasonably large regular grids (800, 1200), manually constructing the > > triangulation can cut ~25s from the ~31s overall execution time. > > > > Additionally, I've been using LinearNDInterplator which I would like to > > be able to construct with an already computed triangulation instance, > > would there be interest in me submitting a PR for that also? > > > > Thanks, > > > > Phil > > > > > > > > On 20 February 2014 17:18, Pauli Virtanen > > wrote: > > > > 20.02.2014 16:00, Phil Elson kirjoitti: > > > I'm trying to manually construct a Delaunay triangulation for an > > orthogonal > > > 2d grid as described in > > > > > http://stackoverflow.com/questions/21888546< > http://stackoverflow.com/questions/21888546/regularly-spaced-orthogonal-grid-delaunay-triangulation-computing-the-paraboloi > >and > > > wonder if anybody can help provide some interpretation of the > > > "equations" values of a scipy.spatial.Delaunay instance. > > > Essentially I'm working off the premise that it is possible to > > construct a > > > Delaunay triangulation from a regular grid without going through > the > > > expensive triangulation stage, does anybody know if that is true > > or not? > > > > Yes, it should be possible to construct the equations manually. > > > > For Delaunay, "equations" contains the hyperplane equation defining > the > > convex hull facets in ndim+1 dimensions corresponding to the > simplices > > of the triangulation. > > > > You get the ndim+1 dim coordinates for each simplex from the ndim > > coordinates by adding an additional last coordinate to the vertices > of > > the simplices. The routine Delaunay.lift_points maps points in ndim > dims > > onto the paraboloid in ndim+1. > > > > The hyperplane equations should be constructed for the so transformed > > coordinates, in the form > > > > sum([equations[j,k]*x[k] for k in range(ndim+1)]) > > + > > equations[j,ndim+1] > > == > > 0 > > > > Here, x is the coordinate "lifted" to ndim+1 dims. 
> > > > Geometrically, equations[j,:ndim+1] contains the normal vector of the > > facet j, and equations[j,ndim+1] the offset scalar. > > > > -- > > Pauli Virtanen > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > > > _______________________________________________ > > SciPy-Dev mailing list > > SciPy-Dev at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-dev > > > > > -- > -- Andreas. > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Feb 26 15:56:11 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 26 Feb 2014 21:56:11 +0100 Subject: [SciPy-Dev] GSoC 2014 Scipy In-Reply-To: References: Message-ID: Hi Shantanu, On Tue, Feb 25, 2014 at 2:11 PM, SHANTANU SHAHANE wrote: > Hello, > I am a pre-final year engineering student. I am interested in the project > of "improving interpolation capabilities" as a part of GSoC 2014. I have > experience of python programming and also, I have studied basic B-spline > and other interpolation schemes. > I would be grateful if someone could guide me as how to proceed further as > in if I need to read some reference or to code some small problem etc. > I'm sure you've seen https://github.com/scipy/scipy/wiki/GSoC-project-ideas, which contains some advice and requirements. Having one PR accepted is a hard requirement, so I suggest to start with that. It will give you a good idea of the workflow. Please don't leave it to the last moment, because usually some rework is needed before the PR can be merged. 
In principle you can make whatever PR you want (bugfix, enhancement), but
I'd advise starting with one labeled "easy-fix" (
https://github.com/scipy/scipy/issues?labels=easy-fix&milestone=&page=1&state=open
).

Regarding the interpolation project, Pauli or Evgeni should be able to
give you some details; they've been working on that recently.

Cheers,
Ralf

> Thanks in advance.
>
> Regards:
> Shantanu Shahane,
> Fourth Year Undergraduate,
> Dept. of Mechanical Engineering,
> Indian Institute of Technology Bombay,
> Powai, Mumbai-400076 (India).
> Mobile No.: 9967330927, 9370029097
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From evgeny.burovskiy at gmail.com Wed Feb 26 16:56:47 2014
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Wed, 26 Feb 2014 21:56:47 +0000
Subject: [SciPy-Dev] GSoC 2014 Scipy
In-Reply-To: References: Message-ID:

The lowest-hanging fruit in interpolate is probably
https://github.com/scipy/scipy/issues/2883 (also
https://github.com/scipy/scipy/pull/2884).
Addressing https://github.com/scipy/scipy/issues/3261 should be doable, I
think, depending on your math background / familiarity with rather
low-level c/cython coding (which is required for doing things in
interpolate anyway). Might be a bit too much for a first PR though.

Adding derivatives to a barycentric interpolator,
https://github.com/scipy/scipy/issues/2725, can be worth a shot. From a
quick glance over the original paper it seems that at least the first
derivative is relatively straightforward (while higher derivatives do get
messy).

For B-splines, have a look at https://github.com/scipy/scipy/issues/1408
My recent try, https://github.com/scipy/scipy/pull/3174, is not very
useful really: the tck format really needs to be set in stone before
coding anything.

On Feb 26, 2014 8:56 PM, "Ralf Gommers" wrote:

> Hi Shantanu,
>
> On Tue, Feb 25, 2014 at 2:11 PM, SHANTANU SHAHANE <
> shahaneshantanu at gmail.com> wrote:
>
>> Hello,
>> I am a pre-final year engineering student. I am interested in the project
>> of "improving interpolation capabilities" as a part of GSoC 2014. I have
>> experience of python programming and also, I have studied basic B-spline
>> and other interpolation schemes.
>> I would be grateful if someone could guide me as how to proceed further
>> as in if I need to read some reference or to code some small problem etc.
>>
> I'm sure you've seen
> https://github.com/scipy/scipy/wiki/GSoC-project-ideas, which contains
> some advice and requirements. Having one PR accepted is a hard requirement,
> so I suggest to start with that. It will give you a good idea of the
> workflow. Please don't leave it to the last moment, because usually some
> rework is needed before the PR can be merged.
>
> Regarding the interpolation project, Pauli or Evgeni should be able to
> give you some details; they've been working on that recently.
>
> Cheers,
> Ralf
>
>
>
>> Thanks in advance.
>>
>>
>> Regards:
>> Shantanu Shahane,
>> Fourth Year Undergraduate,
>> Dept. of Mechanical Engineering,
>> Indian Institute of Technology Bombay,
>> Powai, Mumbai-400076 (India).
>> Mobile No.: 9967330927, 9370029097
>>
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev
>>
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com  Wed Feb 26 17:41:02 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 26 Feb 2014 23:41:02 +0100
Subject: [SciPy-Dev] GSoC 2014 : Discrete Wavelet Transforms in Scipy
In-Reply-To:
References:
Message-ID:

On Wed, Feb 26, 2014 at 5:49 AM, Ankit Agrawal wrote:

> Hi everyone,
>
> I am Ankit Agrawal, a 4th year student enrolled in a Dual
> Degree program (Bachelors + Masters) in Electrical Engineering at IIT
> Bombay. My Masters specialization is in Communication and Signal
> Processing, with a focus on Machine Learning and Computer Vision.
>
> I would like to work on integrating PyWavelets in scipy.signal
> and then adding more features as mentioned on the ideas page.
> I participated in GSoC 2013 with scikit-image, where I implemented some
> feature detectors like FAST and Censure (STAR), and binary feature
> descriptors like BRIEF, ORB and FREAK (in progress). The relevant courses
> that I have taken in the past are Image Processing, Machine Learning,
> Computer Vision, Speech Processing, NLP, Wavelets and Filter Banks*,
> Probabilistic Graphical Models*, Wireless & Mobile Communications*, etc.
> (* = taken this semester).
> My open source contributions can be seen here
> .
>
> I will get started by fixing some small issues in scipy by this
> weekend. I haven't used PyWavelets before, but my group members and I are
> going to use it in the coming weeks in an application assignment for the
> Wavelets course, which I will try to take as an opportunity to look into
> its codebase. I would also like to discuss features to be implemented and
> some related tickets in detail once I start writing the proposal next
> week. If there are other things that I should take a look at, please let
> me know. Thanks.
>

Hi Ankit, thanks for your interest. I'm curious what you'll propose for
features to implement - there are many ways to go. At the moment there are
no existing issues for DWT or the continuous wavelets already present in
scipy.signal, so I'd say just pick something that interests you for your
first PR(s).

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com  Thu Feb 27 02:34:53 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 27 Feb 2014 08:34:53 +0100
Subject: [SciPy-Dev] GSoC: Cython-rewrite and improvement for scipy.cluster
In-Reply-To:
References:
Message-ID:

Hi Richard,

On Wed, Feb 26, 2014 at 1:17 PM, Richard Tsai wrote:

> Hi all,
>
> I am Richard Tsai, currently a second-year undergraduate student in
> Computer Science at Sun Yat-sen University. I wish to take part in this
> year's GSoC. I'm learning machine learning with scipy/sklearn now, and I
> have contributed some code to SciPy since last term.
>
> I've read Ralf's [Roadmap to Scipy 1.0][1] and I'm interested in the
> `cluster` part.
>

We should merge that thing. It's not mine; it's the product of a lot of
discussion between most of the core devs.

> I want to help finish the cython-rewrite work and make some improvements
> to it as my GSoC project.
>

Great!
> I noticed that there's a `_vq_rewrite.pyx` in `scipy/cluster`, but I think
> it still needs further work. I want to start with some issues related to
> `cluster` as a warm-up, then re-implement the `cluster.vq` module in
> Cython and try to do some optimizations. I'm familiar with it since I've
> done a little SNS text-mining research with it with my classmates in a
> contest. As for the `cluster.hierarchy` module, I do not know a lot about
> hierarchical clustering, for I haven't used it in practice. I may start
> with reading some papers and writing some examples for the documentation.
> Then I will start the cython-rewrite for the `hierarchy` module. Finally,
> I plan to make some enhancements for the package. Maybe automatically
> determining the number of clusters with the elbow method? I don't have a
> detailed plan yet.
>
> Since this idea is not listed on the ideas page, I don't know if it is
> suitable to be a GSoC project. If you have any suggestions, please let me
> know. I'd appreciate any guidance/opinions/suggestions you can provide.
>

I think that there's definitely enough work there for one GSoC. However, I
don't know much about cluster, so I'll let one of the experts comment on
that.

Cheers,
Ralf


> Regards,
>
> Richard
>
> [1]: https://github.com/rgommers/scipy/blob/roadmap/doc/ROADMAP.rst.txt
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From benny.malengier at gmail.com  Fri Feb 28 03:50:25 2014
From: benny.malengier at gmail.com (Benny Malengier)
Date: Fri, 28 Feb 2014 09:50:25 +0100
Subject: [SciPy-Dev] Levenberg-Marquardt Implementation
In-Reply-To:
References:
Message-ID:

As a literature study is mentioned on the GSoC page, for constrained
optimization, for reference: http://www.ai7.uni-bayreuth.de/software.htm

It is all non-free software (http://www.ai7.uni-bayreuth.de/SOFTWARE_A.pdf),
but the papers of Prof. Schittkowski are available.
It's a pity professors write so much for-profit software with taxpayers'
money.

Benny


2014-02-26 0:27 GMT+01:00 Robert Kern :

> On Tue, Feb 25, 2014 at 9:48 PM, Evgeni Burovski
> wrote:
> > On Tue, Feb 25, 2014 at 9:37 PM, Robert Kern
> wrote:
> >> On Tue, Feb 25, 2014 at 9:25 PM, Evgeni Burovski
> >> wrote:
> >>> FWIW, one other possibility for constrained minimization would be to
> >>> wrap BOBYQA by Powell.
> >>>
> >>> (Disclaimer: All I know about it is from skimming this paper
> >>> www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf
> >>> and talking to people in PyData London last weekend.)
> >>>
> >>> As far as I understand,
> >>>
> >>> * it's a state-of-the-art minimizer
> >>> * the original Fortran code is in the public domain
> >>
> >> I see no indication of this.
> >
> > The very last paragraph of readme.txt in bobyqa.zip from
> > http://mat.uc.pt/~zhang/software.html#bobyqa
> > """
> >
> > There are no
> > restrictions on or charges for the use of the software. I hope that the
> time
> > and effort I have spent on developing the package will be helpful to much
> > research and to many applications.
> > """
>
> Well, that's certainly not public domain. It *might* be a license
> grant that is intended to be maximally free, but I wouldn't risk it.
> "use" is not really sufficient.
> It might refer to simply running the program unmodified, not the
> additional rights to modify and redistribute that scipy's BSD license
> explicitly calls for (in addition to, and thus separate and distinct
> from, "use"). When in doubt, ask the original author.
>
> --
> Robert Kern
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From benny.malengier at gmail.com  Fri Feb 28 05:41:08 2014
From: benny.malengier at gmail.com (Benny Malengier)
Date: Fri, 28 Feb 2014 11:41:08 +0100
Subject: [SciPy-Dev] Levenberg-Marquardt Implementation
In-Reply-To:
References:
Message-ID:

In light of my remark, I see this type of constrained optimization is
present in scipy via fmin_slsqp. I did a pull request to add an example of
the capabilities and use to the reference documentation.

Benny


2014-02-28 9:50 GMT+01:00 Benny Malengier :

> As a literature study is mentioned on the GSoC page, for constrained
> optimization, for reference: http://www.ai7.uni-bayreuth.de/software.htm
>
> It is all non-free software (http://www.ai7.uni-bayreuth.de/SOFTWARE_A.pdf),
> but the papers of Prof. Schittkowski are available.
> It's a pity professors write so much for-profit software with taxpayers'
> money.
>
> Benny
>
>
> 2014-02-26 0:27 GMT+01:00 Robert Kern :
>
>> On Tue, Feb 25, 2014 at 9:48 PM, Evgeni Burovski
>> wrote:
>> > On Tue, Feb 25, 2014 at 9:37 PM, Robert Kern
>> wrote:
>> >> On Tue, Feb 25, 2014 at 9:25 PM, Evgeni Burovski
>> >> wrote:
>> >>> FWIW, one other possibility for constrained minimization would be to
>> >>> wrap BOBYQA by Powell.
>> >>>
>> >>> (Disclaimer: All I know about it is from skimming this paper
>> >>> www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf
>> >>> and talking to people in PyData London last weekend.)
>> >>>
>> >>> As far as I understand,
>> >>>
>> >>> * it's a state-of-the-art minimizer
>> >>> * the original Fortran code is in the public domain
>> >>
>> >> I see no indication of this.
>> >
>> > The very last paragraph of readme.txt in bobyqa.zip from
>> > http://mat.uc.pt/~zhang/software.html#bobyqa
>> > """
>> >
>> > There are no
>> > restrictions on or charges for the use of the software. I hope that the
>> time
>> > and effort I have spent on developing the package will be helpful to
>> much
>> > research and to many applications.
>> > """
>>
>> Well, that's certainly not public domain. It *might* be a license
>> grant that is intended to be maximally free, but I wouldn't risk it.
>> "use" is not really sufficient. It might refer to simply running the
>> program unmodified, not the additional rights to modify and
>> redistribute that scipy's BSD license explicitly calls for (in
>> addition to, and thus separate and distinct from, "use"). When in
>> doubt, ask the original author.
>>
>> --
>> Robert Kern
>> _______________________________________________
>> SciPy-Dev mailing list
>> SciPy-Dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-dev
>>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi  Fri Feb 28 13:19:03 2014
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 28 Feb 2014 20:19:03 +0200
Subject: [SciPy-Dev] Levenberg-Marquardt Implementation
In-Reply-To:
References:
Message-ID:

On 28.02.2014 12:41, Benny Malengier wrote:
> In light of my remark, I see this type of constrained optimization is
> present in scipy via fmin_slsqp.

Yes, and also COBYLA does constrained optimization. But both are general
scalar-function minimizers, rather than nonlinear least-squares solvers.
You can of course express NLLSQ as a minimization problem, but there
usually is a difference in efficiency.
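To make that trade-off concrete: a least-squares fit can always be posed as a scalar sum-of-squares objective and handed to `fmin_slsqp` when bounds or constraints are needed. A minimal sketch (the exponential model, the data, and the bound values are made up for illustration, not taken from this thread):

```python
import numpy as np
from scipy.optimize import fmin_slsqp

# Synthetic, noiseless data from y = 2 * exp(-0.5 * x).
x = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-0.5 * x)

def sum_sq(p):
    # Scalar objective: the sum of squared residuals of the NLLSQ problem.
    a, b = p
    r = y - a * np.exp(-b * x)
    return np.dot(r, r)

# SLSQP minimizes the scalar objective while honoring the bounds.
p_opt = fmin_slsqp(sum_sq, x0=[1.0, 1.0],
                   bounds=[(0.0, 5.0), (0.1, 2.0)], iprint=0)
print(p_opt)  # close to [2.0, 0.5]
```

A dedicated NLLSQ solver such as `leastsq` instead works with the residual vector and its Jacobian directly, which is where the efficiency difference Pauli mentions comes from.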
--
Pauli Virtanen

From ralf.gommers at gmail.com  Fri Feb 28 17:12:31 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 28 Feb 2014 23:12:31 +0100
Subject: [SciPy-Dev] 0.13.3 and 0.14.0 releases
In-Reply-To:
References:
Message-ID:

On Wed, Feb 26, 2014 at 12:04 AM, Ralf Gommers wrote:

>
> On Sun, Feb 23, 2014 at 10:38 AM, Ralf Gommers wrote:
>
>>
>> On Thu, Jan 16, 2014 at 9:24 PM, Pauli Virtanen wrote:
>>
>>> On 15.01.2014 22:40, Ralf Gommers wrote:
>>> > It looks to me like we should do a 0.13.3 bugfix release soon to fix
>>> these
>>> > two issues:
>>> > - another memory leak in ndimage.label:
>>> > https://github.com/scipy/scipy/issues/3148
>>> > - we broke weave.inline with Visual Studio:
>>> > https://github.com/scipy/scipy/issues/3216
>>> >
>>> > I propose to make this release within a week.
>>> >
>>> > For the 0.14.0 release I propose to branch around Feb 23rd. That gives
>>> > us a month to work through a decent part of the backlog of PRs (plus
>>> > I'm on holiday until the 15th, so earlier wouldn't work for me).
>>> >
>>> > Does that schedule work for everyone?
>>>
>>> +1, sounds OK to me.
>>>
>>
>> It's almost time to branch 0.14.x. There are lots of open PRs, but only a
>> few left marked for 0.14.0. If there are other PRs that need to go in,
>> please set the milestone and/or comment on them today.
>>
>> My proposal is to branch tomorrow evening (around 10pm GMT), and after
>> that only backport important fixes to 0.14.x. If you have enhancements
>> that you'd like to go in, please help merge the relevant PR by tomorrow.
>>
>
> Update: the issues in scipy.sparse were a little too many / severe (see
> the many already-merged PRs and https://github.com/scipy/scipy/issues/3330),
> so we had to postpone the branching. There are some more things to be
> fixed for numpy 1.5.x and 1.6.x compat. Other than that we're good to go.
>

The 0.14.x branch is created; beta 1 will follow soon.
Thanks to all who helped fix a host of last-minute issues.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
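The elbow method Richard floats in the scipy.cluster thread above can already be sketched against the existing `cluster.vq` API. A toy illustration (the synthetic blob layout and the range of k are invented for the example):

```python
import numpy as np
from scipy.cluster.vq import kmeans, whiten

np.random.seed(0)
# Three well-separated 2-D blobs, so the "right" number of clusters is 3.
centers = [(0.0, 0.0), (6.0, 6.0), (0.0, 6.0)]
pts = np.vstack([np.random.randn(50, 2) + c for c in centers])
obs = whiten(pts)  # scale each feature to unit variance, as vq expects

# Mean distortion for k = 1..6; kmeans keeps the best of its restarts.
ks = range(1, 7)
distortions = [kmeans(obs, k)[1] for k in ks]
for k, d in zip(ks, distortions):
    print(k, round(float(d), 3))
# The distortion curve drops steeply up to k = 3 and then flattens:
# that kink is the "elbow" suggesting the number of clusters.
```

With real data the kink is rarely this sharp; the elbow is a heuristic, not a test.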