From robert.kern at gmail.com Fri Nov 1 06:16:05 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 1 Nov 2013 10:16:05 +0000 Subject: [Numpy-discussion] strange behavior of += with object array In-Reply-To: References: Message-ID: On Thu, Oct 31, 2013 at 11:22 PM, Neal Becker wrote: > > import numpy as np > #from accumulator import stat2nd_double > > ## Just to make this really clear, I'm making a dummy > ## class here that overloads += > class stat2nd_double (object): > def __iadd__ (self, x): > return self > > m = np.empty ((2,3), dtype=object) > m[:,:] = stat2nd_double() > > m[0,0] += 1.0 <<<< no error here > > m += np.ones ((2,3)) <<< but this gives an error > > Traceback (most recent call last): > File "test_stat.py", line 13, in > m += np.ones ((2,3)) > TypeError: unsupported operand type(s) for +: 'stat2nd_double' and 'float' Yeah, numpy doesn't pass down the __iadd__() to the underlying objects. object arrays are the only dtype that could implement __iadd__() at that level, so it has never been an operation added to the generic "numeric ops" system. Look in numpy/core/src/multiarray/numeric.c for more details. It might be possible to implement a special case for object arrays in array_inplace_add() and the rest. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ndbecker2 at gmail.com Fri Nov 1 06:34:52 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 01 Nov 2013 06:34:52 -0400 Subject: [Numpy-discussion] strange behavior of += with object array References: Message-ID: Robert Kern wrote: > On Thu, Oct 31, 2013 at 11:22 PM, Neal Becker wrote: >> >> import numpy as np >> #from accumulator import stat2nd_double >> >> ## Just to make this really clear, I'm making a dummy >> ## class here that overloads += >> class stat2nd_double (object): >> def __iadd__ (self, x): >> return self >> >> m = np.empty ((2,3), dtype=object) >> m[:,:] = stat2nd_double() >> >> m[0,0] += 1.0 <<<< no error here >> >> m += np.ones ((2,3)) <<< but this gives an error >> >> Traceback (most recent call last): >> File "test_stat.py", line 13, in >> m += np.ones ((2,3)) >> TypeError: unsupported operand type(s) for +: 'stat2nd_double' and 'float' > > Yeah, numpy doesn't pass down the __iadd__() to the underlying objects. > object arrays are the only dtype that could implement __iadd__() at that > level, so it has never been an operation added to the generic "numeric ops" > system. Look in numpy/core/src/multiarray/numeric.c for more details. It > might be possible to implement a special case for object arrays in > array_inplace_add() and the rest. > > -- > Robert Kern Is it worth filing an enhancement request? 
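The asymmetry in the original report can be seen from pure Python: `m[0,0] += 1.0` is expanded by the interpreter itself into a `__getitem__`/`__iadd__`/`__setitem__` sequence on the boxed object, so the object's own `__iadd__` runs without any help from numpy's numeric-ops machinery, while `m += arr` goes through `ndarray.__iadd__` and hits numpy's add loop, which (as Robert explains) does not forward to the elements. A minimal sketch — the `Acc` class here is a hypothetical stand-in for `stat2nd_double`:

```python
import numpy as np

calls = []

class Acc:
    """Stand-in for stat2nd_double: records every __iadd__ call."""
    def __iadd__(self, x):
        calls.append(x)
        return self

m = np.empty((2, 3), dtype=object)
m[:, :] = Acc()  # careful: this broadcasts ONE shared instance into all cells

# Python expands this to: m[0, 0] = m[0, 0].__iadd__(1.0),
# so the object's __iadd__ runs via the interpreter, not via numpy.
m[0, 0] += 1.0

print(calls)  # [1.0]
```

Note the side observation in the comment above: `m[:, :] = Acc()` assigns the same object everywhere, which matters for any accumulator class that carries state.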
From ndbecker2 at gmail.com Fri Nov 1 06:37:18 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 01 Nov 2013 06:37:18 -0400 Subject: [Numpy-discussion] strange behavior of += with object array References: Message-ID: Robert Kern wrote: > On Thu, Oct 31, 2013 at 11:22 PM, Neal Becker wrote: >> >> import numpy as np >> #from accumulator import stat2nd_double >> >> ## Just to make this really clear, I'm making a dummy >> ## class here that overloads += >> class stat2nd_double (object): >> def __iadd__ (self, x): >> return self >> >> m = np.empty ((2,3), dtype=object) >> m[:,:] = stat2nd_double() >> >> m[0,0] += 1.0 <<<< no error here >> >> m += np.ones ((2,3)) <<< but this gives an error >> >> Traceback (most recent call last): >> File "test_stat.py", line 13, in >> m += np.ones ((2,3)) >> TypeError: unsupported operand type(s) for +: 'stat2nd_double' and 'float' > > Yeah, numpy doesn't pass down the __iadd__() to the underlying objects. > object arrays are the only dtype that could implement __iadd__() at that > level, so it has never been an operation added to the generic "numeric ops" > system. Look in numpy/core/src/multiarray/numeric.c for more details. It > might be possible to implement a special case for object arrays in > array_inplace_add() and the rest. > > -- > Robert Kern What is a suggested workaround? The best I could think of is: np.vectorize (lambda s,x: s.__iadd__(x)) (m, np.ones ((2,3))) where m is my matrix of objects and np.ones ((2,3)) is the array to += each element. 
From robert.kern at gmail.com Fri Nov 1 06:41:30 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 1 Nov 2013 10:41:30 +0000 Subject: [Numpy-discussion] strange behavior of += with object array In-Reply-To: References: Message-ID: On Fri, Nov 1, 2013 at 10:34 AM, Neal Becker wrote: > > Robert Kern wrote: > > > On Thu, Oct 31, 2013 at 11:22 PM, Neal Becker wrote: > >> > >> import numpy as np > >> #from accumulator import stat2nd_double > >> > >> ## Just to make this really clear, I'm making a dummy > >> ## class here that overloads += > >> class stat2nd_double (object): > >> def __iadd__ (self, x): > >> return self > >> > >> m = np.empty ((2,3), dtype=object) > >> m[:,:] = stat2nd_double() > >> > >> m[0,0] += 1.0 <<<< no error here > >> > >> m += np.ones ((2,3)) <<< but this gives an error > >> > >> Traceback (most recent call last): > >> File "test_stat.py", line 13, in > >> m += np.ones ((2,3)) > >> TypeError: unsupported operand type(s) for +: 'stat2nd_double' and 'float' > > > > Yeah, numpy doesn't pass down the __iadd__() to the underlying objects. > > object arrays are the only dtype that could implement __iadd__() at that > > level, so it has never been an operation added to the generic "numeric ops" > > system. Look in numpy/core/src/multiarray/numeric.c for more details. It > > might be possible to implement a special case for object arrays in > > array_inplace_add() and the rest. > > > > -- > > Robert Kern > > Is it worth filing an enhancement request? Whatever motivates you to work on the PR. ;-) -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Fri Nov 1 12:00:38 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 1 Nov 2013 10:00:38 -0600 Subject: [Numpy-discussion] char with native integer signedness In-Reply-To: References: Message-ID: On Thu, Oct 31, 2013 at 10:19 AM, Geoffrey Irving wrote: > On Thu, Oct 31, 2013 at 2:08 AM, Robert Kern > wrote: > > On Thu, Oct 31, 2013 at 12:52 AM, Geoffrey Irving > wrote: > >> > >> Is there a standard way in numpy of getting a char with C-native > >> integer signedness? I.e., > >> > >> boost::is_signed::value ? numpy.byte : numpy.ubyte > >> > >> but without nonsensical mixing of languages? > > > > This is for interop with a C/C++ extension, right? Do this test in that > > extension's C/C++ code to expose the right dtype. As far as I know, this > is > > not something determined by the hardware, but the compiler used. Since > the > > compiler of numpy may be different from your extension, only your > extension > > can do that test properly. > > It's not determined by the hardware, but I believe it is standardized > by each platform's ABI even if it can be adjusted by the compiler. > In particular, it is unsigned on ARM. That caused a problem with the initial implementation of einsum until npy_char was changed to npy_byte (signed char) instead. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtaylor.debian at googlemail.com Sun Nov 3 11:42:25 2013 From: jtaylor.debian at googlemail.com (Julian Taylor) Date: Sun, 03 Nov 2013 17:42:25 +0100 Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release Message-ID: <52767CF1.60003@googlemail.com> Hi all, I'm happy to announce the release candidate of Numpy 1.7.2. This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3. 
More than 37 issues were fixed, the most important issues are listed in the release notes: https://github.com/numpy/numpy/blob/v1.7.2rc1/doc/release/1.7.2-notes.rst It is supposed to not break any existing code, so please test the releases and report any issues you find. Source tarballs and release notes can be found at https://sourceforge.net/projects/numpy/files/NumPy/1.7.2rc1/. Currently only Windows installers are available. OS X installer will follow soon. Concerning OS X currently only a single person can create the binary installers which is not a good situation. If you have a suitable machine [0] and want to help out please contact us. Cheers, Julian Taylor [0] we currently base the releases on macos 10.6 using python.org python versions -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From charlesr.harris at gmail.com Sun Nov 3 11:52:00 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 3 Nov 2013 09:52:00 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release In-Reply-To: <52767CF1.60003@googlemail.com> References: <52767CF1.60003@googlemail.com> Message-ID: On Sun, Nov 3, 2013 at 9:42 AM, Julian Taylor wrote: > Hi all, > > I'm happy to announce the release candidate of Numpy 1.7.2. > This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3. > > More than 37 issues were fixed, the most important issues are listed in > the release notes: > https://github.com/numpy/numpy/blob/v1.7.2rc1/doc/release/1.7.2-notes.rst > > It is supposed to not break any existing code, so please test the > releases and report any issues you find. > > Source tarballs and release notes can be found at > https://sourceforge.net/projects/numpy/files/NumPy/1.7.2rc1/. > Currently only Windows installers are available. OS X installer will > follow soon. 
> > Concerning OS X currently only a single person can create the binary > installers which is not a good situation. If you have a suitable machine > [0] and want to help out please contact us. > > Cheers, > Julian Taylor > > [0] we currently base the releases on macos 10.6 using python.org python > versions > > > Great to get this out! Thanks Julian. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.gnata at gmail.com Sun Nov 3 15:51:08 2013 From: xavier.gnata at gmail.com (xavier.gnata at gmail.com) Date: Sun, 03 Nov 2013 21:51:08 +0100 Subject: [Numpy-discussion] ANN: NumPy 1.8.0 release. In-Reply-To: References: Message-ID: <5276B73C.20203@gmail.com> On ubuntu 13.10 with the following packages installed: libopenblas-base libopenblas-dev liblapack-dev liblapack3 site.cfg: [DEFAULT] library_dirs = /usr/lib include_dirs = /usr/lib/openblas-base [openblas] libraries = openblas library_dirs = /usr/lib/openblas-base include_dirs = /usr/lib/openblas-base I get this error when I try to import numpy1.8: /usr/local/lib/python3.3/dist-packages/numpy/__init__.py in () 151 return loader(*packages, **options) 152 --> 153 from . import add_newdocs 154 __all__ = ['add_newdocs', 'ModuleDeprecationWarning'] 155 /usr/local/lib/python3.3/dist-packages/numpy/add_newdocs.py in () 11 from __future__ import division, absolute_import, print_function 12 ---> 13 from numpy.lib import add_newdoc 14 15 ############################################################################### /usr/local/lib/python3.3/dist-packages/numpy/lib/__init__.py in () 16 17 from . 
import scimath as emath ---> 18 from .polynomial import * 19 #import convertcode 20 from .utils import * /usr/local/lib/python3.3/dist-packages/numpy/lib/polynomial.py in () 17 from numpy.lib.function_base import trim_zeros, sort_complex 18 from numpy.lib.type_check import iscomplex, real, imag ---> 19 from numpy.linalg import eigvals, lstsq, inv 20 21 class RankWarning(UserWarning): /usr/local/lib/python3.3/dist-packages/numpy/linalg/__init__.py in () 48 from .info import __doc__ 49 ---> 50 from .linalg import * 51 52 from numpy.testing import Tester /usr/local/lib/python3.3/dist-packages/numpy/linalg/linalg.py in () 27 ) 28 from numpy.lib import triu, asfarray ---> 29 from numpy.linalg import lapack_lite, _umath_linalg 30 from numpy.matrixlib.defmatrix import matrix_power 31 from numpy.compat import asbytes ImportError: /usr/local/lib/python3.3/dist-packages/numpy/linalg/lapack_lite.cpython-33m.so: undefined symbol: dpotrf_ What's wrong? > Use site.cfg.example as template to create a new site.cfg. For > openblas, uncomment: > [openblas] > library_dirs = /opt/OpenBLAS/lib > include_dirs = /opt/OpenBLAS/include > Also, uncomment default section: > [DEFAULT] > library_dirs = /usr/local/lib > include_dirs = /usr/local/include > That should do it - hopefully. > > > On Thu, Oct 31, 2013 at 7:13 AM, Neal Becker > wrote: > > Charles R Harris wrote: > > > On Thu, Oct 31, 2013 at 6:58 AM, Neal Becker > > wrote: > > > >> Thanks for the release! > >> > >> I am having a hard time finding the build instructions. Could > you please > >> add > >> this to the announcement? > >> > > > > What sort of build instructions are you looking for? > > > > Chuck > > How to build from source, what are some settings for site.cfg. I > did get this > figured out (wanted to try out openblas), but it could be a small > barrier to > new users. > > > That should be explained in INSTALL.txt. It may be a bit outdated at > this point. 
If so, could you make a PR adding relevant bits from your > experience. > > Chuck > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From nouiz at nouiz.org Tue Nov 5 12:01:34 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Tue, 5 Nov 2013 12:01:34 -0500 Subject: [Numpy-discussion] How we support new and old NumPy C API. Message-ID: Hi, With recent versions of NumPy, compiling C code raises a deprecation warning by default. To remove it, we must use only the new NumPy C API and define a macro. The new API only exists for NumPy 1.6 and later, so if we want to support older NumPy versions we need to do more work. As Theano compiles many C code fragments that include NumPy, it generates too many warnings for the user. So I spent about 2 weeks updating Theano. Here is what I did, in the hope that it helps people. In particular, I think Cython can do the same. Currently it uses only the old interface, as it wants to support old NumPy versions.

1) Define this macro when compiling against numpy: NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION

2) Replace uses of the old macros with the new ones, and define the new macros as aliases for the old ones when compiling with old NumPy versions: New macro = old macro NPY_ARRAY_ENSURECOPY=NPY_ENSURECOPY NPY_ARRAY_ALIGNED=NPY_ALIGNED NPY_ARRAY_WRITEABLE=NPY_WRITEABLE NPY_ARRAY_UPDATE_ALL=NPY_UPDATE_ALL NPY_ARRAY_C_CONTIGUOUS=NPY_C_CONTIGUOUS NPY_ARRAY_F_CONTIGUOUS=NPY_F_CONTIGUOUS

3) Do not access members of PyArrayObject directly; use the old macros (which are inline functions in newer NumPy). For example, change a_object->dimensions to PyArray_DIMS(a_object).

4) Another change is that the new API does not allow assignment to the BASE attribute of an ndarray. To do this, you must call a function. So we use this code, which will work for all versions of NumPy:

#if NPY_API_VERSION < 0x00000007
PyArray_BASE(xview) = py_%(x)s;
#else
PyArray_SetBaseObject(xview, py_%(x)s);
#endif

5) The new interface has no way to modify the data pointer of an ndarray. The workaround we needed was to change our code so that we create the ndarray directly with the right data pointer. In the past, we created it with a temporary value, computed the one we wanted, and updated it. Now we create the ndarray directly with the right data pointer. This was done in our subtensor code (a_ndarray[slice[,...]]).

6) Lastly, we have one C code file that is generated from Cython code. We modified this code manually to be compatible with the new NumPy API so that it does not generate errors, as we disable the old NumPy interface.

Here is the PR to Theano for this: https://github.com/scikit-learn/scikit-learn/issues/2573 Hoping that this will help someone. Frédéric From qweqwegod at yahoo.com Tue Nov 5 13:10:22 2013 From: qweqwegod at yahoo.com (Sergey Petrov) Date: Tue, 05 Nov 2013 23:10:22 +0500 Subject: [Numpy-discussion] c api, ndarray creation Message-ID: Rather stupid question here, but I can't figure it out by myself: Why does the following C program segfault? And how can I avoid it?

#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#include <Python.h>
#include <numpy/arrayobject.h>

int main(int argc, char *argv[]) { int nd=1; npy_intp dims[] = {3}; npy_intp data[] = {1,2,3}; PyObject* array = PyArray_SimpleNewFromData(nd, dims, NPY_INT, (void*) data); return 0; }

From jaime.frio at gmail.com Tue Nov 5 13:20:45 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Tue, 5 Nov 2013 10:20:45 -0800 Subject: [Numpy-discussion] c api, ndarray creation In-Reply-To: References: Message-ID: On Tue, Nov 5, 2013 at 10:10 AM, Sergey Petrov wrote: > Rather stupid question here, but I can't figure it out by myself: > Why does the following C program segfault? And how can I avoid it?
> You need to call import_array before using the C-API, see here: http://docs.scipy.org/doc/numpy/user/c-info.how-to-extend.html#required-subroutine > > #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION > > #include > #include > > int main(int argc, char *argv[]) > { > int nd=1; > npy_intp dims[] = {3}; > npy_intp data[] = {1,2,3}; > PyObject* array = PyArray_SimpleNewFromData(nd, dims, NPY_INT, (void*) > data); > return 0; > } > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ay?dale en sus planes de dominaci?n mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Nov 5 13:22:37 2013 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 5 Nov 2013 18:22:37 +0000 Subject: [Numpy-discussion] c api, ndarray creation In-Reply-To: References: Message-ID: On Tue, Nov 5, 2013 at 6:10 PM, Sergey Petrov wrote: > > Rather stupid question here, but I can't figure out by myself: > Why does the following c program segfaults? And how can I avoid it? > > #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION > > #include > #include > > int main(int argc, char *argv[]) > { > int nd=1; > npy_intp dims[] = {3}; > npy_intp data[] = {1,2,3}; > PyObject* array = PyArray_SimpleNewFromData(nd, dims, NPY_INT, (void*) > data); > return 0; > } numpy is not a C library. It is a Python extension module. You can use its C API from other Python extension modules, not C main programs. You have not started a Python interpreter or imported the numpy module. Only then will the numpy API be available. Mechanically speaking, the proximate cause of your segfault is that `PyArray_SimpleNewFromData` is not actually a function, but a macro that looks up a function pointer from a static table defined in the numpy module. 
Calling the macro `import_array()` will import the numpy module and set up this table with the correct function pointers. But you first need a Python interpreter running. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Nov 5 13:50:17 2013 From: cournape at gmail.com (David Cournapeau) Date: Tue, 5 Nov 2013 18:50:17 +0000 Subject: [Numpy-discussion] Updates on using recent mingw for numpy/scipy Message-ID: Hi there, During the pycon.fr sprints, I took some time to look more into building numpy/scipy wheels on windows with recent mingw (gcc 4.x series). tl;dr: While I made some progress, there remain some hard-to-track issues. I will prepare a vagrant setup so that other people can work on this as well, so that I don't remain the bottleneck. details:
- all the tests were done on 32 bits. While 64 bits would add some challenges, I think most mingw-specific issues are independent of 32 vs 64 bits.
- 'works' means building + running numpy/scipy.test() without 'bad' failures
- compiling numpy without any blas/lapack, but completely static (no dependency on mingw at runtime), works with both mingw-w64 and mingw.
- compiling numpy with a custom-built blas/lapack 3.4.2 gives me a working numpy with mingw, but mingw-w64 gives a very weird failure: importing numpy once segfaults at some location, twice at a different one, three times at yet another one, and then it never segfaults anymore (unless I build a new one or reboot windows). I suspect some weird runtime interactions.
- compiling scipy with mingw + custom-built blas/lapack works, except that it crashes when exiting the process (with some C runtime-related errors).
- compiling both numpy and scipy without statically linking the gfortran runtime does work.
I am hoping that those errors are caused by some invalid link order. As controlling those precisely with distutils is horrible, I have not been able to pursue this any further.
cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From qweqwegod at yahoo.com Tue Nov 5 13:50:58 2013 From: qweqwegod at yahoo.com (Sergey Petrov) Date: Tue, 05 Nov 2013 23:50:58 +0500 Subject: [Numpy-discussion] c api, ndarray creation In-Reply-To: References: Message-ID: On Tue, 05 Nov 2013 23:20:45 +0500, Jaime Fern?ndez del R?o wrote: > On Tue, Nov 5, 2013 at 10:10 AM, Sergey Petrov > wrote: >> Rather stupid question here, but I can't figure out by myself: >> Why does the following c program segfaults? And how can I avoid it? > > You need to call import_array before using the C-API, see here: Thank you, Jaime, that solved my issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From qweqwegod at yahoo.com Tue Nov 5 13:55:15 2013 From: qweqwegod at yahoo.com (Sergey Petrov) Date: Tue, 05 Nov 2013 23:55:15 +0500 Subject: [Numpy-discussion] c api, ndarray creation In-Reply-To: References: Message-ID: On Tue, 05 Nov 2013 23:22:37 +0500, Robert Kern wrote: > numpy is not a C library. It is a Python extension module. You can use > its C API from other Python extension modules, not C >main programs. You > have not started a Python interpreter or imported the numpy module. Only > then will the numpy API be >available. > > Mechanically speaking, the proximate cause of your segfault is that > `PyArray_SimpleNewFromData` is not actually a function, >but a macro > that looks up a function pointer from a static table defined in the > numpy module. Calling the macro >`import_array()` will import the numpy > module set up this table with the correct function pointers. But you > first need a >Python interpreter running. > I just tried, apparently in vain, to simplify example. Thanks for detailed answer! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Tue Nov 5 14:38:24 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Nov 2013 11:38:24 -0800 Subject: [Numpy-discussion] Updates on using recent mingw for numpy/scipy In-Reply-To: References: Message-ID: Hi David, Thanks a lot for the update. On Tue, Nov 5, 2013 at 10:50 AM, David Cournapeau wrote: > Hi there, > > During pycon.fr sprints, I took some time to look more into building > numpy/scipy wheels on windows with recent mingw (gcc 4.x series). > > tl;dr: While I made some progress, there remains some hard-to-track issues. > I will prepare a vagrant setup so that other people can work on this as well > so that I don't remain the bottleneck. > > details: > > - all the tests were done on 32 bits. While 64 bits would add some > challenges, I think most mingw-specific issues are independent of 32 vs 64 > bits. > - works mean building + running numpy/scipy.test() without 'bad' failures > - compiling numpy without any blas/lapack, but completely static (no > dependency on mingw at runtime) works with both mingw-w64 and mingw. > - compiling numpy with a custom-built blas/lapack 3.4.2 gives me a working > numpy with mingw, but mingw-w64 gives a very weird failure: importing numpy > once segfaults at some location, twice at a difference one, three times at > yet another one, and then never segault anymore (unless I build a new one or > reboot windows). I suspect some weird runtime interactions. > - compiling scipy with mingw + custom-built blas/lapack works, except that > it crashes when exciting the process (with some C runtime-related errors). > - compiling both numpy and scipy without statically linking gfortran > runtime does work. > > I am hoping that those errors are caused by some invalid link order. As > controlling those precisely with distutils is horrible, I have not been able > to pursue this any further. Can the build problems be addressed by using waf / bento? 
Cheers, Matthew From orion at cora.nwra.com Wed Nov 6 00:41:52 2013 From: orion at cora.nwra.com (Orion Poplawski) Date: Tue, 05 Nov 2013 22:41:52 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release In-Reply-To: <52767CF1.60003@googlemail.com> References: <52767CF1.60003@googlemail.com> Message-ID: <5279D6A0.6050904@cora.nwra.com> On 11/3/2013 9:42 AM, Julian Taylor wrote: > Hi all, > > I'm happy to announce the release candidate of Numpy 1.7.2. > This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3. > > More than 37 issues were fixed, the most important issues are listed in > the release notes: > https://github.com/numpy/numpy/blob/v1.7.2rc1/doc/release/1.7.2-notes.rst > > It is supposed to not break any existing code, so please test the > releases and report any issues you find. Builds and tests okay on Fedora 20. -- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA, Boulder/CoRA Office FAX: 303-415-9702 3380 Mitchell Lane orion at nwra.com Boulder, CO 80301 http://www.nwra.com From seb.haase at gmail.com Wed Nov 6 05:55:30 2013 From: seb.haase at gmail.com (Sebastian Haase) Date: Wed, 6 Nov 2013 11:55:30 +0100 Subject: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python In-Reply-To: <52697B17.6060106@creativetrax.com> References: <526905E9.40000@creativetrax.com> <52690655.1030605@creativetrax.com> <526938AF.5020509@creativetrax.com> <52697B17.6060106@creativetrax.com> Message-ID: Hi, you projects looks really great! I was wondering if you are making use of any pre-existing javascript plotting library like flot or flotr2 ? And if not, what are your reasons ? 
Thanks, Sebastian Haase On Thu, Oct 24, 2013 at 9:55 PM, Jason Grout wrote: > On 10/24/13 1:42 PM, Peter Wang wrote: >> On Thu, Oct 24, 2013 at 10:11 AM, Jason Grout >> > wrote: >> >> It would be really cool if you could hook into the new IPython comm >> infrastructure to push events back to the server in IPython (this is not >> quite merged yet, but probably ready for experimentation like this). >> The comm infrastructure basically opens up a communication channel >> between objects on the server and the browser. Messages get sent over >> the normal IPython channels. The server and browser objects just use >> either send() or an on_message() handler. See >> https://github.com/ipython/ipython/pull/4195 >> >> >> Yeah, I think we should definitely look into integrating with this >> mechanism for when we are embedded in a Notebook. However, we always >> want the underlying infrastructure to be independent of IPython >> Notebook, because we want people to be able to build analytical >> applications on top of these components. > > That makes a lot of sense. And looking at the code, it looks like you > are cleanly separating out the session objects controlling communication > from the plot machinery. That will hopefully make it much easier to > have different transports for the communication. > > >> >> Here's a very simple example of the Comm implementation working with >> matplotlib images in the Sage Cell server (which is built on top of the >> IPython infrastructure): http://sagecell.sagemath.org/?q=fyjgmk (I'd >> love to see a bokeh version of this sort of thing :). >> >> >> This is interesting, and introducing widgets is already on the roadmap, >> tentatively v0.4. When running against a plot server, Bokeh plots >> already push selections back to the server side. (That's how the linked >> brushing in e.g. 
this example works: >> https://www.wakari.io/sharing/bundle/pwang/cars) >> >> Our immediate short-term priorities for 0.3 are improving the layout >> mechanism, incorporating large data processing into the plot server, and >> investigating basic interop with Matplotlib objects. >> >> > > Great to hear. > > > Jason > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From ndbecker2 at gmail.com Wed Nov 6 13:17:51 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 06 Nov 2013 13:17:51 -0500 Subject: [Numpy-discussion] numpy.savetxt to string? Message-ID: According to doc, savetxt only allows a file name. I'm surprised it doesn't allow a file-like object. How can I format text into a string? I would like savetxt to accept StringIO for this. From warren.weckesser at gmail.com Wed Nov 6 13:23:14 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 6 Nov 2013 13:23:14 -0500 Subject: [Numpy-discussion] numpy.savetxt to string? In-Reply-To: References: Message-ID: Which version of numpy are you using? I just tried it with 1.7.1, and it accepted a StringIO instance. The docstring says the first argument may be a filename or file handle ( http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html#numpy.savetxt ). Warren On Wed, Nov 6, 2013 at 1:17 PM, Neal Becker wrote: > According to doc, savetxt only allows a file name. I'm surprised it > doesn't > allow a file-like object. How can I format text into a string? I would > like > savetxt to accept StringIO for this. > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
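As Warren points out, `savetxt` accepts a file handle as well as a filename, so formatting into a string is just a matter of passing a `StringIO`. A quick sketch (using the Python 3 `io` module; on Python 2 the equivalent would be `StringIO.StringIO`):

```python
import io
import numpy as np

# Write a small integer array into an in-memory text buffer.
buf = io.StringIO()
np.savetxt(buf, np.arange(6).reshape(2, 3), fmt='%d')
text = buf.getvalue()
print(text)
# 0 1 2
# 3 4 5
```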
URL: From matt at plot.ly Wed Nov 6 21:35:46 2013 From: matt at plot.ly (Matt Sundquist) Date: Wed, 6 Nov 2013 18:35:46 -0800 Subject: [Numpy-discussion] Plotly: Python Sandbox (NumPy supported) and Plotting Library Message-ID: Hey NumPy users, My name is Matt, and I'm part of Plot.ly, a graphing and analytics startup. We just launched a beta, and wanted to reach out to this group about our Python and Numpy features. *Background.* Plotly's graphing libraries let you make interactive, publication-quality plots in your browser. You can style graphs and analyze data with Python, a GUI, or our grid. Then, download, export, or share your work publicly with a url or privately among other Plotly members. And you can access your graphs from anywhere. You can also plot with our online Python sandbox (*NumPy supported*), and save and share scripts. In particular, here are three features we thought would be of interest to this group: - Python graphing library . Download here, or fork it here . - Interactive graphs w/ IPython notebooks (here ). - Python command line, from which you can run Numpy. Gallery and scripts . We'd love to hear what you think, get your feedback, and try to improve based on your advice. As we're just launching, it's super helpful to get feedback and advice, so we really appreciate you checking it out. Thanks so much, Matt https://plot.ly/ l Plotly Twitter l Plotly Facebook > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwang at continuum.io Thu Nov 7 00:24:53 2013 From: pwang at continuum.io (Peter Wang) Date: Wed, 6 Nov 2013 23:24:53 -0600 Subject: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python In-Reply-To: References: <526905E9.40000@creativetrax.com> <52690655.1030605@creativetrax.com> <526938AF.5020509@creativetrax.com> <52697B17.6060106@creativetrax.com> Message-ID: Hi Sebastian, On Wed, Nov 6, 2013 at 4:55 AM, Sebastian Haase wrote: > Hi, > you projects looks really great! 
> I was wondering if you are making use of any pre-existing javascript
> plotting library like flot or flotr2?
> And if not, what are your reasons?

We did not use any pre-existing JS plotting library. At the time we were
exploring our options (and I believe this is still the case), the plotting
libraries were all architected to (1) have their plots specified by JS
code, (2) interact with mouse/keyboard via DOM events and JS callbacks, and
(3) process data locally in the JS namespace. Flot was one of the few that
actually had any support for callbacks to retrieve data from a server, but
even so, its data model was very limited.

We recognized that in order to support good interaction with a non-browser
language, we would need a JS runtime that was *designed* to sync its object
models with server-side state, which could then be produced and modified by
other languages. (Python is the first language for Bokeh, of course, but
other languages should be pretty straightforward.) We also wanted to
structure the interaction model at a higher level, and offer the
configuration of interactions from a non-JS language.

It's not entirely obvious from our current set of initial examples, but if
you use the "output_server()" mode of bokeh, and you grab the Plot object
via curplot(), you can modify graphical and data attributes of the plot,
and *they are reflected in realtime in the browser*. This is independent of
whether your plot is in an output cell of an IPython notebook, or embedded
in some HTML page you wrote - the BokehJS library powering those plots is
watching for server-side model updates automagically.

Lastly, most of the JS plotting libraries that we saw took a very
traditional perspective on information visualization, i.e. they treat it
mostly as a rendering task. So, you pass in some configuration and it
rasters some pixels on a backend or outputs some SVG. None of the ones I
looked at used a scene-graph approach to info viz.
Even the venerable d3.js did not do this; it is a scripting layer over DOM (including SVG), and its core concepts are the primitives of the underlying drawing system, and not ones appropriate to the infovis task. (Only recently did they add an "axis" object, and there still is not any reification of coordinate spaces and such AFAIK.) The charting libraries that build on top of d3 (e.g. nvd3 and d3-chart) exist for a reason... but they mostly just use d3 as a fancy SVG rendering layer. And, once again, they live purely in Javascript, leaving server-side data and state management as an exercise to the scientist/analyst. FWIW, I have CCed the Bokeh discussion list, which is perhaps a more appropriate list for further discussion on this topic. :-) -Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmkleffner at gmx.de Thu Nov 7 10:11:36 2013 From: cmkleffner at gmx.de (Carl Dr. Kleffner) Date: Thu, 7 Nov 2013 16:11:36 +0100 (CET) Subject: [Numpy-discussion] mingw-w64 and openblas test Message-ID: An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Nov 7 12:28:11 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 7 Nov 2013 10:28:11 -0700 Subject: [Numpy-discussion] mingw-w64 and openblas test In-Reply-To: References: Message-ID: On Thu, Nov 7, 2013 at 8:11 AM, Carl Dr. Kleffner wrote: > Hi list, > > my name is Carl and I'm new to the list. With the advent of the recently > released mingw-w64 libs and headers v-3.0 ( > http://sourceforge.net/projects/mingw-w64/files/mingw-w64/mingw-w64-release) I decided to build a custom version of the mingw-w64 toolchain for both: > i686 (32bit) and x86_64 (64 bit) adapted for python-2.7. > > highlights are: > > - build on Windows 7 with the help of msys2 ( > http://sourceforge.net/projects/msys2/files/Alpha-versions/ ) and the > mingw-build scripts found at > https://github.com/niXman/mingw-builds/tree/develop . 
> - recent gcc-4.8.1 and mingw-w64 rev 3.0 code base.
> - no multilib and thus pure native compile on Windows.
> - fully statically built, thus no dependency on any mingw DLLs.
> - languages: C/C++/Fortran/LTO.
> - SEH exceptions (x86_64) and SJLJ exceptions (i686) configuration.
> - win32 threads (default) configuration with winpthreads as option.
> - additional spec files for linkage to the MSVCR90 runtime according to
> http://developer.berlios.de/devlog/akruis/2012/06/10/msvcr90dll-and-mingw/ (now defunct).
> - API compatible build to the following official mingw-w64 toolchains:
> http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/4.8.1/threads-win32/sjlj/i686-4.8.1-release-win32-sjlj-rt_v3-rev2.7z
>
> http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/4.8.1/threads-win32/seh/x86_64-4.8.1-release-win32-seh-rt_v3-rev2.7z
>
> In a first test with mingw-w64 (32bit) and a serial (single-threaded)
> OpenBLAS compiled with the toolchain mentioned above, I was able to build
> numpy-1.8.0rc2 for py2.7 against OpenBLAS. 64 bit is on my TODO list. The build
> stage needed a lot of manual intervention due to my non-understanding of
> distutils. numpy.test(verbose=2) runs without segfault. Failures and errors
> are pasted below.
>
> Regards
>
> Carl
>
> numpy.test(verbose=2) results:
> test_field_names (test_multiarray.TestRecord) ... SKIP: non ascii unicode
> field indexing skipped; raises segfault on python 2.x
> test_inf_ninf (test_umath.TestArctan2SpecialValues) ...
> D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py:417:
> RuntimeWarning: invalid value encountered in arctan2
> assert_almost_equal(ncu.arctan2( np.inf, -np.inf), 0.75 * np.pi)
> FAIL
> test_inf_pinf (test_umath.TestArctan2SpecialValues) ...
> D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py:422: > RuntimeWarning: invalid value encountered in arctan2 > assert_almost_equal(ncu.arctan2( np.inf, np.inf), 0.25 * np.pi) > FAIL > test_nan_outputs2 (test_umath.TestHypotSpecialValues) ... FAIL > test_umath_complex.TestCabs.test_cabs_inf_nan(, inf, > nan, inf) ... FAIL > test_umath_complex.TestCabs.test_cabs_inf_nan(, -inf, > nan, inf) ... FAIL > test_umath_complex.TestCarg.test_special_values(, -inf, inf, > 2.356194490192345, False) ... > D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py:525: > RuntimeWarning: invalid value encountered in _arg > assert_almost_equal(f(z1), x) > FAIL > test_umath_complex.TestCarg.test_special_values(, -inf, > -inf, -2.356194490192345, False) ... FAIL > test_umath_complex.TestCarg.test_special_values(, inf, inf, > 0.7853981633974483, False) ... FAIL > test_umath_complex.TestCarg.test_special_values(, inf, -inf, > -0.7853981633974483, False) ... FAIL > Failure: SkipTest (Skipping test: test_special_values: Numpy is using > complex functions (e.g. sqrt) provided by yourplatform's C library. > However, they do not seem to behave accordingto C99 -- so C99 tests are > skipped.) ... SKIP: Skipping test: test_special_values: Numpy is using > complex functions (e.g. sqrt) provided by yourplatform's C library. > However, they do not seem to behave accordingto C99 -- so C99 tests are > skipped. > test_special_values (test_umath_complex.TestClog) ... SKIP: Skipping test: > test_special_values: Numpy is using complex functions (e.g. sqrt) provided > by yourplatform's C library. However, they do not seem to behave > accordingto C99 -- so C99 tests are skipped. > Failure: SkipTest (Skipping test: test_special_values: Numpy is using > complex functions (e.g. sqrt) provided by yourplatform's C library. > However, they do not seem to behave accordingto C99 -- so C99 tests are > skipped.) ... 
SKIP: Skipping test: test_special_values: Numpy is using > complex functions (e.g. sqrt) provided by yourplatform's C library. > However, they do not seem to behave accordingto C99 -- so C99 tests are > skipped. > Failure: ImportError (cannot import name ccompiler) ... ERROR > SKIP: No C compiler available > test_ufunc (test_function_base.TestVectorize) ... FAIL > test_lapack (test_build.TestF77Mismatch) ... SKIP: Skipping test: > test_lapack: Skipping fortran compiler mismatch on non Linux platform > test_linalg.test_xerbla_override ... SKIP: Not POSIX or fork failed. > ====================================================================== > ERROR: Failure: ImportError (cannot import name ccompiler) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\loader.py", line > 413, in loadTestsFromName > addr.filename, addr.module) > File "d:\devel32\python-2.7.5\lib\site-packages\nose\importer.py", line > 47, in importFromPath > return self.importFromDir(dir_path, fqname) > File "d:\devel32\python-2.7.5\lib\site-packages\nose\importer.py", line > 94, in importFromDir > mod = load_module(part_fqname, fh, filename, desc) > File "numpy\distutils\__init__.py", line 9, in > from . import ccompiler > File "numpy\distutils\ccompiler.py", line 9, in > from distutils.ccompiler import * > File "numpy\distutils\__init__.py", line 9, in > from . 
import ccompiler > File "numpy\distutils\ccompiler.py", line 10, in > from distutils import ccompiler > ImportError: cannot import name ccompiler > ====================================================================== > FAIL: test_inf_ninf (test_umath.TestArctan2SpecialValues) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py", > line 417, in test_inf_ninf > assert_almost_equal(ncu.arctan2( np.inf, -np.inf), 0.75 * np.pi) > File "numpy\testing\utils.py", line 462, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: nan > DESIRED: 2.356194490192345 > ====================================================================== > FAIL: test_inf_pinf (test_umath.TestArctan2SpecialValues) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py", > line 422, in test_inf_pinf > assert_almost_equal(ncu.arctan2( np.inf, np.inf), 0.25 * np.pi) > File "numpy\testing\utils.py", line 462, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: nan > DESIRED: 0.7853981633974483 > ====================================================================== > FAIL: test_nan_outputs2 (test_umath.TestHypotSpecialValues) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py", > line 337, in test_nan_outputs2 > assert_hypot_isinf(np.nan, np.inf) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py", > line 328, in assert_hypot_isinf > "hypot(%s, %s) is %s, not inf" % (x, y, ncu.hypot(x, y))) > File "numpy\testing\utils.py", 
line 44, in assert_ > raise AssertionError(msg) > AssertionError: hypot(nan, inf) is nan, not inf > ====================================================================== > FAIL: test_umath_complex.TestCabs.test_cabs_inf_nan(, > inf, nan, inf) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197, > in runTest > self.test(*self.arg) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py", > line 523, in check_real_value > assert_equal(f(z1), x) > File "numpy\testing\utils.py", line 260, in assert_equal > return assert_array_equal(actual, desired, err_msg, verbose) > File "numpy\testing\utils.py", line 718, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File "numpy\testing\utils.py", line 607, in assert_array_compare > chk_same_position(x_isnan, y_isnan, hasval='nan') > File "numpy\testing\utils.py", line 587, in chk_same_position > raise AssertionError(msg) > AssertionError: > Arrays are not equal > x and y nan location mismatch: > x: array([ nan]) > y: array(inf) > ====================================================================== > FAIL: test_umath_complex.TestCabs.test_cabs_inf_nan(, > -inf, nan, inf) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197, > in runTest > self.test(*self.arg) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py", > line 523, in check_real_value > assert_equal(f(z1), x) > File "numpy\testing\utils.py", line 260, in assert_equal > return assert_array_equal(actual, desired, err_msg, verbose) > File "numpy\testing\utils.py", line 718, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File "numpy\testing\utils.py", line 607, in assert_array_compare > 
chk_same_position(x_isnan, y_isnan, hasval='nan') > File "numpy\testing\utils.py", line 587, in chk_same_position > raise AssertionError(msg) > AssertionError: > Arrays are not equal > x and y nan location mismatch: > x: array([ nan]) > y: array(inf) > ====================================================================== > FAIL: test_umath_complex.TestCarg.test_special_values(, > -inf, inf, 2.356194490192345, False) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197, > in runTest > self.test(*self.arg) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py", > line 525, in check_real_value > assert_almost_equal(f(z1), x) > File "numpy\testing\utils.py", line 454, in assert_almost_equal > return assert_array_almost_equal(actual, desired, decimal, err_msg) > File "numpy\testing\utils.py", line 811, in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "numpy\testing\utils.py", line 607, in assert_array_compare > chk_same_position(x_isnan, y_isnan, hasval='nan') > File "numpy\testing\utils.py", line 587, in chk_same_position > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > x and y nan location mismatch: > x: array([ nan]) > y: array(2.356194490192345) > ====================================================================== > FAIL: test_umath_complex.TestCarg.test_special_values(, > -inf, -inf, -2.356194490192345, False) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197, > in runTest > self.test(*self.arg) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py", > line 525, in check_real_value > assert_almost_equal(f(z1), x) > File 
"numpy\testing\utils.py", line 454, in assert_almost_equal > return assert_array_almost_equal(actual, desired, decimal, err_msg) > File "numpy\testing\utils.py", line 811, in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "numpy\testing\utils.py", line 607, in assert_array_compare > chk_same_position(x_isnan, y_isnan, hasval='nan') > File "numpy\testing\utils.py", line 587, in chk_same_position > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > x and y nan location mismatch: > x: array([ nan]) > y: array(-2.356194490192345) > ====================================================================== > FAIL: test_umath_complex.TestCarg.test_special_values(, inf, > inf, 0.7853981633974483, False) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197, > in runTest > self.test(*self.arg) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py", > line 525, in check_real_value > assert_almost_equal(f(z1), x) > File "numpy\testing\utils.py", line 454, in assert_almost_equal > return assert_array_almost_equal(actual, desired, decimal, err_msg) > File "numpy\testing\utils.py", line 811, in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "numpy\testing\utils.py", line 607, in assert_array_compare > chk_same_position(x_isnan, y_isnan, hasval='nan') > File "numpy\testing\utils.py", line 587, in chk_same_position > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > x and y nan location mismatch: > x: array([ nan]) > y: array(0.7853981633974483) > ====================================================================== > FAIL: test_umath_complex.TestCarg.test_special_values(, inf, > -inf, -0.7853981633974483, False) > 
---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197, > in runTest > self.test(*self.arg) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py", > line 525, in check_real_value > assert_almost_equal(f(z1), x) > File "numpy\testing\utils.py", line 454, in assert_almost_equal > return assert_array_almost_equal(actual, desired, decimal, err_msg) > File "numpy\testing\utils.py", line 811, in assert_array_almost_equal > header=('Arrays are not almost equal to %d decimals' % decimal)) > File "numpy\testing\utils.py", line 607, in assert_array_compare > chk_same_position(x_isnan, y_isnan, hasval='nan') > File "numpy\testing\utils.py", line 587, in chk_same_position > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > x and y nan location mismatch: > x: array([ nan]) > y: array(-0.7853981633974483) > ====================================================================== > FAIL: test_ufunc (test_function_base.TestVectorize) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\lib\tests\test_function_base.py", > line 554, in test_ufunc > assert_array_equal(r1, r2) > File "numpy\testing\utils.py", line 718, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File "numpy\testing\utils.py", line 644, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not equal > (mismatch 40.0%) > x: array([ 1.00000000e+00, 6.12323400e-17, -1.00000000e+00, > -1.83697020e-16, 1.00000000e+00]) > y: array([ 1.00000000e+00, 6.12303177e-17, -1.00000000e+00, > -1.83690953e-16, 1.00000000e+00]) > ---------------------------------------------------------------------- > Ran 4413 tests in 74.154s > FAILED (KNOWNFAIL=9, SKIP=7, errors=1, 
failures=10) > Running unit tests for numpy > NumPy version 1.8.0rc2 > NumPy is installed in numpy > Python version 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit > (Intel)] > nose version 1.3.0 > > > Could you take a look at https://github.com/numpy/numpy/pull/4021? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Nov 7 13:01:40 2013 From: cournape at gmail.com (David Cournapeau) Date: Thu, 7 Nov 2013 18:01:40 +0000 Subject: [Numpy-discussion] mingw-w64 and openblas test In-Reply-To: References: Message-ID: Carl, Can you post a build log of numpy (the output of python setup.py build...) ? I could also build a working numpy with gcc 4 + open blas in full static mode, but got some issues with scipy. I suspect numpy + openblas works ok because no fortran is actually used except maybe for wrappers. David On Thu, Nov 7, 2013 at 3:11 PM, Carl Dr. Kleffner wrote: > Hi list, > > my name is Carl and I'm new to the list. With the advent of the recently > released mingw-w64 libs and headers v-3.0 ( > http://sourceforge.net/projects/mingw-w64/files/mingw-w64/mingw-w64-release) I decided to build a custom version of the mingw-w64 toolchain for both: > i686 (32bit) and x86_64 (64 bit) adapted for python-2.7. > > highlights are: > > - build on Windows 7 with the help of msys2 ( > http://sourceforge.net/projects/msys2/files/Alpha-versions/ ) and the > mingw-build scripts found at > https://github.com/niXman/mingw-builds/tree/develop . > - recent gcc-4.8.1 and mingw-w64 rev 3.0 code base. > - no multilib and thus pure native compile on Windows. > - fully statically build, thus no dependancy on any mingw dlls. > - languages: C/C++/Fortran/LTO. > - SEH exceptions (x86_64) and SJLJ exceptions (i686) configuration. > - win32 threads (default) configuration with winpthreads as option. 
> - additional spec files for linkage to MSVCR90 runtime according to > http://developer.berlios.de/devlog/akruis/2012/06/10/msvcr90dll-and-mingw/(now defunct). > - API compatible build to the following officially mingw-w64 toolchains: > http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/4.8.1/threads-win32/sjlj/i686-4.8.1-release-win32-sjlj-rt_v3-rev2.7z > > http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/4.8.1/threads-win32/seh/x86_64-4.8.1-release-win32-seh-rt_v3-rev2.7z > > A first test with mingw-w64 (32bit) and a serial (single threaded) > OpenBLAS compiled with the toolchain mentioned above I was able to build a > numpy-1.8.0rc2-py2.7 against OpenBLAS. 64 bit is on my TODO list. The build > stage needed a lot of manual intervention due to my non-understanding of > distutils. numpy.test(verbose=2) runs without segfault. Failures and Errors > are pasted below. > > Regards > > Carl > > numyp.test(verbose=2) results: > test_field_names (test_multiarray.TestRecord) ... SKIP: non ascii unicode > field indexing skipped; raises segfault on python 2.x > test_inf_ninf (test_umath.TestArctan2SpecialValues) ... > D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py:417: > RuntimeWarning: invalid value encountered in arctan2 > assert_almost_equal(ncu.arctan2( np.inf, -np.inf), 0.75 * np.pi) > FAIL > test_inf_pinf (test_umath.TestArctan2SpecialValues) ... > D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py:422: > RuntimeWarning: invalid value encountered in arctan2 > assert_almost_equal(ncu.arctan2( np.inf, np.inf), 0.25 * np.pi) > FAIL > test_nan_outputs2 (test_umath.TestHypotSpecialValues) ... FAIL > test_umath_complex.TestCabs.test_cabs_inf_nan(, inf, > nan, inf) ... FAIL > test_umath_complex.TestCabs.test_cabs_inf_nan(, -inf, > nan, inf) ... 
FAIL > test_umath_complex.TestCarg.test_special_values(, -inf, inf, > 2.356194490192345, False) ... > D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py:525: > RuntimeWarning: invalid value encountered in _arg > assert_almost_equal(f(z1), x) > FAIL > test_umath_complex.TestCarg.test_special_values(, -inf, > -inf, -2.356194490192345, False) ... FAIL > test_umath_complex.TestCarg.test_special_values(, inf, inf, > 0.7853981633974483, False) ... FAIL > test_umath_complex.TestCarg.test_special_values(, inf, -inf, > -0.7853981633974483, False) ... FAIL > Failure: SkipTest (Skipping test: test_special_values: Numpy is using > complex functions (e.g. sqrt) provided by yourplatform's C library. > However, they do not seem to behave accordingto C99 -- so C99 tests are > skipped.) ... SKIP: Skipping test: test_special_values: Numpy is using > complex functions (e.g. sqrt) provided by yourplatform's C library. > However, they do not seem to behave accordingto C99 -- so C99 tests are > skipped. > test_special_values (test_umath_complex.TestClog) ... SKIP: Skipping test: > test_special_values: Numpy is using complex functions (e.g. sqrt) provided > by yourplatform's C library. However, they do not seem to behave > accordingto C99 -- so C99 tests are skipped. > Failure: SkipTest (Skipping test: test_special_values: Numpy is using > complex functions (e.g. sqrt) provided by yourplatform's C library. > However, they do not seem to behave accordingto C99 -- so C99 tests are > skipped.) ... SKIP: Skipping test: test_special_values: Numpy is using > complex functions (e.g. sqrt) provided by yourplatform's C library. > However, they do not seem to behave accordingto C99 -- so C99 tests are > skipped. > Failure: ImportError (cannot import name ccompiler) ... ERROR > SKIP: No C compiler available > test_ufunc (test_function_base.TestVectorize) ... FAIL > test_lapack (test_build.TestF77Mismatch) ... 
SKIP: Skipping test: > test_lapack: Skipping fortran compiler mismatch on non Linux platform > test_linalg.test_xerbla_override ... SKIP: Not POSIX or fork failed. > ====================================================================== > ERROR: Failure: ImportError (cannot import name ccompiler) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\loader.py", line > 413, in loadTestsFromName > addr.filename, addr.module) > File "d:\devel32\python-2.7.5\lib\site-packages\nose\importer.py", line > 47, in importFromPath > return self.importFromDir(dir_path, fqname) > File "d:\devel32\python-2.7.5\lib\site-packages\nose\importer.py", line > 94, in importFromDir > mod = load_module(part_fqname, fh, filename, desc) > File "numpy\distutils\__init__.py", line 9, in > from . import ccompiler > File "numpy\distutils\ccompiler.py", line 9, in > from distutils.ccompiler import * > File "numpy\distutils\__init__.py", line 9, in > from . 
import ccompiler > File "numpy\distutils\ccompiler.py", line 10, in > from distutils import ccompiler > ImportError: cannot import name ccompiler > ====================================================================== > FAIL: test_inf_ninf (test_umath.TestArctan2SpecialValues) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py", > line 417, in test_inf_ninf > assert_almost_equal(ncu.arctan2( np.inf, -np.inf), 0.75 * np.pi) > File "numpy\testing\utils.py", line 462, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: nan > DESIRED: 2.356194490192345 > ====================================================================== > FAIL: test_inf_pinf (test_umath.TestArctan2SpecialValues) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py", > line 422, in test_inf_pinf > assert_almost_equal(ncu.arctan2( np.inf, np.inf), 0.25 * np.pi) > File "numpy\testing\utils.py", line 462, in assert_almost_equal > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal to 7 decimals > ACTUAL: nan > DESIRED: 0.7853981633974483 > ====================================================================== > FAIL: test_nan_outputs2 (test_umath.TestHypotSpecialValues) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py", > line 337, in test_nan_outputs2 > assert_hypot_isinf(np.nan, np.inf) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath.py", > line 328, in assert_hypot_isinf > "hypot(%s, %s) is %s, not inf" % (x, y, ncu.hypot(x, y))) > File "numpy\testing\utils.py", 
line 44, in assert_ > raise AssertionError(msg) > AssertionError: hypot(nan, inf) is nan, not inf > ====================================================================== > FAIL: test_umath_complex.TestCabs.test_cabs_inf_nan(, > inf, nan, inf) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197, > in runTest > self.test(*self.arg) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py", > line 523, in check_real_value > assert_equal(f(z1), x) > File "numpy\testing\utils.py", line 260, in assert_equal > return assert_array_equal(actual, desired, err_msg, verbose) > File "numpy\testing\utils.py", line 718, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File "numpy\testing\utils.py", line 607, in assert_array_compare > chk_same_position(x_isnan, y_isnan, hasval='nan') > File "numpy\testing\utils.py", line 587, in chk_same_position > raise AssertionError(msg) > AssertionError: > Arrays are not equal > x and y nan location mismatch: > x: array([ nan]) > y: array(inf) > ====================================================================== > FAIL: test_umath_complex.TestCabs.test_cabs_inf_nan(, > -inf, nan, inf) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197, > in runTest > self.test(*self.arg) > File > "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py", > line 523, in check_real_value > assert_equal(f(z1), x) > File "numpy\testing\utils.py", line 260, in assert_equal > return assert_array_equal(actual, desired, err_msg, verbose) > File "numpy\testing\utils.py", line 718, in assert_array_equal > verbose=verbose, header='Arrays are not equal') > File "numpy\testing\utils.py", line 607, in assert_array_compare > 
> chk_same_position(x_isnan, y_isnan, hasval='nan')
> File "numpy\testing\utils.py", line 587, in chk_same_position
> raise AssertionError(msg)
> AssertionError:
> Arrays are not equal
> x and y nan location mismatch:
> x: array([ nan])
> y: array(inf)
> ======================================================================
> FAIL: test_umath_complex.TestCarg.test_special_values(,
> -inf, inf, 2.356194490192345, False)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197,
> in runTest
> self.test(*self.arg)
> File
> "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py",
> line 525, in check_real_value
> assert_almost_equal(f(z1), x)
> File "numpy\testing\utils.py", line 454, in assert_almost_equal
> return assert_array_almost_equal(actual, desired, decimal, err_msg)
> File "numpy\testing\utils.py", line 811, in assert_array_almost_equal
> header=('Arrays are not almost equal to %d decimals' % decimal))
> File "numpy\testing\utils.py", line 607, in assert_array_compare
> chk_same_position(x_isnan, y_isnan, hasval='nan')
> File "numpy\testing\utils.py", line 587, in chk_same_position
> raise AssertionError(msg)
> AssertionError:
> Arrays are not almost equal to 7 decimals
> x and y nan location mismatch:
> x: array([ nan])
> y: array(2.356194490192345)
> ======================================================================
> FAIL: test_umath_complex.TestCarg.test_special_values(,
> -inf, -inf, -2.356194490192345, False)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197,
> in runTest
> self.test(*self.arg)
> File
> "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py",
> line 525, in check_real_value
> assert_almost_equal(f(z1), x)
> File "numpy\testing\utils.py", line 454, in assert_almost_equal
> return assert_array_almost_equal(actual, desired, decimal, err_msg)
> File "numpy\testing\utils.py", line 811, in assert_array_almost_equal
> header=('Arrays are not almost equal to %d decimals' % decimal))
> File "numpy\testing\utils.py", line 607, in assert_array_compare
> chk_same_position(x_isnan, y_isnan, hasval='nan')
> File "numpy\testing\utils.py", line 587, in chk_same_position
> raise AssertionError(msg)
> AssertionError:
> Arrays are not almost equal to 7 decimals
> x and y nan location mismatch:
> x: array([ nan])
> y: array(-2.356194490192345)
> ======================================================================
> FAIL: test_umath_complex.TestCarg.test_special_values(, inf,
> inf, 0.7853981633974483, False)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197,
> in runTest
> self.test(*self.arg)
> File
> "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py",
> line 525, in check_real_value
> assert_almost_equal(f(z1), x)
> File "numpy\testing\utils.py", line 454, in assert_almost_equal
> return assert_array_almost_equal(actual, desired, decimal, err_msg)
> File "numpy\testing\utils.py", line 811, in assert_array_almost_equal
> header=('Arrays are not almost equal to %d decimals' % decimal))
> File "numpy\testing\utils.py", line 607, in assert_array_compare
> chk_same_position(x_isnan, y_isnan, hasval='nan')
> File "numpy\testing\utils.py", line 587, in chk_same_position
> raise AssertionError(msg)
> AssertionError:
> Arrays are not almost equal to 7 decimals
> x and y nan location mismatch:
> x: array([ nan])
> y: array(0.7853981633974483)
> ======================================================================
> FAIL: test_umath_complex.TestCarg.test_special_values(, inf,
> -inf, -0.7853981633974483, False)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "d:\devel32\python-2.7.5\lib\site-packages\nose\case.py", line 197,
> in runTest
> self.test(*self.arg)
> File
> "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\core\tests\test_umath_complex.py",
> line 525, in check_real_value
> assert_almost_equal(f(z1), x)
> File "numpy\testing\utils.py", line 454, in assert_almost_equal
> return assert_array_almost_equal(actual, desired, decimal, err_msg)
> File "numpy\testing\utils.py", line 811, in assert_array_almost_equal
> header=('Arrays are not almost equal to %d decimals' % decimal))
> File "numpy\testing\utils.py", line 607, in assert_array_compare
> chk_same_position(x_isnan, y_isnan, hasval='nan')
> File "numpy\testing\utils.py", line 587, in chk_same_position
> raise AssertionError(msg)
> AssertionError:
> Arrays are not almost equal to 7 decimals
> x and y nan location mismatch:
> x: array([ nan])
> y: array(-0.7853981633974483)
> ======================================================================
> FAIL: test_ufunc (test_function_base.TestVectorize)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File
> "D:\tmp\mingw_w64_i686\tmp\numpytest_i686\numpy\lib\tests\test_function_base.py",
> line 554, in test_ufunc
> assert_array_equal(r1, r2)
> File "numpy\testing\utils.py", line 718, in assert_array_equal
> verbose=verbose, header='Arrays are not equal')
> File "numpy\testing\utils.py", line 644, in assert_array_compare
> raise AssertionError(msg)
> AssertionError:
> Arrays are not equal
> (mismatch 40.0%)
> x: array([ 1.00000000e+00, 6.12323400e-17, -1.00000000e+00,
> -1.83697020e-16, 1.00000000e+00])
> y: array([ 1.00000000e+00, 6.12303177e-17, -1.00000000e+00,
> -1.83690953e-16, 1.00000000e+00])
> ----------------------------------------------------------------------
> Ran 4413 tests in 74.154s
> FAILED (KNOWNFAIL=9, SKIP=7, errors=1, failures=10)
> Running unit tests for numpy
> NumPy version 1.8.0rc2
> NumPy is installed in numpy
> Python version 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit
> (Intel)]
> nose version 1.3.0
>
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cmkleffner at gmail.com  Fri Nov 8 02:42:14 2013
From: cmkleffner at gmail.com (Carl Kleffner)
Date: Fri, 8 Nov 2013 08:42:14 +0100
Subject: [Numpy-discussion] mingw-w64 and openblas test
Message-ID: 

Hi list,

I created a repository at google code
https://code.google.com/p/mingw-w64-static with some downloads as well as
my last numpy setup.py log.

Regards

Carl
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Fri Nov 8 06:57:13 2013
From: cournape at gmail.com (David Cournapeau)
Date: Fri, 8 Nov 2013 11:57:13 +0000
Subject: [Numpy-discussion] mingw-w64 and openblas test
In-Reply-To: 
References: 
Message-ID: 

Hi Carl,

Thanks for that. I am a bit confused by the build log
https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing,
in particular the failures for lapack_lite and umath.

May we ask you to put the logs on gist.github.com ? Google docs is
rather painful to use for logs (no line numbers, etc...)

thanks,
David

On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner wrote:

> Hi list,
>
> I created a repository at google code
> https://code.google.com/p/mingw-w64-static with some downloads as well as
> my last numpy setup.py log.
>
> Regards
>
> Carl
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
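[Editor's aside, not from the thread: the TestCarg failures quoted above exercise
the C99 Annex G special values of carg(), which NumPy exposes via np.angle
(internally arctan2). A minimal sketch of what those tests expect from a
correctly built NumPy — illustrative only, the case list below is an assumption
based on the expected values printed in the failures:]

```python
import numpy as np

# C99 Annex G special values for carg(z) = atan2(imag(z), real(z)).
# These are exactly the (-inf, inf) -> 3*pi/4 style cases failing above.
cases = [
    (complex(-np.inf, np.inf), 3 * np.pi / 4),
    (complex(-np.inf, -np.inf), -3 * np.pi / 4),
    (complex(np.inf, np.inf), np.pi / 4),
    (complex(np.inf, -np.inf), -np.pi / 4),
]
for z, expected in cases:
    # np.angle reduces to arctan2, which IEEE 754 defines for infinities
    assert np.isclose(np.angle(z), expected)
```

On a broken build these asserts would trip with the NaN results shown in the
tracebacks above.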
URL: 

From charlesr.harris at gmail.com  Fri Nov 8 14:22:35 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 8 Nov 2013 12:22:35 -0700
Subject: [Numpy-discussion] Numpy 1.9 release date
Message-ID: 

Hi All,

The question has come up as to how much effort we should spend backporting
fixes to 1.8.x. An alternative would be to tag 1.9.0 early next year,
aiming for a release around April. I think there is almost enough in
1.9-devel to justify a release. There is Sebastian's index work, Julian's
continuing work on speedups, the removal of oldnumeric and numarray
support, and various other deprecations and cleanups that add up to a
significant number of changes. I've tended to think of 1.9 as a cleanup
and consolidation release and think that the main thing missing at this
point is fixing the datetime problems.

Thoughts?

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at gmail.com  Sun Nov 10 09:28:53 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 10 Nov 2013 15:28:53 +0100
Subject: [Numpy-discussion] Numpy 1.9 release date
In-Reply-To: 
References: 
Message-ID: 

On Fri, Nov 8, 2013 at 8:22 PM, Charles R Harris wrote:

> Hi All,
>
> The question has come up as to how much effort we should spend backporting
> fixes to 1.8.x. An alternative would be to tag 1.9.0 early next year,
> aiming for a release around April. I think there is almost enough in
> 1.9-devel to justify a release. There is Sebastian's index work, Julian's
> continuing work on speedups, the removal of oldnumeric and numarray
> support, and various other deprecations and cleanups that add up to a
> significant number of changes. I've tended to think of 1.9 as a cleanup and
> consolidation release
>

Makes sense.

> and think that the main thing missing at this point is fixing the datetime
> problems.
>

Is anyone planning to work on this? If yes, you need a rough estimate of
when this is ready to go. If no, it needs to be decided if this is critical
for the release. From the previous discussion I tend to think so. If it's
critical but no one does it, why plan a release.......

A suggestion for backporting strategy: do not backport things that have
just been merged. Because (a) doing it PR by PR gives a lot of overhead,
and (b) if the commit causes issues that have to be fixed or reverted, you
have to fix things twice. Instead, just keep a list of backport candidates
in a github issue, then do it all at once when it's clear that a bugfix
release is needed.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
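[Editor's aside, not from the thread: the "datetime problems" flagged above are
NumPy's local-timezone handling of datetime64, detailed in the report that
follows. A minimal sketch of the round-trip involved — hedged: on the 1.7/1.8
series under discussion, parsing applied the machine's UTC offset; on NumPy >=
1.11, where datetime64 became timezone-naive, the round-trip below is exact:]

```python
import numpy as np

# Hour-resolution timestamps spanning a European DST transition.
stamps = ['2014-03-30T00', '2014-03-30T01', '2014-03-30T02', '2014-03-30T03']
parsed = np.array(stamps, dtype='M8[h]')

# Pre-1.11 NumPy applied the local UTC offset when parsing, so a DST change
# produced the duplicated/missing hours shown in the report below.  With
# timezone-naive datetime64 the values round-trip exactly, no duplicates:
roundtrip = [str(d) for d in parsed]
assert roundtrip == stamps
assert len(set(roundtrip)) == len(stamps)
```

A DST fold under the old behaviour would show up as a failed duplicate check,
which is how the incorrect analysis below was caught.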
URL: 

From dave.hirschfeld at gmail.com  Sun Nov 10 13:15:09 2013
From: dave.hirschfeld at gmail.com (Dave Hirschfeld)
Date: Sun, 10 Nov 2013 18:15:09 +0000 (UTC)
Subject: [Numpy-discussion] Numpy 1.9 release date
References: 
Message-ID: 

Ralf Gommers <ralf.gommers at gmail.com> writes:
>
> On Fri, Nov 8, 2013 at 8:22 PM, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
>
> > and think that the main thing missing at this point is fixing the
> > datetime problems.
>
> Is anyone planning to work on this? If yes, you need a rough estimate of
> when this is ready to go. If no, it needs to be decided if this is
> critical for the release. From the previous discussion I tend to think
> so. If it's critical but no one does it, why plan a release.......
>
> Ralf
>

Just want to pipe up here as to the criticality of the datetime bug. Below
is a minimal example from some data analysis code I found in our company
that was giving incorrect results (fortunately it was caught by thorough
testing):

In [110]: records = [
     ...:     ('2014-03-29 23:00:00', '2014-03-29 23:00:00'),
     ...:     ('2014-03-30 00:00:00', '2014-03-30 00:00:00'),
     ...:     ('2014-03-30 01:00:00', '2014-03-30 01:00:00'),
     ...:     ('2014-03-30 02:00:00', '2014-03-30 02:00:00'),
     ...:     ('2014-03-30 03:00:00', '2014-03-30 03:00:00'),
     ...:     ('2014-10-25 23:00:00', '2014-10-25 23:00:00'),
     ...:     ('2014-10-26 00:00:00', '2014-10-26 00:00:00'),
     ...:     ('2014-10-26 01:00:00', '2014-10-26 01:00:00'),
     ...:     ('2014-10-26 02:00:00', '2014-10-26 02:00:00'),
     ...:     ('2014-10-26 03:00:00', '2014-10-26 03:00:00')]
     ...:
     ...: data = np.asarray(records, dtype=[('date obj', 'M8[h]'),
     ...:                                   ('str repr', object)])
     ...: df = pd.DataFrame(data)

In [111]: df
Out[111]:
              date obj             str repr
0  2014-03-29 23:00:00  2014-03-29 23:00:00
1  2014-03-30 00:00:00  2014-03-30 00:00:00
2  2014-03-30 00:00:00  2014-03-30 01:00:00
3  2014-03-30 01:00:00  2014-03-30 02:00:00
4  2014-03-30 02:00:00  2014-03-30 03:00:00
5  2014-10-25 22:00:00  2014-10-25 23:00:00
6  2014-10-25 23:00:00  2014-10-26 00:00:00
7  2014-10-26 01:00:00  2014-10-26 01:00:00
8  2014-10-26 02:00:00  2014-10-26 02:00:00
9  2014-10-26 03:00:00  2014-10-26 03:00:00

Note the local timezone adjusted `date obj`, including the duplicate value
at the clock-change in March and the missing value at the clock-change in
October. As you can imagine this could very easily lead to incorrect
analysis.

If running this exact same code in the (Eastern) US you'd see the
following results:

              date obj             str repr
0  2014-03-30 03:00:00  2014-03-29 23:00:00
1  2014-03-30 04:00:00  2014-03-30 00:00:00
2  2014-03-30 05:00:00  2014-03-30 01:00:00
3  2014-03-30 06:00:00  2014-03-30 02:00:00
4  2014-03-30 07:00:00  2014-03-30 03:00:00
5  2014-10-26 03:00:00  2014-10-25 23:00:00
6  2014-10-26 04:00:00  2014-10-26 00:00:00
7  2014-10-26 05:00:00  2014-10-26 01:00:00
8  2014-10-26 06:00:00  2014-10-26 02:00:00
9  2014-10-26 07:00:00  2014-10-26 03:00:00

Unfortunately I don't have the skills to meaningfully contribute in this
area, but it is a very real problem for users of numpy, many of whom are
not active on the mailing list.

HTH,
Dave


From cgohlke at uci.edu  Sun Nov 10 15:25:48 2013
From: cgohlke at uci.edu (Christoph Gohlke)
Date: Sun, 10 Nov 2013 12:25:48 -0800
Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release
In-Reply-To: 
References: <52767CF1.60003@googlemail.com> <5279D6A0.6050904@cora.nwra.com>
Message-ID: <527FEBCC.1090202@uci.edu>

On 11/10/2013 8:58 AM, Ralf Gommers wrote:
>
>
> On Wed, Nov 6, 2013 at 6:41 AM, Orion Poplawski
> wrote:
>
> On 11/3/2013 9:42 AM, Julian Taylor wrote:
> > Hi all,
> >
> > I'm happy to announce the release candidate of Numpy 1.7.2.
> > This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3.
> >
> > More than 37 issues were fixed, the most important issues are listed in
> > the release notes:
> > https://github.com/numpy/numpy/blob/v1.7.2rc1/doc/release/1.7.2-notes.rst
> >
> > It is supposed to not break any existing code, so please test the
> > releases and report any issues you find.
>
> Builds and tests okay on Fedora 20.
> > Tested scipy and statsmodels master with Python 2.7 on Ubuntu 13.10, all OK.
> >
> Cheers,
> Ralf
>

All OK on Windows with msvc9 & 10, MKL 11.1, Python 2.6-3.3, 32 & 64 bit.

Christoph

From ckkart at hoc.net  Sun Nov 10 19:06:00 2013
From: ckkart at hoc.net (Christian K.)
Date: Sun, 10 Nov 2013 21:06:00 -0300
Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release
In-Reply-To: <52767CF1.60003@googlemail.com>
References: <52767CF1.60003@googlemail.com>
Message-ID: 

Am 03.11.13 13:42, schrieb Julian Taylor:
> Hi all,
>
> I'm happy to announce the release candidate of Numpy 1.7.2.
> This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3.
>
> More than 37 issues were fixed, the most important issues are listed in
> the release notes:
> https://github.com/numpy/numpy/blob/v1.7.2rc1/doc/release/1.7.2-notes.rst
>
> It is supposed to not break any existing code, so please test the
> releases and report any issues you find.
>
> Source tarballs and release notes can be found at
> https://sourceforge.net/projects/numpy/files/NumPy/1.7.2rc1/.
> Currently only Windows installers are available. OS X installer will
> follow soon.

On OSX compilation succeeds (with some errors though) but test() fails.

Attached is the full output.

Christian

-------------- next part --------------
Python 2.7.5 |Anaconda 1.6.1 (x86_64)| (default, Jun 28 2013, 22:20:13)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'1.7.2rc1'
>>> numpy.test()
Running unit tests for numpy
NumPy version 1.7.2rc1
NumPy is installed in /Users/ck/anaconda/lib/python2.7/site-packages/numpy
Python version 2.7.5 |Anaconda 1.6.1 (x86_64)| (default, Jun 28 2013,
22:20:13) [GCC 4.0.1 (Apple Inc. build 5493)]
nose version 1.3.0
[several thousand nose progress characters elided: '.' with interspersed
F (failure), E (error), S (skip) and K (known-failure) markers; the stream
was interrupted once by:]
RuntimeError: module compiled against API version 9 but this version of numpy is 7
[progress output continues]
======================================================================
ERROR: Failure: ImportError (cannot import name nanmean)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/loader.py", line 413, in loadTestsFromName
    addr.filename, addr.module)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/lib/tests/test_nanfunctions.py", line 10, in
    from numpy.lib import (
ImportError: cannot import name nanmean

======================================================================
ERROR: Failure: ImportError (numpy.core.multiarray failed to import)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/loader.py", line 413, in loadTestsFromName
    addr.filename, addr.module)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/linalg/tests/test_gufuncs_linalg.py", line 61, in
    import numpy.linalg._gufuncs_linalg as gula
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/linalg/_gufuncs_linalg.py", line 124, in
    from . import _umath_linalg as _impl
ImportError: numpy.core.multiarray failed to import

======================================================================
FAIL: test_deprecations.TestArrayToIndexDeprecation.test_array_to_index_deprecation
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 284, in test_array_to_index_deprecation
    self.assert_deprecated(operator.index, args=(np.array([1]),))
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 90, in assert_deprecated
    % (len(self.log), num))
AssertionError: 0 warnings found but 1 expected

======================================================================
FAIL: test_deprecations.TestBooleanArgumentDeprecation.test_bool_as_int_argument
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 257, in test_bool_as_int_argument
    self.assert_deprecated(np.reshape, args=(a, (True, -1)))
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 90, in assert_deprecated
    % (len(self.log), num))
AssertionError: 0 warnings found but 1 expected

======================================================================
FAIL: test_deprecations.TestFloatNonIntegerArgumentDeprecation.test_indexing
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 148, in test_indexing
    assert_deprecated(lambda: a[0.0])
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 146, in assert_deprecated
    self.assert_deprecated(*args, exceptions=(IndexError,), **kwargs)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 90, in assert_deprecated
    % (len(self.log), num))
AssertionError: 0 warnings found but 1 expected

======================================================================
FAIL: test_deprecations.TestFloatNonIntegerArgumentDeprecation.test_non_integer_argument_deprecations
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 238, in test_non_integer_argument_deprecations
    self.assert_deprecated(np.reshape, args=(a, (1., 1., -1)), num=2)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 90, in assert_deprecated
    % (len(self.log), num))
AssertionError: 0 warnings found but 2 expected

======================================================================
FAIL: test_deprecations.TestFloatNonIntegerArgumentDeprecation.test_slicing
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 193, in test_slicing
    assert_deprecated(lambda: a[0.0:])
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 190, in assert_deprecated
    self.assert_deprecated(*args, exceptions=(IndexError,), **kwargs)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_deprecations.py", line 90, in assert_deprecated
    % (len(self.log), num))
AssertionError: 0 warnings found but 1 expected

======================================================================
FAIL: test_str (test_scalarprint.TestRealScalars)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/core/tests/test_scalarprint.py", line 26, in test_str
    assert_(res == val)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/testing/utils.py", line 35, in assert_
    raise AssertionError(msg)
AssertionError

======================================================================
FAIL: Check mode='full' FutureWarning.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/ck/anaconda/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/linalg/tests/test_deprecations.py", line 17, in test_qr_mode_full_future_warning
    assert_warns(DeprecationWarning, np.linalg.qr, a, mode='full')
  File "/Users/ck/anaconda/lib/python2.7/site-packages/numpy/testing/utils.py", line 1481, in assert_warns
    % func.__name__)
AssertionError: No warning raised when calling qr

----------------------------------------------------------------------
Ran 4846 tests in 31.028s

FAILED (KNOWNFAIL=5, SKIP=5, errors=2, failures=7)

From ckkart at hoc.net  Sun Nov 10 19:12:39 2013
From: ckkart at hoc.net (Christian K.)
Date: Sun, 10 Nov 2013 21:12:39 -0300
Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release
In-Reply-To: 
References: <52767CF1.60003@googlemail.com>
Message-ID: 

Am 10.11.13 21:06, schrieb Christian K.:
> Am 03.11.13 13:42, schrieb Julian Taylor:
>> Hi all,
>>
>> I'm happy to announce the release candidate of Numpy 1.7.2.
>> This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3.
>>
>> More than 37 issues were fixed, the most important issues are listed in
>> the release notes:
>> https://github.com/numpy/numpy/blob/v1.7.2rc1/doc/release/1.7.2-notes.rst
>>
>> It is supposed to not break any existing code, so please test the
>> releases and report any issues you find.
>>
>> Source tarballs and release notes can be found at
>> https://sourceforge.net/projects/numpy/files/NumPy/1.7.2rc1/.
>> Currently only Windows installers are available. OS X installer will
>> follow soon.
>
> On OSX compilation succeeds (with some errors though) but test() fails.
>
> Attached is the full output.

I just realised that setup.py picked up gcc 4.0. Anyway, after switching
to 4.2 the tests still fail.

Christian

From stefan at sun.ac.za  Sun Nov 10 20:23:44 2013
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 11 Nov 2013 03:23:44 +0200
Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release
In-Reply-To: 
References: <52767CF1.60003@googlemail.com>
Message-ID: <20131111012344.GD18796@shinobi>

Hi Christian

On Sun, 10 Nov 2013 21:06:00 -0300, Christian K. wrote:
> On OSX compilation succeeds (with some errors though) but test() fails.

Do you have the build log available as well?

Thanks
Stéfan

From charlesr.harris at gmail.com  Sun Nov 10 21:27:10 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 10 Nov 2013 19:27:10 -0700
Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release
In-Reply-To: 
References: <52767CF1.60003@googlemail.com>
Message-ID: 

On Sun, Nov 10, 2013 at 5:12 PM, Christian K. wrote:

> Am 10.11.13 21:06, schrieb Christian K.:
> > Am 03.11.13 13:42, schrieb Julian Taylor:
> >> Hi all,
> >>
> >> I'm happy to announce the release candidate of Numpy 1.7.2.
> >> This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3.
> >>
> >> More than 37 issues were fixed, the most important issues are listed in
> >> the release notes:
> >> https://github.com/numpy/numpy/blob/v1.7.2rc1/doc/release/1.7.2-notes.rst
> >>
> >> It is supposed to not break any existing code, so please test the
> >> releases and report any issues you find.
> >>
> >> Source tarballs and release notes can be found at
> >> https://sourceforge.net/projects/numpy/files/NumPy/1.7.2rc1/.
> >> Currently only Windows installers are available. OS X installer will
> >> follow soon.
> >
> > On OSX compilation succeeds (with some errors though) but test() fails.
> >
> > Attached is the full output.
>
> I just realised that setup.py picked up gcc 4.0. Anyway, after switching
> to 4.2 the tests still fail.
>

Looks like your tests might be left over from 1.8. Try cleaning out the
install and build directories and doing a clean install.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za  Sun Nov 10 22:27:46 2013
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 11 Nov 2013 05:27:46 +0200
Subject: [Numpy-discussion] Numpy 1.9 release date
In-Reply-To: 
References: 
Message-ID: 

On 9 Nov 2013 03:22, "Charles R Harris" wrote:
>
> that the main thing missing at this point is fixing the datetime problems.

What needs to be done, and what is the plan forward? Is there perhaps an
issue one can follow?

Thanks
Stéfan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cmkleffner at gmail.com  Mon Nov 11 04:51:24 2013
From: cmkleffner at gmail.com (Carl Kleffner)
Date: Mon, 11 Nov 2013 10:51:24 +0100
Subject: [Numpy-discussion] mingw-w64 and openblas test
In-Reply-To: 
References: 
Message-ID: 

Hi David,

I made a new build with the numpy-1.8.0 code base.
binaries and logs are included in the following archive:

2013-11-11_i686-numpy-1.8.0-py27-openblastest.7z
https://drive.google.com/file/d/0B4DmELLTwYmlajBzZFpXcVYycEE/edit?usp=sharing

Regards

Carl

2013/11/8 David Cournapeau 

> Hi Carl,
>
> Thanks for that. I am a bit confused by the build log
> https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing,
> in particular the failures for lapack_lite and umath.
>
> May we ask you to put the logs on gist.github.com ? google docs is
> rather painful to use for logs (no line number, etc...)
>
> thanks,
> David
>
> On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner wrote:
>
>> Hi list,
>>
>> I created a repository at google code
>> https://code.google.com/p/mingw-w64-static with some downloads as well
>> as my last numpy setup.py log.
>>
>> Regards
>>
>> Carl
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Mon Nov 11 05:00:29 2013
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 11 Nov 2013 10:00:29 +0000
Subject: [Numpy-discussion] mingw-w64 and openblas test
In-Reply-To: 
References: 
Message-ID: 

Carl,

It looks like the google drive contains the binary and not the actual log
? For the log, it is more convenient to put it on gist.github.com,

thanks for the work,
David

On Mon, Nov 11, 2013 at 9:51 AM, Carl Kleffner wrote:

> Hi David,
>
> I made a new build with the numpy-1.8.0 code base. binaries and logs are
> included in the following archive:
>
> 2013-11-11_i686-numpy-1.8.0-py27-openblastest.7z
> https://drive.google.com/file/d/0B4DmELLTwYmlajBzZFpXcVYycEE/edit?usp=sharing
>
> Regards
>
> Carl
>
> 2013/11/8 David Cournapeau 
>
>> Hi Carl,
>>
>> Thanks for that. I am a bit confused by the build log
>> https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing,
>> in particular the failures for lapack_lite and umath.
>>
>> May we ask you to put the logs on gist.github.com ? google docs is
>> rather painful to use for logs (no line number, etc...)
>>
>> thanks,
>> David
>>
>> On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner wrote:
>>
>>> Hi list,
>>>
>>> I created a repository at google code
>>> https://code.google.com/p/mingw-w64-static with some downloads as well
>>> as my last numpy setup.py log.
>>>
>>> Regards
>>>
>>> Carl
>>>
>>> _______________________________________________
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>>
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cmkleffner at gmail.com  Mon Nov 11 05:28:44 2013
From: cmkleffner at gmail.com (Carl Kleffner)
Date: Mon, 11 Nov 2013 11:28:44 +0100
Subject: [Numpy-discussion] mingw-w64 and openblas test
In-Reply-To: 
References: 
Message-ID: 

done

all logs in https://gist.github.com/anonymous/7411039

Regards Carl

2013/11/11 David Cournapeau 

> Carl,
>
> It looks like the google drive contains the binary and not the actual log
> ? For the log, it is more convenient to put it on gist.github.com,
>
> thanks for the work,
> David
>
> On Mon, Nov 11, 2013 at 9:51 AM, Carl Kleffner wrote:
>
>> Hi David,
>>
>> I made a new build with the numpy-1.8.0 code base. binaries and logs are
>> included in the following archive:
>>
>> 2013-11-11_i686-numpy-1.8.0-py27-openblastest.7z
>> https://drive.google.com/file/d/0B4DmELLTwYmlajBzZFpXcVYycEE/edit?usp=sharing
>>
>> Regards
>>
>> Carl
>>
>> 2013/11/8 David Cournapeau 
>>
>>> Hi Carl,
>>>
>>> Thanks for that. I am a bit confused by the build log
>>> https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing,
>>> in particular the failures for lapack_lite and umath.
>>>
>>> May we ask you to put the logs on gist.github.com ? google docs is
>>> rather painful to use for logs (no line number, etc...)
>>>
>>> thanks,
>>> David
>>>
>>> On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner wrote:
>>>
>>>> Hi list,
>>>>
>>>> I created a repository at google code
>>>> https://code.google.com/p/mingw-w64-static with some downloads as well
>>>> as my last numpy setup.py log.
>>>>
>>>> Regards
>>>>
>>>> Carl
>>>>
>>>> _______________________________________________
>>>> NumPy-Discussion mailing list
>>>> NumPy-Discussion at scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>>
>>>>
>>>
>>> _______________________________________________
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>>
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Mon Nov 11 07:12:40 2013
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 11 Nov 2013 12:12:40 +0000
Subject: [Numpy-discussion] mingw-w64 and openblas test
In-Reply-To: 
References: 
Message-ID: 

On Mon, Nov 11, 2013 at 10:28 AM, Carl Kleffner wrote:

> done
>
> all logs in https://gist.github.com/anonymous/7411039
>

Thanks. Looking at the log, it does not look like you are statically
linking the mingw runtimes, though. I would expect numpy not to work if
you don't have mingw dlls in your %PATH%, right ?

David

> Regards Carl
>
> 2013/11/11 David Cournapeau 
>
>> Carl,
>>
>> It looks like the google drive contains the binary and not the actual log
>> ? For the log, it is more convenient to put it on gist.github.com,
>>
>> thanks for the work,
>> David
>>
>> On Mon, Nov 11, 2013 at 9:51 AM, Carl Kleffner wrote:
>>
>>> Hi David,
>>>
>>> I made a new build with the numpy-1.8.0 code base. binaries and logs are
>>> included in the following archive:
>>>
>>> 2013-11-11_i686-numpy-1.8.0-py27-openblastest.7z
>>> https://drive.google.com/file/d/0B4DmELLTwYmlajBzZFpXcVYycEE/edit?usp=sharing
>>>
>>> Regards
>>>
>>> Carl
>>>
>>> 2013/11/8 David Cournapeau 
>>>
>>>> Hi Carl,
>>>>
>>>> Thanks for that. I am a bit confused by the build log
>>>> https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing,
>>>> in particular the failures for lapack_lite and umath.
>>>>
>>>> May we ask you to put the logs on gist.github.com ? google docs is
>>>> rather painful to use for logs (no line number, etc...)
>>>>
>>>> thanks,
>>>> David
>>>>
>>>> On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner wrote:
>>>>
>>>>> Hi list,
>>>>>
>>>>> I created a repository at google code
>>>>> https://code.google.com/p/mingw-w64-static with some downloads as
>>>>> well as my last numpy setup.py log.
>>>>> >>>>> Regards >>>>> >>>>> Carl >>>>> >>>>> _______________________________________________ >>>>> NumPy-Discussion mailing list >>>>> NumPy-Discussion at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmkleffner at gmail.com Mon Nov 11 08:32:39 2013 From: cmkleffner at gmail.com (Carl Kleffner) Date: Mon, 11 Nov 2013 14:32:39 +0100 Subject: [Numpy-discussion] mingw-w64 and openblas test In-Reply-To: References: Message-ID: Hi David, I used my customized mingw-w64 toolkit mentioned in this thread to circumvent several problems with the mixed compiler environment. It is a fully statically linked gcc build. Hence the compiled executables and DLLs never depend on mingw DLLs, even without the -static -static-libgcc ... compiler options. The CRT runtime is msvcr90.dll, as used by python-2.7. The manifest is linked into the executables by default. I have to write some documentation about that, but this may take some time due to my workload.
You can test it on a windows cmd prompt with:

objdump -p numpy\core\_dotblas.pyd | findstr DLL
        DLL
 vma:            Hint    Time      Forward  DLL       First
        DLL Name: ADVAPI32.dll
        DLL Name: KERNEL32.dll
        DLL Name: msvcr90.dll
        DLL Name: python27.dll

Carl 2013/11/11 David Cournapeau > > > > On Mon, Nov 11, 2013 at 10:28 AM, Carl Kleffner wrote: > >> done >> >> all logs in https://gist.github.com/anonymous/7411039 >> > > Thanks. Looking at the log, it does not look like you are statically > linking the mingw runtimes, though. I would expect numpy not to work if you > don't have mingw dlls in your %PATH%, right ? > > David > >> >> >> Regards Carl >> >> >> 2013/11/11 David Cournapeau >> >>> Carl, >>> >>> It looks like the google drive contains the binary and not the actual >>> log ? For the log, it is more convenient to put it on gist.github.com, >>> >>> thanks for the work, >>> David >>> >>> >>> On Mon, Nov 11, 2013 at 9:51 AM, Carl Kleffner wrote: >>> >>>> Hi David, >>>> >>>> I made a new build with the numpy-1.8.0 code base. binaries and logs >>>> are included in the following archive: >>>> >>>> 2013-11-11_i686-numpy-1.8.0-py27-openblastest.7z >>>> https://drive.google.com/file/d/0B4DmELLTwYmlajBzZFpXcVYycEE/edit?usp=sharing >>>> >>>> Regards >>>> >>>> Carl >>>> >>>> >>>> >>>> >>>> 2013/11/8 David Cournapeau >>>> >>>>> Hi Carl, >>>>> >>>>> Thanks for that. I am a bit confused by the build log >>>>> https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing, >>>>> in particular the failures for lapack_lite and umath. >>>>> >>>>> May we ask you to put the logs on gists.github.com ? google docs is >>>>> rather painful to use for logs (no line number, etc...) >>>>> >>>>> thanks, >>>>> David >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner wrote: >>>>> >>>>>> Hi list, >>>>>> >>>>>> I created a repository at google code >>>>>> https://code.google.com/p/mingw-w64-static with some downloads as >>>>>> well as my last numpy setp.py log.
>>>>>> >>>>>> Regards >>>>>> >>>>>> Carl >>>>>> >>>>>> _______________________________________________ >>>>>> NumPy-Discussion mailing list >>>>>> NumPy-Discussion at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> NumPy-Discussion mailing list >>>>> NumPy-Discussion at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From conny.kuehne at gmail.com Mon Nov 11 12:34:19 2013 From: conny.kuehne at gmail.com (=?iso-8859-1?Q?Conny_K=FChne?=) Date: Mon, 11 Nov 2013 18:34:19 +0100 Subject: [Numpy-discussion] Error in docstring np.random.pareto Message-ID: <50EE3EEE-9C2E-4ED7-AF3F-3EEA527F066D@gmail.com> I think the docstring of np.random.pareto (Version 1.8.0) is erroneous. In a nutshell np.random.pareto draws samples from a Lomax distribution with shape a and location m'=1. To convert those samples to a classical Pareto distribution with shape a and location m you have to add 1 and multiply by m, instead of adding m as stated in the docstring. 
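A quick numerical sanity check of the relationship described above (an illustrative snippet added in editing, not part of the original message; seed and sample size are arbitrary): with the add-1-then-multiply transformation, samples are bounded below by m and, for a > 1, have mean a*m/(a-1), whereas the docstring's add-m recipe does not produce a classical Pareto.

```python
import numpy as np

np.random.seed(42)
a, m, n = 3.0, 10.0, 100_000   # shape, mode, sample size

# np.random.pareto draws Lomax(a) samples supported on [0, inf);
# shift by 1 and scale by m to get a classical Pareto with x_min = m.
s = (np.random.pareto(a, n) + 1) * m
print(s.min())    # always >= the mode m = 10
print(s.mean())   # close to a*m/(a-1) = 15 for a=3, m=10

# The docstring's "add the location parameter m" recipe instead gives
# a shifted Lomax with mean m + 1/(a-1) = 10.5 -- not a Pareto(a, m).
wrong = np.random.pareto(a, n) + m
print(wrong.mean())
```

The two recipes agree only at the support's lower edge; the bulk of the distributions differ, which is easy to see by comparing the sample means.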
More specifically, the docstring says: "The Lomax or Pareto II distribution is a shifted Pareto distribution. The classical Pareto distribution can be obtained from the Lomax distribution by adding the location parameter m, see below." Instead, it should read "[..] by adding 1 and multiplying by m, see below." The example at the bottom therefore should read:

>>> a, m = 3., 1.  # shape and mode
>>> s = (np.random.pareto(a, 1000) + 1) * m

Maybe an example with m=10 makes it clearer:

>>> a, m = 3., 10.  # shape and mode
>>> s = (np.random.pareto(a, 1000) + 1) * m

Additionally, calling m the location parameter could be misleading. For simple Pareto (Type I) distributions it is usually referred to as the x_min or mode parameter of the distribution. When discussing generalized Pareto distributions m is usually called the scale parameter (a constant factor) while mu is the location (an additive term) [1]. I assume the misleading naming could have caused some confusion and led to the errors described above. Last but not least I think it might also cause confusion to call a function 'pareto' while its meaning is 'shifted by -1 pareto'. :) [1] http://en.wikipedia.org/wiki/Generalized_Pareto_distribution -- Conny Kuehne From Gerard.Brunick at constellation.com Mon Nov 11 14:52:20 2013 From: Gerard.Brunick at constellation.com (Brunick, Gerard:(Constellation)) Date: Mon, 11 Nov 2013 19:52:20 +0000 Subject: [Numpy-discussion] Can I use numpy-MKL for commercial purposes without an Intel MKL license? Message-ID: <71E0ECF7BE49E84CBD47914C9E19FFED51D828@exchm-omf-22.exelonds.com> I am looking for a 64-bit, Windows version of Numpy and I have found a couple of pre-built versions that are linked against the Intel MKL library. I would like to use this library for in-house commercial use. In particular, I do not intend to redistribute my code or link against the Intel MKL libraries. Does this require an MKL license?
One of the FAQ questions from http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#redistribute reads: "Can I redistribute the Intel Math Kernel Library with my application? Yes. When you purchase Intel MKL, you receive rights to redistribute computational portions of Intel MKL with your application. The evaluation versions of Intel MKL do not include redistribution rights. The list of files that can be redistributed is provided in redist.txt included in the Intel MKL distribution with product license." This suggests that I can use a binary that has been compiled by a license holder without a license, but numpy is itself a library, in some sense, so it is not entirely clear to me. I suppose that I would also need to confirm that the individual who built the version of numpy that I am using holds a MKL product license? Thanks in advance for any (non-legally binding) insight that you can share, Gerard This e-mail and any attachments are confidential, may contain legal, professional or other privileged information, and are intended solely for the addressee. If you are not the intended recipient, do not use the information in this e-mail in any way, delete this e-mail and notify the sender. -EXCIP -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Nov 11 18:39:08 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 11 Nov 2013 16:39:08 -0700 Subject: [Numpy-discussion] Can I use numpy-MKL for commercial purposes without an Intel MKL license? 
In-Reply-To: <71E0ECF7BE49E84CBD47914C9E19FFED51D828@exchm-omf-22.exelonds.com> References: <71E0ECF7BE49E84CBD47914C9E19FFED51D828@exchm-omf-22.exelonds.com> Message-ID: On Mon, Nov 11, 2013 at 12:52 PM, Brunick, Gerard:(Constellation) < Gerard.Brunick at constellation.com> wrote: > I am looking for a 64-bit, Windows version of Numpy and I have found a > couple pre-built versions that are linked against the Intel MKL library. I > would like to use this library for in-house commercial use. In particular, > I do not intend to redistribute my code or link against the Intel MKL > libraries. Does this require an MKL license? > > > > One of the FAQ questions from > > > http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#redistribute > > reads: > > > > ?Can I redistribute the Intel Math Kernel Library with my application? > > Yes. When you purchase Intel MKL, you receive rights to redistribute > computational portions of Intel MKL with your application. The evaluation > versions of Intel MKL do not include redistribution rights. The list of > files that can be redistributed is provided in redist.txt included in the > Intel MKL distribution with product license.? > > > > This suggests that I can use a binary that has been compiled by a license > holder without a license, but numpy is itself a library, in some sense, so > it is not entirely clear to me. I suppose that I would also need to > confirm that the individual who built the version of numpy that I am using > holds a MKL product license? > > > > Thanks in advance for any (non-legally binding) insight that you can share, > > Gerard > > > > This e-mail and any attachments are confidential, may contain legal, > professional or other privileged information, and are intended solely for > the > addressee. If you are not the intended recipient, do not use the > information > in this e-mail in any way, delete this e-mail and notify the sender. 
-EXCIP > > If you get an answer to this I'd like to know too, but I suspect you may need to contact Intel to get clarification. I could never quite figure it out from their website. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Mon Nov 11 19:23:06 2013 From: cournape at gmail.com (David Cournapeau) Date: Tue, 12 Nov 2013 00:23:06 +0000 Subject: [Numpy-discussion] mingw-w64 and openblas test In-Reply-To: References: Message-ID: On Mon, Nov 11, 2013 at 1:32 PM, Carl Kleffner wrote: > Hi David, > > i used my customized mingw-w64 toolkit mentioned in this thread to > circumvent several problems with the mixed compiler enviroment. It is a > fully statically gcc build. Hence the compiled executables and dlls never > depend on mingw dlls even without usage of -static -static-libgcc ... > compiler options. The crt runtime is msvcr90.dll as used by python-2.7. The > manifest is linked to the executables per default. I have to write some > documentation about that, but this may take some time due to my workload. > Hm, interesting, I did not even know this was possible ! Does that work for scipy as well ? I am a bit worried about using custom toolchains, OTOH, that's the only real solution we seem to have ATM. David > > You can test it on a windows cmd prompt with: objdump -p > numpy\core\_dotblas.pyd | findstr DLL > DLL > vma: Hint Time Forward DLL First > DLL Name: ADVAPI32.dll > DLL Name: KERNEL32.dll > DLL Name: msvcr90.dll > DLL Name: python27.dll > > Carl > > > 2013/11/11 David Cournapeau > >> >> >> >> On Mon, Nov 11, 2013 at 10:28 AM, Carl Kleffner wrote: >> >>> done >>> >>> all logs in https://gist.github.com/anonymous/7411039 >>> >> >> Thanks. Looking at the log, it does not look like you are statically >> linking the mingw runtimes, though. I would expect numpy not to work if you >> don't have mingw dlls in your %PATH%, right ? 
>> >> David >> >>> >>> >>> Regards Carl >>> >>> >>> 2013/11/11 David Cournapeau >>> >>>> Carl, >>>> >>>> It looks like the google drive contains the binary and not the actual >>>> log ? For the log, it is more convenient to put it on gist.github.com, >>>> >>>> thanks for the work, >>>> David >>>> >>>> >>>> On Mon, Nov 11, 2013 at 9:51 AM, Carl Kleffner wrote: >>>> >>>>> Hi David, >>>>> >>>>> I made a new build with the numpy-1.8.0 code base. binaries and logs >>>>> are included in the following archive: >>>>> >>>>> 2013-11-11_i686-numpy-1.8.0-py27-openblastest.7z >>>>> https://drive.google.com/file/d/0B4DmELLTwYmlajBzZFpXcVYycEE/edit?usp=sharing >>>>> >>>>> Regards >>>>> >>>>> Carl >>>>> >>>>> >>>>> >>>>> >>>>> 2013/11/8 David Cournapeau >>>>> >>>>>> Hi Carl, >>>>>> >>>>>> Thanks for that. I am a bit confused by the build log >>>>>> https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing, >>>>>> in particular the failures for lapack_lite and umath. >>>>>> >>>>>> May we ask you to put the logs on gists.github.com ? google docs is >>>>>> rather painful to use for logs (no line number, etc...) >>>>>> >>>>>> thanks, >>>>>> David >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner wrote: >>>>>> >>>>>>> Hi list, >>>>>>> >>>>>>> I created a repository at google code >>>>>>> https://code.google.com/p/mingw-w64-static with some downloads as >>>>>>> well as my last numpy setp.py log. 
>>>>>>> >>>>>>> Regards >>>>>>> >>>>>>> Carl >>>>>>> >>>>>>> _______________________________________________ >>>>>>> NumPy-Discussion mailing list >>>>>>> NumPy-Discussion at scipy.org >>>>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>>>> >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> NumPy-Discussion mailing list >>>>>> NumPy-Discussion at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> NumPy-Discussion mailing list >>>>> NumPy-Discussion at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Mon Nov 11 21:17:33 2013 From: cournape at gmail.com (David Cournapeau) Date: Tue, 12 Nov 2013 02:17:33 +0000 Subject: [Numpy-discussion] Caution about using intrisincs, and other 'advanced' optimizations Message-ID: Hi there, I have noticed more and more subtle and hard to track serious bugs in numpy and scipy, due to the use of advanced optimization features (flags, or gcc intrinsics). 
I am wondering whether those are worth it: they compile wrongly under quite a few configurations, and it is not always obvious to find the cause (case in point: gcc 4.4 with numpy 1.8.0 causes infinite loop in scipy.stats, which disappear if I disable the intrinsics in numpy/npy_config.h). Maybe they should be disabled by default, and only built in if required ? Do we know for sure they bring significant improvements ? While gcc 4.4 is not the most recent compiler, it is not ancient either, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Nov 11 21:28:26 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 11 Nov 2013 19:28:26 -0700 Subject: [Numpy-discussion] Caution about using intrisincs, and other 'advanced' optimizations In-Reply-To: References: Message-ID: On Mon, Nov 11, 2013 at 7:17 PM, David Cournapeau wrote: > Hi there, > > I have noticed more and more subtle and hard to track serious bugs in > numpy and scipy, due to the use of advanced optimization features (flags, > or gcc intrinsics). > > I am wondering whether those are worth it: they compile wrongly under > quite a few configurations, and it is not always obvious to find the cause > (case in point: gcc 4.4 with numpy 1.8.0 causes infinite loop in > scipy.stats, which disappear if I disable the intrinsics in > numpy/npy_config.h). Maybe they should be disabled by default, and only > built in if required ? Do we know for sure they bring significant > improvements ? > > While gcc 4.4 is not the most recent compiler, it is not ancient either, > > What happens with more recent gcc? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaime.frio at gmail.com Mon Nov 11 23:59:46 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Mon, 11 Nov 2013 20:59:46 -0800 Subject: [Numpy-discussion] numpy.lib functions as gufuncs Message-ID: Hi, Inspired by the great rewrite of numpy.linalg in 1.8, I've spent the last couple of days coding a couple of the functions in numpy.lib as gufuncs, namely np.interp and np.bincount. I want to do something along the same lines with np.digitize, but haven't started on it just yet. I'm currently keeping the code in a separate repository ( https://github.com/jaimefrio/new_gufuncs), because I have a lot of problems compiling numpy, and this makes my life much easier while developing. If there's interest in adding this code to numpy, I'll be more than happy to figure out my compilation issues and turn this into a PR. If you run python setup.py install, you should then be able to `import new_gufuncs as ng`. Basically, there are 4 gufuncs (ng._interp, ng._minmax, ng._bincount, ng._bincount_wt) and 2 Python wrappers to them (ng.interp, ng.bincount) that reproduce the interface of their numpy counterparts. Some obvious advantages: 1. Added support for more types, including complex, in both the interp data points and the bincount weights. 2. You can use ng.bincount to do things like clustering, or ng.interp to interpolate vector functions of a single variable. 3. It makes the code much more flexible, I'm sure people will come up with new uses. Some things I'm concerned about: 1. For very small datasets, performance is several times worse than the current functions. For larger datasets the gufunc versions perform better than the current functions, but the gufunc dispatch mechanism seems to take forever to get going. 2. There's what feels like a very nasty hack in the bincount gufuncs: since the shape of the output depends on the contents of the input arrays, the signature cannot be determined beforehand.
So the gufunc has signature '(m)->(n)', which means that the wrapper function has to determine what the shape of the output is, instantiate it, and pass it in with the 'out' keyword argument to the gufunc for things to work properly. It works beautifully, but it just doesn't seem right. 3. The broadcasting syntax doesn't seem to be the most convenient for the obvious applications: 3.1. Say you have a function R->R^3. The obvious way of describing it as piecewise linear would be to have xp of shape (n,) and yp of shape (n, 3). To run ng.interp on a set of points x of shape (m,), and get back an output of shape (m, 3) you would have to do ng.interp(x, xp, yp.T).T. 3.2. Say you have counts of terms in documents. The typical way of storing this would be in an array `dataset` of shape (docs, terms). If you then run clustering on that data, you would typically get a `clusters` array of shape (docs,). To cluster the counts you would do something like ng.bincount(clusters, dataset.T).T Those transpositions in these two cases seem a little annoying, but then again "special cases aren't special enough to break the rules," or is it "although practicality beats purity" that applies here? Any feedback on any of the above points (or others) is more than welcome. If you think this is a worthy addition to numpy, I'll work on some tests and performance benchmarks. Jaime -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bartbkr at gmail.com Tue Nov 12 06:01:39 2013 From: bartbkr at gmail.com (Bart Baker) Date: Tue, 12 Nov 2013 06:01:39 -0500 Subject: [Numpy-discussion] Precision/value change moving from C to Python Message-ID: Hi all, First time posting. I've been working on a class for solving a group of affine models and decided to drop down to C for a portion of the code for improved performance.
There are two questions that I've had come up from this. 1) I've written the section of code, which involved a series of fully populated masked arrays as arguments. There are a few array operations such as dot products and differences that are taken on the arrays to generate the final value. I've written versions in both Python/NumPy and C and now have a verification issue. The issue is that there are some minor (10^-16) differences in the values when I do the calculation in C vs Python. The way that I test these differences is by passing the NumPy arrays to the C extension and testing for equality in gdb. I'm not relying on any of the NumPy API for the array operations such as dot product, addition, and subtraction. I'm treating all of the arrays as two-dimensional in C right now. Once I figure out this bug, I'm planning on rewriting the computations in terms of one-dimensional arrays. All of the values that I'm dealing with in C right now are double precision. 2) Beyond this issue, I also noticed that the values that are equal in gdb are not equal once I return to Python! I pass the same arrays back from C to Python, test equality by: npy_array == c_array and find that almost the entire arrays are non-equal, even though equality tests in gdb had been true. Is there some obvious reason why this would be? Is there some sort of implicit value conversion performed when moving from C back to Python? I can post some code if need be, but I wanted to get anyone's first insight if there is anything obvious that I'm not doing right. I appreciate your help.
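[The 10^-16 scale mentioned above is exactly machine epsilon for IEEE double precision. The following editorial illustration (hypothetical values, not the original arrays) shows why element-wise `==` fails on such results while a tolerance-based comparison does not:]

```python
import numpy as np

# Machine epsilon for float64 -- the ~1e-16 scale of the reported differences.
eps = np.finfo(np.float64).eps
print(eps)  # 2.220446049250313e-16

# Two results that differ only in the last bit or two, as can happen
# when C and NumPy order the same floating-point operations differently:
x = np.array([1.0, 2.0, 3.0])
y = x * (1.0 + eps)            # each element off by roughly one ulp

print(np.array_equal(x, y))    # False: bitwise equality is too strict
print(np.allclose(x, y))       # True: within default rtol=1e-05, atol=1e-08
```

`np.allclose` takes `rtol` and `atol` keyword arguments, so the tolerance can be tightened to match the precision actually expected from the two implementations.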
Thanks, Bart From davidmenhur at gmail.com Tue Nov 12 09:07:49 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Tue, 12 Nov 2013 15:07:49 +0100 Subject: [Numpy-discussion] Precision/value change moving from C to Python In-Reply-To: References: Message-ID: On 12 November 2013 12:01, Bart Baker wrote: > The issue is that there are some minor (10^-16) differences in the > values when I do the calculation in C vs Python. > That is the order of the machine epsilon for double, that looks like roundoff errors to me. I found similar results cythonising some code; everything was the same until I changed some numpy functions for libc functions (exp, sin, cos...). After some operations in float32, the results were apart by about 1 part in 10^5 (the epsilon is 10^-6). I blame that on implementation differences between numpy's functions and my system's libc versions. To check equality, use np.allclose, it lets you define the relative and absolute error. /David. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtaylor.debian at googlemail.com Tue Nov 12 13:17:05 2013 From: jtaylor.debian at googlemail.com (Julian Taylor) Date: Tue, 12 Nov 2013 19:17:05 +0100 Subject: [Numpy-discussion] Caution about using intrisincs, and other 'advanced' optimizations In-Reply-To: References: Message-ID: <528270A1.70307@googlemail.com> On 12.11.2013 03:17, David Cournapeau wrote: > Hi there, > > I have noticed more and more subtle and hard to track serious bugs in > numpy and scipy, due to the use of advanced optimization features > (flags, or gcc intrinsics). > > I am wondering whether those are worth it: they compile wrongly under > quite a few configurations, and it is not always obvious to find the > cause (case in point: gcc 4.4 with numpy 1.8.0 causes infinite loop in > scipy.stats, which disappear if I disable the intrinsics in > numpy/npy_config.h). Maybe they should be disabled by default, and only > built in if required ?
Do we know for sure they bring significant improvements ? yes, e.g. http://yarikoptic.github.io/numpy-vbench/vb_vb_ufunc.html#numpy-add-scalar2-numpy-float32 http://yarikoptic.github.io/numpy-vbench/vb_vb_ufunc.html#numpy-not-bool http://yarikoptic.github.io/numpy-vbench/vb_vb_ufunc.html#numpy-isnan-a-10types and many more. This benchmark runs on a pretty old AMD; the improvements are greater on more modern AMD and Intel CPUs. > > While gcc 4.4 is not the most recent compiler, it is not ancient either, I can't reproduce any issue with gcc 4.4.7. Can you narrow it down to a specific intrinsic? They can be enabled and disabled in ./numpy/core/setup_common.py From chris.barker at noaa.gov Tue Nov 12 16:10:36 2013 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 12 Nov 2013 13:10:36 -0800 Subject: [Numpy-discussion] Numpy 1.9 release date In-Reply-To: References: Message-ID: On Sun, Nov 10, 2013 at 7:27 PM, Stéfan van der Walt wrote: > > that the main thing missing at this point is fixing the datetime > problems. > > What needs to be done, and what is the plan forward? > I'm not sure that's quite been decided, but my take: 1) remove the existing time zone handling -- it simply isn't useful often, and does cause a pain in the &%^& often. - as far as I know, the only point of debate about the simple not-time-zone-aware datetimes is whether that means "UTC" or "Local" or "Not Known" -- these are pretty subtle distinctions and I think really only have an impact when you try to parse an iso string with a timezone attached. 2) _maybe_ do something smarter -- though this takes a lot more work and discussion as to what that should be. I think the key points are captured here: http://thread.gmane.org/gmane.comp.python.numeric.general/53805 There is an issue: https://github.com/numpy/numpy/issues/3388, but there is no detail there. There are a number of other issues that come up in discussion: * More precision with leap-seconds, etc.
* Allowing an epoch that can change -- this is really crucial if you want picoseconds and friends to be remotely useful. But these are orthogonal issues AFAIC, except that maybe once we open it up it makes sense to do it all at once... -Chris > Is there perhaps an issue one can follow? > > Thanks > Stéfan > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From derek at astro.physik.uni-goettingen.de Tue Nov 12 16:38:21 2013 From: derek at astro.physik.uni-goettingen.de (Derek Homeier) Date: Tue, 12 Nov 2013 22:38:21 +0100 Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release In-Reply-To: <52767CF1.60003@googlemail.com> References: <52767CF1.60003@googlemail.com> Message-ID: <2DAE0F54-1ACB-4643-8FE5-2F73AD6C6F19@astro.physik.uni-goettingen.de> Hi, On 03.11.2013, at 5:42PM, Julian Taylor wrote: > I'm happy to announce the release candidate of Numpy 1.7.2. > This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3. on OS X 10.5, build and tests succeed for Python 2.5-3.3, but Python 2.4.4 fails with /sw/bin/python2.4 setup.py build Running from numpy source directory. Traceback (most recent call last): File "setup.py", line 214, in ? setup_package() File "setup.py", line 191, in setup_package from numpy.distutils.core import setup File "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/core.py", line 25, in ? from numpy.distutils.command import config, config_compiler, \ File "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/command/build_ext.py", line 16, in ?
from numpy.distutils.system_info import combine_paths File "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/system_info.py", line 235 finally: ^ SyntaxError: invalid syntax Cheers, Derek From ckkart at hoc.net Tue Nov 12 17:13:18 2013 From: ckkart at hoc.net (Christian K.) Date: Tue, 12 Nov 2013 19:13:18 -0300 Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release In-Reply-To: References: <52767CF1.60003@googlemail.com> Message-ID: Am 10.11.13 23:27, schrieb Charles R Harris: > > > > On Sun, Nov 10, 2013 at 5:12 PM, Christian K. > wrote: > > Am 10.11.13 21:06, schrieb Christian K.: > > Am 03.11.13 13:42, schrieb Julian Taylor: > >> Hi all, > >> > >> I'm happy to announce the release candidate of Numpy 1.7.2. > >> This is a bugfix only release supporting Python 2.4 - 2.7 and > 3.1 - 3.3. > >> > >> More than 37 issues were fixed, the most important issues are > listed in > >> the release notes: > >> > https://github.com/numpy/numpy/blob/v1.7.2rc1/doc/release/1.7.2-notes.rst > >> > >> It is supposed to not break any existing code, so please test the > >> releases and report any issues you find. > >> > >> Source tarballs and release notes can be found at > >> https://sourceforge.net/projects/numpy/files/NumPy/1.7.2rc1/. > >> Currently only Windows installers are available. OS X installer will > >> follow soon. > > > > On OSX compilation succeeds (with some errors though) but test() > fails. > > > > Attached is the full output. > > I just realised that setup.py picked up gcc 4.0. Anyway, after switching > to 4.2 the tests still fail. > > > Looks like your tests might be left over from 1.8. Try cleaning out the > install and build directories and doing a clean install. That was the reason indeed. No errors anymore. 
Thanks, Christian From charlesr.harris at gmail.com Tue Nov 12 18:24:21 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 12 Nov 2013 16:24:21 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release In-Reply-To: <2DAE0F54-1ACB-4643-8FE5-2F73AD6C6F19@astro.physik.uni-goettingen.de> References: <52767CF1.60003@googlemail.com> <2DAE0F54-1ACB-4643-8FE5-2F73AD6C6F19@astro.physik.uni-goettingen.de> Message-ID: On Tue, Nov 12, 2013 at 2:38 PM, Derek Homeier < derek at astro.physik.uni-goettingen.de> wrote: > Hi, > > On 03.11.2013, at 5:42PM, Julian Taylor > wrote: > > > I'm happy to announce the release candidate of Numpy 1.7.2. > > This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3. > > on OS X 10.5, build and tests succeed for Python 2.5-3.3, but Python 2.4.4 > fails with > > /sw/bin/python2.4 setup.py build > Running from numpy source directory. > Traceback (most recent call last): > File "setup.py", line 214, in ? > setup_package() > File "setup.py", line 191, in setup_package > from numpy.distutils.core import setup > File > "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/core.py", > line 25, in ? > from numpy.distutils.command import config, config_compiler, \ > File > "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/command/build_ext.py", > line 16, in ? > from numpy.distutils.system_info import combine_paths > File > "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/system_info.py", > line 235 > finally: > ^ > SyntaxError: invalid syntax > > Ah, thanks for testing with 2.4. Looks like it could use some fixes. For 2.4 (googled) the try with finally needs to be nested: try: try: return client.fetch_pack(path, determine_wants, graphwalker, f.write, self.ui.status) except HangupException: raise hgutil.Abort("the remote end hung up unexpectedly") finally: commit() Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bartbkr at gmail.com Tue Nov 12 20:40:19 2013 From: bartbkr at gmail.com (Bart Baker) Date: Tue, 12 Nov 2013 20:40:19 -0500 Subject: [Numpy-discussion] Precision/value change moving from C to Python In-Reply-To: References: Message-ID: <20131113014019.GA9410@bart-Inspiron-1525> > The issue is that there are some minor (10^-16) differences in the > values when I do the calculation in C vs Python. > > That is the order of the machine epsilon for double, that looks like roundoff > errors to me. Hi Daπid, Thanks for the reply. That does make sense. I'm trying to get my head around this. So does that mean that neither of them is "right", that it is just the result of doing the same calculation two different ways using different computational libraries? > I found similar results cythonising some code, everything was the same until I > changed some numpy functions for libc functions (exp, sin, cos...). After some > operations in float32, the results were apart for 1 in 10^-5 (the epsilon is > 10^-6). I blame them on specific implementation differences between numpy's and > my system's libc specific functions. > > To check equality, use np.allclose, it lets you define the relative and > absolute error. OK, thanks. I'll use this depending on your answer to my above question. -Bart From charlesr.harris at gmail.com Tue Nov 12 21:07:52 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 12 Nov 2013 19:07:52 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release In-Reply-To: References: <52767CF1.60003@googlemail.com> <2DAE0F54-1ACB-4643-8FE5-2F73AD6C6F19@astro.physik.uni-goettingen.de> Message-ID: On Tue, Nov 12, 2013 at 4:24 PM, Charles R Harris wrote: > > > > On Tue, Nov 12, 2013 at 2:38 PM, Derek Homeier < > derek at astro.physik.uni-goettingen.de> wrote: > >> Hi, >> >> On 03.11.2013, at 5:42PM, Julian Taylor >> wrote: >> >> > I'm happy to announce the release candidate of Numpy 1.7.2. 
>> > This is a bugfix only release supporting Python 2.4 - 2.7 and 3.1 - 3.3. >> >> on OS X 10.5, build and tests succeed for Python 2.5-3.3, but Python >> 2.4.4 fails with >> >> /sw/bin/python2.4 setup.py build >> Running from numpy source directory. >> Traceback (most recent call last): >> File "setup.py", line 214, in ? >> setup_package() >> File "setup.py", line 191, in setup_package >> from numpy.distutils.core import setup >> File >> "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/core.py", >> line 25, in ? >> from numpy.distutils.command import config, config_compiler, \ >> File >> "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/command/build_ext.py", >> line 16, in ? >> from numpy.distutils.system_info import combine_paths >> File >> "/sw/src/fink.build/numpy-py24-1.7.2rc1-1/numpy-1.7.2rc1/numpy/distutils/system_info.py", >> line 235 >> finally: >> ^ >> SyntaxError: invalid syntax >> >> > Ah, thanks for testing with 2.4. Looks like it could use some fixes. For > 2.4 (googled) the try with finally needs to be nested: > > try: > try: > return client.fetch_pack(path, determine_wants, graphwalker, f.write, self.ui.status) > except HangupException: > raise hgutil.Abort("the remote end hung up unexpectedly") > finally: > commit() > > Python 2.4 fixes at https://github.com/numpy/numpy/pull/4049. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmkleffner at gmail.com Wed Nov 13 00:29:22 2013 From: cmkleffner at gmail.com (Carl Kleffner) Date: Wed, 13 Nov 2013 06:29:22 +0100 Subject: [Numpy-discussion] mingw-w64 and openblas test In-Reply-To: References: Message-ID: Hi, yes it also works for scipy. I didn't patch the scipy source other than creating site.cfg with the path to openblas.I will upload the binaries and logs later today. Regards Carl scipy.test(verbose=2) runs without segfault: ... 
====================================================================== FAIL: Tests for the minimize wrapper. ---------------------------------------------------------------------- Traceback (most recent call last): File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\nose\case.py", line 197, in runTest self.test(*self.arg) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\optimize\tests\test_optimize.py", line 435, in test_minimize self.test_powell(True) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\optimize\tests\test_optimize.py", line 209, in test_powell atol=1e-14, rtol=1e-7) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", line 1181, in assert_allclose verbose=verbose, header=header) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-07, atol=1e-14 (mismatch 100.0%) x: array([[ 0.75077639, -0.44156936, 0.47100962], [ 0.75077639, -0.44156936, 0.48052496], [ 1.50155279, -0.88313872, 0.95153458],... y: array([[ 0.72949016, -0.44156936, 0.47100962], [ 0.72949016, -0.44156936, 0.48052496], [ 1.45898031, -0.88313872, 0.95153458],... 
====================================================================== FAIL: Powell (direction set) optimization routine ---------------------------------------------------------------------- Traceback (most recent call last): File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\nose\case.py", line 197, in runTest self.test(*self.arg) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\optimize\tests\test_optimize.py", line 209, in test_powell atol=1e-14, rtol=1e-7) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", line 1181, in assert_allclose verbose=verbose, header=header) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", line 644, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=1e-07, atol=1e-14 (mismatch 100.0%) x: array([[ 0.75077639, -0.44156936, 0.47100962], [ 0.75077639, -0.44156936, 0.48052496], [ 1.50155279, -0.88313872, 0.95153458],... y: array([[ 0.72949016, -0.44156936, 0.47100962], [ 0.72949016, -0.44156936, 0.48052496], [ 1.45898031, -0.88313872, 0.95153458],... ====================================================================== FAIL: Test that bode() doesn't fail on a system with a pole at 0. 
---------------------------------------------------------------------- Traceback (most recent call last): File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\nose\case.py", line 197, in runTest self.test(*self.arg) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\signal\tests\test_ltisys.py", line 346, in test_06 assert_equal(w[0], 0.01) # a fail would give not-a-number File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", line 317, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: 0.010000000000000002 DESIRED: 0.01 ====================================================================== FAIL: test_ltisys.Test_freqresp.test_pole_zero ---------------------------------------------------------------------- Traceback (most recent call last): File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\nose\case.py", line 197, in runTest self.test(*self.arg) File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\signal\tests\test_ltisys.py", line 437, in test_pole_zero assert_equal(w[0], 0.01) # a fail would give not-a-number File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", line 317, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: 0.010000000000000002 DESIRED: 0.01 ---------------------------------------------------------------------- Ran 8937 tests in 173.925s FAILED (KNOWNFAIL=114, SKIP=213, failures=4) Running unit tests for scipy NumPy version 1.8.0 NumPy is installed in d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy SciPy version 0.13.0 SciPy is installed in d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy Python version 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] nose version 1.3.0 2013/11/12 David Cournapeau > > > > On Mon, Nov 11, 2013 at 1:32 PM, Carl Kleffner wrote: > >> Hi David, >> >> i used my customized mingw-w64 toolkit mentioned in this thread to >> 
circumvent several problems with the mixed compiler enviroment. It is a >> fully statically gcc build. Hence the compiled executables and dlls never >> depend on mingw dlls even without usage of -static -static-libgcc ... >> compiler options. The crt runtime is msvcr90.dll as used by python-2.7. The >> manifest is linked to the executables per default. I have to write some >> documentation about that, but this may take some time due to my workload. >> > > Hm, interesting, I did not even know this was possible ! > > Does that work for scipy as well ? I am a bit worried about using custom > toolchains, OTOH, that's the only real solution we seem to have ATM. > > David > >> >> You can test it on a windows cmd prompt with: objdump -p >> numpy\core\_dotblas.pyd | findstr DLL >> DLL >> vma: Hint Time Forward DLL First >> DLL Name: ADVAPI32.dll >> DLL Name: KERNEL32.dll >> DLL Name: msvcr90.dll >> DLL Name: python27.dll >> >> Carl >> >> >> 2013/11/11 David Cournapeau >> >>> >>> >>> >>> On Mon, Nov 11, 2013 at 10:28 AM, Carl Kleffner wrote: >>> >>>> done >>>> >>>> all logs in https://gist.github.com/anonymous/7411039 >>>> >>> >>> Thanks. Looking at the log, it does not look like you are statically >>> linking the mingw runtimes, though. I would expect numpy not to work if you >>> don't have mingw dlls in your %PATH%, right ? >>> >>> David >>> >>>> >>>> >>>> Regards Carl >>>> >>>> >>>> 2013/11/11 David Cournapeau >>>> >>>>> Carl, >>>>> >>>>> It looks like the google drive contains the binary and not the actual >>>>> log ? For the log, it is more convenient to put it on gist.github.com, >>>>> >>>>> thanks for the work, >>>>> David >>>>> >>>>> >>>>> On Mon, Nov 11, 2013 at 9:51 AM, Carl Kleffner wrote: >>>>> >>>>>> Hi David, >>>>>> >>>>>> I made a new build with the numpy-1.8.0 code base. 
binaries and logs >>>>>> are included in the following archive: >>>>>> >>>>>> 2013-11-11_i686-numpy-1.8.0-py27-openblastest.7z >>>>>> https://drive.google.com/file/d/0B4DmELLTwYmlajBzZFpXcVYycEE/edit?usp=sharing >>>>>> >>>>>> Regards >>>>>> >>>>>> Carl >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> 2013/11/8 David Cournapeau >>>>>> >>>>>>> Hi Carl, >>>>>>> >>>>>>> Thanks for that. I am a bit confused by the build log >>>>>>> https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing, >>>>>>> in particular the failures for lapack_lite and umath. >>>>>>> >>>>>>> May we ask you to put the logs on gists.github.com ? google docs is >>>>>>> rather painful to use for logs (no line number, etc...) >>>>>>> >>>>>>> thanks, >>>>>>> David >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner wrote: >>>>>>> >>>>>>>> Hi list, >>>>>>>> >>>>>>>> I created a repository at google code >>>>>>>> https://code.google.com/p/mingw-w64-static with some downloads as >>>>>>>> well as my last numpy setp.py log. 
>>>>>>>> >>>>>>>> Regards >>>>>>>> >>>>>>>> Carl >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> NumPy-Discussion mailing list >>>>>>>> NumPy-Discussion at scipy.org >>>>>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> NumPy-Discussion mailing list >>>>>>> NumPy-Discussion at scipy.org >>>>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>>>> >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> NumPy-Discussion mailing list >>>>>> NumPy-Discussion at scipy.org >>>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> NumPy-Discussion mailing list >>>>> NumPy-Discussion at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From davidmenhur at gmail.com Wed Nov 13 04:25:17 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Wed, 13 Nov 2013 10:25:17 +0100 Subject: [Numpy-discussion] Precision/value change moving from C to Python In-Reply-To: <20131113014019.GA9410@bart-Inspiron-1525> References: <20131113014019.GA9410@bart-Inspiron-1525> Message-ID: On 13 November 2013 02:40, Bart Baker wrote: > > That is the order of the machine epsilon for double, that looks like > roundoff > > errors to me. > > > I'm trying to my head around this. So does that mean that neither of > them is "right", that it is just the result of doing the same > calculation two different ways using different computational libraries? Essentially, yes. I am tempted to say that, depending on the compiler flags, the C version *could* be more accurate, as the compiler can reorganise the operations and reduce the number of steps. But also, if it is optimised for speed, it could be using faster and less accurate functions and techniques. In any case, if that 10^-16 matters to you, I'd say you are either doing something wrong or using the wrong dtype; and without knowing the specifics, I would bet on the first one. If you really need that precision, you would have to use more bits, and make sure your library supports that dtype. I believe the following proves that (my) numpy.cos can deal with 128 bits without converting it to float. >>> a = np.array([1.2584568431575694895413875135786543], dtype=np.float128) >>> np.cos(a)-np.cos(a.astype(np.float64)) array([ 7.2099444e-18], dtype=float128) The bottom line is, don't believe nor trust the least significant bits of your floating point numbers. /David. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre.haessig at crans.org Wed Nov 13 11:28:05 2013 From: pierre.haessig at crans.org (Pierre Haessig) Date: Wed, 13 Nov 2013 17:28:05 +0100 Subject: [Numpy-discussion] savetxt fmt argument fails when using a unicode string Message-ID: <5283A895.509@crans.org> Hi, I just noticed (with numpy 1.7.1) that the following code import numpy as np np.savetxt('a.csv', [1], fmt=u'%.3f') fails with: 1045 else: 1046 for row in X: -> 1047 fh.write(asbytes(format % tuple(row) + newline)) 1048 if len(footer) > 0: 1049 footer = footer.replace('\n', '\n' + comments) UnboundLocalError: local variable 'format' referenced before assignment (of course, the simple solution is to remove the "u" prefix, but in my real code, I have a "from __future__ import unicode_literals") Since parts of the savetxt function were changed since that, I don't know if this bug is still applicable or not. I'm guessing that since Warren Weckesser commit https://github.com/numpy/numpy/commit/0d284764a855cae11699228ff1b81e6d18f38ed2 the problem would be caught earlier with a much nicer ValueError. However, I still see isinstance(fmt, str): (on line 1035 https://github.com/numpy/numpy/blob/master/numpy/lib/npyio.py#L1035 ) How can I make the fmt argument work with unicode_literals ? best, Pierre -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 897 bytes Desc: OpenPGP digital signature URL: From davidmenhur at gmail.com Wed Nov 13 11:54:27 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Wed, 13 Nov 2013 17:54:27 +0100 Subject: [Numpy-discussion] savetxt fmt argument fails when using a unicode string In-Reply-To: <5283A895.509@crans.org> References: <5283A895.509@crans.org> Message-ID: The following works on Numpy 1.8.0: from __future__ import unicode_literals import numpy as np np.savetxt('a.csv', [1], fmt=str('%.3f')) Without the str, I get a clearer error: Traceback (most recent call last): File "a.py", line 4, in np.savetxt('a.csv', [1], fmt='%.3f') File ".virtualenv/local/lib/python2.7/site-packages/numpy/lib/npyio.py", line 1049, in savetxt raise ValueError('invalid fmt: %r' % (fmt,)) ValueError: invalid fmt: u'%.3f' /David. On 13 November 2013 17:28, Pierre Haessig wrote: > Hi, > > I just noticed (with numpy 1.7.1) that the following code > > import numpy as np > np.savetxt('a.csv', [1], fmt=u'%.3f') > > fails with: > > 1045 else: > 1046 for row in X: > -> 1047 fh.write(asbytes(format % tuple(row) + newline)) > 1048 if len(footer) > 0: > 1049 footer = footer.replace('\n', '\n' + comments) > > UnboundLocalError: local variable 'format' referenced before assignment > > (of course, the simple solution is to remove the "u" prefix, but in my > real code, I have a "from __future__ import unicode_literals") > > Since parts of the savetxt function were changed since that, I don't > know if this bug is still applicable or not. > I'm guessing that since Warren Weckesser commit > > https://github.com/numpy/numpy/commit/0d284764a855cae11699228ff1b81e6d18f38ed2 > the problem would be caught earlier with a much nicer ValueError. > > However, I still see > > isinstance(fmt, str): (on line 1035 > https://github.com/numpy/numpy/blob/master/numpy/lib/npyio.py#L1035 ) > > How can I make the fmt argument work with unicode_literals ? 
> > best, > Pierre > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Wed Nov 13 12:26:55 2013 From: cournape at gmail.com (David Cournapeau) Date: Wed, 13 Nov 2013 17:26:55 +0000 Subject: [Numpy-discussion] Caution about using intrisincs, and other 'advanced' optimizations In-Reply-To: <528270A1.70307@googlemail.com> References: <528270A1.70307@googlemail.com> Message-ID: On Tue, Nov 12, 2013 at 6:17 PM, Julian Taylor < jtaylor.debian at googlemail.com> wrote: > On 12.11.2013 03:17, David Cournapeau wrote: > > Hi there, > > > > I have noticed more and more subtle and hard to track serious bugs in > > numpy and scipy, due to the use of advanced optimization features > > (flags, or gcc intrinsics). > > > > I am wondering whether those are worth it: they compile wrongly under > > quite a few configurations, and it is not always obvious to find the > > cause (case in point: gcc 4.4 with numpy 1.8.0 causes infinite loop in > > scipy.stats, which disappear if I disable the intrinsics in > > numpy/npy_config.h). Maybe they should be disabled by default, and only > > built in if required ? Do we know for sure they bring significant > > improvements ? > > yes, e.g. > > http://yarikoptic.github.io/numpy-vbench/vb_vb_ufunc.html#numpy-add-scalar2-numpy-float32 > http://yarikoptic.github.io/numpy-vbench/vb_vb_ufunc.html#numpy-not-bool > > http://yarikoptic.github.io/numpy-vbench/vb_vb_ufunc.html#numpy-isnan-a-10types > and many more. > > this benchmark runs on a pretty old amd, the improvements are greater on > more modern AMD and Intel cpus. 
> > > > > While gcc 4.4 is not the most recent compiler, it is not ancient either, > > I can't reproduce any issue with gcc 4.4.7 > So my initial email was wrong, the issue appears with gcc 4.1 and disappears with 4.4. > > Can you narrow it down to a specific intrinsic? they can be enabled and > disabled in set ./numpy/core/setup_common.py > valgrind shows quite a few invalid read in BOOL_ functions when running the scipy or sklearn test suite. BOOL_logical_or is the one that appears the most often. I don't have time to track this down now, but I think it would be good to have at least a system in place to disable the simd intrinsics when building numpy. David _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtaylor.debian at googlemail.com Wed Nov 13 13:16:09 2013 From: jtaylor.debian at googlemail.com (Julian Taylor) Date: Wed, 13 Nov 2013 19:16:09 +0100 Subject: [Numpy-discussion] Caution about using intrisincs, and other 'advanced' optimizations In-Reply-To: References: <528270A1.70307@googlemail.com> Message-ID: <5283C1E9.7010900@googlemail.com> On 13.11.2013 18:26, David Cournapeau wrote: > > > Can you narrow it down to a specific intrinsic? they can be enabled and > disabled in set ./numpy/core/setup_common.py > > > valgrind shows quite a few invalid read in BOOL_ functions when running > the scipy or sklearn test suite. BOOL_logical_or is the one that appears > the most often. I don't have time to track this down now, but I think it > would be good to have at least a system in place to disable the simd > intrinsics when building numpy. those are unrelated to the intrinsics, they should be fixed in master by github.com/numpy/numpy/issues/3965 Can you try it? Possibly there is something we should backport. 
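[The benchmark discussion above can be reproduced locally. The following is a minimal timing sketch, not the vbench harness the thread links to: it times the ufunc operations the linked benchmarks single out (scalar add on float32, logical not on bool, isnan), so a NumPy build with the intrinsics disabled in numpy/core/setup_common.py can be compared against a default build. The array size and repeat count are arbitrary choices, not values from the thread.]

```python
# Rough timing sketch (not the vbench harness): run once on a default
# NumPy build and once on a build with the SIMD intrinsics disabled,
# then compare the per-call times. Array size (100000 elements) and
# repeat count (200) are arbitrary choices for illustration.
import timeit
import numpy as np

a = np.ones(100000, dtype=np.float32)
b = np.zeros(100000, dtype=bool)

cases = [
    ("add scalar float32", lambda: a + np.float32(1.0)),
    ("logical_not bool", lambda: np.logical_not(b)),
    ("isnan float32", lambda: np.isnan(a)),
]
for label, stmt in cases:
    t = timeit.timeit(stmt, number=200)
    print("%-20s %10.4f ms/call" % (label, 1000.0 * t / 200))
```

[Run under both builds, a large gap on these loops would indicate the intrinsics are doing real work on that CPU; similar numbers would support building with them disabled when they cause miscompilation.]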
From fperez.net at gmail.com Wed Nov 13 15:58:35 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 13 Nov 2013 12:58:35 -0800 Subject: [Numpy-discussion] [ANN, x-post] Creating a space for scientific open source at Berkeley (with UW and NYU) Message-ID: Hi folks, forgive me for the x-post to a few lists and the semi off-topic nature of this post, but I think it's worth mentioning this to our broader community. To keep the SNR of each list high, I'd prefer any replies to happen on the numfocus list. Yesterday, during an event at the White House OSTP, an announcement was made about a 5-year, $37.8M initiative funded by the Moore and Sloan foundations to create a collaboration between UC Berkeley, the University of Washington and NYU on Data Science environments: - Press release: http://www.moore.org/newsroom/press-releases/2013/11/12/%20bold_new_partnership_launches_to_harness_potential_of_data_scientists_and_big_data - Project description: http://www.moore.org/programs/science/data-driven-discovery/data-science-environments We worked in private on this for a year, so it's great to be able to finally engage the community in an open fashion. I've provided some additional detail in my blog: http://blog.fperez.org/2013/11/an-ambitious-experiment-in-data-science.html At Berkeley, we are using this as an opportunity to create the new Berkeley Institute for Data Science (BIDS): http://newscenter.berkeley.edu/2013/11/13/new-data-science-institute-to-help-scholars-harness-big-data and from the very start, open source and the scientific Python ecosystem have been at the center of our thinking. In the team of co-PIs we have, in addition to me, a bunch of Python supporters: - Josh Bloom (leads our Python bootcamps and graduate seminar) - Cathryn Carson founded the DLab (dlab.berkeley.edu), which runs python.berkeley.edu. - Philip Stark: Stats Chair, teaches reproducible research with Python tools. - Kimmen Sjolander: comp. 
biologist whose tools are all open source Python. - Mike Franklin and Ion Stoica: co-directors of AMPLab, whose Spark framework has Python support. - Dave Culler: chair of CS, which now uses Python for its undergraduate intro courses. We will be working very hard to basically make BIDS "a place for people like us" (and by that I mean open source scientific computing, not just Python: Julia, R, etc. are equally welcome). This is a community that has a significant portion of academic scientists who struggle with all the issues I list in my post, and solving that problem is an explicit goal of this initiative (in fact, it was the key point identified by the foundations when they announced the competition for this grant). Beyond that, we want to create a space where the best of academia, the power of a university like Berkeley, and the best of our open source communities, can come together. We are just barely getting off the ground, deep in more mundane issues like building renovations, but over the next few months we'll be clarifying our scientific programs, starting to have open positions, etc. Very importantly, I want to thank everyone who, for the last decade+, has been working like mad to make all of this possible. It's absolutely clear to me that the often unrewarded work of many of you was essential in this process, shaping the very existence of "data science" and the recognition that it should be done in an open, collaborative, reproducible fashion. Consider this event an important victory along the way, and hopefully a starting point for much more work in slightly better conditions. Here are some additional resources for anyone interested: http://bitly.com/bundles/fperezorg/1 -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cmkleffner at gmail.com Wed Nov 13 17:03:00 2013 From: cmkleffner at gmail.com (Carl Kleffner) Date: Wed, 13 Nov 2013 23:03:00 +0100 Subject: [Numpy-discussion] mingw-w64 and openblas test In-Reply-To: References: Message-ID: Hi, numpy, scipy test binaries (32 bit, openblas) can be downloaded from https://code.google.com/p/mingw-w64-static/ . link: https://drive.google.com/file/d/0B4DmELLTwYmlc2tjMkpwUDF5cDg/edit?usp=sharing log-files: https://gist.github.com/anonymous/7457182 Regards Carl 2013/11/13 Carl Kleffner > Hi, > > yes it also works for scipy. I didn't patch the scipy source other than > creating site.cfg with the path to openblas.I will upload the binaries and > logs later today. > > Regards > > Carl > > scipy.test(verbose=2) runs without segfault: > > ... > > ====================================================================== > FAIL: Tests for the minimize wrapper. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\nose\case.py", > line 197, in runTest > self.test(*self.arg) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\optimize\tests\test_optimize.py", > line 435, in test_minimize > self.test_powell(True) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\optimize\tests\test_optimize.py", > line 209, in test_powell > atol=1e-14, rtol=1e-7) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", > line 1181, in assert_allclose > verbose=verbose, header=header) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", > line 644, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-07, atol=1e-14 > > (mismatch 100.0%) > x: array([[ 0.75077639, -0.44156936, 0.47100962], > [ 0.75077639, -0.44156936, 0.48052496], > [ 1.50155279, -0.88313872, 0.95153458],... 
> y: array([[ 0.72949016, -0.44156936, 0.47100962], > [ 0.72949016, -0.44156936, 0.48052496], > [ 1.45898031, -0.88313872, 0.95153458],... > > ====================================================================== > FAIL: Powell (direction set) optimization routine > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\nose\case.py", > line 197, in runTest > self.test(*self.arg) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\optimize\tests\test_optimize.py", > line 209, in test_powell > atol=1e-14, rtol=1e-7) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", > line 1181, in assert_allclose > verbose=verbose, header=header) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", > line 644, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Not equal to tolerance rtol=1e-07, atol=1e-14 > > (mismatch 100.0%) > x: array([[ 0.75077639, -0.44156936, 0.47100962], > [ 0.75077639, -0.44156936, 0.48052496], > [ 1.50155279, -0.88313872, 0.95153458],... > y: array([[ 0.72949016, -0.44156936, 0.47100962], > [ 0.72949016, -0.44156936, 0.48052496], > [ 1.45898031, -0.88313872, 0.95153458],... > > ====================================================================== > FAIL: Test that bode() doesn't fail on a system with a pole at 0. 
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\nose\case.py", > line 197, in runTest > self.test(*self.arg) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\signal\tests\test_ltisys.py", > line 346, in test_06 > assert_equal(w[0], 0.01) # a fail would give not-a-number > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", > line 317, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 0.010000000000000002 > DESIRED: 0.01 > > ====================================================================== > FAIL: test_ltisys.Test_freqresp.test_pole_zero > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\nose\case.py", > line 197, in runTest > self.test(*self.arg) > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy\signal\tests\test_ltisys.py", > line 437, in test_pole_zero > assert_equal(w[0], 0.01) # a fail would give not-a-number > File > "d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy\testing\utils.py", > line 317, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 0.010000000000000002 > DESIRED: 0.01 > > ---------------------------------------------------------------------- > Ran 8937 tests in 173.925s > > FAILED (KNOWNFAIL=114, SKIP=213, failures=4) > Running unit tests for scipy > NumPy version 1.8.0 > NumPy is installed in > d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\numpy > SciPy version 0.13.0 > SciPy is installed in > d:\devel32\WinPy2753\python-2.7.5\lib\site-packages\scipy > Python version 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit > (Intel)] > nose version 1.3.0 > > > > > 2013/11/12 David Cournapeau > >> >> >> >> On Mon, Nov 11, 2013 at 
1:32 PM, Carl Kleffner wrote: >> >>> Hi David, >>> >>> I used my customized mingw-w64 toolkit mentioned in this thread to >>> circumvent several problems with the mixed compiler environment. It is a >>> fully static gcc build. Hence the compiled executables and dlls never >>> depend on mingw dlls, even without usage of the -static -static-libgcc ... >>> compiler options. The crt runtime is msvcr90.dll as used by python-2.7. The >>> manifest is linked to the executables by default. I have to write some >>> documentation about that, but this may take some time due to my workload. >>> >> >> Hm, interesting, I did not even know this was possible! >> >> Does that work for scipy as well? I am a bit worried about using custom >> toolchains, OTOH, that's the only real solution we seem to have ATM. >> >> David >> >>> >>> You can test it on a windows cmd prompt with: objdump -p >>> numpy\core\_dotblas.pyd | findstr DLL >>> DLL >>> vma: Hint Time Forward DLL First >>> DLL Name: ADVAPI32.dll >>> DLL Name: KERNEL32.dll >>> DLL Name: msvcr90.dll >>> DLL Name: python27.dll >>> >>> Carl >>> >>> >>> 2013/11/11 David Cournapeau >>> >>>> >>>> >>>> >>>> On Mon, Nov 11, 2013 at 10:28 AM, Carl Kleffner wrote: >>>> >>>>> done >>>>> >>>>> all logs in https://gist.github.com/anonymous/7411039 >>>>> >>>> >>>> Thanks. Looking at the log, it does not look like you are statically >>>> linking the mingw runtimes, though. I would expect numpy not to work if you >>>> don't have mingw dlls in your %PATH%, right? >>>> >>>> David >>>> >>>>> >>>>> >>>>> Regards Carl >>>>> >>>>> >>>>> 2013/11/11 David Cournapeau >>>>> >>>>>> Carl, >>>>>> >>>>>> It looks like the google drive contains the binary and not the actual >>>>>> log? 
For the log, it is more convenient to put it on gist.github.com, >>>>>> >>>>>> thanks for the work, >>>>>> David >>>>>> >>>>>> >>>>>> On Mon, Nov 11, 2013 at 9:51 AM, Carl Kleffner wrote: >>>>>> >>>>>>> Hi David, >>>>>>> >>>>>>> I made a new build with the numpy-1.8.0 code base. Binaries and logs >>>>>>> are included in the following archive: >>>>>>> >>>>>>> 2013-11-11_i686-numpy-1.8.0-py27-openblastest.7z >>>>>>> https://drive.google.com/file/d/0B4DmELLTwYmlajBzZFpXcVYycEE/edit?usp=sharing >>>>>>> >>>>>>> Regards >>>>>>> >>>>>>> Carl >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> 2013/11/8 David Cournapeau >>>>>>> >>>>>>>> Hi Carl, >>>>>>>> >>>>>>>> Thanks for that. I am a bit confused by the build log >>>>>>>> https://drive.google.com/file/d/0B4DmELLTwYmlRTRlOHpJbjdmbTQ/edit?usp=sharing, >>>>>>>> in particular the failures for lapack_lite and umath. >>>>>>>> >>>>>>>> May we ask you to put the logs on gist.github.com? Google docs >>>>>>>> is rather painful to use for logs (no line numbers, etc.) >>>>>>>> >>>>>>>> thanks, >>>>>>>> David >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Fri, Nov 8, 2013 at 7:42 AM, Carl Kleffner >>>>>>> > wrote: >>>>>>>> >>>>>>>>> Hi list, >>>>>>>>> >>>>>>>>> I created a repository at google code >>>>>>>>> https://code.google.com/p/mingw-w64-static with some downloads as >>>>>>>>> well as my last numpy setup.py log. 
>>>>>>>>> >>>>>>>>> Regards >>>>>>>>> >>>>>>>>> Carl >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> NumPy-Discussion mailing list >>>>>>>>> NumPy-Discussion at scipy.org >>>>>>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bloring at lbl.gov Wed Nov 13 20:21:19 2013 From: bloring at lbl.gov (Burlen Loring) Date: Wed, 13 Nov 2013 17:21:19 -0800 Subject: [Numpy-discussion] segv PyArray_Check Message-ID: <5284258F.5020208@lbl.gov> Hi, I'd like to add numpy support to an existing code that uses swig. I've changed the source file that has code to convert Python lists into native data from C to C++ so I can use templates to handle data conversions. The problem I'm having is a segfault on PyArray_Check called from my C++ source. Looking at things in the debugger, the argument passed to it is indeed an initialized Python list, my swig file has import_array() in its init, and I've verified that it is getting called. Adding a function in my C++ source to call import_array() a second time prevents the segfault. Could anyone explain why I need import_array() in both places, and whether that's the correct way to handle it? Thanks Burlen From david.froger at inria.fr Thu Nov 14 01:48:37 2013 From: david.froger at inria.fr (David Froger) Date: Thu, 14 Nov 2013 07:48:37 +0100 Subject: [Numpy-discussion] segv PyArray_Check In-Reply-To: <5284258F.5020208@lbl.gov> References: <5284258F.5020208@lbl.gov> Message-ID: <20131114064837.14035.30762@fl-58186.rocq.inria.fr> Hi Burlen, SWIG will generate a file named, for example, foo_wrap.c, which will contain a call to import_array() inserted by SWIG because of the %init %{ import_array(); %} in the SWIG script. So in the file foo_wrap.c (which will be compiled to a Python module _foo.so), you should be able to use PyArray_Check without a segfault. Typically, PyArray_Check will be inserted in foo_wrap.c by a typemap, for example a typemap from numpy.i . Do you use PyArray_Check in foo_wrap.c or in another file? Is PyArray_Check called in another C library that _foo.so is linked with? David Quoting Burlen Loring (2013-11-14 02:21:19) > Hi, > > I'd like to add numpy support to an existing code that uses swig. 
I've > changed the source file that has code to convert Python lists into > native data from C to C++ so I can use templates to handle data > conversions. The problem I'm having is a segfault on PyArray_Check > called from my C++ source. Looking at things in the debugger, the > argument passed to it is indeed an initialized Python list, my swig file > has import_array() in its init, and I've verified that it is getting > called. Adding a function in my C++ source to call import_array() a > second time prevents the segfault. Could anyone explain why I need > import_array() in both places, and whether that's the correct way to handle it? > > Thanks > Burlen > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From pierre.haessig at crans.org Thu Nov 14 03:25:21 2013 From: pierre.haessig at crans.org (Pierre Haessig) Date: Thu, 14 Nov 2013 09:25:21 +0100 Subject: [Numpy-discussion] savetxt fmt argument fails when using a unicode string In-Reply-To: References: <5283A895.509@crans.org> Message-ID: <528488F1.6090800@crans.org> Le 13/11/2013 17:54, Daπid a écrit : > > np.savetxt('a.csv', [1], fmt=str('%.3f')) Thanks, that's what I did too. I'm just still wondering whether there is a cleaner solution... > > Without the str, I get a clearer error: > ValueError: invalid fmt: u'%.3f' > Yeah, the commit by Warren Weckesser makes things much clearer. -- Pierre -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 897 bytes Desc: OpenPGP digital signature URL: From davidmenhur at gmail.com Thu Nov 14 05:22:23 2013 From: davidmenhur at gmail.com (Daπid) Date: Thu, 14 Nov 2013 11:22:23 +0100 Subject: [Numpy-discussion] savetxt fmt argument fails when using a unicode string In-Reply-To: <528488F1.6090800@crans.org> References: <5283A895.509@crans.org> <528488F1.6090800@crans.org> Message-ID: On 14 November 2013 09:25, Pierre Haessig wrote: > Le 13/11/2013 17:54, Daπid a écrit : > > > > np.savetxt('a.csv', [1], fmt=str('%.3f')) > Thanks, that's what I did too. > > I'm just still wondering whether there is a cleaner solution... > I have a fix, I am submitting a PR. The only thing I am missing is where the tests for IO are. https://github.com/numpy/numpy/pull/4053 -------------- next part -------------- An HTML attachment was scrubbed... URL: From max_linke at gmx.de Thu Nov 14 11:18:26 2013 From: max_linke at gmx.de (Max Linke) Date: Thu, 14 Nov 2013 17:18:26 +0100 Subject: [Numpy-discussion] strange runtimes of numpy fft Message-ID: <1384445906.12947.9.camel@x200.kel.wh.lokal> Hi, I noticed some strange scaling behavior of the fft runtime. For most array sizes the fft will be calculated in a couple of seconds, even for very large ones. But there are some array sizes in between where it will take ~20 min (e.g. 400000). This seems really odd to me, because an array with 10 million entries is transformed in ~2 s. Is this typical for numpy? I attached a plot and an ipynb to reproduce and illustrate it. 
best Max -------------- next part -------------- { "metadata": { "name": "" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "code", "collapsed": false, "input": [ "%matplotlib inline\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "import timeit" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 1 }, { "cell_type": "code", "collapsed": false, "input": [ "x_length = np.logspace(2, 7, 25)\n", "times = np.empty((25,2))\n", "for i, length in enumerate(x_length):\n", " setup = 'import numpy as np\\ntrj=np.random.uniform(low=-1,high=1,size=%i)' % (length)\n", " times[i] = timeit.repeat(stmt='fft=np.fft.fft(trj)', setup=setup,\n", " repeat=2, number=2)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 2 }, { "cell_type": "code", "collapsed": false, "input": [ "plt.plot(x_length, times[:, 0], marker='o');\n", "plt.plot(x_length, times[:, 1], marker='o');\n", "#plt.yscale('log');\n", "plt.xscale('log');\n", "plt.xlabel('elements in array');\n", "plt.ylabel('execution time in seconds');" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "display_data", "png": 
"(base64-encoded PNG of the execution-time plot scrubbed)", "text": [ "" ] } ], "prompt_number": 34 }, { "cell_type": "code", "collapsed": false, "input": [ "np.__version__" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 4, "text": [ "'1.8.0'" ] } ], "prompt_number": 4 }, { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "Check if fft produces correct results" ] }, { "cell_type": "code", "collapsed": false, "input": [ "trj = np.sin(np.arange(x_length[-1]))\n", "fft = np.fft.fft(trj)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 20 }, { "cell_type": "code", "collapsed": false, "input": [ "plt.plot(np.abs(fft) ** 2)\n", "plt.title('fourier 
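
The runtime spikes Max describes are consistent with a well-known property of FFTPACK, the FFT backend used by np.fft in NumPy 1.8: lengths that factor into small primes transform in roughly O(n log n), while a length containing a large prime factor falls back to a direct O(n^2) computation. Whether that is the cause of the ~20 min case above is an assumption on my part, and the helper fft_seconds below is hypothetical, but a sketch like this can probe the effect:

```python
import time

import numpy as np

def fft_seconds(n):
    """Wall-clock seconds for one FFT of a random length-n array."""
    x = np.random.uniform(low=-1, high=1, size=n)
    t0 = time.time()
    np.fft.fft(x)
    return time.time() - t0

# 2**16 = 65536 factors entirely into small primes; 65537 is prime.
# On an FFTPACK-based NumPy, a prime length falls back to a direct
# O(n^2) transform and can be dramatically slower than its neighbour.
for n in (2**16, 65537):
    print(n, fft_seconds(n))
```

On NumPy builds whose FFT backend handles prime lengths efficiently (e.g. via Bluestein's chirp-z algorithm), the two timings should be comparable; on an FFTPACK build the prime length can be orders of magnitude slower.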
UV0dDS7du0yb//222+JjY0lKiqK\nZcuW2StcIYRwOXl6Pd40JpEALy2lNd0oiSxatIi0tDSLbampqSQmJnLixAmmTJlCamoqAJmZmWza\ntInMzEzS0tJYsmSJ+SEpixcvZv369Zw8eZKTJ082O6cQQlyu8sp09FQHARDUQ0t5XTdKIhMnTkSr\n1Vps27ZtG8nJyQAkJyezZcsWALZu3cq8efPQaDREREQwePBgDh48SEFBAZWVlcTHxwOwYMEC8zFC\nCHG5O1epx8+9cSTS2yeQSieUg3fomkhRUREhIY0Plg8JCaGoqAiA/Px8wsPDzfuFh4eTl5fXbHtY\nWBh5eXmODFkIIS5ZJVU6Arwak0iIv5Zqk+MfTOW0hXWVSoVKpXJW80II4fJKa/UEeTdOZ/XVaqnB\n8SMRhz5jPSQkhMLCQkJDQykoKCA4OBhoHGHk5OSY98vNzSU8PJywsDByc3MttoeFhbV6/pSUFPOf\nExISSEhIsPlnEEKIS0VFg55g334AhAVpqW9HOfj09HTS09NtFoNDk8isWbPYsGEDTz75JBs2bODm\nm282b58/fz6PPvooeXl5nDx5kvj4eFQqFX5+fhw8eJD4+Hg2btzIQw891Or5L04iQgjR3Z036ujj\nPwqAfr205nLw6jbmmH79A/vZZ5/tUgx2SyLz5s1j3759lJSU0K9fP/7whz+wfPlykpKSWL9+PRER\nEWzevBmAmJgYkpKSiImJwd3dnbVr15qnutauXcvChQupqanh+uuvZ8aMGfYKWQghXEo1evoGNq6J\nBPtqwbuUykrw93dcDCql6VpaF6dSqegmH0UIIdpF87ur+XjJX7hxxG+oN9bj+YeenF5UT0RE+9eb\nu/rdKXesCyGECzKZwOChJyK4cWHdw80DlcmDvGLHloOXJCKEEC6oogJU3nqCfQPN2zQGLTkOLgcv\nSUQIIVxQic7UWMHX68JN3Z6KlnwHl4OXJCKEEC6osYJvTzRuGvO2HiotBQ4uBy9JRAghXNCZcxcq\n+Dbp6eb4cvCSRIQQwgXl6nV4K0EW2/w0WnRVl3AS0ev1HDt2zF6xCCGEaKeCMj091ZYjkQAvLXoH\nl4O3mkQmTZpERUUFer2euLg47r33Xh555BFHxCaEEKIV5yp1+Gksk0igt5aK+kssiZSXl+Pn58dH\nH33EggULOHToEHv27HFEbEIIIVpRUqUnwNNyOqu3j5bKhkssiRiNRgoKCti8eTM33HADgFTfFUII\nJyut1RPUw3IkEuIfSJXpEksi//u//8v06dOJjIwkPj6erKwsoqKiHBGbEEKIVpQ36Aj2tRyJ9HFC\nOXirBRhvu+02brvtNvPryMhIPvzwQ7sGJYQQom3njXpC/eIstoUHaql3c+yDqVpNIkuXLjX/+eIC\nXU1TWWvWrLFzaEIIIVpTo+gJC7SczuofrMXgXoqigKNWHVqdzoqLiyMuLo66ujqOHDnCkCFDiIqK\n4rvvvqO+vt4x0QkhhGhRnZuO/r0sp7NC/BrLwVdVOS6OVkciCxcuBGDdunUcOHAAjabx1vrFixcz\nYcIEhwQnhBCiOUUBg0ZP/2DLkYjWWwteZej1Cj4+jhmKWF1YLysro6Kiwvy6srKSsrIyuwYlhBCi\ndZWVQA8doX6WScQZ5eCtLqwvX76cMWPGmB+nuG/fPnkMrRBCOFGJzgSe5Y0jj1/RGLTk6koBX4fE\nYjWJLFq0iBkzZnDw4EFUKhWrVq0iNDTUEbEJIYRowZnCctyMPrirm3+Fe5q05OlLgf4OiaVdtbNM\nJhO9e/cmICCAEydO8OWXX9o7LiGEEK04U6zDwxDU4nveKi2FDiwHb3Uk8uSTT7Jp0yZiYmJwc3Mz\nb7/mmmvsGpgQQoiW5en1eBHY4ns93bQUVVxCSeTjjz/mp59+wtPT0xHxCCGEsKKgTNesgm8TR5eD\ntzqdFRkZKfeFCCHEJaSoU
o+fe8vTWY4uB291JOLt7c2oUaOYMmWKeTSiUqnkjnUhhHASXZWeAL+W\nRyKB3lpO1F1CSWTWrFnMmjXLXO5EURSp4iuEEE6kr9UxIKTlkUhvHy3fNvzosFisJpGFCxdSV1fH\niRMnAIiOjjbfvS6EEMLxKhr09PaNbPG9YD8t1Q4sB281iaSnp5OcnMyAAQMAOHv2LBs2bGDSpEl2\nD04IIURzjRV8x7X4Xp+AQIeWg7eaRB599FF27drF0KFDAThx4gRz587lyJEjdg9OCCFEc9XoCAts\neTorPEhLvfoSujrLYDCYEwjAkCFDMBgMdg1KCCFE6+rUevr1anlhvV/vxnLwjmJ1JBIXF8e9997L\nnXfeiaIovPPOO4wdO9YRsQkhhPiVxgq+OiJCWk4ifQK0KN56amrA29v+8Vgdiaxbt45hw4axZs0a\nXnnlFa644grWrVvXpUZXrlzJFVdcQWxsLPPnz6eurg69Xk9iYiJDhgxh2rRpFpWCV65cSVRUFNHR\n0ezatatLbQshhCs7fx7w1tPHv+XprMAeWvBsLAfvCCql6ZGFraiqqsLLy8tc8sRoNFJXV0ePHj06\n1WB2djbXXnstP/zwA56entx+++1cf/31HD9+nF69evHEE0+watUqSktLSU1NJTMzk/nz5/Ovf/2L\nvLw8pk6dyokTJ1CrLfPfxU9fFEKI7urUaSORb3pieKYON7Vbi/uo/6cnGbcUEj/KeiXfrn53Wh2J\nXHvttdTU1JhfV1dXM3Xq1E436Ofnh0ajobq6GoPBQHV1NX379mXbtm0kJycDkJyczJYtWwDYunUr\n8+bNQ6PREBERweDBgzl06FCn2xdCCFd25lwZbga/VhMINJaDzylxzLqI1SRSV1eHj4+P+bWvry/V\n1dWdbjAwMJDHHnuM/v3707dvXwICAkhMTKSoqIiQkBAAQkJCKCoqAiA/P5/w8HDz8eHh4eTl5XW6\nfSGEcGVni/V4GFteD2niYdKSp3NMErG6sN6zZ0++/fZb4uLiADh8+DDeXVitycrKYvXq1WRnZ+Pv\n789tt93G22+/bbGPSqVq86741t67+GFZCQkJ5gdpCSFEd5Gr0+GltJ1EvFVaCstbTiLp6emkp6fb\nLB6rSWT16tXcdttt9O3bF4CCggI2bdrU6QYPHz7M+PHjCQpqXBSaM2cO33zzDaGhoRQWFhIaGkpB\nQQHBwcEAhIWFkZOTYz4+NzeXsLCwFs8tT1wUQnR3BeV6fNQtL6o36aluvRz8r39gP/vss12Kx+p0\n1rhx4/jpp5/429/+xrp16/jxxx+7dIlvdHQ0GRkZ1NTUoCgKe/bsISYmhpkzZ7JhwwYANmzYwM03\n3ww01u765z//SX19PadPn+bkyZPEx8d3un0hhHBl5yr0+GraHon4abSUOKgcvNWRSFVVFS+++CJn\nz57l73//OydPnuSnn37ixhtv7FSDI0eOZMGCBYwdOxa1Ws2YMWO47777qKysJCkpifXr1xMREcHm\nzZsBiImJISkpiZiYGNzd3Vm7dq0UgBRCXLZKqnQE+LSdRAI8tZS2Mp1la1Yv8U1KSiIuLo633nqL\n48ePU1VVxfjx4zl69KhDAmwvucRXCHE5GP3IM/Trp2Lboymt7jN79R/Iyq7n2OrnrJ7P7pf4ZmVl\n8eSTT+Lh4QE0LrQLIYRwjvIGPcG+bY9EevXUUtlwiVzi6+npaXGfSFZWljwqVwghnOS8UUdoK3er\nNwn201LloHLwVtdEUlJSmDFjBrm5ucyfP5+vvvqKN9980wGhCSGE+LUa9IQFtj0S6asNpEa5RJLI\ntGnTGDNmDBkZGQCsWbOGXr162T0wIYQQzdWpda1W8G0SFqSlzkHl4K1OZx04cAAvLy9uvPFGSktL\nWbFiBWfOnHFEbEIIIS6iKNCg0RMR3PZ0Vr9eWgyaSySJLF68mB49enD06FFefPFFIiMjWbB
ggSNi\nE0IIcZHqasBbT1+tlZFIoBbFs5T6evvHZDWJuLu7o1ar2bJlCw888AAPPPAAlZWV9o9MCCGEhXMl\nBvCoJMAroM39AntowavUIeXgrSYRX19fVqxYwdtvv82NN96I0WikoaHB7oEJIYSwdKaoDLcGf9Sq\ntr+6Pdw8UCke5BWft3tMVpPIpk2b8PLy4o033iA0NJS8vDwef/xxuwcmhBDCUnsq+DZxb9Byttj+\n6yJWr87q06cPjz76qPl1//79ZU1ECCGcIFevw0tpe1G9iaeiJU9fCvS3a0xWRyJCCCEuDQWlenqq\n2zcS8UZLQan9RyKSRIQQwkUUVerws1LBt0lPtZZzrZSDtyVJIkII4SJKqvQEeLZvOsvXQeXgra6J\nHDhwgGeffZbs7GwMBgPQWPXx1KlTdg9OCCHEBaW1eqt3qzfx99SiP38JJJF77rmH1atXM2bMGNzc\nWn8wvBBCCPsqb9AR5zOsXfsGemvJdsBz1q0mkYCAAK677jq7ByKEEKJt5416Qv3bNxLp5aPl+/of\n7RxRO5LI5MmTefzxx5kzZ45FCfgxY8bYNTAhhBCWqtFZreDbxFHl4K0mkYyMDFQqFYcPH7bYvnfv\nXrsFJYQQorl6tZ5+vdq3sN4nQOuQcvBWk0h6errdgxBCCGGdQaMnIrh9I5HwwECHlINvNYls3LiR\nu+66ixdeeAGVSmXerigKKpXK4i52IYQQ9lVTA4qXzmoF3yb9emtpcHdiEqmurgagsrLSIokIIYRw\nvKKSBvCoIsDbv137hwc1loM3GMDd6pxT56kURbF/rWAHUKlUdJOPIoQQzew7XMyUj4ZhWFHSrv3r\njfV4PuvDuaV19O7d+kCgq9+dcse6EEK4gDPndHgY2reoDr+UgzdpyCuusmNUkkSEEMIl5Or0eNG+\n9ZAm7gYtOXYuBy9JRAghXEBBma7dFXybeJi05Or0doqokdUkUlhYyD333MOMGTMAyMzMZP369XYN\nSgghhKVzlXr83Ns/nQW/lIMvc/JIZOHChUybNo38/HwAoqKieOmll+walBBCCEuNFXw7NhLpqdZS\nZOdy8FaTSElJCbfffru5+KJGo8HdnteLCSGEaEZfqyOoR8dGIr4aLcV2ruRrNYn4+Pig0+nMrzMy\nMvD3b991yq0pKyvj1ltvZdiwYcTExHDw4EH0ej2JiYkMGTKEadOmUVZWZt5/5cqVREVFER0dza5d\nu7rUthBCuKLyej3BPh0bifh7atFXOzmJvPDCC8ycOZNTp04xfvx47rrrLtasWdOlRpctW8b111/P\nDz/8wLFjx4iOjiY1NZXExEROnDjBlClTSE1NBRrXYDZt2kRmZiZpaWksWbIEk8nUpfaFEMLVnDfp\n2l3Bt0mgt5byOvsmEavzUnFxcezbt48TJ06gKApDhw5Fo9F0usHy8nL279/Phg0bGgNwd8ff359t\n27axb98+AJKTk0lISCA1NZWtW7cyb948NBoNERERDB48mEOHDnHVVVd1OgYhhHA1NYqesMCOTWf1\n8tFyvMG+5eCtJhGDwcCOHTvMTzbcuXNnl2pnnT59mt69e7No0SKOHj1KXFwcq1evpqioiJCQEABC\nQkIoKioCID8/3yJhhIeHk5eX16m2hRDCVdW5tf+phk16+2qpMjp5JDJz5ky8vb2JjY1Fre76bSUG\ng4EjR47w6quvMm7cOB5++GHz1FUTlUrVZr2u1t5LSUkx/zkhIYGEhIQuxyuEEJcCg0bHgHZW8G3S\nx19L9a/Kwaenp9u0OrvVJJKXl8exY8ds1mB4eDjh4eGMGzcOgFtvvZWVK1cSGhpKYWEhoaGhFBQU\nEBwcDEBYWBg5OTnm43NzcwkLC2vx3BcnESGE6C5qa0Hx0tMvqGPTWWFB2mbl4H/9A/vZZ5/tUmxW\nhxbTpk1j586dXWrkYqGhofTr148TJ04AsGfPHq644gp
mzpxpXifZsGEDN998MwCzZs3in//8J/X1\n9Zw+fZqTJ08SHx9vs3iEEOJSd07XAO41+Hv5dei4fr0CMbg5eTpr/PjxzJ49G5PJZF5QV6lUVFRU\ndLrRV155hTvuuIP6+noiIyP5v//7P4xGI0lJSaxfv56IiAg2b94MQExMDElJScTExODu7s7atWul\nNL0Q4rJyqkCPW4O2w999/XtrMXmWYjTCL7f62ZzVUvARERFs27aN4cOH22RNxF6kFLwQort667Mf\n+N3e2VT/qWNXWjWVgy9ZVkdQUMsJyO6l4Pv3788VV1xxSScQIYToznJ1OryUji2qg2PKwVudzho4\ncCCTJ0/muuuuw8PDA0AejyuEEA5UWK6np7pji+pN3A1ackpKGYGPjaP65fzWdhg4cCADBw6kvr6e\n+vp68zPWhRBCOEZRhR4/946PRAA8jFrydKVAP9sG9QurSUQumxVCCOcqqdYR4NW5JOKFloJS+12h\n1WoSWbZsGS+//DIzZ85s9p5KpWLbtm12C0oIIcQF+ho9/fw7N53V001LYYX9HkzVahJZsGABAI89\n9liz92Q6SwghHKeiXk9vn9hOHevrrqW40gkjkbi4OAD+/e9/8/DDD1u8t3r1aiZNmmS3oIQQQlxw\n3qSjTydHIv6eWvQ19ksiVq/bbbqL/GJvvvmmPWIRQgjRgmpFT9/Azq2JBHprKat1wkjkvffe4913\n3+X06dMW6yKVlZUEdbB+ixBCiM6rc9N1uIJvk6CeWn6wYzn4VpPI+PHj6dOnD8XFxfz+978339Ho\n5+fHiBEj7BaQEEIISwaNnoEhnfvxHmzncvCtJpEBAwYwYMAAMjIy7Na4EEKIttXXg+LZ8WeJNAkN\naF4O3paklokQQlzCCovrwb0WP0/fTh3fV6ulTiVJRAghLkuni/S4NQR2+taK/r0DaXCXJCKEEJel\ns+d0eBg6N5UFv5SD9yjFZLJhUBexmkQOHDhAYmIiUVFR5jpagwYNsk80QgghLOTq9Xgpnb8iNthP\nC15lVFTY51EZVmtn3XPPPaxevZoxY8bgZq+nmgghhGhRQZmenurOj0QuLgcfEGD7Sr5Wk0hAQADX\nXXedzRsWQghhXVGFDj9N55MINJaDP1tcyhVRTkgikydP5vHHH2fOnDl4enqat48ZM8bmwQghhLBU\nUqUnwLNrN3h7GLXk2qkcvNUkkpGRgUql4vDhwxbb9+7da/NghBBCWCqt1RPu17WRiBdaCu1UDt5q\nEklPT7dLw0IIIawrb9Ax2rd/l87RQ62lqMI+ScTq1VllZWU88sgjxMXFERcXx2OPPUZ5ebldghFC\nCGHpvFFPaBdHIvYsB281idx99934+fnx/vvvs3nzZnx9fVm0aJFdghFCCGGpWtF1uoJvE38PLbpq\n+zyYyup0VlZWFh999JH5dUpKCiNHjrRLMEIIISzVuekZ0KtrC+taby2FxU4aiXh7e7N//37z6wMH\nDtCjRw+7BCOEEMKSwV3PgJCujUR69dRS0eCkhfW//e1vLFiwwLwOotVqW3xQlRBCCNtqaADFS0dE\ncNdGIr19tZy3Uzl4q0lk1KhRHDt2jIqKCqDxeSJCCCHsr6C4Ftwa8PHo2aXzhAZoqTE5OIls3LiR\nu+66ixdeeMGieqSiKKhUKh599FG7BCSEEKJRdpEet/rOV/BtEhaopU7t4CRSXV0NND4Ot6sfQAgh\nRMedLdbjYej648jDe2lpcHNwErn//vsBmDp1KhMmTLB478CBA11u2Gg0MnbsWMLDw9m+fTt6vZ7b\nb7+dM2fOEBERwebNmwkICABg5cqVvPHGG7i5ubFmzRqmTZvW5faFEOJSl6vT40XXFtUBBgQHYvQo\nRVHA1mMCq1dnLV26tNm2hx56qMsNv/zyy8TExJhHOampqSQmJnLixAmmTJlCamoqAJmZmWzatInM\nzEzS0tJYsmQJJns
VxhdCiEtIfqmuSxV8m4T6N5aDP3/e9uXgWx2JfPPNN3z99dcUFxfz4osvoiiN\njVdWVmI0GrvUaG5uLjt27ODpp5/mxRdfBGDbtm3s27cPgOTkZBISEkhNTWXr1q3MmzcPjUZDREQE\ngwcP5tChQ1x11VVdikEIIS51RZV6/Ny7Pp11cTn4aF/bVvJtdSRSX19vThiVlZWcP3+e8+fP4+fn\nxwcffNClRh955BH+/Oc/o1ZfaL6oqIiQkBAAQkJCKCoqAiA/P5/w8HDzfuHh4eTl5XWpfSGEcAW6\nKj0Bnl0fiQC4NWg5e8726yKtjkQmTZrEpEmTWLRoEQMGDLBZg5988gnBwcGMHj261eKOKpWqzcV8\nWegXQlwO9LU6wnt3fSQC4GGyTzl4q/eJLFy4sNk2lUrFF1980akGv/76a7Zt28aOHTuora2loqKC\nu+66i5CQEAoLCwkNDaWgoIDg4GAAwsLCyMnJMR+fm5tLWFhYi+dOSUkx/zkhIYGEhIROxSiEEJeC\n8gY9o30G2uRcXoqW/NJS0tPTbVqdXaU0LXa04uLniNTW1vLhhx/i7u7On//85y43vm/fPv7yl7+w\nfft2nnjiCYKCgnjyySdJTU2lrKyM1NRUMjMzmT9/PocOHSIvL4+pU6fy888/NxuNqFQqrHwUIYRw\nKSEP3cK9V83l+fm3dflc/Z64iZsHLOKVB2622N7V706rI5GxY8davJ4wYQLjxo3rdIO/1pQMli9f\nTlJSEuvXrzdf4gsQExNDUlISMTExuLu7s3btWpnOEkJcFqoVPWFa20xn+bprKTnvwDWRJnr9hfLB\nJpOJw4cPm0ugdFXTugtAYGAge/bsaXG/p556iqeeesombQohhKuoV+vp18s2C+v+nlp01U5IImPG\njDH/8nd3dyciIoL169fbPBAhhBCWGjQ6IrpYwbdJgJeWEr0Tkkh2drbNGxVCCNE2gwEUL32XK/g2\n6dVDS1bhjzY518Ws3rH+17/+ldKLHvBeWlrK2rVrbR6IEEKICwpKakBtxMfTNs9vslc5eKtJ5PXX\nX0er1Zpfa7VaXn/9dZsHIoQQ4oLsQj1udUE2u5DIXuXgrSYRk8lkUavKaDTS0NBg80CEEEJccOac\nHg+jbdZDAPpqtdSqnLAmMn36dObOncv999+Poii89tprzJgxw+aBCCGEuCBHp8NLsV0S6ddLS4O7\nE5LIqlWreP3111m3bh0AiYmJ3HvvvTYPRAghxAUF5Xp6qm2zqA7QP1iLUeOEJOLm5kZycjKTJ08m\nOjra5gEQh5/ZAAAcYklEQVQIIYRo7lyFHj93205n4VVKdbVCjx62u2Hb6prItm3bGD16tHkK67vv\nvmPWrFk2C0AIIURzJVU6AjxtNxLxdPdEZfIgr7jKZueEdiSRlJQUDh48aL5Ca/To0Zw6dcqmQQgh\nhLBUWqsnsIftRiLQWA7+jI3LwVtNIhqNxvyYWvNBaquHCSGE6ILyBh3BPrZNIhqjltwSByeRK664\ngnfeeQeDwcDJkydZunQp48ePt2kQQgghLFUa9IT62246CxrLwReUOjiJvPLKKxw/fhxPT0/mzZuH\nn58fq1evtmkQQgghLNWgJ0xr25FIT7WWwnLbJhGrV2edOXOGFStWsGLFCvO29PR0eeCTEELYUZ1a\nR7iNKvg28XHXUlzp4JFIUlISq1atQlEUqqurWbp0KcuXL7dpEEIIISw1aPQMDLHtdJa/h+3LwVtN\nIgcPHiQnJ4err76a+Ph4+vTpw9dff23TIIQQrufTT79k+vT/JiEhhenT/5tPP/3S2SF1G0YjKJ56\nm5WBbxLgraWs1sHTWe7u7nh7e1NTU0NtbS2DBg2Sq7OEuMx9+umXLFu2k6ys583bsrKeBuCGG65x\nVljdRkFJNajA18s2FXyb9OqhJfvcTzY9p9VsEB8fj5eXF4cPH2b//v28++673HZb1
5/3K4RwXWvW\n7LJIIABZWc/zyiu7nRRR93K6UI9bvW1HIfBLOXiDg0ci//jHP8zPVO/Tpw/btm1j48aNNg1CCOFa\n6upa/uqorXVzcCTd05lzOjwMtk8iIf5aqhW99R07wOpIJC4ujo0bN/KHP/wBgLNnzzJkyBCbBiGE\ncC2enoYWt3t5GR0cSfeUq9fjpdh2UR3sUw7eahJZsmQJ33zzDe+++y4APj4+PPDAAzYNQgjhWh56\naBp9+z5tsS0y8imWLk10UkTdS0GZnp5q249EwntpqXdz8HTWwYMH+e677xg9ejQAgYGB8lAqIS5z\nN9xwDZMmQUbG/+Dh4UZpqZGXX54hi+o2UlShw9eGFXybDLBDOXirScTDwwOj8cIQtbi4WK7OEkLw\n3fFKAocfwsOnjp+/8qRBdZWzQ+o2Sqr1Nq3g2yQssLEcfG2tgpeXbcrBW00iS5cuZfbs2Zw7d46n\nnnqKDz74gOeee84mjQshXNPmrZ/yk7IMJS6rccNQWLI6C40Gbki8wbnBdQOlNXr6am2fRLw0nmDy\nIL+kikHhPjY5p9UkcueddxIXF8fnn38OwNatWxk2bJhNGhdCuKbUv69BuSXLYlvBb7J45b1XJInY\nQHm9jlE+9rmAyf2XcvAOSyIAw4YNk8QhhDArKq1rcXuNsdbBkXRP5416Qv1tvyYCF5eD72eT88ni\nhhCiw8qLPVvcbqz1cnAk3VM1OvrauIJvEy9FS77edovrkkSEEB2Smwsq/UP0+5flL1m/nZEMDljq\npKi6lzq1nn69bL8mAtBDZdty8A5PIjk5OUyePJkrrriC4cOHs2bNGgD0ej2JiYkMGTKEadOmUVZW\nZj5m5cqVREVFER0dza5duxwdshDiIp9/DtddewOTpkxiwLcDGHJsCMEHg3lk5stkZcp6iC00uOuJ\nCLbPSMTW5eAdnkQ0Gg0vvfQSx48fJyMjg7/+9a/88MMPpKamkpiYyIkTJ5gyZQqpqakAZGZmsmnT\nJjIzM0lLS2PJkiWYTCZHhy2E+MXu3ZCYCD/0/IE317zJwXcOUntNLfcvHs+xY6DTOTtC12Y0Kihe\nOgaG2ieJ+HloKbFhOXiHJ5HQ0FBGjRoFNN79PmzYMPLy8ti2bRvJyckAJCcns2XLFqDxarB58+ah\n0WiIiIhg8ODBHDp0yNFhCyEARYE9e2DYVWfJLstmQv8JBHgFMHXQVD499SGTJ8Nnnzk7StdWoKsG\nxQ1fb2+7nF/rpaWsxoWTyMWys7P57rvvuPLKKykqKiIkJASAkJAQioqKAMjPzyc8PNx8THh4OHl5\neU6JV4jL3fHj0LMnfFe9lRuH3Ii7uvECzztj7+Sd799h5kzYvt3JQbq404U6u1TwbRLUU0t5fTdI\nIufPn+eWW27h5ZdfxtfX1+I9lUqFStX63ZRtvSeEsJ+mqawtP23h5uibzduvj7qeY0XHGHlNDrt2\ngVRG6ryzxXo8DPZZVAfo7WPbcvDtuk/E1hoaGrjlllu46667uPnmxn+IISEhFBYWEhoaSkFBAcHB\nwQCEhYWRk5NjPjY3N5ewsLAWz5uSkmL+c0JCgjwHXggb27MHbr1Lz7un/8W0yGnm7Z7unsyJnsMX\n594jKuoJ9u+Ha691YqAuLFenxwv7jUSqc3MoyThk8X3ZFSpFURSbnKmdFEUhOTmZoKAgXnrpJfP2\nJ554gqCgIJ588klSU1MpKysjNTWVzMxM5s+fz6FDh8jLy2Pq1Kn8/PPPzUYjKpUKB38UIS4r9fXQ\nqxesStvIrpyP+Pj2jy3e35e9j6WfLeXW4mOUlcGLLzopUBe37PX3+einf5Lzwod2Of+bX3zNA1se\no2rNN0DXvzsdPp311Vdf8fbbb7N3715Gjx7N6NGjSUtLY/ny5ezevZshQ4bwxRdfsHz5cgBiYmJI\nSkoiJiaG6667jrVr18p0lhBOkJEBQ4fCntwt3
Dz05mbvTxwwkbLaMqInfc/27Y2L8KLjiir0+Gns\nN50VHqil3s12D6Zy+EjEXmQkIoR9/c//QI2hhr/7h3LqoVME9Wj+Rbd8z3IUBd5ZlMrnnzcmHdEx\nU59dSY2pnK+eTbXL+U/kFxL98ghMq84BLjgSEUK4pj17IGDMHsb0GdNiAgG4I/YO3vvPu9xwo0mu\n0uqk0lodgd72G4n066VF8Sylvt42P7oliQghrCovh//8B066fdziVFaT2JBYArwCGJiwn08+cWCA\n3Uh5vZ7ePvZbWPf2aCwHn1dSZZPzSRIRQli1dy9cNd7Ajqzt3BR9U5v73jniTn7yfJsjR6DUtg/R\nuyycN+rsVsG3iVu9lrNFtvnLkSQihLBqzx4YPPlrwv3CiQiIaHPfecPnse3kR0ycXEdammPi606q\n0RNmhwdSXczDqCVHJ0lECOEgu3dDZXjLV2X9Wj//fsQGx9J/yg5ZF+mExgq+9h2JeCpaCmw0TJQk\nIoRo09mzoNMrfK2zvEu9LXeOuJMz/m+TlgYGg50D7GYa3HVEhNg3ifRQaykskyQihHCAPXtg7PXf\nAzAiZES7jrk15la+KthDv6gyvvrKntF1L40VfPV2q+DbxMfNduXgJYkIIdq0Zw94jWochbT3Rt+m\nyr4DrvtQrtLqgAL9eTBp8PW27xMi/Ty0lFRJEhFC2JnJ1PgQqp/d2z+V1eSO2DvID3pb1kU6ILtQ\nj1udfRfVAQK8tJTWShIRQtjZ999Dj75nKKrNYXy/8R069vqo6zlVfZRSYw4nT9opwG7mTLEeD6N9\np7IAgnpoKa+TJCKEsLM9eyDs2i3MHDLT/OyQ9vJy9+KWYbcw4Mb3ZEqrnXJ1OrwU+ycRW5aDlyQi\nhGjV7t1Q1qfjU1lN7hhxB8Wh70gSaaf8Uj091fafzgr111JlkiQihLCjujo48K2OM/XfkjgosVPn\nuGbANTRo9GSc/p7ychsH2A2dq9Tj627/kUgfrZZalSQRIYQdffMNBE/8hMTIqXhrOve8b7VKzR2x\n8wlNfIedO20cYDdUXKVD62n/kUh4Ly31bpJEhBB2tHs3aGI7P5XV5I4Rd1Aa/i7btptsFFn3VVqr\nJ9Db/iOR/r21GN0liQgh7GjnF9Xkab7gxiE3duk8I0JGEOIfwPaj+zEabRRcN5SyYhX/3vk6O99f\nSa8Rg0hZscpubQ0I1qJ46TEYul4OXpKIEKKZ0lI4XrubceFjbfLLeOGYO3Af8w4ZGTYIrhtKWbGK\n5zenYpp1noab9OhuOc3zm1Ptlkgay8FryLdBOXhJIkKIZvbuhaDxW5gzrGtTWU3mx86nJuJDPt5e\nZ5PzdTev/vM1DLPLLLYZZpfx6qbX7damW30gZ851fUpLkogQopmduw2UBlt/dkh79fPvR3RgLJuP\n7LDJ+bqTmjoDZZqWv8wNKvvN/2mMWnJLJIkIIezgk+8PMCBgAP39+9vsnPePv4Nzoe9w+rTNTuny\nNu87RtDyqzC61bb4vrviZre2PRUt+XpJIkIIG8vOhtLgLcwdaZuprCZJV9yKaeBuNm8rs75zN1dR\nVcfEZ/6XuZ9N4fZBi/mfWc/g/nGAxT7uHwfw4O332S2GHiotheVdTyIdq2MghOj2du9WUMdsYfYw\n21ZO1HprGRMwhbf+9SFPco9Nz+1K1u88yAM770arDObwkqOMieoLgFqt4tVNr2NQGXFX3Hjw9vtI\neepJu8Xh46blnA3KwUsSEUJY+ODAUXoMcWN48HCbn3tpwp0k/+dVKivvwdfX5qe/pJWUVzPjT//N\nd4Z3eXDYy7x0TxJq9YXS+ilPPWnXpPFrfh5adDYoBy/TWUIIM5MJ9pc0Pga3vc8O6YhbYq9H1fff\nvPdprs3PbWspK1bRa8QgAkZFdPm+jdVb9tLnj7GU1BZx/IH/8PJvb7dIIM4Q4KVFXyMjESGEDR09\nCqaoLSyIf
9Uu5/dy9yI0K5IlW0fwRKof7iY1D86936G/wNuj6b4Nwy0X1m+e35za+J6VWFNWrGq8\nZFdtQmUC98G9KY0s5L9GruOPd3Xtxk1bCuqhpaDipy6fR5KIEMJs087TqP3zuTr8arucP2XFKvJ/\n+hHTLecpp/FXcHu/nLvSZtOXurWkZTAZyCrJ4YXNq1u8b+NP779I77iRhAT4ExrgT9/AAIL9/Onp\n0QOVStVi8mFHAcuG/dcllUAAevXUcr6h6yMRlaIoXb/v/RKgUqnoJh9FCKcZmrya/mP/w+6l/7DL\n+XuNGITulubX+AZ9NIiSo1ltHtuRZHDxMc9vTrVICO5b/Lll8h1Ej5/I8fzTnCo9RX7NKco4Ta1H\nHpwPga/PwXUt3Bj5qTc+YyfSoC7H4FaO0aMMPMtB3YC6wQ/TvgqYbujU53O0P/zzU9Z881d0az7r\n0nenjESEEADU1sLPmo/5w4TH7daGQd1yEUZdjzz6PXI7Wo8QgnuGEOYXSv+gECJDQhgSFsLW99/i\nxS0vNJteqm6o5Y4Fd5OrKyVPX0phWSnnKkspOV+KvqaULz5cg3F2hWUMN5ez6bN/EFhzjhCPgQzw\nj2Ni+K2MjhhE/ND+DI7wpE/cIHS0kOzq+lCy1rIccV0d6MrqKdBXcM2OkVST3/xz2/Gmwc7qq9VS\ny2W0JpKWlsbDDz+M0Wjk3nvv5cknL605VCFc3afpxahC/82s4VPs1oa7qeVreXpWBzIn5mZyyooo\nrCzim7yv+OxMEVUUUedehDEjF2ZbHmOYXcaf96Twl/Ov496gxcOoxQstPVRafNy1+Gm00MrNev61\nfdCte7/VOB+ce3/zEUwr9214ekLfEA/6hvTCW/GkuqXPbcebBjsrLMg25eBdIokYjUYefPBB9uzZ\nQ1hYGOPGjWPWrFkMGzbM2aFdktLT00lISHB2GJcE6YsLrPXFGwc+Ico9sdPPDmmP1r6cf3/7MlJ+\nO6/V4/xHDaCCs822+1UOoPzP2a0e1+uzDeha+LWtVNa3GWfTNFlH79voSPJxtv69tRhaKbfSES5x\nie+hQ4cYPHgwERERaDQa5s6dy9atW5vt15HL8Dp7+Z4rHJeenu7Q9i7lPklPT3eJOB1x3MV90dJx\nO3Y/QPbWL+1agjzlqSd5Omk5QR8Nwv/jAQR9NIink5Zb/XLWmFr+Ja+x8gv/wbn3t3gn+OiImHbF\nWnI0i7J/Z1NyNKtdC/+d/XzO8M6Gf6B8fa7rJ1JcwPvvv6/ce++95tcbN25UHnzwQYt9AIUUFPeR\nAcozz6e2eb5nnk9V3EcGKKRg/t+vj9u7d2+Hjmtp/4uPU0f2bLW9lo7du3evZXvJbbd38etJ106x\n+vk60i/J9/y2Q8eoI3u22Vazz2alrYs/W0t9+ev2LPZ/5pk2P1uH+yS5g8clt93er/8e2/o7b2n/\nptettXdxfzb1RZufr5390lIsXd2/rfeT7/mt1X9nbfVN0IhBSs/IECVoxCDlmedTW+yLrnBUX7R3\nu7XXFv3ZxTTgEtNZHbnpyTC7jOc+WMkn51vPsP9O+0fzxbZfHZe//2v6Thzf7uNe/2xYs/0vPs4U\nXtXicZ+cP9diW/n7v+ZcVeaF9rKBga23d/E5jmTuR/md5XC9K/3y1t/e4j9PtXx7cUvHmMKr2myr\n2Wez0tbFn62lvmxqb/v5IhRFoWD/N/SZePUvx37D1royvt/5fy3/3X24kq2VBaho/m9MpVJxNG29\n5XHZF/py+/miFj8fcOG4vcDA5u0pNF4NU7g/g5AJV5qP+8+uDZha+Dv/44fP817pSUq++pbA8aMw\nYUJRTOi//jf+44eTs287yuyqZu1teO1N0h+rQo2a0m+O8c75M6hUatSoUanUnPj8PUyzK5v1y6ub\nXrf667mjU4XW9m/r/Yjwvjw9aLnF9NLwAQMtYvz18U2vm+4ET0lJISUlBcD
[base64-encoded PNG output elided]\n", "text": [ "" ] } ], "prompt_number": 34 }, { "cell_type": "code", "collapsed": false, "input": [ "np.__version__" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 4, "text": [ "'1.8.0'" ] } ], "prompt_number": 4 }, { "cell_type": "heading", "level": 1, "metadata": {}, "source": [ "Check if fft produces correct results" ] }, { "cell_type": "code", "collapsed": false, "input": [ "trj = np.sin(np.arange(x_length[-1]))\n", "fft = np.fft.fft(trj)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 20 }, { "cell_type": "code", "collapsed": false, "input": [ "plt.plot(np.abs(fft) ** 2)\n", "plt.title('fourier
spectra of sin(x)');" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "display_data", "png": "[base64-encoded PNG elided: power spectrum of sin(x)]\n", "text": [ "" ] } ], "prompt_number": 21 }, { "cell_type": "code", "collapsed": false, "input": [ "acf = np.fft.ifft(np.abs(fft) **2) / x_length[-1]\n", "acf
= np.real(acf[:acf.size // 2] / acf[0])" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 9 }, { "cell_type": "code", "collapsed": false, "input": [ "plt.plot(acf, marker='o');\n", "plt.xlim(0, 10)\n", "plt.title('acf of sin(x)');" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "display_data", "png": "[base64-encoded PNG elided: acf of sin(x)]\n", "text": [ "" ] } ], "prompt_number": 17 }, { "cell_type": "code", "collapsed": false, "input": [ "trj = np.random.uniform(low=-1, high=1, size=x_length[-1])\n", "fft = np.fft.fft(trj)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 11 }, { "cell_type": "code", "collapsed": false, "input": [ "plt.plot(np.abs(fft) ** 2)\n", "plt.title('fourier spectra of noise');" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "display_data", "png":
"[base64-encoded PNG elided: power spectrum of noise]
Z1e/ZDD/9JPdVv6lJW2lylfmEWmXmTKsjqEvpUH619Fjo9y9/0V6GP8OHG1u+\nGv5mqvQ1F099qxjp5ZLvp03WCHZFFn88e6Lcead+5Xrjb/pRpYK5uaVmKgHPG8p6DuQyop3cHzNv\njJtBr+6atSf4kmbCKKvl5lodAblERlodQejSs59+oOXRyL+1a62OQL16kbT17lJVH9TXy9nf/96Y\ncn/6Sf3kVERmCrc6AH/y85XNd23UWolW0WOxBxkvZ5V80Lz3njHH3rTJ+UXq+Wsrl9GuXdU/W93r\nx1PAmvaFCxfQo0cPpKWlISUlBdODWfNeo+++U7aaxaRJxscim1B6kSlVH4aFh6JQmC9DRqmp1T8b\nOf9+sALWtBs2bIivvvoKjRo1QmVlJW666SZ88803uEnviTQ8uFaVAepfLdosY8daHUHw9FxOjqp5\nznlO6lg1p4o3itq0GzVqBAAoLy9HVVUVmjZtqupgy5cr207JEHEiUqa+NVuEMjM6RShK2g6HA2lp\naWjZsiX69euHlJQUo+MiIp0YOeUCmU9R0m7QoAF27NiBQ4cOITc3F3a73eCwqnFidyKiakH1HomO\njsZtt92GrVu3IiMjw+ORmR4/Z/z6pQ+2cxJR/WD/9UsbmxD+O1qVlJQgPDwcMTExKCsrw8CBA5Gd\nnY3+/fs7C7DZANTTTsFERIaxIUD69SpgTfvo0aO4++674XA44HA4kJWV5U7YRERkroA17YAFsKZN\nRKSCupp2vRjGTkR0qWDSJiKSCJM2EZFEmLSJiCTCpE1EJBEmbSIiiTBpExFJhEmbiEgiTNpERBJh\n0iYikgiTNhGRRJi0iYgkwqRNRCQRJm0iIokwaRMRSYRJm4hIIkzaREQSYdImIpIIkzYRkUSYtImI\nJMKkTUQkESZtIiKJMGkTEUmESZuISCJM2kREEmHSJiKSCJM2EZFEmLSJiCTCpE1EJBEmbSIiiTBp\nExFJhEmbiEgiTNpERBJh0iYikgiTNhGRRJi0iYgkEjBpFxUVoV+/frj22muRmpqKV155xYy4iIjI\nC5sQQvjb4NixYzh27BjS0tJQWlqK66+/Hp988gmSk5OdBdhsAPwWQUREddgQIP16FbCmfcUVVyAt\nLQ0A0KRJEyQnJ+PIkSPBx0dERJoF1aZdUFCAvLw89OjRw6h4iIjID8VJu7S0FCNHjsTLL7+MJk2a\nGBkTERH5EK5ko4qKCowYMQLjxo3D8OHDvWwx0+PnjF+/iIiomv3XL20C3ogUQuDuu+9GXFwcXnrp\npboF8EYJEX0yAAAI0ElEQVQkEZEK6m5EBkza33zzDfr06YPOnTv/mqCBOXPmYNCgQc4CmLSJiFQw\nKGkHLIBJm4hIBYO6/BERUehg0iYikgiTNhGRRJi0iYgkwqRNRCQRJm0iIokwaRMRSYRJm4hIIkza\nREQSYdImIpIIkzYRkUSYtImIJMKkTUQkESZtIiKJMGkTEUmESZuISCJM2kREEmHSJiKSCJM2EZFE\nmLSJiCTCpE1EJBEmbSIiiTBpExFJhEmbiEgiTNpERBJh0iYikgiTNhGRRJi0iYgkwqRNRCQRJm0i\nIokwaRMRSYRJm4hIIkzaREQSYdImIpIIkzYRkUSYtImIJBIwad9zzz1o2bIlOnXqZEY8RETkR8Ck\nPWHCBKxbt86MWIiIKICASbt3796IjY01IxYiIgqAbdpERBIJ16eYmR4/Z/z6RURE1ey/fmljQNIm\nIqK6MlCzQjtLVSlsHiEikkjApJ2ZmYlevXph3759SExMxMKFC82Ii4iIvLAJIYSmAmw2AJqKICK6\nBNmgJv2yeYSISCJM2kREEmHSJiKSCJM2EZFEmLSJiCTCpE1EJBEmbSIiiTBpExFJhEmbiEgiTNpE\nRBJh0iYikgiTNhGRRJi0iYgkwqRNRCQRJm0iIokwaRMRSYRJm4hIIkzaREQSYdImI
pIIkzYRkUSY\ntImIJMKkTUQkESZtIiKJMGkTEUmESZuISCJM2kREEmHSJiKSCJM2EZFEmLSJiCTCpE1EJBEmbSIi\niTBpExFJhEmbiEgiTNpERBJh0iYikkjApL1u3Tp07NgRV199Nf785z+bERMREfngN2lXVVXhoYce\nwrp167B7926899572LNnj1mxSchudQAhxG51ACHEbnUAIcRudQDS85u0N2/ejKSkJFx55ZWIiIjA\nmDFjsGLFCrNik5Dd6gBCiN3qAEKI3eoAQojd6gCk5zdpHz58GImJie7fW7dujcOHDxseFBEReec3\nadtsNrPiICIiBcL9PZiQkICioiL370VFRWjdunWNbdq3b48DB5jcq82yOoAQwnNRjeeiGs8F4Myd\natiEEMLXg5WVlejQoQPWr1+PVq1aoXv37njvvfeQnJysOlAiIlLPb007PDwcr776KgYOHIiqqirc\ne++9TNhERBbyW9MmIqLQonhEpJJBNo888giuvvpqdOnSBXl5eboFGWoCnYslS5agS5cu6Ny5M268\n8Ub88MMPFkRpDqWDr7Zs2YLw8HB89NFHJkZnLiXnwm63Iz09HampqcjIyDA3QBMFOhclJSUYNGgQ\n0tLSkJqainfeecf8IE1wzz33oGXLlujUqZPPbYLOm0KByspK0b59e3Hw4EFRXl4uunTpInbv3l1j\nm9WrV4vBgwcLIYT47rvvRI8ePZQULR0l52LDhg3i1KlTQggh1q5de0mfC9d2/fr1E7fddpv44IMP\nLIjUeErOxcmTJ0VKSoooKioSQghRXFxsRaiGU3IusrOzxbRp04QQzvPQtGlTUVFRYUW4hsrNzRXb\nt28XqampXh9XkzcV1bSVDLJZuXIl7r77bgBAjx49cOrUKfz8889KipeKknNxww03IDo6GoDzXBw6\ndMiKUA2ndPDVggULMHLkSDRv3tyCKM2h5FwsXboUI0aMcPfAatasmRWhGk7JuYiPj8eZM2cAAGfO\nnEFcXBzCw/3eYpNS7969ERsb6/NxNXlTUdJWMsjG2zb1MVkFO+AoJycHQ4YMMSM00yl9XaxYsQL3\n338/gPrb91/Judi/fz9OnDiBfv36oWvXrli0aJHZYZpCybmYOHEidu3ahVatWqFLly54+eWXzQ4z\nJKjJm4o+2pS+0USte5r18Q0azP/01Vdf4e2338a3335rYETWUXIuHn30UfzpT3+CzWaDEKLOa6S+\nUHIuKioqsH37dqxfvx7nz5/HDTfcgJ49e+Lqq682IULzKDkXs2fPRlpaGux2Ow4cOIABAwbg+++/\nR2RkpAkRhpZg86aipK1kkE3tbQ4dOoSEhAQlxUtFybkAgB9++AETJ07EunXr/F4eyUzJudi2bRvG\njBkDwHnzae3atYiIiMDtt99uaqxGU3IuEhMT0axZM1x++eW4/PLL0adPH3z//ff1LmkrORcbNmzA\njBkzADgHmVx11VXIz89H165dTY3VaqryppLG9IqKCtGuXTtx8OBBcfHixYA3Ijdu3Fhvb74pOReF\nhYWiffv2YuPGjRZFaQ4l58LT+PHjxYcffmhihOZRci727Nkj+vfvLyorK8W5c+dEamqq2LVrl0UR\nG0fJuZgyZYqYOXOmEEKIY8eOiYSEBHH8+HErwjXcwYMHFd2IVJo3FdW0fQ2yeeONNwAA9913H4YM\nGYI1a9YgKSkJjRs3xsKFC9V//IQwJefi2WefxcmTJ93tuBEREdi8ebOVYRtCybm4VCg5Fx07dsSg\nQYPQuXNnNGjQABMnTkRKSorFketPybl46qmnMGHCBHTp0gUOhwMvvvgimjZtanHk+svMzMTXX3+N\nkpISJCYmYtasWaioqACgPm9ycA0RkUS43BgRkUSYtImIJMKkTUQkESZtIiKJMGkTEQVJyURQLo89\n9hjS09ORnp6ODh06aB63wd4jRERB+s9//oMmT
Zrgrrvuws6dOxXv9+qrr2LHjh146623VB+bNW0i\noiB5mwjqwIEDGDx4MLp27Yo+ffogPz+/zn5Lly5FZmampmPXv2m1iIgsMGnSJLzxxhtISkrCpk2b\n8MADD2D9+vXuxwsLC1FQUICbb75Z03GYtImINCotLcXGjRsxatQo99/Ky8trbLNs2TKMGjVK80R6\nTNpERBo5HA7ExMT4XXlm+fLl+Nvf/qb5WGzTJiLSKCoqCldddRU++OADAM7pVj2XGdy7dy9OnjyJ\nnj17aj4WkzYRUZAyMzPRq1cv5OfnIzExEQsXLsSSJUuQk5PjXvdy5cqV7u2XL1+u+QakC7v8ERFJ\nhDVtIiKJMGkTEUmESZuISCJM2kREEmHSJiKSCJM2EZFEmLSJiCTCpE1EJJH/B8EHbZYG+kkSAAAA\nAElFTkSuQmCC\n", "text": [ "" ] } ], "prompt_number": 15 }, { "cell_type": "code", "collapsed": false, "input": [ "acf = np.fft.ifft(np.abs(fft) ** 2) / x_length[-1]\n", "acf = np.real(acf[:acf.size / 2] / acf[0])" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 13 }, { "cell_type": "code", "collapsed": false, "input": [ "plt.plot(acf, marker='o');\n", "plt.xlim(0, 10);\n", "plt.title('acf of noise');" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "display_data", "png": "iVBORw0KGgoAAAANSUhEUgAAAXwAAAEKCAYAAAARnO4WAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAHjNJREFUeJzt3XtQU3fCN/BvJPGGclVBkigaIhcVsEWtVp2o62XsylZL\n+1DHyyhS19Xtuu3sOts+ttj39cJexrV1Z7Vea9tF2tlOcSymPtZNtbWUWrzro2gLhqAoKlKLgobf\n+0deIsg9BziHnO9nJgPJ+Z2TbwP9cvydkxONEEKAiIi8Xhe5AxARUcdg4RMRqQQLn4hIJVj4REQq\nwcInIlIJFj4RkUqw8MnrXLhwAfHx8fDz88OmTZtavf7ChQsRFBSEp556qk1zrVu3DqmpqW26TaLW\n0PA8fPI2KSkpCAgIwN/+9rdWr3vkyBHMmTMH+fn56N69ezukI5IP9/DJ6xQWFiImJsbjdcPDw1n2\n5JVY+KRY69evR0REBPz8/DB06FB8+umndZZv3boVMTEx7uXHjx/HpEmTYLPZsHz5cvj5+eHSpUv1\ntltcXIzExEQEBwfDbDZj27ZtAIDt27cjNTUV33zzDXr37o3Vq1fXW3fXrl0YN24c/vCHPyAoKAiD\nBw+G1WptdtsAkJaWhnnz5gEA7t+/j7lz56JPnz4IDAzEqFGjcP36dQDAnTt3kJKSgrCwMBgMBqxa\ntQrV1dXSX1AiQaRQH3/8sbh69aoQQojMzEzh6+srrl27JoQQ4qOPPhJ6vV4cO3ZMCCHEpUuXRGFh\noRBCCIvFIrZv397odsePHy+WLVsmKisrxYkTJ0Tfvn3FoUOHhBBC7Nq1S4wbN67RdXfu3Cl0Op3Y\ntm2bqK6uFv/85z9FWFhYi7adlpYm5s2bJ4QQYvPmzWLmzJni3r17orq6WuTl5Yny8nIhhBDPPvus\n+PWvfy0qKirE9evXxahRo8SWLVs8eg2JauMePilWUlISQkNDAQAvvPACzGYzcnNzAQDbtm3DypUr\n8eSTTwIATCYTB
gwY4F5XNHJoym634+jRo0hPT0fXrl0RFxeHxYsXY/fu3U2uV9vAgQORkpICjUaD\n+fPn4+rVq7h+/XqLtl2z/a5du+LmzZvIz8+HRqPBiBEj0Lt3b5SUlGD//v3YsGEDevTogb59+2LF\nihXYs2ePh68i0SNauQMQNWb37t3YsGEDCgoKAAB3795FaWkpAKCoqAgmk6nRdTUaTYOPFxcXIygo\nCL6+vu7HBgwYgGPHjrU4V80fIQDo2bOnO9uNGzdavO158+bBbrcjOTkZZWVlmDt3LtasWYPCwkI8\nePAA/fv3d4+trq6u88eMyFMsfFKkwsJCvPTSSzh06BDGjBnj3guu2UM2Go0Nzs83JywsDLdu3cLd\nu3fRq1cvAMCVK1dgMBgkZ27NtrVaLd544w288cYbKCwsxIwZMxAZGYkZM2agW7duuHnzJrp04T/A\nqW3xN4oU6eeff4ZGo0GfPn1QXV2NnTt34syZM+7lixcvxl//+lfk5eVBCIFLly7hypUr7uWNTc0Y\njUaMHTsWf/rTn1BZWYlTp05hx44dmDt3ruTMrdm2zWbD6dOn4XQ60bt3b+h0Ovj4+CA0NBRTp07F\nK6+8gp9++gnV1dW4fPkyDh8+LDkfEQufFCkmJgavvvoqxowZg9DQUJw5cwbjxo1zL09KSsLrr7+O\nOXPmwM/PD7Nnz8bt27fdyxub0gGAjIwMFBQUICwsDLNnz8Zbb72FSZMmuddrat2Glte+39JtX7t2\nDc8//zz8/f0RExMDi8XiPoNn9+7dqKqqQkxMDIKCgvD888/j2rVrLX3piBol6Y1XixYtwmeffYZ+\n/frh9OnTDY55+eWXsX//fvTs2RO7du3CiBEjPA5LRESek7SHv3DhwjrnID8uOzsbly5dQn5+Pt59\n910sXbpUytMREZEEkgp//PjxCAwMbHT53r17sWDBAgDA6NGjUVZWhpKSEilPSUREHmrXOXyHwwGj\n0ei+bzAYUFRU1J5PSUREjWj3g7aPHyJo6oAYERG1n3Y9D1+v18Nut7vvFxUVQa/X1xun0UQAuNye\nUYiIvI7JZGrV+1HadQ8/MTHR/bbynJwcBAQEICQkpIGRlwEIAALTpv23+y3oary9+eabsmdQyo2v\nBV8LvhZN3y5fbt2OsqQ9/BdffBFffvklSktLYTQasXr1ajx48AAAsGTJEsyYMQPZ2dmIiIiAr68v\ndu7c2eT2TKbX8NvfTpcSiYiIGiGp8DMyMpod09JPHAoJWYWNG6fjmWcmSIlERESNUMy1dAID/w+e\neUbuFPKzWCxyR1AMvhaP8LV4hK+F5xTxEYcajQbduwvcugX06CF3GiKizkGj0aA1Fa6Ya+kMGQKc\nOyd3CiIi76WYwo+NBU6dkjsFEZH3YuETEakEC5+ISCUUVfgnTwLyH0ImIvJOiin80FBAowH4OQ9E\nRO1DMYWv0XBah4ioPSmm8AEWPhFRe2LhExGpBAufiEglFHNpBSEE7t0DgoKAO3eArl3lTkVEpGyd\n9tIKgOs6OuHhwP/+r9xJiIi8j6IKHwDi4jitQ0TUHhRX+JzHJyJqHyx8IiKVYOETEamE4grfaAQq\nKoAbN+ROQkTkXRRX+DWXWDh9Wu4kRETeRXGFD3Bah4ioPbDwiYhUgoVPRKQSirq0Qo27d4F+/YDy\nckCrlTEYEZGCdepLK9To1QsICwMuXZI7CRGR91Bk4QOc1iEiamssfCIilWDhExGphOTCt1qtiIqK\ngtlsRnp6er3lpaWlmD59OuLj4zFs2DDs2rWrRdtl4RMRtS1JZ+k4nU5ERkbi4MGD0Ov1GDlyJDIy\nMhAdHe0ek5aWhsrKSqxbtw6lpaWIjIxESUkJtLVOv2noSHN1NeDnBzgcgL+/pwmJiLxXh56lk5ub\ni4iICISHh0On0yE5ORlZWVl1xvTv3x/l5eUAgPLycgQHB9cp+0aDdQGGDeNePhF
RW5FU+A6HA0aj\n0X3fYDDA4XDUGZOamoqzZ88iLCwMcXFx2LhxY4u3z2kdIqK2I6nwNRpNs2PWrl2L+Ph4FBcX48SJ\nE1i2bBl++umnFm2fn35FRNR2JL2PVa/Xw263u+/b7XYYDIY6Y44ePYrXX38dAGAymTBo0CBcuHAB\nCQkJdcalpaW5v7dYLLBYLIiNBT74QEpCIiLvYbPZYLPZPF5f0kHbhw8fIjIyEl988QXCwsIwatSo\negdtX3nlFfj7++PNN99ESUkJnnzySZw6dQpBQUGPQjRy4KGszHV9/Dt3XHP6RET0SGsP2kraw9dq\ntdi0aROmTZsGp9OJlJQUREdHY8uWLQCAJUuW4LXXXsPChQsRFxeH6upq/PnPf65T9k0JCACCgoAf\nfwRMJilJiYhIkRdPq23mTGDRImDWrA4ORUSkcF5x8bTaeKYOEVHbYOETEakEC5+ISCUUP4f/8KHr\nEgvXr7uuk09ERC5eN4ev1QLR0cDZs3InISLq3BRf+ACndYiI2gILn4hIJVj4REQqofiDtgBw4wYw\nZAhw6xbQguu1ERGpgtcdtAWAvn2B7t2BoiK5kxARdV6dovAB17TOyZNypyAi6rw6VeFzHp+IyHOd\npvD5YShERNJ0msLnHj4RkTSd4iwdAKiqAvz9gdu3XQdwiYjUzivP0gGArl0Bsxk4d07uJEREnVOn\nKXyA0zpERFKw8ImIVIKFT0SkEp2u8E+eBOQ/zExE1Pl0qsLv399V9iUlcichIup8OlXhazSc1iEi\n8lSnKnyAhU9E5CkWPhGRSrDwiYhUotNcWqFGRQUQHAyUlwM6XTsHIyJSMK+9tEKNnj2BgQOBCxfk\nTkJE1Ll0usIHOK1DROSJTlv4/PQrIqLWkVz4VqsVUVFRMJvNSE9Pb3CMzWbDiBEjMGzYMFgsFqlP\nyT18IiIPSDpo63Q6ERkZiYMHD0Kv12PkyJHIyMhAdHS0e0xZWRmefvppfP755zAYDCgtLUWfPn3q\nhmjlgYfCQmDsWMDh8DQ5EVHn16EHbXNzcxEREYHw8HDodDokJycjKyurzph//etfeO6552AwGACg\nXtl7YsAA4O5doLRU8qaIiFRDUuE7HA4YjUb3fYPBAMdju935+fm4desWJk6ciISEBLz//vtSnhLA\no0ssnD4teVNERKqhlbKyRqNpdsyDBw+Ql5eHL774AhUVFRgzZgyeeuopmM3mOuPS0tLc31sslmbn\n+mvm8SdO9CQ5EVHnY7PZYLPZPF5fUuHr9XrY7Xb3fbvd7p66qWE0GtGnTx/06NEDPXr0wIQJE3Dy\n5MkmC78lYmOB3FyPoxMRdTqP7wyvXr26VetLmtJJSEhAfn4+CgoKUFVVhczMTCQmJtYZ86tf/Qpf\nffUVnE4nKioq8O233yImJkbK0wLgmTpERK0laQ9fq9Vi06ZNmDZtGpxOJ1JSUhAdHY0tW7YAAJYs\nWYKoqChMnz4dsbGx6NKlC1JTU9uk8IcNc32gudMJ+PhI3hwRkdfrdNfSqc1kArKzgcjIdghFRKRw\nXn8tndo4rUNE1HIsfCIilWDhExGpBAufiEglOvVBW6cT8Pd3XVPH378dghERKZiqDtr6+ABDhwJn\nzsidhIhI+Tp14QOc1iEiaimvKHx+GAoRUfO8ovC5h09E1LxOfdAWAG7fdl0f/84doEun//NFRNRy\nqjpoCwCBga5bQYHcSYiIlK3TFz7AaR0iopZg4RMRqQQLn4hIJVj4REQq0enP0gGAhw8BPz/gxg3A\n17cNgxERKZjqztIBAK0WiIoCzp6VOwkRkXJ5ReEDnNYhImoOC5+ISCVY+EREKuEVB20B4Pp11zz+\nzZuARtNGwYiIFEyVB20BoF8/oGtX14ehEBFRfV5T+ACndYiImsLCJyJSCa8rfH4YChFRw7yu8LmH\nT0TUMK85SwcAKiuBgADXh6J0794GwYiIFEy
1Z+kAQLduQEQEcP683EmIiJRHcuFbrVZERUXBbDYj\nPT290XHfffcdtFotPvnkE6lP2SRO6xARNUxS4TudTixfvhxWqxXnzp1DRkYGzjewe+10OrFy5UpM\nnz69TaZumsLCJyJqmKTCz83NRUREBMLDw6HT6ZCcnIysrKx649555x0kJSWhb9++Up6uRVj4REQN\nk1T4DocDRqPRfd9gMMDx2FtdHQ4HsrKysHTpUgCugwztiYVPRNQwrZSVW1LeK1aswPr1691Hkxub\n0klLS3N/b7FYYLFYPMoUFub6QJSSEiAkxKNNEBEpks1mg81m83h9Sadl5uTkIC0tDVarFQCwbt06\ndOnSBStXrnSPGTx4sLvkS0tL0bNnT2zduhWJiYmPQrTRaZk1Jk4EXnsNmDKlzTZJRKQ4re1OSXv4\nCQkJyM/PR0FBAcLCwpCZmYmMjIw6Y3744Qf39wsXLsTMmTPrlH17qJnWYeETET0iqfC1Wi02bdqE\nadOmwel0IiUlBdHR0diyZQsAYMmSJW0SsrViY4HDh2V5aiIixfKqd9rW+O474KWXgOPH22yTRESK\n09ru9MrCr6gA+vQB7twBdLo22ywRkaKo+tIKNXr2BIxG4OJFuZMQESmHVxY+wPPxiYgex8InIlIJ\nry58fhgKEdEjXl343MMnInrEawt/4ECgvBy4eVPuJEREyuC1hd+li2sv//RpuZMQESmD1xY+wGkd\nIqLaWPhERCrBwiciUgmvvLRCjfJyoH9/11cfnzbfPBGRrHhphVr8/FwfgnL5stxJiIjk59WFD3Ba\nh4ioBgufiEglWPhERCrBwiciUgmvPksHAJxO18Hbq1ddX4mIvAXP0nmMjw8wdChw5ozcSYiI5OX1\nhQ9wWoeICGDhExGpBgufiEglvP6gLeC6Jv6gQUBZmeuyyURE3oAHbRsQHOw6Q6ewUO4kRETyUUXh\nA5zWISJSTeHHxbHwiUjdVFP43MMnIrVj4RMRqYTkwrdarYiKioLZbEZ6enq95R9++CHi4uIQGxuL\np59+Gqdkat0hQwC7HaiokOXpiYhkJ6nwnU4nli9fDqvVinPnziEjIwPnz5+vM2bw4ME4fPgwTp06\nhVWrVuGll16SFNhTOh0QGQmcPSvL0xMRyU5S4efm5iIiIgLh4eHQ6XRITk5GVlZWnTFjxoyBv78/\nAGD06NEoKiqS8pSScFqHiNRMUuE7HA4YjUb3fYPBAIfD0ej47du3Y8aMGVKeUhIWPhGpmVbKyhqN\npsVj//Of/2DHjh34+uuvG1yelpbm/t5iscBisUiJ1qDYWGDfvjbfLBFRh7DZbLDZbB6vL6nw9Xo9\n7Ha7+77dbofBYKg37tSpU0hNTYXVakVgYGCD26pd+O2lZg9fCKAVf6uIiBTh8Z3h1atXt2p9SVM6\nCQkJyM/PR0FBAaqqqpCZmYnExMQ6Y65cuYLZs2fjgw8+QEREhJSnkywkBNBqgeJiWWMQEclC0h6+\nVqvFpk2bMG3aNDidTqSkpCA6OhpbtmwBACxZsgRvvfUWbt++jaVLlwIAdDodcnNzpSf3UM1evl4v\nWwQiIlmo4mqZtb36KtCvH7ByZYc8HRFRu+HVMpvBM3WISK1Y+EREKqG6KZ3794HAQNeHoXTr1iFP\nSUTULjil04zu3YHBg4HHrgBBROT1VFf4AKd1iEidVFn4/DAUIlIjVRY+9/CJSI1Y+EREKqHKwtfr\ngaoqoKRE7iRERB1HlYWv0bj28k+fljsJEVHHUWXhA5zWISL1YeETEakEC5+ISCVUd2mFGj//DPTt\nC5SXu66RT0TU2fDSCi3k6wsYDMDFi3InISLqGKotfIDTOkSkLix8Fj4RqQQLn4VPRCrBwmfhE5FK\nqLrww8OB27eBW7fkTkJE1P5UXfhdugDDh/MSC0SkDqoufIDTOkSkHix8Fj4RqYTqC5+ffkVEaqHa\nSyvUuHP
HdX38O3cAHx9ZIhAReYSXVmglf3/XNXV++EHuJERE7Uv1hQ9wHp+I1IGFDxY+EamD5MK3\nWq2IioqC2WxGenp6g2NefvllmM1mxMXF4fjx41Kfss2x8IlIDSQVvtPpxPLly2G1WnHu3DlkZGTg\n/PnzdcZkZ2fj0qVLyM/Px7vvvoulS5dKCtweWPhEpAaSPvojNzcXERERCA8PBwAkJycjKysL0dHR\n7jF79+7FggULAACjR49GWVkZSkpKEBISIuWp29SFC4dRUHAA48Zp4ev7EC+/PBXPPDOhQzN89tlh\nvP32AVRWatGtmzwZlJJDCRmUkkMJGZSSQwkZlJKjJkOrCQk+/vhjsXjxYvf9999/XyxfvrzOmF/+\n8pfi66+/dt+fPHmyOHbsWJ0xEmNIsm/fl8Jkek0Awn0zmV4T+/Z9qaoMSsmhhAxKyaGEDErJoYQM\nSslRN0PrulPSHr5Go2npHxWP1usIb799AJcvr6nz2OXLa/Bf/7UKRuMEVFfX/tE2ffN0bEXFAVRX\n18+QmLgKPXq49hxqXjKNpu73bfnYzZsHUFlZP0dS0ir061c/R43WPtbU8itXDqCiouGfx8CBdfei\nGvs1aurXq6XLfvjhAO7erZ8jOXkVwsMnoOZXuqGvTS1rzTrXrx/A/fv1Mzz33Cr07dv4HmVr3tLS\nkrGlpQdQVVU/x+zZqxAcXDdHc9vzdPnt2wfw4EH9DLNmrUJQ0KMMj/98G/v9a2pZU+Ma+5kkJa1C\naGjjORrbXksef3yZw3EA9+6taXxwEyQVvl6vh91ud9+32+0wGAxNjikqKoJer6+3rbS0NPf3FosF\nFotFSrQWq6xs+CWIjvbBe+89KkSNxnWxtdr3G7u1ZFztMTNmaPH11/UzjB3rg/37my+Gtnrs+ee1\nyMmpnyMuzgeZmfXXf3xbLXmsueXz52tx7Fj9DNHRPti1q+Ht1dZUobRmWUqKFt9/X39cZKQPdu50\nfd/QH8+ar00ta+k6yclafPtt/Qzx8T74+OPG/1tqb6MlmhublNTw78UTT/jg3/9u/fY8WT5rlhbf\nfFP/8YQEH3zyiev7x3+Gjf3+NbWsuXGN/UxiY32wZ0/D22hsey15/PFl335rw8qVX+HevbTGV2iC\npMJPSEhAfn4+CgoKEBYWhszMTGRkZNQZk5iYiE2bNiE5ORk5OTkICAhocP6+duF3pG7dHjb4eHCw\nEzExHZPB17fhDL6+TvTq1TEZAMDPr+EcAQFODBzYMRmCghr/eQwd2jEZXM/XcI4+fZwYPrxjMvj7\nN/7zMBo7JgPQ+O+Fv78TYWEdk6F374Yz+Pk5ERraMRmAxn8mgYFODBrU/s9vNlvw/vvjUFyc9v8f\nWd26DUidT8rOzhZDhgwRJpNJrF27VgghxObNm8XmzZvdY5YtWyZMJpOIjY0V33//fb1ttEEMjzU8\nJ/cnBcxPdmwGpeRQQgal5FBCBqXkUEIGpeSQMoev+mvpAK4j3u+88z+4f98H3bs78dvfTpHlqLvc\nGZSSQwkZlJJDCRmUkkMJGZSSoybD55//31Z1JwufiKiT4sXTiIioQSx8IiKVYOETEakEC5+ISCVY\n+EREKsHCJyJSCRY+EZFKsPCJiFSChU9EpBIsfCIilWDhExGpBAufiEglWPhERCrBwiciUgkWPhGR\nSrDwiYhUgoVPRKQSLHwiIpVg4RMRqQQLn4hIJVj4REQqwcInIlIJFj4RkUqw8ImIVIKFT0SkEix8\nIiKVYOETEamEx4V/69YtTJkyBUOGDMHUqVNRVlZWb4zdbsfEiRMxdOhQDBs2DG+//baksERE5DmP\nC3/9+vWYMmUKLl68iMmTJ2P9+vX1xuh0OmzYsAFnz55FTk4O/vGPf+D8+fOSAns7m80mdwTF4Gvx\nCF+LR/haeM7jwt+7dy8WLFgAAFiwYAE+/fTTemNCQ0MRHx8PAOjVqxeio
6NRXFzs6VOqAn+ZH+Fr\n8Qhfi0f4WnjO48IvKSlBSEgIACAkJAQlJSVNji8oKMDx48cxevRoT5+SiIgk0Da1cMqUKbh27Vq9\nx9esWVPnvkajgUajaXQ7d+/eRVJSEjZu3IhevXp5GJWIiCQRHoqMjBRXr14VQghRXFwsIiMjGxxX\nVVUlpk6dKjZs2NDotkwmkwDAG2+88cZbK24mk6lVva0RQgh44I9//COCg4OxcuVKrF+/HmVlZfUO\n3AohsGDBAgQHB2PDhg2ePA0REbURjwv/1q1beOGFF3DlyhWEh4fjo48+QkBAAIqLi5GamorPPvsM\nX331FSZMmIDY2Fj3lM+6deswffr0Nv2PICKi5nlc+ERE1LnI/k5bq9WKqKgomM1mpKenyx1HNnyT\nWn1OpxMjRozAzJkz5Y4iq7KyMiQlJSE6OhoxMTHIycmRO5Js1q1bh6FDh2L48OGYM2cOKisr5Y7U\nYRYtWoSQkBAMHz7c/VhL3gBbm6yF73Q6sXz5clitVpw7dw4ZGRmqfWMW36RW38aNGxETE9PkGWBq\n8Lvf/Q4zZszA+fPncerUKURHR8sdSRYFBQXYunUr8vLycPr0aTidTuzZs0fuWB1m4cKFsFqtdR5r\nyRtga5O18HNzcxEREYHw8HDodDokJycjKytLzkiy4ZvU6ioqKkJ2djYWL14MNc863rlzB0eOHMGi\nRYsAAFqtFv7+/jKnkoefnx90Oh0qKirw8OFDVFRUQK/Xyx2rw4wfPx6BgYF1HmvJG2Brk7XwHQ4H\njEaj+77BYIDD4ZAxkTLwTWrA73//e/zlL39Bly6yzzrK6scff0Tfvn2xcOFCPPHEE0hNTUVFRYXc\nsWQRFBSEV199FQMGDEBYWBgCAgLwi1/8Qu5YsmrtG2Bl/b9J7f9UbwjfpAbs27cP/fr1w4gRI1S9\ndw8ADx8+RF5eHn7zm98gLy8Pvr6+zf6z3VtdvnwZf//731FQUIDi4mLcvXsXH374odyxFKO5N8AC\nMhe+Xq+H3W5337fb7TAYDDImkteDBw/w3HPPYe7cuXj22WfljiObo0ePYu/evRg0aBBefPFFHDp0\nCPPnz5c7liwMBgMMBgNGjhwJAEhKSkJeXp7MqeRx7NgxjB07FsHBwdBqtZg9ezaOHj0qdyxZhYSE\nuK+GcPXqVfTr16/J8bIWfkJCAvLz81FQUICqqipkZmYiMTFRzkiyEUIgJSUFMTExWLFihdxxZLV2\n7VrY7Xb8+OOP2LNnDyZNmoTdu3fLHUsWoaGhMBqNuHjxIgDg4MGDGDp0qMyp5BEVFYWcnBzcu3cP\nQggcPHgQMTExcseSVWJiIt577z0AwHvvvdf8jmKr3pfbDrKzs8WQIUOEyWQSa9eulTuObI4cOSI0\nGo2Ii4sT8fHxIj4+Xuzfv1/uWLKz2Wxi5syZcseQ1YkTJ0RCQoKIjY0Vs2bNEmVlZXJHkk16erqI\niYkRw4YNE/PnzxdVVVVyR+owycnJon///kKn0wmDwSB27Nghbt68KSZPnizMZrOYMmWKuH37dpPb\n4BuviIhUQt2nQBARqQgLn4hIJVj4REQqwcInIlIJFj4RkUqw8ImIVIKFT0SkEix8IiKV+H8TjW6O\n7e5nFQAAAABJRU5ErkJggg==\n", "text": [ "" ] } ], "prompt_number": 18 } ], "metadata": {} } ] } -------------- next part -------------- A non-text attachment was scrubbed... 
Name: fft.png Type: image/png Size: 15564 bytes Desc: not available URL: From charles at crunch.io Thu Nov 14 11:45:51 2013 From: charles at crunch.io (Charles Waldman) Date: Thu, 14 Nov 2013 10:45:51 -0600 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: <1384445906.12947.9.camel@x200.kel.wh.lokal> References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: Can you post the raw data? It seems like there are just a couple of "bad" sizes, I'd like to know more precisely what these are. It's typical for FFT to perform better at a sample size that is a power of 2, and algorithms like FFTW take advantage of factoring the size, and "sizes that are products of small factors are transformed most efficiently." - Charles On Thu, Nov 14, 2013 at 10:18 AM, Max Linke wrote: > Hi > > I noticed some strange scaling behavior of the fft runtime. For most > array-sizes the fft will be calculated in a couple of seconds, even for > very large ones. But there are some array sizes in between where it will > take about ~20 min (e.g. 400000). This is really odd for me because an > array with 10 million entries is transformed in ~2s. Is this typical for > numpy? > > I attached a plot and an ipynb to reproduce and illustrate it. > > best Max > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Nov 14 12:19:44 2013 From: cournape at gmail.com (David Cournapeau) Date: Thu, 14 Nov 2013 17:19:44 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: On Thu, Nov 14, 2013 at 4:45 PM, Charles Waldman wrote: > Can you post the raw data? It seems like there are just a couple of "bad" > sizes, I'd like to know more precisely what these are. 
> Indeed. Several of the sizes generated by logspace(2, 7, 25) are prime numbers, where numpy.fft is actually O(N^2) and not the usual O(N log N). There is unfortunately no freely (aka BSD-like licensed) available fft implementation that works for prime (or 'close to prime') numbers, and implementing one that is precise enough is not trivial (look at the Bluestein transform for more details). David > > It's typical for FFT to perform better at a sample size that is a power of > 2, and algorithms like FFTW take advantage of factoring the size, and > "sizes that are products of small factors are transformed most efficiently." > > - Charles > > On Thu, Nov 14, 2013 at 10:18 AM, Max Linke wrote: > >> Hi >> >> I noticed some strange scaling behavior of the fft runtime. For most >> array-sizes the fft will be calculated in a couple of seconds, even for >> very large ones. But there are some array sizes in between where it will >> take about ~20 min (e.g. 400000). This is really odd for me because an >> array with 10 million entries is transformed in ~2s. Is this typical for >> numpy? >> >> I attached a plot and an ipynb to reproduce and illustrate it. >> >> best Max >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Nov 14 12:23:25 2013 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Nov 2013 17:23:25 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: On Thu, Nov 14, 2013 at 4:45 PM, Charles Waldman wrote: > > Can you post the raw data? 
It seems like there are just a couple of "bad" sizes, I'd like to know more precisely what these are. > > It's typical for FFT to perform better at a sample size that is a power of 2, and algorithms like FFTW take advantage of factoring the size, and "sizes that are products of small factors are transformed most efficiently." These are the sizes, as given in the notebook he supplied: array([ 100, 161, 261, 421, 681, 1100, 1778, 2872, 4641, 7498, 12115, 19573, 31622, 51089, 82540, 133352, 215443, 348070, 562341, 908517, 1467799, 2371373, 3831186, 6189658, 10000000]) -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From max_linke at gmx.de Thu Nov 14 12:30:31 2013 From: max_linke at gmx.de (Max Linke) Date: Thu, 14 Nov 2013 18:30:31 +0100 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: <1384450231.5578.6.camel@640X4.kel.wh.lokal> You can check everything in the notebook, it will generate all the data. I checked the runtime for sizes in logspace(2, 7, 25). I know that the fft will work faster for array sizes that are a power of 2 but differences on 3 orders of magnitude is huge. I also attached a script generated from the notebook that you can run as well best Max On Thu, 2013-11-14 at 10:45 -0600, Charles Waldman wrote: > Can you post the raw data? It seems like there are just a couple of "bad" > sizes, I'd like to know more precisely what these are. > > It's typical for FFT to perform better at a sample size that is a power of > 2, and algorithms like FFTW take advantage of factoring the size, and > "sizes that are products of small factors are transformed most efficiently." > > - Charles > > On Thu, Nov 14, 2013 at 10:18 AM, Max Linke wrote: > > > Hi > > > > I noticed some strange scaling behavior of the fft runtime. For most > > array-sizes the fft will be calculated in a couple of seconds, even for > > very large ones. 
But there are some array sizes in between where it will > > take about ~20 min (e.g. 400000). This is really odd for me because an > > array with 10 million entries is transformed in ~2s. Is this typical for > > numpy? > > > > I attached a plot and an ipynb to reproduce and illustrate it. > > > > best Max > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: test_fft.py Type: text/x-python Size: 1293 bytes Desc: not available URL: From robert.kern at gmail.com Thu Nov 14 12:34:51 2013 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 14 Nov 2013 17:34:51 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: <1384450231.5578.6.camel@640X4.kel.wh.lokal> References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <1384450231.5578.6.camel@640X4.kel.wh.lokal> Message-ID: On Thu, Nov 14, 2013 at 5:30 PM, Max Linke wrote: > > You can check everything in the notebook, it will generate all the data. > I checked the runtime for sizes in logspace(2, 7, 25). I know that the > fft will work faster for array sizes that are a power of 2 but > differences on 3 orders of magnitude is huge. Primes are especially bad. 215443, for example, is prime. Going from O(N*logN) to O(N**2) where N=215443 accounts for 3 orders of magnitude, at least. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... 
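[Editor's note: Robert's observation is easy to check directly. A minimal sketch using plain trial division — slow in general, but fine for a handful of sizes; the `sizes` list is copied from the notebook output quoted earlier in the thread:]

```python
def is_prime(n):
    # Trial division up to sqrt(n) -- adequate for 25 candidate sizes.
    if n < 2:
        return False
    p = 2
    while p * p <= n:
        if n % p == 0:
            return False
        p += 1
    return True

# Sizes generated by np.logspace(2, 7, 25) in the attached notebook.
sizes = [100, 161, 261, 421, 681, 1100, 1778, 2872, 4641, 7498,
         12115, 19573, 31622, 51089, 82540, 133352, 215443, 348070,
         562341, 908517, 1467799, 2371373, 3831186, 6189658, 10000000]

print([n for n in sizes if is_prime(n)])  # 215443 is among the primes printed
```

[The prime entries are the lengths where the Cooley-Tukey splitting finds no factors to exploit and the transform degrades to O(N**2); sizes dominated by a large prime factor suffer similarly.]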
URL: From cournape at gmail.com Thu Nov 14 12:37:41 2013 From: cournape at gmail.com (David Cournapeau) Date: Thu, 14 Nov 2013 17:37:41 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: <1384450231.5578.6.camel@640X4.kel.wh.lokal> References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <1384450231.5578.6.camel@640X4.kel.wh.lokal> Message-ID: On Thu, Nov 14, 2013 at 5:30 PM, Max Linke wrote: > You can check everything in the notebook, it will generate all the data. > I checked the runtime for sizes in logspace(2, 7, 25). I know that the > fft will work faster for array sizes that are a power of 2 but > differences on 3 orders of magnitude is huge. This is expected if you go from N log N to N ** 2 for large N :) You can for example compare np.fft.fft(a) for 2**16 and 2**16+1 (and 2**16-1 that while bad is not prime, so only 1 order of magnitude slower). David -------------- next part -------------- An HTML attachment was scrubbed... URL: From bloring at lbl.gov Thu Nov 14 13:10:14 2013 From: bloring at lbl.gov (Burlen Loring) Date: Thu, 14 Nov 2013 10:10:14 -0800 Subject: [Numpy-discussion] segv PyArray_Check In-Reply-To: <20131114064837.14035.30762@fl-58186.rocq.inria.fr> References: <5284258F.5020208@lbl.gov> <20131114064837.14035.30762@fl-58186.rocq.inria.fr> Message-ID: <52851206.90304@lbl.gov> Hi David, Yes, that's the situation. Using your naming convention, in addition to foo_wrap.c I have another file, say foo_convert.cxx, for massaging python data structures into our internal data structures. This source is compiled with a c++ compiler so that I can use templates to simplify handling the numerous types a user could throw at us. All of the sources are linked into the module's .so. I need to use various numpy functions in my data conversion functions that live in foo_convert.cxx. For example I'm using PyArray_Check because I need to discern between numpy arrays and python lists and tuples. 
What I'm confused about is that calls to numpy functions from this file are segv'ing unless I add another call to import_array() made from foo_convert.cxx. I'd like to understand why that's necessary, and why the import_array() call in the module's init section in foo_wrap.c doesn't have any effect even though it's called first? Thanks Burlen On 11/13/2013 10:48 PM, David Froger wrote: > Hi Burlen, > > SWIG will generate a file named for example foo_wrap.c, which will > contains a call to import_array() inserted by SWIG because of the > %init %{ > import_array(); > %} > in the SWIG script. > So in the file foo_wrap.c (which will be compiled to a Python module > _foo.so), you should be able to used PyArray_Check without segfault. > Typically, PyArray_Check will be inserted in foo_wrap.c by a typemap, > for example a typemap from numpy.i . > > Do you use PyArray_Check in the foo_wrap.c or in another file? Is > PyArray_Check in called in another C library, that _foo.so is linked > with? > > David > > Quoting Burlen Loring (2013-11-14 02:21:19) >> Hi, >> >> I'd like to add numpy support to an existing code that uses swig. I've >> changed the source file that has code to convert python lists into >> native data from c to c++ so I can use templates to handle data >> conversions. The problem I'm having is a segfault on PyArray_Check >> called from my c++ source. Looking at things in the debugger the >> argument passed to it is indeed an intialized python list, my swig file >> has import_array() in it's init, and I've verified that it is getting >> called. adding a function in my c++ source to call import_array() a >> second time prevents the segfault. Could anyone explain why I need the >> import_array() in both places? If that's the correct way to handle it? 
>> >> Thanks >> Burlen >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From wfspotz at sandia.gov Thu Nov 14 13:31:56 2013 From: wfspotz at sandia.gov (Bill Spotz) Date: Thu, 14 Nov 2013 11:31:56 -0700 Subject: [Numpy-discussion] [EXTERNAL] segv PyArray_Check In-Reply-To: <52851206.90304@lbl.gov> References: <5284258F.5020208@lbl.gov> <20131114064837.14035.30762@fl-58186.rocq.inria.fr> <52851206.90304@lbl.gov> Message-ID: <2A6A31EA-270F-4967-89E8-EAD0AE3DBAD9@sandia.gov> Burlen, Have you actually verified that the first instance of import_array() actually gets called? Because the behavior suggests that it does not. In PyTrilinos, I wrap the import_array() call inside a C++ singleton, and put that singleton in a dynamic library that all my packages link to, thus insuring that it gets called once and only once. -Bill On Nov 14, 2013, at 11:10 AM, Burlen Loring wrote: > Hi David, > > Yes, that the situation. using your naming convention, in addtion to > foo_wrap.c I have another file , say foo_convert.cxx, for massaging > python data structures into our internal data structures. This source is > compiled with a c++ compiler so that I can use templates to simplify > handling the numerous types a user could throw at us. All of the sources > are linked into the module's .so. I need to use various numpy functions > in my data conversions functions that live in foo_convert.cxx. For > example I'm using PyArray_Check because I need to discerne between numpy > arrays, and python lists and tuples. > > What I'm confused about is that calls to numpy funtions from this file > are segv'ing unless I add another call to import_array() made from > foo_convert.cxx. 
I'd like to understand why that's necessary, and why > the import_array() call in the module's init section in foo_wrap.c > doesn't have any affect even though it's called first? > > Thanks > Burlen > > On 11/13/2013 10:48 PM, David Froger wrote: >> Hi Burlen, >> >> SWIG will generate a file named for example foo_wrap.c, which will >> contains a call to import_array() inserted by SWIG because of the >> %init %{ >> import_array(); >> %} >> in the SWIG script. >> So in the file foo_wrap.c (which will be compiled to a Python module >> _foo.so), you should be able to used PyArray_Check without segfault. >> Typically, PyArray_Check will be inserted in foo_wrap.c by a typemap, >> for example a typemap from numpy.i . >> >> Do you use PyArray_Check in the foo_wrap.c or in another file? Is >> PyArray_Check in called in another C library, that _foo.so is linked >> with? >> >> David >> >> Quoting Burlen Loring (2013-11-14 02:21:19) >>> Hi, >>> >>> I'd like to add numpy support to an existing code that uses swig. I've >>> changed the source file that has code to convert python lists into >>> native data from c to c++ so I can use templates to handle data >>> conversions. The problem I'm having is a segfault on PyArray_Check >>> called from my c++ source. Looking at things in the debugger the >>> argument passed to it is indeed an intialized python list, my swig file >>> has import_array() in it's init, and I've verified that it is getting >>> called. adding a function in my c++ source to call import_array() a >>> second time prevents the segfault. Could anyone explain why I need the >>> import_array() in both places? If that's the correct way to handle it? 
>>> >>> Thanks >>> Burlen >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. Box 5800 Fax: (505)284-0154 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** From jaime.frio at gmail.com Thu Nov 14 14:05:32 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Thu, 14 Nov 2013 11:05:32 -0800 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <1384450231.5578.6.camel@640X4.kel.wh.lokal> Message-ID: On Thu, Nov 14, 2013 at 9:37 AM, David Cournapeau wrote: > > You can for example compare np.fft.fft(a) for 2**16 and 2**16+1 (and > 2**16-1 that while bad is not prime, so only 1 order of magnitude slower). > I actually did... Each step of a FFT basically splits a DFT of size N = P*Q into P DFTs of size Q followed by Q DFTs of size P. If prime length DFTs take time proportional to their length squared, the time complexity can be written as N times the sum of the prime factors of N, which is N log N (actually 2 N log N) for sizes a power of two. 
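[The cost model just described, N times the sum of N's prime factors, can be sketched in a few lines of Python; the helper names below are made up for illustration:]

```python
def prime_factors(n):
    """Prime factorization of n, with multiplicity, by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever is left is prime
    return factors

def predicted_cost(n):
    """N times the sum of N's prime factors, per the model above."""
    return n * sum(prime_factors(n))

# predicted_cost(2**16)     == 2**16 * 32         (~2.1e6)
# predicted_cost(2**16 - 1) == 65535 * (3+5+17+257)  (~18.5e6)
# predicted_cost(2**16 + 1) == 65537**2           (~4295e6, since it is prime)
```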
So for 2^16 you actually have 2^16 * 2 * 16 = 2.1e6
For 2^16-1 = 3*5*17*257 you get (2^16-1)*(3+5+17+257) = 18.5e6 (8x slower)
For 2^16+1, which is prime, you get (2^16+1)^2 = 4295.0e6 (2045x slower)

On my system I get:
%timeit np.fft.fft(a)  # a = np.ones(2**16)
100 loops, best of 3: 3.18 ms per loop

%timeit np.fft.fft(b)  # b = np.ones(2**16 - 1)
100 loops, best of 3: 13.6 ms per loop (4x slower)

%timeit np.fft.fft(c)  # c = np.ones(2**16 + 1)
1 loops, best of 3: 25.1 s per loop (8000x slower)

There clearly are some constant factors missing in this analysis, although
it gives reasonable order of magnitude predictions. Doing some more timings
it seems that:
* prime sized inputs perform at least 2x worse than these formulas predict.
* sizes with factors different than 2, perform about 2x better than these
formulas predict.

Jaime
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cournape at gmail.com  Thu Nov 14 14:28:02 2013
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 14 Nov 2013 19:28:02 +0000
Subject: [Numpy-discussion] strange runtimes of numpy fft
In-Reply-To: 
References: <1384445906.12947.9.camel@x200.kel.wh.lokal>
	<1384450231.5578.6.camel@640X4.kel.wh.lokal>
Message-ID: 

On Thu, Nov 14, 2013 at 7:05 PM, Jaime Fernández del Río <
jaime.frio at gmail.com> wrote:

> On Thu, Nov 14, 2013 at 9:37 AM, David Cournapeau wrote:
>
>>
>> You can for example compare np.fft.fft(a) for 2**16 and 2**16+1 (and
>> 2**16-1 that while bad is not prime, so only 1 order of magnitude slower).
>>
>
> I actually did...
>
> Each step of a FFT basically splits a DFT of size N = P*Q into P DFTs of
> size Q followed by Q DFTs of size P. If prime length DFTs take time
> proportional to their length squared, the time complexity can be written as
> N times the sum of the prime factors of N, which is N log N (actually 2 N
> log N) for sizes a power of two.
> It does not really make sense to talk about 2 N Log N, since the whole point of O notation is to ignore the constant factors. If you want to care about the # operations, the actual complexity of fft is not n log n, but C n log n with C > 1, and C depends quite a bit on the implementation (the FFTW guys published some details of their implementations). I believe it also depends on how well you can factor the number (i.e. the algos in FFTW are all O(n log n), but the constant factor if you count # operations is actually a function of the input). David > > So for 2^16 you actually have 2^16 * 2 * 16 = 2.1e6 > For 2^16-1 = 3*5*17*257 you get (2^16-1)*(3+5+17+257) = 18.5e6 (8x slower) > For 2^16+1, which is prime, you get (2^16+1)^2 = 4295.0e6 (2045x slower) > > On my system I get: > %timeit np.fft.fft(a) # a = np.ones(2**16) > 100 loops, best of 3: 3.18 ms per loop > > %timeit np.fft.fft(b) # b = np.ones(2**16 - 1) > 100 loops, best of 3: 13.6 ms per loop (4x slower) > > %timeit np.fft.fft(c) # c = np.ones(2**16 + 1) > 1 loops, best of 3: 25.1 s per loop (8000x slower) > > There clearly are some constant factors missing in this analysis, although > it gives reasonable order of magnitude predictions. Doing some more timings > it seems that: > * prime sized inputs perform at least 2x worse than these formulas > predict. > * sizes with factors different than 2, perform about 2x better than these > formulas predict. > > Jaime > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From bloring at lbl.gov  Thu Nov 14 15:43:20 2013
From: bloring at lbl.gov (Burlen Loring)
Date: Thu, 14 Nov 2013 12:43:20 -0800
Subject: [Numpy-discussion] [EXTERNAL] segv PyArray_Check
In-Reply-To: <2A6A31EA-270F-4967-89E8-EAD0AE3DBAD9@sandia.gov>
References: <5284258F.5020208@lbl.gov>
	<20131114064837.14035.30762@fl-58186.rocq.inria.fr>
	<52851206.90304@lbl.gov>
	<2A6A31EA-270F-4967-89E8-EAD0AE3DBAD9@sandia.gov>
Message-ID: <528535E8.90603@lbl.gov>

Yes, I verified that it gets called.

On 11/14/2013 10:31 AM, Bill Spotz wrote:
> Burlen,
>
> Have you actually verified that the first instance of import_array() actually gets called? Because the behavior suggests that it does not.
>
> In PyTrilinos, I wrap the import_array() call inside a C++ singleton, and put that singleton in a dynamic library that all my packages link to, thus ensuring that it gets called once and only once.
>
> -Bill
>
> On Nov 14, 2013, at 11:10 AM, Burlen Loring wrote:
>
>> Hi David,
>>
>> Yes, that's the situation. Using your naming convention, in addition to
>> foo_wrap.c I have another file, say foo_convert.cxx, for massaging
>> python data structures into our internal data structures. This source is
>> compiled with a C++ compiler so that I can use templates to simplify
>> handling the numerous types a user could throw at us. All of the sources
>> are linked into the module's .so. I need to use various numpy functions
>> in my data conversion functions that live in foo_convert.cxx. For
>> example I'm using PyArray_Check because I need to discern between numpy
>> arrays, and python lists and tuples.
>>
>> What I'm confused about is that calls to numpy functions from this file
>> are segv'ing unless I add another call to import_array() made from
>> foo_convert.cxx. I'd like to understand why that's necessary, and why
>> the import_array() call in the module's init section in foo_wrap.c
>> doesn't have any effect even though it's called first?
>> >> Thanks >> Burlen >> >> On 11/13/2013 10:48 PM, David Froger wrote: >>> Hi Burlen, >>> >>> SWIG will generate a file named for example foo_wrap.c, which will >>> contains a call to import_array() inserted by SWIG because of the >>> %init %{ >>> import_array(); >>> %} >>> in the SWIG script. >>> So in the file foo_wrap.c (which will be compiled to a Python module >>> _foo.so), you should be able to used PyArray_Check without segfault. >>> Typically, PyArray_Check will be inserted in foo_wrap.c by a typemap, >>> for example a typemap from numpy.i . >>> >>> Do you use PyArray_Check in the foo_wrap.c or in another file? Is >>> PyArray_Check in called in another C library, that _foo.so is linked >>> with? >>> >>> David >>> >>> Quoting Burlen Loring (2013-11-14 02:21:19) >>>> Hi, >>>> >>>> I'd like to add numpy support to an existing code that uses swig. I've >>>> changed the source file that has code to convert python lists into >>>> native data from c to c++ so I can use templates to handle data >>>> conversions. The problem I'm having is a segfault on PyArray_Check >>>> called from my c++ source. Looking at things in the debugger the >>>> argument passed to it is indeed an intialized python list, my swig file >>>> has import_array() in it's init, and I've verified that it is getting >>>> called. adding a function in my c++ source to call import_array() a >>>> second time prevents the segfault. Could anyone explain why I need the >>>> import_array() in both places? If that's the correct way to handle it? 
>>>> >>>> Thanks >>>> Burlen >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > > > From cournape at gmail.com Fri Nov 15 09:49:05 2013 From: cournape at gmail.com (David Cournapeau) Date: Fri, 15 Nov 2013 14:49:05 +0000 Subject: [Numpy-discussion] Caution about using intrisincs, and other 'advanced' optimizations In-Reply-To: <5283C1E9.7010900@googlemail.com> References: <528270A1.70307@googlemail.com> <5283C1E9.7010900@googlemail.com> Message-ID: On Wed, Nov 13, 2013 at 6:16 PM, Julian Taylor < jtaylor.debian at googlemail.com> wrote: > On 13.11.2013 18:26, David Cournapeau wrote: > > > > > > Can you narrow it down to a specific intrinsic? they can be enabled > and > > disabled in set ./numpy/core/setup_common.py > > > > > > valgrind shows quite a few invalid read in BOOL_ functions when running > > the scipy or sklearn test suite. BOOL_logical_or is the one that appears > > the most often. I don't have time to track this down now, but I think it > > would be good to have at least a system in place to disable the simd > > intrinsics when building numpy. > > those are unrelated to the intrinsics, they should be fixed in master by > github.com/numpy/numpy/issues/3965 > Can you try it? > Possibly there is something we should backport. 
>
Will do, but the errors I am seeing only appear in the simd.inc.src-based
implementation of BOOL_logical_or (they disappear if I disable the simd
intrinsics manually in the numpy headers).

David

> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nouiz at nouiz.org  Fri Nov 15 10:12:06 2013
From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=)
Date: Fri, 15 Nov 2013 10:12:06 -0500
Subject: [Numpy-discussion] Silencing NumPy output
Message-ID: 

Hi,

NumPy 1.8 removed the private NumPy interface
numpy.distutils.__config__. So a Theano user made a PR to make Theano
use the official interface:

numpy.distutils.system_info.get_info("blas_opt")

But this prints a lot of output. I can silence part of it by
silencing warnings, but I'm not able to silence this output:

Found executable /usr/bin/gfortran
ATLAS version 3.8.3 built by mockbuild on Wed Jul 28 02:12:34 UTC 2010:
UNAME : Linux x86-15.phx2.fedoraproject.org 2.6.32-44.el6.x86_64
#1 SMP Wed Jul 7 15:47:50 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
INSTFLG : -1 0 -a 1
ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Corei7 -DATL_CPUMHZ=1596
-DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664
F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle
CACHEEDGE: 524288
F77 : gfortran, version GNU Fortran (GCC) 4.5.0 20100716 (Red
Hat 4.5.0-3)
F77FLAGS : -O -g -Wa,--noexecstack -fPIC -m64
SMC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3)
SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2
-fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64
SKC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3)
SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2
-fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64
-L/opt/lisa/os_v2/canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/../../appdata/canopy-1.1.0.1371.rh5-x86_64/lib
-lptf77blas -lptcblas -latlas

I tried to redirect the stdout and stderr, but it doesn't work. I looked
into NumPy code and I don't see a way to change that from a library
that uses NumPy.

Is there a way to silence that output?

Is there a new place for the old interface: numpy.distutils.__config__
that I can reuse? It doesn't need to be a public interface.

thanks

Frédéric

From derek at astro.physik.uni-goettingen.de  Fri Nov 15 10:16:48 2013
From: derek at astro.physik.uni-goettingen.de (Derek Homeier)
Date: Fri, 15 Nov 2013 16:16:48 +0100
Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release
In-Reply-To: 
References: <52767CF1.60003@googlemail.com>
	<2DAE0F54-1ACB-4643-8FE5-2F73AD6C6F19@astro.physik.uni-goettingen.de>
Message-ID: 

On 13.11.2013, at 3:07AM, Charles R Harris wrote:

> Python 2.4 fixes at https://github.com/numpy/numpy/pull/4049.

Thanks for the fixes; builds under OS X 10.5 now as well. There are two
test errors (or maybe a nose problem?):

NumPy version 1.7.2rc1
NumPy is installed in /sw/src/fink.build/root-numpy-py24-1.7.2rc1-1/sw/lib/python2.4/site-packages/numpy
Python version 2.4.4 (#1, Jan 5 2011, 03:05:41) [GCC 4.0.1 (Apple Inc. build 5493)]
nose version 1.3.0
...
ERROR: Failure: SkipTest (Skipping test: test_special_values
Numpy is using complex functions (e.g. sqrt) provided by your platform's C
library. However, they do not seem to behave according to C99 -- so C99
tests are skipped.)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/sw/lib/python2.4/site-packages/nose/failure.py", line 37, in runTest
    if isinstance(self.exc_val, BaseException):
NameError: global name 'BaseException' is not defined

======================================================================
ERROR: Failure: SkipTest (Skipping test: test_special_values
Numpy is using complex functions (e.g.
sqrt) provided by your platform's C
library. However, they do not seem to behave according to C99 -- so C99
tests are skipped.)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/sw/lib/python2.4/site-packages/nose/failure.py", line 37, in runTest
    if isinstance(self.exc_val, BaseException):
NameError: global name 'BaseException' is not defined

Cheers,
Derek

From charlesr.harris at gmail.com  Fri Nov 15 10:35:04 2013
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 15 Nov 2013 08:35:04 -0700
Subject: [Numpy-discussion] ANN: NumPy 1.7.2rc1 release
In-Reply-To: 
References: <52767CF1.60003@googlemail.com>
	<2DAE0F54-1ACB-4643-8FE5-2F73AD6C6F19@astro.physik.uni-goettingen.de>
Message-ID: 

On Fri, Nov 15, 2013 at 8:16 AM, Derek Homeier <
derek at astro.physik.uni-goettingen.de> wrote:

> On 13.11.2013, at 3:07AM, Charles R Harris
> wrote:
>
> > Python 2.4 fixes at https://github.com/numpy/numpy/pull/4049.
>
> Thanks for the fixes; builds under OS X 10.5 now as well. There are two
> test errors (or maybe a nose problem?):
>
> NumPy version 1.7.2rc1
> NumPy is installed in
> /sw/src/fink.build/root-numpy-py24-1.7.2rc1-1/sw/lib/python2.4/site-packages/numpy
> Python version 2.4.4 (#1, Jan 5 2011, 03:05:41) [GCC 4.0.1 (Apple Inc.
> build 5493)]
> nose version 1.3.0
> ...
> ERROR: Failure: SkipTest (Skipping test: test_special_values
> Numpy is using complex functions (e.g. sqrt) provided by your platform's C
> library. However, they do not seem to behave according to C99 -- so C99
> tests are skipped.)
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/sw/lib/python2.4/site-packages/nose/failure.py", line 37, in > runTest > if isinstance(self.exc_val, BaseException): > NameError: global name 'BaseException' is not defined > > ====================================================================== > ERROR: Failure: SkipTest (Skipping test: test_special_values > Numpy is using complex functions (e.g. sqrt) provided by yourplatform's C > library. However, they do not seem to behave accordingto C99 -- so C99 > tests are skipped.) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/sw/lib/python2.4/site-packages/nose/failure.py", line 37, in > runTest > if isinstance(self.exc_val, BaseException): > NameError: global name 'BaseException' is not defined > > > Ha, BaseException is new in Python 2.5. This looks like a nose problem, maybe it is too recent ;) What nose version are you using? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Nov 15 12:31:34 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 15 Nov 2013 10:31:34 -0700 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: On Fri, Nov 15, 2013 at 8:12 AM, Fr?d?ric Bastien wrote: > Hi, > > NumPy 1.8 removed the private NumPy interface > numpy.distutils.__config__. So a Theano user make a PR to make Theano > use the official interface: > > numpy.distutils.system_info.get_info("blas_opt") > > But this output many stuff to the output. 
I can silence part of it by > silencing warnings, but I'm not able to silence this output: > > Found executable /usr/bin/gfortran > ATLAS version 3.8.3 built by mockbuild on Wed Jul 28 02:12:34 UTC 2010: > UNAME : Linux x86-15.phx2.fedoraproject.org 2.6.32-44.el6.x86_64 > #1 SMP Wed Jul 7 15:47:50 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux > INSTFLG : -1 0 -a 1 > ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Corei7 -DATL_CPUMHZ=1596 > -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 > F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle > CACHEEDGE: 524288 > F77 : gfortran, version GNU Fortran (GCC) 4.5.0 20100716 (Red > Hat 4.5.0-3) > F77FLAGS : -O -g -Wa,--noexecstack -fPIC -m64 > SMC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) > SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 > -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 > SKC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) > SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 > -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 > > -L/opt/lisa/os_v2/canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/../../appdata/canopy-1.1.0.1371.rh5-x86_64/lib > -lptf77blas -lptcblas -latlas > > I tried to redirect the stdout and stderr, but it don't work. I looked > into NumPy code and I don't see a way to change that from a library > that use NumPy. > > Is there a way to access to silence that output? > > Is there a new place of the old interface: numpy.distutils.__config__ > that I can reuse? It don't need to be a public interface. > Looks like the problem is in numpy/distutils/exec_command.py and numpy/distutils/log.py. In particular, it looks like a logging problem and I'd guess it may be connected to the debug logs. Also, looks like numpy.distutils.log inherits from distutils.log, which may be obsolete. You might get some control of the log with an environment variable, but the function itself looks largely undocumented. 
That said, it should probably be printing to stderror when run from the command line. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Nov 15 12:40:54 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 15 Nov 2013 10:40:54 -0700 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: On Fri, Nov 15, 2013 at 10:31 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > > On Fri, Nov 15, 2013 at 8:12 AM, Fr?d?ric Bastien wrote: > >> Hi, >> >> NumPy 1.8 removed the private NumPy interface >> numpy.distutils.__config__. So a Theano user make a PR to make Theano >> use the official interface: >> >> numpy.distutils.system_info.get_info("blas_opt") >> >> But this output many stuff to the output. I can silence part of it by >> silencing warnings, but I'm not able to silence this output: >> >> Found executable /usr/bin/gfortran >> ATLAS version 3.8.3 built by mockbuild on Wed Jul 28 02:12:34 UTC 2010: >> UNAME : Linux x86-15.phx2.fedoraproject.org 2.6.32-44.el6.x86_64 >> #1 SMP Wed Jul 7 15:47:50 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux >> INSTFLG : -1 0 -a 1 >> ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Corei7 -DATL_CPUMHZ=1596 >> -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 >> F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle >> CACHEEDGE: 524288 >> F77 : gfortran, version GNU Fortran (GCC) 4.5.0 20100716 (Red >> Hat 4.5.0-3) >> F77FLAGS : -O -g -Wa,--noexecstack -fPIC -m64 >> SMC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) >> SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 >> -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 >> SKC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) >> SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 >> -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 >> >> 
-L/opt/lisa/os_v2/canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/../../appdata/canopy-1.1.0.1371.rh5-x86_64/lib >> -lptf77blas -lptcblas -latlas >> >> I tried to redirect the stdout and stderr, but it don't work. I looked >> into NumPy code and I don't see a way to change that from a library >> that use NumPy. >> >> Is there a way to access to silence that output? >> >> Is there a new place of the old interface: numpy.distutils.__config__ >> that I can reuse? It don't need to be a public interface. >> > > Looks like the problem is in numpy/distutils/exec_command.py and > numpy/distutils/log.py. In particular, it looks like a logging problem and > I'd guess it may be connected to the debug logs. Also, looks like > numpy.distutils.log inherits from distutils.log, which may be obsolete. You > might get some control of the log with an environment variable, but the > function itself looks largely undocumented. That said, it should probably > be printing to stderror when run from the command line. > > In numpy/distutils/__init__.py line 886 try changing log.set_verbosity(-2) to log.set_verbosity(2) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Nov 15 13:01:36 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 15 Nov 2013 11:01:36 -0700 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: On Fri, Nov 15, 2013 at 10:40 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > > On Fri, Nov 15, 2013 at 10:31 AM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> >> On Fri, Nov 15, 2013 at 8:12 AM, Fr?d?ric Bastien wrote: >> >>> Hi, >>> >>> NumPy 1.8 removed the private NumPy interface >>> numpy.distutils.__config__. So a Theano user make a PR to make Theano >>> use the official interface: >>> >>> numpy.distutils.system_info.get_info("blas_opt") >>> >>> But this output many stuff to the output. 
I can silence part of it by >>> silencing warnings, but I'm not able to silence this output: >>> >>> Found executable /usr/bin/gfortran >>> ATLAS version 3.8.3 built by mockbuild on Wed Jul 28 02:12:34 UTC 2010: >>> UNAME : Linux x86-15.phx2.fedoraproject.org 2.6.32-44.el6.x86_64 >>> #1 SMP Wed Jul 7 15:47:50 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux >>> INSTFLG : -1 0 -a 1 >>> ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Corei7 -DATL_CPUMHZ=1596 >>> -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 >>> F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle >>> CACHEEDGE: 524288 >>> F77 : gfortran, version GNU Fortran (GCC) 4.5.0 20100716 (Red >>> Hat 4.5.0-3) >>> F77FLAGS : -O -g -Wa,--noexecstack -fPIC -m64 >>> SMC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) >>> SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 >>> -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 >>> SKC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) >>> SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 >>> -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 >>> >>> -L/opt/lisa/os_v2/canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/../../appdata/canopy-1.1.0.1371.rh5-x86_64/lib >>> -lptf77blas -lptcblas -latlas >>> >>> I tried to redirect the stdout and stderr, but it don't work. I looked >>> into NumPy code and I don't see a way to change that from a library >>> that use NumPy. >>> >>> Is there a way to access to silence that output? >>> >>> Is there a new place of the old interface: numpy.distutils.__config__ >>> that I can reuse? It don't need to be a public interface. >>> >> >> Looks like the problem is in numpy/distutils/exec_command.py and >> numpy/distutils/log.py. In particular, it looks like a logging problem and >> I'd guess it may be connected to the debug logs. Also, looks like >> numpy.distutils.log inherits from distutils.log, which may be obsolete. 
You
>> might get some control of the log with an environment variable, but the
>> function itself looks largely undocumented. That said, it should probably
>> be printing to stderr when run from the command line.
>>
>
> In numpy/distutils/__init__.py line 886 try changing
> log.set_verbosity(-2) to log.set_verbosity(2)
>
OK, that isn't right.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nouiz at nouiz.org  Fri Nov 15 13:05:34 2013
From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=)
Date: Fri, 15 Nov 2013 13:05:34 -0500
Subject: [Numpy-discussion] Silencing NumPy output
In-Reply-To: 
References: 
Message-ID: 

I found a line like this in the file:

numpy/distutils/fcompiler/__init__.py

I changed the -2 to 2, but it didn't change anything. In fact, this
line wasn't called.

The function set_verbosity() is called only once, with the value of 0,
the default value set at import. If I change that to 2 or -2, it does
not change anything.

I think that the problem is related to exec_command, as you told. It
seems it is called in a way that doesn't use the stdout/stderr set in
Python, so I can't redirect it. The problem is that it uses os.system(),
and we can't redirect its stdout/stderr.

What about replacing the os.system call with subprocess.Popen? This
would allow us to catch the stdout/stderr. We use this call in Theano
and it is compatible with python 2.4.

Fred

On Fri, Nov 15, 2013 at 12:40 PM, Charles R Harris wrote:
>
>
>
> On Fri, Nov 15, 2013 at 10:31 AM, Charles R Harris
> wrote:
>>
>>
>>
>>
>> On Fri, Nov 15, 2013 at 8:12 AM, Frédéric Bastien wrote:
>>>
>>> Hi,
>>>
>>> NumPy 1.8 removed the private NumPy interface
>>> numpy.distutils.__config__. So a Theano user make a PR to make Theano
>>> use the official interface:
>>>
>>> numpy.distutils.system_info.get_info("blas_opt")
>>>
>>> But this output many stuff to the output.
I can silence part of it by >>> silencing warnings, but I'm not able to silence this output: >>> >>> Found executable /usr/bin/gfortran >>> ATLAS version 3.8.3 built by mockbuild on Wed Jul 28 02:12:34 UTC 2010: >>> UNAME : Linux x86-15.phx2.fedoraproject.org 2.6.32-44.el6.x86_64 >>> #1 SMP Wed Jul 7 15:47:50 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux >>> INSTFLG : -1 0 -a 1 >>> ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Corei7 -DATL_CPUMHZ=1596 >>> -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 >>> F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle >>> CACHEEDGE: 524288 >>> F77 : gfortran, version GNU Fortran (GCC) 4.5.0 20100716 (Red >>> Hat 4.5.0-3) >>> F77FLAGS : -O -g -Wa,--noexecstack -fPIC -m64 >>> SMC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) >>> SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 >>> -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 >>> SKC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) >>> SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 >>> -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 >>> >>> -L/opt/lisa/os_v2/canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/../../appdata/canopy-1.1.0.1371.rh5-x86_64/lib >>> -lptf77blas -lptcblas -latlas >>> >>> I tried to redirect the stdout and stderr, but it don't work. I looked >>> into NumPy code and I don't see a way to change that from a library >>> that use NumPy. >>> >>> Is there a way to access to silence that output? >>> >>> Is there a new place of the old interface: numpy.distutils.__config__ >>> that I can reuse? It don't need to be a public interface. >> >> >> Looks like the problem is in numpy/distutils/exec_command.py and >> numpy/distutils/log.py. In particular, it looks like a logging problem and >> I'd guess it may be connected to the debug logs. Also, looks like >> numpy.distutils.log inherits from distutils.log, which may be obsolete. 
You >> might get some control of the log with an environment variable, but the >> function itself looks largely undocumented. That said, it should probably be >> printing to stderror when run from the command line. >> > > In numpy/distutils/__init__.py line 886 try changing log.set_verbosity(-2) > to log.set_verbosity(2) > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From charlesr.harris at gmail.com Fri Nov 15 13:21:55 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 15 Nov 2013 11:21:55 -0700 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: On Fri, Nov 15, 2013 at 11:05 AM, Fr?d?ric Bastien wrote: > I found a line like this in the file: > > numpy/distutils/fcompiler/__init__.py > > I changed the -2 to 2, but it didn't change anything. In fact, this > line wasn't called. > > The fct set_verbosity() is called only once, with the value of 0. The > default value set at import. If I change that to 2 or -2, it do not > change anything. > > I think that the problem is related to the exec_command as you told. > It seam it call it in a way that don't take the stdout/stderr set in > Python. So I can't redirect it. The problem is that is use os.system() > and we can't redirect its stdout/stderr. > > What about replacing the os.system call to subprocess.Popen? This > would allow us to catch the stdout/stderr. We use this call in Theano > and it is compatible with python 2.4. > > Numpy 1.8 doesn't support python 2.4 in any case, so that isn't a problem ;) Sure, give it a shot. Looks like subprocess.Popen was intended to replace os.system in any case. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cournape at gmail.com Fri Nov 15 14:28:29 2013 From: cournape at gmail.com (David Cournapeau) Date: Fri, 15 Nov 2013 19:28:29 +0000 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: On Fri, Nov 15, 2013 at 6:21 PM, Charles R Harris wrote: > > > > On Fri, Nov 15, 2013 at 11:05 AM, Fr?d?ric Bastien wrote: > >> I found a line like this in the file: >> >> numpy/distutils/fcompiler/__init__.py >> >> I changed the -2 to 2, but it didn't change anything. In fact, this >> line wasn't called. >> >> The fct set_verbosity() is called only once, with the value of 0. The >> default value set at import. If I change that to 2 or -2, it do not >> change anything. >> >> I think that the problem is related to the exec_command as you told. >> It seam it call it in a way that don't take the stdout/stderr set in >> Python. So I can't redirect it. The problem is that is use os.system() >> and we can't redirect its stdout/stderr. >> >> What about replacing the os.system call to subprocess.Popen? This >> would allow us to catch the stdout/stderr. We use this call in Theano >> and it is compatible with python 2.4. >> >> > Numpy 1.8 doesn't support python 2.4 in any case, so that isn't a problem > ;) > > Sure, give it a shot. Looks like subprocess.Popen was intended to replace > os.system in any case. > Except that output is not 'real time' with straight Popen, and doing so reliably on every platform (cough - windows - cough) is not completely trivial. You also have to handle buffered output, etc... That code is very fragile, so this would be quite a lot of testing to change, and I am not sure it worths it. David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nouiz at nouiz.org Fri Nov 15 14:39:05 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Fri, 15 Nov 2013 14:39:05 -0500 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: If it doesn't change, it currently means that each process that uses Theano and BLAS will have this printed: Found executable /usr/bin/gfortran ATLAS version 3.8.3 built by mockbuild on Wed Jul 28 02:12:34 UTC 2010: UNAME : Linux x86-15.phx2.fedoraproject.org 2.6.32-44.el6.x86_64 #1 SMP Wed Jul 7 15:47:50 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Corei7 -DATL_CPUMHZ=1596 -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 524288 F77 : gfortran, version GNU Fortran (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) F77FLAGS : -O -g -Wa,--noexecstack -fPIC -m64 SMC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 SKC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 Not very nice. As mentioned, in Theano we do this and it works on Mac, Linux and Windows, in 32 and 64 bits. Yes, we have our own wrapper around it to handle Windows. But as it is already done and tested in those cases, I think it is a good idea to do it. Do you think that it should be tested in other environments? thanks Frédéric p.s. There are also warnings printed, but I can hide them without changes in NumPy. p.p.s. The first line isn't yet removed in my local tests, so maybe more is needed.
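A minimal sketch of the change proposed here: run the command through subprocess instead of os.system() so its stdout/stderr can be captured and either discarded or re-printed. The helper name captured_system is illustrative only, not the actual numpy.distutils exec_command API.

```python
import subprocess
import sys

def captured_system(cmd):
    # Run *cmd* like os.system(cmd), but capture stdout/stderr so the
    # caller can silence, filter, or re-print them.
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate()
    return p.returncode, out, err

# Example: only show the command's stderr if it failed.
status, out, err = captured_system('"%s" -c "print(42)"' % sys.executable)
if status != 0:
    sys.stderr.write(err.decode())
```

One caveat with this approach: communicate() only hands back the output once the process exits, and shell quoting and buffering details differ across platforms.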
On Fri, Nov 15, 2013 at 2:28 PM, David Cournapeau wrote: > > > > On Fri, Nov 15, 2013 at 6:21 PM, Charles R Harris > wrote: >> >> >> >> >> On Fri, Nov 15, 2013 at 11:05 AM, Fr?d?ric Bastien >> wrote: >>> >>> I found a line like this in the file: >>> >>> numpy/distutils/fcompiler/__init__.py >>> >>> I changed the -2 to 2, but it didn't change anything. In fact, this >>> line wasn't called. >>> >>> The fct set_verbosity() is called only once, with the value of 0. The >>> default value set at import. If I change that to 2 or -2, it do not >>> change anything. >>> >>> I think that the problem is related to the exec_command as you told. >>> It seam it call it in a way that don't take the stdout/stderr set in >>> Python. So I can't redirect it. The problem is that is use os.system() >>> and we can't redirect its stdout/stderr. >>> >>> What about replacing the os.system call to subprocess.Popen? This >>> would allow us to catch the stdout/stderr. We use this call in Theano >>> and it is compatible with python 2.4. >>> >> >> Numpy 1.8 doesn't support python 2.4 in any case, so that isn't a problem >> ;) >> >> Sure, give it a shot. Looks like subprocess.Popen was intended to replace >> os.system in any case. > > > Except that output is not 'real time' with straight Popen, and doing so > reliably on every platform (cough - windows - cough) is not completely > trivial. You also have to handle buffered output, etc... That code is very > fragile, so this would be quite a lot of testing to change, and I am not > sure it worths it. 
> > David > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From robert.kern at gmail.com Fri Nov 15 14:41:36 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 15 Nov 2013 19:41:36 +0000 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: On Fri, Nov 15, 2013 at 7:28 PM, David Cournapeau wrote: > > On Fri, Nov 15, 2013 at 6:21 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: >> Sure, give it a shot. Looks like subprocess.Popen was intended to replace os.system in any case. > > Except that output is not 'real time' with straight Popen, and doing so reliably on every platform (cough - windows - cough) is not completely trivial. You also have to handle buffered output, etc... That code is very fragile, so this would be quite a lot of testing to change, and I am not sure it worths it. It doesn't have to be "real time". Just use .communicate() and print out the stdout and stderr to their appropriate streams after the subprocess finishes. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtaylor.debian at googlemail.com Fri Nov 15 14:47:22 2013 From: jtaylor.debian at googlemail.com (Julian Taylor) Date: Fri, 15 Nov 2013 20:47:22 +0100 Subject: [Numpy-discussion] Caution about using intrisincs, and other 'advanced' optimizations In-Reply-To: References: <528270A1.70307@googlemail.com> <5283C1E9.7010900@googlemail.com> Message-ID: <52867A4A.5010107@googlemail.com> > > Will do, but the errors I am seeing only appear in the > simc.inc.src-based implementation of BOOL_logical_or (they disappear if > I disable the simd intrinsics manually in the numpy headers). > that is because the simd code always looks at the stride (as it only can run with unit strides) while the simple loop doesn't if the dimension is 1. 
GCC 4.1 is older than python2.5 which we do not support anymore in numpy >= 1.8. If you insist on using a buggy old compiler one could always use numpy 1.7. Also intrinsics are not more prone to compiler bugs than any other code, so I see no reason to special case them. The code itself is more prone to bugs due to its higher complexity in some parts, but I think it is reasonably well tested. From max_linke at gmx.de Fri Nov 15 17:47:32 2013 From: max_linke at gmx.de (Max Linke) Date: Fri, 15 Nov 2013 23:47:32 +0100 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: <1384555652.9792.7.camel@x200.kel.wh.lokal> On Thu, 2013-11-14 at 17:19 +0000, David Cournapeau wrote: > On Thu, Nov 14, 2013 at 4:45 PM, Charles Waldman wrote: > > > Can you post the raw data? It seems like there are just a couple of "bad" > > sizes, I'd like to know more precisely what these are. > > > > Indeed. Several of the sizes generated by logspace(2, 7, 25) are prime > numbers, where numpy.fft is actually O(N^2) and not the usual O(NLogN). Ok I thought the fft is always O(N log N). But it makes sense that this only works if the input-size factorizes well. Thanks for clearing this up. best Max > > There is unfortunately no freely (aka BSD-like licensed) available fft > implementation that works for prime (or 'close to prime') numbers, and > implementing one that is precise enough is not trivial (look at Bernstein > transform for more details). > > David > > > > > It's typical for FFT to perform better at a sample size that is a power of > > 2, and algorithms like FFTW take advantage of factoring the size, and > > "sizes that are products of small factors are transformed most efficiently." > > > > - Charles > > > > On Thu, Nov 14, 2013 at 10:18 AM, Max Linke wrote: > > > >> Hi > >> > >> I noticed some strange scaling behavior of the fft runtime.
For most > >> array-sizes the fft will be calculated in a couple of seconds, even for > >> very large ones. But there are some array sizes in between where it will > >> take about ~20 min (e.g. 400000). This is really odd for me because an > >> array with 10 million entries is transformed in ~2s. Is this typical for > >> numpy? > >> > >> I attached a plot and an ipynb to reproduce and illustrate it. > >> > >> best Max > >> > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at scipy.org > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > >> > >> > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From cournape at gmail.com Fri Nov 15 17:49:36 2013 From: cournape at gmail.com (David Cournapeau) Date: Fri, 15 Nov 2013 22:49:36 +0000 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: On Fri, Nov 15, 2013 at 7:39 PM, Fr?d?ric Bastien wrote: > If it don't change, currently it mean that each process that use > Theano and use BLAS will have that printed: > > Found executable /usr/bin/gfortran > ATLAS version 3.8.3 built by mockbuild on Wed Jul 28 02:12:34 UTC 2010: > UNAME : Linux x86-15.phx2.fedoraproject.org 2.6.32-44.el6.x86_64 > #1 SMP Wed Jul 7 15:47:50 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux > INSTFLG : -1 0 -a 1 > ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_Corei7 -DATL_CPUMHZ=1596 > -DATL_SSE3 -DATL_SSE2 -DATL_SSE1 -DATL_USE64BITS -DATL_GAS_x8664 > F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle > CACHEEDGE: 524288 > F77 : gfortran, version GNU Fortran (GCC) 4.5.0 20100716 (Red > Hat 4.5.0-3) > F77FLAGS : -O -g -Wa,--noexecstack -fPIC -m64 > SMC : gcc, 
version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) > SMCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 > -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 > SKC : gcc, version gcc (GCC) 4.5.0 20100716 (Red Hat 4.5.0-3) > SKCFLAGS : -fomit-frame-pointer -mfpmath=sse -msse3 -O2 > -fno-schedule-insns2 -g -Wa,--noexecstack -fPIC -m64 > > Not very nice. > > As told, in Theano we do that and it work on Mac, Linux and Windows in > 32 and 64 bits. Yes, we have our own wrapper around it to handle > windows. But as it is already done and tested in those cases, I think > it is a good idea to do it. > > Do you think that it should be tested in other environment? > I don't worry much about unix-like environments, windows with 'native' compilers is my main worry. Currently, building scipy with VS 2008 and ifort causes weird issues because scipy recently gained lots of files to be compiled, and sometimes, for some reason, the compilers crash. See: http://mail.scipy.org/pipermail/scipy-dev/2013-August/019169.html Now, if using subprocess fixes those, that alone would be a big +1 from me in its favor. Do you have the patch somewhere I can look at for testing ? David > > thanks > > Fr?d?ric > > p.s. There is also warning printed, but I can hide them without change in > NumPy. > p.p.s The first line isn't yet removed in my local tests, so maybe > more is needed. > > On Fri, Nov 15, 2013 at 2:28 PM, David Cournapeau > wrote: > > > > > > > > On Fri, Nov 15, 2013 at 6:21 PM, Charles R Harris > > wrote: > >> > >> > >> > >> > >> On Fri, Nov 15, 2013 at 11:05 AM, Fr?d?ric Bastien > >> wrote: > >>> > >>> I found a line like this in the file: > >>> > >>> numpy/distutils/fcompiler/__init__.py > >>> > >>> I changed the -2 to 2, but it didn't change anything. In fact, this > >>> line wasn't called. > >>> > >>> The fct set_verbosity() is called only once, with the value of 0. The > >>> default value set at import. If I change that to 2 or -2, it do not > >>> change anything. 
> >>> > >>> I think that the problem is related to the exec_command as you told. > >>> It seam it call it in a way that don't take the stdout/stderr set in > >>> Python. So I can't redirect it. The problem is that is use os.system() > >>> and we can't redirect its stdout/stderr. > >>> > >>> What about replacing the os.system call to subprocess.Popen? This > >>> would allow us to catch the stdout/stderr. We use this call in Theano > >>> and it is compatible with python 2.4. > >>> > >> > >> Numpy 1.8 doesn't support python 2.4 in any case, so that isn't a > problem > >> ;) > >> > >> Sure, give it a shot. Looks like subprocess.Popen was intended to > replace > >> os.system in any case. > > > > > > Except that output is not 'real time' with straight Popen, and doing so > > reliably on every platform (cough - windows - cough) is not completely > > trivial. You also have to handle buffered output, etc... That code is > very > > fragile, so this would be quite a lot of testing to change, and I am not > > sure it worths it. > > > > David > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Fri Nov 15 17:49:58 2013 From: cournape at gmail.com (David Cournapeau) Date: Fri, 15 Nov 2013 22:49:58 +0000 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: On Fri, Nov 15, 2013 at 7:41 PM, Robert Kern wrote: > On Fri, Nov 15, 2013 at 7:28 PM, David Cournapeau > wrote: > > > > On Fri, Nov 15, 2013 at 6:21 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > > >> Sure, give it a shot. 
Looks like subprocess.Popen was intended to > replace os.system in any case. > > > > Except that output is not 'real time' with straight Popen, and doing so > reliably on every platform (cough - windows - cough) is not completely > trivial. You also have to handle buffered output, etc... That code is very > fragile, so this would be quite a lot of testing to change, and I am not > sure it worths it. > > It doesn't have to be "real time". Just use .communicate() and print out > the stdout and stderr to their appropriate streams after the subprocess > finishes. > Indeed, it does not have to be, but that's useful for debugging compilation issues (not so much for numpy itself, but for some packages which have files that takes a very long time to build, like scipy.sparsetools or bottleneck). That's a minor point compared to the potential issues when building on windows, though. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Fri Nov 15 17:52:46 2013 From: cournape at gmail.com (David Cournapeau) Date: Fri, 15 Nov 2013 22:52:46 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: <1384555652.9792.7.camel@x200.kel.wh.lokal> References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <1384555652.9792.7.camel@x200.kel.wh.lokal> Message-ID: On Fri, Nov 15, 2013 at 10:47 PM, Max Linke wrote: > On Thu, 2013-11-14 at 17:19 +0000, David Cournapeau wrote: > > On Thu, Nov 14, 2013 at 4:45 PM, Charles Waldman > wrote: > > > > > Can you post the raw data? It seems like there are just a couple of > "bad" > > > sizes, I'd like to know more precisely what these are. > > > > > > > Indeed. Several of the sizes generated by logspace(2, 7, 25) are prime > > numbers, where numpy.fft is actually O(N^2) and not the usual O(NLogN). > Ok I thought the fft is always O(N log N). But it makes sense that this > only works if the input-size factorizes well. Thanks for clearing this > up. 
> To be exact, there *are* FFT-like algorithms for prime number size, but that's not implemented in numpy or scipy (see here: http://numpy-discussion.10968.n7.nabble.com/Prime-size-FFT-bluestein-transform-vs-general-chirp-z-transform-td3171.html for an old discussion on that topic). cheers, David > > best Max > > > There is unfortunately no freely (aka BSD-like licensed) available fft > > implementation that works for prime (or 'close to prime') numbers, and > > implementing one that is precise enough is not trivial (look at Bernstein > > transform for more details). > > > > David > > > > > > > > It's typical for FFT to perform better at a sample size that is a > power of > > > 2, and algorithms like FFTW take advantage of factoring the size, and > > > "sizes that are products of small factors are transformed most > efficiently." > > > > > > - Charles > > > > > > On Thu, Nov 14, 2013 at 10:18 AM, Max Linke wrote: > > > > > >> Hi > > >> > > >> I noticed some strange scaling behavior of the fft runtime. For most > > >> array-sizes the fft will be calculated in a couple of seconds, even > for > > >> very large ones. But there are some array sizes in between where it > will > > >> take about ~20 min (e.g. 400000). This is really odd for me because an > > >> array with 10 million entries is transformed in ~2s. Is this typical > for > > >> numpy? > > >> > > >> I attached a plot and an ipynb to reproduce and illustrate it.
> > >> > > >> best Max > > >> > > >> _______________________________________________ > > >> NumPy-Discussion mailing list > > >> NumPy-Discussion at scipy.org > > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > >> > > >> > > > > > > _______________________________________________ > > > NumPy-Discussion mailing list > > > NumPy-Discussion at scipy.org > > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Nov 16 07:04:18 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 16 Nov 2013 13:04:18 +0100 Subject: [Numpy-discussion] Caution about using intrisincs, and other 'advanced' optimizations In-Reply-To: <52867A4A.5010107@googlemail.com> References: <528270A1.70307@googlemail.com> <5283C1E9.7010900@googlemail.com> <52867A4A.5010107@googlemail.com> Message-ID: On Fri, Nov 15, 2013 at 8:47 PM, Julian Taylor < jtaylor.debian at googlemail.com> wrote: > > > > Will do, but the errors I am seeing only appear in the > > simc.inc.src-based implementation of BOOL_logical_or (they disappear if > > I disable the simd intrinsics manually in the numpy headers). > > > > that is because the simd code always looks at the stride (as it only can > run with unit strides) while the simple loop doesn't if the dimension is 1. > > GCC 4.1 is older than python2.5 which we do not support anymore in numpy > >= 1.8. > If you insist on using a buggy old compiler one could always use numpy 1.7. > Compiler age and Python version are not equivalent. 
The former is much harder to upgrade, and much more depends on it for a user. OS X still ships gcc 4.2, and the default compiler for Python 2.6 on OS X is gcc 4.0 which we definitely still support. On Windows we even need gcc 3.4.5 until we find the right way to get rid of it. Ralf > Also intrinsics are not more prone to compiler bugs than any other code, > so I see no reason to special case them. > The code itself is more prone bugs due to its higher complexity in some > parts, but I think it is reasonably well tested. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From djpine at gmail.com Sat Nov 16 08:28:33 2013 From: djpine at gmail.com (David Pine) Date: Sat, 16 Nov 2013 08:28:33 -0500 Subject: [Numpy-discussion] runtime warning for where Message-ID: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> The program at the bottom of this message returns the following runtime warning: python test.py test.py:5: RuntimeWarning: invalid value encountered in divide return np.where(x==0., 1., np.sin(x)/x) The function works correctly returning x = np.array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]) y = np.array([ 1. , 0.84147098, 0.45464871, 0.04704 , -0.18920062, -0.19178485, -0.04656925, 0.09385523, 0.12366978, 0.04579094, -0.05440211]) The runtime warning suggests that np.where evaluates np.sin(x)/x at all x, including x=0, even though the np.where function returns the correct value of 1. when x is 0. This seems odd to me. Why issue a runtime warning? Nothing is wrong. Moreover, I don't recall numpy issuing such warnings in earlier versions. 
import numpy as np import matplotlib.pyplot as plt def sinc(x): return np.where(x==0., 1., np.sin(x)/x) x = np.linspace(0., 10., 11) y = sinc(x) plt.plot(x, y) plt.show() From argriffi at ncsu.edu Sat Nov 16 08:36:45 2013 From: argriffi at ncsu.edu (alex) Date: Sat, 16 Nov 2013 08:36:45 -0500 Subject: [Numpy-discussion] runtime warning for where In-Reply-To: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> References: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> Message-ID: On Sat, Nov 16, 2013 at 8:28 AM, David Pine wrote: > The program at the bottom of this message returns the following runtime warning: > > python test.py > test.py:5: RuntimeWarning: invalid value encountered in divide > return np.where(x==0., 1., np.sin(x)/x) > > The function works correctly returning > x = np.array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]) > y = np.array([ 1. , 0.84147098, 0.45464871, 0.04704 , -0.18920062, > -0.19178485, -0.04656925, 0.09385523, 0.12366978, 0.04579094, > -0.05440211]) > > The runtime warning suggests that np.where evaluates np.sin(x)/x at all x, including x=0, even though the np.where function returns the correct value of 1. when x is 0. This seems odd to me. Why issue a runtime warning? Nothing is wrong. Moreover, I don't recall numpy issuing such warnings in earlier versions. > > import numpy as np > import matplotlib.pyplot as plt > > def sinc(x): > return np.where(x==0., 1., np.sin(x)/x) > > x = np.linspace(0., 10., 11) > y = sinc(x) > > plt.plot(x, y) > plt.show() For what it's worth, you can see the different strategies that numpy and scipy use to work around this warning. https://github.com/numpy/numpy/blob/master/numpy/lib/function_base.py#L2662 https://github.com/scipy/scipy/blob/master/scipy/special/basic.py#L43 Numpy sinc uses a small number instead of zero. Scipy sinc disables the warning explicitly. 
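The workarounds described here can be sketched as follows. This is a simplified illustration, not the libraries' actual code: sinc_errstate silences the warning around the division (the scipy-style approach), while sinc_lazy evaluates sin(x)/x only where it is safe, so the invalid division never happens and no warning is raised.

```python
import numpy as np

def sinc_errstate(x):
    # Compute sin(x)/x everywhere but silence the 0/0 warning,
    # then let np.where pick 1.0 at x == 0.
    with np.errstate(invalid='ignore', divide='ignore'):
        return np.where(x == 0., 1., np.sin(x)/x)

def sinc_lazy(x):
    # Evaluate sin(x)/x only where x != 0, so the invalid
    # division is never performed.
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)        # fill value for x == 0
    nz = x != 0.
    out[nz] = np.sin(x[nz]) / x[nz]
    return out

x = np.linspace(0., 10., 11)
```

Both return 1.0 at x == 0 and agree with sin(x)/x elsewhere; the second form also avoids computing the discarded branch.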
From argriffi at ncsu.edu Sat Nov 16 08:42:52 2013 From: argriffi at ncsu.edu (alex) Date: Sat, 16 Nov 2013 08:42:52 -0500 Subject: [Numpy-discussion] runtime warning for where In-Reply-To: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> References: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> Message-ID: On Sat, Nov 16, 2013 at 8:28 AM, David Pine wrote: > The program at the bottom of this message returns the following runtime warning: > > python test.py > test.py:5: RuntimeWarning: invalid value encountered in divide > return np.where(x==0., 1., np.sin(x)/x) > > The function works correctly returning > x = np.array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]) > y = np.array([ 1. , 0.84147098, 0.45464871, 0.04704 , -0.18920062, > -0.19178485, -0.04656925, 0.09385523, 0.12366978, 0.04579094, > -0.05440211]) > > The runtime warning suggests that np.where evaluates np.sin(x)/x at all x, including x=0, even though the np.where function returns the correct value of 1. when x is 0. This seems odd to me. Why issue a runtime warning? Nothing is wrong. Moreover, I don't recall numpy issuing such warnings in earlier versions. > > import numpy as np > import matplotlib.pyplot as plt > > def sinc(x): > return np.where(x==0., 1., np.sin(x)/x) > > x = np.linspace(0., 10., 11) > y = sinc(x) > > plt.plot(x, y) > plt.show() Also notice that scipy.stats.distributions has its own private implementation of where, called _lazywhere. It avoids evaluating the function when the condition is false. https://github.com/scipy/scipy/blob/master/scipy/stats/distributions.py#L506 From matthieu.brucher at gmail.com Sat Nov 16 08:57:31 2013 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 16 Nov 2013 14:57:31 +0100 Subject: [Numpy-discussion] runtime warning for where In-Reply-To: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> References: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> Message-ID: Hi, Don't forget that np.where is not smart. 
First np.sin(x)/x is computed for the array, which is why you see the warning, and then np.where selects the proper final results. Cheers, Matthieu 2013/11/16 David Pine : > The program at the bottom of this message returns the following runtime warning: > > python test.py > test.py:5: RuntimeWarning: invalid value encountered in divide > return np.where(x==0., 1., np.sin(x)/x) > > The function works correctly returning > x = np.array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]) > y = np.array([ 1. , 0.84147098, 0.45464871, 0.04704 , -0.18920062, > -0.19178485, -0.04656925, 0.09385523, 0.12366978, 0.04579094, > -0.05440211]) > > The runtime warning suggests that np.where evaluates np.sin(x)/x at all x, including x=0, even though the np.where function returns the correct value of 1. when x is 0. This seems odd to me. Why issue a runtime warning? Nothing is wrong. Moreover, I don't recall numpy issuing such warnings in earlier versions. > > import numpy as np > import matplotlib.pyplot as plt > > def sinc(x): > return np.where(x==0., 1., np.sin(x)/x) > > x = np.linspace(0., 10., 11) > y = sinc(x) > > plt.plot(x, y) > plt.show() > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From djpine at gmail.com Sat Nov 16 09:05:41 2013 From: djpine at gmail.com (David J Pine) Date: Sat, 16 Nov 2013 09:05:41 -0500 Subject: [Numpy-discussion] runtime warning for where In-Reply-To: References: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> Message-ID: Thanks. I must have had runtime warnings turned off in my previous versions of python. 
On Sat, Nov 16, 2013 at 8:42 AM, alex wrote: > On Sat, Nov 16, 2013 at 8:28 AM, David Pine wrote: > > The program at the bottom of this message returns the following runtime > warning: > > > > python test.py > > test.py:5: RuntimeWarning: invalid value encountered in divide > > return np.where(x==0., 1., np.sin(x)/x) > > > > The function works correctly returning > > x = np.array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., > 9., 10.]) > > y = np.array([ 1. , 0.84147098, 0.45464871, 0.04704 , > -0.18920062, > > -0.19178485, -0.04656925, 0.09385523, 0.12366978, 0.04579094, > > -0.05440211]) > > > > The runtime warning suggests that np.where evaluates np.sin(x)/x at all > x, including x=0, even though the np.where function returns the correct > value of 1. when x is 0. This seems odd to me. Why issue a runtime > warning? Nothing is wrong. Moreover, I don't recall numpy issuing such > warnings in earlier versions. > > > > import numpy as np > > import matplotlib.pyplot as plt > > > > def sinc(x): > > return np.where(x==0., 1., np.sin(x)/x) > > > > x = np.linspace(0., 10., 11) > > y = sinc(x) > > > > plt.plot(x, y) > > plt.show() > > Also notice that scipy.stats.distributions has its own private > implementation of where, called _lazywhere. It avoids evaluating the > function when the condition is false. > > > https://github.com/scipy/scipy/blob/master/scipy/stats/distributions.py#L506 > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charles at crunch.io Sat Nov 16 10:54:12 2013 From: charles at crunch.io (Charles Waldman) Date: Sat, 16 Nov 2013 09:54:12 -0600 Subject: [Numpy-discussion] runtime warning for where In-Reply-To: References: <9E8EAE52-820A-4855-80EB-7B44D0E65D98@gmail.com> Message-ID: > Don't forget that np.where is not smart And there's really no way it could be. np.where, like all Python functions, must evaluate all of the arguments first, then call the function. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Nov 17 04:53:13 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 17 Nov 2013 10:53:13 +0100 Subject: [Numpy-discussion] ANN: Scipy 0.13.1 release Message-ID: Hi, I'm happy to announce the availability of the scipy 0.13.1 release. This is a bugfix only release; it contains several fixes for issues in ndimage. Thanks to Pauli Virtanen and Ray Jones for fixing these issues quickly. Source tarballs, binaries and release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.13.1/. Cheers, Ralf ========================== SciPy 0.13.1 Release Notes ========================== SciPy 0.13.1 is a bug-fix release with no new features compared to 0.13.0. The only changes are several fixes in ``ndimage``, one of which was a serious regression in ``ndimage.label`` (Github issue 3025), which gave incorrect results in 0.13.0. Issues fixed ------------ - 3025: ``ndimage.label`` returns incorrect results in scipy 0.13.0 - 1992: ``ndimage.label`` return type changed from int32 to uint32 - 1992: ``ndimage.find_objects`` doesn't work with int32 input in some cases -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Mon Nov 18 13:31:58 2013 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 18 Nov 2013 10:31:58 -0800 Subject: [Numpy-discussion] ANN: Scipy 0.13.1 release In-Reply-To: References: Message-ID: On Sun, Nov 17, 2013 at 1:53 AM, Ralf Gommers wrote: > I'm happy to announce the availability of the scipy 0.13.1 release. This > is a bugfix only release; it contains several fixes for issues in ndimage. > Thanks to Pauli Virtanen and Ray Jones for fixing these issues quickly. > > Thanks Ralf et al. for more great work! Source tarballs, binaries and release notes can be found at > http://sourceforge.net/projects/scipy/files/scipy/0.13.1/. > > What's the plan for OS-X binaries? It would be great to have those up sooner than later...Can I help? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Nov 18 13:45:32 2013 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 18 Nov 2013 10:45:32 -0800 Subject: [Numpy-discussion] ANN: Scipy 0.13.1 release In-Reply-To: References: Message-ID: On Mon, Nov 18, 2013 at 10:31 AM, Chris Barker wrote: > What's the plan for OS-X binaries? It would be great to have those up > sooner than later...Can I help? > > oops sorry there they are! Were they just uploaded -- I really did look before posting that! Thanks all! -Chris > -Chris > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Nov 18 14:03:54 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 18 Nov 2013 20:03:54 +0100 Subject: [Numpy-discussion] ANN: Scipy 0.13.1 release In-Reply-To: References: Message-ID: On Mon, Nov 18, 2013 at 7:45 PM, Chris Barker wrote: > On Mon, Nov 18, 2013 at 10:31 AM, Chris Barker wrote: > > What's the plan for OS-X binaries? It would be great to have those up >> sooner than later...Can I help? >> >> > oops sorry there they are! > > Were they just uploaded -- I really did look before posting that! > They were there. The ones that need compiling on OS X 10.5 are still missing though (that's a bit tedious as it's a remote machine). Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Mon Nov 18 17:35:13 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Mon, 18 Nov 2013 22:35:13 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: On 14 November 2013 17:19, David Cournapeau wrote: > On Thu, Nov 14, 2013 at 4:45 PM, Charles Waldman wrote: >> >> Can you post the raw data? It seems like there are just a couple of "bad" >> sizes, I'd like to know more precisely what these are. > > Indeed. Several of the sizes generated by logspace(2, 7, 25) are prime > numbers, where numpy.fft is actually O(N^2) and not the usual O(NLogN). > > There is unfortunately no freely (aka BSD-like licensed) available fft > implementation that works for prime (or 'close to prime') numbers, and > implementing one that is precise enough is not trivial (look at Bernstein > transform for more details). 
I was interested by this comment as I wasn't aware of this aspect of numpy's fft function (or of fft algorithms in general). Having finally found a spare minute I've implemented the Bluestein algorithm based only on the Wikipedia page (feel free to use under any license including BSD). Is there anything wrong with the following? It's much faster for e.g. the prime size 215443 (~1s on this laptop; I didn't wait long enough to find out what numpy.fft.fft would take). from numpy import array, exp, pi, arange, concatenate from numpy.fft import fft, ifft def ceilpow2(N): ''' >>> ceilpow2(15) 16 >>> ceilpow2(16) 16 ''' p = 1 while p < N: p *= 2 return p def fftbs(x): ''' >>> data = [1, 2, 5, 2, 5, 2, 3] >>> from numpy.fft import fft >>> from numpy import allclose >>> from numpy.random import randn >>> for n in range(1, 1000): ... data = randn(n) ... assert allclose(fft(data), fftbs(data)) ''' N = len(x) x = array(x) n = arange(N) b = exp((1j*pi*n**2)/N) a = x * b.conjugate() M = ceilpow2(N) * 2 A = concatenate((a, [0] * (M - N))) B = concatenate((b, [0] * (M - 2*N + 1), b[:0:-1])) C = ifft(fft(A) * fft(B)) c = C[:N] return b.conjugate() * c if __name__ == "__main__": import doctest doctest.testmod() Oscar From charlesr.harris at gmail.com Mon Nov 18 22:26:00 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 18 Nov 2013 20:26:00 -0700 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: On Mon, Nov 18, 2013 at 3:35 PM, Oscar Benjamin wrote: > On 14 November 2013 17:19, David Cournapeau wrote: > > On Thu, Nov 14, 2013 at 4:45 PM, Charles Waldman > wrote: > >> > >> Can you post the raw data? It seems like there are just a couple of > "bad" > >> sizes, I'd like to know more precisely what these are. > > > > Indeed. Several of the sizes generated by logspace(2, 7, 25) are prime > > numbers, where numpy.fft is actually O(N^2) and not the usual O(NLogN). 
> > > > There is unfortunately no freely (aka BSD-like licensed) available fft > > implementation that works for prime (or 'close to prime') numbers, and > > implementing one that is precise enough is not trivial (look at Bernstein > > transform for more details). > > I was interested by this comment as I wasn't aware of this aspect of > numpy's fft function (or of fft algorithms in general). Having finally > found a spare minute I've implemented the Bluestein algorithm based > only on the Wikipedia page (feel free to use under any license > including BSD). > > Is there anything wrong with the following? It's much faster for e.g. > the prime size 215443 (~1s on this laptop; I didn't wait long enough > to find out what numpy.fft.fft would take). > > from numpy import array, exp, pi, arange, concatenate > from numpy.fft import fft, ifft > > def ceilpow2(N): > ''' > >>> ceilpow2(15) > 16 > >>> ceilpow2(16) > 16 > ''' > p = 1 > while p < N: > p *= 2 > return p > > def fftbs(x): > ''' > >>> data = [1, 2, 5, 2, 5, 2, 3] > >>> from numpy.fft import fft > >>> from numpy import allclose > >>> from numpy.random import randn > >>> for n in range(1, 1000): > ... data = randn(n) > ... assert allclose(fft(data), fftbs(data)) > ''' > N = len(x) > x = array(x) > > n = arange(N) > b = exp((1j*pi*n**2)/N) > a = x * b.conjugate() > > M = ceilpow2(N) * 2 > A = concatenate((a, [0] * (M - N))) > B = concatenate((b, [0] * (M - 2*N + 1), b[:0:-1])) > C = ifft(fft(A) * fft(B)) > c = C[:N] > return b.conjugate() * c > > if __name__ == "__main__": > import doctest > doctest.testmod() > > > Where this starts to get tricky is when N is a product of primes not natively supported in fftpack. The fftpack supports primes 2, 3, 5, 7(?) at the moment, one would need to do initial transforms to break it down into a number of smaller transforms whose size would have prime factors supported by fftpack. Then use fftpack on each of those. Or the other way round, depending on taste. 
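The mixed-radix decomposition described above can be sketched in pure numpy. This is only an illustration of the Cooley-Tukey factorisation (not how fftpack actually organises its loops), and the function name and sizes are purely for demonstration: an FFT of length N = N1*N2 is built from length-N1 sub-FFTs, twiddle factors, and length-N2 sub-FFTs.

```python
import numpy as np

def fft_ct(x, N1, N2):
    """Length-(N1*N2) DFT via the Cooley-Tukey factorisation:
    inner FFTs of length N1 over index n1, twiddle factors, then
    outer FFTs of length N2 over index n2 (sub-transforms done
    here with numpy.fft for brevity)."""
    N = N1 * N2
    A = np.asarray(x, dtype=complex).reshape(N1, N2)  # A[n1, n2] = x[N2*n1 + n2]
    A = np.fft.fft(A, axis=0)                         # DFT over n1 -> index k1
    k1 = np.arange(N1)[:, None]
    n2 = np.arange(N2)[None, :]
    A = A * np.exp(-2j * np.pi * k1 * n2 / N)         # twiddle factors
    A = np.fft.fft(A, axis=1)                         # DFT over n2 -> index k2
    return A.T.ravel()                                # X[k1 + N1*k2] = A[k1, k2]

x = np.cos(np.arange(15.0))
print(np.allclose(fft_ct(x, 3, 5), np.fft.fft(x)))   # True
```

Recursing on this decomposition until only supported prime lengths remain is essentially what a mixed-radix FFT does; Bluestein's algorithm covers the leftover prime factors.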
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charles at crunch.io Tue Nov 19 11:00:49 2013 From: charles at crunch.io (Charles Waldman) Date: Tue, 19 Nov 2013 10:00:49 -0600 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: How about FFTW? I think there are wrappers out there for that ... On Mon, Nov 18, 2013 at 9:26 PM, Charles R Harris wrote: > > > > On Mon, Nov 18, 2013 at 3:35 PM, Oscar Benjamin < > oscar.j.benjamin at gmail.com> wrote: > >> On 14 November 2013 17:19, David Cournapeau wrote: >> > On Thu, Nov 14, 2013 at 4:45 PM, Charles Waldman >> wrote: >> >> >> >> Can you post the raw data? It seems like there are just a couple of >> "bad" >> >> sizes, I'd like to know more precisely what these are. >> > >> > Indeed. Several of the sizes generated by logspace(2, 7, 25) are prime >> > numbers, where numpy.fft is actually O(N^2) and not the usual O(NLogN). >> > >> > There is unfortunately no freely (aka BSD-like licensed) available fft >> > implementation that works for prime (or 'close to prime') numbers, and >> > implementing one that is precise enough is not trivial (look at >> Bernstein >> > transform for more details). >> >> I was interested by this comment as I wasn't aware of this aspect of >> numpy's fft function (or of fft algorithms in general). Having finally >> found a spare minute I've implemented the Bluestein algorithm based >> only on the Wikipedia page (feel free to use under any license >> including BSD). >> >> Is there anything wrong with the following? It's much faster for e.g. >> the prime size 215443 (~1s on this laptop; I didn't wait long enough >> to find out what numpy.fft.fft would take). 
>> >> from numpy import array, exp, pi, arange, concatenate >> from numpy.fft import fft, ifft >> >> def ceilpow2(N): >> ''' >> >>> ceilpow2(15) >> 16 >> >>> ceilpow2(16) >> 16 >> ''' >> p = 1 >> while p < N: >> p *= 2 >> return p >> >> def fftbs(x): >> ''' >> >>> data = [1, 2, 5, 2, 5, 2, 3] >> >>> from numpy.fft import fft >> >>> from numpy import allclose >> >>> from numpy.random import randn >> >>> for n in range(1, 1000): >> ... data = randn(n) >> ... assert allclose(fft(data), fftbs(data)) >> ''' >> N = len(x) >> x = array(x) >> >> n = arange(N) >> b = exp((1j*pi*n**2)/N) >> a = x * b.conjugate() >> >> M = ceilpow2(N) * 2 >> A = concatenate((a, [0] * (M - N))) >> B = concatenate((b, [0] * (M - 2*N + 1), b[:0:-1])) >> C = ifft(fft(A) * fft(B)) >> c = C[:N] >> return b.conjugate() * c >> >> if __name__ == "__main__": >> import doctest >> doctest.testmod() >> >> >> > Where this starts to get tricky is when N is a product of primes not > natively supported in fftpack. The fftpack supports primes 2, 3, 5, 7(?) at > the moment, one would need to do initial transforms to break it down into a > number of smaller transforms whose size would have prime factors supported > by fftpack. Then use fftpack on each of those. Or the other way round, > depending on taste. > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From heng at cantab.net Tue Nov 19 11:03:13 2013 From: heng at cantab.net (Henry Gomersall) Date: Tue, 19 Nov 2013 16:03:13 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> Message-ID: <528B8BC1.2010806@cantab.net> On 19/11/13 16:00, Charles Waldman wrote: > How about FFTW? I think there are wrappers out there for that ... Yes there are! 
(complete with the numpy.fft API) https://github.com/hgomersall/pyFFTW However, FFTW is dual licensed GPL/commercial and so the wrappers are also GPL by necessity. Cheers, Henry From stefan at sun.ac.za Tue Nov 19 11:08:19 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 19 Nov 2013 18:08:19 +0200 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: <528B8BC1.2010806@cantab.net> References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <528B8BC1.2010806@cantab.net> Message-ID: On Tue, Nov 19, 2013 at 6:03 PM, Henry Gomersall wrote: > However, FFTW is dual licensed GPL/commercial and so the wrappers are > also GPL by necessity. I'm not sure if that is true, strictly speaking--you may license your wrapper code under any license you wish. It's just that it becomes confusing when the combined work then has to be released under GPL. Stéfan From heng at cantab.net Tue Nov 19 12:17:21 2013 From: heng at cantab.net (Henry Gomersall) Date: Tue, 19 Nov 2013 17:17:21 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <528B8BC1.2010806@cantab.net> Message-ID: <528B9D21.8000908@cantab.net> On 19/11/13 16:08, Stéfan van der Walt wrote: > On Tue, Nov 19, 2013 at 6:03 PM, Henry Gomersall wrote: >> >However, FFTW is dual licensed GPL/commercial and so the wrappers are >> >also GPL by necessity. > I'm not sure if that is true, strictly speaking--you may license your > wrapper code under any license you wish. It's just that it becomes > confusing when the combined work then has to be released under GPL. This is on shaky GPL ground. I'm inclined to agree with you, but given any usage necessarily has to link against the FFTW libs, any resultant redistribution is going to be GPL by necessity. I.e. pyFFTW is useless without FFTW so it's just simpler to make it GPL for the time being.
Cheers, Henry From njs at pobox.com Tue Nov 19 12:52:34 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 19 Nov 2013 09:52:34 -0800 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: <528B9D21.8000908@cantab.net> References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <528B8BC1.2010806@cantab.net> <528B9D21.8000908@cantab.net> Message-ID: On Tue, Nov 19, 2013 at 9:17 AM, Henry Gomersall wrote: > On 19/11/13 16:08, Stéfan van der Walt wrote: >> On Tue, Nov 19, 2013 at 6:03 PM, Henry Gomersall wrote: >>> >However, FFTW is dual licensed GPL/commercial and so the wrappers are >>> >also GPL by necessity. >> I'm not sure if that is true, strictly speaking--you may license your >> wrapper code under any license you wish. It's just that it becomes >> confusing when the combined work then has to be released under GPL. > > This is on shaky GPL ground. I'm inclined to agree with you, but given > any usage necessarily has to link against the FFTW libs, any resultant > redistribution is going to be GPL by necessity. I.e. pyFFTW is useless > without FFTW so it's just simpler to make it GPL for the time being. The case where it makes a difference is when someone has purchased a commercial license to FFTW and then wants to use it from their proprietary Python application. -n From heng at cantab.net Wed Nov 20 06:06:32 2013 From: heng at cantab.net (Henry Gomersall) Date: Wed, 20 Nov 2013 11:06:32 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <528B8BC1.2010806@cantab.net> <528B9D21.8000908@cantab.net> Message-ID: <528C97B8.9030606@cantab.net> On 19/11/13 17:52, Nathaniel Smith wrote: > On Tue, Nov 19, 2013 at 9:17 AM, Henry Gomersall wrote: >> >On 19/11/13 16:08, Stéfan van der Walt wrote: >>> >>On Tue, Nov 19, 2013 at 6:03 PM, Henry Gomersall wrote: >>>>> >>> >However, FFTW is dual licensed GPL/commercial and so the wrappers are >>>>> >>> >also GPL by necessity.
>>> >>I'm not sure if that is true, strictly speaking--you may license your >>> >>wrapper code under any license you wish. It's just that it becomes >>> >>confusing when the combined work then has to be released under GPL. >> > >> >This is on shaky GPL ground. I'm inclined to agree with you, but given >> >any usage necessarily has to link against the FFTW libs, any resultant >> >redistribution is going to be GPL by necessity. I.e. pyFFTW is useless >> >without FFTW so it's just simpler to make it GPL for the time being. > The case where it makes a difference is when someone has purchased a > commercial license to FFTW and then wants to use it from their > proprietary Python application. Yes, this didn't occur to me as an option, mostly because I'm keen for a commercial FFTW license myself and it would gall me somewhat if I couldn't gain the same benefit from my own code as others. So, given that, if anyone has an FFTW license and is keen for decent Python wrappers, I'd be more than happy to discuss a sub-license to FFTW in exchange for a more liberal (say MIT) license for pyFFTW. Cheers, Henry From davidmenhur at gmail.com Wed Nov 20 09:09:52 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Wed, 20 Nov 2013 15:09:52 +0100 Subject: [Numpy-discussion] RuntimeWarning raised in the wrong line Message-ID: I have the following code, operating on some arrays. pen = 1 - (real_sum[int(i1 + t)]) / (absolute[int(i1 + t)]) if np.isnan(pen): pen = 0.0 I know that, sometimes, real_sum and absolute are 0 at the designated point, so I should get a RuntimeWarning. But, the warning is puzzling: RuntimeWarning: invalid value encountered in double_scalars pen = 0.0 Is the warning actually being raised at that point? Or can I not really trust the line reported by the warning? I have tried to replicate it in interactive terminal, but without luck, I always get it in the previous line. I am using Python 2.7 and Numpy 1.8.0. /David. 
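One way to pin down where a warning like David's really comes from is to promote RuntimeWarning to an exception, so the traceback points at the exact statement that triggered it rather than the line the warning message happens to quote. A minimal sketch, with a hypothetical pen_from_sums function and dummy scalar values standing in for his real_sum/absolute lookups:

```python
import warnings
import numpy as np

def pen_from_sums(real_sum, absolute):
    # 0.0/0.0 yields nan and emits an "invalid value encountered" warning
    pen = 1 - real_sum / absolute
    if np.isnan(pen):
        pen = 0.0
    return pen

# Promote the warning to an exception: the resulting traceback points
# at the division itself, regardless of which source line the warning
# machinery reports.
with warnings.catch_warnings():
    warnings.simplefilter("error", RuntimeWarning)
    try:
        pen_from_sums(np.float64(0.0), np.float64(0.0))
    except RuntimeWarning as w:
        print("raised by the division:", w)
```

np.seterr(invalid='raise') (or the np.errstate context manager) achieves much the same thing via FloatingPointError, without touching the warnings filter.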
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.root at ou.edu Wed Nov 20 09:34:52 2013 From: ben.root at ou.edu (Benjamin Root) Date: Wed, 20 Nov 2013 09:34:52 -0500 Subject: [Numpy-discussion] RuntimeWarning raised in the wrong line In-Reply-To: References: Message-ID: On Wed, Nov 20, 2013 at 9:09 AM, Daπid wrote: > I have the following code, operating on some arrays. > > pen = 1 - (real_sum[int(i1 + t)]) / (absolute[int(i1 + t)]) > if np.isnan(pen): > pen = 0.0 > > I know that, sometimes, real_sum and absolute are 0 at the designated > point, so I should get a RuntimeWarning. But, the warning is puzzling: > > RuntimeWarning: invalid value encountered in double_scalars > pen = 0.0 > > Is the warning actually being raised at that point? Or can I not really > trust the line reported by the warning? I have tried to replicate it in > interactive terminal, but without luck, I always get it in the previous > line. > > > I am using Python 2.7 and Numpy 1.8.0. > > > /David. > > This can sometimes happen in very weird edge cases of development workflows. Essentially, if one edits the source code while the module is still loaded in the python session, the line numbers recorded in the *.pyc files diverge from the source code. The warning that is emitted checks the source code at that time to print out the relevant line. Maybe that might explain what is happening here? Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From bryanv at continuum.io Wed Nov 20 11:20:06 2013 From: bryanv at continuum.io (Bryan Van de Ven) Date: Wed, 20 Nov 2013 10:20:06 -0600 Subject: [Numpy-discussion] ANN: Bokeh 0.3 released Message-ID: I am pleased to announce the release of Bokeh 0.3! Bokeh is a Python interactive visualization library for large datasets that natively uses the latest web technologies.
Its goal is to provide elegant, concise construction of novel graphics in the style of Protovis/D3, while delivering high-performance interactivity over large data to thin clients. If you are using Anaconda, you can install through conda: conda install bokeh Alternatively you can install from PyPI using pip: pip install bokeh This release was largely an internal refactor to merge the BokehJS and Bokeh projects into one repository, and to greatly improve and simplify the BokehJS CoffeeScript build process. Additionally, this release also includes a number of bug and stability fixes, and some enhancements. See the CHANGELOG for full details. Many new examples were added including a reproduction of Burtin's Antibiotics, and examples of animation using the Bokeh plot server inside IPython notebooks. ColorBrewer palettes were also added on the Python side. Finally, the user guide has been fleshed out and will continually be updated as features and API changes are made. Check out the full documentation and interactive gallery at http://bokeh.pydata.org The release of Bokeh 0.4 is planned for early January.
Some notable features to be included are: * Integrate Abstract Rendering into bokeh server * Better grid-based layout system; use Cassowary.js for layout solver * Tool Improvements (pan always on, box zoom always on, passive resize with hot corners) * Basic MPL compatibility interface (enough to make ggplot.py work) * Expose image plot in Python interface Issues or enhancement requests can be logged on the Bokeh Github page: https://github.com/continuumio/bokeh Questions can be directed to the Bokeh mailing list: bokeh at continuum.io Regards, Bryan Van de Ven From chris.barker at noaa.gov Wed Nov 20 14:56:08 2013 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 20 Nov 2013 11:56:08 -0800 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: <528C97B8.9030606@cantab.net> References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <528B8BC1.2010806@cantab.net> <528B9D21.8000908@cantab.net> <528C97B8.9030606@cantab.net> Message-ID: On Wed, Nov 20, 2013 at 3:06 AM, Henry Gomersall wrote: > Yes, this didn't occur to me as an option, mostly because I'm keen for a > commercial FFTW license myself and it would gall me somewhat if I > couldn't gain the same benefit from my own code as others. > > So, given that, if anyone has an FFTW license and is keen for decent > Python wrappers, I'd be more than happy to discuss a sub-license to FFTW > in exchange for a more liberal (say MIT) license for pyFFTW. > OT, and IANAL, but I think what you'd want to do is dual-licence pyFFTW. Heck, you could even charge for a commercially-licenced version, as it would only be useful to folks that were already paying for a commercially-licenced FFTW -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From heng at cantab.net Wed Nov 20 15:03:12 2013 From: heng at cantab.net (Henry Gomersall) Date: Wed, 20 Nov 2013 20:03:12 +0000 Subject: [Numpy-discussion] strange runtimes of numpy fft In-Reply-To: References: <1384445906.12947.9.camel@x200.kel.wh.lokal> <528B8BC1.2010806@cantab.net> <528B9D21.8000908@cantab.net> <528C97B8.9030606@cantab.net> Message-ID: <528D1580.20205@cantab.net> On 20/11/13 19:56, Chris Barker wrote: > On Wed, Nov 20, 2013 at 3:06 AM, Henry Gomersall > wrote: > > Yes, this didn't occur to me as an option, mostly because I'm keen for a > commercial FFTW license myself and it would gall me somewhat if I > couldn't gain the same benefit from my own code as others. > > So, given that, if anyone has an FFTW license and is keen for decent > Python wrappers, I'd be more than happy to discuss a sub-license to FFTW > in exchange for a more liberal (say MIT) license for pyFFTW. > > > > OT, and IANAL, but I think what you'd want to do is dual-licence > pyFFTW. Heck, you could even charge for a commercially-licenced version, > as it would only be useful to folks that were already paying for > a commercially-licenced FFTW Apologies for the continued OT... I _have_ considered a commercial license, but so far, no knowledge of who might be interested. Again, not being a lawyer, I'm not even sure if this is clear cut and I can do it without a license myself (I think the GPL gets very confusing when it comes to runtime linking with some implementation of a published API, and interpreted languages make it even more so). So, I'll put this out there, if anyone has a need for python wrappers for a commercial FFTW, please get in touch. All options considered. 
:) Cheers, Henry From pearu.peterson at gmail.com Thu Nov 21 02:55:52 2013 From: pearu.peterson at gmail.com (Pearu Peterson) Date: Thu, 21 Nov 2013 09:55:52 +0200 Subject: [Numpy-discussion] f2py and Fortran STOP statement issue Message-ID: Hi, The issue with wrapping Fortran codes that contain STOP statements has been raised several times in the past with no good working solution proposed. Recently the issue was raised again in f2py issues. Since the user was willing to test out a few ideas with positive results, I decided to describe the outcome (a working solution) in the following wikipage: https://code.google.com/p/f2py/wiki/FAQ2ed Just FYI, Pearu -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Fri Nov 22 07:50:26 2013 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 22 Nov 2013 13:50:26 +0100 Subject: [Numpy-discussion] ANN: SfePy 2013.4 Message-ID: <528F5312.30700@ntc.zcu.cz> I am pleased to announce release 2013.4 of SfePy. Description ----------- SfePy (simple finite elements in Python) is software for solving systems of coupled partial differential equations by the finite element method. The code is based on the NumPy and SciPy packages. It is distributed under the new BSD license. Home page: http://sfepy.org Mailing list: http://groups.google.com/group/sfepy-devel Git (source) repository, issue tracker, wiki: http://github.com/sfepy Highlights of this release -------------------------- - simplified quadrature definition - equation sequence solver - initial support for 'plate' integration/connectivity type - script for visualization of quadrature points and weights For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical).
Best regards, Robert Cimrman and Contributors (*) (*) Contributors to this release (alphabetical order): Vladimír Lukeš, Jaroslav Vondřejc From matthew.brett at gmail.com Fri Nov 22 16:23:34 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 22 Nov 2013 16:23:34 -0500 Subject: [Numpy-discussion] (no subject) Message-ID: Hi, I'm sorry if I missed something obvious - but is there a vectorized way to look for None in an array? In [3]: a = np.array([1, 1]) In [4]: a == object() Out[4]: array([False, False], dtype=bool) In [6]: a == None Out[6]: False (same for object arrays), Thanks a lot, Matthew From nouiz at nouiz.org Fri Nov 22 16:26:52 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Fri, 22 Nov 2013 16:26:52 -0500 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: I didn't forget this, but I got sidetracked. Here is the Theano code I would like to try to use to replace os.system: https://github.com/Theano/Theano/blob/master/theano/misc/windows.py But I won't be able to try this before next week. Fred On Fri, Nov 15, 2013 at 5:49 PM, David Cournapeau wrote: > > > > On Fri, Nov 15, 2013 at 7:41 PM, Robert Kern wrote: >> >> On Fri, Nov 15, 2013 at 7:28 PM, David Cournapeau >> wrote: >> > >> > On Fri, Nov 15, 2013 at 6:21 PM, Charles R Harris >> > wrote: >> >> >> Sure, give it a shot. Looks like subprocess.Popen was intended to >> >> replace os.system in any case. >> > >> > Except that output is not 'real time' with straight Popen, and doing so >> > reliably on every platform (cough - windows - cough) is not completely >> > trivial. You also have to handle buffered output, etc... That code is very >> > fragile, so this would be quite a lot of testing to change, and I am not >> > sure it worths it. >> >> It doesn't have to be "real time". Just use .communicate() and print out >> the stdout and stderr to their appropriate streams after the subprocess >> finishes.
> > > Indeed, it does not have to be, but that's useful for debugging compilation > issues (not so much for numpy itself, but for some packages which have files > that takes a very long time to build, like scipy.sparsetools or bottleneck). > > That's a minor point compared to the potential issues when building on > windows, though. > > David > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From warren.weckesser at gmail.com Fri Nov 22 16:35:23 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Fri, 22 Nov 2013 16:35:23 -0500 Subject: [Numpy-discussion] (no subject) In-Reply-To: References: Message-ID: On Fri, Nov 22, 2013 at 4:23 PM, Matthew Brett wrote: > Hi, > > I'm sorry if I missed something obvious - but is there a vectorized > way to look for None in an array? > > In [3]: a = np.array([1, 1]) > > In [4]: a == object() > Out[4]: array([False, False], dtype=bool) > > In [6]: a == None > Out[6]: False > > (same for object arrays), > Looks like using a "scalar array" that holds the value None will work: In [8]: a Out[8]: array([[1, 2], 'foo', None], dtype=object) In [9]: a == np.array(None) Out[9]: array([False, False, True], dtype=bool) Warren > Thanks a lot, > > Matthew > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Nov 22 16:40:23 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 22 Nov 2013 21:40:23 +0000 Subject: [Numpy-discussion] (no subject) In-Reply-To: References: Message-ID: On Fri, Nov 22, 2013 at 9:23 PM, Matthew Brett wrote: > > Hi, > > I'm sorry if I missed something obvious - but is there a vectorized > way to look for None in an array? 
> > In [3]: a = np.array([1, 1]) > > In [4]: a == object() > Out[4]: array([False, False], dtype=bool) > > In [6]: a == None > Out[6]: False [~] |1> x = np.array([1, None, 2], dtype=object) [~] |2> np.equal(x, None) array([False, True, False], dtype=bool) -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaspar.emanuel at gmail.com Fri Nov 22 17:30:56 2013 From: kaspar.emanuel at gmail.com (Kaspar Emanuel) Date: Fri, 22 Nov 2013 22:30:56 +0000 Subject: [Numpy-discussion] numpy.i and INPLACE_ARRAY1[ANY] Message-ID: Hey, I am trying to improve the Lilv Python bindings to include numpy.i to allow for creation and verification of audio test buffers using NumPy. I am just trying to get *something* working at the moment so I am trying to wrap a test function. static inline void lilv_test(float* data_location){} and I have in lilv.i: %apply (float* INPLACE_ARRAY1) {(float* data_location)}; This doesn't produce any warnings or anything but when I try and use it from Python I get: TypeError: in method 'lilv_test', argument 1 of type 'float *' What does work is if I have: lilv_test(float* data_location, int n){} and %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int n)}; but this doesn't fit very well with the functions I eventually want to wrap, as they don't have a dimension argument. Is it not possible to use INPLACE_ARRAY1 without a dimension? Thanks for any help, Kaspar -------------- next part -------------- An HTML attachment was scrubbed... URL: From wfspotz at sandia.gov Fri Nov 22 17:40:20 2013 From: wfspotz at sandia.gov (Bill Spotz) Date: Fri, 22 Nov 2013 15:40:20 -0700 Subject: [Numpy-discussion] [EXTERNAL] numpy.i and INPLACE_ARRAY1[ANY] In-Reply-To: References: Message-ID: <22D2D0DB-CBF9-46D3-8DAB-6EBFEAC2BE5A@sandia.gov> Kaspar, Yes, in order for numpy.i typemaps to work, you need to provide dimensions. How is lilv_test(float*) supposed to know how large the float array is?
Is it actually a method where the class knows the size? In cases where dimensions are not passed through the argument list, you have two options: 1. Write a proxy function that does have dimension arguments and calls the original function, and then wrap that instead of the original function. 2. Use the functions and macros in numpy.i to write new typemaps that work for your case. -Bill On Nov 22, 2013, at 3:30 PM, Kaspar Emanuel wrote: > Hey, > I am trying to improve the Lilv Python bindings to include numpy.i to allow for creation and verification of audio test buffers using NumPy. > > I am just trying to get something working at the moment so I am trying to wrap a test function. > > static inline void > lilv_test(float* data_location){} > > and I have in lilv.i: > > %apply (float* INPLACE_ARRAY1) {(float* data_location)}; > This doesn't produce any warnings or anything but when I try and use it from Python I get: > > TypeError: in method 'lilv_test', argument 1 of type 'float *' > What does work is if I have: > > lilv_test(float* data_location, int n){} > and > > %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int n)}; > but this doesn't fit very well with the functions I eventually want to wrap, as they don't have a dimension argument. > > Is it not possible to use INPLACE_ARRAY1 without a dimension? > > Thanks for any help, > > Kaspar > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O.
Box 5800 Fax: (505)284-0154 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** From kaspar.emanuel at gmail.com Fri Nov 22 18:05:18 2013 From: kaspar.emanuel at gmail.com (Kaspar Emanuel) Date: Fri, 22 Nov 2013 23:05:18 +0000 Subject: [Numpy-discussion] [EXTERNAL] numpy.i and INPLACE_ARRAY1[ANY] In-Reply-To: <22D2D0DB-CBF9-46D3-8DAB-6EBFEAC2BE5A@sandia.gov> References: <22D2D0DB-CBF9-46D3-8DAB-6EBFEAC2BE5A@sandia.gov> Message-ID: Hi Bill, thanks for your response. So the function I am actually trying to wrap is: static inline void lilv_instance_connect_port(LilvInstance* instance, uint32_t port_index, void* data_location) It just passes on the pointer to the data_location (the audio buffer) and then you call lilv_instance_run(int nframes) where nframes could be the dimension of your buffer, or possibly less if you really want. So following your recommendations I tried to make a wrapper function: lilv_instance_pyconnect(LilvInstance* instance, uint32_t port_index, float* data_location, int unused) and then a typemap following what is in numpy.i : %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, fragment="NumPy_Macros") (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) { $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), DATA_TYPECODE); } %typemap(in, fragment="NumPy_Fragments") (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) (PyArrayObject* array=NULL, int i=1) { array = obj_to_array_no_conversion($input, DATA_TYPECODE); if (!array || !require_dimensions(array,1) || !require_contiguous(array) || !require_native(array)) SWIG_fail; $1 = (DATA_TYPE*) array_data(array); $2 = 1; for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); } which I tried to apply with: %apply (LilvInstance* instance, uint32_t port_index, float* INPLACE_ARRAY1, int DIM1) {(LilvInstance* instance, uint32_t port_index, float* data_location, int unused)} But it doesn?t seem to do anything an I just get a 
TypeError. You might have noticed I am a little bit out of my depth? Ta, Kaspar On 22 November 2013 22:40, Bill Spotz wrote: > Kaspar, > > Yes, in order for numpy.i typemaps to work, you need to provide > dimensions. How is lilv_test(float*) supposed to know how large the float > array is? Is it actually a method where the class knows the size? In > cases where dimensions are not passed through the argument list, you have > two options: > > 1. Write a proxy function that does have dimension arguments and calls > the original function, and then wrap that instead of the original function. > > 2. Use the functions and macros in numpy.i to write new typemaps that > work for your case. > > -Bill > > On Nov 22, 2013, at 3:30 PM, Kaspar Emanuel wrote: > > > Hey, > > I am trying to improve the Lilv Python bindings to include numpy.i to > allow for creation and verification of audio test buffers using NumPy. > > > > I am just trying to get something working at the moment so I am tring to > wrap a test function. > > > > static inline void > > lilv_test(float* data_location){} > > > > and I have in lilv.i: > > > > %apply (float* INPLACE_ARRAY1) {(float* data_location)}; > > This doesn?t produce any warnings or anything but when I try and use it > from Python I get: > > > > TypeError: in method 'lilv_test', argument 1 of type 'float *' > > What does work is if I have: > > > > lilv_test(float* data_location, int n){} > > and > > > > %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int n)}; > > but this doesn?t fit very well with the functions I eventually want to > wrap, as they don?t have a dimension argument. > > > > Is it not possible to use INPLACE_ARRAY1 without a dimension? 
> > > > Thanks for any help, > > > > Kaspar > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wfspotz at sandia.gov Fri Nov 22 18:19:21 2013 From: wfspotz at sandia.gov (Bill Spotz) Date: Fri, 22 Nov 2013 16:19:21 -0700 Subject: [Numpy-discussion] [EXTERNAL] numpy.i and INPLACE_ARRAY1[ANY] In-Reply-To: References: <22D2D0DB-CBF9-46D3-8DAB-6EBFEAC2BE5A@sandia.gov> Message-ID: I think you are getting close. Application of the typemap simply requires %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int unused)} rather than the entire argument list. Be sure you understand the use case. The (data_location, unused) pair is going to be provided by a numpy array and its length. You might want to do a check to make sure variable "unused" has the correct value before you pass the numpy array data on to your function. Typemaps are generally non-intuitive. But once you understand all the rules, they start to make sense. -Bill On Nov 22, 2013, at 4:05 PM, Kaspar Emanuel wrote: > Hi Bill, > > thanks for your response. 
So the function I am actually trying to wrap is: > > static inline void > lilv_instance_connect_port(LilvInstance* instance, > uint32_t port_index, > void* data_location) > > It just passes on the pointer to the data_location (the audio buffer) and then you call lilv_instance_run(int nframes) where nframes could be the dimension of your buffer, or possibly less if you really want. > > So following your recommendations I tried to make a wrapper function: > > lilv_instance_pyconnect(LilvInstance* instance, > uint32_t port_index, > float* data_location, int unused) > > and then a typemap following what is in numpy.i : > > > %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, > fragment="NumPy_Macros") > (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) > { > $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), > DATA_TYPECODE); > } > %typemap(in, > fragment="NumPy_Fragments") > (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) > (PyArrayObject* array=NULL, int i=1) > { > array = obj_to_array_no_conversion($input, DATA_TYPECODE); > if (!array || !require_dimensions(array,1) || !require_contiguous(array) > || !require_native(array)) SWIG_fail; > $1 = (DATA_TYPE*) array_data(array); > $2 = 1; > for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); > } > > which I tried to apply with: > > > %apply (LilvInstance* instance, uint32_t port_index, float* INPLACE_ARRAY1, int DIM1) {(LilvInstance* instance, uint32_t port_index, float* data_location, int unused)} > > But it doesn?t seem to do anything an I just get a TypeError. > > You might have noticed I am a little bit out of my depth? > > Ta, > > Kaspar > > > > On 22 November 2013 22:40, Bill Spotz wrote: > Kaspar, > > Yes, in order for numpy.i typemaps to work, you need to provide dimensions. How is lilv_test(float*) supposed to know how large the float array is? Is it actually a method where the class knows the size? 
In cases where dimensions are not passed through the argument list, you have two options: > > 1. Write a proxy function that does have dimension arguments and calls the original function, and then wrap that instead of the original function. > > 2. Use the functions and macros in numpy.i to write new typemaps that work for your case. > > -Bill > > On Nov 22, 2013, at 3:30 PM, Kaspar Emanuel wrote: > > > Hey, > > I am trying to improve the Lilv Python bindings to include numpy.i to allow for creation and verification of audio test buffers using NumPy. > > > > I am just trying to get something working at the moment so I am tring to wrap a test function. > > > > static inline void > > lilv_test(float* data_location){} > > > > and I have in lilv.i: > > > > %apply (float* INPLACE_ARRAY1) {(float* data_location)}; > > This doesn?t produce any warnings or anything but when I try and use it from Python I get: > > > > TypeError: in method 'lilv_test', argument 1 of type 'float *' > > What does work is if I have: > > > > lilv_test(float* data_location, int n){} > > and > > > > %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int n)}; > > but this doesn?t fit very well with the functions I eventually want to wrap, as they don?t have a dimension argument. > > > > Is it not possible to use INPLACE_ARRAY1 without a dimension? > > > > Thanks for any help, > > > > Kaspar > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. 
Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. Box 5800 Fax: (505)284-0154 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** From kaspar.emanuel at gmail.com Fri Nov 22 18:32:47 2013 From: kaspar.emanuel at gmail.com (Kaspar Emanuel) Date: Fri, 22 Nov 2013 23:32:47 +0000 Subject: [Numpy-discussion] [EXTERNAL] numpy.i and INPLACE_ARRAY1[ANY] In-Reply-To: References: <22D2D0DB-CBF9-46D3-8DAB-6EBFEAC2BE5A@sandia.gov> Message-ID: Cool! That seems to have worked. Many thanks. So I didn't need my own typemap for this at all as it will already ignore the rest of the arguments? What I still don't understand is that there seems to be a typemap for INPLACE_ARRAY1[ANY] without any DIM1. How come I can't apply that? > Be sure you understand the use case. The (data_location, unused) pair is going to be provided by a numpy array and its length. You might want to do a check to make sure variable "unused" has the correct value before you pass the numpy array data on to your function. The only point where the check could be made would be in the run function which should not be run with a value longer than the array length. On 22 November 2013 23:19, Bill Spotz wrote: > I think you are getting close. Application of the typemap simply requires > > %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int unused)} > > rather than the entire argument list. > > Be sure you understand the use case. The (data_location, unused) pair is going to be provided by a numpy array and its length. 
You might want to do a check to make sure variable "unused" has the correct value before you pass the numpy array data on to your function. > > Typemaps are generally non-intuitive. But once you understand all the rules, they start to make sense. > > -Bill > > On Nov 22, 2013, at 4:05 PM, Kaspar Emanuel wrote: > >> Hi Bill, >> >> thanks for your response. So the function I am actually trying to wrap is: >> >> static inline void >> lilv_instance_connect_port(LilvInstance* instance, >> uint32_t port_index, >> void* data_location) >> >> It just passes on the pointer to the data_location (the audio buffer) and then you call lilv_instance_run(int nframes) where nframes could be the dimension of your buffer, or possibly less if you really want. >> >> So following your recommendations I tried to make a wrapper function: >> >> lilv_instance_pyconnect(LilvInstance* instance, >> uint32_t port_index, >> float* data_location, int unused) >> >> and then a typemap following what is in numpy.i : >> >> >> %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, >> fragment="NumPy_Macros") >> (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) >> { >> $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), >> DATA_TYPECODE); >> } >> %typemap(in, >> fragment="NumPy_Fragments") >> (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) >> (PyArrayObject* array=NULL, int i=1) >> { >> array = obj_to_array_no_conversion($input, DATA_TYPECODE); >> if (!array || !require_dimensions(array,1) || !require_contiguous(array) >> || !require_native(array)) SWIG_fail; >> $1 = (DATA_TYPE*) array_data(array); >> $2 = 1; >> for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); >> } >> >> which I tried to apply with: >> >> >> %apply (LilvInstance* instance, uint32_t port_index, float* INPLACE_ARRAY1, int DIM1) {(LilvInstance* instance, uint32_t port_index, float* data_location, int unused)} >> >> But it doesn?t seem to do anything an I just get 
a TypeError. >> >> You might have noticed I am a little bit out of my depth? >> >> Ta, >> >> Kaspar >> >> >> >> On 22 November 2013 22:40, Bill Spotz wrote: >> Kaspar, >> >> Yes, in order for numpy.i typemaps to work, you need to provide dimensions. How is lilv_test(float*) supposed to know how large the float array is? Is it actually a method where the class knows the size? In cases where dimensions are not passed through the argument list, you have two options: >> >> 1. Write a proxy function that does have dimension arguments and calls the original function, and then wrap that instead of the original function. >> >> 2. Use the functions and macros in numpy.i to write new typemaps that work for your case. >> >> -Bill >> >> On Nov 22, 2013, at 3:30 PM, Kaspar Emanuel wrote: >> >> > Hey, >> > I am trying to improve the Lilv Python bindings to include numpy.i to allow for creation and verification of audio test buffers using NumPy. >> > >> > I am just trying to get something working at the moment so I am tring to wrap a test function. >> > >> > static inline void >> > lilv_test(float* data_location){} >> > >> > and I have in lilv.i: >> > >> > %apply (float* INPLACE_ARRAY1) {(float* data_location)}; >> > This doesn?t produce any warnings or anything but when I try and use it from Python I get: >> > >> > TypeError: in method 'lilv_test', argument 1 of type 'float *' >> > What does work is if I have: >> > >> > lilv_test(float* data_location, int n){} >> > and >> > >> > %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int n)}; >> > but this doesn?t fit very well with the functions I eventually want to wrap, as they don?t have a dimension argument. >> > >> > Is it not possible to use INPLACE_ARRAY1 without a dimension? 
>> > >> > Thanks for any help, >> > >> > Kaspar >> > >> > >> > _______________________________________________ >> > NumPy-Discussion mailing list >> > NumPy-Discussion at scipy.org >> > http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> ** Bill Spotz ** >> ** Sandia National Laboratories Voice: (505)845-0170 ** >> ** P.O. Box 5800 Fax: (505)284-0154 ** >> ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** >> >> >> >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From wfspotz at sandia.gov Fri Nov 22 18:44:29 2013 From: wfspotz at sandia.gov (Bill Spotz) Date: Fri, 22 Nov 2013 16:44:29 -0700 Subject: [Numpy-discussion] [EXTERNAL] numpy.i and INPLACE_ARRAY1[ANY] In-Reply-To: References: <22D2D0DB-CBF9-46D3-8DAB-6EBFEAC2BE5A@sandia.gov> Message-ID: Yes, typemaps are checked against individual arguments or contiguous groups of arguments, not necessarily the entire argument list. I believe the argument would have to be "float data_location[]", signifying a null-terminated array, rather than float*, for the INPLACE_ARRAY1[ANY] to work. On Nov 22, 2013, at 4:32 PM, Kaspar Emanuel wrote: > Cool! That seems to have worked. Many thanks. So I didn't need my own > typemap for this at all as it will already ignore the rest of the > arguments? 
> > What I still don't understand is that there seems to be a typemap for > INPLACE_ARRAY1[ANY] without any DIM1. How come I can't apply that? > >> Be sure you understand the use case. The (data_location, unused) > pair is going to be provided by a numpy array and its length. You > might want to do a check to make sure variable "unused" has the > correct value before you pass the numpy array data on to your > function. > > The only point where the check could be made would be in the run > function which should not be run with a value longer than the array > length. > > > On 22 November 2013 23:19, Bill Spotz wrote: >> I think you are getting close. Application of the typemap simply requires >> >> %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int unused)} >> >> rather than the entire argument list. >> >> Be sure you understand the use case. The (data_location, unused) pair is going to be provided by a numpy array and its length. You might want to do a check to make sure variable "unused" has the correct value before you pass the numpy array data on to your function. >> >> Typemaps are generally non-intuitive. But once you understand all the rules, they start to make sense. >> >> -Bill >> >> On Nov 22, 2013, at 4:05 PM, Kaspar Emanuel wrote: >> >>> Hi Bill, >>> >>> thanks for your response. So the function I am actually trying to wrap is: >>> >>> static inline void >>> lilv_instance_connect_port(LilvInstance* instance, >>> uint32_t port_index, >>> void* data_location) >>> >>> It just passes on the pointer to the data_location (the audio buffer) and then you call lilv_instance_run(int nframes) where nframes could be the dimension of your buffer, or possibly less if you really want. 
>>> >>> So following your recommendations I tried to make a wrapper function: >>> >>> lilv_instance_pyconnect(LilvInstance* instance, >>> uint32_t port_index, >>> float* data_location, int unused) >>> >>> and then a typemap following what is in numpy.i : >>> >>> >>> %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, >>> fragment="NumPy_Macros") >>> (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) >>> { >>> $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), >>> DATA_TYPECODE); >>> } >>> %typemap(in, >>> fragment="NumPy_Fragments") >>> (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) >>> (PyArrayObject* array=NULL, int i=1) >>> { >>> array = obj_to_array_no_conversion($input, DATA_TYPECODE); >>> if (!array || !require_dimensions(array,1) || !require_contiguous(array) >>> || !require_native(array)) SWIG_fail; >>> $1 = (DATA_TYPE*) array_data(array); >>> $2 = 1; >>> for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); >>> } >>> >>> which I tried to apply with: >>> >>> >>> %apply (LilvInstance* instance, uint32_t port_index, float* INPLACE_ARRAY1, int DIM1) {(LilvInstance* instance, uint32_t port_index, float* data_location, int unused)} >>> >>> But it doesn?t seem to do anything an I just get a TypeError. >>> >>> You might have noticed I am a little bit out of my depth? >>> >>> Ta, >>> >>> Kaspar >>> >>> >>> >>> On 22 November 2013 22:40, Bill Spotz wrote: >>> Kaspar, >>> >>> Yes, in order for numpy.i typemaps to work, you need to provide dimensions. How is lilv_test(float*) supposed to know how large the float array is? Is it actually a method where the class knows the size? In cases where dimensions are not passed through the argument list, you have two options: >>> >>> 1. Write a proxy function that does have dimension arguments and calls the original function, and then wrap that instead of the original function. >>> >>> 2. 
Use the functions and macros in numpy.i to write new typemaps that work for your case. >>> >>> -Bill >>> >>> On Nov 22, 2013, at 3:30 PM, Kaspar Emanuel wrote: >>> >>>> Hey, >>>> I am trying to improve the Lilv Python bindings to include numpy.i to allow for creation and verification of audio test buffers using NumPy. >>>> >>>> I am just trying to get something working at the moment so I am tring to wrap a test function. >>>> >>>> static inline void >>>> lilv_test(float* data_location){} >>>> >>>> and I have in lilv.i: >>>> >>>> %apply (float* INPLACE_ARRAY1) {(float* data_location)}; >>>> This doesn?t produce any warnings or anything but when I try and use it from Python I get: >>>> >>>> TypeError: in method 'lilv_test', argument 1 of type 'float *' >>>> What does work is if I have: >>>> >>>> lilv_test(float* data_location, int n){} >>>> and >>>> >>>> %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int n)}; >>>> but this doesn?t fit very well with the functions I eventually want to wrap, as they don?t have a dimension argument. >>>> >>>> Is it not possible to use INPLACE_ARRAY1 without a dimension? >>>> >>>> Thanks for any help, >>>> >>>> Kaspar >>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> ** Bill Spotz ** >>> ** Sandia National Laboratories Voice: (505)845-0170 ** >>> ** P.O. 
Box 5800 Fax: (505)284-0154 ** >>> ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** >>> >>> >>> >>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> ** Bill Spotz ** >> ** Sandia National Laboratories Voice: (505)845-0170 ** >> ** P.O. Box 5800 Fax: (505)284-0154 ** >> ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** >> >> >> >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. Box 5800 Fax: (505)284-0154 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov **

From kaspar.emanuel at gmail.com Fri Nov 22 18:56:59 2013
From: kaspar.emanuel at gmail.com (Kaspar Emanuel)
Date: Fri, 22 Nov 2013 23:56:59 +0000
Subject: [Numpy-discussion] [EXTERNAL] numpy.i and INPLACE_ARRAY1[ANY]
In-Reply-To:
References: <22D2D0DB-CBF9-46D3-8DAB-6EBFEAC2BE5A@sandia.gov>
Message-ID:

OK yeah, I tested with float data_location[] but then it expects an array of length 0 (shape [0], it says). With float data_location[64] I can use a size-64 array, but this isn't very useful in this instance. I will try and make a typemap for just (float* INPLACE_ARRAY1) without a DIM1 as that's probably the cleanest.
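[Editor's note: as a rough illustration of what a no-DIM1 INPLACE_ARRAY1 typemap still has to enforce, the same no-conversion checks numpy.i performs (matching dtype, one dimension, contiguity, native byte order) can be mirrored in pure NumPy. A minimal sketch; the helper name check_inplace_array1 is hypothetical:]

```python
import numpy as np

def check_inplace_array1(arr, typecode=np.float32):
    """Mirror the checks numpy.i applies to an INPLACE_ARRAY1 argument:
    matching dtype (no silent conversion for in-place arrays), one
    dimension, contiguous layout, and native byte order."""
    if arr.dtype != np.dtype(typecode):
        raise TypeError("array must already have the expected dtype")
    if arr.ndim != 1:
        raise TypeError("array must be 1-D")
    if not arr.flags["C_CONTIGUOUS"]:
        raise TypeError("array must be contiguous")
    if not arr.dtype.isnative:
        raise TypeError("array must be in native byte order")
    return arr.size  # the length a DIM1 typemap would have passed along

buf = np.zeros(64, dtype=np.float32)
n = check_inplace_array1(buf)  # passes all checks; no copy is made
```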
When I was playing with the typemaps before I put %typecheck and %typemap in lilv.i just under %include "numpy.i" (as in attached) but I don't think they were actually noticed till I put them in numpy.i itself. Where is the appropriate place to extend these? On 22 November 2013 23:44, Bill Spotz wrote: > Yes, typemaps are checked against individual arguments or contiguous groups of arguments, not necessarily the entire argument list. > > I believe the argument would have to be "float data_location[]", signifying a null-terminated array, rather than float*, for the INPLACE_ARRAY1[ANY] to work. > > On Nov 22, 2013, at 4:32 PM, Kaspar Emanuel wrote: > >> Cool! That seems to have worked. Many thanks. So I didn't need my own >> typemap for this at all as it will already ignore the rest of the >> arguments? >> >> What I still don't understand is that there seems to be a typemap for >> INPLACE_ARRAY1[ANY] without any DIM1. How come I can't apply that? >> >>> Be sure you understand the use case. The (data_location, unused) >> pair is going to be provided by a numpy array and its length. You >> might want to do a check to make sure variable "unused" has the >> correct value before you pass the numpy array data on to your >> function. >> >> The only point where the check could be made would be in the run >> function which should not be run with a value longer than the array >> length. >> >> >> On 22 November 2013 23:19, Bill Spotz wrote: >>> I think you are getting close. Application of the typemap simply requires >>> >>> %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int unused)} >>> >>> rather than the entire argument list. >>> >>> Be sure you understand the use case. The (data_location, unused) pair is going to be provided by a numpy array and its length. You might want to do a check to make sure variable "unused" has the correct value before you pass the numpy array data on to your function. >>> >>> Typemaps are generally non-intuitive. 
But once you understand all the rules, they start to make sense. >>> >>> -Bill >>> >>> On Nov 22, 2013, at 4:05 PM, Kaspar Emanuel wrote: >>> >>>> Hi Bill, >>>> >>>> thanks for your response. So the function I am actually trying to wrap is: >>>> >>>> static inline void >>>> lilv_instance_connect_port(LilvInstance* instance, >>>> uint32_t port_index, >>>> void* data_location) >>>> >>>> It just passes on the pointer to the data_location (the audio buffer) and then you call lilv_instance_run(int nframes) where nframes could be the dimension of your buffer, or possibly less if you really want. >>>> >>>> So following your recommendations I tried to make a wrapper function: >>>> >>>> lilv_instance_pyconnect(LilvInstance* instance, >>>> uint32_t port_index, >>>> float* data_location, int unused) >>>> >>>> and then a typemap following what is in numpy.i : >>>> >>>> >>>> %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, >>>> fragment="NumPy_Macros") >>>> (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) >>>> { >>>> $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), >>>> DATA_TYPECODE); >>>> } >>>> %typemap(in, >>>> fragment="NumPy_Fragments") >>>> (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, DIM_TYPE DIM1) >>>> (PyArrayObject* array=NULL, int i=1) >>>> { >>>> array = obj_to_array_no_conversion($input, DATA_TYPECODE); >>>> if (!array || !require_dimensions(array,1) || !require_contiguous(array) >>>> || !require_native(array)) SWIG_fail; >>>> $1 = (DATA_TYPE*) array_data(array); >>>> $2 = 1; >>>> for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); >>>> } >>>> >>>> which I tried to apply with: >>>> >>>> >>>> %apply (LilvInstance* instance, uint32_t port_index, float* INPLACE_ARRAY1, int DIM1) {(LilvInstance* instance, uint32_t port_index, float* data_location, int unused)} >>>> >>>> But it doesn?t seem to do anything an I just get a TypeError. >>>> >>>> You might have noticed I am a little bit out of my depth? 
>>>> >>>> Ta, >>>> >>>> Kaspar >>>> >>>> >>>> >>>> On 22 November 2013 22:40, Bill Spotz wrote: >>>> Kaspar, >>>> >>>> Yes, in order for numpy.i typemaps to work, you need to provide dimensions. How is lilv_test(float*) supposed to know how large the float array is? Is it actually a method where the class knows the size? In cases where dimensions are not passed through the argument list, you have two options: >>>> >>>> 1. Write a proxy function that does have dimension arguments and calls the original function, and then wrap that instead of the original function. >>>> >>>> 2. Use the functions and macros in numpy.i to write new typemaps that work for your case. >>>> >>>> -Bill >>>> >>>> On Nov 22, 2013, at 3:30 PM, Kaspar Emanuel wrote: >>>> >>>>> Hey, >>>>> I am trying to improve the Lilv Python bindings to include numpy.i to allow for creation and verification of audio test buffers using NumPy. >>>>> >>>>> I am just trying to get something working at the moment so I am tring to wrap a test function. >>>>> >>>>> static inline void >>>>> lilv_test(float* data_location){} >>>>> >>>>> and I have in lilv.i: >>>>> >>>>> %apply (float* INPLACE_ARRAY1) {(float* data_location)}; >>>>> This doesn?t produce any warnings or anything but when I try and use it from Python I get: >>>>> >>>>> TypeError: in method 'lilv_test', argument 1 of type 'float *' >>>>> What does work is if I have: >>>>> >>>>> lilv_test(float* data_location, int n){} >>>>> and >>>>> >>>>> %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int n)}; >>>>> but this doesn?t fit very well with the functions I eventually want to wrap, as they don?t have a dimension argument. >>>>> >>>>> Is it not possible to use INPLACE_ARRAY1 without a dimension? 
>>>>> >>>>> Thanks for any help, >>>>> >>>>> Kaspar >>>>> >>>>> >>>>> _______________________________________________ >>>>> NumPy-Discussion mailing list >>>>> NumPy-Discussion at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>>> ** Bill Spotz ** >>>> ** Sandia National Laboratories Voice: (505)845-0170 ** >>>> ** P.O. Box 5800 Fax: (505)284-0154 ** >>>> ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> ** Bill Spotz ** >>> ** Sandia National Laboratories Voice: (505)845-0170 ** >>> ** P.O. Box 5800 Fax: (505)284-0154 ** >>> ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** >>> >>> >>> >>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... 
Name: lilv.i Type: application/octet-stream Size: 2209 bytes Desc: not available URL:

From kaspar.emanuel at gmail.com Fri Nov 22 19:11:24 2013
From: kaspar.emanuel at gmail.com (Kaspar Emanuel)
Date: Sat, 23 Nov 2013 00:11:24 +0000
Subject: [Numpy-discussion] [EXTERNAL] numpy.i and INPLACE_ARRAY1[ANY]
In-Reply-To:
References: <22D2D0DB-CBF9-46D3-8DAB-6EBFEAC2BE5A@sandia.gov>
Message-ID:

Here is the typemap for anyone with a similar problem:

/* Typemap suite for (DATA_TYPE* INPLACE_ARRAY1) */
%typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY,
           fragment="NumPy_Macros")
  (DATA_TYPE* INPLACE_ARRAY1)
{
  $1 = is_array($input) && PyArray_EquivTypenums(array_type($input),
                                                 DATA_TYPECODE);
}
%typemap(in,
         fragment="NumPy_Fragments")
  (DATA_TYPE* INPLACE_ARRAY1)
  (PyArrayObject* array=NULL)
{
  array = obj_to_array_no_conversion($input, DATA_TYPECODE);
  if (!array || !require_dimensions(array,1) || !require_contiguous(array)
      || !require_native(array)) SWIG_fail;
  $1 = (DATA_TYPE*) array_data(array);
}

It seems I have to put it in numpy.i for it to be used.

On 22 November 2013 23:56, Kaspar Emanuel wrote:
> OK yeah, I tested with float data_location[] but then it expects an
> array of length 0, (shape [0] it says). With float data_location[64] I
> can use a size-64 array but this isn't very useful in this instance. I
> will try and make a typemap for just (float* INPLACE_ARRAY1) without a
> DIM1 as that's probably the cleanest.
>
> When I was playing with the typemaps before I put %typecheck and
> %typemap in lilv.i just under %include "numpy.i" (as in attached) but
> I don't think they were actually noticed till I put them in numpy.i
> itself. Where is the appropriate place to extend these?
>
> On 22 November 2013 23:44, Bill Spotz wrote:
> > Yes, typemaps are checked against individual arguments or contiguous
> groups of arguments, not necessarily the entire argument list.
> > > > I believe the argument would have to be "float data_location[]", > signifying a null-terminated array, rather than float*, for the > INPLACE_ARRAY1[ANY] to work. > > > > On Nov 22, 2013, at 4:32 PM, Kaspar Emanuel wrote: > > > >> Cool! That seems to have worked. Many thanks. So I didn't need my own > >> typemap for this at all as it will already ignore the rest of the > >> arguments? > >> > >> What I still don't understand is that there seems to be a typemap for > >> INPLACE_ARRAY1[ANY] without any DIM1. How come I can't apply that? > >> > >>> Be sure you understand the use case. The (data_location, unused) > >> pair is going to be provided by a numpy array and its length. You > >> might want to do a check to make sure variable "unused" has the > >> correct value before you pass the numpy array data on to your > >> function. > >> > >> The only point where the check could be made would be in the run > >> function which should not be run with a value longer than the array > >> length. > >> > >> > >> On 22 November 2013 23:19, Bill Spotz wrote: > >>> I think you are getting close. Application of the typemap simply > requires > >>> > >>> %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, > int unused)} > >>> > >>> rather than the entire argument list. > >>> > >>> Be sure you understand the use case. The (data_location, unused) pair > is going to be provided by a numpy array and its length. You might want to > do a check to make sure variable "unused" has the correct value before you > pass the numpy array data on to your function. > >>> > >>> Typemaps are generally non-intuitive. But once you understand all the > rules, they start to make sense. > >>> > >>> -Bill > >>> > >>> On Nov 22, 2013, at 4:05 PM, Kaspar Emanuel wrote: > >>> > >>>> Hi Bill, > >>>> > >>>> thanks for your response. 
So the function I am actually trying to > wrap is: > >>>> > >>>> static inline void > >>>> lilv_instance_connect_port(LilvInstance* instance, > >>>> uint32_t port_index, > >>>> void* data_location) > >>>> > >>>> It just passes on the pointer to the data_location (the audio buffer) > and then you call lilv_instance_run(int nframes) where nframes could be the > dimension of your buffer, or possibly less if you really want. > >>>> > >>>> So following your recommendations I tried to make a wrapper function: > >>>> > >>>> lilv_instance_pyconnect(LilvInstance* instance, > >>>> uint32_t port_index, > >>>> float* data_location, int unused) > >>>> > >>>> and then a typemap following what is in numpy.i : > >>>> > >>>> > >>>> %typecheck(SWIG_TYPECHECK_DOUBLE_ARRAY, > >>>> fragment="NumPy_Macros") > >>>> (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, > DIM_TYPE DIM1) > >>>> { > >>>> $1 = is_array($input) && PyArray_EquivTypenums(array_type($input), > >>>> DATA_TYPECODE); > >>>> } > >>>> %typemap(in, > >>>> fragment="NumPy_Fragments") > >>>> (UNUSED1* UNUSED2, UNUSED3 UNUSED4, DATA_TYPE* INPLACE_ARRAY1, > DIM_TYPE DIM1) > >>>> (PyArrayObject* array=NULL, int i=1) > >>>> { > >>>> array = obj_to_array_no_conversion($input, DATA_TYPECODE); > >>>> if (!array || !require_dimensions(array,1) || > !require_contiguous(array) > >>>> || !require_native(array)) SWIG_fail; > >>>> $1 = (DATA_TYPE*) array_data(array); > >>>> $2 = 1; > >>>> for (i=0; i < array_numdims(array); ++i) $2 *= array_size(array,i); > >>>> } > >>>> > >>>> which I tried to apply with: > >>>> > >>>> > >>>> %apply (LilvInstance* instance, uint32_t port_index, float* > INPLACE_ARRAY1, int DIM1) {(LilvInstance* instance, uint32_t port_index, > float* data_location, int unused)} > >>>> > >>>> But it doesn?t seem to do anything an I just get a TypeError. > >>>> > >>>> You might have noticed I am a little bit out of my depth? 
> >>>> > >>>> Ta, > >>>> > >>>> Kaspar > >>>> > >>>> > >>>> > >>>> On 22 November 2013 22:40, Bill Spotz wrote: > >>>> Kaspar, > >>>> > >>>> Yes, in order for numpy.i typemaps to work, you need to provide > dimensions. How is lilv_test(float*) supposed to know how large the float > array is? Is it actually a method where the class knows the size? In > cases where dimensions are not passed through the argument list, you have > two options: > >>>> > >>>> 1. Write a proxy function that does have dimension arguments and > calls the original function, and then wrap that instead of the original > function. > >>>> > >>>> 2. Use the functions and macros in numpy.i to write new typemaps > that work for your case. > >>>> > >>>> -Bill > >>>> > >>>> On Nov 22, 2013, at 3:30 PM, Kaspar Emanuel wrote: > >>>> > >>>>> Hey, > >>>>> I am trying to improve the Lilv Python bindings to include numpy.i > to allow for creation and verification of audio test buffers using NumPy. > >>>>> > >>>>> I am just trying to get something working at the moment so I am > trying to wrap a test function. > >>>>> > >>>>> static inline void > >>>>> lilv_test(float* data_location){} > >>>>> > >>>>> and I have in lilv.i: > >>>>> > >>>>> %apply (float* INPLACE_ARRAY1) {(float* data_location)}; > >>>>> This doesn't produce any warnings or anything but when I try and use > it from Python I get: > >>>>> > >>>>> TypeError: in method 'lilv_test', argument 1 of type 'float *' > >>>>> What does work is if I have: > >>>>> > >>>>> lilv_test(float* data_location, int n){} > >>>>> and > >>>>> > >>>>> %apply (float* INPLACE_ARRAY1, int DIM1) {(float* data_location, int > n)}; > >>>>> but this doesn't fit very well with the functions I eventually want > to wrap, as they don't have a dimension argument. > >>>>> > >>>>> Is it not possible to use INPLACE_ARRAY1 without a dimension?
> >>>>> > >>>>> Thanks for any help, > >>>>> > >>>>> Kaspar > >>>>> > >>>>> > >>>>> _______________________________________________ > >>>>> NumPy-Discussion mailing list > >>>>> NumPy-Discussion at scipy.org > >>>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion > >>>> > >>>> ** Bill Spotz ** > >>>> ** Sandia National Laboratories Voice: (505)845-0170 ** > >>>> ** P.O. Box 5800 Fax: (505)284-0154 ** > >>>> ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> _______________________________________________ > >>>> NumPy-Discussion mailing list > >>>> NumPy-Discussion at scipy.org > >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion > >>>> > >>>> _______________________________________________ > >>>> NumPy-Discussion mailing list > >>>> NumPy-Discussion at scipy.org > >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion > >>> > >>> ** Bill Spotz ** > >>> ** Sandia National Laboratories Voice: (505)845-0170 ** > >>> ** P.O. Box 5800 Fax: (505)284-0154 ** > >>> ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > >>> > >>> > >>> > >>> > >>> > >>> _______________________________________________ > >>> NumPy-Discussion mailing list > >>> NumPy-Discussion at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at scipy.org > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > ** Bill Spotz ** > > ** Sandia National Laboratories Voice: (505)845-0170 ** > > ** P.O. Box 5800 Fax: (505)284-0154 ** > > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > > > > > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
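The length check Bill recommends -- never asking run() for more frames than the connected buffer holds -- can also be done before ever entering C, on the Python side. A hedged sketch (the lilv call is stubbed out; all names here are illustrative, not part of the real lilv API):

```python
import numpy as np

def run_connected(run_fn, buf, nframes):
    """Guard sketch: numpy.i passes len(buf) into the wrapped connect call,
    but nothing stops run() from being asked for more frames than the
    connected buffer holds -- so check before calling."""
    buf = np.ascontiguousarray(buf, dtype=np.float32)  # layout the typemap expects
    if nframes > buf.shape[0]:
        raise ValueError("nframes exceeds the connected buffer length")
    run_fn(nframes)  # stand-in for lilv_instance_run
    return buf

calls = []
buf = run_connected(calls.append, np.zeros(64, dtype=np.float32), 32)
assert calls == [32]
```

This keeps the wrapped C function's dummy length argument honest without writing any new typemap code.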
URL: From mantihor at gmail.com Sat Nov 23 23:38:03 2013 From: mantihor at gmail.com (Bogdan Opanchuk) Date: Sun, 24 Nov 2013 15:38:03 +1100 Subject: [Numpy-discussion] Struct dtypes alignment Message-ID: Hello, I have some questions about how ``align=True`` and the corresponding ``isalignedstruct`` attribute work. Suppose we create the following dtype: >>> dt1 = numpy.dtype(dict(names=['i1','i2'], formats=[numpy.int32, numpy.int32], offsets=[0,4], itemsize=12, aligned=True)) >>> dt1.alignment 4 >>> dt1.isalignedstruct True The problem here is that the itemsize ``12`` is not actually achievable in C code for these particular fields without an explicit padding. But the ``isalignedstruct`` field reports that the struct is aligned, which is a bit misleading. There is no way to know which is true: 1) ``itemsize`` is consistent with the fields (that is, in C code you would not need any explicit modifiers) 2) ``itemsize`` requires an explicit alignment attribute in C code (e.g, ``itemsize=16`` would be achievable with an explicit alignment=16 applied to the whole struct). 3) ``itemsize`` is not achievable in C code and requires padding. 
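For contrast, when ``align=True`` is used without manual ``offsets``/``itemsize``, numpy picks exactly the layout a C compiler would produce with no explicit attributes; a quick runnable sketch of that unambiguous case (not the manually-offset one described above):

```python
import numpy as np

# numpy chooses the layout: matches C's  struct { int32_t i1; int32_t i2; }
dt = np.dtype([('i1', np.int32), ('i2', np.int32)], align=True)
assert dt.itemsize == 8          # no padding needed
assert dt.alignment == 4
assert dt.isalignedstruct

# int32 followed by int8: matches C's  struct { int32_t a; int8_t b; },
# padded to size 8 so consecutive array elements stay aligned
dt2 = np.dtype([('a', np.int32), ('b', np.int8)], align=True)
assert dt2.itemsize == 8
assert dt2.isalignedstruct
```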
Based on this, I would expect the following behavior: >>> dt1 = numpy.dtype(dict(names=['i1','i2'], formats=[numpy.int32, numpy.int32], offsets=[0,4], itemsize=8, aligned=True)) >>> dt1.base_alignment 4 >>> dt1.alignment 4 >>> dt1.isalignedstruct True >>> dt1 = numpy.dtype(dict(names=['i1','i2'], formats=[numpy.int32, numpy.int32], offsets=[0,4], itemsize=12, aligned=True)) Traceback (most recent call last): File "", line 1, in ValueError: itemsize 12 for this dtype is not achievable without an explicit padding >>> dt1 = numpy.dtype(dict(names=['i1','i2'], formats=[numpy.int32, numpy.int32], offsets=[0,4], itemsize=16, aligned=True)) >>> dt1.base_alignment 4 >>> dt1.alignment 16 >>> dt1.isalignedstruct True It would be also convenient to have a keyword for the total alignment of the struct (which will affect how it is placed in an encompassing struct): >>> dt1 = numpy.dtype(dict(names=['i1','i2'], formats=[numpy.int32, numpy.int32], offsets=[0,4], itemsize=8, alignment=8, aligned=True)) >>> dt1.base_alignment 4 >>> dt1.alignment 8 >>> dt1.isalignedstruct True So, is the current numpy behavior an intended one, or is it a bug? Best regards, Bogdan From lists at onerussian.com Sun Nov 24 20:32:20 2013 From: lists at onerussian.com (Yaroslav Halchenko) Date: Sun, 24 Nov 2013 20:32:20 -0500 Subject: [Numpy-discussion] RFC: is it worth giving a lightning talk at PyCon 2014 on numpy vbench-marking? In-Reply-To: References: <20131015161324.GL27621@onerussian.com> Message-ID: <20131125013220.GN27621@onerussian.com> On Tue, 15 Oct 2013, Nathaniel Smith wrote: > What do you have to lose? > > btw -- fresh results are here http://yarikoptic.github.io/numpy-vbench/ . > > I have tuned benchmarking so it now reflects the best performance across > > multiple executions of the whole battery, thus eliminating spurious > > variance if estimate is provided from a single point in time. Eventually I > > expect many of those curves to become even "cleaner". 
> On another note, what do you think of moving the vbench benchmarks > into the main numpy tree? We already require everyone who submits a > bug fix to add a test; there are a bunch of speed enhancements coming > in these days and it would be nice if we had some way to ask people to > submit a benchmark along with each one so that we know that the > enhancement stays enhanced... On this positive note (it is boring to start a new thread, isn't it?) -- would you be interested in me transfering numpy-vbench over to github.com/numpy ? as of today, plots on http://yarikoptic.github.io/numpy-vbench should be updating 24x7 (just a loop, thus no time guarantee after you submit new changes). Besides benchmarking new benchmarks (your PRs would still be very welcome, so far it was just me and Julian T) and revisions, that process also goes through a random sample of existing previously benchmarked revisions and re-runs the benchmarks thus improving upon the ultimate 'min' timing performance. So you can see already that many plots became much 'cleaner', although now there might be a bit of bias in estimates for recent revisions since they hadn't accumulated yet as many of 'independent runs' as older revisions. -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Senior Research Associate, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From njs at pobox.com Sun Nov 24 20:47:40 2013 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 24 Nov 2013 17:47:40 -0800 Subject: [Numpy-discussion] RFC: is it worth giving a lightning talk at PyCon 2014 on numpy vbench-marking? 
In-Reply-To: <20131125013220.GN27621@onerussian.com> References: <20131015161324.GL27621@onerussian.com> <20131125013220.GN27621@onerussian.com> Message-ID: On Sun, Nov 24, 2013 at 5:32 PM, Yaroslav Halchenko wrote: > > On Tue, 15 Oct 2013, Nathaniel Smith wrote: >> What do you have to lose? > >> > btw -- fresh results are here http://yarikoptic.github.io/numpy-vbench/ . > >> > I have tuned benchmarking so it now reflects the best performance across >> > multiple executions of the whole battery, thus eliminating spurious >> > variance if estimate is provided from a single point in time. Eventually I >> > expect many of those curves to become even "cleaner". > >> On another note, what do you think of moving the vbench benchmarks >> into the main numpy tree? We already require everyone who submits a >> bug fix to add a test; there are a bunch of speed enhancements coming >> in these days and it would be nice if we had some way to ask people to >> submit a benchmark along with each one so that we know that the >> enhancement stays enhanced... > > On this positive note (it is boring to start a new thread, isn't it?) -- > would you be interested in me transfering numpy-vbench over to > github.com/numpy ? If you mean just moving the existing git repo under the numpy organization, like github.com/numpy/numpy-vbench, then I'm not sure how much difference it would make really. What seems like it'd be really useful though would be if the code could move into the main numpy tree, so that people could submit both benchmarks and optimizations within a single PR. > as of today, plots on http://yarikoptic.github.io/numpy-vbench should > be updating 24x7 (just a loop, thus no time guarantee after you submit > new changes). 
> > Besides benchmarking new benchmarks (your PRs would still be very > welcome, so far it was just me and Julian T) and revisions, that > process also goes through a random sample of existing previously > benchmarked revisions and re-runs the benchmarks thus improving upon the > ultimate 'min' timing performance. So you can see already that many > plots became much 'cleaner', although now there might be a bit of bias > in estimates for recent revisions since they hadn't accumulated yet as > many of 'independent runs' as older revisions. Cool! -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From jtaylor.debian at googlemail.com Mon Nov 25 14:16:29 2013 From: jtaylor.debian at googlemail.com (Julian Taylor) Date: Mon, 25 Nov 2013 20:16:29 +0100 Subject: [Numpy-discussion] numpy vbench-marking, compiler comparison In-Reply-To: <20131125013220.GN27621@onerussian.com> References: <20131015161324.GL27621@onerussian.com> <20131125013220.GN27621@onerussian.com> Message-ID: <5293A20D.7070501@googlemail.com> On 25.11.2013 02:32, Yaroslav Halchenko wrote: > > On Tue, 15 Oct 2013, Nathaniel Smith wrote: >> What do you have to lose? > >>> btw -- fresh results are here http://yarikoptic.github.io/numpy-vbench/ . > >>> I have tuned benchmarking so it now reflects the best performance across >>> multiple executions of the whole battery, thus eliminating spurious >>> variance if estimate is provided from a single point in time. Eventually I >>> expect many of those curves to become even "cleaner". > >> On another note, what do you think of moving the vbench benchmarks >> into the main numpy tree? We already require everyone who submits a >> bug fix to add a test; there are a bunch of speed enhancements coming >> in these days and it would be nice if we had some way to ask people to >> submit a benchmark along with each one so that we know that the >> enhancement stays enhanced... 
> > On this positive note (it is boring to start a new thread, isn't it?) -- > would you be interested in me transfering numpy-vbench over to > github.com/numpy ? > > as of today, plots on http://yarikoptic.github.io/numpy-vbench should > be updating 24x7 (just a loop, thus no time guarantee after you submit > new changes). > > Besides benchmarking new benchmarks (your PRs would still be very > welcome, so far it was just me and Julian T) and revisions, that > process also goes through a random sample of existing previously > benchmarked revisions and re-runs the benchmarks thus improving upon the > ultimate 'min' timing performance. So you can see already that many > plots became much 'cleaner', although now there might be a bit of bias > in estimates for recent revisions since they hadn't accumulated yet as > many of 'independent runs' as older revisions. > using the vbench I created a comparison of gcc and clang with different options. Cliffnotes: * gcc -O2 performs 5-10% better than -O3 in most benchmarks, except in a few select cases where the vectorizer does its magic * gcc and clang are very close in performance, but the cases where a compiler wins by a large margin its mostly gcc that wins I have collected some interesting plots on this notebook: http://nbviewer.ipython.org/7646615 From fperez.net at gmail.com Mon Nov 25 19:08:07 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 25 Nov 2013 16:08:07 -0800 Subject: [Numpy-discussion] RFC: is it worth giving a lightning talk at PyCon 2014 on numpy vbench-marking? In-Reply-To: <20131015173625.GN27621@onerussian.com> References: <20131015161324.GL27621@onerussian.com> <20131015173625.GN27621@onerussian.com> Message-ID: On Tue, Oct 15, 2013 at 10:36 AM, Yaroslav Halchenko wrote: > ok -- since no negative feedback received -- submitted as is. I will > let you know when it gets rejected or accepted. 
> Let me know if it's accepted: I'll be keynoting at PyCon'14, and since my focus will obviously be scientific computing, I'd be happy to mention a few talks from our community in my slides. Cheers, f -- Fernando Perez (@fperez_org; http://fperez.org) fperez.net-at-gmail: mailing lists only (I ignore this when swamped!) fernando.perez-at-berkeley: contact me here for any direct mail -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at onerussian.com Mon Nov 25 20:43:47 2013 From: lists at onerussian.com (Yaroslav Halchenko) Date: Mon, 25 Nov 2013 20:43:47 -0500 Subject: [Numpy-discussion] RFC: is it worth giving a lightning talk at PyCon 2014 on numpy vbench-marking? In-Reply-To: References: <20131015161324.GL27621@onerussian.com> <20131015173625.GN27621@onerussian.com> Message-ID: <20131126014347.GP27621@onerussian.com> On Mon, 25 Nov 2013, Fernando Perez wrote: > ok -- since no negative feedback received -- submitted as is. ?I will > let you know when it gets rejected or accepted. > Let me know if it's accepted: I'll be keynoting at PyCon'14, and since my > focus will obviously be scientific computing, I'd be happy to mention a > few talks from our community in my slides. thank you Fernando! So far I have heard nothing from PyCon people. Cheers! -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Senior Research Associate, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From lists at onerussian.com Mon Nov 25 20:48:24 2013 From: lists at onerussian.com (Yaroslav Halchenko) Date: Mon, 25 Nov 2013 20:48:24 -0500 Subject: [Numpy-discussion] RFC: is it worth giving a lightning talk at PyCon 2014 on numpy vbench-marking? 
In-Reply-To: References: <20131015161324.GL27621@onerussian.com> <20131125013220.GN27621@onerussian.com> Message-ID: <20131126014824.GQ27621@onerussian.com> On Sun, 24 Nov 2013, Nathaniel Smith wrote: > > On this positive note (it is boring to start a new thread, isn't it?) -- > > would you be interested in me transfering numpy-vbench over to > > github.com/numpy ? > If you mean just moving the existing git repo under the numpy > organization, like github.com/numpy/numpy-vbench, then I'm not sure > how much difference it would make really. I just thought about better visibility for this little project which is now under some shmooptic/ on github ;) > What seems like it'd be > really useful though would be if the code could move into the main > numpy tree, so that people could submit both benchmarks and > optimizations within a single PR. concur! ;) we would need to finally look into - RFing vb_suite/test_perf.py from within pandas over into vbench - making our changes into stock vbench - moving numpy-vbench under numpy tree -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Senior Research Associate, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From davidmenhur at gmail.com Tue Nov 26 03:57:32 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Tue, 26 Nov 2013 09:57:32 +0100 Subject: [Numpy-discussion] numpy vbench-marking, compiler comparison In-Reply-To: <5293A20D.7070501@googlemail.com> References: <20131015161324.GL27621@onerussian.com> <20131125013220.GN27621@onerussian.com> <5293A20D.7070501@googlemail.com> Message-ID: Have you tried on an Intel CPU? I have both an i5 quad core and an i7 octo core where I could run it over the weekend. One may expect some compiler magic taking advantage of the advanced features, especially the i7.
/David On Nov 25, 2013 8:16 PM, "Julian Taylor" wrote: > On 25.11.2013 02:32, Yaroslav Halchenko wrote: > > > > On Tue, 15 Oct 2013, Nathaniel Smith wrote: > >> What do you have to lose? > > > >>> btw -- fresh results are here > http://yarikoptic.github.io/numpy-vbench/ . > > > >>> I have tuned benchmarking so it now reflects the best performance > across > >>> multiple executions of the whole battery, thus eliminating spurious > >>> variance if estimate is provided from a single point in time. > Eventually I > >>> expect many of those curves to become even "cleaner". > > > >> On another note, what do you think of moving the vbench benchmarks > >> into the main numpy tree? We already require everyone who submits a > >> bug fix to add a test; there are a bunch of speed enhancements coming > >> in these days and it would be nice if we had some way to ask people to > >> submit a benchmark along with each one so that we know that the > >> enhancement stays enhanced... > > > > On this positive note (it is boring to start a new thread, isn't it?) -- > > would you be interested in me transfering numpy-vbench over to > > github.com/numpy ? > > > > as of today, plots on http://yarikoptic.github.io/numpy-vbench should > > be updating 24x7 (just a loop, thus no time guarantee after you submit > > new changes). > > > > Besides benchmarking new benchmarks (your PRs would still be very > > welcome, so far it was just me and Julian T) and revisions, that > > process also goes through a random sample of existing previously > > benchmarked revisions and re-runs the benchmarks thus improving upon the > > ultimate 'min' timing performance. So you can see already that many > > plots became much 'cleaner', although now there might be a bit of bias > > in estimates for recent revisions since they hadn't accumulated yet as > > many of 'independent runs' as older revisions. > > > > using the vbench I created a comparison of gcc and clang with different > options. 
> Cliffnotes: > * gcc -O2 performs 5-10% better than -O3 in most benchmarks, except in a > few select cases where the vectorizer does its magic > * gcc and clang are very close in performance, but the cases where a > compiler wins by a large margin its mostly gcc that wins > > I have collected some interesting plots on this notebook: > http://nbviewer.ipython.org/7646615 > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dineshbvadhia at hotmail.com Tue Nov 26 04:02:40 2013 From: dineshbvadhia at hotmail.com (Dinesh Vadhia) Date: Tue, 26 Nov 2013 01:02:40 -0800 Subject: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison Message-ID: Probably a loaded question but is there a significant performance difference between using MKL (or OpenBLAS) on multi-core cpu's and cuBLAS on gpu's. Does anyone have recent experience or link to an independent benchmark? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jerome.Kieffer at esrf.fr Tue Nov 26 05:42:05 2013 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Tue, 26 Nov 2013 11:42:05 +0100 Subject: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison In-Reply-To: References: Message-ID: <20131126114205.971805cb.Jerome.Kieffer@esrf.fr> On Tue, 26 Nov 2013 01:02:40 -0800 "Dinesh Vadhia" wrote: > Probably a loaded question but is there a significant performance difference between using MKL (or OpenBLAS) on multi-core cpu's and cuBLAS on gpu's. Does anyone have recent experience or link to an independent benchmark? 
> Using Numpy (Xeon 5520 2.2GHz): In [1]: import numpy In [2]: shape = (450,450,450) In [3]: start=numpy.random.random(shape).astype("complex128") In [4]: %timeit result = numpy.fft.fftn(start) 1 loops, best of 3: 10.2 s per loop Using FFTw (8 threads, 2x quad cores): In [5]: import fftw3 In [7]: result = numpy.empty_like(start) In [8]: fft = fftw3.Plan(start, result, direction='forward', flags=['measure'], nthreads=8) In [9]: %timeit fft() 1 loops, best of 3: 887 ms per loop Using CuFFT (GeForce Titan): 1) with 2 transfers: In [10]: import pycuda,pycuda.gpuarray as gpuarray,scikits.cuda.fft as cu_fft,pycuda.autoinit In [11]: cuplan = cu_fft.Plan(start.shape, numpy.complex128, numpy.complex128) In [12]: d_result = gpuarray.empty(start.shape, start.dtype) In [13]: d_start = gpuarray.empty(start.shape, start.dtype) In [14]: def cuda_fft(start): ....: d_start.set(start) ....: cu_fft.fft(d_start, d_result, cuplan) ....: return d_result.get() ....: In [15]: %timeit cuda_fft(start) 1 loops, best of 3: 1.7 s per loop 2) with 1 transfer: In [18]: def cuda_fft_2(): cu_fft.fft(d_start, d_result, cuplan) return d_result.get() ....: In [20]: %timeit cuda_fft_2() 1 loops, best of 3: 1.05 s per loop 3) Without transfer: In [22]: def cuda_fft_3(): cu_fft.fft(d_start, d_result, cuplan) pycuda.autoinit.context.synchronize() ....: In [23]: %timeit cuda_fft_3() 1 loops, best of 3: 202 ms per loop Conclusion: A Geforce Titan (1000 €) can be 4x faster than a couple of Xeon 5520 (2x 250 €) if your data are already on the GPU. Nota: Plan calculations are much faster on GPU than on CPU. -- Jérôme Kieffer tel +33 476 882 445 From regi.public at gmail.com Tue Nov 26 05:47:42 2013 From: regi.public at gmail.com (regikeyz .)
Date: Tue, 26 Nov 2013 10:47:42 +0000 Subject: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison In-Reply-To: <20131126114205.971805cb.Jerome.Kieffer@esrf.fr> References: <20131126114205.971805cb.Jerome.Kieffer@esrf.fr> Message-ID: HI GUYS, PLEASE COULD YOU UNSUBSCRIBE ME FROM THESE EMAILS I cant find the link on the bottom Thank-you On 26 November 2013 10:42, Jerome Kieffer wrote: > On Tue, 26 Nov 2013 01:02:40 -0800 > "Dinesh Vadhia" wrote: > > > Probably a loaded question but is there a significant performance > difference between using MKL (or OpenBLAS) on multi-core cpu's and cuBLAS > on gpu's. Does anyone have recent experience or link to an independent > benchmark? > > > > Using Numpy (Xeon 5520 2.2GHz): > > In [1]: import numpy > In [2]: shape = (450,450,450) > In [3]: start=numpy.random.random(shape).astype("complex128") > In [4]: %timeit result = numpy.fft.fftn(start) > 1 loops, best of 3: 10.2 s per loop > > Using FFTw (8 threads (2x quad cores): > > In [5]: import fftw3 > In [7]: result = numpy.empty_like(start) > In [8]: fft = fftw3.Plan(start, result, direction='forward', > flags=['measure'], nthreads=8) > In [9]: %timeit fft() > 1 loops, best of 3: 887 ms per loop > > Using CuFFT (GeForce Titan): > 1) with 2 transfers: > In [10]: import pycuda,pycuda.gpuarray as gpuarray,scikits.cuda.fft as > cu_fft,pycuda.autoinit > In [11]: cuplan = cu_fft.Plan(start.shape, numpy.complex128, > numpy.complex128) > In [12]: d_result = gpuarray.empty(start.shape, start.dtype) > In [13]: d_start = gpuarray.empty(start.shape, start.dtype) > In [14]: def cuda_fft(start): > ....: d_start.set(start) > ....: cu_fft.fft(d_start, d_result, cuplan) > ....: return d_result.get() > ....: > In [15]: %timeit cuda_fft(start) > 1 loops, best of 3: 1.7 s per loop > > 2) with 1 transfert: > In [18]: def cuda_fft_2(): > cu_fft.fft(d_start, d_result, cuplan) > return d_result.get() > ....: > In [20]: %timeit cuda_fft_2() > 1 loops, best of 3: 1.05 s per loop > > 3) Without transfer: 
> In [22]: def cuda_fft_3(): > cu_fft.fft(d_start, d_result, cuplan) > pycuda.autoinit.context.synchronize() > ....: > > In [23]: %timeit cuda_fft_3() > 1 loops, best of 3: 202 ms per loop > > Conclusion: > A Geforce Titan (1000 €) can be 4x faster than a couple of Xeon 5520 (2x > 250 €) if your data are already on the GPU. > Nota: Plan calculations are much faster on GPU than on CPU. > -- > Jérôme Kieffer > tel +33 476 882 445 > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dineshbvadhia at hotmail.com Tue Nov 26 07:10:14 2013 From: dineshbvadhia at hotmail.com (Dinesh Vadhia) Date: Tue, 26 Nov 2013 04:10:14 -0800 Subject: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison Message-ID: Jerome, Thanks for the swift response and tests. Crikey, that is a significant difference at first glance. Would it be possible to compare a BLAS computation eg. matrix-vector or matrix-matrix calculation? Thx! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nouiz at nouiz.org Tue Nov 26 08:37:49 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Tue, 26 Nov 2013 08:37:49 -0500 Subject: [Numpy-discussion] MKL + CPU, GPU + cuBLAS comparison In-Reply-To: References: Message-ID: We have such benchmark in Theano: https://github.com/Theano/Theano/blob/master/theano/misc/check_blas.py#L177 HTH Fred On Tue, Nov 26, 2013 at 7:10 AM, Dinesh Vadhia wrote: > Jerome, Thanks for the swift response and tests. Crikey, that is a > significant difference at first glance. Would it be possible to compare a > BLAS computation eg. matrix-vector or matrix-matrix calculation? Thx!
> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From regi.public at gmail.com Tue Nov 26 08:42:08 2013 From: regi.public at gmail.com (regikeyz .) Date: Tue, 26 Nov 2013 13:42:08 +0000 Subject: [Numpy-discussion] UNSUBSCRIBE Re: MKL + CPU, GPU + cuBLAS comparison Message-ID: UNSUBSCRIBE On 26 November 2013 13:37, Frédéric Bastien wrote: > We have such benchmark in Theano: > > https://github.com/Theano/Theano/blob/master/theano/misc/check_blas.py#L177 > > HTH > > Fred > > On Tue, Nov 26, 2013 at 7:10 AM, Dinesh Vadhia > wrote: > > Jerome, Thanks for the swift response and tests. Crikey, that is a > > significant difference at first glance. Would it be possible to compare > a > > BLAS computation eg. matrix-vector or matrix-matrix calculation? Thx! > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmhobson at gmail.com Tue Nov 26 11:11:27 2013 From: pmhobson at gmail.com (Paul Hobson) Date: Tue, 26 Nov 2013 08:11:27 -0800 Subject: [Numpy-discussion] UNSUBSCRIBE Re: MKL + CPU, GPU + cuBLAS comparison In-Reply-To: References: Message-ID: We can't manage your account for you. Click here: http://mail.scipy.org/mailman/listinfo/numpy-discussion to unsubscribe yourself. -paul On Tue, Nov 26, 2013 at 5:42 AM, regikeyz .
wrote: > UNSUBSCRIBE > > > On 26 November 2013 13:37, Frédéric Bastien wrote: > >> We have such benchmark in Theano: >> >> >> https://github.com/Theano/Theano/blob/master/theano/misc/check_blas.py#L177 >> >> HTH >> >> Fred >> >> On Tue, Nov 26, 2013 at 7:10 AM, Dinesh Vadhia >> wrote: >> > Jerome, Thanks for the swift response and tests. Crikey, that is a >> > significant difference at first glance. Would it be possible to >> compare a >> > BLAS computation eg. matrix-vector or matrix-matrix calculation? Thx! >> > >> > _______________________________________________ >> > NumPy-Discussion mailing list >> > NumPy-Discussion at scipy.org >> > http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtaylor.debian at googlemail.com Tue Nov 26 12:57:08 2013 From: jtaylor.debian at googlemail.com (Julian Taylor) Date: Tue, 26 Nov 2013 18:57:08 +0100 Subject: [Numpy-discussion] numpy vbench-marking, compiler comparison In-Reply-To: References: <20131015161324.GL27621@onerussian.com> <20131125013220.GN27621@onerussian.com> <5293A20D.7070501@googlemail.com> Message-ID: <5294E0F4.8030802@googlemail.com> there isn't that much code in numpy that profits from modern x86 instruction sets, even the simple arithmetic loops are strided and thus unvectorizable by the compiler. They have been vectorized manually in 1.8 using sse2 and it is on my todo list to add runtime detected avx support. On 26.11.2013 09:57, Daπid wrote: > Have you tried on an Intel CPU? I have both an i5 quad core and an i7 > octo core where I could run it over the weekend.
One may expect some > compiler magic taking advantage of the advanced features, especially the i7. > > using the vbench I created a comparison of gcc and clang with different > options. > Cliffnotes: > * gcc -O2 performs 5-10% better than -O3 in most benchmarks, except in a > few select cases where the vectorizer does its magic > * gcc and clang are very close in performance, but the cases where a > compiler wins by a large margin its mostly gcc that wins > > I have collected some interesting plots on this notebook: > http://nbviewer.ipython.org/7646615 From p.rennert at cs.ucl.ac.uk Tue Nov 26 14:54:36 2013 From: p.rennert at cs.ucl.ac.uk (Peter Rennert) Date: Tue, 26 Nov 2013 19:54:36 +0000 Subject: [Numpy-discussion] PyArray_BASE equivalent in python Message-ID: <5294FC7C.3060209@cs.ucl.ac.uk> Hi, as the title says, I am looking for a way to set in python the base of an ndarray to an object. Use case is porting qimage2ndarray to PySide where I want to do something like: In [1]: from PySide import QtGui In [2]: image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') In [3]: import numpy as np In [4]: a = np.frombuffer(image.bits()) --> I would like to do something like: In [5]: a.base = image --> to avoid situations such as: In [6]: del image In [7]: a Segmentation fault (core dumped) The current implementation of qimage2ndarray uses a C function to do PyArray_BASE(sipRes) = image; Py_INCREF(image); But I want to avoid having to install compilers, headers etc on target machines of my code just for these two lines of code.
Thanks, P From njs at pobox.com Tue Nov 26 15:03:55 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 26 Nov 2013 12:03:55 -0800 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: <5294FC7C.3060209@cs.ucl.ac.uk> References: <5294FC7C.3060209@cs.ucl.ac.uk> Message-ID: On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert wrote: > Hi, > > I as the title says, I am looking for a way to set in python the base of > an ndarray to an object. > > Use case is porting qimage2ndarray to PySide where I want to do > something like: > > In [1]: from PySide import QtGui > > In [2]: image = > QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') > > In [3]: import numpy as np > > In [4]: a = np.frombuffer(image.bits()) > > --> I would like to do something like: > In [5]: a.base = image > > --> to avoid situations such as: > In [6]: del image > > In [7]: a > Segmentation fault (core dumped) This is a bug in PySide -- the buffer object returned by image.bits() needs to hold a reference to the original image. Please report a bug to them. You will also get a segfault from code that doesn't use numpy at all, by doing things like: bits = image.bits() del image As a workaround, you can write a little class with an __array_interface__ attribute that points to the image's contents, and then call np.asarray() on this object. The resulting array will have your object as its .base, and then your object can hold onto whatever references it wants. -n From p.rennert at cs.ucl.ac.uk Tue Nov 26 15:12:16 2013 From: p.rennert at cs.ucl.ac.uk (Peter Rennert) Date: Tue, 26 Nov 2013 20:12:16 +0000 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: References: <5294FC7C.3060209@cs.ucl.ac.uk> Message-ID: <529500A0.3000402@cs.ucl.ac.uk> Brilliant thanks, I will try out the "little class" approach. 
On 11/26/2013 08:03 PM, Nathaniel Smith wrote: > On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert wrote: >> Hi, >> >> I as the title says, I am looking for a way to set in python the base of >> an ndarray to an object. >> >> Use case is porting qimage2ndarray to PySide where I want to do >> something like: >> >> In [1]: from PySide import QtGui >> >> In [2]: image = >> QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') >> >> In [3]: import numpy as np >> >> In [4]: a = np.frombuffer(image.bits()) >> >> --> I would like to do something like: >> In [5]: a.base = image >> >> --> to avoid situations such as: >> In [6]: del image >> >> In [7]: a >> Segmentation fault (core dumped) > This is a bug in PySide -- the buffer object returned by image.bits() > needs to hold a reference to the original image. Please report a bug > to them. You will also get a segfault from code that doesn't use numpy > at all, by doing things like: > > bits = image.bits() > del image > > > As a workaround, you can write a little class with an > __array_interface__ attribute that points to the image's contents, and > then call np.asarray() on this object. The resulting array will have > your object as its .base, and then your object can hold onto whatever > references it wants. > > -n > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From p.rennert at cs.ucl.ac.uk Tue Nov 26 16:37:40 2013 From: p.rennert at cs.ucl.ac.uk (Peter Rennert) Date: Tue, 26 Nov 2013 21:37:40 +0000 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: <529500A0.3000402@cs.ucl.ac.uk> References: <5294FC7C.3060209@cs.ucl.ac.uk> <529500A0.3000402@cs.ucl.ac.uk> Message-ID: <529514A4.3050505@cs.ucl.ac.uk> I probably did something wrong, but it does not work how I tried it. 
I am not sure if you meant it like this, but I tried to subclass from ndarray first, but then I do not have access to __array_interface__. Is this what you had in mind?

from PySide import QtGui
import numpy as np

class myArray():
    def __init__(self, shape, bits, strides):
        self.__array_interface__ = \
            {'data': bits,
             'typestr': '<u4',
             'descr': [('', '<u4')],
             'shape': shape,
             'strides': strides,
             'version': 3}

image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

b = myArray((image.width(), image.height()), image.bits(),
            (image.bytesPerLine(), 4))
b = np.asarray(b)

b.base
# <...>

del image

b
# booom #

On 11/26/2013 08:12 PM, Peter Rennert wrote: > Brilliant thanks, I will try out the "little class" approach. > > On 11/26/2013 08:03 PM, Nathaniel Smith wrote: >> On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert wrote: >>> Hi, >>> >>> I as the title says, I am looking for a way to set in python the base of >>> an ndarray to an object. >>> >>> Use case is porting qimage2ndarray to PySide where I want to do >>> something like: >>> >>> In [1]: from PySide import QtGui >>> >>> In [2]: image = >>> QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') >>> >>> In [3]: import numpy as np >>> >>> In [4]: a = np.frombuffer(image.bits()) >>> >>> --> I would like to do something like: >>> In [5]: a.base = image >>> >>> --> to avoid situations such as: >>> In [6]: del image >>> >>> In [7]: a >>> Segmentation fault (core dumped) >> This is a bug in PySide -- the buffer object returned by image.bits() >> needs to hold a reference to the original image. Please report a bug >> to them. You will also get a segfault from code that doesn't use numpy >> at all, by doing things like: >> >> bits = image.bits() >> del image >> >> >> As a workaround, you can write a little class with an >> __array_interface__ attribute that points to the image's contents, and >> then call np.asarray() on this object. The resulting array will have >> your object as its .base, and then your object can hold onto whatever >> references it wants.
>> >> -n >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From njs at pobox.com Tue Nov 26 16:46:25 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 26 Nov 2013 13:46:25 -0800 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: <529514A4.3050505@cs.ucl.ac.uk> References: <5294FC7C.3060209@cs.ucl.ac.uk> <529500A0.3000402@cs.ucl.ac.uk> <529514A4.3050505@cs.ucl.ac.uk> Message-ID: On Tue, Nov 26, 2013 at 1:37 PM, Peter Rennert wrote: > I probably did something wrong, but it does not work how I tried it. I > am not sure if you meant it like this, but I tried to subclass from > ndarray first, but then I do not have access to __array_interface__. Is > this what you had in mind? > > from PySide import QtGui > import numpy as np > > class myArray(): > def __init__(self, shape, bits, strides):
> self.__array_interface__ = \
> {'data': bits,
> 'typestr': '<u4',
> 'descr': [('', '<u4')],
> 'shape': shape,
> 'strides': strides,
> 'version': 3}

You need this object to also hold a reference to the image object -- the idea is that so long as the array lives it will hold a ref to this object in .base, and then this object holds the image alive. But...

> image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') > > b = myArray((image.width(), image.height()), image.bits(), > (image.bytesPerLine(), 4)) > b = np.asarray(b) > > b.base
> # <...>

...this isn't promising, since it suggests that numpy is cleverly cutting out the middle-man when you give it a buffer object, since it knows that buffer objects are supposed to actually take care of memory management. You might have better luck using the raw pointer two-tuple form for the "data" field.
You can't get these pointers directly from a buffer object, but numpy will give them to you. So you can use something like "data": np.asarray(bits).__array_interface__["data"] -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From robert.kern at gmail.com Tue Nov 26 16:46:21 2013 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 Nov 2013 21:46:21 +0000 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: <529514A4.3050505@cs.ucl.ac.uk> References: <5294FC7C.3060209@cs.ucl.ac.uk> <529500A0.3000402@cs.ucl.ac.uk> <529514A4.3050505@cs.ucl.ac.uk> Message-ID: On Tue, Nov 26, 2013 at 9:37 PM, Peter Rennert wrote: > > I probably did something wrong, but it does not work how I tried it. I > am not sure if you meant it like this, but I tried to subclass from > ndarray first, but then I do not have access to __array_interface__. Is > this what you had in mind? > > from PySide import QtGui > import numpy as np > > class myArray(): > def __init__(self, shape, bits, strides): You need to pass in the image as well and keep a reference to it.
> self.__array_interface__ = \
> {'data': bits,
> 'typestr': '<u4',
> 'descr': [('', '<u4')],
> 'shape': shape,
> 'strides': strides,
> 'version': 3}

Most of these are wrong. Something like the following should suffice:

class QImageArray(object):
    def __init__(self, qimage):
        shape = (qimage.height(), qimage.width(), -1)
        # Generate an ndarray from the image bits and steal its
        # __array_interface__ information.
        arr = np.frombuffer(qimage.bits(), dtype=np.uint8).reshape(shape)
        self.__array_interface__ = arr.__array_interface__
        # Keep the QImage alive.
        self.qimage = qimage

-- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed...
URL: From p.rennert at cs.ucl.ac.uk Tue Nov 26 17:55:47 2013 From: p.rennert at cs.ucl.ac.uk (Peter Rennert) Date: Tue, 26 Nov 2013 22:55:47 +0000 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: <529514A4.3050505@cs.ucl.ac.uk> References: <5294FC7C.3060209@cs.ucl.ac.uk> <529500A0.3000402@cs.ucl.ac.uk> <529514A4.3050505@cs.ucl.ac.uk> Message-ID: <529526F3.6080409@cs.ucl.ac.uk> Btw, I just wanted to file a bug at PySide, but it might be alright at their end, because I can do this:

from PySide import QtGui

image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')

a = image.bits()

del image

a
# <...>

On 11/26/2013 09:37 PM, Peter Rennert wrote: > I probably did something wrong, but it does not work how I tried it. I > am not sure if you meant it like this, but I tried to subclass from > ndarray first, but then I do not have access to __array_interface__. > Is this what you had in mind? > > from PySide import QtGui > import numpy as np > > class myArray(): > def __init__(self, shape, bits, strides):
> self.__array_interface__ = \
> {'data': bits,
> 'typestr': '<u4',
> 'descr': [('', '<u4')],
> 'shape': shape,
> 'strides': strides,
> 'version': 3}
> > image = > QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') > > b = myArray((image.width(), image.height()), image.bits(), > (image.bytesPerLine(), 4)) > b = np.asarray(b) > > b.base
> # <...>
> > del image > > b > # booom # > > On 11/26/2013 08:12 PM, Peter Rennert wrote: >> Brilliant thanks, I will try out the "little class" approach. >> >> On 11/26/2013 08:03 PM, Nathaniel Smith wrote: >>> On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert >>> wrote: >>>> Hi, >>>> >>>> I as the title says, I am looking for a way to set in python the >>>> base of >>>> an ndarray to an object.
>>>> >>>> Use case is porting qimage2ndarray to PySide where I want to do >>>> something like: >>>> >>>> In [1]: from PySide import QtGui >>>> >>>> In [2]: image = >>>> QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') >>>> >>>> In [3]: import numpy as np >>>> >>>> In [4]: a = np.frombuffer(image.bits()) >>>> >>>> --> I would like to do something like: >>>> In [5]: a.base = image >>>> >>>> --> to avoid situations such as: >>>> In [6]: del image >>>> >>>> In [7]: a >>>> Segmentation fault (core dumped) >>> This is a bug in PySide -- the buffer object returned by image.bits() >>> needs to hold a reference to the original image. Please report a bug >>> to them. You will also get a segfault from code that doesn't use numpy >>> at all, by doing things like: >>> >>> bits = image.bits() >>> del image >>> >>> >>> As a workaround, you can write a little class with an >>> __array_interface__ attribute that points to the image's contents, and >>> then call np.asarray() on this object. The resulting array will have >>> your object as its .base, and then your object can hold onto whatever >>> references it wants. 
>>> >>> -n >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion From njs at pobox.com Tue Nov 26 17:58:21 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 26 Nov 2013 14:58:21 -0800 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: <529526F3.6080409@cs.ucl.ac.uk> References: <5294FC7C.3060209@cs.ucl.ac.uk> <529500A0.3000402@cs.ucl.ac.uk> <529514A4.3050505@cs.ucl.ac.uk> <529526F3.6080409@cs.ucl.ac.uk> Message-ID: On Tue, Nov 26, 2013 at 2:55 PM, Peter Rennert wrote: > Btw, I just wanted to file a bug at PySide, but it might be alright at > their end, because I can do this: > > from PySide import QtGui > > image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') > > a = image.bits() > > del image > > a
> # <...>

That just means that the buffer still has a pointer to the QImage's old memory. It doesn't mean that following that pointer won't crash. Try str(a) or something that actually touches the buffer contents... -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From sudheer.joseph at yahoo.com Wed Nov 27 07:01:10 2013 From: sudheer.joseph at yahoo.com (Sudheer Joseph) Date: Wed, 27 Nov 2013 20:01:10 +0800 (SGT) Subject: [Numpy-discussion] Getting masked array boundary indices Message-ID: <1385553670.11770.YahooMailNeo@web193403.mail.sg3.yahoo.com> Hi, I have a numpy array which is masked ( bathymetry), as seen below [ True] [ True] [ True] [ True]],
fill_value = -9999.0) In [10]: depth[:,1130:1131] I need to find the indices where land(mask) is there along the boundaries and where water(value) is there along the boundaries, the above listing is along the eastern boundary. Please help if there is a way to get the starting and ending index of the mask. I tried np.where but it gives another array, as there are several mask points; I need to use something like "if neighbouring points are True and False then index = i", but I am not getting the pythonic way to get this done. with best regards, Sudheer *************************************************************** Sudheer Joseph Indian National Centre for Ocean Information Services Ministry of Earth Sciences, Govt. of India POST BOX NO: 21, IDA Jeedeemetla P.O. Via Pragathi Nagar,Kukatpally, Hyderabad; Pin:5000 55 Tel:+91-40-23886047(O),Fax:+91-40-23895011(O), Tel:+91-40-23044600(R),Tel:+91-40-9440832534(Mobile) E-mail:sjo.India at gmail.com;sudheer.joseph at yahoo.com Web- http://oppamthadathil.tripod.com *************************************************************** From josef.pktd at gmail.com Wed Nov 27 07:14:38 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 Nov 2013 07:14:38 -0500 Subject: [Numpy-discussion] Getting masked array boundary indices In-Reply-To: <1385553670.11770.YahooMailNeo@web193403.mail.sg3.yahoo.com> References: <1385553670.11770.YahooMailNeo@web193403.mail.sg3.yahoo.com> Message-ID: On Wed, Nov 27, 2013 at 7:01 AM, Sudheer Joseph wrote: > Hi, > I have a numpy array which is masked ( bathymetry), as seen below > > [ True] > [ True] > [ True] > [ True]], > fill_value = -9999.0) > > > In [10]: depth[:,1130:1131] > > I need to find the indices where land(mask) is there along the boundaries and where water(value) is there along the boundaries, the above listing is along eastern boundary. > Please help if there is a way to get starting and ending index of mask.
> I tried np.where but it gives another array as there are several mask points are there, I need to use some thing like "if neighbouring points are True and False then" index =i, but I am not getting the pythonic way to get this done. if I understand correctly np.nonzero(np.diff(depth.mask))[0] Josef > > with best regards, > Sudheer > > > *************************************************************** > Sudheer Joseph > Indian National Centre for Ocean Information Services > Ministry of Earth Sciences, Govt. of India > POST BOX NO: 21, IDA Jeedeemetla P.O. > Via Pragathi Nagar,Kukatpally, Hyderabad; Pin:5000 55 > Tel:+91-40-23886047(O),Fax:+91-40-23895011(O), > Tel:+91-40-23044600(R),Tel:+91-40-9440832534(Mobile) > E-mail:sjo.India at gmail.com;sudheer.joseph at yahoo.com > Web- http://oppamthadathil.tripod.com > *************************************************************** > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From p.rennert at cs.ucl.ac.uk Wed Nov 27 09:16:07 2013 From: p.rennert at cs.ucl.ac.uk (Peter Rennert) Date: Wed, 27 Nov 2013 14:16:07 +0000 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: References: <5294FC7C.3060209@cs.ucl.ac.uk> <529500A0.3000402@cs.ucl.ac.uk> <529514A4.3050505@cs.ucl.ac.uk> <529526F3.6080409@cs.ucl.ac.uk> Message-ID: <5295FEA7.4080606@cs.ucl.ac.uk> First, sorry for not responding to your other replies, there was a jam in Thunderbird and I did not receive your answers. The bits() seem to stay alive after deleting the image: from PySide import QtGui image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') a = image.bits() del image b = str(a) Works. But it might still be a Garbage collector thing and it might break later. I do not know enough about Python and its buffers to know what to expect here. 
As a solution I have done something similar as was proposed earlier, just that I derived from ndarray and kept the QImage reference in it:

from PySide import QtGui as _qt
import numpy as _np

class MemoryTie(_np.ndarray):
    def __new__(cls, image):
        # Retrieving parameters for _np.ndarray()
        dims = [image.height(), image.width()]

        strides = [[] for i in range(2)]
        strides[0] = image.bytesPerLine()

        bits = image.bits()

        if image.format() == _qt.QImage.Format_Indexed8:
            dtype = _np.uint8
            strides[1] = 1
        elif image.format() == _qt.QImage.Format_RGB32 \
                or image.format() == _qt.QImage.Format_ARGB32 \
                or image.format() == _qt.QImage.Format_ARGB32_Premultiplied:
            dtype = _np.uint32
            strides[1] = 4
        elif image.format() == _qt.QImage.Format_Invalid:
            raise ValueError("qimageview got invalid QImage")
        else:
            raise ValueError("qimageview can only handle 8- or 32-bit QImages")

        # creation of ndarray
        obj = _np.ndarray(dims, _np.uint32, bits, 0, strides, 'C').view(cls)
        obj._image = image

        return obj

Thanks all for your help, P On 11/26/2013 10:58 PM, Nathaniel Smith wrote: > On Tue, Nov 26, 2013 at 2:55 PM, Peter Rennert wrote: >> Btw, I just wanted to file a bug at PySide, but it might be alright at >> their end, because I can do this: >> >> from PySide import QtGui >> >> image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg') >> >> a = image.bits() >> >> del image >> >> a
>> # <...>
> That just means that the buffer still has a pointer to the QImage's > old memory. It doesn't mean that following that pointer won't crash. > Try str(a) or something that actually touches the buffer contents...
> From robert.kern at gmail.com Wed Nov 27 09:22:57 2013 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 27 Nov 2013 14:22:57 +0000 Subject: [Numpy-discussion] PyArray_BASE equivalent in python In-Reply-To: <5295FEA7.4080606@cs.ucl.ac.uk> References: <5294FC7C.3060209@cs.ucl.ac.uk> <529500A0.3000402@cs.ucl.ac.uk> <529514A4.3050505@cs.ucl.ac.uk> <529526F3.6080409@cs.ucl.ac.uk> <5295FEA7.4080606@cs.ucl.ac.uk> Message-ID: On Wed, Nov 27, 2013 at 2:16 PM, Peter Rennert wrote: > As a solution I have done something similar as it was proposed earlier, > just that I derived from ndarray and kept the QImage reference it it: > > from PySide import QtGui as _qt > import numpy as _np > > class MemoryTie(np.ndarray): > def __new__(cls, image): > # Retrieving parameters for _np.ndarray() > dims = [image.height(), image.width()] > > strides = [[] for i in range(2)] > strides[0] = image.bytesPerLine() > > bits = image.bits() > > if image.format() == _qt.QImage.Format_Indexed8: > dtype = _np.uint8 > strides[1] = 1 > elif image.format() == _qt.QImage.Format_RGB32 \ > or image.format() == _qt.QImage.Format_ARGB32 \ > or image.format() == _qt.QImage.Format_ARGB32_Premultiplied: > dtype = _np.uint32 > strides[1] = 4 > elif image.format() == _qt.QImage.Format_Invalid: > raise ValueError("qimageview got invalid QImage") > else: > raise ValueError("qimageview can only handle 8- or 32-bit > QImages") > > # creation of ndarray > obj = _np.ndarray(dims, _np.uint32, bits, 0, strides, > 'C').view(cls) > obj._image = image > > return obj Don't do that. Use my recipe. You don't really want your RGBA tuples interpreted as uint32s, do you? from PySide import QtGui import numpy as np class QImageArray(object): def __init__(self, qimage): shape = (qimage.height(), qimage.width(), -1) # Generate an ndarray from the image bits and steal its # __array_interface__ information. 
        arr = np.frombuffer(qimage.bits(), dtype=np.uint8).reshape(shape)
        self.__array_interface__ = arr.__array_interface__
        # Keep the QImage alive.
        self.qimage = qimage

def qimage2ndarray(qimage):
    return np.asarray(QImageArray(qimage))

qimage = QtGui.QImage('/Users/rkern/git/scipy/scipy/misc/tests/data/icon.png')
arr = qimage2ndarray(qimage)
del qimage
print arr

-- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From sudheer.joseph at yahoo.com Wed Nov 27 11:39:42 2013 From: sudheer.joseph at yahoo.com (Sudheer Joseph) Date: Thu, 28 Nov 2013 00:39:42 +0800 (SGT) Subject: [Numpy-discussion] Getting masked array boundary indices In-Reply-To: Message-ID: <1385570382.17246.YahooMailMobile@web193406.mail.sg3.yahoo.com> Thank you, Though it did not get what I expected it is a strong clue, let me explore it, With Best regards Sudheer -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Nov 27 11:56:28 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 27 Nov 2013 09:56:28 -0700 Subject: [Numpy-discussion] xkcd on git commit messages. Message-ID: Here . Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From argriffi at ncsu.edu Wed Nov 27 12:51:47 2013 From: argriffi at ncsu.edu (alex) Date: Wed, 27 Nov 2013 12:51:47 -0500 Subject: [Numpy-discussion] xkcd on git commit messages.
In-Reply-To: References: Message-ID: On Wed, Nov 27, 2013 at 11:56 AM, Charles R Harris wrote: > Here. And his later commit messages went straight to https://twitter.com/gitlost From nouiz at nouiz.org Wed Nov 27 13:44:41 2013 From: nouiz at nouiz.org (Frédéric Bastien) Date: Wed, 27 Nov 2013 13:44:41 -0500 Subject: [Numpy-discussion] Silencing NumPy output In-Reply-To: References: Message-ID: Hi, After more investigation, I found that there already exists a way to suppress those messages on posix systems, so I reused it in the PR. That way, it was faster, and it avoids changes in that area, so there is less chance of breaking other systems: https://github.com/numpy/numpy/pull/4081 But it removes the stdout when we run this command: numpy.distutils.system_info.get_info("blas_opt") But during compilation, we still have the info about what is found:

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
customize Gnu95FCompiler
Found executable /usr/bin/gfortran
customize Gnu95FCompiler
customize Gnu95FCompiler using config
compiling '_configtest.c':
/* This file is generated from numpy/distutils/system_info.py */
void ATL_buildinfo(void);
int main(void) {
  ATL_buildinfo();
  return 0;
}
C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -fPIC
compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/lib64/atlas -lptf77blas -lptcblas -latlas -o _configtest
success!
removing: _configtest.c _configtest.o _configtest
Setting PTATLAS=ATLAS
FOUND:
libraries = ['ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
language = c
define_macros = [('ATLAS_INFO', '"\\"3.8.3\\""')]
include_dirs = ['/usr/include']
FOUND:
libraries = ['ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
language = c
define_macros = [('ATLAS_INFO', '"\\"3.8.3\\""')]
include_dirs = ['/usr/include']
non-existing path in 'numpy/lib': 'benchmarks'
lapack_opt_info:
lapack_mkl_info:
mkl_info:
libraries mkl,vml,guide not found in ['/opt/lisa/os_v2/common/Canopy_64bit/User/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib']
NOT AVAILABLE
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in /opt/lisa/os_v2/common/Canopy_64bit/User/lib
libraries lapack_atlas not found in /opt/lisa/os_v2/common/Canopy_64bit/User/lib
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64
libraries lapack_atlas not found in /usr/local/lib64
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/local/lib
libraries lapack_atlas not found in /usr/lib64/atlas
numpy.distutils.system_info.atlas_threads_info
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
FOUND:
libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
language = f77
define_macros = [('ATLAS_INFO', '"\\"3.8.3\\""')]
include_dirs = ['/usr/include']
FOUND:
libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/atlas']
language = f77
define_macros = [('ATLAS_INFO', '"\\"3.8.3\\""')]
include_dirs = ['/usr/include']

Frédéric On Fri, Nov 22, 2013 at 4:26 PM, Frédéric Bastien wrote: > I didn't forget this, but I got side tracked.
Here is the Theano code > I would like to try to use to replace os.system: > > https://github.com/Theano/Theano/blob/master/theano/misc/windows.py > > But I won't be able to try this before next week. > > Fred > > On Fri, Nov 15, 2013 at 5:49 PM, David Cournapeau wrote: >> >> >> >> On Fri, Nov 15, 2013 at 7:41 PM, Robert Kern wrote: >>> >>> On Fri, Nov 15, 2013 at 7:28 PM, David Cournapeau >>> wrote: >>> > >>> > On Fri, Nov 15, 2013 at 6:21 PM, Charles R Harris >>> > wrote: >>> >>> >> Sure, give it a shot. Looks like subprocess.Popen was intended to >>> >> replace os.system in any case. >>> > >>> > Except that output is not 'real time' with straight Popen, and doing so >>> > reliably on every platform (cough - windows - cough) is not completely >>> > trivial. You also have to handle buffered output, etc... That code is very >>> > fragile, so this would be quite a lot of testing to change, and I am not >>> > sure it worths it. >>> >>> It doesn't have to be "real time". Just use .communicate() and print out >>> the stdout and stderr to their appropriate streams after the subprocess >>> finishes. >> >> >> Indeed, it does not have to be, but that's useful for debugging compilation >> issues (not so much for numpy itself, but for some packages which have files >> that takes a very long time to build, like scipy.sparsetools or bottleneck). >> >> That's a minor point compared to the potential issues when building on >> windows, though. >> >> David >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> From charles at crunch.io Wed Nov 27 15:51:04 2013 From: charles at crunch.io (Charles G. 
Waldman) Date: Wed, 27 Nov 2013 14:51:04 -0600 Subject: [Numpy-discussion] numpy datetime64 NaT string conversion bug & patch Message-ID: If you convert an array of strings to datetime64s and 'NaT' (or one of its variants) appears in the string, all subsequent values are rendered as NaT: (this is in 1.7.1 but the problem is present in the current dev version as well)

>>> import numpy as np
>>> a = np.array(['2010', 'nat', '2030'])
>>> a.astype(np.datetime64)
array(['2010', 'NaT', 'NaT'], dtype='datetime64[Y]')

The fix is to re-initialize 'dt' inside the loop in _strided_to_strided_string_to_datetime (patch attached). Correct behavior (with patch):

>>> import numpy as np
>>> a=np.array(['2010', 'nat', '2020'])
>>> a.astype(np.datetime64)
array(['2010', 'NaT', '2020'], dtype='datetime64[Y]')
>>>

-------------- next part -------------- A non-text attachment was scrubbed... Name: datetime64-nat-bug.patch Type: text/x-patch Size: 758 bytes Desc: not available URL: From matthew.brett at gmail.com Wed Nov 27 16:52:51 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 27 Nov 2013 16:52:51 -0500 Subject: [Numpy-discussion] (no subject) In-Reply-To: References: Message-ID: Hi, Thanks both - very helpful, Matthew On 11/22/13, Robert Kern wrote: > On Fri, Nov 22, 2013 at 9:23 PM, Matthew Brett > wrote: >> >> Hi, >> >> I'm sorry if I missed something obvious - but is there a vectorized >> way to look for None in an array?
>> >> In [3]: a = np.array([1, 1]) >> >> In [4]: a == object() >> Out[4]: array([False, False], dtype=bool) >> >> In [6]: a == None >> Out[6]: False > > [~] > |1> x = np.array([1, None, 2], dtype=object) > > [~] > |2> np.equal(x, None) > array([False, True, False], dtype=bool) > > -- > Robert Kern > From questions.anon at gmail.com Thu Nov 28 02:06:21 2013 From: questions.anon at gmail.com (questions anon) Date: Thu, 28 Nov 2013 18:06:21 +1100 Subject: [Numpy-discussion] sum of array for masked area Message-ID: Hi All, I just posted this on the SciPy forum but realised it might be more appropriate here? I have a separate text file for daily rainfall data that covers the whole country. I would like to calculate the monthly mean, min, max and the mean of the sum for one state. The mean, max and min are just the mean, max and min for all data in that month however the sum data needs to work out the total for the month across the array and then sum that value. I use gdal tools to mask out the rest of the country and I use numpy tools for the summary stats. I can get the max, min and mean for the state, but the mean of the sum keeps giving me a result for the whole country rather than just the state, even though I am performing the analysis on the state only data. I am not sure if this is a masking issue or a numpy calculation issue. The mask works fine for the other summary statistics. Any feedback will be greatly appreciated! 
import numpy as np
import matplotlib.pyplot as plt
from numpy import ma as MA
from mpl_toolkits.basemap import Basemap
from datetime import datetime as dt
from datetime import timedelta
import os
from StringIO import StringIO
from osgeo import gdal, gdalnumeric, ogr, osr
import glob
import matplotlib.dates as mdates
import sys

shapefile=r"/Users/state.shp"

## Create masked array from shapefile
xmin,ymin,xmax,ymax=[111.975,-9.975, 156.275,-44.525]
ncols,nrows=[886, 691] #Your rows/cols
maskvalue = 1

xres=(xmax-xmin)/float(ncols)
yres=(ymax-ymin)/float(nrows)
geotransform=(xmin,xres,0,ymax,0, -yres)

src_ds = ogr.Open(shapefile)
src_lyr=src_ds.GetLayer()

dst_ds = gdal.GetDriverByName('MEM').Create('',ncols, nrows, 1 ,gdal.GDT_Byte)
dst_rb = dst_ds.GetRasterBand(1)
dst_rb.Fill(0) #initialise raster with zeros
dst_rb.SetNoDataValue(0)
dst_ds.SetGeoTransform(geotransform)

err = gdal.RasterizeLayer(dst_ds, [maskvalue], src_lyr)
dst_ds.FlushCache()

mask_arr=dst_ds.GetRasterBand(1).ReadAsArray()
np.set_printoptions(threshold='nan')
mask_arr[mask_arr == 255] = 1
newmask=MA.masked_equal(mask_arr,0)

### calculate monthly summary stats for state Only
rainmax=[]
rainmin=[]
rainmean=[]
rainsum=[]
yearmonthlist=[]
yearmonth_int=[]
errors=[]

OutputFolder=r"/outputfolder"
GLOBTEMPLATE = r"/daily-rainfall/combined/rainfall-{year}/r{year}{month:02}??.txt"

def accumulate_month(year, month):
    files = glob.glob(GLOBTEMPLATE.format(year=year, month=month))
    monthlyrain=[]
    for ifile in files:
        try:
            f=np.genfromtxt(ifile,skip_header=6)
        except:
            print "ERROR with file:", ifile
            errors.append(ifile)
        f=np.flipud(f)
        stateonly_f=np.ma.masked_array(f, mask=newmask.mask) # this masks data to state
        print "stateonly_f:", stateonly_f.max(), stateonly_f.mean(), stateonly_f.sum()
        monthlyrain.append(stateonly_f)

    yearmonth=dt(year,month,1)
    yearmonthlist.append(yearmonth)
    yearmonthint=str(year)+str(month)
    d=dt.strptime(yearmonthint, '%Y%m')
    print d
    date_string=dt.strftime(d,'%Y%m')
    yearmonthint = int(date_string)
    yearmonth_int.append(yearmonthint)

    r_sum = np.sum(monthlyrain, axis=0)
    r_mean_of_sum = MA.mean(r_sum)
    r_max, r_mean, r_min = MA.max(monthlyrain), MA.mean(monthlyrain), MA.min(monthlyrain)
    rainmax.append(r_max)
    rainmean.append(r_mean)
    rainmin.append(r_min)
    rainsum.append(r_mean_of_sum)
    print " state only:", yearmonthint, r_max, r_mean, r_min, r_mean_of_sum

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From scott.sinclair.za at gmail.com  Thu Nov 28 03:20:47 2013
From: scott.sinclair.za at gmail.com (Scott Sinclair)
Date: Thu, 28 Nov 2013 10:20:47 +0200
Subject: [Numpy-discussion] sum of array for masked area
In-Reply-To: 
References: 
Message-ID: 

On 28 November 2013 09:06, questions anon wrote:
> I have a separate text file for daily rainfall data that covers the whole
> country. I would like to calculate the monthly mean, min, max and the mean
> of the sum for one state.
>
> I can get the max, min and mean for the state, but the mean of the sum keeps
> giving me a result for the whole country rather than just the state, even

> def accumulate_month(year, month):
>     files = glob.glob(GLOBTEMPLATE.format(year=year, month=month))
>     monthlyrain = []
>     for ifile in files:
>         try:
>             f = np.genfromtxt(ifile, skip_header=6)
>         except:
>             print "ERROR with file:", ifile
>             errors.append(ifile)
>         f = np.flipud(f)
>         stateonly_f = np.ma.masked_array(f, mask=newmask.mask)  # this masks data to state
>         print "stateonly_f:", stateonly_f.max(), stateonly_f.mean(), stateonly_f.sum()
>         monthlyrain.append(stateonly_f)
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
At this point monthlyrain is a list of masked arrays

>     r_sum = np.sum(monthlyrain, axis=0)
      ^^^^^^^^^^^
Passing a list of masked arrays to np.sum returns an np.ndarray object
(*not* a masked array)

>     r_mean_of_sum = MA.mean(r_sum)

Therefore this call to MA.mean returns the mean of all values in the
ndarray r_sum.
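This is easy to reproduce with a small self-contained sketch (plain numpy, no GDAL or rainfall files needed; the variable names here are just illustrative):

```python
import numpy as np

# Two 2x2 masked arrays, each masking the element at (0, 1).
alist = [np.ma.masked_array(np.arange(4).reshape(2, 2),
                            mask=[[0, 1], [0, 0]]) for _ in range(2)]

# Summing the plain list: numpy first converts the list to an ordinary
# ndarray, so the masks are silently dropped.
plain = np.sum(alist, axis=0)
print(type(plain))    # an ordinary ndarray, no mask attached

# Converting to a masked array first stacks the masks, so the
# reduction (and any later mean) honours them.
stacked = np.ma.asarray(alist)
masked = np.sum(stacked, axis=0)
print(type(masked))   # a MaskedArray; (0, 1) stays masked
```

Any statistic computed from `plain` therefore includes the values that were supposed to be masked out, which is exactly the whole-country-instead-of-state symptom.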
To fix: convert your monthlyrain list to a 3D masked array before
calling np.sum(monthlyrain, axis=0). In this case np.sum will call the
masked array's .sum() method, which knows about the mask.

monthlyrain = np.ma.asarray(monthlyrain)
r_sum = np.sum(monthlyrain, axis=0)

Consider the following simplified example:

alist = []
for k in range(2):
    a = np.arange(4).reshape((2, 2))
    alist.append(np.ma.masked_array(a, mask=[[0, 1], [0, 0]]))

print(alist)
print(type(alist))

alist = np.ma.asarray(alist)
print(alist)
print(type(alist))

asum = np.sum(alist, axis=0)
print(asum)
print(type(asum))
print(asum.mean())

Cheers,
Scott

From josef.pktd at gmail.com  Thu Nov 28 10:41:03 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 28 Nov 2013 10:41:03 -0500
Subject: [Numpy-discussion] diff and cumsum arguments
Message-ID: 

np.diff has a keyword for the `n-th order discrete difference`, but cumsum
has different arguments and cannot integrate n times.

What's the best "inverse function" for np.diff with n > 1?

I briefly tried polynomial, but without success so far.
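One way to sketch such an inverse is to apply cumsum once per difference level, re-attaching one integration constant each time. The helper name `inv_diff` and the convention of passing the first value of each difference level are my own for illustration, not a numpy API:

```python
import numpy as np

def inv_diff(d, firsts):
    """Invert d = np.diff(a, n=len(firsts)), given the integration
    constants firsts[k] = np.diff(a, n=k)[0] for k = 0 .. n-1."""
    out = np.asarray(d, dtype=float)
    for v0 in reversed(firsts):
        # one cumsum per difference level, prepending that level's constant
        out = np.cumsum(np.r_[v0, out])
    return out

a = np.array([1.0, 3.0, 2.0, 7.0, 5.0])
firsts = [a[0], a[1] - a[0]]  # constants for n = 2
print(np.allclose(inv_diff(np.diff(a, n=2), firsts), a))  # True
```

For n = 2 this reduces to the nested-cumsum expression checked below; the loop just generalizes it to arbitrary n.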
Need two starting values to integrate for n=2:

>>> a = np.random.randn(10)
>>> a - np.cumsum(np.r_[a[0], np.cumsum(np.r_[a[1]-a[0], np.diff(a, n=2)])])
array([  0.00000000e+00,   0.00000000e+00,   1.11022302e-16,   2.22044605e-16,
         3.33066907e-16,   2.22044605e-16,   1.11022302e-16,   2.77555756e-16,
         5.55111512e-16,   8.88178420e-16])

>>> a - np.concatenate((a[:2], np.cumsum(np.cumsum(np.diff(a, n=2)))
...                     + a[0] + np.arange(2, len(a)) * (a[1] - a[0])))
array([  0.00000000e+00,   0.00000000e+00,   1.11022302e-16,   2.22044605e-16,
         3.33066907e-16,   2.22044605e-16,   1.11022302e-16,   2.77555756e-16,
         5.55111512e-16,   1.77635684e-15])

Josef

From jtaylor.debian at googlemail.com  Thu Nov 28 15:48:47 2013
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Thu, 28 Nov 2013 21:48:47 +0100
Subject: [Numpy-discussion] numpy datetime64 NaT string conversion bug & patch
In-Reply-To: 
References: 
Message-ID: <5297AC2F.4070401@googlemail.com>

On 27.11.2013 21:51, Charles G. Waldman wrote:
> If you convert an array of strings to datetime64s and 'NaT' (or one of
> its variants) appears in the string, all subsequent values are
> rendered as NaT:

Thanks, a little embarrassing that I didn't spot that when I fixed a
different bug in that function recently :/

Do you want to create a pull request with the fix? See
http://docs.scipy.org/doc/numpy/dev/gitwash/development_workflow.html

Please add a testcase too; numpy/core/tests/test_datetime.py is probably
the right place.

Feel free to say if you don't have time to do a PR.

From dg.gmane at thesamovar.net  Fri Nov 29 15:15:06 2013
From: dg.gmane at thesamovar.net (Dan Goodman)
Date: Fri, 29 Nov 2013 20:15:06 +0000 (UTC)
Subject: [Numpy-discussion] -ffast-math
Message-ID: 

Hi,

Is it possible to get access to versions of ufuncs like sin and cos but
compiled with the -ffast-math compiler switch?
I recently noticed that my weave.inline code was much faster for some fairly
simple operations than my pure numpy code, and realised after some fiddling
around that it was due to using this switch. The speed difference was
enormous in my test example. I checked the difference in accuracy, and over
the range of values I'm interested in the errors are not really significant.

If there's currently no way of using these faster versions of these
functions in numpy, maybe it would be worth adding as a feature for future
versions?

Thanks,
Dan Goodman

From pav at iki.fi  Fri Nov 29 16:02:32 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 29 Nov 2013 23:02:32 +0200
Subject: [Numpy-discussion] -ffast-math
In-Reply-To: 
References: 
Message-ID: 

On 29.11.2013 22:15, Dan Goodman wrote:
> Is it possible to get access to versions of ufuncs like sin and cos but
> compiled with the -ffast-math compiler switch?

You can recompile Numpy with -ffast-math in the OPT environment variable.
Caveat emptor.

-- 
Pauli Virtanen

From questions.anon at gmail.com  Fri Nov 29 17:08:11 2013
From: questions.anon at gmail.com (questions anon)
Date: Sat, 30 Nov 2013 09:08:11 +1100
Subject: [Numpy-discussion] sum of array for masked area
In-Reply-To: 
References: 
Message-ID: 

Thank you Scott for your prompt response. Your suggestion has fixed the
problem, and thank you for your clear explanation of how it works.
Thanks!!

On Thu, Nov 28, 2013 at 7:20 PM, Scott Sinclair wrote:

> On 28 November 2013 09:06, questions anon wrote:
> > I have a separate text file for daily rainfall data that covers the whole
> > country. I would like to calculate the monthly mean, min, max and the
> > mean of the sum for one state.
> >
> > I can get the max, min and mean for the state, but the mean of the sum
> > keeps giving me a result for the whole country rather than just the
> > state, even
>
> > def accumulate_month(year, month):
> >     files = glob.glob(GLOBTEMPLATE.format(year=year, month=month))
> >     monthlyrain = []
> >     for ifile in files:
> >         try:
> >             f = np.genfromtxt(ifile, skip_header=6)
> >         except:
> >             print "ERROR with file:", ifile
> >             errors.append(ifile)
> >         f = np.flipud(f)
> >         stateonly_f = np.ma.masked_array(f, mask=newmask.mask)  # this masks data to state
> >         print "stateonly_f:", stateonly_f.max(), stateonly_f.mean(), stateonly_f.sum()
> >         monthlyrain.append(stateonly_f)
>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> At this point monthlyrain is a list of masked arrays
>
> >     r_sum = np.sum(monthlyrain, axis=0)
>       ^^^^^^^^^^^
> Passing a list of masked arrays to np.sum returns an np.ndarray object
> (*not* a masked array)
>
> >     r_mean_of_sum = MA.mean(r_sum)
>
> Therefore this call to MA.mean returns the mean of all values in the
> ndarray r_sum.
>
> To fix: convert your monthlyrain list to a 3D masked array before
> calling np.sum(monthlyrain, axis=0). In this case np.sum will call the
> masked array's .sum() method which knows about the mask.
>
> monthlyrain = np.ma.asarray(monthlyrain)
> r_sum = np.sum(monthlyrain, axis=0)
>
> Consider the following simplified example:
>
> alist = []
> for k in range(2):
>     a = np.arange(4).reshape((2, 2))
>     alist.append(np.ma.masked_array(a, mask=[[0, 1], [0, 0]]))
>
> print(alist)
> print(type(alist))
>
> alist = np.ma.asarray(alist)
> print(alist)
> print(type(alist))
>
> asum = np.sum(alist, axis=0)
> print(asum)
> print(type(asum))
> print(asum.mean())
>
> Cheers,
> Scott
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jtaylor.debian at googlemail.com  Sat Nov 30 06:32:57 2013
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Sat, 30 Nov 2013 12:32:57 +0100
Subject: [Numpy-discussion] -ffast-math
In-Reply-To: 
References: 
Message-ID: <5299CCE9.90602@googlemail.com>

On 29.11.2013 21:15, Dan Goodman wrote:
> Hi,
>
> Is it possible to get access to versions of ufuncs like sin and cos but
> compiled with the -ffast-math compiler switch?
>
> I recently noticed that my weave.inline code was much faster for some fairly
> simple operations than my pure numpy code, and realised after some fiddling
> around that it was due to using this switch. The speed difference was
> enormous in my test example. I checked the difference in accuracy and over
> the range of values I'm interested in the errors are not really significant.
>

Can you show the code that is slow in numpy? Which version of gcc and libc
are you using? With gcc 4.8 the glibc 2.17 sin/cos is used even with
fast-math, so there should be no difference.

> If there's currently no way of using these faster versions of these
> functions in numpy, maybe it would be worth adding as a feature for future
> versions?
>

It might be useful for some purposes to add a sort of precision context
which allows using faster but less accurate functions; hypot is another
case where it would be useful. But it's probably a rather large change, and
the applications for it are limited in numpy.

E.g. the main advantage of ffast-math is for vectorization and complex
numbers. For the former, numpy cannot merge operations like numexpr does,
and the simple loops are already vectorized. For complex numbers, numpy
already implements them as if #pragma STDC CX_LIMITED_RANGE were enabled
(python does the same).
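The accuracy side of the trade-off Dan describes can be illustrated with a toy polynomial sine. To be clear, this is only a sketch of the kind of error a faster-but-less-accurate routine introduces on a limited input range; it is not what -ffast-math or glibc actually does:

```python
import numpy as np

def sin_poly(x):
    """7th-order Taylor sine in Horner form -- a stand-in for a 'fast
    but less accurate' routine, valid only near [-pi/2, pi/2]."""
    x2 = x * x
    return x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0 * (1.0 - x2 / 42.0)))

# Maximum error against the library sine over the restricted range.
x = np.linspace(-np.pi / 2, np.pi / 2, 10001)
err = np.max(np.abs(sin_poly(x) - np.sin(x)))
print(err)  # on the order of 1e-4: fine for some applications, not others
```

Whether an error of that size is "not really significant" depends entirely on the application, which is why such a precision context would have to be opt-in.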
From heng at cantab.net  Sat Nov 30 12:02:56 2013
From: heng at cantab.net (Henry Gomersall)
Date: Sat, 30 Nov 2013 17:02:56 +0000
Subject: [Numpy-discussion] array aligned on a different byte-boundary than reported by dtype.alignment (travis-ci)
Message-ID: <529A1A40.9020402@cantab.net>

I'm running some test code on travis-ci, which is currently failing but
passing locally. I've identified the problem: my code tests internally for
the alignment of an array being its "natural alignment", which I establish
by checking "data_pointer % test_array.dtype.alignment" (I actually do this
on the pointer in Cython, but the same is true if I check .ctypes.data).

The issue I'm having is that the array being created inside the travis-ci
environment is _not_ naturally aligned. I don't do anything funny in
creating the array - it comes from a .copy() of another array.

Specifically, the problem is that .dtype.alignment is reporting 32 on a
complex256 array, but the actual alignment is 16. My machine (running a
much more up-to-date Ubuntu installation) reports 16 for the
.dtype.alignment attribute on a similar array.

Is this something that can be expected in some situation? Is there an
explanation? Is this a bug?

Cheers,

Henry

p.s. The error is raised here:
https://github.com/hgomersall/pyFFTW/blob/travis-debug/pyfftw/pyfftw.pyx#L792
through
https://github.com/hgomersall/pyFFTW/blob/travis-debug/pyfftw/builders/_utils.py#L184
in which the problematic input array is simply created as in
https://github.com/hgomersall/pyFFTW/blob/travis-debug/test/test_pyfftw_builders.py#L435

With output: https://travis-ci.org/hgomersall/pyFFTW/jobs/14739340 (the last line)
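For anyone wanting to reproduce the check Henry describes without Cython, here is a sketch using the public `.ctypes.data` pointer. Note the assumptions: `complex128` is used because `complex256` does not exist on every platform, and the reported `.dtype.alignment` is platform-dependent:

```python
import numpy as np

def is_naturally_aligned(arr):
    """True if the array's data pointer is a multiple of the natural
    alignment numpy reports for its dtype -- the same test pyFFTW
    applies internally on the raw pointer."""
    return arr.ctypes.data % arr.dtype.alignment == 0

a = np.ones(16, dtype=np.complex128)
b = a.copy()  # Henry's failing array also came from a .copy()
print(a.dtype.alignment, is_naturally_aligned(a), is_naturally_aligned(b))
```

On common platforms the system allocator returns at-least-16-byte-aligned blocks, so this check passes for dtypes whose reported alignment is 16 or less; Henry's travis-ci failure came from a dtype that *reported* 32.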