From Jerome.Kieffer at esrf.fr Wed Apr 1 09:15:51 2015
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Wed, 1 Apr 2015 15:15:51 +0200
Subject: [SciPy-User] Windows 64bits support
Message-ID: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr>

Hi,

I am wondering why there are no "official" packages for scipy on 64-bit Windows. The same is true for NumPy, and even for Python itself, which systematically suggests the 32-bit version.

After the nice move from Microsoft to provide a compiler, is it still an issue?

Cheers,
-- Jérôme Kieffer Data analysis unit - ESRF

PS: I know the wonderful work of Christopher Gohlke, but his work is "unofficial"

From cournape at gmail.com Wed Apr 1 13:15:15 2015
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 1 Apr 2015 18:15:15 +0100
Subject: [SciPy-User] Windows 64bits support
In-Reply-To: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr>
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr>
Message-ID:

On Wed, Apr 1, 2015 at 2:15 PM, Jerome Kieffer wrote:

> Hi,
>
> I am wondering why there are no "official" packages for scipy on 64-bit
> Windows. The same is true for NumPy, and even for Python itself, which
> systematically suggests the 32-bit version.
>
> After the nice move from Microsoft to provide a compiler, is it still an
> issue?
>

Yes, because of Fortran. There is work in progress to build numpy/scipy with the mingw toolchain on 64-bit Windows.

David

>
> Cheers,
>
> --
> Jérôme Kieffer
> Data analysis unit - ESRF
>
> PS: I know the wonderful work of Christopher Gohlke, but his work is
> "unofficial"
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla.molden at gmail.com Wed Apr 1 13:19:25 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 1 Apr 2015 17:19:25 +0000 (UTC)
Subject: [SciPy-User] Windows 64bits support
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr>
Message-ID: <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org>

Jerome Kieffer wrote:

> I am wondering why there are no "official" packages for scipy on 64-bit
> Windows. The same is true for NumPy, and even for Python itself, which
> systematically suggests the 32-bit version.
>
> After the nice move from Microsoft to provide a compiler, is it still an issue?

We need a Fortran compiler too, which means we have to use MinGW-w64 (gcc, g++, and gfortran) or Intel C++ (or MSVC) in combination with Intel Fortran. g77 is no longer considered a viable option.

Carl Kleffner is working on a MinGW (and OpenBLAS) based toolchain for SciPy. We need some special tweaking, e.g. to ensure static linkage and correct stack alignment. It is getting ready for production use, but we have not used it yet.

Continuum (Anaconda), Gohlke, and Enthought use Microsoft and Intel compilers and MKL. If you need a binary installer for a full SciPy stack on Windows, get one of these three.

We could use Microsoft's MSVC compiler for Python 2.7, but we do not have access to Intel Fortran. And even if we did, the installer would be tainted with Intel's Fortran runtime, which has a commercial license. So a "free" installer from the SciPy project should be built with gfortran. There are also issues with the LAPACK library, as Microsoft does not provide one. On Apple and Linux distros a BLAS and LAPACK library is shipped with the operating system, so it's never a problem.
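As an aside, you can check which BLAS/LAPACK libraries a given NumPy build was linked against from within Python, e.g.:

import numpy as np
np.show_config()  # prints the BLAS/LAPACK build configuration of this install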
On Windows it is more difficult: MKL has a commercial license (though we have permission to use it, it will taint the binary installer), OpenBLAS has too many bugs (at least it did last time I checked), and ATLAS is slow and a PITA to build. The lack of a vendor-supplied BLAS and LAPACK library is one reason Windows is difficult to use for scientific computing. Anaconda and Enthought do not have to worry that MKL has a commercial license, as their products are commercial too.

Sturla

From joseph.slater at wright.edu Wed Apr 1 15:07:51 2015
From: joseph.slater at wright.edu (Joseph C Slater, PhD, PE)
Date: Wed, 1 Apr 2015 15:07:51 -0400
Subject: [SciPy-User] Array indexing question
Message-ID:

On http://wiki.scipy.org/NumPy_for_Matlab_Users, under Linear Algebra Equivalents, the following row is given:

a(1:5,:) a[0:5] or a[:5] or a[0:5,:] the first five rows of a

I'm quite confused: I thought that, since the first row is indexed as zero, the fifth would be indexed as 4. Indeed, a[5,5] provides the value in the 6th row and 6th column. However, it seems that 0:5 means 0, 1, 2, 3, 4. So, when used with a colon, the 5 no longer means the same value as when used without. Am I missing something? Why this peculiar behavior, and how does one avoid errors with this inconsistency? (What's the logic to help me understand why it works this way?)

Thank you for any guidance.
Joe

From Jerome.Kieffer at esrf.fr Wed Apr 1 15:17:24 2015
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Wed, 1 Apr 2015 21:17:24 +0200
Subject: [SciPy-User] Windows 64bits support
In-Reply-To: <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org>
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org>
Message-ID: <20150401211724.7a01ca11c019d20707353fca@esrf.fr>

On Wed, 1 Apr 2015 17:19:25 +0000 (UTC)
Sturla Molden wrote:

> Jerome Kieffer wrote:
>
> > I am wondering why there are no "official" packages for scipy on 64-bit
> > Windows. The same is true for NumPy, and even for Python itself, which
> > systematically suggests the 32-bit version.
> >
> > After the nice move from Microsoft to provide a compiler, is it still an issue?
>
> We need a Fortran compiler too, which means we have to use MinGW-w64 (gcc,
> g++, and gfortran) or Intel C++ (or MSVC) in combination with Intel
> Fortran. g77 is no longer considered a viable option.

Thanks David and Sturla for the precise answers. Windows-64 is one of the (main?) targets for our new product, and some of us are reluctant to depend on scipy if it is too complicated to install.

> Carl Kleffner is working on a MinGW (and OpenBLAS) based toolchain for
> SciPy. We need some special tweaking, e.g. to ensure static linkage and
> correct stack alignment. It is getting ready for production use, but we
> have not used it yet.

Keep us informed, it is an important point for us.

> Continuum (Anaconda), Gohlke, and Enthought use Microsoft and Intel
> compilers and MKL. If you need a binary installer for a full SciPy stack on
> Windows, get one of these three.

I usually develop on Linux, but our concern is how difficult it is to get software based on the "official" versions running (and to redistribute it).
Cheers,
-- Jérôme Kieffer Data analysis unit - ESRF

From sturla.molden at gmail.com Wed Apr 1 15:20:36 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 1 Apr 2015 19:20:36 +0000 (UTC)
Subject: [SciPy-User] Array indexing question
References:
Message-ID: <497527349449608263.974255sturla.molden-gmail.com@news.gmane.org>

"Joseph C Slater, PhD, PE" wrote:

> On http://wiki.scipy.org/NumPy_for_Matlab_Users, under Linear Algebra
> Equivalents, the following row is given:
> a(1:5,:) a[0:5] or a[:5] or a[0:5,:] the first five rows of a
>
> I'm quite confused: I thought that, since the first row is indexed as zero,
> the fifth would be indexed as 4. Indeed, a[5,5] provides the value in the 6th row
> and 6th column. However, it seems that 0:5 means 0, 1, 2, 3, 4. So, when
> used with a colon, the 5 no longer means the same value as when used
> without. Am I missing something? Why this peculiar behavior, and how does
> one avoid errors with this inconsistency? (What's the logic to help me
> understand why it works this way?)

Consider what range(5) does. Python lists also index in the same way. In C we have this too:

for(i=0; i<5; i++)

The logic is this: To get n elements starting from i, we take a[i:i+n]. We also have the nice symmetry a[:n] and a[n:]. BDFL Guido van Rossum decided that this is how Python objects should index. So that's what NumPy does too.

Sturla

From pav at iki.fi Wed Apr 1 15:38:22 2015
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 01 Apr 2015 22:38:22 +0300
Subject: [SciPy-User] Array indexing question
In-Reply-To: <497527349449608263.974255sturla.molden-gmail.com@news.gmane.org>
References: <497527349449608263.974255sturla.molden-gmail.com@news.gmane.org>
Message-ID:

01.04.2015, 22:20, Sturla Molden wrote:
[clip]
> BDFL Guido van Rossum decided
> that this is how Python objects should index. So that's what NumPy does
> too.

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html

From sturla.molden at gmail.com Wed Apr 1 15:57:09 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 1 Apr 2015 19:57:09 +0000 (UTC)
Subject: [SciPy-User] Windows 64bits support
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr>
Message-ID: <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org>

Jerome Kieffer wrote:

> I usually develop on Linux, but our concern is how difficult
> it is to get software based on the "official" versions running (and to
> redistribute it).

Redistribution is a major reason we are not using Intel Fortran and MKL. Even though the SciPy project has permission from Intel to distribute NumPy and SciPy built with their tools, it will produce licensing issues downstream if someone wants to redistribute the binaries. Robert Kern had some strong (but well informed) opinions on this last time it was discussed, so you can search for that thread.

Redistribution will for most parties be easier if we use a MinGW-w64 based toolchain and OpenBLAS. But a third party might also object to redistributing GNU libraries (e.g. libgfortran) but be OK with shipping Intel libraries, so there is no perfect answer to this.

If you need to redistribute SciPy binaries for Windows you can probably also contact Enthought or Continuum IO and ask to purchase a license to redistribute their software. (I don't know if they sell such licenses, but they might; nor do I know what they might charge for it.)
Getting Carl Kleffner's toolchain ready is limited by the fact that nobody is really working on it. But the SciPy 0.16 milestone is due in a year, it seems, so perhaps it will be ready then? And if you want to make sure it happens, you know what to do :)

Sturla

From jkhilmer at chemistry.montana.edu Wed Apr 1 16:07:53 2015
From: jkhilmer at chemistry.montana.edu (jkhilmer at chemistry.montana.edu)
Date: Wed, 1 Apr 2015 14:07:53 -0600
Subject: [SciPy-User] Array indexing question
In-Reply-To: <497527349449608263.974255sturla.molden-gmail.com@news.gmane.org>
References: <497527349449608263.974255sturla.molden-gmail.com@news.gmane.org>
Message-ID:

> > Indeed, a[5,5] provides the value in the 6th row
> > and 6th column. However, it seems that 0:5 means 0, 1, 2, 3, 4. So, when
> > used with a colon, the 5 no longer means the same value as when used
> > without.

> Consider what range(5) does. Python lists also index in the same way.
>

From https://docs.python.org/2/tutorial/introduction.html:

"One way to remember how slices work is to think of the indices as pointing *between* characters, with the left edge of the first character numbered 0. Then the right edge of the last character of a string of *n* characters has index *n*, for example:"

+---+---+---+---+---+---+
| P | y | t | h | o | n |
+---+---+---+---+---+---+
0   1   2   3   4   5   6
-6  -5  -4  -3  -2  -1

If you omit the colon, it's equivalent to a single character/item/row: a[n] = a[n:n+1]

Jonathan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Wed Apr 1 16:11:41 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 1 Apr 2015 13:11:41 -0700
Subject: [SciPy-User] Windows 64bits support
In-Reply-To: <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org>
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org>
Message-ID:

On Apr 1, 2015 12:57 PM, "Sturla Molden" wrote:

> Redistribution will for most parties be easier if we use a MinGW-w64 based
> toolchain and OpenBLAS. But a third party might also object to redistributing
> GNU libraries (e.g. libgfortran) but be OK with shipping Intel libraries,
> so there is no perfect answer to this.

While I suppose such objections might be raised out of pure prejudice, I'm not aware of any rational basis for them. The GCC runtime libraries (including gfortran) have a blanket license exception that says that you can redistribute gcc-compiled binaries under absolutely any license you wish: https://www.gnu.org/licenses/gcc-exception-3.1.html

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From klemm at phys.ethz.ch Wed Apr 1 16:44:23 2015
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Wed, 1 Apr 2015 22:44:23 +0200
Subject: [SciPy-User] Array indexing question
In-Reply-To:
References: <497527349449608263.974255sturla.molden-gmail.com@news.gmane.org>
Message-ID: <0A3B14C8-F5E2-46DC-A433-8EB84E91159D@phys.ethz.ch>

> On 01 Apr 2015, at 22:07, jkhilmer at chemistry.montana.edu wrote:
>
> > > Indeed, a[5,5] provides the value in the 6th row
> > > and 6th column. However, it seems that 0:5 means 0, 1, 2, 3, 4. So, when
> > > used with a colon, the 5 no longer means the same value as when used
> > > without.
>
> > Consider what range(5) does. Python lists also index in the same way.
> > > From https://docs.python.org/2/tutorial/introduction.html:
> "One way to remember how slices work is to think of the indices as pointing between characters, with the left edge of the first character numbered 0. Then the right edge of the last character of a string of n characters has index n, for example:"
> +---+---+---+---+---+---+
> | P | y | t | h | o | n |
> +---+---+---+---+---+---+
> 0   1   2   3   4   5   6
> -6  -5  -4  -3  -2  -1
>
> If you omit the colon, it's equivalent to a single character/item/row: a[n] = a[n:n+1]
> Jonathan
>

This is not entirely correct: the first form yields the element itself, the second a one-element sequence:

In [1]: a=[1,2,3]

In [2]: a[1]
Out[2]: 2

In [3]: a[1:2]
Out[3]: [2]

In [5]: import numpy as np

In [6]: a = np.array([1,2,3])

In [7]: a[1]
Out[7]: 2

In [8]: a[1:2]
Out[8]: array([2])

Hanno

From sturla.molden at gmail.com Wed Apr 1 17:12:21 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 1 Apr 2015 21:12:21 +0000 (UTC)
Subject: [SciPy-User] Windows 64bits support
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org>
Message-ID: <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org>

Nathaniel Smith wrote:

> While I suppose such objections might be raised out of pure prejudice, I'm
> not aware of any rational basis for them.

I didn't specify a reason. :-)

But I am not a lawyer, so I would not know.

Sturla

From yw5aj at virginia.edu Wed Apr 1 17:22:08 2015
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Wed, 1 Apr 2015 17:22:08 -0400
Subject: [SciPy-User] Windows 64bits support
In-Reply-To: <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org>
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org> <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org>
Message-ID:

Hi all,

Sorry about being naive on this topic -

I thought combining with Carl's mingw-w64 toolchain + openblas was almost ready for the next release, due to the merged PR 5614 (https://github.com/numpy/numpy/pull/5614).

Shawn

On Wed, Apr 1, 2015 at 5:12 PM, Sturla Molden wrote:
> Nathaniel Smith wrote:
>
>> While I suppose such objections might be raised out of pure prejudice, I'm
>> not aware of any rational basis for them.
>
> I didn't specify a reason. :-)
>
> But I am not a lawyer, so I would not know.
> > Sturla > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/ From sturla.molden at gmail.com Wed Apr 1 17:33:29 2015 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 1 Apr 2015 21:33:29 +0000 (UTC) Subject: [SciPy-User] Windows 64bits support References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org> <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org> Message-ID: <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org> Yuxiang Wang wrote: > Hi all, > > Sorry about being naive on this topic - > > I thought combining with Carl's mingw-w64 toolchain + openblas was > almost ready for the next release, due to the merged PR 5614 > (https://github.com/numpy/numpy/pull/5614). Unless we are going to use it in a maintenance release of 0.15, the next release would be 0.16 which is due 31st of March next year. Sturla From matthew.brett at gmail.com Wed Apr 1 17:35:47 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 1 Apr 2015 14:35:47 -0700 Subject: [SciPy-User] Windows 64bits support In-Reply-To: <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org> References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org> <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org> <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org> Message-ID: On Wed, Apr 1, 2015 at 2:33 PM, Sturla Molden wrote: > Yuxiang Wang wrote: >> Hi all, >> >> Sorry about being naive on this topic - >> >> I thought combining with Carl's mingw-w64 toolchain + openblas was >> almost ready for the next release, due to the merged PR 5614 >> (https://github.com/numpy/numpy/pull/5614). > > Unless we are going to use it in a maintenance release of 0.15, the next > release would be 0.16 which is due 31st of March next year. I don't think there is any reason to wait for a new release, we can just upload the wheels for the existing release, when they are ready... Matthew From yw5aj at virginia.edu Wed Apr 1 20:59:49 2015 From: yw5aj at virginia.edu (Yuxiang Wang) Date: Wed, 1 Apr 2015 20:59:49 -0400 Subject: [SciPy-User] Windows 64bits support In-Reply-To: <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org> References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org> <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org> <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org> Message-ID: Sturla, Ah I see... Thanks! 
So I was thinking about the same thing then :)

Shawn

-- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/

On Apr 1, 2015 5:33 PM, "Sturla Molden" wrote:

> Yuxiang Wang wrote:
> > Hi all,
> >
> > Sorry about being naive on this topic -
> >
> > I thought combining with Carl's mingw-w64 toolchain + openblas was
> > almost ready for the next release, due to the merged PR 5614
> > (https://github.com/numpy/numpy/pull/5614).
>
> Unless we are going to use it in a maintenance release of 0.15, the next
> release would be 0.16 which is due 31st of March next year.
>
> Sturla
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cgodshall at enthought.com Thu Apr 2 20:07:46 2015
From: cgodshall at enthought.com (Courtenay Godshall (Enthought))
Date: Thu, 2 Apr 2015 19:07:46 -0500
Subject: [SciPy-User] SciPy 2015 Conference Updates - LAST CALL for talks - 4/10 extension, registration open, keynotes announced, John Hunter Plotting Contest
Message-ID: <054801d06da2$37e57ae0$a7b070a0$@enthought.com>

---------------------------------------------------------------------------
**LAST CALL FOR SCIPY 2015 TALK AND POSTER SUBMISSIONS - EXTENSION TO 4/10**
---------------------------------------------------------------------------
SciPy 2015 will include 3 major topic tracks and 7 mini-symposia tracks. Submit a proposal on the SciPy 2015 website: http://scipy2015.scipy.org. If you have any questions or comments, feel free to contact us at: scipy-organizers at scipy.org. You can also follow @scipyconf on Twitter or sign up for the mailing list on the website for the latest updates!

Major topic tracks include:
- Scientific Computing in Python (General track)
- Python in Data Science
- Quantitative Finance and Computational Social Sciences

Mini-symposia will include the applications of Python in:
- Astronomy and astrophysics
- Computational life and medical sciences
- Engineering
- Geographic information systems (GIS)
- Geophysics
- Oceanography and meteorology
- Visualization, vision and imaging

--------------------------------------------------------------------------
**SCIPY 2015 REGISTRATION IS OPEN**
Please register ASAP to help us get a good headcount and open the conference to as many people as we can. PLUS, everyone who registers before May 15 will not only get early bird discounts, but will also be entered in a drawing for a free registration (via refund or extra)! Register on the website at http://scipy2015.scipy.org

--------------------------------------------------------------------------
**SCIPY 2015 KEYNOTE SPEAKERS ANNOUNCED**
Keynote speakers were just announced and include Wes McKinney, author of Pandas; Chris Wiggins, Chief Data Scientist for The New York Times; and Jake VanderPlas, director of research at the University of Washington's eScience Institute and core contributor to a number of scientific Python libraries including scikit-learn and AstroML.

--------------------------------------------------------------------------
**ENTER THE SCIPY JOHN HUNTER EXCELLENCE IN PLOTTING CONTEST - DUE 4/13**
In memory of John Hunter, creator of matplotlib, we are pleased to announce the Third Annual SciPy John Hunter Excellence in Plotting Competition.
This open competition aims to highlight the importance of quality plotting to scientific progress and showcase the capabilities of the current generation of plotting software. Participants are invited to submit scientific plots to be judged by a panel. The winning entries will be announced and displayed at the conference. John Hunter's family is graciously sponsoring cash prizes up to $1,000 for the winners. We look forward to exciting submissions that push the boundaries of plotting! See details here: http://scipy2015.scipy.org/ehome/115969/276538/ Entries must be submitted by April 13, 2015 via e-mail to plotting-contest at scipy.org -------------------------------------------------------------------------- **CALENDAR AND IMPORTANT DATES** --Sprint, Birds of a Feather, Financial Aid and Talk submissions OPEN NOW --Apr 10, 2015: Talk and Poster submission deadline --Apr 13, 2015: Plotting contest submissions due --Apr 15, 2015: Financial aid application deadline --Apr 17, 2015: Tutorial schedule announced --May 1, 2015: General conference speakers & schedule announced --May 15, 2015 (or 150 registrants): Early-bird registration ends --Jun 1, 2015: BoF submission deadline --Jul 6-7, 2015: SciPy 2015 Tutorials --Jul 8-10, 2015: SciPy 2015 General Conference --Jul 11-12, 2015: SciPy 2015 Sprints -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Apr 3 17:31:21 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 3 Apr 2015 23:31:21 +0200 Subject: [SciPy-User] Windows 64bits support In-Reply-To: <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org> References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org> <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org> <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org> Message-ID: On Wed, Apr 1, 2015 at 11:33 PM, Sturla Molden wrote: > Yuxiang Wang wrote: > > Hi all, > > > > Sorry about being naive on this topic - > > > > I thought combining with Carl's mingw-w64 toolchain + openblas was > > almost ready for the next release, due to the merged PR 5614 > > (https://github.com/numpy/numpy/pull/5614). > > Unless we are going to use it in a maintenance release of 0.15, the next > release would be 0.16 which is due 31st of March next year. > Eh, not really. The milestone was meant to be 31st of March 2015, i.e. right now (2016 was a typo maybe). But due to the delay in getting 0.15.0 out, it will be a little later. We should start preparing for it soon though. I just bumped the date to 31 May 2015. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From cmkleffner at gmail.com Sat Apr 4 16:16:27 2015
From: cmkleffner at gmail.com (Carl Kleffner)
Date: Sat, 4 Apr 2015 22:16:27 +0200
Subject: [SciPy-User] Windows 64bits support
In-Reply-To:
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org> <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org> <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org>
Message-ID:

Please understand that the mingw-w64 toolchain adapted for Python extensions https://bitbucket.org/carlkl/mingw-w64-for-python/downloads/mingwpy-2015-01-readme.html is still a work in progress, albeit usable. This toolchain (without OpenBLAS) has been included in the winpython distro since December 2014.

Working wheels (32- and 64-bit) for numpy and scipy are distributed at binstar: https://binstar.org/carlkl . On the TODO list is the reduction of numpy, scipy test failures and errors caused by mingw-w64-crt.

Cheers,

Carl

2015-04-03 23:31 GMT+02:00 Ralf Gommers :

>
> On Wed, Apr 1, 2015 at 11:33 PM, Sturla Molden
> wrote:
>
>> Yuxiang Wang wrote:
>> > Hi all,
>> >
>> > Sorry about being naive on this topic -
>> >
>> > I thought combining with Carl's mingw-w64 toolchain + openblas was
>> > almost ready for the next release, due to the merged PR 5614
>> > (https://github.com/numpy/numpy/pull/5614).
>>
>> Unless we are going to use it in a maintenance release of 0.15, the next
>> release would be 0.16 which is due 31st of March next year.
>>
>
> Eh, not really. The milestone was meant to be 31st of March 2015, i.e.
> right now (2016 was a typo maybe). But due to the delay in getting 0.15.0
> out, it will be a little later. We should start preparing for it soon
> though. I just bumped the date to 31 May 2015.
>
> Ralf
>
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yw5aj at virginia.edu Sat Apr 4 16:58:22 2015
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Sat, 4 Apr 2015 16:58:22 -0400
Subject: [SciPy-User] Windows 64bits support
In-Reply-To:
References: <20150401151551.d0aab7d2dcd9013a3ba6e137@esrf.fr> <607196642449599984.335371sturla.molden-gmail.com@news.gmane.org> <20150401211724.7a01ca11c019d20707353fca@esrf.fr> <746458673449608932.781040sturla.molden-gmail.com@news.gmane.org> <2012579473449614957.523949sturla.molden-gmail.com@news.gmane.org> <1780648014449616624.269595sturla.molden-gmail.com@news.gmane.org>
Message-ID:

Hi Carl,

First of all - thanks again for the wonderful work! Being curious, are you planning to also make a conda package on binstar for the toolchain itself?

Shawn

-- Yuxiang "Shawn" Wang Gerling Research Lab University of Virginia yw5aj at virginia.edu +1 (434) 284-0836 https://sites.google.com/a/virginia.edu/yw5aj/

On Apr 4, 2015 16:16, "Carl Kleffner" wrote:
> Please understand that the mingw-w64 toolchain adapted for Python
> extensions
> https://bitbucket.org/carlkl/mingw-w64-for-python/downloads/mingwpy-2015-01-readme.html
> is still a work in progress, albeit usable. This toolchain (without OpenBLAS)
> has been included in the winpython distro since December 2014.
>
> Working wheels (32- and 64-bit) for numpy and scipy are distributed at binstar:
> https://binstar.org/carlkl .
On the TODO list is the reduction of numpy, > scipy test failures and errors caused by mingw-w64-crt. > > Cheers, > > Carl > > > 2015-04-03 23:31 GMT+02:00 Ralf Gommers : > >> >> >> On Wed, Apr 1, 2015 at 11:33 PM, Sturla Molden >> wrote: >> >>> Yuxiang Wang wrote: >>> > Hi all, >>> > >>> > Sorry about being naive on this topic - >>> > >>> > I thought combining with Carl's mingw-w64 toolchain + openblas was >>> > almost ready for the next release, due to the merged PR 5614 >>> > (https://github.com/numpy/numpy/pull/5614). >>> >>> Unless we are going to use it in a maintenance release of 0.15, the next >>> release would be 0.16 which is due 31st of March next year. >>> >> >> Eh, not really. The milestone was meant to be 31st of March 2015, i.e. >> right now (2016 was a typo maybe). But due to the delay in getting 0.15.0 >> out, it will be a little later. We should start preparing for it soon >> though. I just bumped the date to 31 May 2015. >> >> Ralf >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Tue Apr 7 08:13:40 2015 From: toddrjen at gmail.com (Todd) Date: Tue, 7 Apr 2015 14:13:40 +0200 Subject: [SciPy-User] Python benchmarks project Message-ID: Although not strictly scipy-related, I was wondering if anyone was aware of a project containing basic python benchmarks. I don't mean a project with benchmark tools, but rather a project with measurements of common python operations. For example, it used to be the case that list comprehensions were faster than for loops. The sort of project I am looking for would have benchmarks showing how long those took to do the same thing. Does anyone know if such a thing exists? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Apr 8 12:22:24 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 8 Apr 2015 09:22:24 -0700 Subject: [SciPy-User] Python benchmarks project In-Reply-To: References: Message-ID: On Apr 8, 2015 12:16 PM, "Todd" wrote: > > Although not strictly scipy-related, I was wondering if anyone was aware of a project containing basic python benchmarks. I don't mean a project with benchmark tools, but rather a project with measurements of common python operations. > > For example, it used to be the case that list comprehensions were faster than for loops. The sort of project I am looking for would have benchmarks showing how long those took to do the same thing. There are some general small benchmarks distributed with cpython ("pystone"), and speed.pypy.org maintains a set of larger benchmarks, but for the sort of targeted micro benchmarks you're talking about I think most people just use IPython's %timeit magic on demand. E.g. %timeit [i for i in range(1000)] -n -------------- next part -------------- An HTML attachment was scrubbed... 
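URL:

As a footnote to the above: for a script-level version of the same comparison, the standard library's timeit module works too. A minimal sketch (absolute timings will of course vary by machine and interpreter):

import timeit

# time a plain for loop against the equivalent list comprehension
loop = timeit.timeit("r = []\nfor i in range(1000): r.append(i)", number=10000)
comp = timeit.timeit("[i for i in range(1000)]", number=10000)
print("for loop: %.3f s, comprehension: %.3f s" % (loop, comp))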
From tfmoraes at cti.gov.br Wed Apr 8 12:07:08 2015
From: tfmoraes at cti.gov.br (Thiago Franco de Moraes)
Date: Wed, 8 Apr 2015 13:07:08 -0300 (BRT)
Subject: [SciPy-User] Research position in the Brazilian Research Institute for Science and Neurotechnology – BRAINN
Message-ID: <1243550016.449174.1428509228271.JavaMail.zimbra@cti.gov.br>

Research position in the Brazilian Research Institute for Science and Neurotechnology – BRAINN

Postdoc researcher to work with software development for medical imaging

The Brazilian Research Institute for Neuroscience and Neurotechnology (BRAINN) (www.brainn.org.br) focuses on the investigation of basic mechanisms leading to epilepsy and stroke, and the injury mechanisms that follow disease onset and progression. This research has important applications related to prevention, diagnosis, treatment and rehabilitation and will serve as a model for better understanding normal and abnormal brain function. The BRAINN Institute is composed of 10 institutions from Brazil and abroad and hosted by the State University of Campinas (UNICAMP). Among the associated institutions is the Renato Archer Information Technology Center (CTI), which has a specialized team in open-source software development for medical imaging (www.cti.gov.br/invesalius) and 3D printing applications for healthcare. CTI is located close to UNICAMP in the city of Campinas, State of São Paulo, in a very technological region of Brazil, and is looking for a postdoc researcher to work on software development for medical imaging related to the image analysis, diagnosis and treatment of brain diseases. The postdoc position is for two years, with the possibility of being renewed for two more years.

Education
- PhD in computer science, computer engineering, mathematics, physics or related.

Requirements
- Digital image processing (Medical imaging)
- Computer graphics (basic)

Benefits
6.143,40 Reais per month free of taxes (about US$ 2.800,00);
15% technical reserve for conference participation and specific materials acquisition;

Interested
Send curriculum to: jorge.silva at cti.gov.br with subject "Postdoc position"

Application reviews will begin April 30, 2015 and continue until the position is filled.

From sturla.molden at gmail.com Wed Apr 8 14:04:08 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Wed, 08 Apr 2015 20:04:08 +0200
Subject: [SciPy-User] Python benchmarks project
In-Reply-To:
References:
Message-ID:

On 07/04/15 14:13, Todd wrote:
> For example, it used to be the case that list comprehensions were faster
> than for loops.

It still is the case, because the attribute lookups are done only once in a list comprehension. The builtin function map is faster than a for loop for the same reason. You can come a long way with just some common sense if you think about what your code does.

Sturla

From wenlei.xie at gmail.com Fri Apr 10 21:07:56 2015
From: wenlei.xie at gmail.com (Wenlei Xie)
Date: Fri, 10 Apr 2015 21:07:56 -0400
Subject: [SciPy-User] 1-norm estimation via Hager's algorithm in SCIPY
Message-ID:

Hi,

I would like to estimate the 1-norm of the matrix A*B, where A is n*k and B is k*n. n is around 1 million while k is around 100, so I cannot materialize it directly.

I am wondering if there is anything similar to condest in Matlab to estimate the matrix 1-norm via Hager's algorithm. Would scipy.sparse.linalg.onenormest be a good choice? (It seems to be working for sparse matrices?)

Thank you!
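(For reference, the matrix 1-norm being estimated here is the maximum absolute column sum, ||A*B||_1 = max_j sum_i |(A*B)_ij|; onenormest, like the Hager-style estimator behind Matlab's condest, estimates it from a few matrix-vector products without ever forming A*B.)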
Best, Wenlei -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Apr 11 08:27:06 2015 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 11 Apr 2015 15:27:06 +0300 Subject: [SciPy-User] 1-norm estimation via Hager's algorithm in SCIPY In-Reply-To: References: Message-ID: 11.04.2015, 04:07, Wenlei Xie kirjoitti: > I would like to estimate the 1-norm of matrix A*B, where A is n*k and B is > k*n. n is around 1 million while k is around 100. So I cannot materialize > it directly. > > I am wondering if there is anything similar to condest in Matlab to > estimate the matrix 1-norm via Hager's algorithm? Would > scipy.sparse.linalg.onenormest be a good choice? (It seems to be working > for Sparse matrices?) You can realize the matrix C=A*B as an abstract linear operator like this: from scipy.sparse.linalg import aslinearoperator, onenormest C = aslinearoperator(A) * aslinearoperator(B) print(onenormest(C)) The resulting C however has a transpose defined only in Scipy versions >= 0.15, so it won't be a valid input for onenormest in earlier versions. Unfortunately, it seems that one step of the current implementation of onenormest has quite large memory usage --- although it still scales linearly with matrix dimension, there's a too big constant in front, so it won't work OK for 100e6x100e6. From bhmerchant at gmail.com Sat Apr 11 14:35:04 2015 From: bhmerchant at gmail.com (Brian Merchant) Date: Sat, 11 Apr 2015 11:35:04 -0700 Subject: [SciPy-User] Julia and SciPy -- will they ever be married yet? Message-ID: In one of his blog posts titled "Why Python is the last language you'll have to learn", Jake Vanderplas mentioned that a potential merger/collaboration between Julia and SciPy seemed to be just on the horizon based on the mood of this discussion on the julia-dev boards. It has been a long time since 2012 now, and I wonder, what is the status of all of that happy-talk? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Apr 12 04:21:50 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 12 Apr 2015 10:21:50 +0200 Subject: [SciPy-User] Julia and SciPy -- will they ever be married yet? In-Reply-To: References: Message-ID: On Sat, Apr 11, 2015 at 8:35 PM, Brian Merchant wrote: > In one of his blog posts > titled > "Why Python is the last language you'll have to learn", Jake Vanderplas > mentioned that a potential merger/collaboration between Julia and SciPy > seemed to be just on the horizon based on the mood of this discussion > > on the julia-dev boards. > > It has been a long time since 2012 now, and I wonder, what is the status > of all of that happy-talk? > Julia-Python interaction is in pretty good shape I think. Some links to get you started: https://github.com/JuliaLang/IJulia.jl https://github.com/stevengj/PyCall.jl https://github.com/stevengj/PyPlot.jl http://blog.leahhanson.us/julia-calling-python-calling-julia.html And I recommend watching this EuroSciPy'14 keynote "Crossing Language Barriers with Julia, SciPy, IPython" by Steven G. Johnson: https://www.youtube.com/watch?v=jhlVHoeB05A Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From clancyr at gmail.com Mon Apr 13 00:39:16 2015 From: clancyr at gmail.com (Clancy Rowley) Date: Mon, 13 Apr 2015 00:39:16 -0400 Subject: [SciPy-User] Non-constant timesteps in scipy.signal.lsim Message-ID: <719DFE8C-9CE9-4F0C-B598-3126DB41C810@gmail.com> The current behavior of scipy.signal.lsim is that it accepts input signals with non-constant timesteps. We are considering a non-backward-compatible change in implementation that would break this, and require constant timesteps: https://github.com/scipy/scipy/pull/4675 Does anybody rely on the current behavior, and if so, could you work around such a change in behavior? Some background: - The current implementation of scipy.signal.lsim has a number of problems. For instance, for a linear system with a pole at the origin, it throws an exception; for systems with repeated poles, it gives incorrect results. - There is a workaround, a routine scipy.signal.lsim2, that fixes these problems (and also accepts non-uniform timesteps), but it is typically hundreds or even thousands of times slower than the (broken) implementation in lsim. (It calls a generic ODE integrator instead of a specialized solver for linear systems.) - An efficient implementation of lsim requires constant time steps. - Matlab's version of lsim also requires constant time steps. ===== Clancy Rowley Professor, Mechanical and Aerospace Engineering Affiliated faculty, Program in Applied and Computational Math Princeton University http://www.princeton.edu/~cwrowley From wenlei.xie at gmail.com Fri Apr 17 23:47:30 2015 From: wenlei.xie at gmail.com (Wenlei Xie) Date: Fri, 17 Apr 2015 23:47:30 -0400 Subject: [SciPy-User] Number of input partitions Message-ID: Hi, I am wondering the mechanism that determines the number of partitions created by SparkContext.sequenceFile ? For example, although my file has only 4 splits, Spark would create 16 partitions for it. Is it determined by the file size? Is there any way to control it? (Looks like I can only tune minPartitions but not maxPartitions) Thank you! Best, Wenlei -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenlei.xie at gmail.com Sat Apr 18 00:51:46 2015 From: wenlei.xie at gmail.com (Wenlei Xie) Date: Sat, 18 Apr 2015 00:51:46 -0400 Subject: [SciPy-User] Number of input partitions In-Reply-To: References: Message-ID: Sorry typed the wrong address. Sorry for the spam... On Fri, Apr 17, 2015 at 11:47 PM, Wenlei Xie wrote: > Hi, > > I am wondering the mechanism that determines the number of partitions > created by SparkContext.sequenceFile ? > > For example, although my file has only 4 splits, Spark would create 16 > partitions for it. Is it determined by the file size? Is there any way to > control it? (Looks like I can only tune minPartitions but not maxPartitions) > > Thank you! > > Best, > Wenlei > -- Wenlei Xie (???) Ph.D. Candidate Department of Computer Science 456 Gates Hall, Cornell University Ithaca, NY 14853, USA Email: wenlei.xie at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy at jeremysanders.net Sat Apr 18 11:22:24 2015 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Sat, 18 Apr 2015 17:22:24 +0200 Subject: [SciPy-User] ANN: Veusz 1.23 Message-ID: <553276B0.80706@jeremysanders.net> I'm pleased to announce version 1.23 of the Veusz GUI scientific plotting package, which incorporates a object-oriented python plotting interface. Veusz is written in Python, based on PyQt and Numpy. 
For release notes, please see below. Jeremy -------------- next part -------------- Veusz 1.23 ---------- http://home.gna.org/veusz/ Veusz is a scientific plotting package. It is designed to produce publication-ready Postscript, PDF or SVG output. Graphs are built-up by combining plotting widgets. The user interface aims to be simple, consistent and powerful. Veusz provides GUI, Python module, command line, scripting, DBUS and SAMP interfaces to its plotting facilities. It also allows for manipulation and editing of datasets. Data can be captured from external sources such as Internet sockets or other programs. Changes in 1.23: * Add new export dialog box which can export multiple pages and modify the export options * Add new dataset filtering dialog * Add cubehelix() functional colormap * Add -stepN suffix for colormaps to make arbitrary numbers of steps * Fix incorrect colors in log images and log color scales * Fix unsafe commands not being run Minor changes * Fix incorrect use of None in (x,...) pattern * Catch crash if plotting nan/inf value in log space * Fix getData in dataset plugin for dimensions=2 * Catch error in too large float to date time conversion * Catch disappeared file during import * Index error fixed in pickable * Catch error in data edit dialog if 2d dataset size changes * If root widget is selected with others, do not error on hide * Fix undo for dataset histogram with a single output dataset * Fix error resizing ellipse with a tuple width, height or position setting * Only use finite values in histogram * Rewrite Line/FillSet setting controls for internal consistency and to fix new style extended fills * Do not crash with log date-time axes * Also ignore non-finite values when fitting with minuit * Avoid syntax error with invalid colormap * Updates to setup.py and desktop files * Recreate dataset now works if dialog hasn't been opened already * Restore dock layout when using Python3 * Fix undo after loading stylesheet/custom definitions * Support unicode example filenames * Clip bezier lines to avoid problems with log axes Features of package: Plotting features: * X-Y plots (with errorbars) * Line and function plots * Contour plots * Images (with colour mappings and colorbars) * Stepped plots (for histograms) * Bar graphs * Vector field plots * Box plots * Polar plots * Ternary plots * Plotting dates * Fitting functions to data * Stacked plots and arrays of plots * Nested plots * Plot keys * Plot labels * Shapes and arrows on plots * LaTeX-like formatting for text * Multiple axes * Axes with steps in axis scale (broken axes) * Axis scales using functional forms * Plotting functions of datasets Input and output: * EPS/PDF/PNG/SVG/EMF export * Dataset creation/manipulation * Embed Veusz within other programs * Text, HDF5, CSV, FITS, NPY/NPZ, QDP, binary and user-plugin importing * Data can be captured from external sources Extending: * Use as a Python module * User defined functions, constants and can import external Python functions * Plugin interface to allow user to write or load code to - import data using new formats - make new datasets, optionally linked to existing datasets - arbitrarily manipulate the document * Scripting interface * Control with DBUS and SAMP Other features: * Data filtering and manipulation * Data picker * Interactive tutorial * Multithreaded rendering Requirements for source install: Python 2.x (2.6 or greater required) or 3.x (3.3 or greater required) http://www.python.org/ Qt >= 4.6 (free edition) http://www.trolltech.com/products/qt/ 
PyQt >= 4.5 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/software/pyqt/ http://www.riverbankcomputing.co.uk/software/sip/ numpy >= 1.0 http://numpy.scipy.org/ Optional requirements: h5py (optional for HDF5 support) http://www.h5py.org/ astropy >= 0.2 or PyFITS >= 1.1 (optional for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits http://www.astropy.org/ pyemf >= 2.0.0 (optional for EMF export) http://pyemf.sourceforge.net/ PyMinuit >= 1.1.2 (optional improved fitting) http://code.google.com/p/pyminuit/ dbus-python, for dbus interface http://dbus.freedesktop.org/doc/dbus-python/ astropy (optional for VO table import) http://www.astropy.org/ SAMPy or astropy >= 0.4 (optional for SAMP support) http://pypi.python.org/pypi/sampy/ Veusz is Copyright (C) 2003-2015 Jeremy Sanders and contributors. It is licensed under the GPL (version 2 or greater). For documentation on using Veusz, see the "Documents" directory. The manual is in PDF, HTML and text format (generated from docbook). The examples are also useful documentation. Please also see and contribute to the Veusz wiki: https://github.com/jeremysanders/veusz/wiki If you enjoy using Veusz, we would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the Git repository at https://github.com/jeremysanders/veusz.git. From jr at sun.ac.za Tue Apr 21 10:50:15 2015 From: jr at sun.ac.za (Johann Rohwer) Date: Tue, 21 Apr 2015 16:50:15 +0200 Subject: [SciPy-User] wiki.scipy.org down? Message-ID: <4995460.bWDB3kDpak@bc433789> Is the wiki down? I'm getting connection reset errors. Specifically I can't access the Cookbook. Johann The integrity and confidentiality of this email is governed by these terms / Hierdie terme bepaal die integriteit en vertroulikheid van hierdie epos. http://www.sun.ac.za/emaildisclaimer From jmsachs at gmail.com Tue Apr 21 11:27:02 2015 From: jmsachs at gmail.com (Jason Sachs) Date: Tue, 21 Apr 2015 08:27:02 -0700 Subject: [SciPy-User] lossless 1-D signal compression Message-ID: Are there any fast lossless 1-D signal compression algorithms out there which do a better job than, say, zlib for dealing with signals that are either a sequence of integers or floating-point numbers with common spectral properties? (usually band-limited) I need to compress a bunch of data, and uncompress it in Python, and it seems like the general-purpose data compression algorithms might be missing opportunities. From daniele at grinta.net Tue Apr 21 11:33:00 2015 From: daniele at grinta.net (Daniele Nicolodi) Date: Tue, 21 Apr 2015 09:33:00 -0600 Subject: [SciPy-User] lossless 1-D signal compression In-Reply-To: References: Message-ID: <55366DAC.7010509@grinta.net> On 21/04/15 09:27, Jason Sachs wrote: > Are there any fast lossless 1-D signal compression algorithms out > there which do a better job than, say, zlib for dealing with signals > that are either a sequence of integers or floating-point numbers with > common spectral properties? (usually band-limited) > > I need to compress a bunch of data, and uncompress it in Python, and > it seems like the general-purpose data compression algorithms might be > missing opportunities. Blosc is designed for quite this exact use case: http://www.blosc.org/ and has nice python bindings. 
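A minimal round-trip sketch with the python-blosc bindings (untested here, and the compression ratio will depend heavily on the data):

import numpy as np
import blosc

x = np.sin(np.linspace(0, 100.0, 1000000))  # smooth, roughly band-limited signal
packed = blosc.pack_array(x)                # lossless; the shuffle filter helps numeric data
y = blosc.unpack_array(packed)              # exact round trip
assert (x == y).all()
print("%d -> %d bytes" % (x.nbytes, len(packed)))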
Cheers, Daniele From takowl at gmail.com Tue Apr 21 14:31:00 2015 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 21 Apr 2015 11:31:00 -0700 Subject: [SciPy-User] wiki.scipy.org down? In-Reply-To: <4995460.bWDB3kDpak@bc433789> References: <4995460.bWDB3kDpak@bc433789> Message-ID: On 21 April 2015 at 07:50, Johann Rohwer wrote: > Is the wiki down? I'm getting connection reset errors. Specifically I can't > access the Cookbook. > That server periodically goes down - I ping Enthought and it's back up now. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Tue Apr 21 15:02:02 2015 From: rob.clewley at gmail.com (Rob Clewley) Date: Tue, 21 Apr 2015 15:02:02 -0400 Subject: [SciPy-User] Is there existing code to log-with-bells-on for runtime algorithm diagnostics? Message-ID: Hi, I'm in need of a system for logging the step-wise results and diagnostic metadata about a python function implementation of an algorithm that I'm developing. The specific algorithm is not of great consequence except that it's for scientific computing and may produce large (e.g., '00s or maybe '000s, but not "big data" scale) amounts of intermediate numerical data that can be complex to understand when debugging its progress. In fact, I'm trying to build a general purpose tool for exploring the inner workings of numerical algorithms for teaching and learning purposes, e.g. for graduate student training or for figuring out parameter choices in difficult applications. I want to be able to insert commands inside of the loops that log certain variable states, completed stages of the algorithm (assume it's hierarchical), text warnings, and possibly even 'pointers' to graphical output objects of some of the intermediate data (e.g. matplotlib object handles for lines, points). Then I can trace the work done afterwards or step through the process in an IDE debugger and make interactive calls to access recent steps, plot certain relationships in the current state, etc. The basic logger's "levels" of output don't really apply here. I at least want categories, if not hierarchical sub-categories. I don't think the built-in logger is sophisticated enough for this, being a flat record of freeform text AFAIU, but the API looks appealing. I'm considering an in-memory sqlite DB to store structured records at any logged step, and an accompanying dictionary to store references to any python object metadata, keyed by a unique ID in the DB log. Then I'd write an API for it that resembles the logger's. It's not super hard for me to write my own thing here, but I'm wondering if anyone has come across any existing solutions in this vein, or has any advice before I go further in designing a solution? I can't really believe that no-one has attempted this before, but it's been really hard to find any existing work through online search. Thanks, Rob From takowl at gmail.com Tue Apr 21 15:11:07 2015 From: takowl at gmail.com (Thomas Kluyver) Date: Tue, 21 Apr 2015 12:11:07 -0700 Subject: [SciPy-User] Is there existing code to log-with-bells-on for runtime algorithm diagnostics? In-Reply-To: References: Message-ID: On 21 April 2015 at 12:02, Rob Clewley wrote: > The basic logger's "levels" > of output don't really apply here. I at least want categories, if not > hierarchical sub-categories. > Python's built-in logging has a hierarchy of loggers according to the name - you can set up log handling for the logger named 'foo', and things from the logger 'foo.bar' will go to that. 
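For instance, a minimal sketch:

import logging

logging.basicConfig(level=logging.DEBUG)       # handler attached at the root
algo = logging.getLogger('algo')               # parent category
step = logging.getLogger('algo.newton')        # sub-category, propagates upwards
step.debug('step %d: residual=%g', 3, 1.5e-8)  # handled via the 'algo'/root hierarchy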
The convention is to instantiate each logger with a module name, so you can configure logging by package, but I think you could use it with any hierarchy. It is text-only, as far as I know, but it might save you some work. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Apr 21 16:22:34 2015 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 21 Apr 2015 21:22:34 +0100 Subject: [SciPy-User] Is there existing code to log-with-bells-on for runtime algorithm diagnostics? In-Reply-To: References: Message-ID: On Tue, Apr 21, 2015 at 8:02 PM, Rob Clewley wrote: > > Hi, > > I'm in need of a system for logging the step-wise results and > diagnostic metadata about a python function implementation of an > algorithm that I'm developing. The specific algorithm is not of great > consequence except that it's for scientific computing and may produce > large (e.g., '00s or maybe '000s, but not "big data" scale) amounts of > intermediate numerical data that can be complex to understand when > debugging its progress. > > In fact, I'm trying to build a general purpose tool for exploring the > inner workings of numerical algorithms for teaching and learning > purposes, e.g. for graduate student training or for figuring out > parameter choices in difficult applications. The term you want to search for is "structured logging". http://www.structlog.org/en/stable/ http://eliot.readthedocs.org/en/stable/ https://twiggy.readthedocs.org/en/latest/logging.html#structured-logging http://netlogger.lbl.gov/ -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob.clewley at gmail.com Tue Apr 21 16:46:34 2015 From: rob.clewley at gmail.com (Rob Clewley) Date: Tue, 21 Apr 2015 16:46:34 -0400 Subject: [SciPy-User] Is there existing code to log-with-bells-on for runtime algorithm diagnostics? In-Reply-To: References: Message-ID: All of these ideas and links are very helpful, thank you! -Rob From jr at sun.ac.za Wed Apr 22 03:34:39 2015 From: jr at sun.ac.za (Johann Rohwer) Date: Wed, 22 Apr 2015 09:34:39 +0200 Subject: [SciPy-User] wiki.scipy.org down? In-Reply-To: References: <4995460.bWDB3kDpak@bc433789> Message-ID: <2693302.lExE4sgxDs@bc433789> On Tuesday 21 April 2015 11:31:00 Thomas Kluyver wrote: > On 21 April 2015 at 07:50, Johann Rohwer wrote: > > Is the wiki down? I'm getting connection reset errors. Specifically I > > can't > > access the Cookbook. > > That server periodically goes down - I ping Enthought and it's back up now. > > Thomas I'm still getting connection reset errors, albeit after trying to connect to the server for a long time (Waiting for wiki.scipy.org...). This is at 9:30 am (UTC+2). Johann The integrity and confidentiality of this email is governed by these terms / Hierdie terme bepaal die integriteit en vertroulikheid van hierdie epos. http://www.sun.ac.za/emaildisclaimer From matthew.brett at gmail.com Wed Apr 22 13:29:30 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 22 Apr 2015 10:29:30 -0700 Subject: [SciPy-User] wiki.scipy.org down? In-Reply-To: <2693302.lExE4sgxDs@bc433789> References: <4995460.bWDB3kDpak@bc433789> <2693302.lExE4sgxDs@bc433789> Message-ID: Hi, On Wed, Apr 22, 2015 at 12:34 AM, Johann Rohwer wrote: > On Tuesday 21 April 2015 11:31:00 Thomas Kluyver wrote: >> On 21 April 2015 at 07:50, Johann Rohwer wrote: >> > Is the wiki down? I'm getting connection reset errors. Specifically I >> > can't >> > access the Cookbook. 
>> That server periodically goes down - I've pinged Enthought and it's back
>> up now.
>>
>> Thomas
>
> I'm still getting connection reset errors, albeit after trying to connect
> to the server for a long time (Waiting for wiki.scipy.org...). This is at
> 9:30 am (UTC+2).

Working for me at 17.30 UTC.

Matthew

From jr at sun.ac.za Wed Apr 22 15:04:03 2015
From: jr at sun.ac.za (Johann Rohwer)
Date: Wed, 22 Apr 2015 21:04:03 +0200
Subject: [SciPy-User] wiki.scipy.org down?
References: <4995460.bWDB3kDpak@bc433789> <2693302.lExE4sgxDs@bc433789>
Message-ID: <5537F0A3.7020608@sun.ac.za>

On 22/04/2015 19:29, Matthew Brett wrote:
> Hi,
>
> On Wed, Apr 22, 2015 at 12:34 AM, Johann Rohwer wrote:
>> On Tuesday 21 April 2015 11:31:00 Thomas Kluyver wrote:
>>> On 21 April 2015 at 07:50, Johann Rohwer wrote:
>>>> Is the wiki down? I'm getting connection reset errors. Specifically I
>>>> can't
>>>> access the Cookbook.
>>> That server periodically goes down - I've pinged Enthought and it's back
>>> up now.
>>>
>>> Thomas
>> I'm still getting connection reset errors, albeit after trying to connect
>> to the server for a long time (Waiting for wiki.scipy.org...). This is at
>> 9:30 am (UTC+2).
> Working for me at 17.30 UTC.
>
> Matthew

Now working here as well (19:00 UTC).

Johann

From agomez26 at asu.edu Thu Apr 23 17:08:34 2015
From: agomez26 at asu.edu (Andres Gomez-Lievano)
Date: Thu, 23 Apr 2015 21:08:34 +0000 (UTC)
Subject: [SciPy-User] Crashing

Hi,

I am reaching out for help. First, the mathematical problem I am trying to
solve: Suppose there is a rectangular matrix

    M = [[m11, m12, m13, m14],
         [m21, m22, m23, m24],
         [m31, m32, m33, m34]],

from which I only know the totals (i.e., the sum of elements) by row and
column. Hence, suppose my data is:

    tot_of_rows = [80.0, 230.0, 132.0]
    tot_of_cols = [74.0, 200.0, 91.0, 77.0]

Problem: Infer the elements of the matrix using the method of Maximum
Entropy. (See Cho, W. and Judge, G. (2008), "Recovering vote choice from
incomplete data". Journal of Data Science 6, 155-171.)

Now, my current code that, in my mind, should solve this problem is (see
also http://nbviewer.ipython.org/gist/anonymous/eee0f44d3de1a09570b0):

# ----------------- beginning of python code ----------------------
# Importing libraries
import numpy as np
from scipy.optimize import minimize

# Defining functions
def Entropy(p_vec):
    # Function receives a vector of conditional probabilities, that come
    # from flattening a matrix of conditional probabilities.
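    # Note that np.log is -inf at 0, so this objective is only finite for
    # strictly positive probabilities.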
    return -1.0 * np.dot(p_vec, np.log(p_vec))

def vec2mat(vec, nrows, ncols):
    # Re-generating the matrix from the vector
    mat = np.array([ [vec[j + i*ncols] for j in range(ncols)]
                     for i in range(nrows) ])
    return mat

def mat2vec(mat):
    # Flattening the matrix into a vector
    nrows = int(np.array(mat).shape[0])
    ncols = int(np.array(mat).shape[1])
    vec = [mat[i, j] for i in range(nrows) for j in range(ncols)]
    return np.array(vec)

# Functions for the Constraints:
# Probabilities sum up to one
# Probability: P[p,c] = "conditional probability of country c exporting p"
# Xcp[c,p] = P[p,c]*Xc[c]
# The problem is to maximize Entropy(P)
# Subject to:
# 1) sum_p P[p,c] - 1 = 0, for all c
# 2) sum_c P[p,c]*Xc[c] - Xp[p] = 0, for all p
# 3) 0 <= P[p,c] <= 1, for all c, p

# for each c
def Constraint_Norm(p_vec, Xc_vec, Xp_vec, c):
    lenvec_p = len(Xp_vec)
    lenvec_c = len(Xc_vec)
    # For clarity, reconstruct the matrix from p_vec
    p_mat = vec2mat(p_vec, lenvec_p, lenvec_c)
    return( np.sum([ p_mat[p,c] for p in range(lenvec_p) ]) - 1 )

# About the totals
# for each p
def Constraint_Mean(p_vec, Xc_vec, Xp_vec, p):
    lenvec_p = len(Xp_vec)
    lenvec_c = len(Xc_vec)
    # For clarity, reconstruct the matrix from p_vec
    p_mat = vec2mat(p_vec, lenvec_p, lenvec_c)
    return( np.sum([ p_mat[p,c]*Xc_vec[c] for c in range(lenvec_c) ])
            - Xp_vec[p] )

# Initializing the parameters and data of the problem
Xc = np.array([80.0, 230.0, 132.0])
Xp = np.array([74.0, 200.0, 91.0, 77.0])
size_c = len(Xc)
size_p = len(Xp)

# Initializing the tuple of constraints
cons = ()
# Including constraint 1
cons += tuple( ({"type": 'eq',
                 'fun': lambda x: Constraint_Norm(x, Xc, Xp, c)}
                for c in range(size_c)) )
# Including constraint 2
cons += tuple( ({"type": 'eq',
                 'fun': lambda x: Constraint_Mean(x, Xc, Xp, p)}
                for p in range(size_p)) )

# Generating the bounds for each probability
zeroonebounds = tuple((0,1) for c in range(size_c) for p in range(size_p))

# Generating the initial vector to start the 'minimize' algorithm
init_vec = np.array([ 1.0/size_p + (np.random.random()-1)/10
                      for p in range(size_p) for c in range(size_c) ])

#####################################
# Finally, running the algorithm
result = minimize(lambda x: -1.0 * Entropy(x), init_vec,
                  bounds=zeroonebounds, constraints=cons,
                  options={'disp': True})

# Display results
res_mat = vec2mat(result.x, size_p, size_c)
print res_mat
# ----------------- end of python code ----------------------

This throws the following error: "Singular matrix C in LSQ subproblem (Exit
mode 6)". Why? I don't see anything that would make a matrix in this context
non-invertible. Any help is highly appreciated.

From paul.blelloch at ata-e.com Thu Apr 23 17:25:34 2015
From: paul.blelloch at ata-e.com (Paul Blelloch)
Date: Thu, 23 Apr 2015 14:25:34 -0700
Subject: [SciPy-User] Decimate and filtfilt

I was using the scipy.signal.decimate function and noticed that it seemed to
have some unusual edge effects using the default IIR Chebyshev filter. In
doing a little research I found a message from 2011 saying that decimate
only filtered in one direction, rather than using the filtfilt function to
filter in both directions in order to get rid of phase shifts. There was
also a message that filtfilt had a bug associated with the way that it
handled edge effects. I was wondering if anything had changed since 2011.
Does decimate still only filter in one direction, and are there issues with
filtfilt?
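In case it helps, the call I'm making looks roughly like this (a
stripped-down sketch, not my actual data):

    import numpy as np
    from scipy import signal

    x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
    y_iir = signal.decimate(x, 4)                # default Chebyshev IIR
    y_fir = signal.decimate(x, 4, ftype='fir')   # FIR variant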
Using the default FIR filter (ftype='fir') does seem to correct the issue,
but that's a more complex and expensive filter, so I was wondering if the
default IIR filter was working correctly.

I'm comparing against the Matlab decimate function, which does use filtfilt
and seems to handle the edge effects correctly.

THANKS, Paul Blelloch

From servant.mathieu at gmail.com Fri Apr 24 04:45:13 2015
From: servant.mathieu at gmail.com (Servant Mathieu)
Date: Fri, 24 Apr 2015 10:45:13 +0200
Subject: [SciPy-User] update scipy to v.0.15 from spyder on windows seven 64 bits

Dear community,

I have the latest python (x,y) distribution (2.7.9.0), which incorporates
spyder 2.3.2-14 and scipy 0.14.0-7. Unfortunately, I need scipy 0.15 to use
the new scipy.optimize.differential_evolution() function. How should I
proceed to (easily) update scipy from my spyder?

Best,
Mathieu

From davidmenhur at gmail.com Fri Apr 24 10:57:55 2015
From: davidmenhur at gmail.com (Daπid)
Date: Fri, 24 Apr 2015 16:57:55 +0200
Subject: [SciPy-User] Crashing

On 23 April 2015 at 23:08, Andres Gomez-Lievano wrote:
>
> # Initializing the parameters and data of the problem
> Xc = np.array([80.0, 230.0, 132.0])
> Xp = np.array([74.0, 200.0, 91.0, 77.0])
> size_c = len(Xc)
> size_p = len(Xp)

Your problem is impossible. You have a 4 by 3 matrix of elements between 0
and 1, and are trying to get sums of more than 200. Try first with a problem
that has an existing solution:

_mat = np.random.random((4,3))
Xc = _mat.sum(axis=0)
Xp = _mat.sum(axis=1)

Also, Numpy offers functions like reshape that can replace your vec2mat.

Hard constraints are something that minimisation algorithms don't like. It
is usually better to add them as a penalty to your target function. See
here:

import numpy as np
from scipy.optimize import minimize

# Defining functions
def entropy(p_vec):
    # Log misbehaves if there are negative values, so just take them away.
    p_vec = np.abs(p_vec)
    return -1.0 * np.dot(p_vec, np.log(p_vec))

def const_rows(p_vec, rows, shape):
    mat = p_vec.reshape(*shape)
    diff = mat.sum(axis=0) - rows
    return diff.dot(diff)

def const_cols(p_vec, cols, shape):
    mat = p_vec.reshape(*shape)
    diff = mat.sum(axis=1) - cols
    return diff.dot(diff)

def function(x):
    # Adjustable parameters
    lambda_1 = 100.
    lambda_2 = 100.
    shape = (4, 3)
    e = entropy(x)
    rows_violation = const_rows(x, Xc, shape)
    cols_violation = const_cols(x, Xp, shape)
    return -e + lambda_1 * rows_violation + lambda_2 * cols_violation

# Initializing the parameters and data of the problem
_mat = np.random.random((4,3))
Xc = _mat.sum(axis=0)
Xp = _mat.sum(axis=1)
size_p = len(Xp)
size_c = len(Xc)

# Generating the bounds for each probability.
# NB: we are not using them right now!
zeroonebounds = tuple((0,1) for _ in xrange(size_p * size_c))

# Generating the initial vector to start the 'minimize' algorithm
init_vec = np.random.random(len(Xc) * len(Xp))

# Finally, running the algorithm
result = minimize(function, init_vec)

# Display results
res_mat = result.x.reshape(size_p, size_c)
print np.abs(res_mat - _mat).sum()

# Violations of the bounds
print res_mat.sum(axis=1) - Xp
print res_mat.sum(axis=0) - Xc

/David.
From arnaldorusso at gmail.com Fri Apr 24 14:50:53 2015
From: arnaldorusso at gmail.com (Arnaldo Russo)
Date: Fri, 24 Apr 2015 15:50:53 -0300
Subject: [SciPy-User] Is it good practice to use IPython notebooks as your Python IDE?

Hi Brian,

If you are still looking for an IPython-like IDE, I'd suggest taking a look
at Rodeo (http://blog.yhathq.com/posts/introducing-rodeo.html).

Cheers,
Arnaldo.

From wenlei.xie at gmail.com Fri Apr 24 16:53:29 2015
From: wenlei.xie at gmail.com (Wenlei Xie)
Date: Fri, 24 Apr 2015 16:53:29 -0400
Subject: [SciPy-User] Creating a Row in SparkSQL from an ArrayList

Hi,

I am wondering if there is any way to create a Row in SparkSQL in Java by
using a List? It looks like

Row.create(ArrayList)

will create a row with a single column (and the single column contains the
array).

Best,
Wenlei

From wenlei.xie at gmail.com Fri Apr 24 16:54:20 2015
From: wenlei.xie at gmail.com (Wenlei Xie)
Date: Fri, 24 Apr 2015 16:54:20 -0400
Subject: [SciPy-User] Creating a Row in SparkSQL from an ArrayList

Sorry for posting in the wrong place :(. Gmail always prompts the scipy-user
list first.

On Fri, Apr 24, 2015 at 4:53 PM, Wenlei Xie wrote:
> Hi,
>
> I am wondering if there is any way to create a Row in SparkSQL in Java by
> using a List? It looks like
>
> Row.create(ArrayList)
>
> will create a row with a single column (and the single column contains the
> array).
>
> Best,
> Wenlei

--
Wenlei Xie
Ph.D. Candidate
Department of Computer Science
456 Gates Hall, Cornell University
Ithaca, NY 14853, USA
Email: wenlei.xie at gmail.com

From cjaramillo at gradcenter.cuny.edu Sat Apr 25 02:02:39 2015
From: cjaramillo at gradcenter.cuny.edu (cjaramillo)
Date: Fri, 24 Apr 2015 23:02:39 -0700 (MST)
Subject: [SciPy-User] Normalization for optimization in python
In-Reply-To: <52E5A1E9.5000009@chem.wisc.edu>
References: <52E5A1E9.5000009@chem.wisc.edu>
Message-ID: <1429941759218-20189.post@n7.nabble.com>

Hi, Eric.

I have a problem with my Jacobian matrices being non-homogeneous due to
being derived from variables of different units, such as when combining
rotation and translation parameters. Besides scaling the initial value by a
factor, do you have an example of how to scale down the pertaining Jacobian
matrices?

Thanks in advance!

Carlos

From yw5aj at virginia.edu Sat Apr 25 14:34:21 2015
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Sat, 25 Apr 2015 14:34:21 -0400
Subject: [SciPy-User] Normalization for optimization in python
In-Reply-To: <1429941759218-20189.post@n7.nabble.com>
References: <52E5A1E9.5000009@chem.wisc.edu> <1429941759218-20189.post@n7.nabble.com>

Well, I guess I am not Eric, but I did start this whole conversation :)

Question: if your Jacobian is numerically computed (as done automatically by
the minimize() function), shouldn't it already be normalized, if we already
normalized the independent variable being passed in?

Shawn

On Sat, Apr 25, 2015 at 2:02 AM, cjaramillo wrote:
> Hi, Eric.
>
> I have a problem with my Jacobian matrices being non-homogeneous due to
> being derived from variables of different units, such as when combining
> rotation and translation parameters. Besides scaling the initial value by a
> factor, do you have an example of how to scale down the pertaining Jacobian
> matrices?
>
> Thanks in advance!
>
> Carlos

--
Yuxiang "Shawn" Wang
Gerling Research Lab
University of Virginia
yw5aj at virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/

From cjaramillo at gradcenter.cuny.edu Sat Apr 25 19:56:17 2015
From: cjaramillo at gradcenter.cuny.edu (cjaramillo)
Date: Sat, 25 Apr 2015 16:56:17 -0700 (MST)
Subject: [SciPy-User] Normalization for optimization in python
References: <52E5A1E9.5000009@chem.wisc.edu> <1429941759218-20189.post@n7.nabble.com>
Message-ID: <1430006177775-20191.post@n7.nabble.com>

Yuxiang Wang wrote
> Well, I guess I am not Eric, but I did start this whole conversation :)
>
> Question: if your Jacobian is numerically computed (as done automatically
> by the minimize() function), shouldn't it already be normalized, if we
> already normalized the independent variable being passed in?
>
> Shawn
>
> On Sat, Apr 25, 2015 at 2:02 AM, cjaramillo wrote:
>> Hi, Eric.
>>
>> I have a problem with my Jacobian matrices being non-homogeneous due to
>> being derived from variables of different units, such as when combining
>> rotation and translation parameters. Besides scaling the initial value by
>> a factor, do you have an example of how to scale down the pertaining
>> Jacobian matrices?
>>
>> Thanks in advance!
>>
>> Carlos

I'm providing the Jacobian functions to be used in the optimization. It's
faster this way. I found a couple of articles doing some normalization of
Jacobian matrices of non-homogeneous parameters. Any examples with scipy
would be appreciated. I'm trying to grasp the normalization methods for the
Jacobian, and they seem to be a simple diagonal matrix multiplication.
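Something like the following is what I have in mind (a rough, untested
sketch with made-up scales and a stand-in Jacobian):

    import numpy as np

    def jac(x):
        # stand-in for my real analytic Jacobian (2 residuals, 2 params)
        return np.array([[2.0 * x[0], 1.0],
                         [np.cos(x[0]), 3.0 * x[1]**2]])

    # made-up characteristic scales: ~1 rad rotation, ~100 mm translation
    scales = np.array([1.0, 100.0])

    def jac_scaled(q):
        # chain rule: with x = q * scales, dF/dq = (dF/dx) * diag(scales)
        return jac(q * scales) * scales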
From davidmenhur at gmail.com Sun Apr 26 07:37:38 2015
From: davidmenhur at gmail.com (Daπid)
Date: Sun, 26 Apr 2015 13:37:38 +0200
Subject: [SciPy-User] Normalization for optimization in python
In-Reply-To: <1429941759218-20189.post@n7.nabble.com>
References: <52E5A1E9.5000009@chem.wisc.edu> <1429941759218-20189.post@n7.nabble.com>

On 25 April 2015 at 08:02, cjaramillo wrote:
> I have a problem with my Jacobian matrices being non-homogeneous due to
> being derived from variables of different units, such as when combining
> rotation and translation parameters. Besides scaling the initial value by a
> factor, do you have an example of how to scale down the pertaining Jacobian
> matrices?

Scaling factors can be error-prone when you have to propagate them. Usually,
the safest way is to blame it on the units. So, in your case, you define the
rotation angles in radians and the typical displacement to be 1 in your
units (assuming the rotations are big, so around 1 rad). This technique also
works with any other mathematical problems, like differential equations or
linear systems.

/David.

From ralf.gommers at gmail.com Mon Apr 27 07:27:06 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 27 Apr 2015 13:27:06 +0200
Subject: [SciPy-User] update scipy to v.0.15 from spyder on windows seven 64 bits

On Fri, Apr 24, 2015 at 10:45 AM, Servant Mathieu wrote:

> Dear community,
>
> I have the latest python (x,y) distribution (2.7.9.0), which incorporates
> spyder 2.3.2-14 and scipy 0.14.0-7. Unfortunately, I need scipy 0.15 to use
> the new scipy.optimize.differential_evolution() function. How should I
> proceed to (easily) update scipy from my spyder?

I don't know how Python(x,y) builds its packages. If you need to upgrade
Scipy it's probably best to ask for advice on the Python(x,y) mailing list.

Note though that differential_evolution is pure Python code, so the path of
least resistance may be to just take this file:
https://github.com/scipy/scipy/blob/master/scipy/optimize/_differentialevolution.py
and add an import to optimize/__init__.py to make the function available in
your current install.

Ralf

From cjaramillo at gradcenter.cuny.edu Mon Apr 27 09:40:44 2015
From: cjaramillo at gradcenter.cuny.edu (cjaramillo)
Date: Mon, 27 Apr 2015 06:40:44 -0700 (MST)
Subject: [SciPy-User] Normalization for optimization in python
References: <52E5A1E9.5000009@chem.wisc.edu> <1429941759218-20189.post@n7.nabble.com>
Message-ID: <1430142044600-20194.post@n7.nabble.com>

Daπid wrote
> Scaling factors can be error-prone when you have to propagate them.
> Usually, the safest way is to blame it on the units. So, in your case, you
> define the rotation angles in radians and the typical displacement to be 1
> in your units (assuming the rotations are big, so around 1 rad).

Thanks, David. However, you can still run into biased convergence even when
all variables are in the same units. For example, when you have one
parameter that varies in the range from 0 to 10 [mm] and others that vary in
the hundreds of [mm].
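As a toy illustration of what worries me (hypothetical numbers, untested):

    import numpy as np
    from scipy.optimize import minimize

    # one parameter lives near 5 mm, the other near 500 mm
    def cost(p):
        return (p[0] - 5.0)**2 + 1e-4 * (p[1] - 500.0)**2

    print minimize(cost, x0=[0.0, 0.0]).x

    # the same problem with both parameters rescaled to order 1
    scales = np.array([10.0, 1000.0])
    print scales * minimize(lambda q: cost(scales * q), x0=[0.0, 0.0]).x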
I'm searching on the web more about this, but I would appreciate it if you
could refer me to a source where they employ unit-based scaling
(normalization) of Jacobian matrices for their use within optimization.

From davidmenhur at gmail.com Mon Apr 27 18:17:57 2015
From: davidmenhur at gmail.com (Daπid)
Date: Mon, 27 Apr 2015 23:17:57 +0100
Subject: [SciPy-User] Normalization for optimization in python
In-Reply-To: <1430142044600-20194.post@n7.nabble.com>
References: <52E5A1E9.5000009@chem.wisc.edu> <1429941759218-20189.post@n7.nabble.com> <1430142044600-20194.post@n7.nabble.com>

On 27 April 2015 at 14:40, cjaramillo wrote:
> Thanks, David. However, you can still run into biased convergence even
> when all variables are in the same units. For example, when you have one
> parameter that varies in the range from 0 to 10 [mm] and others that vary
> in the hundreds of [mm].

That is still not a problem. 10 mm and 500 mm are the same amount (more or
less). The whole purpose of scaling is to minimise numerical inaccuracies
when adding or subtracting numbers. But since the resolution of a double is
1e-15, you need nine orders of magnitude difference before you get
accuracies comparable with a perfectly scaled float (1e-6) (waving my hands
violently here).

Note that, due to the nature of the optimisation problems, the numerical
noise in most "simple" functions will get swamped by the procedure, unless
you are interested in an extremely accurate result on a finely crafted
function. You can check on your case and compare with and without the
scaling, without providing the gradient for simplicity.

> I'm searching on the web more about this, but I would appreciate it if
> you could refer me to a source where they employ unit-based scaling
> (normalization) of Jacobian matrices for their use within optimization.

This is usually called "natural units" or "nondimensionalisation". You will
perhaps find more relevant hits in the context of differential equations. I
can't think of a reference off the top of my head, but this should be
covered in most numerical analysis texts.

/David.

From gallen at arlut.utexas.edu Tue Apr 28 15:09:20 2015
From: gallen at arlut.utexas.edu (Gregory Allen)
Date: Tue, 28 Apr 2015 14:09:20 -0500
Subject: [SciPy-User] signal.firls

I was surprised to find that there was no signal.firls function in scipy for
designing FIR filters using least-squares error minimization.

I wrote this based on a post in this list
[http://mail.scipy.org/pipermail/scipy-user/2009-November/023101.html],
added band weights, and fleshed it out in the style of signal.remez. It's
not as full-featured as the matlab version, but it solved a need for me. :)

It could be pasted at the bottom of signal/fir_filter_design.py. I include
it here in case it is of any use to someone else, or to be included in
scipy.

Thanks,
-Greg

---

import numpy as np
from scipy.special import sinc

def firls(numtaps, bands, desired, weight=None, Hz=1):
    """
    FIR filter design using least-squares error minimization.

    Calculate the filter coefficients for the finite impulse response (FIR)
    filter which has the best approximation to the desired frequency
    response described by `bands` and `desired` in the least squares sense.

    Parameters
    ----------
    numtaps : int
        The number of taps in the FIR filter. `numtaps` must be odd.
    bands : array_like
        A monotonic sequence containing the band edges in Hz. All elements
        must be non-negative and less than half the sampling frequency as
        given by `Hz`.
    desired : array_like
        A sequence half the size of bands containing the desired gain in
        each of the specified bands.
    weight : array_like, optional
        A relative weighting to give to each band region. The length of
        `weight` has to be half the length of `bands`.
    Hz : scalar, optional
        The sampling frequency in Hz. Default is 1.

    Returns
    -------
    out : ndarray
        A rank-1 array containing the coefficients of the optimal (in a
        least squares sense) filter.

    Example
    -------
    We want to construct a filter with a passband at 0.2-0.3 Hz, and stop
    bands at 0-0.1 Hz and 0.4-0.5 Hz. Note that this means that the behavior
    in the frequency ranges between those bands is unspecified and may
    overshoot.

    >>> from scipy import signal
    >>> bpass = signal.firls(71, [0, 0.1, 0.2, 0.3, 0.4, 0.5], [0, 1, 0])
    >>> freq, response = signal.freqz(bpass)
    >>> ampl = np.abs(response)
    >>> import matplotlib.pyplot as plt
    >>> fig = plt.figure()
    >>> ax1 = fig.add_subplot(111)
    >>> ax1.semilogy(freq/(2*np.pi), ampl, 'b-')  # freq in Hz
    >>> plt.show()
    """
    if numtaps%2 == 0:
        raise ValueError("numtaps must be odd.")
    L = (numtaps-1)//2

    # normalize bands and make it 2 columns
    bands = np.asarray(bands).flatten()/Hz
    if len(bands)%2 == 1:
        raise ValueError("bands must contain frequency pairs.")
    bands = bands.reshape(-1,2)

    # check remaining params
    if len(bands) != len(desired):
        raise ValueError("desired must have one entry per band.")
    if weight is None:
        weight = np.ones_like(desired)

    # set up the linear matrix equation to be solved, Ax = b
    k = np.arange(L+1)[np.newaxis]
    m = k.T
    A,b = 0,0
    for i, (f0,f1) in enumerate(bands):
        Ai = f1 * (sinc(2*(m+k)*f1) + sinc(2*(m-k)*f1)) \
             - f0 * (sinc(2*(m+k)*f0) + sinc(2*(m-k)*f0))
        bi = desired[i] * (2*f1*sinc(2*m*f1) - 2*f0*sinc(2*m*f0))
        A += Ai * abs(weight[i]**2)
        b += bi * abs(weight[i]**2)

    # solve and return
    x = np.linalg.solve(A,b).squeeze()
    h = np.hstack((x[:0:-1]/2, x[0], x[1:]/2))
    return h

From warren.weckesser at gmail.com Wed Apr 29 12:39:17 2015
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Wed, 29 Apr 2015 12:39:17 -0400
Subject: [SciPy-User] Decimate and filtfilt

On Thu, Apr 23, 2015 at 5:25 PM, Paul Blelloch wrote:

> I was using the scipy.signal.decimate function and noticed that it seemed
> to have some unusual edge effects using the default IIR Chebyshev filter.
> In doing a little research I found a message from 2011 saying that decimate
> only filtered in one direction, rather than using the filtfilt function to
> filter in both directions in order to get rid of phase shifts. There was
> also a message that filtfilt had a bug associated with the way that it
> handled edge effects. I was wondering if anything had changed since 2011.
> Does decimate still only filter in one direction, and are there issues
> with filtfilt? Using the default FIR filter (ftype='fir')
> does seem to correct the issue, but that's a more complex and expensive
> filter, so I was wondering if the default IIR filter was working
> correctly.
>
> I'm comparing against the Matlab decimate function, which does use
> filtfilt and seems to handle the edge effects correctly.
>
> THANKS, Paul Blelloch

Paul,

The `decimate` function is just a few lines of code. At the moment, you can
find the source code at the end of "signaltools.py":

https://github.com/scipy/scipy/blob/master/scipy/signal/signaltools.py#L2501

For the given decimation factor q, `decimate` applies a low-pass filter to
remove frequencies higher than 1/q (in normalized frequency units, where 1
is the Nyquist frequency), and then slices the filtered data using a step
size of q. `decimate` does not provide the option to use the
forward/backward filter implemented in `filtfilt`.

To decimate using `filtfilt`, you could copy the few relevant lines of code
from `decimate`, and replace `lfilter` with `filtfilt`. Then you would have
complete control over the `filtfilt` arguments that control how it handles
the edges. (As a bonus, you would also have complete control over the
arguments that control the design of the low-pass filter.)

Warren

From josef.pktd at gmail.com Wed Apr 29 14:35:29 2015
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 29 Apr 2015 14:35:29 -0400
Subject: [SciPy-User] linalg question: cholesky for semidefinite, or LDL, QR equivalent for squared matrix

https://github.com/numpy/numpy/pull/4079

I'm looking for the equivalent of Cholesky for possibly singular, symmetric
matrices using numpy or scipy linalg.

details: I'm writing a function that can either take the data x
(nobs, k_vars) or the moment matrix x.T.dot(x) (k_vars, k_vars), and I want
to get the same result in both cases.

Given data x, all I need is the R of the QR decomposition. If the moment
matrix is not singular, then I can get the same with the Cholesky
decomposition. However, what's the equivalent of R in QR if x is not of full
rank and the moment matrix is singular? QR on the moment matrix gives
different numbers and I don't know how to recover the correct R.

requirement: I need the same sequential decomposition as qr and cholesky to
be used for sequential least squares.

Thanks,
Josef

From josef.pktd at gmail.com Wed Apr 29 17:06:38 2015
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 29 Apr 2015 17:06:38 -0400
Subject: [SciPy-User] fun with debugging: np.linalg.inv with singular matrix

reordering an array changes results by 1021173647741.1094

It's obvious if I calculate 1 / floating_point_noise and
floating_point_noise depends on mathematically irrelevant details. But it
took me a while to remember that noise doesn't follow the simple math.
inv with nonsingular matrix - great

>>> xm = np.corrcoef(x[:,::-2], rowvar=0)
>>> np.linalg.inv(xm)[:, -3:]
array([[ 0.38794217,  0.40358464,  0.12232308],
       [ 1.21001936,  0.39702137, -0.10164338],
       [ 0.39702137,  1.20088551,  0.03021732],
       [-0.10164338,  0.03021732,  1.03114181]])
>>> np.linalg.inv(xm[::-1,::-1])[::-1,::-1][:, -3:]
array([[ 0.38794217,  0.40358464,  0.12232308],
       [ 1.21001936,  0.39702137, -0.10164338],
       [ 0.39702137,  1.20088551,  0.03021732],
       [-0.10164338,  0.03021732,  1.03114181]])
>>> np.max(np.abs(np.linalg.inv(xm[::-1,::-1])[::-1,::-1] - np.linalg.inv(xm)))
1.1102230246251565e-16

inv with singular matrix - blowing up path dependent numerical noise

>>> xm = np.corrcoef(x, rowvar=0)
>>> np.linalg.inv(xm)[:, -3:]
array([[  4.68288729e-02,  -1.86395743e-01,   1.47198710e-02],
       [ -1.35971574e-01,   2.90029902e-01,   3.41148182e-01],
       [  9.77422379e-03,  -1.17494713e-01,  -2.43669915e-01],
       [ -1.00183312e+14,  -1.00183312e+14,  -1.00183312e+14],
       [ -1.00183312e+14,  -1.00183312e+14,  -1.00183312e+14],
       [ -1.00183312e+14,  -1.00183312e+14,  -1.00183312e+14],
       [ -1.00183312e+14,  -1.00183312e+14,  -1.00183312e+14],
       [ -1.00183312e+14,  -1.00183312e+14,  -1.00183312e+14]])
>>> np.linalg.inv(xm[::-1,::-1])[::-1,::-1][:, -3:]
array([[  5.01007835e-02,  -1.84819296e-01,   1.57372647e-02],
       [ -1.46454280e-01,   2.88649950e-01,   3.38451911e-01],
       [  1.13948216e-02,  -1.18632618e-01,  -2.43817631e-01],
       [ -1.01204486e+14,  -1.01204486e+14,  -1.01204486e+14],
       [ -1.01204486e+14,  -1.01204486e+14,  -1.01204486e+14],
       [ -1.01204486e+14,  -1.01204486e+14,  -1.01204486e+14],
       [ -1.01204486e+14,  -1.01204486e+14,  -1.01204486e+14],
       [ -1.01204486e+14,  -1.01204486e+14,  -1.01204486e+14]])
>>> np.max(np.abs(np.linalg.inv(xm[::-1,::-1])[::-1,::-1] - np.linalg.inv(xm)))
1021173647741.1094
>>>

Josef

From agomez26 at asu.edu Wed Apr 29 19:07:12 2015
From: agomez26 at asu.edu (Andres Gomez-Lievano)
Date: Wed, 29 Apr 2015 23:07:12 +0000 (UTC)
Subject: [SciPy-User] Crashing

Daπid <davidmenhur at gmail.com> writes:

> On 23 April 2015 at 23:08, Andres Gomez-Lievano <agomez26 at asu.edu> wrote:
>
> > # Initializing the parameters and data of the problem
> > Xc = np.array([80.0, 230.0, 132.0])
> > Xp = np.array([74.0, 200.0, 91.0, 77.0])
> > size_c = len(Xc)
> > size_p = len(Xp)
>
> Your problem is impossible. You have a 4 by 3 matrix of elements between 0
> and 1, and are trying to get sums of more than 200. Try first with a
> problem that has an existing solution:
>
> Also, Numpy offers functions like reshape that can replace your vec2mat.
>
> Hard constraints are something that minimisation algorithms don't like. It
> is usually better to add them as a penalty to your target function. See
> here:
>
> [... code snipped; see David's earlier message in this thread ...]
>
> /David.

David, thank you for your help.

My problem was well-posed (i.e., is not impossible!). The rows or columns of
the probability matrix do not have to add up to the vectors Xc and Xp that
are externally provided.

But in any case, your suggestions helped me solve the problem. As you did, I
included the constraints into the objective function, and the program is now
working.

THANK YOU!

... Now, other problems have emerged... but those will be the topic of
another post.

Best,
Andres

From ndbecker2 at gmail.com Thu Apr 30 08:46:06 2015
From: ndbecker2 at gmail.com (Neal Becker)
Date: Thu, 30 Apr 2015 08:46:06 -0400
Subject: [SciPy-User] signal.firls

Gregory Allen wrote:

> I was surprised to find that there was no signal.firls function in scipy
> for designing FIR filters using least-squares error minimization.
>
> I wrote this based on a post in this list
> [http://mail.scipy.org/pipermail/scipy-user/2009-November/023101.html],
> added band weights, and fleshed it out in the style of signal.remez. It's
> not as full-featured as the matlab version, but it solved a need for me.
> :)
>
> It could be pasted at the bottom of signal/fir_filter_design.py. I include
> it here in case it is of any use to someone else, or to be included in
> scipy.
>
> Thanks,
> -Greg

Thanks! A great addition.

From cgodshall at enthought.com Thu Apr 30 14:24:57 2015
From: cgodshall at enthought.com (Courtenay Godshall (Enthought))
Date: Thu, 30 Apr 2015 13:24:57 -0500
Subject: [SciPy-User] ANN: SciPy 2015 Tutorial Schedule Posted - Register Today - Already 30% Sold Out
Message-ID: <013801d08372$f6f1b350$e4d519f0$@enthought.com>

**The #SciPy2015 Conference (Scientific Computing with #Python) Tutorial
Schedule is up! It is 1st come, 1st served and already 30% sold out.
Register today!** http://www.scipy2015.scipy.org/ehome/115969/289057/

This year you can choose from 16 different SciPy tutorials OR select the
2-day Software Carpentry course on scientific Python that assumes some
programming experience but no Python knowledge. Please share!
Tutorials include:

* Introduction to NumPy (Beginner)
* Machine Learning with Scikit-Learn (Intermediate)
* Cython: Blend of the Best of Python and C/C++ (Intermediate)
* Image Analysis in Python with SciPy and Scikit-Image (Intermediate)
* Analyzing and Manipulating Data with Pandas (Beginner)
* Machine Learning with Scikit-Learn (Advanced)
* Building Python Data Applications with Blaze and Bokeh (Intermediate)
* Multibody Dynamics and Control with Python (Intermediate)
* Anatomy of Matplotlib (Beginner)
* Computational Statistics I (Intermediate)
* Efficient Python for High-Performance Parallel Computing (Intermediate)
* Geospatial Data with Open Source Tools in Python (Intermediate)
* Decorating Drones: Using Drones to Delve Deeper into Intermediate Python (Intermediate)
* Computational Statistics II (Intermediate)
* Modern Optimization Methods in Python (Advanced)
* Jupyter Advanced Topics Tutorial (Advanced)

From ralf.gommers at gmail.com Thu Apr 30 15:13:06 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 30 Apr 2015 21:13:06 +0200
Subject: [SciPy-User] signal.firls

On Thu, Apr 30, 2015 at 2:46 PM, Neal Becker wrote:

> Gregory Allen wrote:
>
> > I was surprised to find that there was no signal.firls function in scipy
> > for designing FIR filters using least-squares error minimization.
> >
> > I wrote this based on a post in this list
> > [http://mail.scipy.org/pipermail/scipy-user/2009-November/023101.html],
> > added band weights, and fleshed it out in the style of signal.remez.
> > It's not as full-featured as the matlab version, but it solved a need
> > for me. :)
> >
> > It could be pasted at the bottom of signal/fir_filter_design.py. I
> > include it here in case it is of any use to someone else, or to be
> > included in scipy.
> >
> > Thanks,
> > -Greg
>
> Thanks! A great addition.

Indeed thanks, looks quite good already. It's not added just yet though...
Does one of you want to add some unit tests and send a pull request on
Github?

Cheers,
Ralf

From wenlei.xie at gmail.com Thu Apr 30 17:43:21 2015
From: wenlei.xie at gmail.com (Wenlei Xie)
Date: Thu, 30 Apr 2015 17:43:21 -0400
Subject: [SciPy-User] Norm for Sparse Matrix

Hi,

The function numpy.linalg.norm doesn't seem to work with sparse matrices. I
am wondering if there is any function to get the norm for sparse matrices?

Thank you!

Best,
Wenlei
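P.S. For now I am computing the Frobenius norm by hand from the stored
entries (a quick workaround -- please correct me if this is wrong):

    import numpy as np
    import scipy.sparse as sp

    A = sp.rand(1000, 1000, density=0.01, format='csr')
    fro = np.sqrt((A.data ** 2).sum())   # only the nonzeros contribute
    print fro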