From denis-bz-gg at t-online.de Fri Feb 1 08:24:06 2013 From: denis-bz-gg at t-online.de (denis) Date: Fri, 1 Feb 2013 13:24:06 +0000 (UTC) Subject: [Numpy-discussion] faster Array.take( floatindices.astype(int) ) ? Message-ID: Folks, is there a way to speed up Array.take( floatindices.astype(int) ) ? astype(int) makes a copy, floor() returns floats. (Is there a wiki of NumPy one-liners / various tricks ? would sure beat googling.) Thanks, cheers -- denis From toddrjen at gmail.com Fri Feb 1 09:13:35 2013 From: toddrjen at gmail.com (Todd) Date: Fri, 1 Feb 2013 15:13:35 +0100 Subject: [Numpy-discussion] Subclassing ndarray with concatenate In-Reply-To: <1359541239.2496.14.camel@sebastian-laptop> References: <1358858673.24631.20.camel@sebastian-laptop> <1359541239.2496.14.camel@sebastian-laptop> Message-ID: On Wed, Jan 30, 2013 at 11:20 AM, Sebastian Berg wrote: > > > > > > > > In my particular case at least, there are clear ways to > > handle corner > > > cases (like being passed a class that lacks these > > attributes), so in > > > principle there no problem handling concatenate in a general > > way, > > > assuming I can get access to the attributes. > > > > > > > > > So is there any way to subclass ndarray in such a way that > > concatenate > > > can be handled properly? > > > > > > > Quite simply, no. If you compare masked arrays, they also > > provide their > > own concatenate for this reason. > > > > I hope that helps a bit... > > > > > > > > Is this something that should be available? For instance a method > > that provides both the new array and the arrays that were used to > > construct it. This would seem to be an extremely common use-case for > > array subclasses, so letting them gracefully handle this would seem to > > be very important. > In any case, yes, it calls __array_finalize__, but as you noticed, it > calls it without the original array. Now it would be very easy and > harmless to change that, however I am not sure if giving only the parent > array is very useful (ie. you only get the one with highest array > priority). > > Another way to get around it would be maybe to call __array_wrap__ like > ufuncs do (with a context, so you get all inputs, but then the non-array > axis argument may not be reasonably placed into the context). > > In any case, if you think it would be helpful to at least get the single > parent array, that would be a very simple change, but I feel the whole > subclassing could use a bit thinking and quite a bit of work probably, > since I am not quite convinced that calling __array_wrap__ with a > complicated context from as many functions as possible is the right > approach for allowing more complex subclasses. > I was more thinking of a new method that is called when more than one input array is used, maybe something like __multi_array_finalize__. This would allow more fine-grained handling of such cases and would not break backwards compatibility with any existing subclasses (if they don't override the method the current behavior will remain). -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Feb 1 16:23:34 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 1 Feb 2013 14:23:34 -0700 Subject: [Numpy-discussion] faster Array.take( floatindices.astype(int) ) ? In-Reply-To: References: Message-ID: On Fri, Feb 1, 2013 at 6:24 AM, denis wrote: > Folks, > is there a way to speed up Array.take( floatindices.astype(int) ) ? 
> astype(int) makes a copy, floor() returns floats.
>
> (Is there a wiki of NumPy one-liners / various tricks ?
> would sure beat googling.)
>

You can use floats for the indices in the take method, they are floored
before use. That said, the Fortran FLOOR function returns integers and it
would be useful if numpy had integer versions of floor, ceil, divmod, say,
ifloor, iceil, idivmod that raised errors if the resulting integers
overflowed.

Chuck

From valentin at haenel.co  Fri Feb 1 17:58:58 2013
From: valentin at haenel.co (Valentin Haenel)
Date: Fri, 1 Feb 2013 23:58:58 +0100
Subject: [Numpy-discussion] PR: tiny documentation fix, please merge
Message-ID: <20130201225858.GS30692@kudu.in-berlin.de>

https://github.com/numpy/numpy/pull/2960

thanks

V-

From matthew.brett at gmail.com  Sat Feb 2 18:14:07 2013
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 2 Feb 2013 15:14:07 -0800
Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7?
Message-ID:

Hi,

I see there is no Windows 64 bit installer for the 1.7 rc1.

Is there any prospect of a 64 bit installer for the full release?

Can I help?  I have a Windows 7 64 bit machine that I use as a build
slave; I am happy to give access.

Cheers,

Matthew

From josef.pktd at gmail.com  Sat Feb 2 19:28:44 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 2 Feb 2013 19:28:44 -0500
Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7?
In-Reply-To:
References:
Message-ID:

On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote:
> Hi,
>
> I see there is no Windows 64 bit installer for the 1.7 rc1.

related:
Is there any chance to get newer mingw or mingw-w64 support "soonish"?

Josef

>
> Is there any prospect of a 64 bit installer for the full release?
>
> Can I help?  I have a Windows 7 64 bit machine that I use as a build
> slave; I am happy to give access.
>
> Cheers,
>
> Matthew
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

From cournape at gmail.com  Sun Feb 3 05:57:14 2013
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 3 Feb 2013 10:57:14 +0000
Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7?
In-Reply-To:
References:
Message-ID:

On Sun, Feb 3, 2013 at 12:28 AM, wrote:
> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote:
>> Hi,
>>
>> I see there is no Windows 64 bit installer for the 1.7 rc1.
>
> related:
> Is there any chance to get newer mingw or mingw-w64 support "soonish"?

The problem has no solution until we can restrict support to windows 7
and above. Otherwise, any acceptable solution would require user to be
an admin.

David

From denis-bz-gg at t-online.de  Sun Feb 3 06:28:37 2013
From: denis-bz-gg at t-online.de (denis)
Date: Sun, 3 Feb 2013 11:28:37 +0000 (UTC)
Subject: [Numpy-discussion] faster Array.take( floatindices.astype(int) ) ?
References:
Message-ID:

Charles R Harris <charlesr.harris at gmail.com> writes:
> You can use floats for the indices in the take method, they are floored before use.

Chuck, that's in 1.7 ?
In 1.6.2 x = np.arange(10) x.take([3.14]) array([3]) x.take(np.array([3.14])) TypeError: array cannot be safely cast to required type cheers -- denis From ondrej.certik at gmail.com Mon Feb 4 01:04:02 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Sun, 3 Feb 2013 22:04:02 -0800 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. Message-ID: Hi, Here are the last open issues for 1.7, there are 9 of them: https://github.com/numpy/numpy/issues?milestone=3&sort=updated&state=open >From these, 3 are very simple PRs that I just posted. Let's polish these, get them in. I propose to release rc2 after that and if all is ok, do the final release. Some of the issues are not fully addressed, but I don't think we should be holding the release any longer. Let me know if that is ok with you. My apologies for a delay on my side --- my son was just born 2 weeks ago and I had to submit an important article, with the deadline yesterday. Things are settled now and I now have time to get this done. Ondrej From ondrej.certik at gmail.com Mon Feb 4 15:27:23 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 4 Feb 2013 12:27:23 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote: > On Sun, Feb 3, 2013 at 12:28 AM, wrote: >> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote: >>> Hi, >>> >>> I see there is no Windows 64 bit installer for the 1.7 rc1. >> >> related: >> Is there any chance to get newer mingw or mingw-w64 support "soonish"? > > The problem has no solution until we can restrict support to windows 7 > and above. Otherwise, any acceptable solution would require user to be > an admin. The installer is built with this VM/scripts: https://github.com/certik/numpy-vendor currently the VM itself is 32 bit. I think that might be upgraded to 64bit, and maybe it's possible to use 64 bit Wine: http://wiki.winehq.org/Wine64 but then we would need to figure out how to use Mingw with 64 bits. I would be very happy to accept patches to the above repository. Alternatively, if the actual Windows 64bit machine would have to be used, is there any way to automate the process? Would you compile it from command line (cmd.exe), just like I do in Wine? I would much prefer if we can figure out how to do this in Wine, so that the process can be automated and other people can easily reproduce it. Ondrej From matthew.brett at gmail.com Mon Feb 4 15:36:41 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 4 Feb 2013 12:36:41 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: Hi, On Mon, Feb 4, 2013 at 12:27 PM, Ond?ej ?ert?k wrote: > On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote: >> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote: >>>> Hi, >>>> >>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >>> >>> related: >>> Is there any chance to get newer mingw or mingw-w64 support "soonish"? >> >> The problem has no solution until we can restrict support to windows 7 >> and above. Otherwise, any acceptable solution would require user to be >> an admin. > > The installer is built with this VM/scripts: > > https://github.com/certik/numpy-vendor > > currently the VM itself is 32 bit. 
I think that might be upgraded to 64bit, > and maybe it's possible to use 64 bit Wine: > > http://wiki.winehq.org/Wine64 > > but then we would need to figure out how to use Mingw with 64 bits. > > I would be very happy to accept patches to the above repository. > > Alternatively, if the actual Windows 64bit machine would have to be used, > is there any way to automate the process? Would you compile it from command line > (cmd.exe), just like I do in Wine? I would much prefer if we can figure out > how to do this in Wine, so that the process can be automated and other people > can easily reproduce it. I wonder whether getting ming64 to work on 64 bit Wine is too hard to get working before the release? I often can't get 32-bit Wine working, and we've apparently got problems with mingw64 on native windows. As a short term fix, how about an Amazon image with the Windows 64 bit compilers on it? Or can Christophe Gohlke help us out here? It seems a shame not to provide these builds. Cheers, Matthew From njs at pobox.com Mon Feb 4 15:39:33 2013 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 4 Feb 2013 12:39:33 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 12:36 PM, Matthew Brett wrote: > Hi, > > On Mon, Feb 4, 2013 at 12:27 PM, Ond?ej ?ert?k wrote: >> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote: >>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote: >>>>> Hi, >>>>> >>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >>>> >>>> related: >>>> Is there any chance to get newer mingw or mingw-w64 support "soonish"? >>> >>> The problem has no solution until we can restrict support to windows 7 >>> and above. Otherwise, any acceptable solution would require user to be >>> an admin. >> >> The installer is built with this VM/scripts: >> >> https://github.com/certik/numpy-vendor >> >> currently the VM itself is 32 bit. I think that might be upgraded to 64bit, >> and maybe it's possible to use 64 bit Wine: >> >> http://wiki.winehq.org/Wine64 >> >> but then we would need to figure out how to use Mingw with 64 bits. >> >> I would be very happy to accept patches to the above repository. >> >> Alternatively, if the actual Windows 64bit machine would have to be used, >> is there any way to automate the process? Would you compile it from command line >> (cmd.exe), just like I do in Wine? I would much prefer if we can figure out >> how to do this in Wine, so that the process can be automated and other people >> can easily reproduce it. > > I wonder whether getting ming64 to work on 64 bit Wine is too hard to > get working before the release? I often can't get 32-bit Wine > working, and we've apparently got problems with mingw64 on native > windows. > > As a short term fix, how about an Amazon image with the Windows 64 bit > compilers on it? > > Or can Christophe Gohlke help us out here? As a temporary measure it might make sense to just deem the cgolke builds official and upload them to the usual places -- or at least, it seems like it might make sense to me, but it depends on what Christophe thinks :-). -n From ralf.gommers at gmail.com Mon Feb 4 15:55:04 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 4 Feb 2013 21:55:04 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? 
In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 9:39 PM, Nathaniel Smith wrote: > On Mon, Feb 4, 2013 at 12:36 PM, Matthew Brett > wrote: > > Hi, > > > > On Mon, Feb 4, 2013 at 12:27 PM, Ond?ej ?ert?k > wrote: > >> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau > wrote: > >>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: > >>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett < > matthew.brett at gmail.com> wrote: > >>>>> Hi, > >>>>> > >>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. > >>>> > >>>> related: > >>>> Is there any chance to get newer mingw or mingw-w64 support "soonish"? > >>> > >>> The problem has no solution until we can restrict support to windows 7 > >>> and above. Otherwise, any acceptable solution would require user to be > >>> an admin. > >> > >> The installer is built with this VM/scripts: > >> > >> https://github.com/certik/numpy-vendor > >> > >> currently the VM itself is 32 bit. I think that might be upgraded to > 64bit, > >> and maybe it's possible to use 64 bit Wine: > >> > >> http://wiki.winehq.org/Wine64 > >> > >> but then we would need to figure out how to use Mingw with 64 bits. > >> > >> I would be very happy to accept patches to the above repository. > >> > >> Alternatively, if the actual Windows 64bit machine would have to be > used, > >> is there any way to automate the process? Would you compile it from > command line > >> (cmd.exe), just like I do in Wine? I would much prefer if we can figure > out > >> how to do this in Wine, so that the process can be automated and other > people > >> can easily reproduce it. > > > > I wonder whether getting ming64 to work on 64 bit Wine is too hard to > > get working before the release? I often can't get 32-bit Wine > > working, and we've apparently got problems with mingw64 on native > > windows. > > > > As a short term fix, how about an Amazon image with the Windows 64 bit > > compilers on it? > -1 on providing an "official" solution which will require admin rights for the produced installer and not work for scipy. > > > Or can Christophe Gohlke help us out here? > > As a temporary measure it might make sense to just deem the cgolke > builds official and upload them to the usual places -- or at least, it > seems like it might make sense to me, but it depends on what > Christophe thinks :-). > I'm +0 on that. MSVC + MKL is currently the only real option for 64-bit binaries, so providing Christoph's installers as "official" could make sense. On the other hand, Christoph has a matching set of other packages on his site, so users will be better off being redirected there than just grabbing only a numpy installer from SF and not finding a scipy one there. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Mon Feb 4 15:57:06 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 4 Feb 2013 12:57:06 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 12:36 PM, Matthew Brett wrote: > Hi, > > On Mon, Feb 4, 2013 at 12:27 PM, Ond?ej ?ert?k wrote: >> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote: >>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote: >>>>> Hi, >>>>> >>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >>>> >>>> related: >>>> Is there any chance to get newer mingw or mingw-w64 support "soonish"? 
>>> >>> The problem has no solution until we can restrict support to windows 7 >>> and above. Otherwise, any acceptable solution would require user to be >>> an admin. >> >> The installer is built with this VM/scripts: >> >> https://github.com/certik/numpy-vendor >> >> currently the VM itself is 32 bit. I think that might be upgraded to 64bit, >> and maybe it's possible to use 64 bit Wine: >> >> http://wiki.winehq.org/Wine64 >> >> but then we would need to figure out how to use Mingw with 64 bits. >> >> I would be very happy to accept patches to the above repository. >> >> Alternatively, if the actual Windows 64bit machine would have to be used, >> is there any way to automate the process? Would you compile it from command line >> (cmd.exe), just like I do in Wine? I would much prefer if we can figure out >> how to do this in Wine, so that the process can be automated and other people >> can easily reproduce it. > > I wonder whether getting ming64 to work on 64 bit Wine is too hard to > get working before the release? I often can't get 32-bit Wine > working, Yep, it took me over a week of work to figure out exactly how to get 32-bit Wine working with mingw and numpy. However, the work is done and you can either run my scripts in the VM, or you can easily reproduce it yourself by consulting the scripts: https://github.com/certik/numpy-vendor/blob/master/setup-wine.sh https://github.com/certik/numpy-vendor/blob/master/fabfile.py There are a lot of little tricks involved. However, as Ralf says, apparently we would need to use MSVC + MKL anyway on 64bits. Ondrej From cournape at gmail.com Mon Feb 4 15:59:21 2013 From: cournape at gmail.com (David Cournapeau) Date: Mon, 4 Feb 2013 20:59:21 +0000 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 8:27 PM, Ond?ej ?ert?k wrote: > On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote: >> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote: >>>> Hi, >>>> >>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >>> >>> related: >>> Is there any chance to get newer mingw or mingw-w64 support "soonish"? >> >> The problem has no solution until we can restrict support to windows 7 >> and above. Otherwise, any acceptable solution would require user to be >> an admin. > > The installer is built with this VM/scripts: I am not sure whether you're replying to my observation or just giving a status update: with mingw-w64 (or recent mingw), the built installer will depend on several .dll (libgcc_s_sjil.dll) that we can't easily distribute. The only place we can realistically put them is in C:\Python$VERSION (or wherever python happens to be installed), and I think it is a very bad idea to install dll from NumPy there. In Windows 2008 and above, one can refer in .pyd where to look for dlls in another directory which is private to numpy. David From matthew.brett at gmail.com Mon Feb 4 16:06:00 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 4 Feb 2013 13:06:00 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? 
In-Reply-To: References: Message-ID: Hi, On Mon, Feb 4, 2013 at 12:55 PM, Ralf Gommers wrote: > > > > On Mon, Feb 4, 2013 at 9:39 PM, Nathaniel Smith wrote: >> >> On Mon, Feb 4, 2013 at 12:36 PM, Matthew Brett >> wrote: >> > Hi, >> > >> > On Mon, Feb 4, 2013 at 12:27 PM, Ond?ej ?ert?k >> > wrote: >> >> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau >> >> wrote: >> >>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >> >>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett >> >>>> wrote: >> >>>>> Hi, >> >>>>> >> >>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >> >>>> >> >>>> related: >> >>>> Is there any chance to get newer mingw or mingw-w64 support >> >>>> "soonish"? >> >>> >> >>> The problem has no solution until we can restrict support to windows 7 >> >>> and above. Otherwise, any acceptable solution would require user to be >> >>> an admin. >> >> >> >> The installer is built with this VM/scripts: >> >> >> >> https://github.com/certik/numpy-vendor >> >> >> >> currently the VM itself is 32 bit. I think that might be upgraded to >> >> 64bit, >> >> and maybe it's possible to use 64 bit Wine: >> >> >> >> http://wiki.winehq.org/Wine64 >> >> >> >> but then we would need to figure out how to use Mingw with 64 bits. >> >> >> >> I would be very happy to accept patches to the above repository. >> >> >> >> Alternatively, if the actual Windows 64bit machine would have to be >> >> used, >> >> is there any way to automate the process? Would you compile it from >> >> command line >> >> (cmd.exe), just like I do in Wine? I would much prefer if we can figure >> >> out >> >> how to do this in Wine, so that the process can be automated and other >> >> people >> >> can easily reproduce it. >> > >> > I wonder whether getting ming64 to work on 64 bit Wine is too hard to >> > get working before the release? I often can't get 32-bit Wine >> > working, and we've apparently got problems with mingw64 on native >> > windows. >> > >> > As a short term fix, how about an Amazon image with the Windows 64 bit >> > compilers on it? > > > -1 on providing an "official" solution which will require admin rights for > the produced installer and not work for scipy. Sorry if I am being slow, but I don't follow. Am I right in thinking that we can currently build numpy 64 bit installers with the Microsoft tools, and that these would be distributable without admin rights for windows >=XP ? Why would this solution not work for scipy? Cheers, Matthew From ralf.gommers at gmail.com Mon Feb 4 16:15:17 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 4 Feb 2013 22:15:17 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 10:06 PM, Matthew Brett wrote: > Hi, > > On Mon, Feb 4, 2013 at 12:55 PM, Ralf Gommers > wrote: > > > > > > > > On Mon, Feb 4, 2013 at 9:39 PM, Nathaniel Smith wrote: > >> > >> On Mon, Feb 4, 2013 at 12:36 PM, Matthew Brett > > >> wrote: > >> > Hi, > >> > > >> > On Mon, Feb 4, 2013 at 12:27 PM, Ond?ej ?ert?k < > ondrej.certik at gmail.com> > >> > wrote: > >> >> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau > > >> >> wrote: > >> >>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: > >> >>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett > >> >>>> wrote: > >> >>>>> Hi, > >> >>>>> > >> >>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. > >> >>>> > >> >>>> related: > >> >>>> Is there any chance to get newer mingw or mingw-w64 support > >> >>>> "soonish"? 
> >> >>> > >> >>> The problem has no solution until we can restrict support to > windows 7 > >> >>> and above. Otherwise, any acceptable solution would require user to > be > >> >>> an admin. > >> >> > >> >> The installer is built with this VM/scripts: > >> >> > >> >> https://github.com/certik/numpy-vendor > >> >> > >> >> currently the VM itself is 32 bit. I think that might be upgraded to > >> >> 64bit, > >> >> and maybe it's possible to use 64 bit Wine: > >> >> > >> >> http://wiki.winehq.org/Wine64 > >> >> > >> >> but then we would need to figure out how to use Mingw with 64 bits. > >> >> > >> >> I would be very happy to accept patches to the above repository. > >> >> > >> >> Alternatively, if the actual Windows 64bit machine would have to be > >> >> used, > >> >> is there any way to automate the process? Would you compile it from > >> >> command line > >> >> (cmd.exe), just like I do in Wine? I would much prefer if we can > figure > >> >> out > >> >> how to do this in Wine, so that the process can be automated and > other > >> >> people > >> >> can easily reproduce it. > >> > > >> > I wonder whether getting ming64 to work on 64 bit Wine is too hard to > >> > get working before the release? I often can't get 32-bit Wine > >> > working, and we've apparently got problems with mingw64 on native > >> > windows. > >> > > >> > As a short term fix, how about an Amazon image with the Windows 64 bit > >> > compilers on it? > > > > > > -1 on providing an "official" solution which will require admin rights > for > > the produced installer and not work for scipy. > > Sorry if I am being slow, but I don't follow. > > Am I right in thinking that we can currently build numpy 64 bit > installers with the Microsoft tools, and that these would be > distributable without admin rights for windows >=XP ? > MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you provide an Amazon image for those? > Why would this solution not work for scipy? > It would. gfortran doesn't. Looking at your mail, I still read it as providing an image with mingw64 + gfortran. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Feb 4 16:24:26 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 4 Feb 2013 16:24:26 -0500 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 4:15 PM, Ralf Gommers wrote: > > > > On Mon, Feb 4, 2013 at 10:06 PM, Matthew Brett > wrote: >> >> Hi, >> >> On Mon, Feb 4, 2013 at 12:55 PM, Ralf Gommers >> wrote: >> > >> > >> > >> > On Mon, Feb 4, 2013 at 9:39 PM, Nathaniel Smith wrote: >> >> >> >> On Mon, Feb 4, 2013 at 12:36 PM, Matthew Brett >> >> >> >> wrote: >> >> > Hi, >> >> > >> >> > On Mon, Feb 4, 2013 at 12:27 PM, Ond?ej ?ert?k >> >> > >> >> > wrote: >> >> >> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau >> >> >> >> >> >> wrote: >> >> >>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >> >> >>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett >> >> >>>> wrote: >> >> >>>>> Hi, >> >> >>>>> >> >> >>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >> >> >>>> >> >> >>>> related: >> >> >>>> Is there any chance to get newer mingw or mingw-w64 support >> >> >>>> "soonish"? >> >> >>> >> >> >>> The problem has no solution until we can restrict support to >> >> >>> windows 7 >> >> >>> and above. Otherwise, any acceptable solution would require user to >> >> >>> be >> >> >>> an admin. 
>> >> >> >> >> >> The installer is built with this VM/scripts: >> >> >> >> >> >> https://github.com/certik/numpy-vendor >> >> >> >> >> >> currently the VM itself is 32 bit. I think that might be upgraded to >> >> >> 64bit, >> >> >> and maybe it's possible to use 64 bit Wine: >> >> >> >> >> >> http://wiki.winehq.org/Wine64 >> >> >> >> >> >> but then we would need to figure out how to use Mingw with 64 bits. >> >> >> >> >> >> I would be very happy to accept patches to the above repository. >> >> >> >> >> >> Alternatively, if the actual Windows 64bit machine would have to be >> >> >> used, >> >> >> is there any way to automate the process? Would you compile it from >> >> >> command line >> >> >> (cmd.exe), just like I do in Wine? I would much prefer if we can >> >> >> figure >> >> >> out >> >> >> how to do this in Wine, so that the process can be automated and >> >> >> other >> >> >> people >> >> >> can easily reproduce it. >> >> > >> >> > I wonder whether getting ming64 to work on 64 bit Wine is too hard to >> >> > get working before the release? I often can't get 32-bit Wine >> >> > working, and we've apparently got problems with mingw64 on native >> >> > windows. >> >> > >> >> > As a short term fix, how about an Amazon image with the Windows 64 >> >> > bit >> >> > compilers on it? >> > >> > >> > -1 on providing an "official" solution which will require admin rights >> > for >> > the produced installer and not work for scipy. >> >> Sorry if I am being slow, but I don't follow. >> >> Am I right in thinking that we can currently build numpy 64 bit >> installers with the Microsoft tools, and that these would be >> distributable without admin rights for windows >=XP ? > > > MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you > provide an Amazon image for those? Related question Would scipy and similar packages complain when we try to build them without MKL against an MKL numpy. Gohlke has the compatibility notes to warn users. If incompatible files are on numpy sourceforge for download, then users might install by accident incompatible versions, or not? Josef > >> >> Why would this solution not work for scipy? > > > It would. gfortran doesn't. Looking at your mail, I still read it as > providing an image with mingw64 + gfortran. > > Ralf > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From ralf.gommers at gmail.com Mon Feb 4 16:48:43 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 4 Feb 2013 22:48:43 +0100 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 7:04 AM, Ond?ej ?ert?k wrote: > Hi, > > Here are the last open issues for 1.7, there are 9 of them: > > https://github.com/numpy/numpy/issues?milestone=3&sort=updated&state=open > > >From these, 3 are very simple PRs that I just posted. > Let's polish these, get them in. > > I propose to release rc2 after that and if all is ok, do the final > release. Some of the > issues are not fully addressed, but I don't think we should be holding > the release any longer. > Let me know if that is ok with you. > I'm OK with that. None of the issues are hard blockers. > > My apologies for a delay on my side --- my son was just born 2 weeks > ago Congrats, Ondrej! Ralf > and I had to submit > an important article, with the deadline yesterday. Things are settled > now and I now have time > to get this done. 
> > Ondrej > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Feb 4 17:38:38 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 4 Feb 2013 14:38:38 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: Hi, On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers wrote: > > > > On Mon, Feb 4, 2013 at 10:06 PM, Matthew Brett > wrote: >> >> Hi, >> >> On Mon, Feb 4, 2013 at 12:55 PM, Ralf Gommers >> wrote: >> > >> > >> > >> > On Mon, Feb 4, 2013 at 9:39 PM, Nathaniel Smith wrote: >> >> >> >> On Mon, Feb 4, 2013 at 12:36 PM, Matthew Brett >> >> >> >> wrote: >> >> > Hi, >> >> > >> >> > On Mon, Feb 4, 2013 at 12:27 PM, Ond?ej ?ert?k >> >> > >> >> > wrote: >> >> >> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau >> >> >> >> >> >> wrote: >> >> >>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >> >> >>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett >> >> >>>> wrote: >> >> >>>>> Hi, >> >> >>>>> >> >> >>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >> >> >>>> >> >> >>>> related: >> >> >>>> Is there any chance to get newer mingw or mingw-w64 support >> >> >>>> "soonish"? >> >> >>> >> >> >>> The problem has no solution until we can restrict support to >> >> >>> windows 7 >> >> >>> and above. Otherwise, any acceptable solution would require user to >> >> >>> be >> >> >>> an admin. >> >> >> >> >> >> The installer is built with this VM/scripts: >> >> >> >> >> >> https://github.com/certik/numpy-vendor >> >> >> >> >> >> currently the VM itself is 32 bit. I think that might be upgraded to >> >> >> 64bit, >> >> >> and maybe it's possible to use 64 bit Wine: >> >> >> >> >> >> http://wiki.winehq.org/Wine64 >> >> >> >> >> >> but then we would need to figure out how to use Mingw with 64 bits. >> >> >> >> >> >> I would be very happy to accept patches to the above repository. >> >> >> >> >> >> Alternatively, if the actual Windows 64bit machine would have to be >> >> >> used, >> >> >> is there any way to automate the process? Would you compile it from >> >> >> command line >> >> >> (cmd.exe), just like I do in Wine? I would much prefer if we can >> >> >> figure >> >> >> out >> >> >> how to do this in Wine, so that the process can be automated and >> >> >> other >> >> >> people >> >> >> can easily reproduce it. >> >> > >> >> > I wonder whether getting ming64 to work on 64 bit Wine is too hard to >> >> > get working before the release? I often can't get 32-bit Wine >> >> > working, and we've apparently got problems with mingw64 on native >> >> > windows. >> >> > >> >> > As a short term fix, how about an Amazon image with the Windows 64 >> >> > bit >> >> > compilers on it? >> > >> > >> > -1 on providing an "official" solution which will require admin rights >> > for >> > the produced installer and not work for scipy. >> >> Sorry if I am being slow, but I don't follow. >> >> Am I right in thinking that we can currently build numpy 64 bit >> installers with the Microsoft tools, and that these would be >> distributable without admin rights for windows >=XP ? > > > MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you > provide an Amazon image for those? You can make an image that is not public, I guess. 
I suppose anyone who uses the image would have to have their own licenses for the Intel stuff? Does anyone have experience of this? Does ATLAS / openBLAS not build for windows? Sorry for my ignorance. Cheers, Matthew From robert.kern at gmail.com Mon Feb 4 18:04:35 2013 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 4 Feb 2013 23:04:35 +0000 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett wrote: > Hi, > > On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers wrote: >> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you >> provide an Amazon image for those? > > You can make an image that is not public, I guess. I suppose anyone > who uses the image would have to have their own licenses for the Intel > stuff? Does anyone have experience of this? You need to purchase one license per developer: http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 -- Robert Kern From charlesr.harris at gmail.com Mon Feb 4 18:46:32 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 4 Feb 2013 16:46:32 -0700 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern wrote: > On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett > wrote: > > Hi, > > > > On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers > wrote: > > >> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you > >> provide an Amazon image for those? > > > > You can make an image that is not public, I guess. I suppose anyone > > who uses the image would have to have their own licenses for the Intel > > stuff? Does anyone have experience of this? > > You need to purchase one license per developer: > > > http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 > > I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It would be a bit much to get it implemented in the next week or two. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Mon Feb 4 18:49:15 2013 From: cgohlke at uci.edu (Christoph Gohlke) Date: Mon, 04 Feb 2013 15:49:15 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: <511048FB.9050602@uci.edu> On 2/4/2013 12:59 PM, David Cournapeau wrote: > On Mon, Feb 4, 2013 at 8:27 PM, Ond?ej ?ert?k wrote: >> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote: >>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote: >>>>> Hi, >>>>> >>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >>>> >>>> related: >>>> Is there any chance to get newer mingw or mingw-w64 support "soonish"? >>> >>> The problem has no solution until we can restrict support to windows 7 >>> and above. Otherwise, any acceptable solution would require user to be >>> an admin. >> >> The installer is built with this VM/scripts: > > I am not sure whether you're replying to my observation or just giving > a status update: with mingw-w64 (or recent mingw), the built installer > will depend on several .dll (libgcc_s_sjil.dll) that we can't easily > distribute. The only place we can realistically put them is in > C:\Python$VERSION (or wherever python happens to be installed), and I > think it is a very bad idea to install dll from NumPy there. 
In > Windows 2008 and above, one can refer in .pyd where to look for dlls > in another directory which is private to numpy. > > David If I understand correctly the problem is distributing dependency/runtime DLLs with a package and ensuring the DLLs are found by Windows when the pyd extensions are imported? For numpy-MKL and other packages I include/install the extra DLLs in the package directories and, if necessary, (i) append the package directory to os.environ['PATH'] or (ii) "pre-load" the DLLs into the process using Ctypes, both early in the package's main __init__.py. No admin rights are required. Christoph From ondrej.certik at gmail.com Mon Feb 4 19:27:15 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 4 Feb 2013 16:27:15 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: <511048FB.9050602@uci.edu> References: <511048FB.9050602@uci.edu> Message-ID: On Mon, Feb 4, 2013 at 3:49 PM, Christoph Gohlke wrote: > On 2/4/2013 12:59 PM, David Cournapeau wrote: >> On Mon, Feb 4, 2013 at 8:27 PM, Ond?ej ?ert?k wrote: >>> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote: >>>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >>>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote: >>>>>> Hi, >>>>>> >>>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >>>>> >>>>> related: >>>>> Is there any chance to get newer mingw or mingw-w64 support "soonish"? >>>> >>>> The problem has no solution until we can restrict support to windows 7 >>>> and above. Otherwise, any acceptable solution would require user to be >>>> an admin. >>> >>> The installer is built with this VM/scripts: >> >> I am not sure whether you're replying to my observation or just giving >> a status update: with mingw-w64 (or recent mingw), the built installer I was just giving a general status, sorry about not being clear. >> will depend on several .dll (libgcc_s_sjil.dll) that we can't easily >> distribute. The only place we can realistically put them is in >> C:\Python$VERSION (or wherever python happens to be installed), and I >> think it is a very bad idea to install dll from NumPy there. In >> Windows 2008 and above, one can refer in .pyd where to look for dlls >> in another directory which is private to numpy. Yes. >> >> David > > If I understand correctly the problem is distributing dependency/runtime > DLLs with a package and ensuring the DLLs are found by Windows when the > pyd extensions are imported? > For numpy-MKL and other packages I include/install the extra DLLs in the > package directories and, if necessary, (i) append the package directory > to os.environ['PATH'] or (ii) "pre-load" the DLLs into the process using > Ctypes, both early in the package's main __init__.py. No admin rights > are required. So that seems to be the only option. Is there any other solution? Ondrej From cournape at gmail.com Mon Feb 4 19:53:27 2013 From: cournape at gmail.com (David Cournapeau) Date: Tue, 5 Feb 2013 00:53:27 +0000 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? 
In-Reply-To: References: <511048FB.9050602@uci.edu> Message-ID: On Tue, Feb 5, 2013 at 12:27 AM, Ond?ej ?ert?k wrote: > On Mon, Feb 4, 2013 at 3:49 PM, Christoph Gohlke wrote: >> On 2/4/2013 12:59 PM, David Cournapeau wrote: >>> On Mon, Feb 4, 2013 at 8:27 PM, Ond?ej ?ert?k wrote: >>>> On Sun, Feb 3, 2013 at 2:57 AM, David Cournapeau wrote: >>>>> On Sun, Feb 3, 2013 at 12:28 AM, wrote: >>>>>> On Sat, Feb 2, 2013 at 6:14 PM, Matthew Brett wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I see there is no Windows 64 bit installer for the 1.7 rc1. >>>>>> >>>>>> related: >>>>>> Is there any chance to get newer mingw or mingw-w64 support "soonish"? >>>>> >>>>> The problem has no solution until we can restrict support to windows 7 >>>>> and above. Otherwise, any acceptable solution would require user to be >>>>> an admin. >>>> >>>> The installer is built with this VM/scripts: >>> >>> I am not sure whether you're replying to my observation or just giving >>> a status update: with mingw-w64 (or recent mingw), the built installer > > I was just giving a general status, sorry about not being clear. > >>> will depend on several .dll (libgcc_s_sjil.dll) that we can't easily >>> distribute. The only place we can realistically put them is in >>> C:\Python$VERSION (or wherever python happens to be installed), and I >>> think it is a very bad idea to install dll from NumPy there. In >>> Windows 2008 and above, one can refer in .pyd where to look for dlls >>> in another directory which is private to numpy. > > Yes. > >>> >>> David >> >> If I understand correctly the problem is distributing dependency/runtime >> DLLs with a package and ensuring the DLLs are found by Windows when the >> pyd extensions are imported? >> For numpy-MKL and other packages I include/install the extra DLLs in the >> package directories and, if necessary, (i) append the package directory >> to os.environ['PATH'] or (ii) "pre-load" the DLLs into the process using >> Ctypes, both early in the package's main __init__.py. No admin rights >> are required. > > So that seems to be the only option. Is there any other solution? I don't think it is an acceptable solution in general: modifying the PATH in a package is a big no-no, even worse than adding the dll in $prefix. I have not thought about pre-loading, but if it works, that may be a workaround. That's a very ugly workaround, though... David From matthew.brett at gmail.com Mon Feb 4 20:09:50 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 4 Feb 2013 17:09:50 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: Hi, On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris wrote: > > > On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern wrote: >> >> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett >> wrote: >> > Hi, >> > >> > On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers >> > wrote: >> >> >> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you >> >> provide an Amazon image for those? >> > >> > You can make an image that is not public, I guess. I suppose anyone >> > who uses the image would have to have their own licenses for the Intel >> > stuff? Does anyone have experience of this? >> >> You need to purchase one license per developer: >> >> >> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 >> > > I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It would be a > bit much to get it implemented in the next week or two. 
The problem with not providing these binaries is that they are at the
bottom of everyone's stack, so a delay in numpy holds everyone back.

I can't find completely convincing stats, but it looks as though 64
bit windows 7 is now the most common version of Windows, at least for
Gamers [1] around now, and it was getting that way for everyone in
2010 [2].

I don't think it reflects well on us that we don't appear to
support 64 bits out of the box; just for example, R already has a 32
bit / 64 bit installer.

If I understand correctly, the options for doing this right now are:

1) Minimal cost in time : ask Christophe nicely whether we can
distribute his binaries via the Numpy page
2) Small cost in time / money : pay for licenses for Ondrej or me or
someone to install the dependencies on my Berkeley machine / an Amazon
image.  Ralf: I suppose we qualify for the free licenses you referred
to earlier? [3]  I guess that covers us for the Numpy build? Then
it's only a question of paying for ifort licenses when it comes to do
the Scipy build?

So, if the cost of option 2 is too high, how about option 1?

Cheers,

Matthew

[1] http://store.steampowered.com/hwsurvey
[2] http://blogs.windows.com/windows/b/bloggingwindows/archive/2010/07/08/64-bit-momentum-surges-with-windows-7.aspx
[3] http://numpy-discussion.10968.n7.nabble.com/MKL-licenses-for-core-scientific-Python-projects-td32530.html

From matthew.brett at gmail.com  Mon Feb 4 20:14:01 2013
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 4 Feb 2013 17:14:01 -0800
Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7?
In-Reply-To:
References:
Message-ID:

Hi,

On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett wrote:
> Hi,
>
> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris
> wrote:
>>
>>
>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern wrote:
>>>
>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett
>>> wrote:
>>> > Hi,
>>> >
>>> > On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers
>>> > wrote:
>>>
>>> >> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you
>>> >> provide an Amazon image for those?
>>> >
>>> > You can make an image that is not public, I guess.  I suppose anyone
>>> > who uses the image would have to have their own licenses for the Intel
>>> > stuff?  Does anyone have experience of this?
>>>
>>> You need to purchase one license per developer:
>>>
>>>
>>> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1
>>>
>>
>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It would be a
>> bit much to get it implemented in the next week or two.
>
> The problem with not providing these binaries is that they are at the
> bottom of everyone's stack, so a delay in numpy holds everyone back.
>
> I can't find completely convincing stats, but it looks as though 64
> bit windows 7 is now the most common version of Windows, at least for
> Gamers [1] around now, and it was getting that way for everyone in
> 2010 [2].
>
> It don't think it reflects well on on us that we don't appear to
> support 64 bits out of the box; just for example, R already has a 32
> bit / 64 bit installer.
>
> If I understand correctly, the options for doing this right now are:
>
> 1) Minimal cost in time : ask Christophe nicely whether we can
> distribute his binaries via the Numpy page
> 2) Small cost in time / money : pay for licenses for Ondrej or me or
> someone to install the dependencies on my Berkeley machine / an Amazon
> image.
David - obviously if you were willing to do this - that would be far preferable to me stumbling along. Can provide machine and beer IOUs. Cheers, Matthew From lists at hilboll.de Tue Feb 5 03:38:16 2013 From: lists at hilboll.de (Andreas Hilboll) Date: Tue, 05 Feb 2013 09:38:16 +0100 Subject: [Numpy-discussion] savez documentation flaw Message-ID: <5110C4F8.10408@hilboll.de> Hi, I noticed that on http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html there's a "see also" to a function numpy.savez_compressed, which doesn't seem to exist (neither on my system nor in the online documentation). What would be the easiest way to find out where to fix this? For someone without deeper knowledge of how numpy sources are organized it's hard to find the place where to fix things. How about adding the "source" link to the docstrings via sphinx, like in scipy? Cheers, Andreas. From robert.kern at gmail.com Tue Feb 5 04:17:28 2013 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 5 Feb 2013 09:17:28 +0000 Subject: [Numpy-discussion] savez documentation flaw In-Reply-To: <5110C4F8.10408@hilboll.de> References: <5110C4F8.10408@hilboll.de> Message-ID: On Tue, Feb 5, 2013 at 8:38 AM, Andreas Hilboll wrote: > Hi, > > I noticed that on > http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html > there's a "see also" to a function numpy.savez_compressed, which doesn't > seem to exist (neither on my system nor in the online documentation). > > What would be the easiest way to find out where to fix this? For someone > without deeper knowledge of how numpy sources are organized it's hard to > find the place where to fix things. How about adding the "source" link > to the docstrings via sphinx, like in scipy? Click on the "Edit Page" link on the left. Follow the instructions on the front page of the numpy Docstring Editor site to sign up: http://docs.scipy.org/numpy/Front%20Page/ -- Robert Kern From scott.sinclair.za at gmail.com Tue Feb 5 05:40:53 2013 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Tue, 5 Feb 2013 12:40:53 +0200 Subject: [Numpy-discussion] savez documentation flaw In-Reply-To: <5110C4F8.10408@hilboll.de> References: <5110C4F8.10408@hilboll.de> Message-ID: On 5 February 2013 10:38, Andreas Hilboll wrote: > I noticed that on > http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html > there's a "see also" to a function numpy.savez_compressed, which doesn't > seem to exist (neither on my system nor in the online documentation). Seems like a problem with the online documentation, savez_compressed does exist on my numpy 1.6.2 and on master... The docstrings for these functions are in numpy/lib/npyio.py. It's sometimes easiest to locate the docstrings by following the source link in the Doceditor (in this case from http://docs.scipy.org/numpy/docs/numpy.lib.npyio.savez/). Cheers, Scott From p.j.a.cock at googlemail.com Tue Feb 5 09:01:20 2013 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Tue, 5 Feb 2013 14:01:20 +0000 Subject: [Numpy-discussion] Will numpy 1.7.0 final be binary compatible with the rc? Message-ID: Hello all, Will the numpy 1.7.0 'final' be binary compatible with the release candidate(s)? i.e. Would it be safe for me to release a Windows installer for a package using the NumPy C API compiled against the NumPy 1.7.0rc? I'm specifically interested in Python 3.3, and NumPy 1.7 will be the first release to support that. For older versions of Python I can use NumPy 1.6 instead. 
Thanks, Peter From jniehof at lanl.gov Tue Feb 5 09:03:48 2013 From: jniehof at lanl.gov (Jonathan T. Niehof) Date: Tue, 05 Feb 2013 07:03:48 -0700 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: <51111144.4080608@lanl.gov> On 02/04/2013 06:09 PM, Matthew Brett wrote: > The problem with not providing these binaries is that they are at the > bottom of everyone's stack, so a delay in numpy holds everyone back. OTOH, so far it's been an *excellent* excuse for those of us further up the stack not to make a 64-bit binary. It sounds like we're completely out of luck on 64-bit Windows with free tools? So the rest of the community is going to face shelling out for compiler licenses as well? -- Jonathan Niehof ISR-3 Space Data Systems Los Alamos National Laboratory MS-D466 Los Alamos, NM 87545 Phone: 505-667-9595 email: jniehof at lanl.gov Correspondence / Technical data or Software Publicly Available From matthew.brett at gmail.com Tue Feb 5 13:48:02 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Feb 2013 10:48:02 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: <51111144.4080608@lanl.gov> References: <51111144.4080608@lanl.gov> Message-ID: Hi, On Tue, Feb 5, 2013 at 6:03 AM, Jonathan T. Niehof wrote: > On 02/04/2013 06:09 PM, Matthew Brett wrote: > >> The problem with not providing these binaries is that they are at the >> bottom of everyone's stack, so a delay in numpy holds everyone back. > > OTOH, so far it's been an *excellent* excuse for those of us further up > the stack not to make a 64-bit binary. It sounds like we're completely > out of luck on 64-bit Windows with free tools? So the rest of the > community is going to face shelling out for compiler licenses as well? Normally you'd only need the free (as in beer) MS C compilers. You'd need a (possibly free) MKL license if you wanted to link against an optimized blas / lapack (it seems), and you'd need to use the Intel ifort compiler if you had fortran code in there (I think). Cheers, Matthew From matthew.brett at gmail.com Tue Feb 5 13:51:44 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Feb 2013 10:51:44 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: Message-ID: Hi, On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett wrote: > Hi, > > On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris > wrote: >> >> >> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern wrote: >>> >>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett >>> wrote: >>> > Hi, >>> > >>> > On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers >>> > wrote: >>> >>> >> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you >>> >> provide an Amazon image for those? >>> > >>> > You can make an image that is not public, I guess. I suppose anyone >>> > who uses the image would have to have their own licenses for the Intel >>> > stuff? Does anyone have experience of this? >>> >>> You need to purchase one license per developer: >>> >>> >>> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 >>> >> >> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It would be a >> bit much to get it implemented in the next week or two. > > The problem with not providing these binaries is that they are at the > bottom of everyone's stack, so a delay in numpy holds everyone back. 
> > I can't find completely convincing stats, but it looks as though 64 > bit windows 7 is now the most common version of Windows, at least for > Gamers [1] around now, and it was getting that way for everyone in > 2010 [2]. > > It don't think it reflects well on on us that we don't appear to > support 64 bits out of the box; just for example, R already has a 32 > bit / 64 bit installer. > > If I understand correctly, the options for doing this right now are: > > 1) Minimal cost in time : ask Christophe nicely whether we can > distribute his binaries via the Numpy page > 2) Small cost in time / money : pay for licenses for Ondrej or me or > someone to install the dependencies on my Berkeley machine / an Amazon > image. In order not to leave this discussion without a resolution: Christophe - would you allow us to distribute your numpy binaries for 1.7 from the numpy sourceforge page? Cheers, Matthew From ralf.gommers at gmail.com Tue Feb 5 15:22:37 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 5 Feb 2013 21:22:37 +0100 Subject: [Numpy-discussion] Will numpy 1.7.0 final be binary compatible with the rc? In-Reply-To: References: Message-ID: On Tue, Feb 5, 2013 at 3:01 PM, Peter Cock wrote: > Hello all, > > Will the numpy 1.7.0 'final' be binary compatible with the release > candidate(s)? i.e. Would it be safe for me to release a Windows > installer for a package using the NumPy C API compiled against > the NumPy 1.7.0rc? > Yes, that should be safe. Ralf > > I'm specifically interested in Python 3.3, and NumPy 1.7 will be > the first release to support that. For older versions of Python I > can use NumPy 1.6 instead. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Feb 5 16:23:34 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 5 Feb 2013 14:23:34 -0700 Subject: [Numpy-discussion] Dealing with the mode argument in qr. Message-ID: Hi All, This post is to bring the discussion of PR #2965to the attention of the list. There are at least three issues in play here. 1) The PR adds modes 'big' and 'thin' to the current modes 'full', 'r', 'economic' for qr factorization. The problem is that the current 'full' is actually 'thin' and 'big' should be 'full'. The solution here was to raise a FutureWarning on use of 'full', alias it to 'thin' for the time being, and at some distant time change 'full' to alias 'big'. 2) The 'economic' mode serves little purpose. I propose to deprecate it and add a 'qrf' mode instead, corresponding to scipy's 'raw' mode. We can't use 'raw' itself as traditionally the mode may be specified using the first letter only and that leads to a conflict with 'r'. 3) As suggested in 2, the use of single letter abbreviations can constrain the options in choosing mode names and they are not as informative as the full name. A possibility here is to deprecate the use of the abbreviations in favor of the full names. A longer term problem is the divergence between the numpy and scipy versions of qr. The divergence is enough that I don't see any easy way to come to a common interface, but that is something that would be desirable if possible. Thoughts? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Tue Feb 5 18:04:39 2013 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 05 Feb 2013 15:04:39 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? 
In-Reply-To: References: Message-ID: <51119007.6090806@uci.edu> On 2/5/2013 10:51 AM, Matthew Brett wrote: > Hi, > > On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett wrote: >> Hi, >> >> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris >> wrote: >>> >>> >>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern wrote: >>>> >>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett >>>> wrote: >>>>> Hi, >>>>> >>>>> On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers >>>>> wrote: >>>> >>>>>> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you >>>>>> provide an Amazon image for those? >>>>> >>>>> You can make an image that is not public, I guess. I suppose anyone >>>>> who uses the image would have to have their own licenses for the Intel >>>>> stuff? Does anyone have experience of this? >>>> >>>> You need to purchase one license per developer: >>>> >>>> >>>> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 >>>> >>> >>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It would be a >>> bit much to get it implemented in the next week or two. >> >> The problem with not providing these binaries is that they are at the >> bottom of everyone's stack, so a delay in numpy holds everyone back. >> >> I can't find completely convincing stats, but it looks as though 64 >> bit windows 7 is now the most common version of Windows, at least for >> Gamers [1] around now, and it was getting that way for everyone in >> 2010 [2]. >> >> It don't think it reflects well on on us that we don't appear to >> support 64 bits out of the box; just for example, R already has a 32 >> bit / 64 bit installer. >> >> If I understand correctly, the options for doing this right now are: >> >> 1) Minimal cost in time : ask Christophe nicely whether we can >> distribute his binaries via the Numpy page >> 2) Small cost in time / money : pay for licenses for Ondrej or me or >> someone to install the dependencies on my Berkeley machine / an Amazon >> image. > > In order not to leave this discussion without a resolution: > > Christophe - would you allow us to distribute your numpy binaries for > 1.7 from the numpy sourceforge page? > > Cheers, > > Matthew I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy compiled with MSVC compilers and linked to Intel's MKL) for official numpy releases. However: 1) There seems to be no real consensus and urge for doing this. Using a free toolchain capable of building the whole scipy-stack would be much preferred. Several 64 bit Python distributions containing numpy-MKL are already available, some for free. 2) Releasing 64 bit numpy without matching scipy binaries would make little sense to me. 3) Please do not just redistribute the binaries from my website and declare them official. They might contain unreleased fixes from git master and pull requests that are needed for my work and other packages. 4) Numpy-MKL requires the Intel runtime DLLs (MKL is linked statically btw). I ship those with the installers and append the directory containing the DLLs to os.environ['PATH'] in numpy/__init__.py. This is a big no-no according to numpy developers. I don't agree. Anyway, those changes are not in the numpy source repositories. 5) My numpy-MKL installers are Python distutils bdist_wininst installers. That means if Python was installed for all users, installing numpy-MKL on Windows >6.0 will prompt for UAC elevation. Another no-no? 
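(To make point 4 concrete: the PATH tweak amounts to a few lines near the top of numpy/__init__.py in my builds, roughly like the sketch below. The directory name is illustrative only, and none of this is in the upstream numpy sources.)

import os

_dll_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'core')
if os.path.isdir(_dll_dir) and _dll_dir not in os.environ.get('PATH', ''):
    # make the bundled Intel runtime DLLs findable by the extension modules
    os.environ['PATH'] = os.environ.get('PATH', '') + os.pathsep + _dll_dir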
Christoph From matthew.brett at gmail.com Tue Feb 5 19:32:52 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Feb 2013 16:32:52 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: <51119007.6090806@uci.edu> References: <51119007.6090806@uci.edu> Message-ID: Hi, On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke wrote: > On 2/5/2013 10:51 AM, Matthew Brett wrote: >> Hi, >> >> On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett wrote: >>> Hi, >>> >>> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris >>> wrote: >>>> >>>> >>>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern wrote: >>>>> >>>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett >>>>> wrote: >>>>>> Hi, >>>>>> >>>>>> On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers >>>>>> wrote: >>>>> >>>>>>> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can you >>>>>>> provide an Amazon image for those? >>>>>> >>>>>> You can make an image that is not public, I guess. I suppose anyone >>>>>> who uses the image would have to have their own licenses for the Intel >>>>>> stuff? Does anyone have experience of this? >>>>> >>>>> You need to purchase one license per developer: >>>>> >>>>> >>>>> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 >>>>> >>>> >>>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It would be a >>>> bit much to get it implemented in the next week or two. >>> >>> The problem with not providing these binaries is that they are at the >>> bottom of everyone's stack, so a delay in numpy holds everyone back. >>> >>> I can't find completely convincing stats, but it looks as though 64 >>> bit windows 7 is now the most common version of Windows, at least for >>> Gamers [1] around now, and it was getting that way for everyone in >>> 2010 [2]. >>> >>> It don't think it reflects well on on us that we don't appear to >>> support 64 bits out of the box; just for example, R already has a 32 >>> bit / 64 bit installer. >>> >>> If I understand correctly, the options for doing this right now are: >>> >>> 1) Minimal cost in time : ask Christophe nicely whether we can >>> distribute his binaries via the Numpy page >>> 2) Small cost in time / money : pay for licenses for Ondrej or me or >>> someone to install the dependencies on my Berkeley machine / an Amazon >>> image. >> >> In order not to leave this discussion without a resolution: >> >> Christophe - would you allow us to distribute your numpy binaries for >> 1.7 from the numpy sourceforge page? >> >> Cheers, >> >> Matthew > > > I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy > compiled with MSVC compilers and linked to Intel's MKL) for official > numpy releases. Thank you - that is great. > However: > > 1) There seems to be no real consensus and urge for doing this. I certainly feel the urge and feel it strongly. As a packager for two or three projects myself, it's a royal pain having to tell someone to go to two different places for binaries depending on the number of bits of their Windows. I think Chuck was worried about the time it would take to do it, and I think you've already solved this problem. Ralf was worried about Scipy - see below. > Using a > free toolchain capable of building the whole scipy-stack would be much > preferred. That's true, but there seems general agreement this is not practical in the very near future. > Several 64 bit Python distributions containing numpy-MKL are > already available, some for free. You mean EPD and AnacondaCE? 
I don't think we should withhold easily available vanilla builds because they are also available in company-sponsored projects. Python.org provides windows builds even though ActiveState is free-as-in-beer. > 2) Releasing 64 bit numpy without matching scipy binaries would make > little sense to me. Would you consider also releasing your scipy binaries? > 3) Please do not just redistribute the binaries from my website and > declare them official. They might contain unreleased fixes from git > master and pull requests that are needed for my work and other packages. Right - would you consider then being the release provider for numpy / scipy binaries on windows, much as it appears that Martin v Lowis supplies Windows builds for Python? > 4) Numpy-MKL requires the Intel runtime DLLs (MKL is linked statically > btw). I ship those with the installers and append the directory > containing the DLLs to os.environ['PATH'] in numpy/__init__.py. This is > a big no-no according to numpy developers. I don't agree. Anyway, those > changes are not in the numpy source repositories. > > 5) My numpy-MKL installers are Python distutils bdist_wininst > installers. That means if Python was installed for all users, installing > numpy-MKL on Windows >6.0 will prompt for UAC elevation. Another no-no? I defer to others on these ones, Thanks a lot, Matthew From chris.barker at noaa.gov Tue Feb 5 19:55:28 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Tue, 5 Feb 2013 16:55:28 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: On Tue, Feb 5, 2013 at 4:32 PM, Matthew Brett wrote: >> 4) Numpy-MKL requires the Intel runtime DLLs (MKL is linked statically >> btw). I ship those with the installers and append the directory >> containing the DLLs to os.environ['PATH'] in numpy/__init__.py. This is >> a big no-no according to numpy developers. I don't agree. Anyway, those >> changes are not in the numpy source repositories. I think you pointed out that another option is to load the dlls with ctypes -- is it much work to make that change? >> 5) My numpy-MKL installers are Python distutils bdist_wininst >> installers. That means if Python was installed for all users, installing >> numpy-MKL on Windows >6.0 will prompt for UAC elevation. Another no-no? not sure about the UAC elevation -- but: 1) most folks use bdist_wininst for Windows binaries -- including the current numpy builds, and python.org python -- yes? 2) UAC aside, It would be great to have binaries that could be used with virtualenv -- binary eggs? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From matthew.brett at gmail.com Tue Feb 5 20:01:29 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 5 Feb 2013 17:01:29 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: Hi, On Tue, Feb 5, 2013 at 4:55 PM, Chris Barker - NOAA Federal wrote: > On Tue, Feb 5, 2013 at 4:32 PM, Matthew Brett wrote: >>> 4) Numpy-MKL requires the Intel runtime DLLs (MKL is linked statically >>> btw). I ship those with the installers and append the directory >>> containing the DLLs to os.environ['PATH'] in numpy/__init__.py. This is >>> a big no-no according to numpy developers. I don't agree. 
Anyway, those >>> changes are not in the numpy source repositories. > > I think you pointed out that another option is to load the dlls with > ctypes -- is it much work to make that change? > >>> 5) My numpy-MKL installers are Python distutils bdist_wininst >>> installers. That means if Python was installed for all users, installing >>> numpy-MKL on Windows >6.0 will prompt for UAC elevation. Another no-no? > > not sure about the UAC elevation -- but: > > 1) most folks use bdist_wininst for Windows binaries -- including the > current numpy builds, and python.org python -- yes? > > 2) UAC aside, It would be great to have binaries that could be used > with virtualenv -- binary eggs? easy_install can install into virtualenvs from bdist_wininst installers, at least the ones I have built... See you, Matthew From ondrej.certik at gmail.com Tue Feb 5 22:46:13 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Tue, 5 Feb 2013 19:46:13 -0800 Subject: [Numpy-discussion] Will numpy 1.7.0 final be binary compatible with the rc? In-Reply-To: References: Message-ID: On Tue, Feb 5, 2013 at 12:22 PM, Ralf Gommers wrote: > > > > On Tue, Feb 5, 2013 at 3:01 PM, Peter Cock > wrote: >> >> Hello all, >> >> Will the numpy 1.7.0 'final' be binary compatible with the release >> candidate(s)? i.e. Would it be safe for me to release a Windows >> installer for a package using the NumPy C API compiled against >> the NumPy 1.7.0rc? > > > Yes, that should be safe. Yes. I plan to release rc2 immediately once https://github.com/numpy/numpy/pull/2964 is merged (e.g. I am hoping for today). The final should then be identical to rc2. Ondrej From charlesr.harris at gmail.com Wed Feb 6 00:12:58 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 5 Feb 2013 22:12:58 -0700 Subject: [Numpy-discussion] Dealing with the mode argument in qr. In-Reply-To: References: Message-ID: On Tue, Feb 5, 2013 at 2:23 PM, Charles R Harris wrote: > Hi All, > > This post is to bring the discussion of PR #2965to the attention of the list. There are at least three issues in play here. > > 1) The PR adds modes 'big' and 'thin' to the current modes 'full', 'r', > 'economic' for qr factorization. The problem is that the current 'full' is > actually 'thin' and 'big' should be 'full'. The solution here was to raise > a FutureWarning on use of 'full', alias it to 'thin' for the time being, > and at some distant time change 'full' to alias 'big'. > > 2) The 'economic' mode serves little purpose. I propose to deprecate it > and add a 'qrf' mode instead, corresponding to scipy's 'raw' mode. We can't > use 'raw' itself as traditionally the mode may be specified using the first > letter only and that leads to a conflict with 'r'. > > 3) As suggested in 2, the use of single letter abbreviations can constrain > the options in choosing mode names and they are not as informative as the > full name. A possibility here is to deprecate the use of the abbreviations > in favor of the full names. > > A longer term problem is the divergence between the numpy and scipy > versions of qr. The divergence is enough that I don't see any easy way to > come to a common interface, but that is something that would be desirable > if possible. > > Thoughts? > > bfroehle has suggested the names 1. complete: Q is a M-by-M matrix, i.e. a complete orthogonal basis. 2. reduced: Q is a M-by-K matrix. 3. r: Only return R 4. 
raw: Return Householder reflectors Q and TAU Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jason-sage at creativetrax.com Wed Feb 6 00:23:19 2013 From: jason-sage at creativetrax.com (Jason Grout) Date: Tue, 05 Feb 2013 23:23:19 -0600 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: References: Message-ID: <5111E8C7.5030504@creativetrax.com> On 2/4/13 12:04 AM, Ond?ej ?ert?k wrote: > Hi, > > Here are the last open issues for 1.7, there are 9 of them: > > https://github.com/numpy/numpy/issues?milestone=3&sort=updated&state=open > Here's something we noticed while working on getting 1.7rc1 into Sage with one of our doctests. With numpy 1.5.1 (we skipped 1.6.x because of backwards compatibility issues...): import numpy as np print np.array([None, None]).any() prints False, but with 1.7.rc1, we get None. For comparison, in Python 2.6.1 and 3.3.0: >>> any([None,None]) False >>> print None or None None Was this change between 1.5.1 and 1.7 intentional? Thanks, Jason From charlesr.harris at gmail.com Wed Feb 6 01:14:40 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 5 Feb 2013 23:14:40 -0700 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: <5111E8C7.5030504@creativetrax.com> References: <5111E8C7.5030504@creativetrax.com> Message-ID: On Tue, Feb 5, 2013 at 10:23 PM, Jason Grout wrote: > On 2/4/13 12:04 AM, Ond?ej ?ert?k wrote: > > Hi, > > > > Here are the last open issues for 1.7, there are 9 of them: > > > > > https://github.com/numpy/numpy/issues?milestone=3&sort=updated&state=open > > > > Here's something we noticed while working on getting 1.7rc1 into Sage > with one of our doctests. With numpy 1.5.1 (we skipped 1.6.x because of > backwards compatibility issues...): > > import numpy as np > print np.array([None, None]).any() > > prints False, but with 1.7.rc1, we get None. > > For comparison, in Python 2.6.1 and 3.3.0: > > >>> any([None,None]) > False > >>> print None or None > None > > Was this change between 1.5.1 and 1.7 intentional? > > Probably not, but maybe yes. I'd guess the cause is lines 2601-2603 of ufunc_object.c if (PyArray_ISOBJECT(arr) && PyArray_SIZE(arr) != 0) { assign_identity = NULL; } The variable assign_identity is a function pointer, so some new functions are probably needed that return python integers 0 and 1 when PyUFunc_Zero and PyUFunc_One turn up. I don't know what all the consequences of that change would be and it may not be quite so simple. Clearly that code isn't well tested. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Feb 6 01:46:05 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 5 Feb 2013 23:46:05 -0700 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: References: <5111E8C7.5030504@creativetrax.com> Message-ID: On Tue, Feb 5, 2013 at 11:14 PM, Charles R Harris wrote: > > > On Tue, Feb 5, 2013 at 10:23 PM, Jason Grout wrote: > >> On 2/4/13 12:04 AM, Ond?ej ?ert?k wrote: >> > Hi, >> > >> > Here are the last open issues for 1.7, there are 9 of them: >> > >> > >> https://github.com/numpy/numpy/issues?milestone=3&sort=updated&state=open >> > >> >> Here's something we noticed while working on getting 1.7rc1 into Sage >> with one of our doctests. With numpy 1.5.1 (we skipped 1.6.x because of >> backwards compatibility issues...): >> >> import numpy as np >> print np.array([None, None]).any() >> >> prints False, but with 1.7.rc1, we get None. 
>> >> For comparison, in Python 2.6.1 and 3.3.0: >> >> >>> any([None,None]) >> False >> >>> print None or None >> None >> >> Was this change between 1.5.1 and 1.7 intentional? >> >> > Probably not, but maybe yes. I'd guess the cause is lines 2601-2603 of > ufunc_object.c > > if (PyArray_ISOBJECT(arr) && PyArray_SIZE(arr) != 0) { > assign_identity = NULL; > } > > > The variable assign_identity is a function pointer, so some new functions > are probably needed that return python integers 0 and 1 when PyUFunc_Zero > and PyUFunc_One turn up. I don't know what all the consequences of that > change would be and it may not be quite so simple. Clearly that code isn't > well tested. > > Oh, and you do realize Python is insane here. None really shouldn't *have* a logical value, which I suppose is why: >>> 0 or None >>> 1 or None 1 >>> print None or None None If ndarray.any is equivalent to logical_or.reduce there is no way to make all those work. We will need to special case some things. In fact, in 1.5 logical_or was not defined for object dtypes, so any must have been special cased and the change that has come about is from implementing logical_or for objects and using a reduction. So in 1.5 >>> a = np.array([None, None], dtype='O') >>> np.logical_or(a,a) Traceback (most recent call last): File "", line 1, in AttributeError: logical_or >>> a = np.array([1, 1]) >>> np.logical_or(a,a) array([ True, True], dtype=bool) While in 1.7 In [1]: a = array([None, None]) In [2]: logical_or(a, a) Out[2]: array([None, None], dtype=object) In [3]: logical_or(a, 0) Out[3]: array([0, 0], dtype=object) In [4]: logical_or(a, 1) Out[4]: array([1, 1], dtype=object) Which agrees with python in two out of three. Not bad for dealing with insane inconsistency. And it looks like the changes suggested in the previous post will fix the problem if we decide to do so. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jason-sage at creativetrax.com Wed Feb 6 01:50:46 2013 From: jason-sage at creativetrax.com (Jason Grout) Date: Wed, 06 Feb 2013 00:50:46 -0600 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: References: <5111E8C7.5030504@creativetrax.com> Message-ID: <5111FD46.3040208@creativetrax.com> On 2/6/13 12:46 AM, Charles R Harris wrote: > if we decide to do so I should mention that we don't really depend on either behavior (we probably should have a better doctest testing for an array of None values anyway), but we noticed the oddity and thought we ought to mention it. So it doesn't matter to us which way the decision goes.
> > More Python craziness In [6]: print None or 0 0 In [7]: print 0 or None None Numpy any is consistent with python when considered as logical_or.reduce In [13]: print array([0, None]).any() None In [14]: print array([None, 0]).any() 0 This appears to be an __ror__, __or__ inconsistency in python. Note that None possesses neither of those operators. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.s.seljebotn at astro.uio.no Wed Feb 6 04:18:39 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 06 Feb 2013 10:18:39 +0100 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: References: <5111E8C7.5030504@creativetrax.com> <5111FD46.3040208@creativetrax.com> Message-ID: <51121FEF.4040406@astro.uio.no> On 02/06/2013 08:41 AM, Charles R Harris wrote: > > > On Tue, Feb 5, 2013 at 11:50 PM, Jason Grout > > wrote: > > On 2/6/13 12:46 AM, Charles R Harris wrote: > > if we decide to do so > > I should mention that we don't really depend on either behavior (we > probably should have a better doctest testing for an array of None > values anyway), but we noticed the oddity and thought we ought to > mention it. So it doesn't matter to us which way the decision goes. > > > More Python craziness > > In [6]: print None or 0 > 0 > > In [7]: print 0 or None > None To me this seems natural and is just how Python works? I think the rule for "or" is simply "evaluate __nonzero__ of left operand, if it is False, return right operand". The reason is so that you can use it like this: x = get_foo() or get_bar() # if get_foo() returns None # use result of get_bar or def f(x=None): x = x or create_default_x() ... I guess that after the "a if expr else b" was introduced this has become less common. Dag Sverre > > Numpy any is consistent with python when considered as logical_or.reduce > > In [13]: print array([0, None]).any() > None > > In [14]: print array([None, 0]).any() > 0 > > This appears to be an __ror__, __or__ inconsistency in python. Note that > None possesses neither of those operators. > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From sebastian at sipsolutions.net Wed Feb 6 05:14:10 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Wed, 06 Feb 2013 11:14:10 +0100 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: <51121FEF.4040406@astro.uio.no> References: <5111E8C7.5030504@creativetrax.com> <5111FD46.3040208@creativetrax.com> <51121FEF.4040406@astro.uio.no> Message-ID: <1360145650.19070.7.camel@sebastian-laptop> On Wed, 2013-02-06 at 10:18 +0100, Dag Sverre Seljebotn wrote: > On 02/06/2013 08:41 AM, Charles R Harris wrote: > > > > > > On Tue, Feb 5, 2013 at 11:50 PM, Jason Grout > > > wrote: > > > > On 2/6/13 12:46 AM, Charles R Harris wrote: > > > if we decide to do so > > > > I should mention that we don't really depend on either behavior (we > > probably should have a better doctest testing for an array of None > > values anyway), but we noticed the oddity and thought we ought to > > mention it. So it doesn't matter to us which way the decision goes. > > > > > > More Python craziness > > > > In [6]: print None or 0 > > 0 > > > > In [7]: print 0 or None > > None > > To me this seems natural and is just how Python works? I think the rule > for "or" is simply "evaluate __nonzero__ of left operand, if it is > False, return right operand". 
> > The reason is so that you can use it like this: > Yes, but any() and all() functions in python return forcibly a bool as one would expect. So probably logical_and.reduce and all should simply not be the same thing, at least for objects. Though it is a bit weird that objects do something different from other types, so maybe it would be OK to say that numpy just differs from python here, since I am not sure if you can easily change it for the other types. Regards, Sebastian > x = get_foo() or get_bar() # if get_foo() returns None > # use result of get_bar > > or > > def f(x=None): > x = x or create_default_x() > ... > > I guess that after the "a if expr else b" was introduced this has become > less common. > > Dag Sverre > > > > > Numpy any is consistent with python when considered as logical_or.reduce > > > > In [13]: print array([0, None]).any() > > None > > > > In [14]: print array([None, 0]).any() > > 0 > > > > This appears to be an __ror__, __or__ inconsistency in python. Note that > > None possesses neither of those operators. > > > > Chuck > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From p.j.a.cock at googlemail.com Wed Feb 6 05:37:47 2013 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Wed, 6 Feb 2013 10:37:47 +0000 Subject: [Numpy-discussion] Will numpy 1.7.0 final be binary compatible with the rc? In-Reply-To: References: Message-ID: On Wed, Feb 6, 2013 at 3:46 AM, Ond?ej ?ert?k wrote: > On Tue, Feb 5, 2013 at 12:22 PM, Ralf Gommers wrote: >> On Tue, Feb 5, 2013 at 3:01 PM, Peter Cock >> wrote: >>> >>> Hello all, >>> >>> Will the numpy 1.7.0 'final' be binary compatible with the release >>> candidate(s)? i.e. Would it be safe for me to release a Windows >>> installer for a package using the NumPy C API compiled against >>> the NumPy 1.7.0rc? >> >> >> Yes, that should be safe. > > Yes. I plan to release rc2 immediately once > > https://github.com/numpy/numpy/pull/2964 > > is merged (e.g. I am hoping for today). The final should then be > identical to rc2. > > Ondrej Great - in that case I'll wait a couple of days and use rc2 for this (just in case there is a subtle difference from rc1). Thanks, Peter From davidmenhur at gmail.com Wed Feb 6 06:57:00 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Wed, 6 Feb 2013 12:57:00 +0100 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: References: <5111E8C7.5030504@creativetrax.com> <5111FD46.3040208@creativetrax.com> Message-ID: On 6 February 2013 08:41, Charles R Harris wrote: > > More Python craziness > > In [6]: print None or 0 > 0 > > In [7]: print 0 or None > None Just for clarifying this behaviour: In [1]: print None or 0 0 In [2]: print 0 or None None In [3]: val = 0 or None In [4]: print val None From gk230-freebsd at yahoo.de Wed Feb 6 08:31:06 2013 From: gk230-freebsd at yahoo.de (gk230-freebsd at yahoo.de) Date: Wed, 6 Feb 2013 13:31:06 +0000 (GMT) Subject: [Numpy-discussion] Building numpy for python3.3: No _numpyconfig.h Message-ID: <1360157466.30706.YahooMailClassic@web171606.mail.ir2.yahoo.com> Hi, I'm currently trying to build numpy 1.6.2 for python python 3.3 from ports on FreeBSD. 
Unfortunately, the setup.py execution fails because some [1] gcc command trying to access _numpyconfig.h fails since _numpyconfig.h is not generated from _numpyconfig.h.in. How do I manually build the proper header from the .h.in, and why does that not happen automatically? -- -- Gereon [1] # gcc46 -DNDEBUG -O2 -pipe -fno-strict-aliasing -O2 -pipe -Wl,-rpath=/usr/local/lib/gcc46 -fno-strict-aliasing -fPIC -Inumpy/core/include -Ibuild/src.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/include/python3.3m -Ibuild/src.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/src/multiarray -Ibuild/src.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/src/umath -c numpy/core/src/multiarray/multiarraymodule_onefile.c -o build/temp.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/src/multiarray/multiarraymodule_onefile.o Assembler messages: Fatal error: can't create build/temp.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/src/multiarray/multiarraymodule_onefile.o: No such file or directory In file included from numpy/core/include/numpy/ndarraytypes.h:5:0, from numpy/core/include/numpy/ndarrayobject.h:17, from numpy/core/include/numpy/arrayobject.h:14, from numpy/core/src/multiarray/common.c:6, from numpy/core/src/multiarray/multiarraymodule_onefile.c:8: numpy/core/include/numpy/numpyconfig.h:4:26: fatal error: _numpyconfig.h: No such file or directory compilation terminated. From sebastian at sipsolutions.net Wed Feb 6 09:18:07 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Wed, 06 Feb 2013 15:18:07 +0100 Subject: [Numpy-discussion] Building numpy for python3.3: No _numpyconfig.h In-Reply-To: <1360157466.30706.YahooMailClassic@web171606.mail.ir2.yahoo.com> References: <1360157466.30706.YahooMailClassic@web171606.mail.ir2.yahoo.com> Message-ID: <1360160287.22438.4.camel@sebastian-laptop> On Wed, 2013-02-06 at 13:31 +0000, gk230-freebsd at yahoo.de wrote: > Hi, > > I'm currently trying to build numpy 1.6.2 for python python 3.3 from ports on FreeBSD. Unfortunately, the setup.py execution fails because some [1] gcc command trying to access _numpyconfig.h fails since _numpyconfig.h is not generated from _numpyconfig.h.in. How do I manually build the proper header from the .h.in, and why does that not happen automatically? > 1.6.2 probably does not support python 3.3 at all. You should instead try the 1.7. release candidate (or wait a bit longer for 1.7rc2 or even the final release), which supports python 3.3. Note that when 1.6.2 was released python 3.3 did not exist. 
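Once you have a 1.7 release candidate installed, a quick sanity check of the combination is just a couple of lines (the last line is optional and assumes nose is available):

import sys
import numpy
print(sys.version_info[:3])   # expect (3, 3, ...)
print(numpy.__version__)      # needs to be 1.7.0rc1 or newer for Python 3.3
numpy.test('fast')            # optional quick self-test, requires nose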
Regards, Sebastian > -- > -- Gereon > > [1] > # gcc46 -DNDEBUG -O2 -pipe -fno-strict-aliasing -O2 -pipe -Wl,-rpath=/usr/local/lib/gcc46 -fno-strict-aliasing -fPIC -Inumpy/core/include -Ibuild/src.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/local/include/python3.3m -Ibuild/src.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/src/multiarray -Ibuild/src.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/src/umath -c numpy/core/src/multiarray/multiarraymodule_onefile.c -o build/temp.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/src/multiarray/multiarraymodule_onefile.o > Assembler messages: > Fatal error: can't create build/temp.freebsd-9.0-RELEASE-amd64-3.3/numpy/core/src/multiarray/multiarraymodule_onefile.o: No such file or directory > In file included from numpy/core/include/numpy/ndarraytypes.h:5:0, > from numpy/core/include/numpy/ndarrayobject.h:17, > from numpy/core/include/numpy/arrayobject.h:14, > from numpy/core/src/multiarray/common.c:6, > from numpy/core/src/multiarray/multiarraymodule_onefile.c:8: > numpy/core/include/numpy/numpyconfig.h:4:26: fatal error: _numpyconfig.h: No such file or directory > compilation terminated. > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From ben.root at ou.edu Wed Feb 6 09:27:58 2013 From: ben.root at ou.edu (Benjamin Root) Date: Wed, 6 Feb 2013 09:27:58 -0500 Subject: [Numpy-discussion] Issues to fix for 1.7.0rc2. In-Reply-To: <51121FEF.4040406@astro.uio.no> References: <5111E8C7.5030504@creativetrax.com> <5111FD46.3040208@creativetrax.com> <51121FEF.4040406@astro.uio.no> Message-ID: On Wed, Feb 6, 2013 at 4:18 AM, Dag Sverre Seljebotn < d.s.seljebotn at astro.uio.no> wrote: > On 02/06/2013 08:41 AM, Charles R Harris wrote: > > > > > > On Tue, Feb 5, 2013 at 11:50 PM, Jason Grout > > > > wrote: > > > > On 2/6/13 12:46 AM, Charles R Harris wrote: > > > if we decide to do so > > > > I should mention that we don't really depend on either behavior (we > > probably should have a better doctest testing for an array of None > > values anyway), but we noticed the oddity and thought we ought to > > mention it. So it doesn't matter to us which way the decision goes. > > > > > > More Python craziness > > > > In [6]: print None or 0 > > 0 > > > > In [7]: print 0 or None > > None > > To me this seems natural and is just how Python works? I think the rule > for "or" is simply "evaluate __nonzero__ of left operand, if it is > False, return right operand". > > The reason is so that you can use it like this: > > x = get_foo() or get_bar() # if get_foo() returns None > # use result of get_bar > > or > > def f(x=None): > x = x or create_default_x() > ... > > And what if the user passes in a zero or an empty string or an empty list, or if the return value from get_foo() is a perfectly valid zero? This is one of the very few things I have disagreed with PEP8, and Python in general about. I can understand implicit casting of numbers to booleans in order to attract the C/C++ crowd (but I don't have to like it), but what was so hard about "x is not None" or "len(x) == 0"? I like my languages explicit. Less magic, more WYSIWYM. Cheers! Ben Root -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben.root at ou.edu Wed Feb 6 09:44:07 2013 From: ben.root at ou.edu (Benjamin Root) Date: Wed, 6 Feb 2013 09:44:07 -0500 Subject: [Numpy-discussion] Dealing with the mode argument in qr. In-Reply-To: References: Message-ID: On Tue, Feb 5, 2013 at 4:23 PM, Charles R Harris wrote: > Hi All, > > This post is to bring the discussion of PR #2965to the attention of the list. There are at least three issues in play here. > > 1) The PR adds modes 'big' and 'thin' to the current modes 'full', 'r', > 'economic' for qr factorization. The problem is that the current 'full' is > actually 'thin' and 'big' should be 'full'. The solution here was to raise > a FutureWarning on use of 'full', alias it to 'thin' for the time being, > and at some distant time change 'full' to alias 'big'. > > 2) The 'economic' mode serves little purpose. I propose to deprecate it > and add a 'qrf' mode instead, corresponding to scipy's 'raw' mode. We can't > use 'raw' itself as traditionally the mode may be specified using the first > letter only and that leads to a conflict with 'r'. > > 3) As suggested in 2, the use of single letter abbreviations can constrain > the options in choosing mode names and they are not as informative as the > full name. A possibility here is to deprecate the use of the abbreviations > in favor of the full names. > > A longer term problem is the divergence between the numpy and scipy > versions of qr. The divergence is enough that I don't see any easy way to > come to a common interface, but that is something that would be desirable > if possible. > > Thoughts? > > Chuck > > I would definitely be in favor of deprecating abbreviations. And while we are on the topic of mode names, scipy.ndimage.filters.percentile_filter() has modes of 'mirror' and 'reflect', and I don't see any documentation stating if they are the same, or what are different about them. I just came across this yesterday. Cheers! Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Feb 6 12:00:02 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Wed, 6 Feb 2013 09:00:02 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: On Tue, Feb 5, 2013 at 5:01 PM, Matthew Brett wrote: > easy_install can install into virtualenvs from bdist_wininst > installers, at least the ones I have built... really? cool! I never thought to try that! Thanks, -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From josef.pktd at gmail.com Wed Feb 6 13:08:56 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Feb 2013 13:08:56 -0500 Subject: [Numpy-discussion] Where's that function? Message-ID: I'm convinced that I saw a while ago a function that uses a list of interval boundaries to index into an array, either to iterate or to take. I thought that's very useful, but didn't make a note. Now, I have no idea where I saw this (I thought numpy), and I cannot find it anywhere. any clues? Thanks, Josef From ralf.gommers at gmail.com Wed Feb 6 13:48:40 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 6 Feb 2013 19:48:40 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? 
In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: On Wed, Feb 6, 2013 at 1:32 AM, Matthew Brett wrote: > Hi, > > On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke wrote: > > On 2/5/2013 10:51 AM, Matthew Brett wrote: > >> Hi, > >> > >> On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett > wrote: > >>> Hi, > >>> > >>> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris > >>> wrote: > >>>> > >>>> > >>>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern > wrote: > >>>>> > >>>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett < > matthew.brett at gmail.com> > >>>>> wrote: > >>>>>> Hi, > >>>>>> > >>>>>> On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers < > ralf.gommers at gmail.com> > >>>>>> wrote: > >>>>> > >>>>>>> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can > you > >>>>>>> provide an Amazon image for those? > >>>>>> > >>>>>> You can make an image that is not public, I guess. I suppose > anyone > >>>>>> who uses the image would have to have their own licenses for the > Intel > >>>>>> stuff? Does anyone have experience of this? > >>>>> > >>>>> You need to purchase one license per developer: > >>>>> > >>>>> > >>>>> > http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 > >>>>> > >>>> > >>>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It > would be a > >>>> bit much to get it implemented in the next week or two. > >>> > >>> The problem with not providing these binaries is that they are at the > >>> bottom of everyone's stack, so a delay in numpy holds everyone back. > >>> > >>> I can't find completely convincing stats, but it looks as though 64 > >>> bit windows 7 is now the most common version of Windows, at least for > >>> Gamers [1] around now, and it was getting that way for everyone in > >>> 2010 [2]. > >>> > >>> It don't think it reflects well on on us that we don't appear to > >>> support 64 bits out of the box; just for example, R already has a 32 > >>> bit / 64 bit installer. > >>> > >>> If I understand correctly, the options for doing this right now are: > >>> > >>> 1) Minimal cost in time : ask Christophe nicely whether we can > >>> distribute his binaries via the Numpy page > >>> 2) Small cost in time / money : pay for licenses for Ondrej or me or > >>> someone to install the dependencies on my Berkeley machine / an Amazon > >>> image. > >> > >> In order not to leave this discussion without a resolution: > >> > >> Christophe - would you allow us to distribute your numpy binaries for > >> 1.7 from the numpy sourceforge page? > >> > >> Cheers, > >> > >> Matthew > > > > > > I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy > > compiled with MSVC compilers and linked to Intel's MKL) for official > > numpy releases. > > Thank you - that is great. > > > However: > > > > 1) There seems to be no real consensus and urge for doing this. > > I certainly feel the urge and feel it strongly. As a packager for two > or three projects myself, it's a royal pain having to tell someone to > go to two different places for binaries depending on the number of > bits of their Windows. If you're relying on .exe installers on SF, then you have to send your users to more places than that probably. Really the separate installers are a poor alternative to the available scientific distributions. And the more packages you need as a user, the more annoying these separate installers are. > I think Chuck was worried about the time it > would take to do it, and I think you've already solved this problem. 
> Ralf was worried about Scipy - see below. > Not just Scipy - that would be my worry number one, but the same holds for all packages based on Numpy. You double the number of Windows installers that every single project needs to provide. > > > Using a > > free toolchain capable of building the whole scipy-stack would be much > > preferred. > > That's true, but there seems general agreement this is not practical > in the very near future. > > > Several 64 bit Python distributions containing numpy-MKL are > > already available, some for free. > > You mean EPD and AnacondaCE? I don't think we should withhold easily > available vanilla builds because they are also available in > company-sponsored projects. Python.org provides windows builds even > though ActiveState is free-as-in-beer. > If the company-sponsored bit bothers you, there's also a 64-bit Python(x,y) now. Ralf > > > 2) Releasing 64 bit numpy without matching scipy binaries would make > > little sense to me. > > Would you consider also releasing your scipy binaries? > > > 3) Please do not just redistribute the binaries from my website and > > declare them official. They might contain unreleased fixes from git > > master and pull requests that are needed for my work and other packages. > > Right - would you consider then being the release provider for numpy / > scipy binaries on windows, much as it appears that Martin v Lowis > supplies Windows builds for Python? > > > 4) Numpy-MKL requires the Intel runtime DLLs (MKL is linked statically > > btw). I ship those with the installers and append the directory > > containing the DLLs to os.environ['PATH'] in numpy/__init__.py. This is > > a big no-no according to numpy developers. I don't agree. Anyway, those > > changes are not in the numpy source repositories. > > > > 5) My numpy-MKL installers are Python distutils bdist_wininst > > installers. That means if Python was installed for all users, installing > > numpy-MKL on Windows >6.0 will prompt for UAC elevation. Another no-no? > > I defer to others on these ones, > > Thanks a lot, > > Matthew > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Feb 6 14:17:02 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 6 Feb 2013 20:17:02 +0100 Subject: [Numpy-discussion] Dealing with the mode argument in qr. In-Reply-To: References: Message-ID: On Wed, Feb 6, 2013 at 3:44 PM, Benjamin Root wrote: > > > On Tue, Feb 5, 2013 at 4:23 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> Hi All, >> >> This post is to bring the discussion of PR #2965to the attention of the list. There are at least three issues in play here. >> >> 1) The PR adds modes 'big' and 'thin' to the current modes 'full', 'r', >> 'economic' for qr factorization. The problem is that the current 'full' is >> actually 'thin' and 'big' should be 'full'. The solution here was to raise >> a FutureWarning on use of 'full', alias it to 'thin' for the time being, >> and at some distant time change 'full' to alias 'big'. >> > This is asking for problems, to gain some naming consistency. I can't tell how confusing 'full' is now, but deprecating and removing would be better than changing what it returns. > >> 2) The 'economic' mode serves little purpose. 
I propose to deprecate it >> and add a 'qrf' mode instead, corresponding to scipy's 'raw' mode. We can't >> use 'raw' itself as traditionally the mode may be specified using the first >> letter only and that leads to a conflict with 'r'. >> > That's not a very good reason to not use "raw", since "raw" is a new option and you therefore don't have to apply the rule that you can give only the first letter to it. > >> 3) As suggested in 2, the use of single letter abbreviations can >> constrain the options in choosing mode names and they are not as >> informative as the full name. A possibility here is to deprecate the use of >> the abbreviations in favor of the full names. >> > I'm not feeling very strongly about this, but we have to be careful about deprecations. Possible future naming constraints on new modes is not a good reason to deprecate. This one-letter option isn't even mentioned in the docs it looks like. So why not leave that as is and ensure it keeps working (add a unit test if needed)? > >> A longer term problem is the divergence between the numpy and scipy >> versions of qr. The divergence is enough that I don't see any easy way to >> come to a common interface, but that is something that would be desirable >> if possible. >> > This would be a problem imho. But I don't see why you can't add "raw" to numpy's qr. And if you add "big" and "thin" in numpy, you can add those modes in scipy too. Ralf > Thoughts? >> >> Chuck >> >> > I would definitely be in favor of deprecating abbreviations. > > And while we are on the topic of mode names, > scipy.ndimage.filters.percentile_filter() has modes of 'mirror' and > 'reflect', and I don't see any documentation stating if they are the same, > or what are different about them. I just came across this yesterday. > > Cheers! > Ben Root > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Wed Feb 6 14:17:25 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Wed, 06 Feb 2013 20:17:25 +0100 Subject: [Numpy-discussion] Where's that function? In-Reply-To: References: Message-ID: <1360178245.4750.4.camel@sebastian-laptop> On Wed, 2013-02-06 at 13:08 -0500, josef.pktd at gmail.com wrote: > I'm convinced that I saw a while ago a function that uses a list of > interval boundaries to index into an array, either to iterate or to > take. > I thought that's very useful, but didn't make a note. > > Now, I have no idea where I saw this (I thought numpy), and I cannot > find it anywhere. > > any clues? > It does not quite sound like what you are looking for, but the only thing I can think of in numpy right now that does something in that direction is the ufunc.reduceat functionality: In [1]: a = np.arange(10) In [2]: a[2:5].sum(), a[5:9].sum(), a[9:].sum() Out[2]: (9, 26, 9) In [3]: np.add.reduceat(a, [2, 5, 9]) Out[3]: array([ 9, 26, 9]) Regards, Sebastian > Thanks, > > Josef > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Wed Feb 6 14:32:29 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Feb 2013 14:32:29 -0500 Subject: [Numpy-discussion] Where's that function? 
In-Reply-To: <1360178245.4750.4.camel@sebastian-laptop> References: <1360178245.4750.4.camel@sebastian-laptop> Message-ID: On Wed, Feb 6, 2013 at 2:17 PM, Sebastian Berg wrote: > On Wed, 2013-02-06 at 13:08 -0500, josef.pktd at gmail.com wrote: >> I'm convinced that I saw a while ago a function that uses a list of >> interval boundaries to index into an array, either to iterate or to >> take. >> I thought that's very useful, but didn't make a note. >> >> Now, I have no idea where I saw this (I thought numpy), and I cannot >> find it anywhere. >> >> any clues? >> > > It does not quite sound like what you are looking for, but the only > thing I can think of in numpy right now that does something in that > direction is the ufunc.reduceat functionality: > > In [1]: a = np.arange(10) > > In [2]: a[2:5].sum(), a[5:9].sum(), a[9:].sum() > Out[2]: (9, 26, 9) > > In [3]: np.add.reduceat(a, [2, 5, 9]) > Out[3]: array([ 9, 26, 9]) That's what I remembered seeing, but obviously I didn't remember it correctly. Not useful for my current case, but I will keep it in mind to clean up some other code. I will need a python loop after all to take a list of (unequal length) slices out of an array. Thank you, Josef > > Regards, > > Sebastian > >> Thanks, >> >> Josef >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From andrea.gavana at gmail.com Wed Feb 6 14:45:40 2013 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Wed, 6 Feb 2013 20:45:40 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: On 6 February 2013 01:55, Chris Barker - NOAA Federal wrote: > On Tue, Feb 5, 2013 at 4:32 PM, Matthew Brett wrote: >>> 4) Numpy-MKL requires the Intel runtime DLLs (MKL is linked statically >>> btw). I ship those with the installers and append the directory >>> containing the DLLs to os.environ['PATH'] in numpy/__init__.py. This is >>> a big no-no according to numpy developers. I don't agree. Anyway, those >>> changes are not in the numpy source repositories. > > I think you pointed out that another option is to load the dlls with > ctypes -- is it much work to make that change? > >>> 5) My numpy-MKL installers are Python distutils bdist_wininst >>> installers. That means if Python was installed for all users, installing >>> numpy-MKL on Windows >6.0 will prompt for UAC elevation. Another no-no? > > not sure about the UAC elevation -- but: > > 1) most folks use bdist_wininst for Windows binaries -- including the > current numpy builds, and python.org python -- yes? Even the current approach is off-limits for the few haggards out there (like me at work), who can not update numpy/scipy/anything else that requires an installation procedure (like pretty much all the bdist_wininst distributions for Windows 64 bits/Python 64 bits). The only (partial) solution I found is to install at home and bring the site-packages folder on a USB stick with me at work. Which is a bit sad overall: if I can make dozens of installers of my applications for my colleagues with InnoSetup/NSIS which do *not* require any UAC crap/elevation, what's stopping the bdist_wininst to do the same? 
The same holds (unfortunately) for the excellent distributions from Christoph Gohlke. The fact that Windows 64 bits/Python 64 bits UAC-free installers are pretty much non-existent in the Python world does not play very nicely with the fact that 64 bits architectures have been available for 783648660236729 years. I'd rather prefer if someone would upload to sourceforge/scipy.org/whatever a zipped folder containing numpy as it was in their site-packages directory. Now that would be a huge plus :-) Andrea. "Imagination Is The Only Weapon In The War Against Reality." http://www.infinity77.net # ------------------------------------------------------------- # def ask_mailing_list_support(email): if mention_platform_and_version() and include_sample_app(): send_message(email) else: install_malware() erase_hard_drives() # ------------------------------------------------------------- # From ben.root at ou.edu Wed Feb 6 14:45:57 2013 From: ben.root at ou.edu (Benjamin Root) Date: Wed, 6 Feb 2013 14:45:57 -0500 Subject: [Numpy-discussion] Where's that function? In-Reply-To: References: Message-ID: On Wed, Feb 6, 2013 at 1:08 PM, wrote: > I'm convinced that I saw a while ago a function that uses a list of > interval boundaries to index into an array, either to iterate or to > take. > I thought that's very useful, but didn't make a note. > > Now, I have no idea where I saw this (I thought numpy), and I cannot > find it anywhere. > > any clues? > > Some possibilities: np.array_split() np.split() np.ndindex() np.nditer() np.nested_iters() np.ravel_multi_index() Your description reminded me of a function I came across once, but I can't remember if one of these was it or if it was another one. IHTH, Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Wed Feb 6 14:46:28 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 6 Feb 2013 11:46:28 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: Hi, On Wed, Feb 6, 2013 at 10:48 AM, Ralf Gommers wrote: > > > > On Wed, Feb 6, 2013 at 1:32 AM, Matthew Brett > wrote: >> >> Hi, >> >> On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke wrote: >> > On 2/5/2013 10:51 AM, Matthew Brett wrote: >> >> Hi, >> >> >> >> On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett >> >> wrote: >> >>> Hi, >> >>> >> >>> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris >> >>> wrote: >> >>>> >> >>>> >> >>>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern >> >>>> wrote: >> >>>>> >> >>>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett >> >>>>> >> >>>>> wrote: >> >>>>>> Hi, >> >>>>>> >> >>>>>> On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers >> >>>>>> >> >>>>>> wrote: >> >>>>> >> >>>>>>> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how can >> >>>>>>> you >> >>>>>>> provide an Amazon image for those? >> >>>>>> >> >>>>>> You can make an image that is not public, I guess. I suppose >> >>>>>> anyone >> >>>>>> who uses the image would have to have their own licenses for the >> >>>>>> Intel >> >>>>>> stuff? Does anyone have experience of this? >> >>>>> >> >>>>> You need to purchase one license per developer: >> >>>>> >> >>>>> >> >>>>> >> >>>>> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 >> >>>>> >> >>>> >> >>>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It >> >>>> would be a >> >>>> bit much to get it implemented in the next week or two. 
>> >>> >> >>> The problem with not providing these binaries is that they are at the >> >>> bottom of everyone's stack, so a delay in numpy holds everyone back. >> >>> >> >>> I can't find completely convincing stats, but it looks as though 64 >> >>> bit windows 7 is now the most common version of Windows, at least for >> >>> Gamers [1] around now, and it was getting that way for everyone in >> >>> 2010 [2]. >> >>> >> >>> It don't think it reflects well on on us that we don't appear to >> >>> support 64 bits out of the box; just for example, R already has a 32 >> >>> bit / 64 bit installer. >> >>> >> >>> If I understand correctly, the options for doing this right now are: >> >>> >> >>> 1) Minimal cost in time : ask Christophe nicely whether we can >> >>> distribute his binaries via the Numpy page >> >>> 2) Small cost in time / money : pay for licenses for Ondrej or me or >> >>> someone to install the dependencies on my Berkeley machine / an Amazon >> >>> image. >> >> >> >> In order not to leave this discussion without a resolution: >> >> >> >> Christophe - would you allow us to distribute your numpy binaries for >> >> 1.7 from the numpy sourceforge page? >> >> >> >> Cheers, >> >> >> >> Matthew >> > >> > >> > I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy >> > compiled with MSVC compilers and linked to Intel's MKL) for official >> > numpy releases. >> >> Thank you - that is great. >> >> > However: >> > >> > 1) There seems to be no real consensus and urge for doing this. >> >> I certainly feel the urge and feel it strongly. As a packager for two >> or three projects myself, it's a royal pain having to tell someone to >> go to two different places for binaries depending on the number of >> bits of their Windows. > > > If you're relying on .exe installers on SF, then you have to send your users > to more places than that probably. Really the separate installers are a poor > alternative to the available scientific distributions. And the more packages > you need as a user, the more annoying these separate installers are. > >> >> I think Chuck was worried about the time it >> would take to do it, and I think you've already solved this problem. >> Ralf was worried about Scipy - see below. > > > Not just Scipy - that would be my worry number one, but the same holds for > all packages based on Numpy. You double the number of Windows installers > that every single project needs to provide. > >> >> >> > Using a >> > free toolchain capable of building the whole scipy-stack would be much >> > preferred. >> >> That's true, but there seems general agreement this is not practical >> in the very near future. >> >> > Several 64 bit Python distributions containing numpy-MKL are >> > already available, some for free. >> >> You mean EPD and AnacondaCE? I don't think we should withhold easily >> available vanilla builds because they are also available in >> company-sponsored projects. Python.org provides windows builds even >> though ActiveState is free-as-in-beer. > > > If the company-sponsored bit bothers you, there's also a 64-bit Python(x,y) > now. I'm sure you've seen that the question 'where are the 64-bit installers' comes up often for Numpy. It seems to me that we'd have to have a good reason not provide them. There will always be some number of people like me who like to install the various parts by hand, and don't like using large packages, free or or open or not. For example, I don't use macports. 
At the moment, the reasons I see you are giving are: 1) Then everyone would have to provide 64-bit binaries 2) You should use a super-package instead On those arguments, we should withdraw all binary installers. Or withdraw the 32-bit ones, on the basis that 64-bit is likely the more common now. Of course we shouldn't do that, because it will put off some measurable number of users, and convey the impression that the numpy stack is a bit shaky, because it cannot be easily installed without a monolithic build framework. I think that's a real shame, and would harm numpy, to no benefit. I guess I'm saying, please, please, pretty please, oh please oh please oh please can we have a Windows 64-bit installer? Cheers, Matthew From charlesr.harris at gmail.com Wed Feb 6 14:54:41 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 6 Feb 2013 12:54:41 -0700 Subject: [Numpy-discussion] Dealing with the mode argument in qr. In-Reply-To: References: Message-ID: On Wed, Feb 6, 2013 at 12:17 PM, Ralf Gommers wrote: > > > > On Wed, Feb 6, 2013 at 3:44 PM, Benjamin Root wrote: > >> >> >> On Tue, Feb 5, 2013 at 4:23 PM, Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> Hi All, >>> >>> This post is to bring the discussion of PR #2965to the attention of the list. There are at least three issues in play here. >>> >>> 1) The PR adds modes 'big' and 'thin' to the current modes 'full', 'r', >>> 'economic' for qr factorization. The problem is that the current 'full' is >>> actually 'thin' and 'big' should be 'full'. The solution here was to raise >>> a FutureWarning on use of 'full', alias it to 'thin' for the time being, >>> and at some distant time change 'full' to alias 'big'. >>> >> > This is asking for problems, to gain some naming consistency. I can't tell > how confusing 'full' is now, but deprecating and removing would be better > than changing what it returns. > That's what the current state of the PR is, both 'full' and 'economic' are deprecated. > > >> >>> 2) The 'economic' mode serves little purpose. I propose to deprecate it >>> and add a 'qrf' mode instead, corresponding to scipy's 'raw' mode. We can't >>> use 'raw' itself as traditionally the mode may be specified using the first >>> letter only and that leads to a conflict with 'r'. >>> >> > That's not a very good reason to not use "raw", since "raw" is a new > option and you therefore don't have to apply the rule that you can give > only the first letter to it. > Also the current state. > > >> >>> 3) As suggested in 2, the use of single letter abbreviations can >>> constrain the options in choosing mode names and they are not as >>> informative as the full name. A possibility here is to deprecate the use of >>> the abbreviations in favor of the full names. >>> >> > I'm not feeling very strongly about this, but we have to be careful about > deprecations. Possible future naming constraints on new modes is not a good > reason to deprecate. This one-letter option isn't even mentioned in the > docs it looks like. So why not leave that as is and ensure it keeps working > (add a unit test if needed)? > Currently qr requires full names for the new modes but not for the deprecated 'full' and 'economic'. That can be changed if we use 'thin' instead of 'reduced'. > >> >>> A longer term problem is the divergence between the numpy and scipy >>> versions of qr. The divergence is enough that I don't see any easy way to >>> come to a common interface, but that is something that would be desirable >>> if possible. 
>>> >> > This would be a problem imho. But I don't see why you can't add "raw" to > numpy's qr. And if you add "big" and "thin" in numpy, you can add those > modes in scipy too. > Currently I've used bfroehle's suggestions, although I'm tempted by 'thin' instead of 'reduced' Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Feb 6 15:02:41 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Feb 2013 15:02:41 -0500 Subject: [Numpy-discussion] Where's that function? In-Reply-To: References: Message-ID: On Wed, Feb 6, 2013 at 2:45 PM, Benjamin Root wrote: > > > On Wed, Feb 6, 2013 at 1:08 PM, wrote: >> >> I'm convinced that I saw a while ago a function that uses a list of >> interval boundaries to index into an array, either to iterate or to >> take. >> I thought that's very useful, but didn't make a note. >> >> Now, I have no idea where I saw this (I thought numpy), and I cannot >> find it anywhere. >> >> any clues? >> > > Some possibilities: > > np.array_split() > np.split() perfect (haven't gotten further down the list yet) >>> np.split(np.arange(15), [3,4,6, 13]) [array([0, 1, 2]), array([3]), array([4, 5]), array([ 6, 7, 8, 9, 10, 11, 12]), array([13, 14])] docstring says equal size, which fortunately is not correct http://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html Thank you, Josef > np.ndindex() > np.nditer() > np.nested_iters() > np.ravel_multi_index() > > Your description reminded me of a function I came across once, but I can't > remember if one of these was it or if it was another one. > > IHTH, > Ben Root > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From brad.froehle at gmail.com Wed Feb 6 15:38:10 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Wed, 6 Feb 2013 12:38:10 -0800 Subject: [Numpy-discussion] Dealing with the mode argument in qr. In-Reply-To: References: Message-ID: > This would be a problem imho. But I don't see why you can't add "raw" to >> numpy's qr. And if you add "big" and "thin" in numpy, you can add those >> modes in scipy too. >> > > Currently I've used bfroehle's suggestions, although I'm tempted by 'thin' > instead of 'reduced' > Thin sounds fine to me. Either way I think we need to clean up the docstring to make the different calling styles more obvious. Perhaps we can just add a quick list of variants: q, r = qr(a) # q is m-by-k, r is k-by-n q, r = qr(a, 'thin') # same as qr(a) q, r = qr(a, 'complete') # q is m-by-n, r is n-by-n r = qr(a, 'r') # r is .... a2 = qr(a, 'economic') # r is contained in the upper triangular part of a2 a2, tau = qr(a, 'raw') # ... Regards, Brad -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Feb 6 15:57:34 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 6 Feb 2013 13:57:34 -0700 Subject: [Numpy-discussion] Dealing with the mode argument in qr. In-Reply-To: References: Message-ID: On Wed, Feb 6, 2013 at 1:38 PM, Bradley M. Froehle wrote: > > This would be a problem imho. But I don't see why you can't add "raw" to >>> numpy's qr. And if you add "big" and "thin" in numpy, you can add those >>> modes in scipy too. >>> >> >> Currently I've used bfroehle's suggestions, although I'm tempted by >> 'thin' instead of 'reduced' >> > > Thin sounds fine to me. 
Either way I think we need to clean up the > docstring to make the different calling styles more obvious. Perhaps we > can just add a quick list of variants: > q, r = qr(a) # q is m-by-k, r is k-by-n > q, r = qr(a, 'thin') # same as qr(a) > q, r = qr(a, 'complete') # q is m-by-n, r is n-by-n > r = qr(a, 'r') # r is .... > a2 = qr(a, 'economic') # r is contained in the upper triangular part of a2 > a2, tau = qr(a, 'raw') # ... > > There is a list similar to that already there. Take a look and comment. Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Feb 6 16:36:16 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 6 Feb 2013 22:36:16 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: On Wed, Feb 6, 2013 at 8:46 PM, Matthew Brett wrote: > Hi, > > On Wed, Feb 6, 2013 at 10:48 AM, Ralf Gommers > wrote: > > > > > > > > On Wed, Feb 6, 2013 at 1:32 AM, Matthew Brett > > wrote: > >> > >> Hi, > >> > >> On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke > wrote: > >> > On 2/5/2013 10:51 AM, Matthew Brett wrote: > >> >> Hi, > >> >> > >> >> On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett < > matthew.brett at gmail.com> > >> >> wrote: > >> >>> Hi, > >> >>> > >> >>> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris > >> >>> wrote: > >> >>>> > >> >>>> > >> >>>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern > > >> >>>> wrote: > >> >>>>> > >> >>>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett > >> >>>>> > >> >>>>> wrote: > >> >>>>>> Hi, > >> >>>>>> > >> >>>>>> On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers > >> >>>>>> > >> >>>>>> wrote: > >> >>>>> > >> >>>>>>> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how > can > >> >>>>>>> you > >> >>>>>>> provide an Amazon image for those? > >> >>>>>> > >> >>>>>> You can make an image that is not public, I guess. I suppose > >> >>>>>> anyone > >> >>>>>> who uses the image would have to have their own licenses for the > >> >>>>>> Intel > >> >>>>>> stuff? Does anyone have experience of this? > >> >>>>> > >> >>>>> You need to purchase one license per developer: > >> >>>>> > >> >>>>> > >> >>>>> > >> >>>>> > http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 > >> >>>>> > >> >>>> > >> >>>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It > >> >>>> would be a > >> >>>> bit much to get it implemented in the next week or two. > >> >>> > >> >>> The problem with not providing these binaries is that they are at > the > >> >>> bottom of everyone's stack, so a delay in numpy holds everyone back. > >> >>> > >> >>> I can't find completely convincing stats, but it looks as though 64 > >> >>> bit windows 7 is now the most common version of Windows, at least > for > >> >>> Gamers [1] around now, and it was getting that way for everyone in > >> >>> 2010 [2]. > >> >>> > >> >>> It don't think it reflects well on on us that we don't appear to > >> >>> support 64 bits out of the box; just for example, R already has a 32 > >> >>> bit / 64 bit installer. > >> >>> > >> >>> If I understand correctly, the options for doing this right now are: > >> >>> > >> >>> 1) Minimal cost in time : ask Christophe nicely whether we can > >> >>> distribute his binaries via the Numpy page > >> >>> 2) Small cost in time / money : pay for licenses for Ondrej or me or > >> >>> someone to install the dependencies on my Berkeley machine / an > Amazon > >> >>> image. 
> >> >> > >> >> In order not to leave this discussion without a resolution: > >> >> > >> >> Christophe - would you allow us to distribute your numpy binaries for > >> >> 1.7 from the numpy sourceforge page? > >> >> > >> >> Cheers, > >> >> > >> >> Matthew > >> > > >> > > >> > I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy > >> > compiled with MSVC compilers and linked to Intel's MKL) for official > >> > numpy releases. > >> > >> Thank you - that is great. > >> > >> > However: > >> > > >> > 1) There seems to be no real consensus and urge for doing this. > >> > >> I certainly feel the urge and feel it strongly. As a packager for two > >> or three projects myself, it's a royal pain having to tell someone to > >> go to two different places for binaries depending on the number of > >> bits of their Windows. > > > > > > If you're relying on .exe installers on SF, then you have to send your > users > > to more places than that probably. Really the separate installers are a > poor > > alternative to the available scientific distributions. And the more > packages > > you need as a user, the more annoying these separate installers are. > > > >> > >> I think Chuck was worried about the time it > >> would take to do it, and I think you've already solved this problem. > >> Ralf was worried about Scipy - see below. > > > > > > Not just Scipy - that would be my worry number one, but the same holds > for > > all packages based on Numpy. You double the number of Windows installers > > that every single project needs to provide. > > > >> > >> > >> > Using a > >> > free toolchain capable of building the whole scipy-stack would be much > >> > preferred. > >> > >> That's true, but there seems general agreement this is not practical > >> in the very near future. > >> > >> > Several 64 bit Python distributions containing numpy-MKL are > >> > already available, some for free. > >> > >> You mean EPD and AnacondaCE? I don't think we should withhold easily > >> available vanilla builds because they are also available in > >> company-sponsored projects. Python.org provides windows builds even > >> though ActiveState is free-as-in-beer. > > > > > > If the company-sponsored bit bothers you, there's also a 64-bit > Python(x,y) > > now. > > I'm sure you've seen that the question 'where are the 64-bit > installers' comes up often for Numpy. > > It seems to me that we'd have to have a good reason not provide them. > There will always be some number of people like me who like to install > the various parts by hand, and don't like using large packages, free > or or open or not. For example, I don't use macports. At the moment, > the reasons I see you are giving are: > > 1) Then everyone would have to provide 64-bit binaries > Indeed. And since many packages won't do that because there's no free compilers, users just get stuck a bit later in the "installing the stack" process. I'm sure providing the binaries will help some people, but I expect it to cause as many problems as it solves. Maybe there's a volunteer to help with building official binaries for a large number of packages? If not, I'm simply not convinced that this will be a net gain for users. 2) You should use a super-package instead > > On those arguments, we should withdraw all binary installers. 32-bit is a very different situation. Or withdraw the 32-bit ones, on the basis that 64-bit is likely the more > common now. 
Of course we shouldn't do that, because it will put off > some measurable number of users, and convey the impression that the > numpy stack is a bit shaky, because it cannot be easily installed > without a monolithic build framework. The installation of Python-based stacks *is* a bit shaky, that's no big secret. Ralf > I think that's a real shame, and would harm numpy, to no benefit. > > I guess I'm saying, please, please, pretty please, oh please oh please > oh please can we have a Windows 64-bit installer? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Feb 6 17:25:01 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Wed, 6 Feb 2013 14:25:01 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: I'm trying to weed out the issues here: 1) what should the binary installer for Windows look like: * bdist_wininst is the obvious choice but apparently has issues with the newer Windows security stuff -- the real solution is for distutils to be fixed/use something else/ etc -- but is there anyone here that has expertise, interest or time to get in and add-to or fix distutils? Do binary eggs solve any of this ? maybe, but pip doesn't support that last I saw, and setuptools easy-install is kind-of sort-of broken. But coming up with the best binary installer tool is orthogonal to the other questions: 2) Should we distribute binaries at all? - No: most people need a whole stack anyway, so they should get all that from the same place: - Enthought - Python(x.y) - Chris Gohlke's repository... - Yes: Some folks just want numpy damn it! Those big ol' distributions are a mess that don't work for everyone. (I'm in that camp.....) But if we distribute binaries, we really should: - distribute them for 32 & 64 bit - Clearly define what the "official" binaries are, so that third-party packagers can build against that. Historically, this has been a huge mess on the Mac, as there are many options for Python itself: Apple's builds, Python,org builds, macports, fink, homebrew, ...... Over the years, we in teh MacPython community have tried hard to establish that the python.org builds are the "official" builds that folks should support with binaries -- this has kind-of, sort-of worked, but I still don't see a better solution. (macports et al aren't really the issue, they aren't based on binaries anyway) On Windows, this has been pretty much a non-issue: MS doesn't provide a build, and the pyton.org builds are well accepted as the default binaries. AFAIU, third party distributions are compatible as well (Active State, Enthought) The parallel here is that we can establish in the scientific python community (that is, folks building packages based on numpy) that the binaries distributed by numpy are the "official" ones that should be supported by third part binaries. If we succeed in that, then folks can get the rest of the stack from any number of sources. the python.org python builds are built with the MS compilers, is there any reason we HAVE to build with a open-source stack? Can you build third party packages against an MS-built binary numpy with open-source compilers? -Chris -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From matthew.brett at gmail.com Wed Feb 6 18:16:32 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 6 Feb 2013 15:16:32 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: Hi, On Wed, Feb 6, 2013 at 1:36 PM, Ralf Gommers wrote: > > > > On Wed, Feb 6, 2013 at 8:46 PM, Matthew Brett > wrote: >> >> Hi, >> >> On Wed, Feb 6, 2013 at 10:48 AM, Ralf Gommers >> wrote: >> > >> > >> > >> > On Wed, Feb 6, 2013 at 1:32 AM, Matthew Brett >> > wrote: >> >> >> >> Hi, >> >> >> >> On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke >> >> wrote: >> >> > On 2/5/2013 10:51 AM, Matthew Brett wrote: >> >> >> Hi, >> >> >> >> >> >> On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett >> >> >> >> >> >> wrote: >> >> >>> Hi, >> >> >>> >> >> >>> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris >> >> >>> wrote: >> >> >>>> >> >> >>>> >> >> >>>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern >> >> >>>> >> >> >>>> wrote: >> >> >>>>> >> >> >>>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett >> >> >>>>> >> >> >>>>> wrote: >> >> >>>>>> Hi, >> >> >>>>>> >> >> >>>>>> On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers >> >> >>>>>> >> >> >>>>>> wrote: >> >> >>>>> >> >> >>>>>>> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how >> >> >>>>>>> can >> >> >>>>>>> you >> >> >>>>>>> provide an Amazon image for those? >> >> >>>>>> >> >> >>>>>> You can make an image that is not public, I guess. I suppose >> >> >>>>>> anyone >> >> >>>>>> who uses the image would have to have their own licenses for the >> >> >>>>>> Intel >> >> >>>>>> stuff? Does anyone have experience of this? >> >> >>>>> >> >> >>>>> You need to purchase one license per developer: >> >> >>>>> >> >> >>>>> >> >> >>>>> >> >> >>>>> >> >> >>>>> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 >> >> >>>>> >> >> >>>> >> >> >>>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It >> >> >>>> would be a >> >> >>>> bit much to get it implemented in the next week or two. >> >> >>> >> >> >>> The problem with not providing these binaries is that they are at >> >> >>> the >> >> >>> bottom of everyone's stack, so a delay in numpy holds everyone >> >> >>> back. >> >> >>> >> >> >>> I can't find completely convincing stats, but it looks as though 64 >> >> >>> bit windows 7 is now the most common version of Windows, at least >> >> >>> for >> >> >>> Gamers [1] around now, and it was getting that way for everyone in >> >> >>> 2010 [2]. >> >> >>> >> >> >>> It don't think it reflects well on on us that we don't appear to >> >> >>> support 64 bits out of the box; just for example, R already has a >> >> >>> 32 >> >> >>> bit / 64 bit installer. >> >> >>> >> >> >>> If I understand correctly, the options for doing this right now >> >> >>> are: >> >> >>> >> >> >>> 1) Minimal cost in time : ask Christophe nicely whether we can >> >> >>> distribute his binaries via the Numpy page >> >> >>> 2) Small cost in time / money : pay for licenses for Ondrej or me >> >> >>> or >> >> >>> someone to install the dependencies on my Berkeley machine / an >> >> >>> Amazon >> >> >>> image. 
>> >> >> >> >> >> In order not to leave this discussion without a resolution: >> >> >> >> >> >> Christophe - would you allow us to distribute your numpy binaries >> >> >> for >> >> >> 1.7 from the numpy sourceforge page? >> >> >> >> >> >> Cheers, >> >> >> >> >> >> Matthew >> >> > >> >> > >> >> > I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy >> >> > compiled with MSVC compilers and linked to Intel's MKL) for official >> >> > numpy releases. >> >> >> >> Thank you - that is great. >> >> >> >> > However: >> >> > >> >> > 1) There seems to be no real consensus and urge for doing this. >> >> >> >> I certainly feel the urge and feel it strongly. As a packager for two >> >> or three projects myself, it's a royal pain having to tell someone to >> >> go to two different places for binaries depending on the number of >> >> bits of their Windows. >> > >> > >> > If you're relying on .exe installers on SF, then you have to send your >> > users >> > to more places than that probably. Really the separate installers are a >> > poor >> > alternative to the available scientific distributions. And the more >> > packages >> > you need as a user, the more annoying these separate installers are. >> > >> >> >> >> I think Chuck was worried about the time it >> >> would take to do it, and I think you've already solved this problem. >> >> Ralf was worried about Scipy - see below. >> > >> > >> > Not just Scipy - that would be my worry number one, but the same holds >> > for >> > all packages based on Numpy. You double the number of Windows installers >> > that every single project needs to provide. >> > >> >> >> >> >> >> > Using a >> >> > free toolchain capable of building the whole scipy-stack would be >> >> > much >> >> > preferred. >> >> >> >> That's true, but there seems general agreement this is not practical >> >> in the very near future. >> >> >> >> > Several 64 bit Python distributions containing numpy-MKL are >> >> > already available, some for free. >> >> >> >> You mean EPD and AnacondaCE? I don't think we should withhold easily >> >> available vanilla builds because they are also available in >> >> company-sponsored projects. Python.org provides windows builds even >> >> though ActiveState is free-as-in-beer. >> > >> > >> > If the company-sponsored bit bothers you, there's also a 64-bit >> > Python(x,y) >> > now. >> >> I'm sure you've seen that the question 'where are the 64-bit >> installers' comes up often for Numpy. >> >> It seems to me that we'd have to have a good reason not provide them. >> There will always be some number of people like me who like to install >> the various parts by hand, and don't like using large packages, free >> or or open or not. For example, I don't use macports. At the moment, >> the reasons I see you are giving are: >> >> 1) Then everyone would have to provide 64-bit binaries > > > Indeed. And since many packages won't do that because there's no free > compilers, users just get stuck a bit later in the "installing the stack" > process. I'm sure providing the binaries will help some people, but I expect > it to cause as many problems as it solves. Can you clarify the people you think will get stuck? I think I'm right in saying that anyone with a C extension should be able to build them against numpy, by installing the free (as-in-beer) MS tools? So do you just mean people needing a Fortran compiler? That's a small constituency, I think. 
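To make that concrete: a project with a C extension only needs numpy's headers
plus whichever compiler matches the Python build. A minimal setup.py is roughly
like this sketch (the package and source file names are made up):

    from distutils.core import setup, Extension

    import numpy as np

    setup(
        name='mypkg',
        ext_modules=[
            Extension(
                'mypkg._mymodule',
                sources=['src/mymodule.c'],
                # numpy just has to supply its headers; the C compiler is
                # whichever one matches the Python build.
                include_dirs=[np.get_include()],
            ),
        ],
    )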
It seems to me against the give-it-a-go spirit of open source to say 'sure we can build installers, but you might get stuck later on so we won't give them to you'. And, if we start providing installers, I suspect we'll find that people start solving the remaining issues much more quickly than would happen if we don't provide them. Cheers, Matthew From chris.barker at noaa.gov Wed Feb 6 19:52:44 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Wed, 6 Feb 2013 16:52:44 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: On Wed, Feb 6, 2013 at 3:16 PM, Matthew Brett wrote: > Can you clarify the people you think will get stuck? I think I'm > right in saying that anyone with a C extension should be able to build > them against numpy, by installing the free (as-in-beer) MS tools? yup -- and that's the recommended (easiest) way to do it against the the python.org python. and you can use mingw to compile extensions that will run with the python.org python (built with MSVC), can you not use mingw to build extensions that will work with a MSVC-build numpy? > So > do you just mean people needing a Fortran compiler? That's a small > constituency, I think. particularly small overlap between needing fortran (and knowing how to deal with building it) and needing binaries of numpy.... if someone has a fortran-extension-building solution -- is it likeley not to work with a MSVC-built numpy? > And, if we start providing installers, I > suspect we'll find that people start solving the remaining issues much > more quickly than would happen if we don't provide them. +1 -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ondrej.certik at gmail.com Wed Feb 6 22:10:35 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Wed, 6 Feb 2013 19:10:35 -0800 Subject: [Numpy-discussion] ANN: NumPy 1.7.0rc2 release Message-ID: Hi, I'm pleased to announce the availability of the second release candidate of NumPy 1.7.0rc2. Sources and binary installers can be found at https://sourceforge.net/projects/numpy/files/NumPy/1.7.0rc2/ We have fixed all issues known to us since the 1.7.0rc1 release. Please test this release and report any issues on the numpy-discussion mailing list. If there are no further problems, I plan to release the final version in a few days. I would like to thank Sandro Tosi, Sebastian Berg, Charles Harris, Marcin Juszkiewicz, Mark Wiebe, Ralf Gommers and Nathaniel J. Smith for sending patches, fixes and helping with reviews for this release since 1.7.0rc1, and Vincent Davis for providing the Mac build machine. Cheers, Ondrej ========================= NumPy 1.7.0 Release Notes ========================= This release includes several new features as well as numerous bug fixes and refactorings. It supports Python 2.4 - 2.7 and 3.1 - 3.3 and is the last release that supports Python 2.4 - 2.5. 
Highlights ========== * ``where=`` parameter to ufuncs (allows the use of boolean arrays to choose where a computation should be done) * ``vectorize`` improvements (added 'excluded' and 'cache' keyword, general cleanup and bug fixes) * ``numpy.random.choice`` (random sample generating function) Compatibility notes =================== In a future version of numpy, the functions np.diag, np.diagonal, and the diagonal method of ndarrays will return a view onto the original array, instead of producing a copy as they do now. This makes a difference if you write to the array returned by any of these functions. To facilitate this transition, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for np.diagonal for details. Similar to np.diagonal above, in a future version of numpy, indexing a record array by a list of field names will return a view onto the original array, instead of producing a copy as they do now. As with np.diagonal, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for array indexing for details. In a future version of numpy, the default casting rule for UFunc out= parameters will be changed from 'unsafe' to 'same_kind'. (This also applies to in-place operations like a += b, which is equivalent to np.add(a, b, out=a).) Most usages which violate the 'same_kind' rule are likely bugs, so this change may expose previously undetected errors in projects that depend on NumPy. In this version of numpy, such usages will continue to succeed, but will raise a DeprecationWarning. Full-array boolean indexing has been optimized to use a different, optimized code path. This code path should produce the same results, but any feedback about changes to your code would be appreciated. Attempting to write to a read-only array (one with ``arr.flags.writeable`` set to ``False``) used to raise either a RuntimeError, ValueError, or TypeError inconsistently, depending on which code path was taken. It now consistently raises a ValueError. The .reduce functions evaluate some reductions in a different order than in previous versions of NumPy, generally providing higher performance. Because of the nature of floating-point arithmetic, this may subtly change some results, just as linking NumPy to a different BLAS implementations such as MKL can. If upgrading from 1.5, then generally in 1.6 and 1.7 there have been substantial code added and some code paths altered, particularly in the areas of type resolution and buffered iteration over universal functions. This might have an impact on your code particularly if you relied on accidental behavior in the past. New features ============ Reduction UFuncs Generalize axis= Parameter ------------------------------------------- Any ufunc.reduce function call, as well as other reductions like sum, prod, any, all, max and min support the ability to choose a subset of the axes to reduce over. Previously, one could say axis=None to mean all the axes or axis=# to pick a single axis. Now, one can also say axis=(#,#) to pick a list of axes for reduction. Reduction UFuncs New keepdims= Parameter ---------------------------------------- There is a new keepdims= parameter, which if set to True, doesn't throw away the reduction axes but instead sets them to have size one. When this option is set, the reduction result will broadcast correctly to the original operand which was reduced. Datetime support ---------------- .. 
note:: The datetime API is *experimental* in 1.7.0, and may undergo changes in future versions of NumPy. There have been a lot of fixes and enhancements to datetime64 compared to NumPy 1.6: * the parser is quite strict about only accepting ISO 8601 dates, with a few convenience extensions * converts between units correctly * datetime arithmetic works correctly * business day functionality (allows the datetime to be used in contexts where only certain days of the week are valid) The notes in `doc/source/reference/arrays.datetime.rst `_ (also available in the online docs at `arrays.datetime.html `_) should be consulted for more details. Custom formatter for printing arrays ------------------------------------ See the new ``formatter`` parameter of the ``numpy.set_printoptions`` function. New function numpy.random.choice --------------------------------- A generic sampling function has been added which will generate samples from a given array-like. The samples can be with or without replacement, and with uniform or given non-uniform probabilities. New function isclose -------------------- Returns a boolean array where two arrays are element-wise equal within a tolerance. Both relative and absolute tolerance can be specified. Preliminary multi-dimensional support in the polynomial package --------------------------------------------------------------- Axis keywords have been added to the integration and differentiation functions and a tensor keyword was added to the evaluation functions. These additions allow multi-dimensional coefficient arrays to be used in those functions. New functions for evaluating 2-D and 3-D coefficient arrays on grids or sets of points were added together with 2-D and 3-D pseudo-Vandermonde matrices that can be used for fitting. Ability to pad rank-n arrays ---------------------------- A pad module containing functions for padding n-dimensional arrays has been added. The various private padding functions are exposed as options to a public 'pad' function. Example:: pad(a, 5, mode='mean') Current modes are ``constant``, ``edge``, ``linear_ramp``, ``maximum``, ``mean``, ``median``, ``minimum``, ``reflect``, ``symmetric``, ``wrap``, and ````. New argument to searchsorted ---------------------------- The function searchsorted now accepts a 'sorter' argument that is a permutation array that sorts the array to search. Build system ------------ Added experimental support for the AArch64 architecture. C API ----- New function ``PyArray_RequireWriteable`` provides a consistent interface for checking array writeability -- any C code which works with arrays whose WRITEABLE flag is not known to be True a priori, should make sure to call this function before writing. NumPy C Style Guide added (``doc/C_STYLE_GUIDE.rst.txt``). Changes ======= General ------- The function np.concatenate tries to match the layout of its input arrays. Previously, the layout did not follow any particular reason, and depended in an undesirable way on the particular axis chosen for concatenation. A bug was also fixed which silently allowed out of bounds axis arguments. The ufuncs logical_or, logical_and, and logical_not now follow Python's behavior with object arrays, instead of trying to call methods on the objects. For example the expression (3 and 'test') produces the string 'test', and now np.logical_and(np.array(3, 'O'), np.array('test', 'O')) produces 'test' as well. 
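For illustration, a short sketch (just an example, not an exhaustive list)
exercising a few of the additions described earlier in these notes
(tuple-of-axes reductions, ``keepdims``, ``numpy.random.choice`` and
``isclose``), plus the object-array behaviour of the logical ufuncs noted
above::

    import numpy as np

    a = np.arange(24).reshape(2, 3, 4)

    # Reduce over a tuple of axes, optionally keeping the reduced dimensions.
    s = a.sum(axis=(0, 2))              # shape (3,)
    m = a.mean(axis=1, keepdims=True)   # shape (2, 1, 4), broadcasts against a

    # Draw three distinct samples from range(5).
    picks = np.random.choice(5, size=3, replace=False)

    # Element-wise approximate equality with relative/absolute tolerances.
    flags = np.isclose([1.0, 1e-8], [1.0 + 1e-9, 0.0], rtol=1e-5, atol=1e-8)

    # Logical ufuncs follow Python semantics on object arrays, as noted above.
    val = np.logical_and(np.array(3, 'O'), np.array('test', 'O'))  # 'test'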
The ``.base`` attribute on ndarrays, which is used on views to ensure that the
underlying array owning the memory is not deallocated prematurely, now
collapses out references when you have a view-of-a-view. For example::

    a = np.arange(10)
    b = a[1:]
    c = b[1:]

In numpy 1.6, ``c.base`` is ``b``, and ``c.base.base`` is ``a``. In numpy 1.7,
``c.base`` is ``a``.

To increase backwards compatibility for software which relies on the old
behaviour of ``.base``, we only 'skip over' objects which have exactly the same
type as the newly created view. This makes a difference if you use ``ndarray``
subclasses. For example, if we have a mix of ``ndarray`` and ``matrix`` objects
which are all views on the same original ``ndarray``::

    a = np.arange(10)
    b = np.asmatrix(a)
    c = b[0, 1:]
    d = c[0, 1:]

then ``d.base`` will be ``b``. This is because ``d`` is a ``matrix`` object,
and so the collapsing process only continues so long as it encounters other
``matrix`` objects. It considers ``c``, ``b``, and ``a`` in that order, and
``b`` is the last entry in that list which is a ``matrix`` object.

Casting Rules
-------------

Casting rules have undergone some changes in corner cases, due to the
NA-related work. In particular for combinations of scalar+scalar:

* the `longlong` type (`q`) now stays `longlong` for operations with any other
  number (`? b h i l q p B H I`), previously it was cast as `int_` (`l`). The
  `ulonglong` type (`Q`) now stays as `ulonglong` instead of `uint` (`L`).

* the `timedelta64` type (`m`) can now be mixed with any integer type
  (`b h i l q p B H I L Q P`), previously it raised `TypeError`.

For array + scalar, the above rules just broadcast except the case when the
array and scalars are unsigned/signed integers, then the result gets converted
to the array type (of possibly larger size) as illustrated by the following
examples::

    >>> (np.zeros((2,), dtype=np.uint8) + np.int16(257)).dtype
    dtype('uint16')
    >>> (np.zeros((2,), dtype=np.int8) + np.uint16(257)).dtype
    dtype('int16')
    >>> (np.zeros((2,), dtype=np.int16) + np.uint32(2**17)).dtype
    dtype('int32')

Whether the size gets increased depends on the size of the scalar, for
example::

    >>> (np.zeros((2,), dtype=np.uint8) + np.int16(255)).dtype
    dtype('uint8')
    >>> (np.zeros((2,), dtype=np.uint8) + np.int16(256)).dtype
    dtype('uint16')

Also a ``complex128`` scalar + ``float32`` array is cast to ``complex64``.

In NumPy 1.7 the `datetime64` type (`M`) must be constructed by explicitly
specifying the type as the second argument (e.g. ``np.datetime64(2000, 'Y')``).

Deprecations
============

General
-------

Specifying a custom string formatter with a `_format` array attribute is
deprecated. The new `formatter` keyword in ``numpy.set_printoptions`` or
``numpy.array2string`` can be used instead.

The deprecated imports in the polynomial package have been removed.

``concatenate`` now raises DeprecationWarning for 1D arrays if ``axis != 0``.
Versions of numpy < 1.7.0 ignored the axis argument value for 1D arrays. We
allow this for now, but in due course we will raise an error.

C-API
-----

Direct access to the fields of PyArrayObject* has been deprecated. Direct
access has been recommended against for many releases. Expect similar
deprecations for PyArray_Descr* and other core objects in the future as
preparation for NumPy 2.0.

The macros in old_defines.h are deprecated and will be removed in the next
major release (>= 2.0). The sed script tools/replace_old_macros.sed can be
used to replace these macros with the newer versions.
You can test your code against the deprecated C API by #defining NPY_NO_DEPRECATED_API to the target version number, for example NPY_1_7_API_VERSION, before including any NumPy headers. The ``NPY_CHAR`` member of the ``NPY_TYPES`` enum is deprecated and will be removed in NumPy 1.8. See the discussion at `gh-2801 `_ for more details. From d.s.seljebotn at astro.uio.no Thu Feb 7 00:20:31 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Thu, 07 Feb 2013 06:20:31 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: <5113399F.3090803@astro.uio.no> On 02/07/2013 12:16 AM, Matthew Brett wrote: > Hi, > > On Wed, Feb 6, 2013 at 1:36 PM, Ralf Gommers wrote: >> >> >> >> On Wed, Feb 6, 2013 at 8:46 PM, Matthew Brett >> wrote: >>> >>> Hi, >>> >>> On Wed, Feb 6, 2013 at 10:48 AM, Ralf Gommers >>> wrote: >>>> >>>> >>>> >>>> On Wed, Feb 6, 2013 at 1:32 AM, Matthew Brett >>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke >>>>> wrote: >>>>>> On 2/5/2013 10:51 AM, Matthew Brett wrote: >>>>>>> Hi, >>>>>>> >>>>>>> On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett >>>>>>> >>>>>>> wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern >>>>>>>>> >>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett >>>>>>>>>> >>>>>>>>>> wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers >>>>>>>>>>> >>>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>>> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how >>>>>>>>>>>> can >>>>>>>>>>>> you >>>>>>>>>>>> provide an Amazon image for those? >>>>>>>>>>> >>>>>>>>>>> You can make an image that is not public, I guess. I suppose >>>>>>>>>>> anyone >>>>>>>>>>> who uses the image would have to have their own licenses for the >>>>>>>>>>> Intel >>>>>>>>>>> stuff? Does anyone have experience of this? >>>>>>>>>> >>>>>>>>>> You need to purchase one license per developer: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 >>>>>>>>>> >>>>>>>>> >>>>>>>>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It >>>>>>>>> would be a >>>>>>>>> bit much to get it implemented in the next week or two. >>>>>>>> >>>>>>>> The problem with not providing these binaries is that they are at >>>>>>>> the >>>>>>>> bottom of everyone's stack, so a delay in numpy holds everyone >>>>>>>> back. >>>>>>>> >>>>>>>> I can't find completely convincing stats, but it looks as though 64 >>>>>>>> bit windows 7 is now the most common version of Windows, at least >>>>>>>> for >>>>>>>> Gamers [1] around now, and it was getting that way for everyone in >>>>>>>> 2010 [2]. >>>>>>>> >>>>>>>> It don't think it reflects well on on us that we don't appear to >>>>>>>> support 64 bits out of the box; just for example, R already has a >>>>>>>> 32 >>>>>>>> bit / 64 bit installer. >>>>>>>> >>>>>>>> If I understand correctly, the options for doing this right now >>>>>>>> are: >>>>>>>> >>>>>>>> 1) Minimal cost in time : ask Christophe nicely whether we can >>>>>>>> distribute his binaries via the Numpy page >>>>>>>> 2) Small cost in time / money : pay for licenses for Ondrej or me >>>>>>>> or >>>>>>>> someone to install the dependencies on my Berkeley machine / an >>>>>>>> Amazon >>>>>>>> image. 
>>>>>>> >>>>>>> In order not to leave this discussion without a resolution: >>>>>>> >>>>>>> Christophe - would you allow us to distribute your numpy binaries >>>>>>> for >>>>>>> 1.7 from the numpy sourceforge page? >>>>>>> >>>>>>> Cheers, >>>>>>> >>>>>>> Matthew >>>>>> >>>>>> >>>>>> I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy >>>>>> compiled with MSVC compilers and linked to Intel's MKL) for official >>>>>> numpy releases. >>>>> >>>>> Thank you - that is great. >>>>> >>>>>> However: >>>>>> >>>>>> 1) There seems to be no real consensus and urge for doing this. >>>>> >>>>> I certainly feel the urge and feel it strongly. As a packager for two >>>>> or three projects myself, it's a royal pain having to tell someone to >>>>> go to two different places for binaries depending on the number of >>>>> bits of their Windows. >>>> >>>> >>>> If you're relying on .exe installers on SF, then you have to send your >>>> users >>>> to more places than that probably. Really the separate installers are a >>>> poor >>>> alternative to the available scientific distributions. And the more >>>> packages >>>> you need as a user, the more annoying these separate installers are. >>>> >>>>> >>>>> I think Chuck was worried about the time it >>>>> would take to do it, and I think you've already solved this problem. >>>>> Ralf was worried about Scipy - see below. >>>> >>>> >>>> Not just Scipy - that would be my worry number one, but the same holds >>>> for >>>> all packages based on Numpy. You double the number of Windows installers >>>> that every single project needs to provide. >>>> >>>>> >>>>> >>>>>> Using a >>>>>> free toolchain capable of building the whole scipy-stack would be >>>>>> much >>>>>> preferred. >>>>> >>>>> That's true, but there seems general agreement this is not practical >>>>> in the very near future. >>>>> >>>>>> Several 64 bit Python distributions containing numpy-MKL are >>>>>> already available, some for free. >>>>> >>>>> You mean EPD and AnacondaCE? I don't think we should withhold easily >>>>> available vanilla builds because they are also available in >>>>> company-sponsored projects. Python.org provides windows builds even >>>>> though ActiveState is free-as-in-beer. >>>> >>>> >>>> If the company-sponsored bit bothers you, there's also a 64-bit >>>> Python(x,y) >>>> now. >>> >>> I'm sure you've seen that the question 'where are the 64-bit >>> installers' comes up often for Numpy. >>> >>> It seems to me that we'd have to have a good reason not provide them. >>> There will always be some number of people like me who like to install >>> the various parts by hand, and don't like using large packages, free >>> or or open or not. For example, I don't use macports. At the moment, >>> the reasons I see you are giving are: >>> >>> 1) Then everyone would have to provide 64-bit binaries >> >> >> Indeed. And since many packages won't do that because there's no free >> compilers, users just get stuck a bit later in the "installing the stack" >> process. I'm sure providing the binaries will help some people, but I expect >> it to cause as many problems as it solves. > > Can you clarify the people you think will get stuck? I think I'm > right in saying that anyone with a C extension should be able to build > them against numpy, by installing the free (as-in-beer) MS tools? So > do you just mean people needing a Fortran compiler? That's a small > constituency, I think. Off the top of my head there's SciPy and pymc... 
Anyway, I'm butting in because I wish this discussion could separate between the user perspective and the developer perspective. FWIW, 1) From a user's perspective, I don't understand this either. If you are already using a closed source, not-free-as-in-beer operating system, why would you not use (or buy!) a closed source, not-free-as-in-beer Fortran compiler? 2) BUT, the argument I've seen that I can at least understand is that the release manager should be able to do a release using only open source tools (even using Wine instead of Windows) and not rely on a limited number of licenses. And that the release manager must be able to perform all the official builds directly. Dag Sverre From matthew.brett at gmail.com Thu Feb 7 01:10:10 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 6 Feb 2013 22:10:10 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: <5113399F.3090803@astro.uio.no> References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: Hi, On Wed, Feb 6, 2013 at 9:20 PM, Dag Sverre Seljebotn wrote: > On 02/07/2013 12:16 AM, Matthew Brett wrote: >> Hi, >> >> On Wed, Feb 6, 2013 at 1:36 PM, Ralf Gommers wrote: >>> >>> >>> >>> On Wed, Feb 6, 2013 at 8:46 PM, Matthew Brett >>> wrote: >>>> >>>> Hi, >>>> >>>> On Wed, Feb 6, 2013 at 10:48 AM, Ralf Gommers >>>> wrote: >>>>> >>>>> >>>>> >>>>> On Wed, Feb 6, 2013 at 1:32 AM, Matthew Brett >>>>> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke >>>>>> wrote: >>>>>>> On 2/5/2013 10:51 AM, Matthew Brett wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> On Mon, Feb 4, 2013 at 5:09 PM, Matthew Brett >>>>>>>> >>>>>>>> wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> On Mon, Feb 4, 2013 at 3:46 PM, Charles R Harris >>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Mon, Feb 4, 2013 at 4:04 PM, Robert Kern >>>>>>>>>> >>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> On Mon, Feb 4, 2013 at 10:38 PM, Matthew Brett >>>>>>>>>>> >>>>>>>>>>> wrote: >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> On Mon, Feb 4, 2013 at 1:15 PM, Ralf Gommers >>>>>>>>>>>> >>>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>>> MSVC + Intel Fortran + MKL, yes. But those aren't free. So how >>>>>>>>>>>>> can >>>>>>>>>>>>> you >>>>>>>>>>>>> provide an Amazon image for those? >>>>>>>>>>>> >>>>>>>>>>>> You can make an image that is not public, I guess. I suppose >>>>>>>>>>>> anyone >>>>>>>>>>>> who uses the image would have to have their own licenses for the >>>>>>>>>>>> Intel >>>>>>>>>>>> stuff? Does anyone have experience of this? >>>>>>>>>>> >>>>>>>>>>> You need to purchase one license per developer: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> http://software.intel.com/en-us/articles/intel-math-kernel-library-licensing-faq#eula1 >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I think 64 bits on windows is best pushed off to 1.7.1 or 1.8. It >>>>>>>>>> would be a >>>>>>>>>> bit much to get it implemented in the next week or two. >>>>>>>>> >>>>>>>>> The problem with not providing these binaries is that they are at >>>>>>>>> the >>>>>>>>> bottom of everyone's stack, so a delay in numpy holds everyone >>>>>>>>> back. >>>>>>>>> >>>>>>>>> I can't find completely convincing stats, but it looks as though 64 >>>>>>>>> bit windows 7 is now the most common version of Windows, at least >>>>>>>>> for >>>>>>>>> Gamers [1] around now, and it was getting that way for everyone in >>>>>>>>> 2010 [2]. 
>>>>>>>>> >>>>>>>>> It don't think it reflects well on on us that we don't appear to >>>>>>>>> support 64 bits out of the box; just for example, R already has a >>>>>>>>> 32 >>>>>>>>> bit / 64 bit installer. >>>>>>>>> >>>>>>>>> If I understand correctly, the options for doing this right now >>>>>>>>> are: >>>>>>>>> >>>>>>>>> 1) Minimal cost in time : ask Christophe nicely whether we can >>>>>>>>> distribute his binaries via the Numpy page >>>>>>>>> 2) Small cost in time / money : pay for licenses for Ondrej or me >>>>>>>>> or >>>>>>>>> someone to install the dependencies on my Berkeley machine / an >>>>>>>>> Amazon >>>>>>>>> image. >>>>>>>> >>>>>>>> In order not to leave this discussion without a resolution: >>>>>>>> >>>>>>>> Christophe - would you allow us to distribute your numpy binaries >>>>>>>> for >>>>>>>> 1.7 from the numpy sourceforge page? >>>>>>>> >>>>>>>> Cheers, >>>>>>>> >>>>>>>> Matthew >>>>>>> >>>>>>> >>>>>>> I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy >>>>>>> compiled with MSVC compilers and linked to Intel's MKL) for official >>>>>>> numpy releases. >>>>>> >>>>>> Thank you - that is great. >>>>>> >>>>>>> However: >>>>>>> >>>>>>> 1) There seems to be no real consensus and urge for doing this. >>>>>> >>>>>> I certainly feel the urge and feel it strongly. As a packager for two >>>>>> or three projects myself, it's a royal pain having to tell someone to >>>>>> go to two different places for binaries depending on the number of >>>>>> bits of their Windows. >>>>> >>>>> >>>>> If you're relying on .exe installers on SF, then you have to send your >>>>> users >>>>> to more places than that probably. Really the separate installers are a >>>>> poor >>>>> alternative to the available scientific distributions. And the more >>>>> packages >>>>> you need as a user, the more annoying these separate installers are. >>>>> >>>>>> >>>>>> I think Chuck was worried about the time it >>>>>> would take to do it, and I think you've already solved this problem. >>>>>> Ralf was worried about Scipy - see below. >>>>> >>>>> >>>>> Not just Scipy - that would be my worry number one, but the same holds >>>>> for >>>>> all packages based on Numpy. You double the number of Windows installers >>>>> that every single project needs to provide. >>>>> >>>>>> >>>>>> >>>>>>> Using a >>>>>>> free toolchain capable of building the whole scipy-stack would be >>>>>>> much >>>>>>> preferred. >>>>>> >>>>>> That's true, but there seems general agreement this is not practical >>>>>> in the very near future. >>>>>> >>>>>>> Several 64 bit Python distributions containing numpy-MKL are >>>>>>> already available, some for free. >>>>>> >>>>>> You mean EPD and AnacondaCE? I don't think we should withhold easily >>>>>> available vanilla builds because they are also available in >>>>>> company-sponsored projects. Python.org provides windows builds even >>>>>> though ActiveState is free-as-in-beer. >>>>> >>>>> >>>>> If the company-sponsored bit bothers you, there's also a 64-bit >>>>> Python(x,y) >>>>> now. >>>> >>>> I'm sure you've seen that the question 'where are the 64-bit >>>> installers' comes up often for Numpy. >>>> >>>> It seems to me that we'd have to have a good reason not provide them. >>>> There will always be some number of people like me who like to install >>>> the various parts by hand, and don't like using large packages, free >>>> or or open or not. For example, I don't use macports. 
At the moment, >>>> the reasons I see you are giving are: >>>> >>>> 1) Then everyone would have to provide 64-bit binaries >>> >>> >>> Indeed. And since many packages won't do that because there's no free >>> compilers, users just get stuck a bit later in the "installing the stack" >>> process. I'm sure providing the binaries will help some people, but I expect >>> it to cause as many problems as it solves. >> >> Can you clarify the people you think will get stuck? I think I'm >> right in saying that anyone with a C extension should be able to build >> them against numpy, by installing the free (as-in-beer) MS tools? So >> do you just mean people needing a Fortran compiler? That's a small >> constituency, I think. > > Off the top of my head there's SciPy and pymc... We covered Scipy a while back in this thread. The proposal was that Christophe also supply these. Perhaps we can at least agree that the large majority of numpy-based projects that require compilation do not need FORTRAN. And if they do, then, as Chris said, they are likely to have sophisticated developers who are not likely to be confused by a new Windows build for numpy. > Anyway, I'm butting in because I wish this discussion could separate > between the user perspective and the developer perspective. > > FWIW, > > 1) From a user's perspective, I don't understand this either. If you are > already using a closed source, not-free-as-in-beer operating system, why > would you not use (or buy!) a closed source, not-free-as-in-beer Fortran > compiler? I think you might be asking whether Windows users care about having free (beer and or freedom) software? I think the answer to this is yes. As indeed OSX users care about having free software. > 2) BUT, the argument I've seen that I can at least understand is that > the release manager should be able to do a release using only open > source tools (even using Wine instead of Windows) and not rely on a > limited number of licenses. And that the release manager must be able to > perform all the official builds directly. I haven't heard that argument made. I've heard the general wish that we could build numpy with open-source tools. I haven't heard anyone assert that we won't release anything that can't be build with open-source tools. Python itself is not built with open-source tools, so I can't personally see the reason for this restriction. Perhaps you can explain it? I haven't heard the argument that the release manager should be able to make all the builds. That would be nice, obviously, but as an absolute rule it seems unnecessary. For example, is it your understanding that all the Python builds are made by the same person? Why would we want to be more restrictive than that? I don't know if anyone has estimated the number of people running Numpy via windows, but I notice from the download stats on Sourceforge, that there are around 800 downloads of OSX binary installers and over 4000 downloads of windows binaries. To be clear, what we seem to be heading for at the moment is this: * We have a experienced builder in Christophe building Windows 64 binaries for numpy and scipy * He has offered us at least Numpy and maybe Scipy * We are saying no to this zero-cost-to-us build because of one of the following reasons 1) Anyone using Windows 64 should be using EPD or AnacondaCE [1]. 
To enforce this we should not provide our own build
2) If we provide a Windows 64 build then other people will have to provide
them too
3) We insist for some reason on only using open-source build tools
4) We insist for some reason that the release manager personally builds all
the builds.

This means that there is no way that I, as a packager of - say - nipy - can
give you a binary installer for Windows 64 bits without making you install
one of (Christophe's builds, EPD, AnacondaCE). Which one should I build an
installer for? All three? You have to give up your Python.org python to use
my package?

We are, because of this, leaving more than half of what appears to be our
largest user base without a standard binary installer.

I suppose, in 6 months time, someone will look for a Windows 64 bit
installer, and they will not see one, and they will think 'I wonder why?'.
Then they will look at this thread. I wonder if they'll think we made the
right decision or not.

Cheers,

Matthew

[1] I can't find a 64-bit Python X Y but maybe one exists.

From ondrej.certik at gmail.com Thu Feb 7 01:21:47 2013
From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=)
Date: Wed, 6 Feb 2013 22:21:47 -0800
Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7?
In-Reply-To: <5113399F.3090803@astro.uio.no>
References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no>
Message-ID:

On Wed, Feb 6, 2013 at 9:20 PM, Dag Sverre Seljebotn wrote:
> On 02/07/2013 12:16 AM, Matthew Brett wrote:
[...]
>> Can you clarify the people you think will get stuck? I think I'm
>> right in saying that anyone with a C extension should be able to build
>> them against numpy, by installing the free (as-in-beer) MS tools? So
>> do you just mean people needing a Fortran compiler? That's a small
>> constituency, I think.
>
> Off the top of my head there's SciPy and pymc...
>
> Anyway, I'm butting in because I wish this discussion could separate
> between the user perspective and the developer perspective.
>
> FWIW,
>
> 1) From a user's perspective, I don't understand this either. If you are
> already using a closed source, not-free-as-in-beer operating system, why
> would you not use (or buy!) a closed source, not-free-as-in-beer Fortran
> compiler?

Indeed. Though I really have no clue on the Windows use cases. Maybe
most Windows users don't want to compile anything, just use numpy and
scipy from Python?

> 2) BUT, the argument I've seen that I can at least understand is that
> the release manager should be able to do a release using only open
> source tools (even using Wine instead of Windows) and not rely on a
> limited number of licenses. And that the release manager must be able to
> perform all the official builds directly.

As the release manager, I really only have two requirements:

* I want to ssh in there from my Ubuntu
* I want to automate the whole process

For Mac, linux and Wine I can do that.

So I have just spent a few hours browsing the net and it looks like the
combination of Windows PowerShell 2.0:

http://en.wikipedia.org/wiki/Windows_PowerShell

and some SSH server, there are quite a few, one commercial but free for
one user one connection (perfect for me!):

http://www.powershellinside.com/powershell/ssh/

So if I understand the pages correctly, I can login there from linux,
and then I use the PowerShell commands to script anything.
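Roughly the kind of thing I have in mind, as a sketch (the host and user
names are made up, and it assumes an SSH server plus PowerShell on the
Windows box, driven for example with paramiko from the Ubuntu side):

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('windows-build-machine', username='buildbot')

    # Each build step is one PowerShell command executed over SSH.
    stdin, stdout, stderr = client.exec_command(
        'powershell -Command "cd numpy; python setup.py bdist_wininst"')
    print(stdout.read())
    client.close()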
It looks like I can even use my Fabric fabfiles with powershell: https://gist.github.com/diyan/2850866 I can also use git with PowerShell: http://windows.github.com/ http://haacked.com/archive/2011/12/13/better-git-with-powershell.aspx So the final problem is how to execute MSVC and Fortran from Power Shell on Windows. These links might help for MSVC: http://stackoverflow.com/questions/4398136/use-powershell-for-visual-studio-command-prompt http://geekswithblogs.net/dwdii/archive/2011/05/20/automating-a-visual-studio-build-with-powershell---part-1.aspx Finally, for Intel Fortran + powershell: http://software.intel.com/en-us/forums/topic/284425 So I think it is all possible. If somebody can provide a machine with Windows, MSVC, PowerShell2.0, SSH server and some Fortran compiler, it should be possible for me to automate everything from Ubuntu using my Fabric files (https://github.com/certik/numpy-vendor). Ondrej From ondrej.certik at gmail.com Thu Feb 7 01:35:09 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Wed, 6 Feb 2013 22:35:09 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: <51119007.6090806@uci.edu> References: <51119007.6090806@uci.edu> Message-ID: Christoph, On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke wrote: [...] >> In order not to leave this discussion without a resolution: >> >> Christophe - would you allow us to distribute your numpy binaries for >> 1.7 from the numpy sourceforge page? >> >> Cheers, >> >> Matthew > > > I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy > compiled with MSVC compilers and linked to Intel's MKL) for official > numpy releases. > > However: > > 1) There seems to be no real consensus and urge for doing this. Using a > free toolchain capable of building the whole scipy-stack would be much > preferred. Several 64 bit Python distributions containing numpy-MKL are > already available, some for free. > > 2) Releasing 64 bit numpy without matching scipy binaries would make > little sense to me. > > 3) Please do not just redistribute the binaries from my website and > declare them official. They might contain unreleased fixes from git > master and pull requests that are needed for my work and other packages. > > 4) Numpy-MKL requires the Intel runtime DLLs (MKL is linked statically > btw). I ship those with the installers and append the directory > containing the DLLs to os.environ['PATH'] in numpy/__init__.py. This is > a big no-no according to numpy developers. I don't agree. Anyway, those > changes are not in the numpy source repositories. > > 5) My numpy-MKL installers are Python distutils bdist_wininst > installers. That means if Python was installed for all users, installing > numpy-MKL on Windows >6.0 will prompt for UAC elevation. Another no-no? I think that all these things should be possible to fix so that the binary is acceptable for the official NumPy binary. How exactly do you build the binaries? I wasn't able to find the info at: http://www.lfd.uci.edu/~gohlke/pythonlibs/ Do you have some scripts to do that? Do you use PowerShell? Or you do it by hand by mouse and clicks in Visual Studio somehow? If I can figure out how to do these builds, I'll be happy to figure out how to automate it and then we can try to figure out a solution that works for NumPy. 
Ondrej From ondrej.certik at gmail.com Thu Feb 7 01:37:22 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Wed, 6 Feb 2013 22:37:22 -0800 Subject: [Numpy-discussion] Will numpy 1.7.0 final be binary compatible with the rc? In-Reply-To: References: Message-ID: On Wed, Feb 6, 2013 at 2:37 AM, Peter Cock wrote: > On Wed, Feb 6, 2013 at 3:46 AM, Ond?ej ?ert?k wrote: >> On Tue, Feb 5, 2013 at 12:22 PM, Ralf Gommers wrote: >>> On Tue, Feb 5, 2013 at 3:01 PM, Peter Cock >>> wrote: >>>> >>>> Hello all, >>>> >>>> Will the numpy 1.7.0 'final' be binary compatible with the release >>>> candidate(s)? i.e. Would it be safe for me to release a Windows >>>> installer for a package using the NumPy C API compiled against >>>> the NumPy 1.7.0rc? >>> >>> >>> Yes, that should be safe. >> >> Yes. I plan to release rc2 immediately once >> >> https://github.com/numpy/numpy/pull/2964 >> >> is merged (e.g. I am hoping for today). The final should then be >> identical to rc2. >> >> Ondrej > > Great - in that case I'll wait a couple of days and use rc2 for this > (just in case there is a subtle difference from rc1). Awesome. The rc2 is out. Ondrej From matthew.brett at gmail.com Thu Feb 7 01:41:28 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 6 Feb 2013 22:41:28 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: Hi, On Wed, Feb 6, 2013 at 10:21 PM, Ond?ej ?ert?k wrote: > On Wed, Feb 6, 2013 at 9:20 PM, Dag Sverre Seljebotn > wrote: >> On 02/07/2013 12:16 AM, Matthew Brett wrote: > [...] >>> Can you clarify the people you think will get stuck? I think I'm >>> right in saying that anyone with a C extension should be able to build >>> them against numpy, by installing the free (as-in-beer) MS tools? So >>> do you just mean people needing a Fortran compiler? That's a small >>> constituency, I think. >> >> Off the top of my head there's SciPy and pymc... >> >> Anyway, I'm butting in because I wish this discussion could separate >> between the user perspective and the developer perspective. >> >> FWIW, >> >> 1) From a user's perspective, I don't understand this either. If you are >> already using a closed source, not-free-as-in-beer operating system, why >> would you not use (or buy!) a closed source, not-free-as-in-beer Fortran >> compiler? > > Indeed. Though I really have no clue on the Windows use cases. Maybe > most Windows users don't want to compile anything, just > use numpy and scipy from Python? Well - yes - as a packager I really want to be able to provide a binary so my binary consumers don't have to have a C compiler installed. I imagine it's the same for all of us packagers out there. >> 2) BUT, the argument I've seen that I can at least understand is that >> the release manager should be able to do a release using only open >> source tools (even using Wine instead of Windows) and not rely on a >> limited number of licenses. And that the release manager must be able to >> perform all the official builds directly. > > As the release manager, I really only have two requirements: > > * I want to ssh in there from my Ubuntu > * I want to automate the whole process > > For Mac, linux and Wine I can do that. 
So I have just spend few hours > browsing the net and it looks like that the combination of Windows > PowerShell 2.0: > > http://en.wikipedia.org/wiki/Windows_PowerShell > > and some SSH server, there are quite a few, one commercial but free > for one user one connection (perfect for me!): > > http://www.powershellinside.com/powershell/ssh/ > > So if I understand the pages correctly, I can login there from linux, > and then I use the PowerShell commands to script anything. It looks > like I can even use my Fabric fabfiles with powershell: > > https://gist.github.com/diyan/2850866 > > I can also use git with PowerShell: > > http://windows.github.com/ > http://haacked.com/archive/2011/12/13/better-git-with-powershell.aspx > > > So the final problem is how to execute MSVC and Fortran from Power > Shell on Windows. These links might help for MSVC: > > http://stackoverflow.com/questions/4398136/use-powershell-for-visual-studio-command-prompt > http://geekswithblogs.net/dwdii/archive/2011/05/20/automating-a-visual-studio-build-with-powershell---part-1.aspx > > Finally, for Intel Fortran + powershell: > > http://software.intel.com/en-us/forums/topic/284425 > > > So I think it is all possible. If somebody can provide a machine with > Windows, MSVC, PowerShell2.0, SSH server and some Fortran compiler, it > should be possible for me to automate everything from Ubuntu using my > Fabric files (https://github.com/certik/numpy-vendor). Many many thanks for trying to solve this. I had really started to give up hope. I think you will need a developer's license for MKL for Numpy. Ralf - any ETA for those? I think I'm right in thinking you'll need a Fortran compiler for Scipy but not Numpy? Can we defer the Scipy build until after the Numpy build? I will try to get you set up with ssh on my Windows 7 machine in case you can use it. It has the MS tools. Thanks again, Matthew From ondrej.certik at gmail.com Thu Feb 7 01:59:48 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Wed, 6 Feb 2013 22:59:48 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: On Wed, Feb 6, 2013 at 10:41 PM, Matthew Brett wrote: > Hi, > > On Wed, Feb 6, 2013 at 10:21 PM, Ond?ej ?ert?k wrote: >> On Wed, Feb 6, 2013 at 9:20 PM, Dag Sverre Seljebotn >> wrote: >>> On 02/07/2013 12:16 AM, Matthew Brett wrote: >> [...] >>>> Can you clarify the people you think will get stuck? I think I'm >>>> right in saying that anyone with a C extension should be able to build >>>> them against numpy, by installing the free (as-in-beer) MS tools? So >>>> do you just mean people needing a Fortran compiler? That's a small >>>> constituency, I think. >>> >>> Off the top of my head there's SciPy and pymc... >>> >>> Anyway, I'm butting in because I wish this discussion could separate >>> between the user perspective and the developer perspective. >>> >>> FWIW, >>> >>> 1) From a user's perspective, I don't understand this either. If you are >>> already using a closed source, not-free-as-in-beer operating system, why >>> would you not use (or buy!) a closed source, not-free-as-in-beer Fortran >>> compiler? >> >> Indeed. Though I really have no clue on the Windows use cases. Maybe >> most Windows users don't want to compile anything, just >> use numpy and scipy from Python? 
> > Well - yes - as a packager I really want to be able to provide a > binary so my binary consumers don't have to have a C compiler > installed. I imagine it's the same for all of us packagers out > there. > >>> 2) BUT, the argument I've seen that I can at least understand is that >>> the release manager should be able to do a release using only open >>> source tools (even using Wine instead of Windows) and not rely on a >>> limited number of licenses. And that the release manager must be able to >>> perform all the official builds directly. >> >> As the release manager, I really only have two requirements: >> >> * I want to ssh in there from my Ubuntu >> * I want to automate the whole process >> >> For Mac, linux and Wine I can do that. So I have just spend few hours >> browsing the net and it looks like that the combination of Windows >> PowerShell 2.0: >> >> http://en.wikipedia.org/wiki/Windows_PowerShell >> >> and some SSH server, there are quite a few, one commercial but free >> for one user one connection (perfect for me!): >> >> http://www.powershellinside.com/powershell/ssh/ >> >> So if I understand the pages correctly, I can login there from linux, >> and then I use the PowerShell commands to script anything. It looks >> like I can even use my Fabric fabfiles with powershell: >> >> https://gist.github.com/diyan/2850866 >> >> I can also use git with PowerShell: >> >> http://windows.github.com/ >> http://haacked.com/archive/2011/12/13/better-git-with-powershell.aspx >> >> >> So the final problem is how to execute MSVC and Fortran from Power >> Shell on Windows. These links might help for MSVC: >> >> http://stackoverflow.com/questions/4398136/use-powershell-for-visual-studio-command-prompt >> http://geekswithblogs.net/dwdii/archive/2011/05/20/automating-a-visual-studio-build-with-powershell---part-1.aspx >> >> Finally, for Intel Fortran + powershell: >> >> http://software.intel.com/en-us/forums/topic/284425 >> >> >> So I think it is all possible. If somebody can provide a machine with >> Windows, MSVC, PowerShell2.0, SSH server and some Fortran compiler, it >> should be possible for me to automate everything from Ubuntu using my >> Fabric files (https://github.com/certik/numpy-vendor). > > Many many thanks for trying to solve this. I had really started to > give up hope. > > I think you will need a developer's license for MKL for Numpy. Ralf - > any ETA for those? > > I think I'm right in thinking you'll need a Fortran compiler for Scipy > but not Numpy? Can we defer the Scipy build until after the Numpy > build? > > I will try to get you set up with ssh on my Windows 7 machine in case > you can use it. It has the MS tools. That would be amazing! If you can set me up with the Power Shell and some ssh server, I'll start playing with this right away. Ondrej From matthew.brett at gmail.com Thu Feb 7 04:44:24 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 7 Feb 2013 01:44:24 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: Hi, On Wed, Feb 6, 2013 at 10:59 PM, Ond?ej ?ert?k wrote: > On Wed, Feb 6, 2013 at 10:41 PM, Matthew Brett wrote: >> Hi, >> >> On Wed, Feb 6, 2013 at 10:21 PM, Ond?ej ?ert?k wrote: >>> On Wed, Feb 6, 2013 at 9:20 PM, Dag Sverre Seljebotn >>> wrote: >>>> On 02/07/2013 12:16 AM, Matthew Brett wrote: >>> [...] >>>>> Can you clarify the people you think will get stuck? 
I think I'm >>>>> right in saying that anyone with a C extension should be able to build >>>>> them against numpy, by installing the free (as-in-beer) MS tools? So >>>>> do you just mean people needing a Fortran compiler? That's a small >>>>> constituency, I think. >>>> >>>> Off the top of my head there's SciPy and pymc... >>>> >>>> Anyway, I'm butting in because I wish this discussion could separate >>>> between the user perspective and the developer perspective. >>>> >>>> FWIW, >>>> >>>> 1) From a user's perspective, I don't understand this either. If you are >>>> already using a closed source, not-free-as-in-beer operating system, why >>>> would you not use (or buy!) a closed source, not-free-as-in-beer Fortran >>>> compiler? >>> >>> Indeed. Though I really have no clue on the Windows use cases. Maybe >>> most Windows users don't want to compile anything, just >>> use numpy and scipy from Python? >> >> Well - yes - as a packager I really want to be able to provide a >> binary so my binary consumers don't have to have a C compiler >> installed. I imagine it's the same for all of us packagers out >> there. >> >>>> 2) BUT, the argument I've seen that I can at least understand is that >>>> the release manager should be able to do a release using only open >>>> source tools (even using Wine instead of Windows) and not rely on a >>>> limited number of licenses. And that the release manager must be able to >>>> perform all the official builds directly. >>> >>> As the release manager, I really only have two requirements: >>> >>> * I want to ssh in there from my Ubuntu >>> * I want to automate the whole process >>> >>> For Mac, linux and Wine I can do that. So I have just spend few hours >>> browsing the net and it looks like that the combination of Windows >>> PowerShell 2.0: >>> >>> http://en.wikipedia.org/wiki/Windows_PowerShell >>> >>> and some SSH server, there are quite a few, one commercial but free >>> for one user one connection (perfect for me!): >>> >>> http://www.powershellinside.com/powershell/ssh/ >>> >>> So if I understand the pages correctly, I can login there from linux, >>> and then I use the PowerShell commands to script anything. It looks >>> like I can even use my Fabric fabfiles with powershell: >>> >>> https://gist.github.com/diyan/2850866 >>> >>> I can also use git with PowerShell: >>> >>> http://windows.github.com/ >>> http://haacked.com/archive/2011/12/13/better-git-with-powershell.aspx >>> >>> >>> So the final problem is how to execute MSVC and Fortran from Power >>> Shell on Windows. These links might help for MSVC: >>> >>> http://stackoverflow.com/questions/4398136/use-powershell-for-visual-studio-command-prompt >>> http://geekswithblogs.net/dwdii/archive/2011/05/20/automating-a-visual-studio-build-with-powershell---part-1.aspx >>> >>> Finally, for Intel Fortran + powershell: >>> >>> http://software.intel.com/en-us/forums/topic/284425 >>> >>> >>> So I think it is all possible. If somebody can provide a machine with >>> Windows, MSVC, PowerShell2.0, SSH server and some Fortran compiler, it >>> should be possible for me to automate everything from Ubuntu using my >>> Fabric files (https://github.com/certik/numpy-vendor). >> >> Many many thanks for trying to solve this. I had really started to >> give up hope. >> >> I think you will need a developer's license for MKL for Numpy. Ralf - >> any ETA for those? >> >> I think I'm right in thinking you'll need a Fortran compiler for Scipy >> but not Numpy? Can we defer the Scipy build until after the Numpy >> build? 
>> >> I will try to get you set up with ssh on my Windows 7 machine in case >> you can use it. It has the MS tools. > > That would be amazing! If you can set me up with the Power Shell > and some ssh server, I'll start playing with this right away. I've set up a Cygwin SSH server on the box, and powershell 2 comes with windows 7, I believe. At least, that's the version I'm getting. However, it's hard to run powershell scripts interactively via cygwin : http://hivearchive.com/2006/07/03/using-powershell-through-ssh/ http://cygwin.com/ml/cygwin/2008-10/msg00393.html so you might need to debug the scripts interactively via remote desktop protocol and then run them non-interactively. Could you send me your ssh public key off list or give me a call to get set up? Thanks again, Matthew From cgohlke at uci.edu Thu Feb 7 04:51:38 2013 From: cgohlke at uci.edu (Christoph Gohlke) Date: Thu, 07 Feb 2013 01:51:38 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> Message-ID: <5113792A.1050303@uci.edu> On 2/6/2013 10:35 PM, Ond?ej ?ert?k wrote: > Christoph, > > On Tue, Feb 5, 2013 at 3:04 PM, Christoph Gohlke wrote: > [...] >>> In order not to leave this discussion without a resolution: >>> >>> Christophe - would you allow us to distribute your numpy binaries for >>> 1.7 from the numpy sourceforge page? >>> >>> Cheers, >>> >>> Matthew >> >> >> I am OK with providing 64 bit "numpy-MKL" binaries (that is numpy >> compiled with MSVC compilers and linked to Intel's MKL) for official >> numpy releases. >> >> However: >> >> 1) There seems to be no real consensus and urge for doing this. Using a >> free toolchain capable of building the whole scipy-stack would be much >> preferred. Several 64 bit Python distributions containing numpy-MKL are >> already available, some for free. >> >> 2) Releasing 64 bit numpy without matching scipy binaries would make >> little sense to me. >> >> 3) Please do not just redistribute the binaries from my website and >> declare them official. They might contain unreleased fixes from git >> master and pull requests that are needed for my work and other packages. >> >> 4) Numpy-MKL requires the Intel runtime DLLs (MKL is linked statically >> btw). I ship those with the installers and append the directory >> containing the DLLs to os.environ['PATH'] in numpy/__init__.py. This is >> a big no-no according to numpy developers. I don't agree. Anyway, those >> changes are not in the numpy source repositories. >> >> 5) My numpy-MKL installers are Python distutils bdist_wininst >> installers. That means if Python was installed for all users, installing >> numpy-MKL on Windows >6.0 will prompt for UAC elevation. Another no-no? > > I think that all these things should be possible to fix so that the > binary is acceptable > for the official NumPy binary. > > How exactly do you build the binaries? I wasn't able to find the info at: > > http://www.lfd.uci.edu/~gohlke/pythonlibs/ > > Do you have some scripts to do that? Do you use PowerShell? Or you do > it by hand by mouse and clicks in Visual Studio somehow? If I can > figure out how to do these builds, I'll be happy to figure out how to > automate it and then we can try to figure out a solution that works > for NumPy. > > Ondrej My development/build environment is listed at . Not that it helps much... 
Assuming that Windows 7|8 Pro 64 bit, Visual Studio 2008 Pro SP1 (with 64 bit compiler option), Visual Studio 2010 Pro, Intel Composer XE 2013, 64 bit CPython 2.6, 2.7, 3.2 and 3.3 are installed, the following batch script (no need for PowerShell or an IDE) should build 64 bit numpy-MKL installers when run from within the numpy source directory. I do not really use this script but the "secrets" are there. It can be extended for building eggs and MSIs, 32 bit, and automated testing. Probably not all the libraries listed in site.cfg are needed but this works for me also with scipy and other packages.

@echo off
setlocal
set ICDIR=C:/Program Files (x86)/Intel/Composer XE

rem Work around a bug in numpy distutils. Requires admin privileges
fsutil hardlink create "%ICDIR%/mkl/lib/intel64/libiomp5md.lib" "%ICDIR%/compiler/lib/intel64/libiomp5md.lib"
fsutil hardlink create "%ICDIR%/mkl/lib/intel64/libifportmd.lib" "%ICDIR%/compiler/lib/intel64/libifportmd.lib"

rem Create site.cfg for static linking to 64 bit MKL
echo [mkl] > site.cfg
echo include_dirs = %ICDIR%/mkl/include >> site.cfg
echo library_dirs = %ICDIR%/mkl/lib/intel64;%ICDIR%/compiler/lib/intel64 >> site.cfg
echo mkl_libs = mkl_lapack95_lp64,mkl_blas95_lp64,mkl_intel_lp64,mkl_intel_thread,mkl_core,libiomp5md,libifportmd >> site.cfg
echo lapack_libs = mkl_lapack95_lp64,mkl_blas95_lp64,mkl_intel_lp64,mkl_intel_thread,mkl_core,libiomp5md,libifportmd >> site.cfg

rem Build installers using distutils
rd /q /s build
call C:\Python26\python.exe setup.py bdist_wininst --user-access-control=auto
rd /q /s build
call C:\Python27\python.exe setup.py bdist_wininst --user-access-control=auto
rd /q /s build
call C:\Python32\python.exe setup.py bdist_wininst --user-access-control=auto
copy /Y build\py3k\dist\*.exe dist\
rd /q /s build
call C:\Python33\python.exe setup.py bdist_wininst --user-access-control=auto
copy /Y build\py3k\dist\*.exe dist\
rd /q /s build
endlocal

--
Christoph

From nouiz at nouiz.org Thu Feb 7 11:36:13 2013
From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=)
Date: Thu, 7 Feb 2013 11:36:13 -0500
Subject: [Numpy-discussion] ANN: NumPy 1.7.0rc2 release
In-Reply-To:
References:
Message-ID:

Hi,

As expected all Theano's tests passed.

thanks

Fred

On Wed, Feb 6, 2013 at 10:10 PM, Ondřej Čertík wrote:
> Hi,
>
> I'm pleased to announce the availability of the second release candidate of
> NumPy 1.7.0rc2.
>
> Sources and binary installers can be found at
> https://sourceforge.net/projects/numpy/files/NumPy/1.7.0rc2/
>
> We have fixed all issues known to us since the 1.7.0rc1 release.
>
> Please test this release and report any issues on the numpy-discussion
> mailing list. If there are no further problems, I plan to release the final
> version in a few days.
>
> I would like to thank Sandro Tosi, Sebastian Berg, Charles Harris,
> Marcin Juszkiewicz, Mark Wiebe, Ralf Gommers and Nathaniel J. Smith
> for sending patches, fixes and helping with reviews for this release since
> 1.7.0rc1, and Vincent Davis for providing the Mac build machine.
>
> Cheers,
> Ondrej
>
> =========================
> NumPy 1.7.0 Release Notes
> =========================
>
> This release includes several new features as well as numerous bug fixes and
> refactorings. It supports Python 2.4 - 2.7 and 3.1 - 3.3 and is the last
> release that supports Python 2.4 - 2.5.
> > Highlights > ========== > > * ``where=`` parameter to ufuncs (allows the use of boolean arrays to > choose > where a computation should be done) > * ``vectorize`` improvements (added 'excluded' and 'cache' keyword, general > cleanup and bug fixes) > * ``numpy.random.choice`` (random sample generating function) > > > Compatibility notes > =================== > > In a future version of numpy, the functions np.diag, np.diagonal, and the > diagonal method of ndarrays will return a view onto the original array, > instead of producing a copy as they do now. This makes a difference if you > write to the array returned by any of these functions. To facilitate this > transition, numpy 1.7 produces a FutureWarning if it detects that you may > be attempting to write to such an array. See the documentation for > np.diagonal for details. > > Similar to np.diagonal above, in a future version of numpy, indexing a > record array by a list of field names will return a view onto the original > array, instead of producing a copy as they do now. As with np.diagonal, > numpy 1.7 produces a FutureWarning if it detects that you may be attempting > to write to such an array. See the documentation for array indexing for > details. > > In a future version of numpy, the default casting rule for UFunc out= > parameters will be changed from 'unsafe' to 'same_kind'. (This also applies > to in-place operations like a += b, which is equivalent to np.add(a, b, > out=a).) Most usages which violate the 'same_kind' rule are likely bugs, so > this change may expose previously undetected errors in projects that depend > on NumPy. In this version of numpy, such usages will continue to succeed, > but will raise a DeprecationWarning. > > Full-array boolean indexing has been optimized to use a different, > optimized code path. This code path should produce the same results, > but any feedback about changes to your code would be appreciated. > > Attempting to write to a read-only array (one with ``arr.flags.writeable`` > set to ``False``) used to raise either a RuntimeError, ValueError, or > TypeError inconsistently, depending on which code path was taken. It now > consistently raises a ValueError. > > The .reduce functions evaluate some reductions in a different order > than in previous versions of NumPy, generally providing higher performance. > Because of the nature of floating-point arithmetic, this may subtly change > some results, just as linking NumPy to a different BLAS implementations > such as MKL can. > > If upgrading from 1.5, then generally in 1.6 and 1.7 there have been > substantial code added and some code paths altered, particularly in the > areas of type resolution and buffered iteration over universal functions. > This might have an impact on your code particularly if you relied on > accidental behavior in the past. > > New features > ============ > > Reduction UFuncs Generalize axis= Parameter > ------------------------------------------- > > Any ufunc.reduce function call, as well as other reductions like sum, prod, > any, all, max and min support the ability to choose a subset of the axes to > reduce over. Previously, one could say axis=None to mean all the axes or > axis=# to pick a single axis. Now, one can also say axis=(#,#) to pick a > list of axes for reduction. > > Reduction UFuncs New keepdims= Parameter > ---------------------------------------- > > There is a new keepdims= parameter, which if set to True, doesn't throw > away the reduction axes but instead sets them to have size one. 
When this > option is set, the reduction result will broadcast correctly to the > original operand which was reduced. > > Datetime support > ---------------- > > .. note:: The datetime API is *experimental* in 1.7.0, and may undergo > changes > in future versions of NumPy. > > There have been a lot of fixes and enhancements to datetime64 compared > to NumPy 1.6: > > * the parser is quite strict about only accepting ISO 8601 dates, with a > few > convenience extensions > * converts between units correctly > * datetime arithmetic works correctly > * business day functionality (allows the datetime to be used in contexts > where > only certain days of the week are valid) > > The notes in `doc/source/reference/arrays.datetime.rst > < > https://github.com/numpy/numpy/blob/maintenance/1.7.x/doc/source/reference/arrays.datetime.rst > >`_ > (also available in the online docs at `arrays.datetime.html > `_) > should be > consulted for more details. > > Custom formatter for printing arrays > ------------------------------------ > > See the new ``formatter`` parameter of the ``numpy.set_printoptions`` > function. > > New function numpy.random.choice > --------------------------------- > > A generic sampling function has been added which will generate samples from > a given array-like. The samples can be with or without replacement, and > with uniform or given non-uniform probabilities. > > New function isclose > -------------------- > > Returns a boolean array where two arrays are element-wise equal within a > tolerance. Both relative and absolute tolerance can be specified. > > Preliminary multi-dimensional support in the polynomial package > --------------------------------------------------------------- > > Axis keywords have been added to the integration and differentiation > functions and a tensor keyword was added to the evaluation functions. > These additions allow multi-dimensional coefficient arrays to be used in > those functions. New functions for evaluating 2-D and 3-D coefficient > arrays on grids or sets of points were added together with 2-D and 3-D > pseudo-Vandermonde matrices that can be used for fitting. > > > Ability to pad rank-n arrays > ---------------------------- > > A pad module containing functions for padding n-dimensional arrays has been > added. The various private padding functions are exposed as options to a > public 'pad' function. Example:: > > pad(a, 5, mode='mean') > > Current modes are ``constant``, ``edge``, ``linear_ramp``, ``maximum``, > ``mean``, ``median``, ``minimum``, ``reflect``, ``symmetric``, ``wrap``, > and > ````. > > > New argument to searchsorted > ---------------------------- > > The function searchsorted now accepts a 'sorter' argument that is a > permutation array that sorts the array to search. > > Build system > ------------ > > Added experimental support for the AArch64 architecture. > > C API > ----- > > New function ``PyArray_RequireWriteable`` provides a consistent interface > for checking array writeability -- any C code which works with arrays whose > WRITEABLE flag is not known to be True a priori, should make sure to call > this function before writing. > > NumPy C Style Guide added (``doc/C_STYLE_GUIDE.rst.txt``). > > Changes > ======= > > General > ------- > > The function np.concatenate tries to match the layout of its input arrays. > Previously, the layout did not follow any particular reason, and depended > in an undesirable way on the particular axis chosen for concatenation. 
A > bug was also fixed which silently allowed out of bounds axis arguments. > > The ufuncs logical_or, logical_and, and logical_not now follow Python's > behavior with object arrays, instead of trying to call methods on the > objects. For example the expression (3 and 'test') produces the string > 'test', and now np.logical_and(np.array(3, 'O'), np.array('test', 'O')) > produces 'test' as well. > > The ``.base`` attribute on ndarrays, which is used on views to ensure that > the > underlying array owning the memory is not deallocated prematurely, now > collapses out references when you have a view-of-a-view. For example:: > > a = np.arange(10) > b = a[1:] > c = b[1:] > > In numpy 1.6, ``c.base`` is ``b``, and ``c.base.base`` is ``a``. In numpy > 1.7, > ``c.base`` is ``a``. > > To increase backwards compatibility for software which relies on the old > behaviour of ``.base``, we only 'skip over' objects which have exactly the > same > type as the newly created view. This makes a difference if you use > ``ndarray`` > subclasses. For example, if we have a mix of ``ndarray`` and ``matrix`` > objects > which are all views on the same original ``ndarray``:: > > a = np.arange(10) > b = np.asmatrix(a) > c = b[0, 1:] > d = c[0, 1:] > > then ``d.base`` will be ``b``. This is because ``d`` is a ``matrix`` > object, > and so the collapsing process only continues so long as it encounters other > ``matrix`` objects. It considers ``c``, ``b``, and ``a`` in that order, and > ``b`` is the last entry in that list which is a ``matrix`` object. > > Casting Rules > ------------- > > Casting rules have undergone some changes in corner cases, due to the > NA-related work. In particular for combinations of scalar+scalar: > > * the `longlong` type (`q`) now stays `longlong` for operations with any > other > number (`? b h i l q p B H I`), previously it was cast as `int_` (`l`). > The > `ulonglong` type (`Q`) now stays as `ulonglong` instead of `uint` (`L`). > > * the `timedelta64` type (`m`) can now be mixed with any integer type (`b > h i l > q p B H I L Q P`), previously it raised `TypeError`. > > For array + scalar, the above rules just broadcast except the case when > the array and scalars are unsigned/signed integers, then the result gets > converted to the array type (of possibly larger size) as illustrated by the > following examples:: > > >>> (np.zeros((2,), dtype=np.uint8) + np.int16(257)).dtype > dtype('uint16') > >>> (np.zeros((2,), dtype=np.int8) + np.uint16(257)).dtype > dtype('int16') > >>> (np.zeros((2,), dtype=np.int16) + np.uint32(2**17)).dtype > dtype('int32') > > Whether the size gets increased depends on the size of the scalar, for > example:: > > >>> (np.zeros((2,), dtype=np.uint8) + np.int16(255)).dtype > dtype('uint8') > >>> (np.zeros((2,), dtype=np.uint8) + np.int16(256)).dtype > dtype('uint16') > > Also a ``complex128`` scalar + ``float32`` array is cast to ``complex64``. > > In NumPy 1.7 the `datetime64` type (`M`) must be constructed by explicitly > specifying the type as the second argument (e.g. ``np.datetime64(2000, > 'Y')``). > > > Deprecations > ============ > > General > ------- > > Specifying a custom string formatter with a `_format` array attribute is > deprecated. The new `formatter` keyword in ``numpy.set_printoptions`` or > ``numpy.array2string`` can be used instead. > > The deprecated imports in the polynomial package have been removed. > > ``concatenate`` now raises DepractionWarning for 1D arrays if ``axis != > 0``. 
> Versions of numpy < 1.7.0 ignored axis argument value for 1D arrays. We > allow this for now, but in due course we will raise an error. > > C-API > ----- > > Direct access to the fields of PyArrayObject* has been deprecated. Direct > access has been recommended against for many releases. Expect similar > deprecations for PyArray_Descr* and other core objects in the future as > preparation for NumPy 2.0. > > The macros in old_defines.h are deprecated and will be removed in the next > major release (>= 2.0). The sed script tools/replace_old_macros.sed can be > used to replace these macros with the newer versions. > > You can test your code against the deprecated C API by #defining > NPY_NO_DEPRECATED_API to the target version number, for example > NPY_1_7_API_VERSION, before including any NumPy headers. > > The ``NPY_CHAR`` member of the ``NPY_TYPES`` enum is deprecated and will be > removed in NumPy 1.8. See the discussion at > `gh-2801 `_ for more details. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Feb 7 13:47:19 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 7 Feb 2013 19:47:19 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: On Thu, Feb 7, 2013 at 7:41 AM, Matthew Brett wrote: > > I think you will need a developer's license for MKL for Numpy. Ralf - > any ETA for those? > No, I'll have to ask again. > I think I'm right in thinking you'll need a Fortran compiler for Scipy > but not Numpy? Correct. > Can we defer the Scipy build until after the Numpy build? > That doesn't sound like a good idea to me. > > I will try to get you set up with ssh on my Windows 7 machine in case > you can use it. It has the MS tools. > As pointed out before by Robert, sharing the machine between you requires multiple licenses. Intel has promised us some licenses, but those are given to specific people. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Feb 7 13:52:03 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 7 Feb 2013 10:52:03 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: Hi, On Thu, Feb 7, 2013 at 10:47 AM, Ralf Gommers wrote: > > > > On Thu, Feb 7, 2013 at 7:41 AM, Matthew Brett > wrote: >> >> >> I think you will need a developer's license for MKL for Numpy. Ralf - >> any ETA for those? > > > No, I'll have to ask again. > >> >> I think I'm right in thinking you'll need a Fortran compiler for Scipy >> but not Numpy? > > > Correct. > >> >> Can we defer the Scipy build until after the Numpy build? > > > That doesn't sound like a good idea to me. > >> I will try to get you set up with ssh on my Windows 7 machine in case >> you can use it. It has the MS tools. > > > As pointed out before by Robert, sharing the machine between you requires > multiple licenses. Intel has promised us some licenses, but those are given > to specific people. Right. Ondrej has his own account on my machine, and it will be Ondrej's license, when it is available. 
Cheers, Matthew From matthew.brett at gmail.com Thu Feb 7 13:59:13 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 7 Feb 2013 10:59:13 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: Hi, On Thu, Feb 7, 2013 at 10:47 AM, Ralf Gommers wrote: > > > > On Thu, Feb 7, 2013 at 7:41 AM, Matthew Brett > wrote: >> >> >> I think you will need a developer's license for MKL for Numpy. Ralf - >> any ETA for those? > > > No, I'll have to ask again. > >> >> I think I'm right in thinking you'll need a Fortran compiler for Scipy >> but not Numpy? > > > Correct. > >> >> Can we defer the Scipy build until after the Numpy build? > > > That doesn't sound like a good idea to me. I must say I'm a little confused as to how we're going to make the decisions here. I'm sure you agree that there's an opposite argument to be made, and I would make it if I thought it would make a difference, but I'm losing faith in my ability to keep the discussion on track, and I don't know what to do about that. Cheers, Matthew From ondrej.certik at gmail.com Thu Feb 7 14:04:41 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Thu, 7 Feb 2013 11:04:41 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: On Thu, Feb 7, 2013 at 10:59 AM, Matthew Brett wrote: > Hi, > > On Thu, Feb 7, 2013 at 10:47 AM, Ralf Gommers wrote: >> >> >> >> On Thu, Feb 7, 2013 at 7:41 AM, Matthew Brett >> wrote: >>> >>> >>> I think you will need a developer's license for MKL for Numpy. Ralf - >>> any ETA for those? >> >> >> No, I'll have to ask again. >> >>> >>> I think I'm right in thinking you'll need a Fortran compiler for Scipy >>> but not Numpy? >> >> >> Correct. >> >>> >>> Can we defer the Scipy build until after the Numpy build? >> >> >> That doesn't sound like a good idea to me. > > I must say I'm a little confused as to how we're going to make the > decisions here. > > I'm sure you agree that there's an opposite argument to be made, and I > would make it if I thought it would make a difference, but I'm losing > faith in my ability to keep the discussion on track, and I don't know > what to do about that. Matthew I don't see any problem here. I agree with Ralf that we need to figure out how to build scipy with Fortran pretty much at the same time, to see if the solution is a viable solution. With Christoph help and experience, I am sure it can get done in a satisfactory way. Ondrej From ralf.gommers at gmail.com Thu Feb 7 14:05:15 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 7 Feb 2013 20:05:15 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: On Thu, Feb 7, 2013 at 7:59 PM, Matthew Brett wrote: > > >> Can we defer the Scipy build until after the Numpy build? > > > > > > That doesn't sound like a good idea to me. > > I must say I'm a little confused as to how we're going to make the > decisions here. > How about: attempt to reach consensus? David's concern on DLLs hasn't been addressed yet, nor has mine on packages being unavailable. I was actually still answering another of your emails, but I can't seem to reply fast enough. 
> > I'm sure you agree that there's an opposite argument to be made, and I > would make it if I thought it would make a difference, but I'm losing > faith in my ability to keep the discussion on track, and I don't know > what to do about that. > I don't see the problem. Before you offered to put in work. Ondrej is willing to help, so is Christoph. So why is it impossible to do Scipy builds? I can see us getting to a solution here, but offering Numpy installers without Scipy ones is not a solution in my book. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Thu Feb 7 14:15:04 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Thu, 7 Feb 2013 11:15:04 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: On Thu, Feb 7, 2013 at 11:05 AM, Ralf Gommers wrote: > > > > On Thu, Feb 7, 2013 at 7:59 PM, Matthew Brett > wrote: >> >> >> >> Can we defer the Scipy build until after the Numpy build? >> > >> > >> > That doesn't sound like a good idea to me. >> >> I must say I'm a little confused as to how we're going to make the >> decisions here. > > > How about: attempt to reach consensus? David's concern on DLLs hasn't been > addressed yet, nor has mine on packages being unavailable. I was actually > still answering another of your emails, but I can't seem to reply fast > enough. Yep, we will need to address those. >> >> >> I'm sure you agree that there's an opposite argument to be made, and I >> would make it if I thought it would make a difference, but I'm losing >> faith in my ability to keep the discussion on track, and I don't know >> what to do about that. > > > I don't see the problem. Before you offered to put in work. Ondrej is > willing to help, so is Christoph. So why is it impossible to do Scipy > builds? > > I can see us getting to a solution here, but offering Numpy installers > without Scipy ones is not a solution in my book. Exactly. There is no problem here. Fortran needs to be working as a first class citizen. I personally use modern Fortran a lot. I've setup this page: http://fortran90.org/ with a relevant FAQ about binary compatibility: http://fortran90.org/src/faq.html#are-fortran-compilers-abi-compatible and based on how things work on Windows, I'll be happy to extend the information there. Ondrej From ondrej.certik at gmail.com Thu Feb 7 14:22:12 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Thu, 7 Feb 2013 11:22:12 -0800 Subject: [Numpy-discussion] Modern Fortran vs NumPy syntax Message-ID: Hi, I have recently setup a page about modern Fortran: http://fortran90.org/ and in particular, it has a long section with side by side syntax examples of Python/NumPy vs Fortran: http://fortran90.org/src/rosetta.html I would be very interested if some NumPy gurus would provide me feedback. I personally knew NumPy long before I learned Fortran, and I was amazed that the modern Fortran pretty much allows 1:1 syntax with NumPy, including most of all the fancy indexing etc. Is there some NumPy feature that is not covered there? I would like it to be a nice resource for people who know NumPy to feel like at home with Fortran, and vice versa. I personally use both every day (Fortran a bit more than NumPy). Or of you have any other comments or tips for the site, please let me know. 
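To give a flavor of the kind of one-to-one correspondence the rosetta page aims at, a small made-up example (written for this note, not taken from the page) might look like the following; the NumPy side is code, with the rough modern-Fortran equivalents noted in comments:

    import numpy as np

    a = np.array([1.0, 2.0, 3.0, 4.0])  # Fortran: a = [1._dp, 2._dp, 3._dp, 4._dp]
    b = a[1:3]                          # Fortran: a(2:3)  (1-based, inclusive upper bound)
    c = a[a > 2.0]                      # Fortran: pack(a, a > 2._dp)
    s = a.sum()                         # Fortran: sum(a)
    d = np.dot(a, a)                    # Fortran: dot_product(a, a)
    m = np.where(a > 2.0, a, 0.0)       # Fortran: merge(a, 0._dp, a > 2._dp)

The slicing and masking lines are where the two languages differ most: NumPy uses 0-based, half-open ranges, while Fortran uses 1-based, inclusive ones.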
Eventually I'd like to also put there C++ way of doing the same things, but at the moment I want to get Fortran and Python/NumPy done first. Ondrej From nouiz at nouiz.org Thu Feb 7 14:31:10 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Thu, 7 Feb 2013 14:31:10 -0500 Subject: [Numpy-discussion] Modern Fortran vs NumPy syntax In-Reply-To: References: Message-ID: Hi, I just read a paper[1] that compare python with numpy or pypy vs c++ and fortran from a code, memory and speed point of view. The python code was still better as you can't have list of ndarray in fortran and some other stuff was harder to do. The fastest was fortran, then C++, but pypy around 2x slower then c++. That isn't bad for a more productive development language. Maybe you can check that article to find more case to compare. HTH Fred [1] http://arxiv.org/abs/1301.1334 On Thu, Feb 7, 2013 at 2:22 PM, Ond?ej ?ert?k wrote: > Hi, > > I have recently setup a page about modern Fortran: > > http://fortran90.org/ > > and in particular, it has a long section with side by side syntax > examples of Python/NumPy vs Fortran: > > http://fortran90.org/src/rosetta.html > > I would be very interested if some NumPy gurus would provide me > feedback. I personally knew > NumPy long before I learned Fortran, and I was amazed that the modern > Fortran pretty much > allows 1:1 syntax with NumPy, including most of all the fancy indexing etc. > > Is there some NumPy feature that is not covered there? I would like it > to be a nice resource > for people who know NumPy to feel like at home with Fortran, and vice > versa. I personally > use both every day (Fortran a bit more than NumPy). > > Or of you have any other comments or tips for the site, please let me > know. Eventually I'd like > to also put there C++ way of doing the same things, but at the moment > I want to get Fortran > and Python/NumPy done first. > > Ondrej > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Feb 7 14:38:55 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 7 Feb 2013 11:38:55 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: Hi, On Thu, Feb 7, 2013 at 11:05 AM, Ralf Gommers wrote: > > > > On Thu, Feb 7, 2013 at 7:59 PM, Matthew Brett > wrote: >> >> >> >> Can we defer the Scipy build until after the Numpy build? >> > >> > >> > That doesn't sound like a good idea to me. >> >> I must say I'm a little confused as to how we're going to make the >> decisions here. > > > How about: attempt to reach consensus? David's concern on DLLs hasn't been > addressed yet, nor has mine on packages being unavailable. I was actually > still answering another of your emails, but I can't seem to reply fast > enough. Right - consensus is good - but at the moment I keep getting lost because the arguments seem to shift and get lost, and sometimes they are not made. 
So, here is the summary as I understand it, please correct if I am wrong I think we agree that: 1) Having a binary installer for numpy Windows 64 bit is desirable 2) It is desirable to have a matching binary installer for Scipy as soon as possible 3) It is preferable to build with free tools 4) It is acceptable to use non-free tools 5) The build will need to do some run-time linking to MKL and / or mingw 6) It is preferable that the build should be fully automated 7) It is preferable that one person can build all numpy / scipy builds The points of potential disagreement are: a) If we cannot build Scipy now, it may or may not be acceptable to release numpy now. I think it is, you (Ralf) think it isn't, we haven't discussed that. It may not come up. b) It may or may not be acceptable for someone other than Ondrej to be responsible for the Windows 64-bit builds. I think it should be, if necessary, we haven't really discussed that, it may not come up. c) It may or may not be acceptable for the build to be only partially automated. Ditto. d) It may or may not be acceptable to add the DLL directory to the PATH on numpy import. David says not, Christophe disagrees, we haven't really discussed that. Is that right? Cheers, Matthew From ralf.gommers at gmail.com Thu Feb 7 14:52:26 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 7 Feb 2013 20:52:26 +0100 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: On Thu, Feb 7, 2013 at 8:38 PM, Matthew Brett wrote: > Hi, > > On Thu, Feb 7, 2013 at 11:05 AM, Ralf Gommers > wrote: > > > > > > > > On Thu, Feb 7, 2013 at 7:59 PM, Matthew Brett > > wrote: > >> > >> > >> >> Can we defer the Scipy build until after the Numpy build? > >> > > >> > > >> > That doesn't sound like a good idea to me. > >> > >> I must say I'm a little confused as to how we're going to make the > >> decisions here. > > > > > > How about: attempt to reach consensus? David's concern on DLLs hasn't > been > > addressed yet, nor has mine on packages being unavailable. I was actually > > still answering another of your emails, but I can't seem to reply fast > > enough. > > Right - consensus is good - but at the moment I keep getting lost > because the arguments seem to shift and get lost, and sometimes they > are not made. > > So, here is the summary as I understand it, please correct if I am wrong > > I think we agree that: > > 1) Having a binary installer for numpy Windows 64 bit is desirable > 2) It is desirable to have a matching binary installer for Scipy as > soon as possible > 3) It is preferable to build with free tools > 4) It is acceptable to use non-free tools > 5) The build will need to do some run-time linking to MKL and / or mingw > 6) It is preferable that the build should be fully automated > 7) It is preferable that one person can build all numpy / scipy builds > > The points of potential disagreement are: > > a) If we cannot build Scipy now, it may or may not be acceptable to > release numpy now. I think it is, you (Ralf) think it isn't, we > haven't discussed that. It may not come up. > b) It may or may not be acceptable for someone other than Ondrej to be > responsible for the Windows 64-bit builds. I think it should be, if > necessary, we haven't really discussed that, it may not come up. > c) It may or may not be acceptable for the build to be only partially > automated. Ditto. 
> d) It may or may not be acceptable to add the DLL directory to the
> PATH on numpy import. David says not, Christophe disagrees, we
> haven't really discussed that.
>
> Is that right?

Good summary, looks complete to me.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.barker at noaa.gov Thu Feb 7 15:29:45 2013
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Thu, 7 Feb 2013 12:29:45 -0800
Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7?
In-Reply-To:
References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no>
Message-ID:

On Thu, Feb 7, 2013 at 11:38 AM, Matthew Brett wrote:
> a) If we cannot build Scipy now, it may or may not be acceptable to
> release numpy now. I think it is, you (Ralf) think it isn't, we
> haven't discussed that. It may not come up.

Is anyone suggesting we hold the whole release for this? If not, then why would you say we shouldn't put out a 64 bit Windows numpy binary because we don't have a matching Scipy one? Folks that need Scipy will have to find a way to build it themselves anyway, and folks that use numpy without scipy would have a nice binary to use -- that seems like value-added to me.

> d) It may or may not be acceptable to add the DLL directory to the
> PATH on numpy import. David says not, Christophe disagrees, we
> haven't really discussed that.

I like the "load them with ctypes" approach better, but I don't know how much effort it would be to implement that.

I'm still not totally clear if there is an issue with building other extensions that rely on numpy/scipy -- can they be built with MinGW if numpy/scipy were built with MSVC (or any other compiler, for that matter)?

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From matthew.brett at gmail.com Thu Feb 7 15:50:52 2013
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 7 Feb 2013 12:50:52 -0800
Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7?
In-Reply-To:
References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no>
Message-ID:

Hi,

On Thu, Feb 7, 2013 at 12:29 PM, Chris Barker - NOAA Federal wrote:
> On Thu, Feb 7, 2013 at 11:38 AM, Matthew Brett wrote:
>> a) If we cannot build Scipy now, it may or may not be acceptable to
>> release numpy now. I think it is, you (Ralf) think it isn't, we
>> haven't discussed that. It may not come up.
>
> Is anyone suggesting we hold the whole release for this? If not, then
> why would you say we shouldn't put out a 64 bit Windows numpy binary
> because we don't have a matching Scipy one? Folks that need Scipy will
> have to find a way to build it themselves anyway, and folks that use
> numpy without scipy would have a nice binary to use -- that seems like
> value-added to me.

Right - me too - but we could hold off that question until Ondrej has had a chance to build both I guess?

>> d) It may or may not be acceptable to add the DLL directory to the
>> PATH on numpy import. David says not, Christophe disagrees, we
>> haven't really discussed that.
>
> I like the "load them with ctypes" approach better, but I don't know
> how much effort it would be to implement that.
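For readers trying to picture the difference between the two options being weighed here, a minimal sketch follows: the commented-out branch is roughly what Christoph describes doing in numpy/__init__.py (appending the DLL directory to os.environ['PATH']), the second branch is the ctypes pre-loading idea Chris mentions. The "_dlls" directory name and the DLL list are assumptions for illustration, not the layout of any actual installer:

    # Hypothetical Windows-only fragment for numpy/__init__.py; directory and
    # DLL names are illustrative only.
    import os
    import ctypes

    _dll_dir = os.path.join(os.path.dirname(__file__), "_dlls")

    # Option 1: extend the DLL search path. Simple, but it mutates the process
    # environment at import time, which is the part David objects to.
    # os.environ["PATH"] = os.environ["PATH"] + os.pathsep + _dll_dir

    # Option 2: pre-load the runtime DLLs explicitly, leaving PATH untouched;
    # extension modules loaded later then resolve against the copies already
    # in the process.
    for _name in ("libiomp5md.dll",):  # assumed Intel runtime DLL name
        try:
            ctypes.WinDLL(os.path.join(_dll_dir, _name))
        except OSError:
            pass  # fall back to whatever the system PATH provides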
> > I'm still not totally clear if there is an issue with buildings other > extensions tht rely on numpy/scipy -- can they be built with mingGW if > numpy/scipy were built with MSVC (Or any other compiler for that > matter). I had the impression that the problem with Mingw64 was the need to load Mingw DLLs at run time - if that's the same for MSVC / MKL, maybe there's no advantage to using the MS tools? But I don't understand the issues very well. Cheers, Matthew From cournape at gmail.com Thu Feb 7 16:05:22 2013 From: cournape at gmail.com (David Cournapeau) Date: Thu, 7 Feb 2013 21:05:22 +0000 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: On Thu, Feb 7, 2013 at 7:38 PM, Matthew Brett wrote: > Hi, > > On Thu, Feb 7, 2013 at 11:05 AM, Ralf Gommers wrote: >> >> >> >> On Thu, Feb 7, 2013 at 7:59 PM, Matthew Brett >> wrote: >>> >>> >>> >> Can we defer the Scipy build until after the Numpy build? >>> > >>> > >>> > That doesn't sound like a good idea to me. >>> >>> I must say I'm a little confused as to how we're going to make the >>> decisions here. >> >> >> How about: attempt to reach consensus? David's concern on DLLs hasn't been >> addressed yet, nor has mine on packages being unavailable. I was actually >> still answering another of your emails, but I can't seem to reply fast >> enough. > > Right - consensus is good - but at the moment I keep getting lost > because the arguments seem to shift and get lost, and sometimes they > are not made. > > So, here is the summary as I understand it, please correct if I am wrong > > I think we agree that: > > 1) Having a binary installer for numpy Windows 64 bit is desirable > 2) It is desirable to have a matching binary installer for Scipy as > soon as possible > 3) It is preferable to build with free tools > 4) It is acceptable to use non-free tools > 5) The build will need to do some run-time linking to MKL and / or mingw > 6) It is preferable that the build should be fully automated > 7) It is preferable that one person can build all numpy / scipy builds > > The points of potential disagreement are: > > a) If we cannot build Scipy now, it may or may not be acceptable to > release numpy now. I think it is, you (Ralf) think it isn't, we > haven't discussed that. It may not come up. > b) It may or may not be acceptable for someone other than Ondrej to be > responsible for the Windows 64-bit builds. I think it should be, if > necessary, we haven't really discussed that, it may not come up. > c) It may or may not be acceptable for the build to be only partially > automated. Ditto. > d) It may or may not be acceptable to add the DLL directory to the > PATH on numpy import. David says not, Christophe disagrees, we > haven't really discussed that. I assumed this was obvious, but looks like it isn't: modifying the process user environment when importing a python package is quite user hostile. os.environ is a global variable, and it could cause quite hard to diagnose issues when something goes wrong. I would see a library doing this kind of things, especially at import time, quite badly. David From ondrej.certik at gmail.com Thu Feb 7 18:18:05 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Thu, 7 Feb 2013 15:18:05 -0800 Subject: [Numpy-discussion] Any plans for windows 64-bit installer for 1.7? 
In-Reply-To: References: <51119007.6090806@uci.edu> <5113399F.3090803@astro.uio.no> Message-ID: On Thu, Feb 7, 2013 at 12:29 PM, Chris Barker - NOAA Federal wrote: > On Thu, Feb 7, 2013 at 11:38 AM, Matthew Brett wrote: >> a) If we cannot build Scipy now, it may or may not be acceptable to >> release numpy now. I think it is, you (Ralf) think it isn't, we >> haven't discussed that. It may not come up. > > Is anyone suggesting we hold the whole release for this? I fnot, then Just to make it clear, I do not plan to hold the whole release because of this. Previous releases also didn't have this official 64bit Windows binary, so there is no regression. Once we figure out how to create 64bit binaries, then we'll start uploading them. Ondrej From cgohlke at uci.edu Thu Feb 7 18:38:33 2013 From: cgohlke at uci.edu (Christoph Gohlke) Date: Thu, 07 Feb 2013 15:38:33 -0800 Subject: [Numpy-discussion] ANN: NumPy 1.7.0rc2 release In-Reply-To: References: Message-ID: <51143AF9.1030500@uci.edu> On 2/6/2013 7:10 PM, Ond?ej ?ert?k wrote: > Hi, > > I'm pleased to announce the availability of the second release candidate of > NumPy 1.7.0rc2. > > Sources and binary installers can be found at > https://sourceforge.net/projects/numpy/files/NumPy/1.7.0rc2/ > > We have fixed all issues known to us since the 1.7.0rc1 release. > > Please test this release and report any issues on the numpy-discussion > mailing list. If there are no further problems, I plan to release the final > version in a few days. > > I would like to thank Sandro Tosi, Sebastian Berg, Charles Harris, > Marcin Juszkiewicz, Mark Wiebe, Ralf Gommers and Nathaniel J. Smith > for sending patches, fixes and helping with reviews for this release since > 1.7.0rc1, and Vincent Davis for providing the Mac build machine. > > Cheers, > Ondrej > Thanks. It works well on win-amd64-py2.7 . Christoph From ondrej.certik at gmail.com Thu Feb 7 18:57:10 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Thu, 7 Feb 2013 15:57:10 -0800 Subject: [Numpy-discussion] Modern Fortran vs NumPy syntax In-Reply-To: References: Message-ID: Fr?d?ric, On Thu, Feb 7, 2013 at 11:31 AM, Fr?d?ric Bastien wrote: > Hi, > > I just read a paper[1] that compare python with numpy or pypy vs c++ and > fortran from a code, memory and speed point of view. The python code was > still better as you can't have list of ndarray in fortran and some other > stuff was harder to do. The fastest was fortran, then C++, but pypy around > 2x slower then c++. That isn't bad for a more productive development > language. > > Maybe you can check that article to find more case to compare. Yes, I know about this article --- I've been in touch with Sylwester about it, as I found his code on github a few months ago, so we discussed it. I also CCed him if he wants to add some comments. The article is well balanced. To my taste, they use way too much OOP in the Fortran version, in fact I am bit surprised that Fortran was still the fastest, even with the OOP. But Sylwester was interested in comparing OOP approaches, so that's fair. If I have time (which I don't see likely soon, but who knows), I will see if I can write a simple direct non-OOP version of their Fortran code: https://github.com/slayoo/mpdata Possibly just by understanding the original reference [2] and see what datastructures/arrays I would use to implement it. In general however, I like their approach, that they took a real world method and not some artificial benchmark. So it's a very good contribution. 
Ondrej [2] Smolarkiewicz, P. K. (1984). A Fully Multidimensional Positive Definite Advection Transport Algorithm with Small Implicit Diffusion. Journal of Computational Physics, 54(2), 325?362. From ondrej.certik at gmail.com Fri Feb 8 00:26:15 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Thu, 7 Feb 2013 21:26:15 -0800 Subject: [Numpy-discussion] ANN: pandas 0.10.1 is released In-Reply-To: References: Message-ID: Hi Wes, On Tue, Jan 22, 2013 at 8:32 AM, Wes McKinney wrote: > hi all, > > We've released pandas 0.10.1 which includes many bug fixes from > 0.10.0 (including a number of issues with the new file parser, > e.g. reading multiple files in separate threads), various > performance improvements, and major new PyTables/HDF5-based > functionality contributed by Jeff Reback. I strongly recommend > that all users upgrade. > > Thanks to all who contributed to this release, especially Chang > She, Jeff Reback, and Yoval P. > > As always source archives and Windows installers are on PyPI. > > What's new: http://pandas.pydata.org/pandas-docs/stable/whatsnew.html > Installers: http://pypi.python.org/pypi/pandas > > $ git log v0.10.0..v0.10.1 --pretty=format:%aN | sort | uniq -c | sort -rn > 66 jreback > 59 Wes McKinney > 43 Chang She > 12 y-p > 5 Vincent Arel-Bundock > 4 Damien Garaud > 3 Christopher Whelan > 3 Andy Hayden > 2 Jay Parlar > 2 Dan Allan > 1 Thouis (Ray) Jones > 1 svaksha > 1 herrfz > 1 Garrett Drapala > 1 elpres > 1 Dieter Vandenbussche > 1 Anton I. Sipos In case you were interested, you can get this easily by just: $ git shortlog -ns v0.10.0..v0.10.1 66 jreback 59 Wes McKinney 43 Chang She 12 y-p 5 Vincent Arel-Bundock 4 Damien Garaud 3 Andy Hayden 3 Christopher Whelan 2 Dan Allan 2 Jay Parlar 1 Anton I. Sipos 1 Dieter Vandenbussche 1 Garrett Drapala 1 Thouis (Ray) Jones 1 elpres 1 herrfz 1 svaksha Ondrej From klonuo at gmail.com Fri Feb 8 01:19:57 2013 From: klonuo at gmail.com (klo uo) Date: Fri, 8 Feb 2013 07:19:57 +0100 Subject: [Numpy-discussion] Modern Fortran vs NumPy syntax In-Reply-To: References: Message-ID: Thanks for providing this. Reference is excellent, especially as I was collecting Fortran and f2py resources, some month ago, and I found nothing similar to answers you expose. Side by side syntax is just great and intuitive And rest is... Thanks On Thu, Feb 7, 2013 at 8:22 PM, Ond?ej ?ert?k wrote: > Hi, > > I have recently setup a page about modern Fortran: > > http://fortran90.org/ > > and in particular, it has a long section with side by side syntax > examples of Python/NumPy vs Fortran: > > http://fortran90.org/src/rosetta.html > > I would be very interested if some NumPy gurus would provide me > feedback. I personally knew > NumPy long before I learned Fortran, and I was amazed that the modern > Fortran pretty much > allows 1:1 syntax with NumPy, including most of all the fancy indexing etc. > > Is there some NumPy feature that is not covered there? I would like it > to be a nice resource > for people who know NumPy to feel like at home with Fortran, and vice > versa. I personally > use both every day (Fortran a bit more than NumPy). > > Or of you have any other comments or tips for the site, please let me > know. Eventually I'd like > to also put there C++ way of doing the same things, but at the moment > I want to get Fortran > and Python/NumPy done first. 
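A tiny flavour of that side-by-side mapping, sketched here with rough
modern-Fortran equivalents shown only as comments (the rosetta page above is
the authoritative version; note Fortran indexing is 1-based and slice upper
bounds are inclusive):

    import numpy as np

    a = np.zeros((3, 4))          # real(dp) :: a(3, 4);  a = 0
    a[0, :] = np.arange(4)        # a(1, :) = [0, 1, 2, 3]
    b = a[:, 1:3]                 # b = a(:, 2:3)
    m = a[a > 1]                  # m = pack(a, a > 1)
    s = a.sum(axis=0)             # s = sum(a, dim=1)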
> > Ondrej > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhansever at gmail.com Fri Feb 8 13:49:07 2013 From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=) Date: Fri, 8 Feb 2013 11:49:07 -0700 Subject: [Numpy-discussion] Modern Fortran vs NumPy syntax In-Reply-To: References: Message-ID: Hi Ond?ej, Any ideas that your manual syntax mapping would evolve to an automatic translation tool like i2py [http://code.google.com/p/i2py/] Thanks. On Thu, Feb 7, 2013 at 12:22 PM, Ond?ej ?ert?k wrote: > Hi, > > I have recently setup a page about modern Fortran: > > http://fortran90.org/ > > and in particular, it has a long section with side by side syntax > examples of Python/NumPy vs Fortran: > > http://fortran90.org/src/rosetta.html > > I would be very interested if some NumPy gurus would provide me > feedback. I personally knew > NumPy long before I learned Fortran, and I was amazed that the modern > Fortran pretty much > allows 1:1 syntax with NumPy, including most of all the fancy indexing etc. > > Is there some NumPy feature that is not covered there? I would like it > to be a nice resource > for people who know NumPy to feel like at home with Fortran, and vice > versa. I personally > use both every day (Fortran a bit more than NumPy). > > Or of you have any other comments or tips for the site, please let me > know. Eventually I'd like > to also put there C++ way of doing the same things, but at the moment > I want to get Fortran > and Python/NumPy done first. > > Ondrej > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -- G?khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaime.frio at gmail.com Fri Feb 8 18:12:33 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Fri, 8 Feb 2013 15:12:33 -0800 Subject: [Numpy-discussion] Type casting problems with numpy.take Message-ID: Hi, I see there is a lot of ongoing discussion on casting rules, but I couldn't find any reference to the following issue I am facing. I am trying to 'take' from an array of uint8's, using an array of uint16's as indices. Even though the return dtype would be uint8, I want to direct the output back into the array of uint16's: >>> lut = np.random.randint(256, size=(65536,)).astype('uint8') >>> arr = np.random.randint(65536, size=(1000, 1000)).astype('uint16') >>> np.take(lut, arr, out=arr) Traceback (most recent call last): File "", line 1, in File "C:\Python27\lib\site-packages\numpy\core\fromnumeric.py", line 103, in take return take(indices, axis, out, mode) TypeError: array cannot be safely cast to required type This is puzzling, since the only casting that should be happening is from uint8's to uint16's, which is as safe as it gets: >>> np.can_cast('uint8', 'uint16') True To make things even weirder, I can get the above code to work if the type of lut is uint16, uint32, uint64, int32 or int 64, but not if it is uint8, int8 or int16. Without looking at the source, it almost looks as if the type checking in numpy.take was reversed... Am I missing something, or is this broken? 
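For completeness, one workaround sketch that sidesteps the out= path entirely
(it allocates a temporary uint8 result and assigns it back, assuming the goal
is simply to end up with the looked-up values in arr):

>>> tmp = lut[arr]      # plain fancy indexing, result is uint8
>>> arr[...] = tmp      # uint8 -> uint16 assignment back into arr is safe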
My numpy's version is: >>> np.__version__ '1.6.2' which is the one packaged in Python xy 2.7.3.1, running on a 64 bit Windows 7 system. Thanks, Jaime P.S. I have posted the same question in StackExchange: http://stackoverflow.com/questions/14782135/type-casting-error-with-numpy-take -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ay?dale en sus planes de dominaci?n mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidmenhur at gmail.com Fri Feb 8 18:54:06 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Sat, 9 Feb 2013 00:54:06 +0100 Subject: [Numpy-discussion] Type casting problems with numpy.take In-Reply-To: References: Message-ID: On 9 February 2013 00:12, Jaime Fern?ndez del R?o wrote: > TypeError: array cannot be safely cast to required type With version 1.7.rc1 TypeError: Cannot cast array data from dtype('uint16') to dtype('uint8') according to the rule 'safe'. I have also tried with bigger values of lut, being it uint32, so, when they are casted to uint16 they are modified, and it will do it without complaining: >>> lut = np.random.randint(256000, size=(65536,)).astype('uint16') >>> arr = np.random.randint(65536, size=(1000, 1000)).astype('uint16') >>> np.take(lut, arr, out=arr) >>> arr.dtype dtype('uint16') From jaime.frio at gmail.com Fri Feb 8 19:34:02 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Fri, 8 Feb 2013 16:34:02 -0800 Subject: [Numpy-discussion] Type casting problems with numpy.take In-Reply-To: References: Message-ID: On Fri, Feb 8, 2013 at 3:54 PM, Da?id wrote: > TypeError: Cannot cast array data from dtype('uint16') to > dtype('uint8') according to the rule 'safe'. > That really makes it sound like the check is being done the other way around! But I'd be surprised if something so obvious hadn't been seen and reported earlier, especially since I have tried it on a Linux box with older versions, and things were the same in 1.2.1. So that means this would be a 5 year old bug. >>> np.__version__ '1.2.1' >>> lut = np.random.randint(256, size=(65536,)).astype('uint8') >>> arr = np.random.randint(65536, size=(1000, 1000)).astype('uint16') >>> np.take(lut, arr) array([[ 56, 131, 248, ..., 233, 34, 191], [229, 217, 233, ..., 183, 8, 86], [249, 238, 79, ..., 38, 17, 72], ..., [ 19, 95, 199, ..., 236, 148, 39], [178, 129, 208, ..., 76, 46, 125], [ 66, 196, 71, ..., 227, 252, 94]], dtype=uint8) >>> np.take(lut, arr, out=arr) Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.6/dist-packages/numpy/core/fromnumeric.py", line 97, in take return take(indices, axis, out, mode) TypeError: array cannot be safely cast to required type -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Feb 8 21:44:20 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 8 Feb 2013 19:44:20 -0700 Subject: [Numpy-discussion] Type casting problems with numpy.take In-Reply-To: References: Message-ID: On Fri, Feb 8, 2013 at 5:34 PM, Jaime Fern?ndez del R?o < jaime.frio at gmail.com> wrote: > On Fri, Feb 8, 2013 at 3:54 PM, Da?id wrote: > >> TypeError: Cannot cast array data from dtype('uint16') to >> dtype('uint8') according to the rule 'safe'. >> > > That really makes it sound like the check is being done the other way > around! 
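For what it's worth, spelling out both directions makes that explicit -- the
first cast is the one that should matter for filling 'out', the second is the
one the error message complains about:

>>> np.can_cast('uint8', 'uint16')
True
>>> np.can_cast('uint16', 'uint8')
False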
> > But I'd be surprised if something so obvious hadn't been seen and reported > earlier, especially since I have tried it on a Linux box with older > versions, and things were the same in 1.2.1. So that means this would be a > 5 year old bug. > > >>> np.__version__ > '1.2.1' > >>> lut = np.random.randint(256, size=(65536,)).astype('uint8') > >>> arr = np.random.randint(65536, size=(1000, 1000)).astype('uint16') > >>> np.take(lut, arr) > array([[ 56, 131, 248, ..., 233, 34, 191], > [229, 217, 233, ..., 183, 8, 86], > [249, 238, 79, ..., 38, 17, 72], > ..., > [ 19, 95, 199, ..., 236, 148, 39], > [178, 129, 208, ..., 76, 46, 125], > [ 66, 196, 71, ..., 227, 252, 94]], dtype=uint8) > >>> np.take(lut, arr, out=arr) > Traceback (most recent call last): > File "", line 1, in > File "/usr/lib/python2.6/dist-packages/numpy/core/fromnumeric.py", line > 97, in take > return take(indices, axis, out, mode) > TypeError: array cannot be safely cast to required type > > > My money is on 'five year old bug'. Many basic numpy functions are not well tested. Writing tests is a tedious job but doesn't require any C foo, just Python an patience, so if anyone would like to get involved... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaime.frio at gmail.com Sat Feb 9 01:27:44 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Fri, 8 Feb 2013 22:27:44 -0800 Subject: [Numpy-discussion] Type casting problems with numpy.take In-Reply-To: References: Message-ID: On Fri, Feb 8, 2013 at 6:44 PM, Charles R Harris wrote: > My money is on 'five year old bug'. > A bug indeed it seems to be. I have cloned the source code, and in item_selection.c, in function PyArray_TakeFrom, when 'out' is an argument in the call, the code is actually trying to cast 'out' to the type of 'self' (the first array in the call to take): int flags = NPY_ARRAY_CARRAY | NPY_ARRAY_UPDATEIFCOPY; dtype = PyArray_DESCR(self); obj = (PyArrayObject *)PyArray_FromArray(out, dtype, flags); I have also been looking at PyArray_FromArray in ctors.c, and it would be very easy to fix the broken behaviour, by adding NPY_ARRAY_FORCECAST to the flags in the call to PyArray_FromArray, the casting mode would be changed to NPY_UNSAFE_CASTING, and that should do away with the error. I'm not sure if a smarter type checking is in order here, that would require a more in depth redoing of how PyArray_TakeFrom operates. In think ufuncs let you happily cast unsafely, so maybe take should just be the same? Or should 'self' should be cast to the type of 'out'? Would that break anything else? But if nothing else, the above fix should just make the current possibly dysfunctional typecasting a consistent feature of numpy, which would be better than what's going on right now. So, where do I go to file a bug report? Should I try to send the above proposed change as a patch? I am not sure how to do either thing, any reference explaining it a little more in depth that you can point me to? > Many basic numpy functions are not well tested. Writing tests is a tedious > job but doesn't require any C foo, just Python an patience, so if anyone > would like to get involved... > How does one get involved? Jaime -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ay?dale en sus planes de dominaci?n mundial. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Sat Feb 9 09:15:15 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 9 Feb 2013 07:15:15 -0700 Subject: [Numpy-discussion] Type casting problems with numpy.take In-Reply-To: References: Message-ID: On Fri, Feb 8, 2013 at 11:27 PM, Jaime Fern?ndez del R?o < jaime.frio at gmail.com> wrote: > On Fri, Feb 8, 2013 at 6:44 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> My money is on 'five year old bug'. >> > > A bug indeed it seems to be. I have cloned the source code, and in > item_selection.c, in function PyArray_TakeFrom, when 'out' is an argument > in the call, the code is actually trying to cast 'out' to the type of > 'self' (the first array in the call to take): > > int flags = NPY_ARRAY_CARRAY | NPY_ARRAY_UPDATEIFCOPY; > dtype = PyArray_DESCR(self); > obj = (PyArrayObject *)PyArray_FromArray(out, dtype, flags); > > I have also been looking at PyArray_FromArray in ctors.c, and it would be > very easy to fix the broken behaviour, by adding NPY_ARRAY_FORCECAST to the > flags in the call to PyArray_FromArray, the casting mode would be changed > to NPY_UNSAFE_CASTING, and that should do away with the error. > > I'm not sure if a smarter type checking is in order here, that would > require a more in depth redoing of how PyArray_TakeFrom operates. In think > ufuncs let you happily cast unsafely, so maybe take should just be the > same? Or should 'self' should be cast to the type of 'out'? Would that > break anything else? > > But if nothing else, the above fix should just make the current possibly > dysfunctional typecasting a consistent feature of numpy, which would be > better than what's going on right now. > > So, where do I go to file a bug report? Should I try to send the above > proposed change as a patch? I am not sure how to do either thing, any > reference explaining it a little more in depth that you can point me to? > > >> Many basic numpy functions are not well tested. Writing tests is a >> tedious job but doesn't require any C foo, just Python an patience, so if >> anyone would like to get involved... >> > > How does one get involved? > > Just as you have done, by starting with a look, the next step is a PR. There are guidelines for developers, but if you are on github just fork numpy, make a branch on your fork with the changes and hit the PR button while you have the branch checked out. It might take some trial and error, but you are unlikely to cause any damage in the process. For testing, the next step would be to write a test that tested the take function with all combinations of types, etc, the current tests looks to be in numpy/core/tests/test_item_selection.py with some bits in test_regressions.py. Fixes to the C code need to come with a test, so you will end up writing tests anyway. I find writing tests takes more time and work than fixing the bugs. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Sat Feb 9 14:58:39 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Sat, 9 Feb 2013 11:58:39 -0800 Subject: [Numpy-discussion] ANN: NumPy 1.7.0rc2 release In-Reply-To: <51143AF9.1030500@uci.edu> References: <51143AF9.1030500@uci.edu> Message-ID: Thanks Fr?d?ric and Christoph for the feedback. Looks like there are no further problems, so I will go ahead and do the final release today. 
Ondrej From charlesr.harris at gmail.com Sat Feb 9 16:20:14 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 9 Feb 2013 14:20:14 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.0rc2 release In-Reply-To: References: <51143AF9.1030500@uci.edu> Message-ID: On Sat, Feb 9, 2013 at 12:58 PM, Ond?ej ?ert?k wrote: > Thanks Fr?d?ric and Christoph for the feedback. Looks like there are > no further problems, so I will go ahead and do the final release today. > > Ondrej > Congratulations on your second child ;) And so soon after the first ... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarabas at igf.fuw.edu.pl Sat Feb 9 19:39:48 2013 From: sarabas at igf.fuw.edu.pl (Sylwester Arabas) Date: Sun, 10 Feb 2013 01:39:48 +0100 Subject: [Numpy-discussion] Modern Fortran vs NumPy syntax In-Reply-To: References: Message-ID: <5116EC54.2030309@igf.fuw.edu.pl> Hi All, On 08/02/13 00:57, Ond?ej ?ert?k wrote: > On Thu, Feb 7, 2013 at 11:31 AM, Fr?d?ric Bastien wrote: >> ... >> Maybe you can check that article to find more case to compare. > > Yes, I know about this article --- I've been in touch with Sylwester about it, > as I found his code on github a few months ago, so we discussed it. > I also CCed him if he wants to add some comments. > ... Thanks Fr?d?ric and Ond?ej for mentioning the paper here. I'll just add that the arXiv manuscript is a draft. We're working on a revised version to be posted on arXiv and submitted to a journal in a few weeks. Any comments are thus very much welcome! (http://arxiv.org/abs/1301.1334) Regards, Sylwester -- http://www.igf.fuw.edu.pl/~slayoo/ From ondrej.certik at gmail.com Sat Feb 9 20:25:03 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Sat, 9 Feb 2013 17:25:03 -0800 Subject: [Numpy-discussion] ANN: NumPy 1.7.0 release Message-ID: Hi, I'm pleased to announce the availability of the final release of NumPy 1.7.0. Sources and binary installers can be found at https://sourceforge.net/projects/numpy/files/NumPy/1.7.0/ This release is equivalent to the 1.7.0rc2 release, since no more problems were found. For release notes see below. I would like to thank everybody who contributed to this release. Cheers, Ondrej ========================= NumPy 1.7.0 Release Notes ========================= This release includes several new features as well as numerous bug fixes and refactorings. It supports Python 2.4 - 2.7 and 3.1 - 3.3 and is the last release that supports Python 2.4 - 2.5. Highlights ========== * ``where=`` parameter to ufuncs (allows the use of boolean arrays to choose where a computation should be done) * ``vectorize`` improvements (added 'excluded' and 'cache' keyword, general cleanup and bug fixes) * ``numpy.random.choice`` (random sample generating function) Compatibility notes =================== In a future version of numpy, the functions np.diag, np.diagonal, and the diagonal method of ndarrays will return a view onto the original array, instead of producing a copy as they do now. This makes a difference if you write to the array returned by any of these functions. To facilitate this transition, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for np.diagonal for details. Similar to np.diagonal above, in a future version of numpy, indexing a record array by a list of field names will return a view onto the original array, instead of producing a copy as they do now. 
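Concretely, the kind of indexing meant here is (a small sketch)::

    >>> ra = np.zeros(3, dtype=[('x', float), ('y', float)])
    >>> sub = ra[['x', 'y']]      # indexing by a list of field names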
As with np.diagonal, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for array indexing for details. In a future version of numpy, the default casting rule for UFunc out= parameters will be changed from 'unsafe' to 'same_kind'. (This also applies to in-place operations like a += b, which is equivalent to np.add(a, b, out=a).) Most usages which violate the 'same_kind' rule are likely bugs, so this change may expose previously undetected errors in projects that depend on NumPy. In this version of numpy, such usages will continue to succeed, but will raise a DeprecationWarning. Full-array boolean indexing has been optimized to use a different, optimized code path. This code path should produce the same results, but any feedback about changes to your code would be appreciated. Attempting to write to a read-only array (one with ``arr.flags.writeable`` set to ``False``) used to raise either a RuntimeError, ValueError, or TypeError inconsistently, depending on which code path was taken. It now consistently raises a ValueError. The .reduce functions evaluate some reductions in a different order than in previous versions of NumPy, generally providing higher performance. Because of the nature of floating-point arithmetic, this may subtly change some results, just as linking NumPy to a different BLAS implementations such as MKL can. If upgrading from 1.5, then generally in 1.6 and 1.7 there have been substantial code added and some code paths altered, particularly in the areas of type resolution and buffered iteration over universal functions. This might have an impact on your code particularly if you relied on accidental behavior in the past. New features ============ Reduction UFuncs Generalize axis= Parameter ------------------------------------------- Any ufunc.reduce function call, as well as other reductions like sum, prod, any, all, max and min support the ability to choose a subset of the axes to reduce over. Previously, one could say axis=None to mean all the axes or axis=# to pick a single axis. Now, one can also say axis=(#,#) to pick a list of axes for reduction. Reduction UFuncs New keepdims= Parameter ---------------------------------------- There is a new keepdims= parameter, which if set to True, doesn't throw away the reduction axes but instead sets them to have size one. When this option is set, the reduction result will broadcast correctly to the original operand which was reduced. Datetime support ---------------- .. note:: The datetime API is *experimental* in 1.7.0, and may undergo changes in future versions of NumPy. There have been a lot of fixes and enhancements to datetime64 compared to NumPy 1.6: * the parser is quite strict about only accepting ISO 8601 dates, with a few convenience extensions * converts between units correctly * datetime arithmetic works correctly * business day functionality (allows the datetime to be used in contexts where only certain days of the week are valid) The notes in `doc/source/reference/arrays.datetime.rst `_ (also available in the online docs at `arrays.datetime.html `_) should be consulted for more details. Custom formatter for printing arrays ------------------------------------ See the new ``formatter`` parameter of the ``numpy.set_printoptions`` function. New function numpy.random.choice --------------------------------- A generic sampling function has been added which will generate samples from a given array-like. 
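A minimal usage sketch::

    np.random.choice(5, 3)                         # 3 draws from np.arange(5)
    np.random.choice([0.1, 0.2, 0.7], 4, p=[0.5, 0.25, 0.25])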
The samples can be with or without replacement, and with uniform or given non-uniform probabilities. New function isclose -------------------- Returns a boolean array where two arrays are element-wise equal within a tolerance. Both relative and absolute tolerance can be specified. Preliminary multi-dimensional support in the polynomial package --------------------------------------------------------------- Axis keywords have been added to the integration and differentiation functions and a tensor keyword was added to the evaluation functions. These additions allow multi-dimensional coefficient arrays to be used in those functions. New functions for evaluating 2-D and 3-D coefficient arrays on grids or sets of points were added together with 2-D and 3-D pseudo-Vandermonde matrices that can be used for fitting. Ability to pad rank-n arrays ---------------------------- A pad module containing functions for padding n-dimensional arrays has been added. The various private padding functions are exposed as options to a public 'pad' function. Example:: pad(a, 5, mode='mean') Current modes are ``constant``, ``edge``, ``linear_ramp``, ``maximum``, ``mean``, ``median``, ``minimum``, ``reflect``, ``symmetric``, ``wrap``, and ````. New argument to searchsorted ---------------------------- The function searchsorted now accepts a 'sorter' argument that is a permutation array that sorts the array to search. Build system ------------ Added experimental support for the AArch64 architecture. C API ----- New function ``PyArray_RequireWriteable`` provides a consistent interface for checking array writeability -- any C code which works with arrays whose WRITEABLE flag is not known to be True a priori, should make sure to call this function before writing. NumPy C Style Guide added (``doc/C_STYLE_GUIDE.rst.txt``). Changes ======= General ------- The function np.concatenate tries to match the layout of its input arrays. Previously, the layout did not follow any particular reason, and depended in an undesirable way on the particular axis chosen for concatenation. A bug was also fixed which silently allowed out of bounds axis arguments. The ufuncs logical_or, logical_and, and logical_not now follow Python's behavior with object arrays, instead of trying to call methods on the objects. For example the expression (3 and 'test') produces the string 'test', and now np.logical_and(np.array(3, 'O'), np.array('test', 'O')) produces 'test' as well. The ``.base`` attribute on ndarrays, which is used on views to ensure that the underlying array owning the memory is not deallocated prematurely, now collapses out references when you have a view-of-a-view. For example:: a = np.arange(10) b = a[1:] c = b[1:] In numpy 1.6, ``c.base`` is ``b``, and ``c.base.base`` is ``a``. In numpy 1.7, ``c.base`` is ``a``. To increase backwards compatibility for software which relies on the old behaviour of ``.base``, we only 'skip over' objects which have exactly the same type as the newly created view. This makes a difference if you use ``ndarray`` subclasses. For example, if we have a mix of ``ndarray`` and ``matrix`` objects which are all views on the same original ``ndarray``:: a = np.arange(10) b = np.asmatrix(a) c = b[0, 1:] d = c[0, 1:] then ``d.base`` will be ``b``. This is because ``d`` is a ``matrix`` object, and so the collapsing process only continues so long as it encounters other ``matrix`` objects. It considers ``c``, ``b``, and ``a`` in that order, and ``b`` is the last entry in that list which is a ``matrix`` object. 
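Interactively, the plain-ndarray case above now amounts to (sketch)::

    >>> a = np.arange(10)
    >>> b = a[1:]
    >>> c = b[1:]
    >>> c.base is a       # True on 1.7; on 1.6 c.base was b
    True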
Casting Rules ------------- Casting rules have undergone some changes in corner cases, due to the NA-related work. In particular for combinations of scalar+scalar: * the `longlong` type (`q`) now stays `longlong` for operations with any other number (`? b h i l q p B H I`), previously it was cast as `int_` (`l`). The `ulonglong` type (`Q`) now stays as `ulonglong` instead of `uint` (`L`). * the `timedelta64` type (`m`) can now be mixed with any integer type (`b h i l q p B H I L Q P`), previously it raised `TypeError`. For array + scalar, the above rules just broadcast except the case when the array and scalars are unsigned/signed integers, then the result gets converted to the array type (of possibly larger size) as illustrated by the following examples:: >>> (np.zeros((2,), dtype=np.uint8) + np.int16(257)).dtype dtype('uint16') >>> (np.zeros((2,), dtype=np.int8) + np.uint16(257)).dtype dtype('int16') >>> (np.zeros((2,), dtype=np.int16) + np.uint32(2**17)).dtype dtype('int32') Whether the size gets increased depends on the size of the scalar, for example:: >>> (np.zeros((2,), dtype=np.uint8) + np.int16(255)).dtype dtype('uint8') >>> (np.zeros((2,), dtype=np.uint8) + np.int16(256)).dtype dtype('uint16') Also a ``complex128`` scalar + ``float32`` array is cast to ``complex64``. In NumPy 1.7 the `datetime64` type (`M`) must be constructed by explicitly specifying the type as the second argument (e.g. ``np.datetime64(2000, 'Y')``). Deprecations ============ General ------- Specifying a custom string formatter with a `_format` array attribute is deprecated. The new `formatter` keyword in ``numpy.set_printoptions`` or ``numpy.array2string`` can be used instead. The deprecated imports in the polynomial package have been removed. ``concatenate`` now raises DepractionWarning for 1D arrays if ``axis != 0``. Versions of numpy < 1.7.0 ignored axis argument value for 1D arrays. We allow this for now, but in due course we will raise an error. C-API ----- Direct access to the fields of PyArrayObject* has been deprecated. Direct access has been recommended against for many releases. Expect similar deprecations for PyArray_Descr* and other core objects in the future as preparation for NumPy 2.0. The macros in old_defines.h are deprecated and will be removed in the next major release (>= 2.0). The sed script tools/replace_old_macros.sed can be used to replace these macros with the newer versions. You can test your code against the deprecated C API by #defining NPY_NO_DEPRECATED_API to the target version number, for example NPY_1_7_API_VERSION, before including any NumPy headers. The ``NPY_CHAR`` member of the ``NPY_TYPES`` enum is deprecated and will be removed in NumPy 1.8. See the discussion at `gh-2801 `_ for more details. 
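As an illustration of the ``formatter`` keyword mentioned above (a sketch; any
per-type callable returning a string can be supplied)::

    >>> np.set_printoptions(formatter={'float': lambda x: '%.3f' % x})
    >>> np.array([np.pi, 2 * np.pi])      # now prints as array([3.142, 6.283])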
Checksums ========= 7b72cc17b6a9043f6d46af4e71cd3dbe release/installers/numpy-1.7.0-win32-superpack-python3.3.exe 4fa54e40b6a243416f0248123b6ec332 release/installers/numpy-1.7.0.tar.gz 9ef1688bb9f8deb058a8022b4788686c release/installers/numpy-1.7.0-win32-superpack-python2.7.exe 909fe47da05d2a35edd6909ba0152213 release/installers/numpy-1.7.0-win32-superpack-python3.2.exe 5d4318b722d0098f78b49c0030d47026 release/installers/numpy-1.7.0-win32-superpack-python2.6.exe 92b61d6f278a81cf9a5033b0c8e7b53e release/installers/numpy-1.7.0-win32-superpack-python3.1.exe 51d6f4f854cdca224fa56a327ad7c620 release/installers/numpy-1.7.0-win32-superpack-python2.5.exe ca27913c59393940e880fab420f985b4 release/installers/numpy-1.7.0.zip 3f20becbb80da09412d94815ad3b586b release/installers/numpy-1.7.0-py2.5-python.org-macosx10.3.dmg 600dfa4dab31db5dc2ed9655521cfa9e release/installers/numpy-1.7.0-py2.6-python.org-macosx10.3.dmg a907a37416163b3245a30cfd160506ab release/installers/numpy-1.7.0-py2.7-python.org-macosx10.3.dmg From ndbecker2 at gmail.com Sat Feb 9 21:36:02 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Sat, 09 Feb 2013 21:36:02 -0500 Subject: [Numpy-discussion] ANN: NumPy 1.7.0 release References: Message-ID: Is there a way to add '-march=native' flag to gcc for the build? From charlesr.harris at gmail.com Sun Feb 10 00:31:10 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 9 Feb 2013 22:31:10 -0700 Subject: [Numpy-discussion] ANN: NumPy 1.7.0 release In-Reply-To: References: Message-ID: On Sat, Feb 9, 2013 at 6:25 PM, Ond?ej ?ert?k wrote: News report attached. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Pulp-O-Mizer_Cover_Image.jpg Type: image/jpeg Size: 116388 bytes Desc: not available URL: From francesc at continuum.io Sun Feb 10 03:13:33 2013 From: francesc at continuum.io (Francesc Alted) Date: Sun, 10 Feb 2013 09:13:33 +0100 Subject: [Numpy-discussion] ANN: NumPy 1.7.0 release In-Reply-To: References: Message-ID: Exciting stuff. Thanks a lot to you and everybody implied in the release for an amazing job. Francesc El 10/02/2013 2:25, "Ond?ej ?ert?k" va escriure: > Hi, > > I'm pleased to announce the availability of the final release of > NumPy 1.7.0. > > Sources and binary installers can be found at > https://sourceforge.net/projects/numpy/files/NumPy/1.7.0/ > > This release is equivalent to the 1.7.0rc2 release, since no more problems > were found. For release notes see below. > > I would like to thank everybody who contributed to this release. > > Cheers, > Ondrej > > > ========================= > NumPy 1.7.0 Release Notes > ========================= > > This release includes several new features as well as numerous bug fixes > and > refactorings. It supports Python 2.4 - 2.7 and 3.1 - 3.3 and is the last > release that supports Python 2.4 - 2.5. > > Highlights > ========== > > * ``where=`` parameter to ufuncs (allows the use of boolean arrays to > choose > where a computation should be done) > * ``vectorize`` improvements (added 'excluded' and 'cache' keyword, general > cleanup and bug fixes) > * ``numpy.random.choice`` (random sample generating function) > > > Compatibility notes > =================== > > In a future version of numpy, the functions np.diag, np.diagonal, and the > diagonal method of ndarrays will return a view onto the original array, > instead of producing a copy as they do now. 
> [... the rest of the quoted release notes is identical to the announcement above and is snipped ...]
> Direct access to the fields of PyArrayObject* has been deprecated. Direct
> access has been recommended against for many releases.
Expect similar > deprecations for PyArray_Descr* and other core objects in the future as > preparation for NumPy 2.0. > > The macros in old_defines.h are deprecated and will be removed in the next > major release (>= 2.0). The sed script tools/replace_old_macros.sed can be > used to replace these macros with the newer versions. > > You can test your code against the deprecated C API by #defining > NPY_NO_DEPRECATED_API to the target version number, for example > NPY_1_7_API_VERSION, before including any NumPy headers. > > The ``NPY_CHAR`` member of the ``NPY_TYPES`` enum is deprecated and will be > removed in NumPy 1.8. See the discussion at > `gh-2801 `_ for more details. > > Checksums > ========= > > 7b72cc17b6a9043f6d46af4e71cd3dbe > release/installers/numpy-1.7.0-win32-superpack-python3.3.exe > 4fa54e40b6a243416f0248123b6ec332 release/installers/numpy-1.7.0.tar.gz > 9ef1688bb9f8deb058a8022b4788686c > release/installers/numpy-1.7.0-win32-superpack-python2.7.exe > 909fe47da05d2a35edd6909ba0152213 > release/installers/numpy-1.7.0-win32-superpack-python3.2.exe > 5d4318b722d0098f78b49c0030d47026 > release/installers/numpy-1.7.0-win32-superpack-python2.6.exe > 92b61d6f278a81cf9a5033b0c8e7b53e > release/installers/numpy-1.7.0-win32-superpack-python3.1.exe > 51d6f4f854cdca224fa56a327ad7c620 > release/installers/numpy-1.7.0-win32-superpack-python2.5.exe > ca27913c59393940e880fab420f985b4 release/installers/numpy-1.7.0.zip > 3f20becbb80da09412d94815ad3b586b > release/installers/numpy-1.7.0-py2.5-python.org-macosx10.3.dmg > 600dfa4dab31db5dc2ed9655521cfa9e > release/installers/numpy-1.7.0-py2.6-python.org-macosx10.3.dmg > a907a37416163b3245a30cfd160506ab > release/installers/numpy-1.7.0-py2.7-python.org-macosx10.3.dmg > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.s.seljebotn at astro.uio.no Sun Feb 10 05:58:07 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Sun, 10 Feb 2013 11:58:07 +0100 Subject: [Numpy-discussion] ANN: NumPy 1.7.0 release In-Reply-To: References: Message-ID: <51177D3F.5070500@astro.uio.no> On 02/10/2013 03:36 AM, Neal Becker wrote: > Is there a way to add '-march=native' flag to gcc for the build? I think something along these lines should work (untested): CFLAGS="$(python-config --cflags) -march=native" python setup.py install Dag Sverre From d.s.seljebotn at astro.uio.no Sun Feb 10 06:00:22 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Sun, 10 Feb 2013 12:00:22 +0100 Subject: [Numpy-discussion] ANN: NumPy 1.7.0 release In-Reply-To: <51177D3F.5070500@astro.uio.no> References: <51177D3F.5070500@astro.uio.no> Message-ID: <51177DC6.60804@astro.uio.no> On 02/10/2013 11:58 AM, Dag Sverre Seljebotn wrote: > On 02/10/2013 03:36 AM, Neal Becker wrote: >> Is there a way to add '-march=native' flag to gcc for the build? > > I think something along these lines should work (untested): > > CFLAGS="$(python-config --cflags) -march=native" python setup.py install Actually I retract this -- it's what I once did for a Cython project, but numpy.distutils is doing so much in addition I can't be sure at all. (Until somebody gives the correct answer, you could also try to set the "OPT" environment variable.) 
Dag Sverre From dave.hirschfeld at gmail.com Sun Feb 10 09:25:00 2013 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Sun, 10 Feb 2013 14:25:00 +0000 (UTC) Subject: [Numpy-discussion] nditer gurus: is there a more efficient way to do this? Message-ID: I have two NxMx3 arrays and I want to reduce over the last dimension of the first array by selecting those elements corresponding to the index of the maximum value of each 3-vector of the second array to give an NxM result. Hopefully that makes sense? If not hopefully the example below will shed some light. Can anyone think of a more efficient way to do this than my 1st attempt? I thought that nditer might be the solution; I got it to work (mostly by trial & error) but found that it's ~50x slower for this task! Is this not a good usecase for nditer or am I doing something wrong? In [42]: value_array = np.outer(ones(1000), arange(3)).reshape(20,50,3) ...: index_array = randn(*value_array.shape) In [43]: indices = index_array.reshape(-1,3).argmax(axis=1) ...: result = value_array.reshape(-1,3)[np.arange(indices.size), indices] ...: result = result.reshape(value_array.shape[0:-1]) In [44]: it = np.nditer([value_array, index_array, None], ...: flags=['reduce_ok', 'external_loop','buffered', 'delay_bufalloc'], ...: op_flags=[['readonly'],['readonly'],['readwrite', 'allocate']], ...: op_axes=[[0,1,2], [0,1,2], [0,1,-1]]) ...: it.reset() ...: for values, index_values, out in it: ...: out[...] = values[index_values.argmax()] ...: # In [45]: allclose(result, it.operands[2]) Out[45]: True In [46]: %%timeit ...: indices = index_array.reshape(-1,3).argmax(axis=1) ...: result = value_array.reshape(-1,3)[np.arange(indices.size), indices] ...: result = result.reshape(value_array.shape[0:-1]) ...: 10000 loops, best of 3: 113 ?s per loop In [47]: %%timeit ...: it = np.nditer([value_array, index_array, None], ...: flags=['reduce_ok', 'external_loop','buffered', 'delay_bufalloc'], ...: op_flags=[['readonly'],['readonly'],['readwrite', 'allocate']], ...: op_axes=[[0,1,2], [0,1,2], [0,1,-1]]) ...: it.reset() ...: for values, index_values, out in it: ...: out[...] = values[index_values.argmax()] ...: # ...: 100 loops, best of 3: 5.26 ms per loop In [48]: Thanks, Dave From roberto.colistete at gmail.com Mon Feb 11 17:40:28 2013 From: roberto.colistete at gmail.com (Roberto Colistete Jr.) Date: Mon, 11 Feb 2013 20:40:28 -0200 Subject: [Numpy-discussion] NumPy 1.7.0 for MeeGo Harmattan OS Message-ID: <5119735C.9060309@gmail.com> Hi, It is my first participation here. About NumPy on Mobile OS : - NumPy 1.7.0 was released today (11/02/2013) for MeeGo Harmattan OS (for Nokia N9/N950), just 1 day after the mainstream release. See the Talk Maemo.org topic : http://talk.maemo.org/showthread.php?p=1322503 MeeGo Harmattan OS also has NumPy 1.4.1 from Nokia repositories : http://wiki.maemo.org/Python/Harmattan - Maemo 5 OS (Nokia N900) has NumPy 1.4.0 : http://maemo.org/packages/view/python-numpy/ Also for MeeGo Harmattan OS : - MatPlotLib 1.2.0 was released (in 09/02/2013) for MeeGo Harmattan OS. Including Qt4/PySide backend : http://talk.maemo.org/showthread.php?p=1128672 - IPython 0.13.1, including Notebook and Qt console interfaces, was released in 22/01/2013. See the Talk Maemo.org topic : http://talk.maemo.org/showthread.php?p=1123672 It can work as an IPython Notebook server , with web clients running on Android, desktop OS, etc via via WiFi hotspot of Nokia N9/N950. 
See my blog article comparing scientific Python tools for computers, tablets and smartphones : http://translate.google.com/translate?hl=pt-BR&sl=pt&tl=en&u=http%3A%2F%2Frobertocolistete.wordpress.com%2F2012%2F12%2F26%2Fpython-cientifico-em-computadores-tablets-e-smartphones%2F The conclusion is very simple : real mobile Linux OS (with glibc, X11, dependencies, etc) are better for scientific Python. Like Maemo 5 OS and MeeGo Harmattan OS. And future Sailfish OS and Ubuntu Phone OS can follow the same path. Best regards, Roberto Colistete Jr. From ddvento at ucar.edu Mon Feb 11 22:54:03 2013 From: ddvento at ucar.edu (Davide Del Vento) Date: Mon, 11 Feb 2013 20:54:03 -0700 Subject: [Numpy-discussion] number of tests Message-ID: I compiled numpy 1.6.2 (right before 1.7 came out) with the intel compiler and MKL library. I'm trying to assess whether or not everything has been build fine. Since my machine is actually a cluster, I'm running the tests in different configurations (login node and batch script). However, I'm confused by the number of tests which ran. On the login nodes (either interactively or without tty) I get: Ran 3587 tests in 22.211s FAILED (KNOWNFAIL=5, SKIP=11, failures=2) Whereas in a remote batch node (with a script) I get: Ran 3229 tests in 15.642s OK (KNOWNFAIL=5, SKIP=19) Where did the 358 "missing" tests go in the batch run? The handful difference in SKIPped and FAILed (which I am investigating) cannot be the reason. What is it happening? PS: a similar thing happened with scipy, which I'm asking on the scipy mailing list. Thanks and Regards, Davide From ondrej.certik at gmail.com Tue Feb 12 00:49:21 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Mon, 11 Feb 2013 21:49:21 -0800 Subject: [Numpy-discussion] How to upload to pypi Message-ID: Hi, I have uploaded the NumPy 1.7.0 source distribution to pypi: http://pypi.python.org/pypi/numpy/1.7.0 I did it by uploading the file PKG-INFO from numpy-1.7.0.tar.gz. It said "Error processing form. Form Failure; reset form submission" about 3x times, but on the 4th try it went through. I reported the issue here: https://sourceforge.net/tracker/?func=detail&aid=3604194&group_id=66150&atid=513504 I then attached the numpy-1.7.0.tar.gz and numpy-1.7.0.zip source files. Now I am having trouble attaching the windows installers, just like they are here: http://pypi.python.org/pypi/numpy/1.6.2 but whenever I upload the file numpy-1.7.0-win32-superpack-python2.5.exe (and set it as "MS Windows installer"), it uploads and then I get a blank page with the text: """ Error processing form invalid distribution file """ That's it.... Not very useful. Do you know if the sources of pypi are somewhere online? (I didn't find them, only a similar package https://github.com/schmir/pypiserver, but that doesn't seem to be it.) Anyway, so I at least reported the bug here: http://sourceforge.net/tracker/?func=detail&aid=3604193&group_id=66150&atid=513504 Finally, should we try to upload the Windows installers to pypi at all? I am probably doing something wrong if I am hitting these pypi bugs. Many thanks, Ondrej From dominic at steinitz.org Tue Feb 12 04:55:35 2013 From: dominic at steinitz.org (Dominic Steinitz) Date: Tue, 12 Feb 2013 09:55:35 +0000 Subject: [Numpy-discussion] numpy.test('full') errors and failures Message-ID: <1A1C946D-D1CA-4039-BECA-035845982664@steinitz.org> Apologies if this is the wrong mailing list. 
I have installed numpy using the excellent script here: http://fonnesbeck.github.com/ScipySuperpack/

I ran numpy.test('full') and got several errors. Should I be worried?

Thanks, Dominic.

PS I can send the full errors if that would be helpful.

>>> np.test('full')
Running unit tests for numpy
NumPy version 1.8.0.dev-4600b2f
NumPy is installed in /Library/Python/2.7/site-packages/numpy-1.8.0.dev_4600b2f_20130131-py2.7-macosx-10.8-intel.egg/numpy
Python version 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
nose version 1.2.1
?
Ran 4471 tests in 13.065s
FAILED (KNOWNFAIL=5, SKIP=26, errors=2, failures=3)

From robert.kern at gmail.com Tue Feb 12 05:40:18 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 12 Feb 2013 10:40:18 +0000
Subject: [Numpy-discussion] How to upload to pypi
In-Reply-To: 
References: 
Message-ID: 

On Tue, Feb 12, 2013 at 5:49 AM, Ondřej Čertík wrote:
> Hi,
>
> I have uploaded the NumPy 1.7.0 source distribution to pypi:
>
> http://pypi.python.org/pypi/numpy/1.7.0
>
> I did it by uploading the file PKG-INFO from numpy-1.7.0.tar.gz. It
> said "Error processing form. Form Failure; reset form submission"
> about 3x times,
> but on the 4th try it went through. I reported the issue here:
>
> https://sourceforge.net/tracker/?func=detail&aid=3604194&group_id=66150&atid=513504
>
> I then attached the numpy-1.7.0.tar.gz and numpy-1.7.0.zip source files.
>
> Now I am having trouble attaching the windows installers, just like
> they are here:
>
> http://pypi.python.org/pypi/numpy/1.6.2
>
> but whenever I upload the file
> numpy-1.7.0-win32-superpack-python2.5.exe (and set it as "MS Windows
> installer"),
> it uploads and then I get a blank page with the text:
>
> """
> Error processing form
>
> invalid distribution file
> """

PyPI does some validation on the files that are uploaded. .exe files
must be created by bdist_wininst.

https://bitbucket.org/loewis/pypi/src/fc588bcd668aba643e2e7f9bd6901a7a4296dddb/verify_filetype.py?at=default#cl-15

I am guessing that the superpack installer is manually built through
another mechanism.

> That's it.... Not very useful. Do you know if the sources of pypi are
> somewhere online? (I didn't find them, only a similar package
> https://github.com/schmir/pypiserver, but that doesn't seem to be it.)

http://wiki.python.org/moin/CheeseShopDev

You can get help with PyPI on Catalog-SIG:

http://mail.python.org/mailman/listinfo/catalog-sig

--
Robert Kern

From davidmenhur at gmail.com Tue Feb 12 07:28:45 2013
From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=)
Date: Tue, 12 Feb 2013 13:28:45 +0100
Subject: [Numpy-discussion] numpy.test('full') errors and failures
In-Reply-To: <1A1C946D-D1CA-4039-BECA-035845982664@steinitz.org>
References: <1A1C946D-D1CA-4039-BECA-035845982664@steinitz.org>
Message-ID: 

On 12 February 2013 10:55, Dominic Steinitz wrote:
> Running unit tests for numpy
> NumPy version 1.8.0.dev-4600b2f

I can see this is not the stable version; try 1.7, which has just been released.
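A quick way to do that, assuming pip is available on your machine (the exact commands below are only a sketch), is:

    pip install --upgrade "numpy==1.7.0"
    python -c "import numpy; print numpy.__version__; numpy.test('full')"

If the stable release still shows errors and failures, posting the full log would help.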
From davidmenhur at gmail.com Tue Feb 12 07:30:44 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Tue, 12 Feb 2013 13:30:44 +0100 Subject: [Numpy-discussion] number of tests In-Reply-To: References: Message-ID: On 12 February 2013 04:54, Davide Del Vento wrote: > Ran 3587 tests in 22.211s > FAILED (KNOWNFAIL=5, SKIP=11, failures=2) > > Whereas in a remote batch node (with a script) I get: > > Ran 3229 tests in 15.642s > OK (KNOWNFAIL=5, SKIP=19) On my machine (linux 64 bits) In [3]: np.test('full') Running unit tests for numpy NumPy version 1.7.0 NumPy is installed in /usr/lib64/python2.7/site-packages/numpy Python version 2.7.3 (default, Aug 9 2012, 17:23:57) [GCC 4.7.1 20120720 (Red Hat 4.7.1-5)] nose version 1.2.1 ---------------------------------------------------------------------- Ran 4836 tests in 33.016s OK (KNOWNFAIL=5, SKIP=1) Out[3]: From davidmenhur at gmail.com Tue Feb 12 07:37:28 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Tue, 12 Feb 2013 13:37:28 +0100 Subject: [Numpy-discussion] pip install numpy throwing a lot of output. Message-ID: I have just upgraded numpy with pip on Linux 64 bits with Python 2.7, and I got *a lot* of output, so much it doesn't fit in the terminal. Most of it are gcc commands, but there are many different errors thrown by the compiler. Is this expected? I am not too worried as the test suite passes, but pip is supposed to give only meaningful output (or at least, this is what the creators intended). David. From francesc at continuum.io Tue Feb 12 08:58:44 2013 From: francesc at continuum.io (Francesc Alted) Date: Tue, 12 Feb 2013 14:58:44 +0100 Subject: [Numpy-discussion] pip install numpy throwing a lot of output. In-Reply-To: References: Message-ID: <511A4A94.2090509@continuum.io> On 2/12/13 1:37 PM, Da?id wrote: > I have just upgraded numpy with pip on Linux 64 bits with Python 2.7, > and I got *a lot* of output, so much it doesn't fit in the terminal. > Most of it are gcc commands, but there are many different errors > thrown by the compiler. Is this expected? Yes, I think that's expected. Just to make sure, can you send some excerpts of the errors that you are getting? > > I am not too worried as the test suite passes, but pip is supposed to > give only meaningful output (or at least, this is what the creators > intended). Well, pip needs to compile the libraries prior to install them, so compile messages are meaningful. Another question would be to reduce the amount of compile messages by default in NumPy, but I don't think this is realistic (and even not desirable). -- Francesc Alted From davidmenhur at gmail.com Tue Feb 12 09:18:55 2013 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Tue, 12 Feb 2013 15:18:55 +0100 Subject: [Numpy-discussion] pip install numpy throwing a lot of output. In-Reply-To: <511A4A94.2090509@continuum.io> References: <511A4A94.2090509@continuum.io> Message-ID: On 12 February 2013 14:58, Francesc Alted wrote: > Yes, I think that's expected. Just to make sure, can you send some > excerpts of the errors that you are getting? Actually the errors are at the beginning of the process, so they are out of the reach of my terminal right now. Seems like pip doesn't keep a log in case of success. The ones I can see are mostly warnings of unused variables and functions, maybe this is the expected behaviour for a library? 
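(Next time I will capture the whole build log so the early errors are not lost off the top of the terminal; something along the lines of

    pip install --upgrade numpy 2>&1 | tee numpy-build.log

should keep everything, stderr included -- the command is only a sketch.)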
This errors come from a complete reinstall instead of the original upgrade (the cat closed the terminal, worst excuse ever!): compile options: '-Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/usr/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' gcc: build/src.linux-x86_64-2.7/numpy/core/src/npysort/quicksort.c In file included from /usr/include/python2.7/pyconfig.h:6:0, from /usr/include/python2.7/Python.h:8, from numpy/core/src/private/npy_sort.h:5, from numpy/core/src/npysort/quicksort.c.src:32: /usr/include/python2.7/pyconfig-64.h:1170:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default] In file included from /usr/include/stdlib.h:24:0, from numpy/core/src/npysort/quicksort.c.src:31: /usr/include/features.h:168:0: note: this is the location of the previous definition In file included from /usr/include/python2.7/pyconfig.h:6:0, from /usr/include/python2.7/Python.h:8, from numpy/core/src/private/npy_sort.h:5, from numpy/core/src/npysort/quicksort.c.src:32: /usr/include/python2.7/pyconfig-64.h:1192:0: warning: "_XOPEN_SOURCE" redefined [enabled by default] In file included from /usr/include/stdlib.h:24:0, from numpy/core/src/npysort/quicksort.c.src:31: /usr/include/features.h:170:0: note: this is the location of the previous definition gcc: build/src.linux-x86_64-2.7/numpy/core/src/npysort/mergesort.c In file included from /usr/include/python2.7/pyconfig.h:6:0, from /usr/include/python2.7/Python.h:8, from numpy/core/src/private/npy_sort.h:5, from numpy/core/src/npysort/mergesort.c.src:32: /usr/include/python2.7/pyconfig-64.h:1170:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default] In file included from /usr/include/stdlib.h:24:0, from numpy/core/src/npysort/mergesort.c.src:31: /usr/include/features.h:168:0: note: this is the location of the previous definition In file included from /usr/include/python2.7/pyconfig.h:6:0, from /usr/include/python2.7/Python.h:8, from numpy/core/src/private/npy_sort.h:5, from numpy/core/src/npysort/mergesort.c.src:32: /usr/include/python2.7/pyconfig-64.h:1192:0: warning: "_XOPEN_SOURCE" redefined [enabled by default] In file included from /usr/include/stdlib.h:24:0, from numpy/core/src/npysort/mergesort.c.src:31: /usr/include/features.h:170:0: note: this is the location of the previous definition gcc: build/src.linux-x86_64-2.7/numpy/core/src/npysort/heapsort.c In file included from /usr/include/python2.7/pyconfig.h:6:0, from /usr/include/python2.7/Python.h:8, from numpy/core/src/private/npy_sort.h:5, from numpy/core/src/npysort/heapsort.c.src:32: /usr/include/python2.7/pyconfig-64.h:1170:0: warning: "_POSIX_C_SOURCE" redefined [enabled by default] In file included from /usr/include/stdlib.h:24:0, from numpy/core/src/npysort/heapsort.c.src:31: /usr/include/features.h:168:0: note: this is the location of the previous definition In file included from /usr/include/python2.7/pyconfig.h:6:0, from /usr/include/python2.7/Python.h:8, from numpy/core/src/private/npy_sort.h:5, from numpy/core/src/npysort/heapsort.c.src:32: /usr/include/python2.7/pyconfig-64.h:1192:0: warning: "_XOPEN_SOURCE" redefined [enabled by default] compile options: '-Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy 
-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/usr/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c' gcc: numpy/core/src/umath/umathmodule_onefile.c In file included from numpy/core/src/umath/umathmodule_onefile.c:3:0: numpy/core/src/umath/ufunc_object.c: In function ?get_ufunc_arguments?: numpy/core/src/umath/ufunc_object.c:717:37: warning: unused variable ?nout? [-Wunused-variable] In file included from numpy/core/src/umath/ufunc_object.c:42:0, from numpy/core/src/umath/umathmodule_onefile.c:3: numpy/core/src/umath/umathmodule_onefile.c: At top level: numpy/core/src/private/lowlevel_strided_loops.h:82:1: warning: ?PyArray_GetStridedCopyFn? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:97:1: warning: ?PyArray_GetStridedCopySwapFn? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:112:1: warning: ?PyArray_GetStridedCopySwapPairFn? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:127:1: warning: ?PyArray_GetStridedZeroPadCopyFn? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:140:1: warning: ?PyArray_GetStridedNumericCastFn? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:151:1: warning: ?PyArray_GetDTypeCopySwapFn? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:203:1: warning: ?PyArray_GetDTypeTransferFunction? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:227:1: warning: ?PyArray_GetMaskedDTypeTransferFunction? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:247:1: warning: ?PyArray_CastRawArrays? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:296:1: warning: ?PyArray_TransferNDimToStrided? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:306:1: warning: ?PyArray_TransferStridedToNDim? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:316:1: warning: ?PyArray_TransferMaskedStridedToNDim? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:343:1: warning: ?PyArray_PrepareOneRawArrayIter? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:365:1: warning: ?PyArray_PrepareTwoRawArrayIter? declared ?static? but never defined [-Wunused-function] numpy/core/src/private/lowlevel_strided_loops.h:389:1: warning: ?PyArray_PrepareThreeRawArrayIter? declared ?static? 
but never defined [-Wunused-function] gcc -pthread -shared -Wl,-z,relro build/temp.linux-x86_64-2.7/numpy/core/src/umath/umathmodule_onefile.o -L/usr/lib64 -Lbuild/temp.linux-x86_64-2.7 -lnpymath -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/numpy/core/umath.so building 'numpy.core.scalarmath' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC From francesc at continuum.io Tue Feb 12 09:31:05 2013 From: francesc at continuum.io (Francesc Alted) Date: Tue, 12 Feb 2013 15:31:05 +0100 Subject: [Numpy-discussion] pip install numpy throwing a lot of output. In-Reply-To: References: <511A4A94.2090509@continuum.io> Message-ID: <511A5229.8080305@continuum.io> On 2/12/13 3:18 PM, Da?id wrote: > On 12 February 2013 14:58, Francesc Alted wrote: >> Yes, I think that's expected. Just to make sure, can you send some >> excerpts of the errors that you are getting? > Actually the errors are at the beginning of the process, so they are > out of the reach of my terminal right now. Seems like pip doesn't keep > a log in case of success. Well, I think these errors are part of the auto-discovering process of the functions supported by the libraries in the hosting OS (kind of `autoconf`for Python), so they can be considered 'normal'. > > The ones I can see are mostly warnings of unused variables and > functions, maybe this is the expected behaviour for a library? This > errors come from a complete reinstall instead of the original upgrade > (the cat closed the terminal, worst excuse ever!): [clip] These ones are not errors, but warnings. While it should be desirable to avoid any warning during the compilation process, not many libraries fulfill this (but patches for removing them are accepted). -- Francesc Alted From cournape at gmail.com Tue Feb 12 09:46:38 2013 From: cournape at gmail.com (David Cournapeau) Date: Tue, 12 Feb 2013 14:46:38 +0000 Subject: [Numpy-discussion] How to upload to pypi In-Reply-To: References: Message-ID: On Tue, Feb 12, 2013 at 5:49 AM, Ond?ej ?ert?k wrote: > Hi, > > I have uploaded the NumPy 1.7.0 source distribution to pypi: > > http://pypi.python.org/pypi/numpy/1.7.0 > > I did it by uploading the file PKG-INFO from numpy-1.7.0.tar.gz. It > said "Error processing form. Form Failure; reset form submission" > about 3x times, > but on the 4th try it went through. I reported the issue here: > > https://sourceforge.net/tracker/?func=detail&aid=3604194&group_id=66150&atid=513504 > > I then attached the numpy-1.7.0.tar.gz and numpy-1.7.0.zip source files. > > Now I am having trouble attaching the windows installers, just like > they are here: > > http://pypi.python.org/pypi/numpy/1.6.2 Those installers are ones built through bdist_wininst. You should *not* upload superpack installers there, as most python tools will not know what to do with it. For example, easy_install will not work with those, even though it does with simple installers from bdist_wininst. So ideally, one should build simple (== bdist_wininst-generated) installers using the lowest common denominator for architecture (i.e. pure blas/lapack, not atlas), and the superpack installer on sourceforge. Incidentally, that's why the super pack installer uses a different filename, to avoid confusion. 
David From ddvento at ucar.edu Tue Feb 12 12:02:33 2013 From: ddvento at ucar.edu (Davide Del Vento) Date: Tue, 12 Feb 2013 10:02:33 -0700 Subject: [Numpy-discussion] number of tests In-Reply-To: References: Message-ID: <511A75A9.9080708@ucar.edu> I should have added: $ lsb_release -a LSB Version: :core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch Distributor ID: RedHatEnterpriseServer Description: Red Hat Enterprise Linux Server release 6.2 (Santiago) Release: 6.2 Codename: Santiago $ python -c "import numpy; numpy.test('full')" Running unit tests for numpy NumPy version 1.6.2 NumPy is installed in /opt/numpy/1.6.2/intel/13.0.1/lib/python2.7/site-packages/numpy Python version 2.7.3 (default, Feb 9 2013, 16:14:16) [GCC 4.7.2] nose version 1.2.1 So nobody knows why the number of tests run are different among different runs of the same binary/library on different nodes? https://github.com/numpy/numpy/blob/master/doc/TESTS.rst.txt implies they shouldn't... Regards, Davide Del Vento, On 02/11/2013 08:54 PM, Davide Del Vento wrote: > I compiled numpy 1.6.2 (right before 1.7 came out) with the intel > compiler and MKL library. I'm trying to assess > whether or not everything has been build fine. Since my machine is > actually a cluster, I'm running the tests in different configurations > (login node and batch script). However, I'm confused by the number of > tests which ran. > On the login nodes (either interactively or without tty) I get: > > Ran 3587 tests in 22.211s > FAILED (KNOWNFAIL=5, SKIP=11, failures=2) > > Whereas in a remote batch node (with a script) I get: > > Ran 3229 tests in 15.642s > OK (KNOWNFAIL=5, SKIP=19) > > Where did the 358 "missing" tests go in the batch run? > The handful difference in SKIPped and FAILed (which I am > investigating) cannot be the reason. > > What is it happening? > > PS: a similar thing happened with scipy, which I'm asking on the > scipy mailing list. > > Thanks and Regards, > Davide From Nicolas.Rougier at inria.fr Tue Feb 12 12:53:17 2013 From: Nicolas.Rougier at inria.fr (Nicolas Rougier) Date: Tue, 12 Feb 2013 18:53:17 +0100 Subject: [Numpy-discussion] View on sliced array without copy Message-ID: Hi, I'm trying to get a view on a sliced array without copy but I'm not able to do it as illustrated below: dtype = np.dtype( [('color', 'f4', 4)] ) Z = np.zeros(100, dtype=dtype) print Z.view('f4').base is Z # True print Z[:50].base is Z # True print (Z.view('f4'))[:50].base is Z # False print Z[:50].view('f4').base is Z # False Did I do something wrong or is it expected behavior ? Nicolas From jaime.frio at gmail.com Tue Feb 12 13:25:30 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Tue, 12 Feb 2013 10:25:30 -0800 Subject: [Numpy-discussion] View on sliced array without copy In-Reply-To: References: Message-ID: On Tue, Feb 12, 2013 at 9:53 AM, Nicolas Rougier wrote: > Did I do something wrong or is it expected behavior ? > Try: print (Z.view('f4'))[:50].base.base is Z # True print Z[:50].view('f4').base.base is Z # True This weird behaviour is fixed in the just-released numpy 1.7. From the notes of the release: The ``.base`` attribute on ndarrays, which is used on views to ensure that the underlying array owning the memory is not deallocated prematurely, now collapses out references when you have a view-of-a-view. For example:: a = np.arange(10) b = a[1:] c = b[1:] In numpy 1.6, ``c.base`` is ``b``, and ``c.base.base`` is ``a``. 
In numpy 1.7, ``c.base`` is ``a``.

Jaime

--
(\__/)
( O.o)
( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial.

From ralf.gommers at gmail.com Tue Feb 12 15:39:58 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 12 Feb 2013 21:39:58 +0100
Subject: [Numpy-discussion] numpy.test('full') errors and failures
In-Reply-To: 
References: <1A1C946D-D1CA-4039-BECA-035845982664@steinitz.org> 
Message-ID: 

On Tue, Feb 12, 2013 at 1:28 PM, Daπid wrote:
> On 12 February 2013 10:55, Dominic Steinitz wrote:
> > Running unit tests for numpy
> > NumPy version 1.8.0.dev-4600b2f
>
> I can see this is not the stable version, try the 1.7 that has been
> just released.

Sending the full output of this test run would be useful, even if 1.7.0 has no failures.

Ralf

From mesanthu at gmail.com Tue Feb 12 17:06:15 2013
From: mesanthu at gmail.com (santhu kumar)
Date: Tue, 12 Feb 2013 16:06:15 -0600
Subject: [Numpy-discussion] Help with python in C code
Message-ID: 

Hello all,

I was able to successfully embed python code into C code. The basic skeleton is something like this, in C code:

PyObject *pName, *pModule, *pFunc, *pArgs, *pReturn;
PyArrayObject *cusForce;

Py_Initialize();
import_array();   // required while using numpy arrays in C
// Call some python code
// Py_DECREF all the used values
Py_Finalize();

Py_Finalize() caused segmentation faults when numpy was one of the imported modules:

[node1:05762] Failing at address: 0x1
[node1:05762] [ 0] /lib64/libc.so.6 [0x3afa630280]
[node1:05762] [ 1] /usr/lib64/python2.4/site-packages/numpy/core/multiarray.so [0x2b4e950aaf37]
[node1:05762] [ 2] /usr/lib64/python2.4/site-packages/numpy/core/multiarray.so [0x2b4e950dd1d0]
[node1:05762] [ 3] /usr/lib64/python2.4/site-packages/numpy/core/multiarray.so [0x2b4e950e22fa]
[node1:05762] [ 4] /usr/lib64/libpython2.4.so.1.0(PyObject_IsTrue+0x37) [0x2b4e8a683c77]

This could be avoided by not calling Py_Finalize(). But without calling Py_Finalize(), I am having huge memory leaks as the process keeps on increasing its memory usage, as I see in top.

Is there any clean way of calling Py_Finalize() when using numpy?

Thanks
santhosh

From Nicolas.Rougier at inria.fr Wed Feb 13 00:30:37 2013
From: Nicolas.Rougier at inria.fr (Nicolas Rougier)
Date: Wed, 13 Feb 2013 06:30:37 +0100
Subject: [Numpy-discussion] View on sliced array without copy
In-Reply-To: 
References: 
Message-ID: <10CA1944-1560-4E1F-A925-C248598144C9@inria.fr>

I did not know that. Thanks for the clear explanation.

Nicolas

On Feb 12, 2013, at 19:25 , Jaime Fernández del Río wrote:

> On Tue, Feb 12, 2013 at 9:53 AM, Nicolas Rougier wrote:
> Did I do something wrong or is it expected behavior ?
>
> Try:
>
> print (Z.view('f4'))[:50].base.base is Z # True
> print Z[:50].view('f4').base.base is Z # True
>
> This weird behaviour is fixed in the just-released numpy 1.7. From the notes of the release:
>
> The ``.base`` attribute on ndarrays, which is used on views to ensure that the
> underlying array owning the memory is not deallocated prematurely, now
> collapses out references when you have a view-of-a-view. For example::
>
> a = np.arange(10)
> b = a[1:]
> c = b[1:]
>
> In numpy 1.6, ``c.base`` is ``b``, and ``c.base.base`` is ``a``.
> > Jaime > > -- > (\__/) > ( O.o) > ( > <) Este es Conejo. Copia a Conejo en tu firma y ay?dale en sus planes de dominaci?n mundial. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From ondrej.certik at gmail.com Wed Feb 13 00:32:49 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Tue, 12 Feb 2013 21:32:49 -0800 Subject: [Numpy-discussion] How to upload to pypi In-Reply-To: References: Message-ID: Hi Robert, On Tue, Feb 12, 2013 at 2:40 AM, Robert Kern wrote: > On Tue, Feb 12, 2013 at 5:49 AM, Ond?ej ?ert?k wrote: >> Hi, >> >> I have uploaded the NumPy 1.7.0 source distribution to pypi: >> >> http://pypi.python.org/pypi/numpy/1.7.0 >> >> I did it by uploading the file PKG-INFO from numpy-1.7.0.tar.gz. It >> said "Error processing form. Form Failure; reset form submission" >> about 3x times, >> but on the 4th try it went through. I reported the issue here: >> >> https://sourceforge.net/tracker/?func=detail&aid=3604194&group_id=66150&atid=513504 >> >> I then attached the numpy-1.7.0.tar.gz and numpy-1.7.0.zip source files. >> >> Now I am having trouble attaching the windows installers, just like >> they are here: >> >> http://pypi.python.org/pypi/numpy/1.6.2 >> >> but whenever I upload the file >> numpy-1.7.0-win32-superpack-python2.5.exe (and set it as "MS Windows >> installer"), >> it uploads and then I get a blank page with the text: >> >> """ >> Error processing form >> >> invalid distribution file >> """ > > PyPI does some validation on the files that are uploaded. .exe files > must be created by bdist_wininst. > > https://bitbucket.org/loewis/pypi/src/fc588bcd668aba643e2e7f9bd6901a7a4296dddb/verify_filetype.py?at=default#cl-15 > > I am guessing that the superpack installer is manually built through > another mechanism. Ha, I see --- thanks for pointing me to the source code. All is clear now. > >> That's it.... Not very useful. Do you know if the sources of pypi are >> somewhere online? (I didn't find them, only a similar package >> https://github.com/schmir/pypiserver, but that doesn't seem to be it.) > > http://wiki.python.org/moin/CheeseShopDev > > You can get help with PyPI on Catalog-SIG: > > http://mail.python.org/mailman/listinfo/catalog-sig Thanks again. Ondrej From ondrej.certik at gmail.com Wed Feb 13 00:35:33 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Tue, 12 Feb 2013 21:35:33 -0800 Subject: [Numpy-discussion] How to upload to pypi In-Reply-To: References: Message-ID: David, On Tue, Feb 12, 2013 at 6:46 AM, David Cournapeau wrote: > On Tue, Feb 12, 2013 at 5:49 AM, Ond?ej ?ert?k wrote: >> Hi, >> >> I have uploaded the NumPy 1.7.0 source distribution to pypi: >> >> http://pypi.python.org/pypi/numpy/1.7.0 >> >> I did it by uploading the file PKG-INFO from numpy-1.7.0.tar.gz. It >> said "Error processing form. Form Failure; reset form submission" >> about 3x times, >> but on the 4th try it went through. I reported the issue here: >> >> https://sourceforge.net/tracker/?func=detail&aid=3604194&group_id=66150&atid=513504 >> >> I then attached the numpy-1.7.0.tar.gz and numpy-1.7.0.zip source files. >> >> Now I am having trouble attaching the windows installers, just like >> they are here: >> >> http://pypi.python.org/pypi/numpy/1.6.2 > > Those installers are ones built through bdist_wininst. 
You should > *not* upload superpack installers there, as most python tools will not > know what to do with it. For example, easy_install will not work with > those, even though it does with simple installers from bdist_wininst. > > So ideally, one should build simple (== bdist_wininst-generated) > installers using the lowest common denominator for architecture (i.e. > pure blas/lapack, not atlas), and the superpack installer on > sourceforge. Incidentally, that's why the super pack installer uses a > different filename, to avoid confusion. I see. I looked into my scripts and it turns out that actually I do build the bdist_wininst versions as well, I just didn't know what they are for, so I ignored them. Now I can see that those will get uploaded to pypi, so I did that now and it works! http://pypi.python.org/pypi/numpy/1.7.0 I learned something new today, thanks for the explanation. So pypi should be done. Ondrej From pradeep.kumar.jha at gmail.com Wed Feb 13 02:51:04 2013 From: pradeep.kumar.jha at gmail.com (Pradeep Jha) Date: Wed, 13 Feb 2013 07:51:04 +0000 (UTC) Subject: [Numpy-discussion] example reading binary Fortran file References: <6df541a5c8d6ecf26b6e38f404958401.squirrel@webmail.uio.no> <200902031015.17490.faltet@pytables.org> <4A1EBB85.5080706@wartburg.edu> <4A20A036.3030804@wartburg.edu> Message-ID: Neil Martinsen-Burrell wartburg.edu> writes: > > On 2009-05-29 10:12 , David Froger wrote: > > I think the FortranFile class is not intended to read arrays written > > with the syntax 'write(11) array1, array2, array3' (correct me if I'm > > wrong). This is the use in the laboratory where I'm currently > > completing a phd. > > You're half wrong. FortranFile can read arrays written as above, but it > sees them as a single real array. So, with the attached Fortran program:: > > In [1]: from fortranfile import FortranFile > > In [2]: f = FortranFile('uxuyp.bin', endian='<') # Original bug was > incorrect byte order > > In [3]: u = f.readReals() > > In [4]: u.shape > Out[4]: (20,) > > In [5]: u > Out[5]: > array([ 101., 111., 102., 112., 103., 113., 104., 114., 105., > 115., 201., 211., 202., 212., 203., 213., 204., 214., > 205., 215.], dtype=float32) > > In [6]: ux = u[:10].reshape(2,5); uy = u[10:].reshape(2,5) > > In [7]: p = f.readReals().reshape(2,5) > > In [8]: ux, uy, p > Out[8]: > (array([[ 101., 111., 102., 112., 103.], > [ 113., 104., 114., 105., 115.]], dtype=float32), > array([[ 201., 211., 202., 212., 203.], > [ 213., 204., 214., 205., 215.]], dtype=float32), > array([[ 301., 311., 302., 312., 303.], > [ 313., 304., 314., 305., 315.]], dtype=float32)) > > What doesn't currently work is to have arrays of mixed types in the same > write statement, e.g. > > integer :: index(10) > real :: x(10,10) > ... > write(13) x, index > > To address the original problem, I've changed the code to default to the > native byte-ordering (f.ENDIAN='@') and to be more informative about > what happened in the error. In the latest version (attached): > > In [1]: from fortranfile import FortranFile > > In [2]: f = FortranFile('uxuyp.bin', endian='>') # incorrect endian-ness > > In [3]: u = f.readReals() > > IOError: Could not read enough data. Wanted 1342177280 bytes, got 132 > Hello, I am trying to read a fortran unformatted binary file with FortranFile as follows but I get an error. 
----------------------------------------- >>> from FortranFile import FortranFile >>> f = FortranFile("vor_465.float",endian="<") >>> u = f.readReals() Traceback (most recent call last): File "", line 1, in File "FortranFile.py", line 193, in readReals data_str = self.readRecord() File "FortranFile.py", line 140, in readRecord data_str = self._read_exactly(l) File "FortranFile.py", line 124, in _read_exactly ' Wanted %d bytes, got %d.' % (num_bytes, l)) IOError: Could not read enough data. Wanted 1124054321 bytes, got 536870908. ------------------------------------------ My file, "vor_465.float" has the following size: ------------------------------------------ [pradeep at laptop ~]$ls -l vor_465.float -rwxr-xr-x 1 pradeep staff 536870912 Feb 13 *** ----------------------------------------- I am sure this file has data in the right format as when I read it using fortran using the following command: -------------------------------------------------- open (in_file_id,FILE="vor_465.float",form='unformatted', access='direct',recl=4*512*512*512) read (in_file_id,rec=1) buffer -------------------------------------------------- it works completely fine. This data contains single precision float values for a scalar over a cube with 512 points in each direction. Any ideas? Pradeep From pradeep.kumar.jha at gmail.com Wed Feb 13 03:49:36 2013 From: pradeep.kumar.jha at gmail.com (Pradeep Jha) Date: Wed, 13 Feb 2013 08:49:36 +0000 (UTC) Subject: [Numpy-discussion] example reading binary Fortran file References: <6df541a5c8d6ecf26b6e38f404958401.squirrel@webmail.uio.no> <200902031015.17490.faltet@pytables.org> <4A1EBB85.5080706@wartburg.edu> <4A20A036.3030804@wartburg.edu> Message-ID: Pradeep Jha gmail.com> writes: > > > > Hello, > > I am trying to read a fortran unformatted binary file with > FortranFile as follows but I get an error. > ----------------------------------------- > >>> from FortranFile import FortranFile > >>> f = FortranFile("vor_465.float",endian="<") > >>> u = f.readReals() > Traceback (most recent call last): > File "", line 1, in > File "FortranFile.py", line 193, in readReals > data_str = self.readRecord() > File "FortranFile.py", line 140, in readRecord > data_str = self._read_exactly(l) > File "FortranFile.py", line 124, in _read_exactly > ' Wanted %d bytes, got %d.' % (num_bytes, l)) > IOError: Could not read enough data. > Wanted 1124054321 bytes, got 536870908. > ------------------------------------------ > > My file, "vor_465.float" has the following size: > ------------------------------------------ > [pradeep laptop ~]$ls -l vor_465.float > -rwxr-xr-x 1 pradeep staff 536870912 Feb 13 *** > ----------------------------------------- > > I am sure this file has data in the right format as > when I read it using fortran using the following command: > > -------------------------------------------------- > open (in_file_id,FILE="vor_465.float",form='unformatted', > access='direct',recl=4*512*512*512) > read (in_file_id,rec=1) buffer > -------------------------------------------------- > > it works completely fine. This data contains single precision float > values for a scalar over a cube with 512 points in each direction. > > Any ideas? > Pradeep > Also, when I am trying to write an unformatted binary file with FortranFile I am getting the following error: --------------------------------------------------- [pradeep at laptop ~]$python Python 2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43) [GCC 4.2.1 (Apple Inc. 
build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from FortranFile import FortranFile >>> f = FortranFile("filename", endian="<") >>> array = [1,2,3,4,5,6,6,7] >>> f.writeReals(array) Traceback (most recent call last): File "", line 1, in File "FortranFile.py", line 214, in writeReals self._write_check(length_bytes) File "FortranFile.py", line 135, in _write_check number_of_bytes)) IOError: File not open for writing ------------------------------------------------ I dont know how to make the file available for writing. I checked the file permissions and they are completely fine. Please help, Pradeep From charlesr.harris at gmail.com Wed Feb 13 18:03:04 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 13 Feb 2013 16:03:04 -0700 Subject: [Numpy-discussion] Google Summer of Code Message-ID: Just thought I'd mention that the scheduleis up. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From orion at cora.nwra.com Wed Feb 13 19:06:09 2013 From: orion at cora.nwra.com (Orion Poplawski) Date: Wed, 13 Feb 2013 17:06:09 -0700 Subject: [Numpy-discussion] Fwd: Package: scipy-0.11.0-0.1.rc2.fc18 Tag: f18-updates-candidate Status: failed Built by: orion In-Reply-To: References: <20120920210154.D759123187@bastion01.phx2.fedoraproject.org> <505B9109.60607@cora.nwra.com> Message-ID: <511C2A71.1090502@cora.nwra.com> On 09/21/2012 11:41 AM, Ond?ej ?ert?k wrote: > Hi Orion, > > On Thu, Sep 20, 2012 at 2:56 PM, Orion Poplawski wrote: >> This is a plea for some help. We've been having trouble getting scipy to >> pass all of the tests in the Fedora 18 build with python 3.3 (although it >> seems to build okay in Fedora 19). Below are the logs of the build. There >> appears to be some kind of memory corruption that manifests itself a little >> differently on 32-bit vs. 64-bit. I really have no idea myself how to >> pursue debugging this, though I'm happy to provide any more needed >> information. > > Thanks for testing the latest beta2 release. > >> Task 4509077 on buildvm-35.phx2.fedoraproject.org >> Task Type: buildArch (scipy-0.11.0-0.1.rc2.fc18.src.rpm, i686) >> logs: >> http://koji.fedoraproject.org/koji/getfile?taskID=4509077&name=build.log > > This link has the following stacktrace: > > /lib/libpython3.3m.so.1.0(PyMem_Free+0x1c)[0xbf044c] > /usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0x4d52b)[0x42252b] > /usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0xcb7c5)[0x4a07c5] > /usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0xcbc5e)[0x4a0c5e] > > Which indeed looks like in NumPy. Would you be able to obtain full stacktrace? > > There has certainly been segfaults in Python 3.3 with NumPy, but we've > fixed all that we could reproduce. That doesn't mean there couldn't be > more. If you could nail it down a little bit more, that would be > great. I'll help once I can reproduce it somehow. > > Ondrej > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > Trying to get back to this as we still see it with numpy 1.7.0 and scipy 0.11. I'm seeing a segfault in malloc_consolidate(), which seems like would only occur if there were problems earlier, so I'm not sure a stack trace is all that useful. 
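For reference, the gdb session below was started roughly like this (the exact test_decomp.py path is the one shown in the output; the commands are from memory, so treat them as a sketch):

    gdb --args /usr/bin/python3 <buildroot>/scipy/linalg/tests/test_decomp.py
    (gdb) run

with the backtrace taken with bt once it faulted.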
Starting program: /usr/bin/python3 /export/home/orion/redhat/BUILDROOT/scipy-0.11.0-3.fc19.x86_64/usr/lib64/python3.3/site-packages/scipy/linalg/tests/test_decomp.py [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". .............. Program received signal SIGSEGV, Segmentation fault. 0x0000003d8d67bdad in malloc_consolidate (av=av at entry=0x3d8d9b1740 ) at malloc.c:4151 4151 unlink(av, nextchunk, bck, fwd); Here's some: #0 0x0000003d8d67bdad in malloc_consolidate (av=av at entry=0x3d8d9b1740 ) at malloc.c:4151 #1 0x0000003d8d67d09e in _int_malloc (av=0x3d8d9b1740 , bytes=) at malloc.c:3422 #2 0x0000003d8d67f443 in __GI___libc_malloc (bytes=2632) at malloc.c:2862 #3 0x00007ffff121816c in PyArray_IterNew (obj=) at numpy/core/src/multiarray/iterators.c:385 #4 0x00007ffff1218201 in PyArray_IterAllButAxis (obj=obj at entry= , inaxis=inaxis at entry=0x7fffffff873c) at numpy/core/src/multiarray/iterators.c:488 #5 0x00007ffff1257970 in _new_argsort (which=NPY_QUICKSORT, axis=0, op=0xe02fd0) at numpy/core/src/multiarray/item_selection.c:815 #6 PyArray_ArgSort (op=op at entry=0xe02fd0, axis=0, which=NPY_QUICKSORT) at numpy/core/src/multiarray/item_selection.c:1104 #7 0x00007ffff125873a in array_argsort (self=0xe02fd0, args=, kwds=) at numpy/core/src/multiarray/methods.c:1213 #8 0x0000003b74d0cc8e in call_function (oparg=, pp_stack=0x7fffffff8998) at /usr/src/debug/Python-3.3.0/Python/ceval.c:4091 #9 PyEval_EvalFrameEx (f=f at entry= Frame 0xd3ecb0, for file /usr/lib64/python3.3/site-packages/numpy/core/fromnumeric.py, line 681, in argsort (a=, axis=-1, kind='quicksort', order=None, argsort=), throwflag=throwflag at entry=0) at /usr/src/debug/Python-3.3.0/Python/ceval.c:2703 #10 0x0000003b74d0de63 in PyEval_EvalCodeEx (_co=_co at entry=, globals=, locals=locals at entry=0x0, args=, argcount=argcount at entry=1, kws=0xe23ab8, kwcount=kwcount at entry=0, defs=0x7ffff1a965b8, defcount=3, kwdefs=0x0, closure=0x0) at /usr/src/debug/Python-3.3.0/Python/ceval.c:3462 #11 0x0000003b74d0c707 in fast_function (nk=0, na=1, n=, pp_stack= 0x7fffffff8c88, func=) at /usr/src/debug/Python-3.3.0/Python/ceval.c:4189 #12 call_function (oparg=, pp_stack=0x7fffffff8c88) at /usr/src/debug/Python-3.3.0/Python/ceval.c:4112 (gdb) up 3 #3 0x00007ffff121816c in PyArray_IterNew (obj=) at numpy/core/src/multiarray/iterators.c:385 385 it = (PyArrayIterObject *)PyArray_malloc(sizeof(PyArrayIterObject)); (gdb) print *obj $4 = {ob_refcnt = 5, ob_type = 0x7ffff14c6900 } (gdb) list 380 PyErr_BadInternalCall(); 381 return NULL; 382 } 383 ao = (PyArrayObject *)obj; 384 385 it = (PyArrayIterObject *)PyArray_malloc(sizeof(PyArrayIterObject)); 386 PyObject_Init((PyObject *)it, &PyArrayIter_Type); 387 /* it = PyObject_New(PyArrayIterObject, &PyArrayIter_Type);*/ 388 if (it == NULL) { 389 return NULL; valgrind reports problems like: ==10886== Invalid write of size 8 ==10886== at 0x3D9C5CB576: dlacpy_ (in /usr/lib64/atlas/liblapack.so.3.0) ==10886== by 0x3D9C6481F7: dsbevx_ (in /usr/lib64/atlas/liblapack.so.3.0) ==10886== by 0x115D8212: ??? 
(in /export/home/orion/redhat/BUILDROOT/scipy-0.11.0-3.fc19.x86_64/usr/lib64/python3.3/site-packages/scipy/linalg/flapack.cpython-33m.so) ==10886== by 0x3B74C5EF8E: PyObject_Call (abstract.c:2082) ==10886== by 0x3B74D07DDC: PyEval_EvalFrameEx (ceval.c:4311) ==10886== by 0x3B74D0C9B4: PyEval_EvalFrameEx (ceval.c:4179) ==10886== by 0x3B74D0DE62: PyEval_EvalCodeEx (ceval.c:3462) ==10886== by 0x3B74D0C706: PyEval_EvalFrameEx (ceval.c:4189) ==10886== by 0x3B74D0DE62: PyEval_EvalCodeEx (ceval.c:3462) ==10886== by 0x3B74C8547F: function_call (funcobject.c:633) ==10886== by 0x3B74C5EF8E: PyObject_Call (abstract.c:2082) ==10886== by 0x3B74D05F7F: PyEval_EvalFrameEx (ceval.c:4406) ==10886== Address 0xbbc8cd0 is 0 bytes after a block of size 80 alloc'd ==10886== at 0x4A0883C: malloc (vg_replace_malloc.c:270) ==10886== by 0xE8A103A: PyDataMem_NEW (multiarraymodule.c:3492) ==10886== by 0xE8C3F74: PyArray_NewFromDescr (ctors.c:970) ==10886== by 0x115E032B: array_from_pyobj (in /export/home/orion/redhat/BUILDROOT/scipy-0.11.0-3.fc19.x86_64/usr/lib64/python3.3/site-packages/scipy/linalg/flapack.cpython-33m.so) ==10886== by 0x115D7F5E: ??? (in /export/home/orion/redhat/BUILDROOT/scipy-0.11.0-3.fc19.x86_64/usr/lib64/python3.3/site-packages/scipy/linalg/flapack.cpython-33m.so) ==10886== by 0x3B74C5EF8E: PyObject_Call (abstract.c:2082) ==10886== by 0x3B74D07DDC: PyEval_EvalFrameEx (ceval.c:4311) ==10886== by 0x3B74D0C9B4: PyEval_EvalFrameEx (ceval.c:4179) ==10886== by 0x3B74D0DE62: PyEval_EvalCodeEx (ceval.c:3462) ==10886== by 0x3B74D0C706: PyEval_EvalFrameEx (ceval.c:4189) ==10886== by 0x3B74D0DE62: PyEval_EvalCodeEx (ceval.c:3462) ==10886== by 0x3B74C8547F: function_call (funcobject.c:633) ==10886== ==10886== Invalid read of size 8 ==10886== at 0x3D9C61DAD5: dlasr_ (in /usr/lib64/atlas/liblapack.so.3.0) ==10886== by 0x3D9C663092: dsteqr_ (in /usr/lib64/atlas/liblapack.so.3.0) ==10886== by 0x3D9C648290: dsbevx_ (in /usr/lib64/atlas/liblapack.so.3.0) ==10886== by 0x115D8212: ??? (in /export/home/orion/redhat/BUILDROOT/scipy-0.11.0-3.fc19.x86_64/usr/lib64/python3.3/site-packages/scipy/linalg/flapack.cpython-33m.so) ==10886== by 0x3B74C5EF8E: PyObject_Call (abstract.c:2082) ==10886== by 0x3B74D07DDC: PyEval_EvalFrameEx (ceval.c:4311) ==10886== by 0x3B74D0C9B4: PyEval_EvalFrameEx (ceval.c:4179) ==10886== by 0x3B74D0DE62: PyEval_EvalCodeEx (ceval.c:3462) ==10886== by 0x3B74D0C706: PyEval_EvalFrameEx (ceval.c:4189) ==10886== by 0x3B74D0DE62: PyEval_EvalCodeEx (ceval.c:3462) ==10886== by 0x3B74C8547F: function_call (funcobject.c:633) ==10886== by 0x3B74C5EF8E: PyObject_Call (abstract.c:2082) ==10886== Address 0xbbc8dc0 is not stack'd, malloc'd or (recently) free'd So perhaps an atlas issue, or the way scipy/numpy calls it. I'll try to look into it more. Suggestions welcome. 
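For anyone who wants a smaller reproducer than all of test_decomp.py: the dsbevx_ routine in the valgrind report should be reachable from Python through scipy.linalg.eig_banded (assuming that is the wrapper involved here -- I have not confirmed it), e.g. something like:

    import numpy as np
    from scipy.linalg import eig_banded

    # 4x4 symmetric banded matrix in upper form:
    # row 0 = superdiagonal (first element unused), row 1 = diagonal
    a_band = np.array([[0., 1., 1., 1.],
                       [4., 5., 6., 7.]])
    # asking for a subset of eigenvalues routes through ?sbevx
    w, v = eig_banded(a_band, select='i', select_range=(0, 2))
    print(w)

Not guaranteed to hit the same code path as the failing test, but it may be a quicker thing to run under valgrind.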
-- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA, Boulder Office FAX: 303-415-9702 3380 Mitchell Lane orion at nwra.com Boulder, CO 80301 http://www.nwra.com From ondrej.certik at gmail.com Wed Feb 13 19:18:58 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Wed, 13 Feb 2013 16:18:58 -0800 Subject: [Numpy-discussion] Fwd: Package: scipy-0.11.0-0.1.rc2.fc18 Tag: f18-updates-candidate Status: failed Built by: orion In-Reply-To: <511C2A71.1090502@cora.nwra.com> References: <20120920210154.D759123187@bastion01.phx2.fedoraproject.org> <505B9109.60607@cora.nwra.com> <511C2A71.1090502@cora.nwra.com> Message-ID: Orion, On Wed, Feb 13, 2013 at 4:06 PM, Orion Poplawski wrote: > On 09/21/2012 11:41 AM, Ond?ej ?ert?k wrote: >> >> Hi Orion, >> >> On Thu, Sep 20, 2012 at 2:56 PM, Orion Poplawski >> wrote: >>> >>> This is a plea for some help. We've been having trouble getting scipy to >>> pass all of the tests in the Fedora 18 build with python 3.3 (although it >>> seems to build okay in Fedora 19). Below are the logs of the build. >>> There >>> appears to be some kind of memory corruption that manifests itself a >>> little >>> differently on 32-bit vs. 64-bit. I really have no idea myself how to >>> pursue debugging this, though I'm happy to provide any more needed >>> information. >> >> >> Thanks for testing the latest beta2 release. >> >>> Task 4509077 on buildvm-35.phx2.fedoraproject.org >>> Task Type: buildArch (scipy-0.11.0-0.1.rc2.fc18.src.rpm, i686) >>> logs: >>> >>> http://koji.fedoraproject.org/koji/getfile?taskID=4509077&name=build.log >> >> >> This link has the following stacktrace: >> >> /lib/libpython3.3m.so.1.0(PyMem_Free+0x1c)[0xbf044c] >> >> /usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0x4d52b)[0x42252b] >> >> /usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0xcb7c5)[0x4a07c5] >> >> /usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0xcbc5e)[0x4a0c5e] >> >> Which indeed looks like in NumPy. Would you be able to obtain full >> stacktrace? >> >> There has certainly been segfaults in Python 3.3 with NumPy, but we've >> fixed all that we could reproduce. That doesn't mean there couldn't be >> more. If you could nail it down a little bit more, that would be >> great. I'll help once I can reproduce it somehow. >> >> Ondrej >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > Trying to get back to this as we still see it with numpy 1.7.0 and scipy > 0.11. > > I'm seeing a segfault in malloc_consolidate(), which seems like would only > occur if there were problems earlier, so I'm not sure a stack trace is all > that useful. > > Starting program: /usr/bin/python3 > /export/home/orion/redhat/BUILDROOT/scipy-0.11.0-3.fc19.x86_64/usr/lib64/python3.3/site-packages/scipy/linalg/tests/test_decomp.py > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib64/libthread_db.so.1". > .............. > Program received signal SIGSEGV, Segmentation fault. 
> 0x0000003d8d67bdad in malloc_consolidate (av=av at entry=0x3d8d9b1740 > ) > at malloc.c:4151 > 4151 unlink(av, nextchunk, bck, fwd); > > Here's some: > #0 0x0000003d8d67bdad in malloc_consolidate (av=av at entry=0x3d8d9b1740 > ) > at malloc.c:4151 > #1 0x0000003d8d67d09e in _int_malloc (av=0x3d8d9b1740 , > bytes=) > at malloc.c:3422 > #2 0x0000003d8d67f443 in __GI___libc_malloc (bytes=2632) at malloc.c:2862 > #3 0x00007ffff121816c in PyArray_IterNew (obj= 0xe02fd0>) > at numpy/core/src/multiarray/iterators.c:385 > #4 0x00007ffff1218201 in PyArray_IterAllButAxis (obj=obj at entry= > , inaxis=inaxis at entry=0x7fffffff873c) > at numpy/core/src/multiarray/iterators.c:488 > #5 0x00007ffff1257970 in _new_argsort (which=NPY_QUICKSORT, axis=0, > op=0xe02fd0) > at numpy/core/src/multiarray/item_selection.c:815 > #6 PyArray_ArgSort (op=op at entry=0xe02fd0, axis=0, which=NPY_QUICKSORT) > at numpy/core/src/multiarray/item_selection.c:1104 > #7 0x00007ffff125873a in array_argsort (self=0xe02fd0, args= out>, > kwds=) at numpy/core/src/multiarray/methods.c:1213 > #8 0x0000003b74d0cc8e in call_function (oparg=, > pp_stack=0x7fffffff8998) > at /usr/src/debug/Python-3.3.0/Python/ceval.c:4091 > #9 PyEval_EvalFrameEx (f=f at entry= > Frame 0xd3ecb0, for file > /usr/lib64/python3.3/site-packages/numpy/core/fromnumeric.py, line 681, in > argsort (a=, axis=-1, kind='quicksort', > order=None, argsort= remote 0xe02fd0>), > throwflag=throwflag at entry=0) at > /usr/src/debug/Python-3.3.0/Python/ceval.c:2703 > #10 0x0000003b74d0de63 in PyEval_EvalCodeEx (_co=_co at entry= 0x7ffff1a8ac00>, > globals=, locals=locals at entry=0x0, args=, > argcount=argcount at entry=1, kws=0xe23ab8, kwcount=kwcount at entry=0, > defs=0x7ffff1a965b8, > defcount=3, kwdefs=0x0, closure=0x0) at > /usr/src/debug/Python-3.3.0/Python/ceval.c:3462 > #11 0x0000003b74d0c707 in fast_function (nk=0, na=1, n=, > pp_stack= > 0x7fffffff8c88, func=) > at /usr/src/debug/Python-3.3.0/Python/ceval.c:4189 > #12 call_function (oparg=, pp_stack=0x7fffffff8c88) > at /usr/src/debug/Python-3.3.0/Python/ceval.c:4112 > > > (gdb) up 3 > #3 0x00007ffff121816c in PyArray_IterNew (obj= 0xe02fd0>) > at numpy/core/src/multiarray/iterators.c:385 > 385 it = (PyArrayIterObject > *)PyArray_malloc(sizeof(PyArrayIterObject)); > (gdb) print *obj > $4 = {ob_refcnt = 5, ob_type = 0x7ffff14c6900 } > (gdb) list > 380 PyErr_BadInternalCall(); > 381 return NULL; > 382 } > 383 ao = (PyArrayObject *)obj; > 384 > 385 it = (PyArrayIterObject > *)PyArray_malloc(sizeof(PyArrayIterObject)); > 386 PyObject_Init((PyObject *)it, &PyArrayIter_Type); > 387 /* it = PyObject_New(PyArrayIterObject, &PyArrayIter_Type);*/ > 388 if (it == NULL) { > 389 return NULL; ^^^ This is very useful, thanks! How can it segfault on the line 385 though? That would suggest that something has gone terribly wrong before and this call to malloc simply is the final nail to the coffin. Otherwise I thought that malloc can't really segfault like this, can it? 
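If it is heap corruption from an earlier buffer overrun, glibc can sometimes be made to abort closer to the real culprit, e.g. (just a sketch):

    MALLOC_CHECK_=3 python3 /path/to/scipy/linalg/tests/test_decomp.py

That, or the valgrind run you already did -- the "Invalid write of size 8" inside dlacpy_ looks like a more interesting lead than this backtrace.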
Ondrej From charlesr.harris at gmail.com Wed Feb 13 19:33:56 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 13 Feb 2013 17:33:56 -0700 Subject: [Numpy-discussion] Fwd: Package: scipy-0.11.0-0.1.rc2.fc18 Tag: f18-updates-candidate Status: failed Built by: orion In-Reply-To: References: <20120920210154.D759123187@bastion01.phx2.fedoraproject.org> <505B9109.60607@cora.nwra.com> <511C2A71.1090502@cora.nwra.com> Message-ID: On Wed, Feb 13, 2013 at 5:18 PM, Ond?ej ?ert?k wrote: > Orion, > > On Wed, Feb 13, 2013 at 4:06 PM, Orion Poplawski > wrote: > dsbevx_ Looks suspicious. There was a problem with 3.3and scipy in that function which should be fixed in the next release. See also the discussion on gmane . Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From orion at cora.nwra.com Wed Feb 13 21:35:11 2013 From: orion at cora.nwra.com (Orion Poplawski) Date: Wed, 13 Feb 2013 19:35:11 -0700 Subject: [Numpy-discussion] Fwd: Package: scipy-0.11.0-0.1.rc2.fc18 Tag: f18-updates-candidate Status: failed Built by: orion In-Reply-To: References: <20120920210154.D759123187@bastion01.phx2.fedoraproject.org> <505B9109.60607@cora.nwra.com> <511C2A71.1090502@cora.nwra.com> Message-ID: <511C4D5F.2050506@cora.nwra.com> On 02/13/2013 05:18 PM, Ond?ej ?ert?k wrote: >> >> (gdb) up 3 >> #3 0x00007ffff121816c in PyArray_IterNew (obj=> 0xe02fd0>) >> at numpy/core/src/multiarray/iterators.c:385 >> 385 it = (PyArrayIterObject >> *)PyArray_malloc(sizeof(PyArrayIterObject)); >> (gdb) print *obj >> $4 = {ob_refcnt = 5, ob_type = 0x7ffff14c6900 } >> (gdb) list >> 380 PyErr_BadInternalCall(); >> 381 return NULL; >> 382 } >> 383 ao = (PyArrayObject *)obj; >> 384 >> 385 it = (PyArrayIterObject >> *)PyArray_malloc(sizeof(PyArrayIterObject)); >> 386 PyObject_Init((PyObject *)it, &PyArrayIter_Type); >> 387 /* it = PyObject_New(PyArrayIterObject, &PyArrayIter_Type);*/ >> 388 if (it == NULL) { >> 389 return NULL; > > > ^^^ This is very useful, thanks! How can it segfault on the line 385 though? > > That would suggest that something has gone terribly wrong before and > this call to malloc simply is the final nail to the coffin. > Otherwise I thought that malloc can't really segfault like this, can it? > > Ondrej > Yeah, that's why I didn't think the backtrace would be very useful - it's gone off the deep-end long before. The valgrind reports seem more useful. Need to get the scipy debug stuff installed properly. I'm guessing the interfacing with atlas is not correct. -- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA/CoRA Division FAX: 303-415-9702 3380 Mitchell Lane orion at cora.nwra.com Boulder, CO 80301 http://www.cora.nwra.com From orion at cora.nwra.com Wed Feb 13 23:29:10 2013 From: orion at cora.nwra.com (Orion Poplawski) Date: Wed, 13 Feb 2013 21:29:10 -0700 Subject: [Numpy-discussion] Fwd: Package: scipy-0.11.0-0.1.rc2.fc18 Tag: f18-updates-candidate Status: failed Built by: orion In-Reply-To: References: <20120920210154.D759123187@bastion01.phx2.fedoraproject.org> <505B9109.60607@cora.nwra.com> <511C2A71.1090502@cora.nwra.com> Message-ID: <511C6816.8040109@cora.nwra.com> On 02/13/2013 05:33 PM, Charles R Harris wrote: > > > On Wed, Feb 13, 2013 at 5:18 PM, Ond?ej ?ert?k > wrote: > > Orion, > > On Wed, Feb 13, 2013 at 4:06 PM, Orion Poplawski > > wrote: > > > > > dsbevx_ > > Looks suspicious. There was a problem with 3.3 > and scipy in that function > which should be fixed in the next release. See also the discussion on > gmane > . > > Chuck Thank you! 
That looks like exactly the issue! -- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA/CoRA Division FAX: 303-415-9702 3380 Mitchell Lane orion at cora.nwra.com Boulder, CO 80301 http://www.cora.nwra.com From cournape at gmail.com Thu Feb 14 06:56:46 2013 From: cournape at gmail.com (David Cournapeau) Date: Thu, 14 Feb 2013 11:56:46 +0000 Subject: [Numpy-discussion] How to upload to pypi In-Reply-To: References: Message-ID: On Wed, Feb 13, 2013 at 5:35 AM, Ond?ej ?ert?k wrote: > David, > > On Tue, Feb 12, 2013 at 6:46 AM, David Cournapeau wrote: >> On Tue, Feb 12, 2013 at 5:49 AM, Ond?ej ?ert?k wrote: >>> Hi, >>> >>> I have uploaded the NumPy 1.7.0 source distribution to pypi: >>> >>> http://pypi.python.org/pypi/numpy/1.7.0 >>> >>> I did it by uploading the file PKG-INFO from numpy-1.7.0.tar.gz. It >>> said "Error processing form. Form Failure; reset form submission" >>> about 3x times, >>> but on the 4th try it went through. I reported the issue here: >>> >>> https://sourceforge.net/tracker/?func=detail&aid=3604194&group_id=66150&atid=513504 >>> >>> I then attached the numpy-1.7.0.tar.gz and numpy-1.7.0.zip source files. >>> >>> Now I am having trouble attaching the windows installers, just like >>> they are here: >>> >>> http://pypi.python.org/pypi/numpy/1.6.2 >> >> Those installers are ones built through bdist_wininst. You should >> *not* upload superpack installers there, as most python tools will not >> know what to do with it. For example, easy_install will not work with >> those, even though it does with simple installers from bdist_wininst. >> >> So ideally, one should build simple (== bdist_wininst-generated) >> installers using the lowest common denominator for architecture (i.e. >> pure blas/lapack, not atlas), and the superpack installer on >> sourceforge. Incidentally, that's why the super pack installer uses a >> different filename, to avoid confusion. > > I see. I looked into my scripts and it turns out that actually I do > build the bdist_wininst versions as well, I just didn't know what they > are for, so I ignored them. Now I can see that those will get uploaded > to pypi, so I did that now and it works! > > http://pypi.python.org/pypi/numpy/1.7.0 > > I learned something new today, thanks for the explanation. So pypi > should be done. Great, thanks a lot for doing all this ! David From steve at spvi.com Thu Feb 14 08:09:37 2013 From: steve at spvi.com (Steve Spicklemire) Date: Thu, 14 Feb 2013 06:09:37 -0700 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. Message-ID: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> Hi Numpy Folks! When I try to build numpy on MacOSX 10.6 with Xcode 3.2.5 installed (python3.2 setup.py build) things go great! At some point I get this: -------------------- Generating build/src.macosx-10.6-intel-3.2/numpy/core/include/numpy/config.h C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m -c' gcc-4.2: _configtest.c success! 
removing: _configtest.c _configtest.o C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk -------------------- Keeping everything as nearly as possible the same on MacOSX 10.7 with Xcode 4.6 installed I get this: -------------------- Generating build/src.macosx-10.6-intel-3.2/numpy/core/include/numpy/config.h C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -c' gcc-4.2: _configtest.c _configtest.c:1:20: error: Python.h: No such file or directory _configtest.c:1:20: error: Python.h: No such file or directory lipo: can't figure out the architecture type of: /var/folders/4h/7kcqgdb55yjdtfs6dpwjytjh0000gn/T//ccIEwAT5.out _configtest.c:1:20: error: Python.h: No such file or directory _configtest.c:1:20: error: Python.h: No such file or directory lipo: can't figure out the architecture type of: /var/folders/4h/7kcqgdb55yjdtfs6dpwjytjh0000gn/T//ccIEwAT5.out failure. removing: _configtest.c _configtest.o -------------------- Obviously the -I/Library/Frameworks/etc... is missing. I get the same thing with Xcode 4.6 on 10.8. ;-(. I can *run* my numpy build from 10.6 on 10.7 and 10.8, but I'd really like to be able to build it without having to reboot from an old backup disk. ;-) For what it's worth, on 10.7 and 10.8 (and 10.6 for that matter) python3.2-config works and returns reasonable results. Where does setup.py decide about which paths to include in the compile options string? thanks! -steve From cournape at gmail.com Thu Feb 14 09:27:41 2013 From: cournape at gmail.com (David Cournapeau) Date: Thu, 14 Feb 2013 14:27:41 +0000 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. In-Reply-To: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> References: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> Message-ID: On Thu, Feb 14, 2013 at 1:09 PM, Steve Spicklemire wrote: > Hi Numpy Folks! > > When I try to build numpy on MacOSX 10.6 with Xcode 3.2.5 installed (python3.2 setup.py build) things go great! At some point I get this: > > -------------------- > > Generating build/src.macosx-10.6-intel-3.2/numpy/core/include/numpy/config.h > C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk > > compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m -c' > gcc-4.2: _configtest.c > success! > removing: _configtest.c _configtest.o > C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk > > -------------------- > > Keeping everything as nearly as possible the same on MacOSX 10.7 with Xcode 4.6 installed I get this: IIRC, xcode 4.6 does not include Mac OS X 10.6 sdk. Where did you get it ? 
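As a quick check of what distutils itself thinks the header directory is, something like

    python3.2 -c "from distutils import sysconfig; print(sysconfig.get_python_inc())"

should print the .../include/python3.2m path; if that looks right but the -I flag is still missing from the compile line, the problem is in how the flags get assembled rather than in the python install itself.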
Unfortunately, I don't think it is actually possible to build many combinations on mac os x without tweaking flags and adapting the -isysroot accordingly. > > -------------------- > > Generating build/src.macosx-10.6-intel-3.2/numpy/core/include/numpy/config.h > C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk > > compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -c' > gcc-4.2: _configtest.c > _configtest.c:1:20: error: Python.h: No such file or directory > _configtest.c:1:20: error: Python.h: No such file or directory > lipo: can't figure out the architecture type of: /var/folders/4h/7kcqgdb55yjdtfs6dpwjytjh0000gn/T//ccIEwAT5.out > _configtest.c:1:20: error: Python.h: No such file or directory > _configtest.c:1:20: error: Python.h: No such file or directory > lipo: can't figure out the architecture type of: /var/folders/4h/7kcqgdb55yjdtfs6dpwjytjh0000gn/T//ccIEwAT5.out > failure. > removing: _configtest.c _configtest.o I suspect that you're having a message about invalid SDK paths before that. David From steve at spvi.com Thu Feb 14 09:55:52 2013 From: steve at spvi.com (Steve Spicklemire) Date: Thu, 14 Feb 2013 07:55:52 -0700 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. In-Reply-To: References: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> Message-ID: I got Xcode 4,6 from the App Store. I don't think it's the SDK since the python 2.7 version builds fine. It's just the 3.2 version that doesn't have the -I/Library/Frameworks/Python.Framework/Versions/3.2/include/python3.2m in the compile options line. When I run setup for 2.7 I see the right include. I'm just not sure where setup is building those options, and why they're not working on 10.7 and 10.8 and python3.2. Strange! thanks, -steve On Feb 14, 2013, at 7:27 AM, David Cournapeau wrote: > On Thu, Feb 14, 2013 at 1:09 PM, Steve Spicklemire wrote: >> Hi Numpy Folks! >> >> When I try to build numpy on MacOSX 10.6 with Xcode 3.2.5 installed (python3.2 setup.py build) things go great! At some point I get this: >> >> -------------------- >> >> Generating build/src.macosx-10.6-intel-3.2/numpy/core/include/numpy/config.h >> C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk >> >> compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/Library/Frameworks/Python.framework/Versions/3.2/include/python3.2m -c' >> gcc-4.2: _configtest.c >> success! >> removing: _configtest.c _configtest.o >> C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk >> >> -------------------- >> >> Keeping everything as nearly as possible the same on MacOSX 10.7 with Xcode 4.6 installed I get this: > > IIRC, xcode 4.6 does not include Mac OS X 10.6 sdk. Where did you get it ? > > Unfortunately, I don't think it is actually possible to build many > combinations on mac os x without tweaking flags and adapting the > -isysroot accordingly. 
> >> >> -------------------- >> >> Generating build/src.macosx-10.6-intel-3.2/numpy/core/include/numpy/config.h >> C compiler: gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk >> >> compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -c' >> gcc-4.2: _configtest.c >> _configtest.c:1:20: error: Python.h: No such file or directory >> _configtest.c:1:20: error: Python.h: No such file or directory >> lipo: can't figure out the architecture type of: /var/folders/4h/7kcqgdb55yjdtfs6dpwjytjh0000gn/T//ccIEwAT5.out >> _configtest.c:1:20: error: Python.h: No such file or directory >> _configtest.c:1:20: error: Python.h: No such file or directory >> lipo: can't figure out the architecture type of: /var/folders/4h/7kcqgdb55yjdtfs6dpwjytjh0000gn/T//ccIEwAT5.out >> failure. >> removing: _configtest.c _configtest.o > > I suspect that you're having a message about invalid SDK paths before that. > > David > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From derek at astro.physik.uni-goettingen.de Thu Feb 14 10:57:12 2013 From: derek at astro.physik.uni-goettingen.de (Derek Homeier) Date: Thu, 14 Feb 2013 16:57:12 +0100 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. In-Reply-To: References: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> Message-ID: On 14.02.2013, at 3:55PM, Steve Spicklemire wrote: > I got Xcode 4,6 from the App Store. I don't think it's the SDK since the python 2.7 version builds fine. It's just the 3.2 version that doesn't have the -I/Library/Frameworks/Python.Framework/Versions/3.2/include/python3.2m in the compile options line. When I run setup for 2.7 I see the right include. I'm just not sure where setup is building those options, and why they're not working on 10.7 and 10.8 and python3.2. Strange! Where did you get the python3.2 from? Building the 1.7.0 release works for me under 10.8 and Xcode 4.6 both with the system-provided /usr/bin/python2.7 and with fink-installed versions of python2.7 and python3.2, but in no case is it linking or including any 10.6 SDK: C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.8-x86_64-3.2/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/sw/include/python3.2m -Ibuild/src.macosx-10.8-x86_64-3.2/numpy/core/src/multiarray -Ibuild/src.macosx-10.8-x86_64-3.2/numpy/core/src/umath -c' HTH, Derek From steve at spvi.com Thu Feb 14 11:00:50 2013 From: steve at spvi.com (Steve Spicklemire) Date: Thu, 14 Feb 2013 09:00:50 -0700 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. In-Reply-To: References: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> Message-ID: <04A8E4C9-D060-4FC6-8C4B-2CB6392463A3@spvi.com> The python3.2 was from python.org, 3.2.3 universal 32/64. thanks, -steve On Feb 14, 2013, at 8:57 AM, Derek Homeier wrote: > On 14.02.2013, at 3:55PM, Steve Spicklemire wrote: > >> I got Xcode 4,6 from the App Store. I don't think it's the SDK since the python 2.7 version builds fine. 
It's just the 3.2 version that doesn't have the -I/Library/Frameworks/Python.Framework/Versions/3.2/include/python3.2m in the compile options line. When I run setup for 2.7 I see the right include. I'm just not sure where setup is building those options, and why they're not working on 10.7 and 10.8 and python3.2. Strange! > > Where did you get the python3.2 from? Building the 1.7.0 release works for me under 10.8 and Xcode 4.6 > both with the system-provided /usr/bin/python2.7 and with fink-installed versions of python2.7 and python3.2, > but in no case is it linking or including any 10.6 SDK: > > C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes > > compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.8-x86_64-3.2/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/sw/include/python3.2m -Ibuild/src.macosx-10.8-x86_64-3.2/numpy/core/src/multiarray -Ibuild/src.macosx-10.8-x86_64-3.2/numpy/core/src/umath -c' > > HTH, > Derek > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From chris.barker at noaa.gov Thu Feb 14 12:58:17 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 14 Feb 2013 09:58:17 -0800 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. In-Reply-To: References: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> Message-ID: On Thu, Feb 14, 2013 at 7:57 AM, Derek Homeier wrote: > Where did you get the python3.2 from? Building the 1.7.0 release works for me under 10.8 and Xcode 4.6 > both with the system-provided /usr/bin/python2.7 That makes sense, as Apple probably built it with XCode 4.6 in the first place. > and with fink-installed versions of python2.7 and python3.2, Again, the whole point of fink is to build everything natively. On Thu, Feb 14, 2013 at 8:00 AM, Steve Spicklemire wrote: > The python3.2 was from python.org, 3.2.3 universal 32/64. that was built with XCode 3.* -- originally on 10.6 The point of distutils (one of them anyway) is to build extensions with the same compiler, flags, etc as pyton itself -- that means XCode 3, 10.6 SDK in this case. I haven't gone to 10.8 yet -- partly for this reason! I know it's a pain, at best, to build stuff on 10.8 (and XCode4 ) that runs on older systems, but not sure if/how it can be done if you really need to. I'd try the pythonmac list -- soem smart folks there (and the people that maintain the python.org builds) -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From steve at spvi.com Thu Feb 14 13:40:15 2013 From: steve at spvi.com (Steve Spicklemire) Date: Thu, 14 Feb 2013 11:40:15 -0700 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. In-Reply-To: References: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> Message-ID: <1AA72273-14D2-46FD-92D9-F8249401A00D@spvi.com> Ahhh... I didn't realize that important bit. Thanks... I'll try to see if I can use xcode3 on 10.8. thanks, -steve On Feb 14, 2013, at 10:58 AM, Chris Barker - NOAA Federal wrote: > On Thu, Feb 14, 2013 at 7:57 AM, Derek Homeier > wrote: > >> Where did you get the python3.2 from? 
Building the 1.7.0 release works for me under 10.8 and Xcode 4.6 >> both with the system-provided /usr/bin/python2.7 > > That makes sense, as Apple probably built it with XCode 4.6 in the first place. > >> and with fink-installed versions of python2.7 and python3.2, > > Again, the whole point of fink is to build everything natively. > > On Thu, Feb 14, 2013 at 8:00 AM, Steve Spicklemire wrote: >> The python3.2 was from python.org, 3.2.3 universal 32/64. > > that was built with XCode 3.* -- originally on 10.6 The point of > distutils (one of them anyway) is to build extensions with the same > compiler, flags, etc as pyton itself -- that means XCode 3, 10.6 SDK > in this case. > > I haven't gone to 10.8 yet -- partly for this reason! I know it's a > pain, at best, to build stuff on 10.8 (and XCode4 ) that runs on older > systems, but not sure if/how it can be done if you really need to. > > I'd try the pythonmac list -- soem smart folks there (and the people > that maintain the python.org builds) > > -Chris > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From steve at spvi.com Thu Feb 14 14:34:46 2013 From: steve at spvi.com (Steve Spicklemire) Date: Thu, 14 Feb 2013 12:34:46 -0700 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. In-Reply-To: <1AA72273-14D2-46FD-92D9-F8249401A00D@spvi.com> References: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> <1AA72273-14D2-46FD-92D9-F8249401A00D@spvi.com> Message-ID: <40558D76-6F17-4E05-992A-574031C9790C@spvi.com> OK,,, I happen to have an old: /Developer-3,2,5 directory on my 10.8 system, and I found the "xcode-select' command. I tried sudo xcode-select --switch /Developer-3.2.5 but that had no apparent effect. Next I put a link in /Developer -> /Developer-3.2.5 since that seemed to be the path numpy was trying to use. Aha! I got the right include path now. So there was an SDK effect somehow. I wonder if the 2.7.3 build on python.org used XCode 4 with a newer SDK? Anyway... still no luck though since I was getting complaints about stdarg.h not being found, even though I could see it was there. I noticed that /usr/bin/gcc was really a link to llvm-gcc-4.2 and maybe that was a problem, so I changed my PATH environment variable to have /Developer-3.2.5/usr/bin before /usr/bin and voila! It works. Thanks for the hints! Now just have to remember all this atrocious stuff whenever I need to rebuild it. ;-) thanks, -steve On Feb 14, 2013, at 11:40 AM, Steve Spicklemire wrote: > Ahhh... I didn't realize that important bit. Thanks... I'll try to see if I can use xcode3 on 10.8. > > thanks, > -steve > > On Feb 14, 2013, at 10:58 AM, Chris Barker - NOAA Federal wrote: > >> On Thu, Feb 14, 2013 at 7:57 AM, Derek Homeier >> wrote: >> >>> Where did you get the python3.2 from? Building the 1.7.0 release works for me under 10.8 and Xcode 4.6 >>> both with the system-provided /usr/bin/python2.7 >> >> That makes sense, as Apple probably built it with XCode 4.6 in the first place. >> >>> and with fink-installed versions of python2.7 and python3.2, >> >> Again, the whole point of fink is to build everything natively. 
>> >> On Thu, Feb 14, 2013 at 8:00 AM, Steve Spicklemire wrote: >>> The python3.2 was from python.org, 3.2.3 universal 32/64. >> >> that was built with XCode 3.* -- originally on 10.6 The point of >> distutils (one of them anyway) is to build extensions with the same >> compiler, flags, etc as pyton itself -- that means XCode 3, 10.6 SDK >> in this case. >> >> I haven't gone to 10.8 yet -- partly for this reason! I know it's a >> pain, at best, to build stuff on 10.8 (and XCode4 ) that runs on older >> systems, but not sure if/how it can be done if you really need to. >> >> I'd try the pythonmac list -- soem smart folks there (and the people >> that maintain the python.org builds) >> >> -Chris >> >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From chris.barker at noaa.gov Thu Feb 14 21:30:50 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 14 Feb 2013 18:30:50 -0800 Subject: [Numpy-discussion] Trouble building numpy on different version of OSX. In-Reply-To: <40558D76-6F17-4E05-992A-574031C9790C@spvi.com> References: <43B5C941-D4BA-44AD-AB3F-55BB8FA0866C@spvi.com> <1AA72273-14D2-46FD-92D9-F8249401A00D@spvi.com> <40558D76-6F17-4E05-992A-574031C9790C@spvi.com> Message-ID: <-2075091725512545993@unknownmsgid> Steve, Thanks for the report of what worked-- I may well need to do this soon. sudo xcode-select --switch /Developer-3.2.5 but that had no apparent effect. Next I put a link in /Developer -> /Developer-3.2.5 since that seemed to be the path numpy was trying to use. Aha! I got the right include path now. So there was an SDK effect somehow. I wonder if the 2.7.3 build on python.org used XCode 4 with a newer SDK? Anyway... still no luck though since I was getting complaints /usr/bin/gcc was really a link to llvm-gcc-4.2 and maybe that was a problem, so I changed my PATH environment variable to have /Developer-3.2.5/usr/bin before /usr/bin and voila! It works. You'd think that was Xcode-select would be for, but what can you do? Thanks for the hints! Now just have to remember all this atrocious stuff whenever I need to rebuild it. ;-) thanks, -steve On Feb 14, 2013, at 11:40 AM, Steve Spicklemire wrote: Ahhh... I didn't realize that important bit. Thanks... I'll try to see if I can use xcode3 on 10.8. thanks, -steve On Feb 14, 2013, at 10:58 AM, Chris Barker - NOAA Federal < chris.barker at noaa.gov> wrote: On Thu, Feb 14, 2013 at 7:57 AM, Derek Homeier wrote: Where did you get the python3.2 from? Building the 1.7.0 release works for me under 10.8 and Xcode 4.6 both with the system-provided /usr/bin/python2.7 That makes sense, as Apple probably built it with XCode 4.6 in the first place. and with fink-installed versions of python2.7 and python3.2, Again, the whole point of fink is to build everything natively. On Thu, Feb 14, 2013 at 8:00 AM, Steve Spicklemire wrote: The python3.2 was from python.org, 3.2.3 universal 32/64. 
that was built with XCode 3.* -- originally on 10.6 The point of distutils (one of them anyway) is to build extensions with the same compiler, flags, etc as pyton itself -- that means XCode 3, 10.6 SDK in this case. I haven't gone to 10.8 yet -- partly for this reason! I know it's a pain, at best, to build stuff on 10.8 (and XCode4 ) that runs on older systems, but not sure if/how it can be done if you really need to. I'd try the pythonmac list -- soem smart folks there (and the people that maintain the python.org builds) -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From teoliphant at gmail.com Fri Feb 15 02:11:12 2013 From: teoliphant at gmail.com (Travis Oliphant) Date: Fri, 15 Feb 2013 01:11:12 -0600 Subject: [Numpy-discussion] A new webpage promoting Compiler technology for CPython Message-ID: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> Hey all, With Numba and Blaze we have been doing a lot of work on what essentially is compiler technology and realizing more and more that we are treading on ground that has been plowed before with many other projects. So, we wanted to create a web-site and perhaps even a mailing list or forum where people could coordinate and communicate about compiler projects, compiler tools, and ways to share efforts and ideas. The website is: http://compilers.pydata.org/ This page is specifically for Compiler projects that either integrate with or work directly with the CPython run-time which is why PyPy is not presently listed. The PyPy project is a great project but we just felt that we wanted to explicitly create a collection of links to compilation projects that are accessible from CPython which are likely less well known. But that is just where we started from. The website is intended to be a community website constructed from a github repository. So, we welcome pull requests from anyone who would like to see the website updated to reflect their related project. Jon Riehl (Mython, PyFront, ROFL, and many other interesting projects) and Stephen Diehl (Blaze) and I will be moderating the pull requests to begin with. But, we welcome others with similar interests to participate in that effort of moderation. The github repository is here: https://github.com/pydata/compilers-webpage This is intended to be a community website for information spreading, and so we welcome any and all contributions. Thank you, Travis Oliphant -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Feb 14 17:56:22 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 14 Feb 2013 14:56:22 -0800 Subject: [Numpy-discussion] Python trademark in legal trouble in Europe, please help Message-ID: Hi all, please do NOT respond to this thread or to me directly. 
This is strictly to spread this message as widely as possible, so that anyone who receives it and can act on it does so. Needless to say, do forward this to anyone you think might be in a position to take useful action. The Python trademark is at serious risk across Europe due to the actions of a UK-based IP troll. If you operate in Europe, please help the Python Software Foundation gather evidence in the legal battle to protect the Python trademark across the EU. You can find the official blog post from the PSF with instructions on how to help here: http://pyfound.blogspot.com/2013/02/python-trademark-at-risk-in-europe-we.html and this thread on Hacker News is being monitored by the PSF Chairman Van Lindberg in case you want to ask the team directly any questions: http://news.ycombinator.com/item?id=5221093 Cheers, f From ndbecker2 at gmail.com Fri Feb 15 10:50:49 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2013 10:50:49 -0500 Subject: [Numpy-discussion] confused about tensordot Message-ID: In the following code c = np.multiply (a, b.conj()) d = np.abs (np.sum (c, axis=0)/rows) d2 = np.abs (np.tensordot (a, b.conj(), ((0,),(0,)))/rows) print a.shape, b.shape, d.shape, d2.shape The 1st compute steps, where I do multiply and then sum, give the result I want. I wanted to try to combine these 2 steps using tensordot, but it's not doing what I want. The print statement outputs this: (1004, 13) (1004, 13) (13,) (13, 13) The correct output should be (13,), but the tensordot output is (13,13). It's supposed to take 2 matrixes, each (1004, 13) and do element-wise multiply, then sum over axis 0. From juanlu001 at gmail.com Fri Feb 15 10:58:45 2013 From: juanlu001 at gmail.com (Juan Luis Cano) Date: Fri, 15 Feb 2013 16:58:45 +0100 Subject: [Numpy-discussion] Purpose of this list Message-ID: <511E5B35.6050100@gmail.com> Hello all, I have a brief question about the general purpose of Numpy-discussion. I subscribed a month ago more or less to keep an eye on NumPy development (and perhaps contributing eventually), for I saw that some decisions about the roadmap were discussed here, but 90 % of the emails that arrive through the list are related to support. What is actually the scope/purpose of the list? I am not criticizing anyone, just asking for a clarification on this point, to manage my mail client filters etc. Best regards, Juan Luis Cano From brad.froehle at gmail.com Fri Feb 15 11:54:22 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Fri, 15 Feb 2013 08:54:22 -0800 Subject: [Numpy-discussion] confused about tensordot In-Reply-To: References: Message-ID: Hi Neal: The tensordot part: np.tensordot (a, b.conj(), ((0,),(0,))) is returning a (13, 13) array whose [i, j]-th entry is sum( a[k, i] * b.conj()[k, j] for k in xrange(1004) ). -Brad The print statement outputs this: > > (1004, 13) (1004, 13) (13,) (13, 13) > > The correct output should be (13,), but the tensordot output is (13,13). > > It's supposed to take 2 matrixes, each (1004, 13) and do element-wise > multiply, > then sum over axis 0. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Fri Feb 15 11:58:30 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 15 Feb 2013 11:58:30 -0500 Subject: [Numpy-discussion] confused about tensordot References: Message-ID: Bradley M.
Froehle wrote: > Hi Neal: > > The tensordot part: > np.tensordot (a, b.conj(), ((0,),(0,)) > > is returning a (13, 13) array whose [i, j]-th entry is sum( a[k, i] * > b.conj()[k, j] for k in xrange(1004) ). > > -Brad > > > The print statement outputs this: >> >> (1004, 13) (1004, 13) (13,) (13, 13) >> >> The correct output should be (13,), but the tensordot output is (13,13). >> >> It's supposed to take 2 matrixes, each (1004, 13) and do element-wise >> multiply, >> then sum over axis 0. >> Can I use tensordot to do what I want? From robert.kern at gmail.com Fri Feb 15 12:29:02 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 15 Feb 2013 17:29:02 +0000 Subject: [Numpy-discussion] Purpose of this list In-Reply-To: <511E5B35.6050100@gmail.com> References: <511E5B35.6050100@gmail.com> Message-ID: On Fri, Feb 15, 2013 at 3:58 PM, Juan Luis Cano wrote: > Hello all, I have a brief question about the general purpose of > Numpy-discussion. I suscribed a month ago more or less to keep an eye on > NumPy development (and perhaps contributing eventually), for I saw that > some decisions about the roadmap were discussed here, but 90 % of the > emails that arrive through the list are related to support. Whats is > actually the scope/purpose of the list? I am not critizicing anyone, > just asking for a clarification on this point, to manage my mail client > filters etc. numpy-discussion is for both development discussions and support questions. The set of people interested in the developer discussions is mostly the same as the set of people giving support, so there has never been too much impetus for breaking the list into two halves. -- Robert Kern From juanlu001 at gmail.com Fri Feb 15 12:52:29 2013 From: juanlu001 at gmail.com (Juan Luis Cano) Date: Fri, 15 Feb 2013 18:52:29 +0100 Subject: [Numpy-discussion] Purpose of this list In-Reply-To: References: <511E5B35.6050100@gmail.com> Message-ID: <511E75DD.4000205@gmail.com> On 02/15/2013 06:29 PM, Robert Kern wrote: > On Fri, Feb 15, 2013 at 3:58 PM, Juan Luis Cano wrote: >> Hello all, I have a brief question about the general purpose of >> Numpy-discussion. I suscribed a month ago more or less to keep an eye on >> NumPy development (and perhaps contributing eventually), for I saw that >> some decisions about the roadmap were discussed here, but 90 % of the >> emails that arrive through the list are related to support. Whats is >> actually the scope/purpose of the list? I am not critizicing anyone, >> just asking for a clarification on this point, to manage my mail client >> filters etc. > numpy-discussion is for both development discussions and support > questions. The set of people interested in the developer discussions > is mostly the same as the set of people giving support, so there has > never been too much impetus for breaking the list into two halves. > All right, clarified; thank you very much. Juan Luis Cano From brad.froehle at gmail.com Fri Feb 15 13:04:46 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Fri, 15 Feb 2013 10:04:46 -0800 Subject: [Numpy-discussion] confused about tensordot In-Reply-To: References: Message-ID: > > >> It's supposed to take 2 matrixes, each (1004, 13) and do element-wise > >> multiply, > >> then sum over axis 0. > >> > > Can I use tensordot to do what I want? No. In your case I'd just do (a*b.conj()).sum(0). (Assuming that a and b are arrays, not matrices). 
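For example, a quick check with random complex arrays of the (1004, 13) shapes from your post (the data below is purely illustrative) gives the shape you are after:

import numpy as np
rows = 1004
a = np.random.randn(rows, 13) + 1j*np.random.randn(rows, 13)
b = np.random.randn(rows, 13) + 1j*np.random.randn(rows, 13)
d = np.abs(np.sum(np.multiply(a, b.conj()), axis=0)/rows)   # your two-step version
d2 = np.abs((a*b.conj()).sum(axis=0)/rows)                  # same thing in one expression
print d.shape, d2.shape, np.allclose(d, d2)                 # (13,) (13,) True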
It is most helpful to think of tensordot as a generalization on matrix multiplication where the axes argument gives the axes of the first and second arrays which should be summed over. a = np.random.rand(4,5,6,7) b = np.random.rand(8,7,5,2) c = np.tensordot(a, b, axes=((1, 3), (2, 1))) # contract over dimensions with size 5 and 7 assert c.shape == (4, 6, 8, 2) # the resulting shape is the shape given by a.shape + b.shape, which contracted dimensions removed. -Brad -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomasmcoffee at gmail.com Fri Feb 15 21:59:08 2013 From: thomasmcoffee at gmail.com (Thomas Coffee) Date: Fri, 15 Feb 2013 18:59:08 -0800 Subject: [Numpy-discussion] A proposed change to rollaxis() behavior for negative 'start' values Message-ID: Adding to an old discussion thread (see below) ... an implementation of the proposed functionality: from numpy import rollaxis def moveaxis(a, i, j = 0): """ move axis i of array a to position j """ n = a.ndim i = i if i >= 0 else i + n if j > i: return rollaxis(a, i, j + 1) elif j >= 0: return rollaxis(a, i, j) elif j == -1: return rollaxis(a, i, n) elif j >= -i: return rollaxis(a, i, j + 1) else: return rollaxis(a, i, j) Examples: In [464]: a = numpy.ones((3,4,5,6)) In [465]: moveaxis(a,2,0).shape Out[465]: (5, 3, 4, 6) In [466]: moveaxis(a,2,1).shape Out[466]: (3, 5, 4, 6) In [467]: moveaxis(a,2,2).shape Out[467]: (3, 4, 5, 6) In [468]: moveaxis(a,2,3).shape Out[468]: (3, 4, 6, 5) In [469]: moveaxis(a,2,4).shape --------------------------------------------------------------------------- ValueError: rollaxis: start (5) must be >=0 and < 5 In [470]: moveaxis(a,2,-1).shape Out[470]: (3, 4, 6, 5) In [471]: moveaxis(a,2,-2).shape Out[471]: (3, 4, 5, 6) In [472]: moveaxis(a,2,-3).shape Out[472]: (3, 5, 4, 6) In [473]: moveaxis(a,2,-4).shape Out[473]: (5, 3, 4, 6) In [474]: moveaxis(a,2,-5).shape --------------------------------------------------------------------------- ValueError: rollaxis: start (-1) must be >=0 and < 5 On Thu, Sep 23, 2010 at 11:33 AM, Nathaniel Smith wrote: > > On Tue, Sep 21, 2010 at 12:48 PM, Ken Basye wrote: > > If that's going to break too much code, here's a pathway that might be > > acceptable: Add a new function moveaxis() which works the way > > rollaxis() does for positive arguments but in the new way for negative > > arguments. Eventually, rollaxis could be deprecated to keep things > > tidy. This has the added advantage of using a name that seems to fit > > what the function does better - 'rollaxis' suggests a behavior like the > > roll() function which affects other axes, which isn't what happens. > > My 2 cents: +1 on a new function, but I'd change the behavior for > positive arguments too. > > Currently, the API is (AFAICT): You give the index of the axis you > want to move, and you give the index of the axis that you want the > first axis to be moved in front of. This is super confusing! > > I propose that a much better API would be: You give the index of the > axis you want to move, and you give the index you *want* that axis to > have. So we'd have the invariant: > b = np.moveaxis(a, i, j) > assert a.shape[i] == b.shape[j] > This is way easier to think about, at least for me. And it solves the > problem with negative indices too. 
> > BTW, note that that the documentation for rollaxis is actually > self-contradictory at the moment: > http://docs.scipy.org/doc/numpy/reference/generated/numpy.rollaxis.html > At the top it seems to document the behavior that I propose ("Roll the > specified axis backwards, until it lies *in a given* position."), and > then in the details it describes the actual behavior("The axis is > rolled until it lies *before* this position"). I take this as further > evidence that the current behavior is unnatural and confusing :-). > > -- Nathaniel From scopatz at gmail.com Fri Feb 15 12:43:17 2013 From: scopatz at gmail.com (Anthony Scopatz) Date: Fri, 15 Feb 2013 11:43:17 -0600 Subject: [Numpy-discussion] SciPy 2013 Announcement -- June 24 - 29, 2013, Austin, TX! Message-ID: Hello All, As this years conference communications co-chair, it is my extreme pleasure to announce the *SciPy 2013 conference from June 24th - 29th in sunny Austin, Texas, USA.* Please see our website(or below) for more details. A call for presentations, tutorials, and papers will be coming out very soon. Please make sure to sign up on our mailing list, follow us on twitter , or Google Plus so that you don't miss out on any important updates. I sincerely hope that you can attend this year and make 2013 the best year for SciPy yet! Happy Hacking! Anthony Scopatz SciPy 2013 Conference Announcement ---------------------------------- SciPy 2013, the twelfth annual Scientific Computing with Python conference, will be held this June 24th-29th in Austin, Texas. SciPy is a community dedicated to the advancement of scientific computing through open source Python software for mathematics, science, and engineering. The annual SciPy Conference allows participants from academic, commercial, and governmental organizations to showcase their latest projects, learn from skilled users and developers, and collaborate on code development. The conference consists of two days of tutorials by followed by two days of presentations, and concludes with two days of developer sprints on projects of interest to the attendees. Specialized Tracks ------------------ This year we are happy to announce two specialized tracks run in parallel to the general conference: *Machine Learning* In recent years, Python's machine learning libraries rapidly matured with a flurry of new libraries and cutting-edge algorithm implement and development occurring within Python. As Python makes these algorithms more accessible, machine learning algorithm application has spread across disciplines. Showcase your favorite machine learning library or how it has been used as an effective tool in your work! *Reproducible Science* Over recent years, the Open Science movement has stoked a renewed acknowledgement of the importance of reproducible research. The goals of this movement include improving the dissemination of progress, prevent fraud through transparency, and enable deeper/wider development and collaboration. This track is to discuss the tools and methods used to achieve reproducible scientific computing. Domain-specific Mini-symposia ----------------------------- Introduced in 2012, mini-symposia are held to discuss scientific computing applied to a specific scientific domain/industry during a half afternoon after the general conference. Their goal is to promote industry specific libraries and tools, and gather people with similar interests for discussions. 
Mini-symposia on the following topics will take place this year: - Meteorology, climatology, and atmospheric and oceanic science - Astronomy and astrophysics - Medical imaging - Bio-informatics Tutorials --------- Multiple interactive half-day tutorials will be taught by community experts. The tutorials provide conceptual and practical coverage of tools that have broad interest at both an introductory or advanced level. This year, a third track will be added, targeting specifically programmers with no prior knowledge of scientific python. Developer Sprints ----------------- A hackathon environment is setup for attendees to work on the core SciPy packages or their own personal projects. The conference is an opportunity for developers that are usually physically separated to come together and engage in highly productive sessions. It is also an occasion for new community members to introduce themselves and recieve tips from community experts. This year, some of the sprints will be scheduled and announced ahead of the conference. Birds-of-a-Feather (BOF) Sessions --------------------------------- Birds-of-a-Feather sessions are self-organized discussions that run parallel to the main conference. The BOFs sessions cover primary, tangential, or unrelated topics in an interactive, discussion setting. This year, some of the BOF sessions will be scheduled and announced ahead of the conference. Important Dates --------------- - March 18th: Presentation abstracts, poster, tutorial submission deadline. Application for sponsorship deadline. - April 15th: Speakers selected - April 22nd: Sponsorship acceptance deadline - May 1st: Speaker schedule announced - May 5th: Paper submission deadline - May 6th: Early-bird registration ends - June 24th-29th: 2 days of tutorials, 2 days of conference, 2 days of sprints We look forward to a very exciting conference and hope to see you all at the conference. The SciPy2013 organization team: * Andy Terrel, Co-chair * Jonathan Rocher, Co-chair * Katy Huff, Program Committee co-chair * Matt McCormick, Program Committee co-chair * Dharhas Pothina, Tutorial co-chair * Francesc Alted, Tutorial co-chair * Corran Webster, Sprint co-chair * Peter Wang, Sprint co-chair * Matthew Turk, BoF co-chair * Jarrod Millman, Proceeding co-chair * St?fan van der Walt, Proceeding co-chair * Anthony Scopatz, Communications co-chair * Majken Tranby, Communications co-chair * Jeff Daily, Financial Aid co-chair * John Wiggins, Financial Aid co-chair * Leah Jones, Operations chair * Brett Murphy, Sponsor chair * Bill Cowan, Financial chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronan.lamy at gmail.com Sat Feb 16 10:59:04 2013 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Sat, 16 Feb 2013 15:59:04 +0000 Subject: [Numpy-discussion] A new webpage promoting Compiler technology for CPython In-Reply-To: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> References: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> Message-ID: <511FACC8.6010304@gmail.com> Le 15/02/2013 07:11, Travis Oliphant a ?crit : > This page is specifically for Compiler projects that either integrate > with or work directly with the CPython run-time which is why PyPy is not > presently listed. The PyPy project is a great project but we just felt > that we wanted to explicitly create a collection of links to compilation > projects that are accessible from CPython which are likely less well known. 
> I won't argue here with the exclusion of PyPy, but RPython is definitely compiler technology that runs on CPython 2.6/2.7. For now, it is only accessible from a source checkout of PyPy but that will soon change and "pip install rpython" isn't far off. Since it's a whole tool chain, it has a wealth of functionalities, though they aren't always well-documented or easy to access from the outside: bytecode analysis, type inference, several GC implementations, a JIT generator, assemblers for several architectures, ... Cheers, Ronan From massimo.dipierro at gmail.com Sat Feb 16 11:08:09 2013 From: massimo.dipierro at gmail.com (Massimo DiPierro) Date: Sat, 16 Feb 2013 10:08:09 -0600 Subject: [Numpy-discussion] A new webpage promoting Compiler technology for CPython In-Reply-To: <511FACC8.6010304@gmail.com> References: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> <511FACC8.6010304@gmail.com> Message-ID: Sorry for injecting... Which page is this about? On Feb 16, 2013, at 9:59 AM, Ronan Lamy wrote: > Le 15/02/2013 07:11, Travis Oliphant a ?crit : > >> This page is specifically for Compiler projects that either integrate >> with or work directly with the CPython run-time which is why PyPy is not >> presently listed. The PyPy project is a great project but we just felt >> that we wanted to explicitly create a collection of links to compilation >> projects that are accessible from CPython which are likely less well known. >> > I won't argue here with the exclusion of PyPy, but RPython is definitely > compiler technology that runs on CPython 2.6/2.7. For now, it is only > accessible from a source checkout of PyPy but that will soon change and > "pip install rpython" isn't far off. > > Since it's a whole tool chain, it has a wealth of functionalities, > though they aren't always well-documented or easy to access from the > outside: bytecode analysis, type inference, several GC implementations, > a JIT generator, assemblers for several architectures, ... > > Cheers, > Ronan > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From ronan.lamy at gmail.com Sat Feb 16 11:13:16 2013 From: ronan.lamy at gmail.com (Ronan Lamy) Date: Sat, 16 Feb 2013 16:13:16 +0000 Subject: [Numpy-discussion] A new webpage promoting Compiler technology for CPython In-Reply-To: References: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> <511FACC8.6010304@gmail.com> Message-ID: <511FB01C.7010808@gmail.com> Le 16/02/2013 16:08, Massimo DiPierro a ?crit : > Sorry for injecting... Which page is this about? http://compilers.pydata.org/ Cf. the post I answered to. > On Feb 16, 2013, at 9:59 AM, Ronan Lamy wrote: > >> Le 15/02/2013 07:11, Travis Oliphant a ?crit : >> >>> This page is specifically for Compiler projects that either integrate >>> with or work directly with the CPython run-time which is why PyPy is not >>> presently listed. The PyPy project is a great project but we just felt >>> that we wanted to explicitly create a collection of links to compilation >>> projects that are accessible from CPython which are likely less well known. >>> >> I won't argue here with the exclusion of PyPy, but RPython is definitely >> compiler technology that runs on CPython 2.6/2.7. For now, it is only >> accessible from a source checkout of PyPy but that will soon change and >> "pip install rpython" isn't far off. 
>> >> Since it's a whole tool chain, it has a wealth of functionalities, >> though they aren't always well-documented or easy to access from the >> outside: bytecode analysis, type inference, several GC implementations, >> a JIT generator, assemblers for several architectures, ... >> >> Cheers, >> Ronan >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From ralf.gommers at gmail.com Sat Feb 16 11:20:32 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 16 Feb 2013 17:20:32 +0100 Subject: [Numpy-discussion] NumPy 1.7.0 for MeeGo Harmattan OS In-Reply-To: <5119735C.9060309@gmail.com> References: <5119735C.9060309@gmail.com> Message-ID: On Mon, Feb 11, 2013 at 11:40 PM, Roberto Colistete Jr. < roberto.colistete at gmail.com> wrote: > Hi, > > It is my first participation here. > > About NumPy on Mobile OS : > - NumPy 1.7.0 was released today (11/02/2013) for MeeGo Harmattan OS > (for Nokia N9/N950), just 1 day after the mainstream release. See the > Talk Maemo.org topic : > http://talk.maemo.org/showthread.php?p=1322503 > MeeGo Harmattan OS also has NumPy 1.4.1 from Nokia repositories : > http://wiki.maemo.org/Python/Harmattan > - Maemo 5 OS (Nokia N900) has NumPy 1.4.0 : > http://maemo.org/packages/view/python-numpy/ > > Also for MeeGo Harmattan OS : > - MatPlotLib 1.2.0 was released (in 09/02/2013) for MeeGo Harmattan OS. > Including Qt4/PySide backend : > http://talk.maemo.org/showthread.php?p=1128672 > - IPython 0.13.1, including Notebook and Qt console interfaces, was > released in 22/01/2013. See the Talk Maemo.org topic : > http://talk.maemo.org/showthread.php?p=1123672 > It can work as an IPython Notebook server , with web clients running on > Android, desktop OS, etc via via WiFi hotspot of Nokia N9/N950. > > See my blog article comparing scientific Python tools for > computers, tablets and smartphones : > > http://translate.google.com/translate?hl=pt-BR&sl=pt&tl=en&u=http%3A%2F%2Frobertocolistete.wordpress.com%2F2012%2F12%2F26%2Fpython-cientifico-em-computadores-tablets-e-smartphones%2F > > > The conclusion is very simple : real mobile Linux OS (with glibc, > X11, dependencies, etc) are better for scientific Python. Like Maemo 5 > OS and MeeGo Harmattan OS. And future Sailfish OS and Ubuntu Phone OS > can follow the same path. > Thanks for the report and the packaging! I'm still using a phone that doesn't know how to do anything but call other phones, but if I'd want to get a smarter phone then one that gives me mobile NumPy without any trouble would definitely be nice. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.dipierro at gmail.com Sat Feb 16 11:46:05 2013 From: massimo.dipierro at gmail.com (Massimo DiPierro) Date: Sat, 16 Feb 2013 10:46:05 -0600 Subject: [Numpy-discussion] A new webpage promoting Compiler technology for CPython In-Reply-To: <511FB01C.7010808@gmail.com> References: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> <511FACC8.6010304@gmail.com> <511FB01C.7010808@gmail.com> Message-ID: <8BE21E0D-0580-480B-BC51-D5C8A6B86EB1@gmail.com> Thank you. Should this be listed: https://github.com/mdipierro/ocl ? 
It is based on meta (which is listed) and pyopencl (which is listed, only used to run with opencl) and has some overlap with Cython and Pyjamas although it is not based on any of them. It is minimalist in scope: it only converts to C/JS/OpenCL a common subset of those languages. But it does what it advertises. It is written in pure python and implemented in a single file. Massimo On Feb 16, 2013, at 10:13 AM, Ronan Lamy wrote: > Le 16/02/2013 16:08, Massimo DiPierro a écrit : >> Sorry for injecting... Which page is this about? > > http://compilers.pydata.org/ > Cf. the post I answered to. > >> On Feb 16, 2013, at 9:59 AM, Ronan Lamy wrote: >> >>> Le 15/02/2013 07:11, Travis Oliphant a écrit : >>> >>>> This page is specifically for Compiler projects that either integrate >>>> with or work directly with the CPython run-time which is why PyPy is not >>>> presently listed. The PyPy project is a great project but we just felt >>>> that we wanted to explicitly create a collection of links to compilation >>>> projects that are accessible from CPython which are likely less well known. >>>> >>> I won't argue here with the exclusion of PyPy, but RPython is definitely >>> compiler technology that runs on CPython 2.6/2.7. For now, it is only >>> accessible from a source checkout of PyPy but that will soon change and >>> "pip install rpython" isn't far off. >>> >>> Since it's a whole tool chain, it has a wealth of functionalities, >>> though they aren't always well-documented or easy to access from the >>> outside: bytecode analysis, type inference, several GC implementations, >>> a JIT generator, assemblers for several architectures, ... >>> >>> Cheers, >>> Ronan >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From brad.froehle at gmail.com Sat Feb 16 13:45:52 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Sat, 16 Feb 2013 10:45:52 -0800 Subject: [Numpy-discussion] A proposed change to rollaxis() behavior for negative 'start' values In-Reply-To: References: Message-ID: Have you considered using .transpose(...) instead? In [4]: a = numpy.ones((3,4,5,6)) In [5]: a.transpose(2,0,1,3).shape Out[5]: (5, 3, 4, 6) In [6]: a.transpose(0,2,1,3).shape Out[6]: (3, 5, 4, 6) In [7]: a.transpose(0,1,2,3).shape Out[7]: (3, 4, 5, 6) In [8]: a.transpose(0,1,3,2).shape Out[8]: (3, 4, 6, 5) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brad.froehle at gmail.com Sat Feb 16 13:52:30 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Sat, 16 Feb 2013 10:52:30 -0800 Subject: [Numpy-discussion] A proposed change to rollaxis() behavior for negative 'start' values In-Reply-To: References: Message-ID: On Sat, Feb 16, 2013 at 10:45 AM, Bradley M. Froehle wrote: > Have you considered using .transpose(...) instead? > My apologies... I didn't read enough of the thread to see what the issue was about. I personally think rollaxis(...)
is quite confusing and instead choose to use .transpose(...) for clarity. You can interpret my suggestion as a means of implementing moveaxis w/o calling rollaxis. In fact rollaxis is built out of the .transpose(...) primitive anyway. Cheers, Brad -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis at continuum.io Sat Feb 16 18:56:11 2013 From: travis at continuum.io (Travis Oliphant) Date: Sat, 16 Feb 2013 17:56:11 -0600 Subject: [Numpy-discussion] A new webpage promoting Compiler technology for CPython In-Reply-To: <8BE21E0D-0580-480B-BC51-D5C8A6B86EB1@gmail.com> References: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> <511FACC8.6010304@gmail.com> <511FB01C.7010808@gmail.com> <8BE21E0D-0580-480B-BC51-D5C8A6B86EB1@gmail.com> Message-ID: We should take this discussion off list. Please email me directly if you have questions. But, we are open to listing all of these tools. On Feb 16, 2013 10:46 AM, "Massimo DiPierro" wrote: > Thank you. > > Should this be listed: https://github.com/mdipierro/ocl ? > > It is based on meta (which is listed) and pyopencl (which is listed, only > used to run with opencl) and has some overlap with Cython and Pyjamas > although it is not based on any of them. > It is minimalist in scope: it only coverts to C/JS/OpenCL a common subset > of those languages. But it does what it advertises. It is written in pure > python and implemented and implemented in a single file. > > Massimo > > On Feb 16, 2013, at 10:13 AM, Ronan Lamy wrote: > > Le 16/02/2013 16:08, Massimo DiPierro a ?crit : > > Sorry for injecting... Which page is this about? > > > http://compilers.pydata.org/ > Cf. the post I answered to. > > On Feb 16, 2013, at 9:59 AM, Ronan Lamy wrote: > > > Le 15/02/2013 07:11, Travis Oliphant a ?crit : > > > This page is specifically for Compiler projects that either integrate > > with or work directly with the CPython run-time which is why PyPy is not > > presently listed. The PyPy project is a great project but we just felt > > that we wanted to explicitly create a collection of links to compilation > > projects that are accessible from CPython which are likely less well known. > > > I won't argue here with the exclusion of PyPy, but RPython is definitely > > compiler technology that runs on CPython 2.6/2.7. For now, it is only > > accessible from a source checkout of PyPy but that will soon change and > > "pip install rpython" isn't far off. > > > Since it's a whole tool chain, it has a wealth of functionalities, > > though they aren't always well-documented or easy to access from the > > outside: bytecode analysis, type inference, several GC implementations, > > a JIT generator, assemblers for several architectures, ... 
> > > Cheers, > > Ronan > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sat Feb 16 19:14:33 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 16 Feb 2013 16:14:33 -0800 Subject: [Numpy-discussion] A new webpage promoting Compiler technology for CPython In-Reply-To: References: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> <511FACC8.6010304@gmail.com> <511FB01C.7010808@gmail.com> <8BE21E0D-0580-480B-BC51-D5C8A6B86EB1@gmail.com> Message-ID: On Sat, Feb 16, 2013 at 3:56 PM, Travis Oliphant wrote: > We should take this discussion off list. Just as a bystander interested in this: why? It seems that OCL is within the scope of what's being proposed and another entrant into the vibrant new world of compiler-extended machinery for fast numerical work in cpython, so I suspect I'm not the only numpy user curious to know the answer on-list. I know sometimes there are legitimate reasons to take a discussion off-list, but in this case it seemed to be a perfectly reasonable question that also made me curious (as I only learned of OCL thanks to this discussion). Cheers, f From travis at continuum.io Sat Feb 16 19:24:11 2013 From: travis at continuum.io (Travis Oliphant) Date: Sat, 16 Feb 2013 18:24:11 -0600 Subject: [Numpy-discussion] A new webpage promoting Compiler technology for CPython In-Reply-To: References: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> <511FACC8.6010304@gmail.com> <511FB01C.7010808@gmail.com> <8BE21E0D-0580-480B-BC51-D5C8A6B86EB1@gmail.com> Message-ID: I only meant off the NumPy list as it seems this is off-topic for this forum. I thought I made clear in the rest of the paragraph that we would *love* this contribution. I recommend a pull request. If you want to discuss this in public. Let's have the discussion over at numfocus at googlegroups.com until a more specific list is created. On Sat, Feb 16, 2013 at 6:14 PM, Fernando Perez wrote: > On Sat, Feb 16, 2013 at 3:56 PM, Travis Oliphant > wrote: > > We should take this discussion off list. > > Just as a bystander interested in this: why? It seems that OCL is > within the scope of what's being proposed and another entrant into the > vibrant new world of compiler-extended machinery for fast numerical > work in cpython, so I suspect I'm not the only numpy user curious to > know the answer on-list. > > I know sometimes there are legitimate reasons to take a discussion > off-list, but in this case it seemed to be a perfectly reasonable > question that also made me curious (as I only learned of OCL thanks to > this discussion). 
> > Cheers, > > f > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Sat Feb 16 19:36:20 2013 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 16 Feb 2013 16:36:20 -0800 Subject: [Numpy-discussion] [numfocus] Re: A new webpage promoting Compiler technology for CPython In-Reply-To: References: <3E96A7DD-C8A2-47FB-89C4-D18EB7AEF018@gmail.com> <511FACC8.6010304@gmail.com> <511FB01C.7010808@gmail.com> <8BE21E0D-0580-480B-BC51-D5C8A6B86EB1@gmail.com> Message-ID: On Sat, Feb 16, 2013 at 4:24 PM, Travis Oliphant wrote: > If you want to discuss this in public. Let's have the discussion over at > numfocus at googlegroups.com until a more specific list is created. Sounds good. I actually think numfocus is a great list for these kind of 'in-between' discussions that don't quite fit a specific project too well, but for things that also don't warrant their own mailing list yet. I'd love it if the numfocus list was always seen as a good space for that kind of 'connective tissue' work in our own community. Obviously there's a point where such topics may evolve into their own projects, but in the past we haven't had a good space for that kind of thing, and in my mind the numfocus list can really help play that role. Cheers, f From lists at hilboll.de Sun Feb 17 05:50:35 2013 From: lists at hilboll.de (Andreas Hilboll) Date: Sun, 17 Feb 2013 11:50:35 +0100 Subject: [Numpy-discussion] np.percentile docstring Message-ID: <5120B5FB.9010907@hilboll.de> In my numpy 1.6.1 (from Ubuntu 12.04LTS repository), the docstring of np.percentile is wrong. I'm not just submitting a PR because I don't understand something. in the "Notes" and "Examples" sections, there seems to be some confusion if ``q`` should be in the range [0,1] or in [0,100]. The parameters section is very clear about this, it should be [0,100]. However, in the Examples section, it says >>> np.percentile(a, 0.5, axis=0) array([ 6.5, 4.5, 2.5]) The given array is clearly the result of a call to np.percentile(a, 50., axis=0). I thought the docs are auto-generated, and that the "array..." result of the docstring would be calculated by numpy while building the docs? Or am I misunderstanding something here? Cheers, Andreas. From sebastian at sipsolutions.net Sun Feb 17 06:40:30 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Sun, 17 Feb 2013 12:40:30 +0100 Subject: [Numpy-discussion] np.percentile docstring In-Reply-To: <5120B5FB.9010907@hilboll.de> References: <5120B5FB.9010907@hilboll.de> Message-ID: <1361101230.4669.6.camel@sebastian-laptop> Hey, On Sun, 2013-02-17 at 11:50 +0100, Andreas Hilboll wrote: > In my numpy 1.6.1 (from Ubuntu 12.04LTS repository), the docstring of > np.percentile is wrong. I'm not just submitting a PR because I don't > understand something. > > in the "Notes" and "Examples" sections, there seems to be some confusion > if ``q`` should be in the range [0,1] or in [0,100]. The parameters > section is very clear about this, it should be [0,100]. > > However, in the Examples section, it says > > >>> np.percentile(a, 0.5, axis=0) > array([ 6.5, 4.5, 2.5]) > > The given array is clearly the result of a call to np.percentile(a, 50., > axis=0). > > I thought the docs are auto-generated, and that the "array..." 
result of > the docstring would be calculated by numpy while building the docs? Or > am I misunderstanding something here? They are not auto generated, in principle the other way would be possible, i.e. numpy can use the documentation to test the code. But the doctests have been relatively broken for a while I think, so there is at the time no way to automatically find such issues. There is some work right now going on for percentile, but please just do your pull request. Regards, Sebastian > > Cheers, Andreas. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From heng at cantab.net Sun Feb 17 10:58:39 2013 From: heng at cantab.net (Henry Gomersall) Date: Sun, 17 Feb 2013 15:58:39 +0000 Subject: [Numpy-discussion] FFTW bindings now implement numpy.fft interface Message-ID: <1361116719.17035.10.camel@farnsworth> Some of you may be interested in the latest release of my FFTW bindings. It can now serve as a drop in replacement* for numpy.fft and scipy.fftpack. This means you can get most of the speed-up of FFTW with a one line code change or monkey patch existing libraries. Lots of other goodness too of course. Source here: https://github.com/hgomersall/pyFFTW pypi here: http://pypi.python.org/pypi/pyFFTW docs here: http://hgomersall.github.com/pyFFTW/ It's GPL3 due to license restrictions on FFTW. Get in touch if you want a different license and I'm sure we can reach an agreement ;) Cheers, Henry *In the case where the input array is not a numpy array, it doesn't work. This was an oversight and will be fixed in the next release. That said, if you're converting from a list on every transform, you have better optimisations than using FFTW. One other small caveat in a corner case to do with repeated axes - described in the docs. From stevenj at alum.mit.edu Sun Feb 17 11:12:43 2013 From: stevenj at alum.mit.edu (Steven G. Johnson) Date: Sun, 17 Feb 2013 11:12:43 -0500 Subject: [Numpy-discussion] calling NumPy from Julia - a plea for fewer macros Message-ID: Dear NumPy developers, I've been working on a glue package that allows the Julia language (http://julialang.org/) to call Python routines easily https://github.com/stevengj/PyCall.jl and I'm using NumPy to pass multidimensional arrays between languages. Julia has the ability to call C functions directly (without writing C glue), and I've been exploiting this to write PyCall purely in Julia. (This is nice for a number of reasons; besides programming and linking convenience, it means that I can dynamically load different Python versions on the same machine, and don't need to recompile if e.g. NumPy is updated.) However, calling NumPy has been a challenge, because of NumPy's heavy reliance on macros in its C API. I wanted to make a couple of suggestions to keep in mind as you plan for NumPy 2.0: 1) Dynamically linking to NumPy's C API was challenging, to say the least. Assuming you stick with the PyArray_API lookup table of pointers, it would be much easier to call from other languages if you include e.g. a numpy.core.multiarray._ARRAY_API_NAMES variable in the Python module that is a list of strings giving the symbol names corresponding to the numpy.core.multiarray._ARRAY_API pointer. (Plus documentation, of course.) Currently I need to parse __multiarray_api.h to extract this information, which is somewhat hackish. 
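(For reference, a minimal sketch of what the lookup involves from the
Python side today; this assumes a Python 3 build, where
numpy.core.multiarray._ARRAY_API is an unnamed PyCapsule, while on
Python 2 it is a PyCObject and PyCObject_AsVoidPtr would be needed
instead:)

import ctypes
import numpy.core.multiarray as ma

get_pointer = ctypes.pythonapi.PyCapsule_GetPointer
get_pointer.restype = ctypes.c_void_p
get_pointer.argtypes = [ctypes.py_object, ctypes.c_char_p]

# Base address of the PyArray_API pointer table; entry i is the i-th
# symbol listed in __multiarray_api.h.
table_base = get_pointer(ma._ARRAY_API, None)

Getting the base address is the easy part; the mapping from symbol names
to offsets in that table is what currently has to be scraped out of the
header, and is what a _ARRAY_API_NAMES list would make unnecessary.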
2) Please provide non-macro equivalents (exported in the _ARRAY_API symbol table or otherwise) of PyArray_NDIM etcetera to access PyArrayObject members. (e.g. call them PyArray_ndim etc. Note that inline functions are not enough, since they are not loadable dynamically.) Right now, the only ways[*] I can see to access this information are either to use C glue (which I want to avoid for the reasons above) or to call Python to access the __array_interface__ attribute (which is suboptimal from a performance standpoint). Thanks for all your efforts! Any feedback on PyCall would be welcome, too. --SGJ [*] A third way would be to parse ndarraytypes.h to extract the format of the PyArrayObject_fields structure, and use upcoming Julia support for accessing C struct types to read the fields. This is likely to require tracking NumPy releases carefully to avoid breakage, however, as well as involving some care with the PyObject_HEAD macro. PS. If you want to try out PyCall with NumPy, note that a patch to Julia is currently required for this to work: https://github.com/JuliaLang/julia/pull/2317 From njs at pobox.com Sun Feb 17 11:33:48 2013 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 17 Feb 2013 08:33:48 -0800 Subject: [Numpy-discussion] calling NumPy from Julia - a plea for fewer macros In-Reply-To: References: Message-ID: On 17 Feb 2013 08:13, "Steven G. Johnson" wrote: > Julia has the ability to call C functions directly (without writing C > glue), and I've been exploiting this to write PyCall purely in Julia. > (This is nice for a number of reasons; besides programming and linking > convenience, it means that I can dynamically load different Python > versions on the same machine, and don't need to recompile if e.g. NumPy > is updated.) However, calling NumPy has been a challenge, because of > NumPy's heavy reliance on macros in its C API. > > I wanted to make a couple of suggestions to keep in mind as you plan for > NumPy 2.0: There are currently no plans to produce a NumPy 2.0, but everything you suggest would be just fine as changes to numpy 1.x. PRs gratefully accepted. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Sun Feb 17 12:38:24 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 17 Feb 2013 12:38:24 -0500 Subject: [Numpy-discussion] FFTW bindings now implement numpy.fft interface References: <1361116719.17035.10.camel@farnsworth> Message-ID: Henry Gomersall wrote: > Some of you may be interested in the latest release of my FFTW bindings. > It can now serve as a drop in replacement* for numpy.fft and > scipy.fftpack. > > This means you can get most of the speed-up of FFTW with a one line code > change or monkey patch existing libraries. > > Lots of other goodness too of course. > > Source here: https://github.com/hgomersall/pyFFTW > pypi here: http://pypi.python.org/pypi/pyFFTW > docs here: http://hgomersall.github.com/pyFFTW/ > > It's GPL3 due to license restrictions on FFTW. Get in touch if you want > a different license and I'm sure we can reach an agreement ;) > > Cheers, > > Henry > > *In the case where the input array is not a numpy array, it doesn't > work. This was an oversight and will be fixed in the next release. That > said, if you're converting from a list on every transform, you have > better optimisations than using FFTW. One other small caveat in a corner > case to do with repeated axes - described in the docs. 
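(As a concrete sketch of the drop-in usage announced above, assuming
pyFFTW 0.9.0 where the compatibility layer lives in the
pyfftw.interfaces subpackage:)

import numpy
import pyfftw.interfaces.numpy_fft as fftw_fft

a = numpy.random.randn(1024) + 1j*numpy.random.randn(1024)
b = fftw_fft.fft(a)   # same call signature and semantics as numpy.fft.fft

# or monkey patch code that looks up numpy.fft.fft at call time
# (modules that did "from numpy.fft import fft" earlier are unaffected):
numpy.fft = fftw_fft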
The 1st example says: >>> import pyfftw >>> import numpy >>> a = pyfftw.n_byte_align_empty(128, 16, 'complex128') >>> a[:] = numpy.random.randn(128) + 1j*numpy.random.randn(128) >>> b = pyfftw.interfaces.numpy_fft.fft(a) I don't see why I need to specify the alignment. The fftw library has a function to allocate aligned arrays that are allocated optimally. Why doesn't pyfft.n_byte_align_empty just align things correctly without me having to tell it the alignment? From heng at cantab.net Sun Feb 17 14:36:50 2013 From: heng at cantab.net (Henry Gomersall) Date: Sun, 17 Feb 2013 19:36:50 +0000 Subject: [Numpy-discussion] FFTW bindings now implement numpy.fft interface In-Reply-To: References: <1361116719.17035.10.camel@farnsworth> Message-ID: <1361129810.17035.21.camel@farnsworth> On Sun, 2013-02-17 at 12:38 -0500, Neal Becker wrote: > The 1st example says: > >>> import pyfftw > >>> import numpy > >>> a = pyfftw.n_byte_align_empty(128, 16, 'complex128') > >>> a[:] = numpy.random.randn(128) + 1j*numpy.random.randn(128) > >>> b = pyfftw.interfaces.numpy_fft.fft(a) > > I don't see why I need to specify the alignment. The fftw library has > a > function to allocate aligned arrays that are allocated optimally. Why > doesn't > pyfft.n_byte_align_empty just align things correctly without me having > to tell > it the alignment? No very good reason. When I started it was simply easier to have numpy handle the memory management, which is what is still used, and that precluded using FFTW's memory manager. It was all written when FFTW didn't support AVX and so basically 16-byte alignment was all that was needed. Extending to support 32-byte alignment was done in this framework because I deemed there were more important things to work on. It is possible to achieve the same result by replacing the alignment argument with pyfft.simd_alignment, which acquires the optimum alignment by inspecting the CPU (on x86/amd64). This isn't obvious from the tutorial, which is a deficiency borne of limited time. Why didn't I write a function that does that for you? I'm sure I had a good reason at the time ;) Consider this a 0.9.1 feature (i'll add an issue now). hen From charlesr.harris at gmail.com Sun Feb 17 14:43:20 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 17 Feb 2013 12:43:20 -0700 Subject: [Numpy-discussion] calling NumPy from Julia - a plea for fewer macros In-Reply-To: References: Message-ID: On Sun, Feb 17, 2013 at 9:12 AM, Steven G. Johnson wrote: > Dear NumPy developers, > > I've been working on a glue package that allows the Julia language > (http://julialang.org/) to call Python routines easily > https://github.com/stevengj/PyCall.jl > and I'm using NumPy to pass multidimensional arrays between languages. > > Julia has the ability to call C functions directly (without writing C > glue), and I've been exploiting this to write PyCall purely in Julia. > (This is nice for a number of reasons; besides programming and linking > convenience, it means that I can dynamically load different Python > versions on the same machine, and don't need to recompile if e.g. NumPy > is updated.) However, calling NumPy has been a challenge, because of > NumPy's heavy reliance on macros in its C API. > > I wanted to make a couple of suggestions to keep in mind as you plan for > NumPy 2.0: > > 1) Dynamically linking to NumPy's C API was challenging, to say the > least. 
Assuming you stick with the PyArray_API lookup table of > pointers, it would be much easier to call from other languages if you > include e.g. a numpy.core.multiarray._ARRAY_API_NAMES variable in the > Python module that is a list of strings giving the symbol names > corresponding to the numpy.core.multiarray._ARRAY_API pointer. (Plus > documentation, of course.) Currently I need to parse > __multiarray_api.h to extract this information, which is somewhat hackish. > > It shouldn't be too much work to provide something like that. The current API is generated, take a look at numpy/core/codegenerators/numpy_api.py. PR's welcome. > 2) Please provide non-macro equivalents (exported in the _ARRAY_API > symbol table or otherwise) of PyArray_NDIM etcetera to access > PyArrayObject members. (e.g. call them PyArray_ndim etc. Note that > inline functions are not enough, since they are not loadable > dynamically.) Right now, the only ways[*] I can see to access this > information are either to use C glue (which I want to avoid for the > reasons above) or to call Python to access the __array_interface__ > attribute (which is suboptimal from a performance standpoint). > > There are already functional versions of PyArray_NDIM and some others, put in as part of a long term project to hide the numpy internals so that we can modify structures and such at some point. We could use more work in that direction and would welcome any input/PR's you might offer. The current functions can be used instead of the macros by putting #define NPY_NO_DEPRECATED_API NPY_API_VERSION before any includes. The NPY_API_VERSION serves to mark which functions were introduced in which numpy version, so as to maintain backward compatibility with 3'rd party code. See the lines starting at 1377 in ndarraytypes.h. for currently available functions. There might also be some useful things in dynd/blaze which, IIRC, support numpy for some computations. They are located at https://github.com/ContinuumIO/ Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Feb 17 15:00:00 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 17 Feb 2013 13:00:00 -0700 Subject: [Numpy-discussion] calling NumPy from Julia - a plea for fewer macros In-Reply-To: References: Message-ID: On Sun, Feb 17, 2013 at 12:43 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Sun, Feb 17, 2013 at 9:12 AM, Steven G. Johnson wrote: > >> Dear NumPy developers, >> >> I've been working on a glue package that allows the Julia language >> (http://julialang.org/) to call Python routines easily >> https://github.com/stevengj/PyCall.jl >> and I'm using NumPy to pass multidimensional arrays between languages. >> >> Julia has the ability to call C functions directly (without writing C >> glue), and I've been exploiting this to write PyCall purely in Julia. >> (This is nice for a number of reasons; besides programming and linking >> convenience, it means that I can dynamically load different Python >> versions on the same machine, and don't need to recompile if e.g. NumPy >> is updated.) However, calling NumPy has been a challenge, because of >> NumPy's heavy reliance on macros in its C API. >> >> I wanted to make a couple of suggestions to keep in mind as you plan for >> NumPy 2.0: >> >> 1) Dynamically linking to NumPy's C API was challenging, to say the >> least. 
Assuming you stick with the PyArray_API lookup table of >> pointers, it would be much easier to call from other languages if you >> include e.g. a numpy.core.multiarray._ARRAY_API_NAMES variable in the >> Python module that is a list of strings giving the symbol names >> corresponding to the numpy.core.multiarray._ARRAY_API pointer. (Plus >> documentation, of course.) Currently I need to parse >> __multiarray_api.h to extract this information, which is somewhat hackish. >> >> > It shouldn't be too much work to provide something like that. The current > API is generated, take a look at numpy/core/codegenerators/numpy_api.py. > PR's welcome. > > >> 2) Please provide non-macro equivalents (exported in the _ARRAY_API >> symbol table or otherwise) of PyArray_NDIM etcetera to access >> PyArrayObject members. (e.g. call them PyArray_ndim etc. Note that >> inline functions are not enough, since they are not loadable >> dynamically.) Right now, the only ways[*] I can see to access this >> information are either to use C glue (which I want to avoid for the >> reasons above) or to call Python to access the __array_interface__ >> attribute (which is suboptimal from a performance standpoint). >> >> > There are already functional versions of PyArray_NDIM and some others, put > in as part of a long term project to hide the numpy internals so that we > can modify structures and such at some point. We could use more work in > that direction and would welcome any input/PR's you might offer. The current > functions can be used instead of the macros by putting > > #define NPY_NO_DEPRECATED_API NPY_API_VERSION > > before any includes. The NPY_API_VERSION serves to mark which functions > were introduced in which numpy version, so as to maintain backward > compatibility with 3'rd party code. See the lines starting at 1377 in > ndarraytypes.h. for currently available functions. > > There might also be some useful things in dynd/blaze which, IIRC, support > numpy for some computations. They are located at > https://github.com/ContinuumIO/ > > Oops, sorry, I didn't see your comments about inline functions. I don't see why something like this couldn't be supported, perhaps as a library like we have for math functions, umath.so. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From stevenj at alum.mit.edu Sun Feb 17 20:43:49 2013 From: stevenj at alum.mit.edu (Steven G. Johnson) Date: Sun, 17 Feb 2013 20:43:49 -0500 Subject: [Numpy-discussion] calling NumPy from Julia - a plea for fewer macros In-Reply-To: References: Message-ID: Nathaniel Smith wrote: > There are currently no plans to produce a NumPy 2.0, but everything you > suggest would be just fine as changes to numpy 1.x. PRs gratefully accepted. Thanks, just posted https://github.com/numpy/numpy/issues/2997 https://github.com/numpy/numpy/issues/2998 --SGJ From ml at redcowmedia.de Mon Feb 18 02:52:23 2013 From: ml at redcowmedia.de (Michael =?ISO-8859-1?Q?L=F6ffler?=) Date: Mon, 18 Feb 2013 14:52:23 +0700 Subject: [Numpy-discussion] Suggestion for a speed improved accumarray receipe Message-ID: <1361173943.16219.8.camel@terpsichore> Hello numpy list! I had tried to contribute to the AccumarrayLike recipe cookbook, but access seems to be restricted and no registration allowed. The current recipe mimics the matlab version in most aspects, but concerning performance it's horribly slow. 
The following snippet handles only the most simple usecase, but gives about 16x speed improvement compared to the current receipe, so it would probably worth to also mention it in the cookbook: def accum_custom(accmap, a, func=np.sum): indices = np.where(np.ediff1d(accmap, to_begin=[1], to_end=[np.nan]))[0] vals = np.zeros(len(indices) - 1) for i in xrange(len(indices) - 1): vals[i] = func(a[indices[i]:indices[i+1]]) return vals accmap = np.repeat(np.arange(100000), 20) a = np.random.randn(accmap.size) %timeit accum(accmap, a, func=np.sum) >>> 1 loops, best of 3: 16.7 s per loop %timeit accum_custom(accmap, a, func=np.sum) >>> 1 loops, best of 3: 945 ms per loop Best regards, Michael From sergio.callegari at gmail.com Mon Feb 18 10:38:27 2013 From: sergio.callegari at gmail.com (Sergio Callegari) Date: Mon, 18 Feb 2013 15:38:27 +0000 (UTC) Subject: [Numpy-discussion] Windows, blas, atlas and dlls Message-ID: Hi, I have a project that includes a cython script which in turn does some direct access to a couple of cblas functions. This is necessary, since some matrix multiplications need to be done inside a tight loop that gets called thousands of times. Speedup wrt calling scipy.linalg.blas.cblas routines is 10x to 20x. Now, all this is very nice on linux where the setup script can assure that the cython code gets linked with the atlas dynamic library, which is the same library that numpy and scipy link to on this platform. However, I now have trouble in providing easy ways to use my project in windows. All the free windows distros for scientific python that I have looked at (python(x,y) and winpython) seem to repackage the windows version of numpy/scipy as it is built in the numpy/scipy development sites. These appear to statically link atlas inside some pyd files. So I get no atlas to link against, and I have to ship an additional pre-built atlas with my project. All this seems somehow inconvenient. In the end, when my code runs, due to static linking I get 3 replicas of 2 slightly different atlas libs in memory. One coming with _dotblas.pyd in numpy, another one with cblas.pyd or fblas.pyd in scipy. And the last one as the one shipped in my code. Would it be possible to have a win distro of scipy which provides some pre built atlas dlls, and to have numpy and scipy dynamically link to them? This would save memory and also provide a decent blas to link to for things done in cython. But I believe there must be some problem since the scipy site says "IMPORTANT: NumPy and SciPy in Windows can currently only make use of CBLAS and LAPACK as static libraries - DLLs are not supported." Can someone please explain why or link to an explanation? Unfortunately, not having a good, prebuilt and cheap blas implementation in windows is really striking me as a severe limitation, since you loose the ability to prototype in python/scipy and then move to C or Cython the major bottlenecks to achieve speed. Many thanks in advance! From rif at google.com Mon Feb 18 11:26:19 2013 From: rif at google.com (rif) Date: Mon, 18 Feb 2013 08:26:19 -0800 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: Message-ID: I have no answer to the question, but I was curious as to why directly calling the cblas would be 10x-20x slower in the first place. That seems surprising, although I'm just learning about python numerics. 
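(The 10x-20x figure quoted from the original post is a speed-up for the
direct call, and it is essentially per-call overhead: the matrices are
small and the call sits in a tight loop, so the time spent in the
Python/f2py wrapper machinery dominates the time spent in BLAS itself.
A rough way to see this from the Python side, using the same
scipy.linalg.blas.fblas wrappers mentioned elsewhere in this thread;
the absolute numbers are machine dependent and only indicate where the
time goes:)

import numpy as np
from scipy.linalg.blas import fblas

a4 = np.asfortranarray(np.random.randn(4, 4))
b4 = np.asfortranarray(np.random.randn(4, 4))
a400 = np.asfortranarray(np.random.randn(400, 400))
b400 = np.asfortranarray(np.random.randn(400, 400))

# tiny problem: almost all of the time is argument parsing and wrapper
# overhead, which a direct cblas call from Cython avoids
%timeit fblas.dgemm(1.0, a4, b4)
# large problem: the wrapper overhead is negligible next to the BLAS work
%timeit fblas.dgemm(1.0, a400, b400)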
On Mon, Feb 18, 2013 at 7:38 AM, Sergio Callegari < sergio.callegari at gmail.com> wrote: > Hi, > > I have a project that includes a cython script which in turn does some > direct > access to a couple of cblas functions. This is necessary, since some matrix > multiplications need to be done inside a tight loop that gets called > thousands > of times. Speedup wrt calling scipy.linalg.blas.cblas routines is 10x to > 20x. > > Now, all this is very nice on linux where the setup script can assure that > the > cython code gets linked with the atlas dynamic library, which is the same > library that numpy and scipy link to on this platform. > > However, I now have trouble in providing easy ways to use my project in > windows. All the free windows distros for scientific python that I have > looked at (python(x,y) and winpython) seem to repackage the windows > version of > numpy/scipy as it is built in the numpy/scipy development sites. These > appear > to statically link atlas inside some pyd files. So I get no atlas to link > against, and I have to ship an additional pre-built atlas with my project. > > All this seems somehow inconvenient. > > In the end, when my code runs, due to static linking I get 3 replicas of 2 > slightly different atlas libs in memory. One coming with _dotblas.pyd in > numpy, > another one with cblas.pyd or fblas.pyd in scipy. And the last one as the > one > shipped in my code. > > Would it be possible to have a win distro of scipy which provides some > pre built atlas dlls, and to have numpy and scipy dynamically link to them? > This would save memory and also provide a decent blas to link to for things > done in cython. But I believe there must be some problem since the scipy > site > says > > "IMPORTANT: NumPy and SciPy in Windows can currently only make use of > CBLAS and > LAPACK as static libraries - DLLs are not supported." > > Can someone please explain why or link to an explanation? > > Unfortunately, not having a good, prebuilt and cheap blas implementation in > windows is really striking me as a severe limitation, since you loose the > ability to prototype in python/scipy and then move to C or Cython the major > bottlenecks to achieve speed. > > Many thanks in advance! > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sole at esrf.fr Mon Feb 18 11:26:50 2013 From: sole at esrf.fr (=?ISO-8859-1?Q?=22V=2E_Armando_Sol=E9=22?=) Date: Mon, 18 Feb 2013 17:26:50 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: Message-ID: <5122564A.6030006@esrf.fr> Hi Sergio, I faced a similar problem one year ago. I solved it writing a C function receiving a pointer to the relevant linear algebra routine I needed. Numpy does not offers the direct access to the underlying library functions, but scipy does it: from scipy.linalg.blas import fblas dgemm = fblas.dgemm._cpointer sgemm = fblas.sgemm._cpointer So I wrote a small extension receiving the data to operate with and the relevant pointer. The drawback of the approach is the dependency on scipy but it works nicely. Armando On 18/02/2013 16:38, Sergio Callegari wrote: > Hi, > > I have a project that includes a cython script which in turn does some direct > access to a couple of cblas functions. 
This is necessary, since some matrix > multiplications need to be done inside a tight loop that gets called thousands > of times. Speedup wrt calling scipy.linalg.blas.cblas routines is 10x to 20x. > > Now, all this is very nice on linux where the setup script can assure that the > cython code gets linked with the atlas dynamic library, which is the same > library that numpy and scipy link to on this platform. > > However, I now have trouble in providing easy ways to use my project in > windows. All the free windows distros for scientific python that I have > looked at (python(x,y) and winpython) seem to repackage the windows version of > numpy/scipy as it is built in the numpy/scipy development sites. These appear > to statically link atlas inside some pyd files. So I get no atlas to link > against, and I have to ship an additional pre-built atlas with my project. > > All this seems somehow inconvenient. > > In the end, when my code runs, due to static linking I get 3 replicas of 2 > slightly different atlas libs in memory. One coming with _dotblas.pyd in numpy, > another one with cblas.pyd or fblas.pyd in scipy. And the last one as the one > shipped in my code. > > Would it be possible to have a win distro of scipy which provides some > pre built atlas dlls, and to have numpy and scipy dynamically link to them? > This would save memory and also provide a decent blas to link to for things > done in cython. But I believe there must be some problem since the scipy site > says > > "IMPORTANT: NumPy and SciPy in Windows can currently only make use of CBLAS and > LAPACK as static libraries - DLLs are not supported." > > Can someone please explain why or link to an explanation? > > Unfortunately, not having a good, prebuilt and cheap blas implementation in > windows is really striking me as a severe limitation, since you loose the > ability to prototype in python/scipy and then move to C or Cython the major > bottlenecks to achieve speed. > > Many thanks in advance! > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From d.s.seljebotn at astro.uio.no Mon Feb 18 11:28:36 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Mon, 18 Feb 2013 17:28:36 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: Message-ID: <512256B4.50206@astro.uio.no> On 02/18/2013 05:26 PM, rif wrote: > I have no answer to the question, but I was curious as to why directly > calling the cblas would be 10x-20x slower in the first place. That > seems surprising, although I'm just learning about python numerics. The statement was that directly (on the Cython level) calling cblas is 10x-20x slower than going through the (slow) SciPy wrapper routines. That makes a lot of sense if the matrices are smalle nough. Dag Sverre > > > On Mon, Feb 18, 2013 at 7:38 AM, Sergio Callegari > > wrote: > > Hi, > > I have a project that includes a cython script which in turn does > some direct > access to a couple of cblas functions. This is necessary, since some > matrix > multiplications need to be done inside a tight loop that gets called > thousands > of times. Speedup wrt calling scipy.linalg.blas.cblas routines is > 10x to 20x. > > Now, all this is very nice on linux where the setup script can > assure that the > cython code gets linked with the atlas dynamic library, which is the > same > library that numpy and scipy link to on this platform. 
> > However, I now have trouble in providing easy ways to use my project in > windows. All the free windows distros for scientific python that I have > looked at (python(x,y) and winpython) seem to repackage the windows > version of > numpy/scipy as it is built in the numpy/scipy development sites. > These appear > to statically link atlas inside some pyd files. So I get no atlas > to link > against, and I have to ship an additional pre-built atlas with my > project. > > All this seems somehow inconvenient. > > In the end, when my code runs, due to static linking I get 3 > replicas of 2 > slightly different atlas libs in memory. One coming with > _dotblas.pyd in numpy, > another one with cblas.pyd or fblas.pyd in scipy. And the last one > as the one > shipped in my code. > > Would it be possible to have a win distro of scipy which provides some > pre built atlas dlls, and to have numpy and scipy dynamically link > to them? > This would save memory and also provide a decent blas to link to for > things > done in cython. But I believe there must be some problem since the > scipy site > says > > "IMPORTANT: NumPy and SciPy in Windows can currently only make use > of CBLAS and > LAPACK as static libraries - DLLs are not supported." > > Can someone please explain why or link to an explanation? > > Unfortunately, not having a good, prebuilt and cheap blas > implementation in > windows is really striking me as a severe limitation, since you > loose the > ability to prototype in python/scipy and then move to C or Cython > the major > bottlenecks to achieve speed. > > Many thanks in advance! > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From d.s.seljebotn at astro.uio.no Mon Feb 18 11:29:23 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Mon, 18 Feb 2013 17:29:23 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: <512256B4.50206@astro.uio.no> References: <512256B4.50206@astro.uio.no> Message-ID: <512256E3.4050802@astro.uio.no> On 02/18/2013 05:28 PM, Dag Sverre Seljebotn wrote: > On 02/18/2013 05:26 PM, rif wrote: >> I have no answer to the question, but I was curious as to why directly >> calling the cblas would be 10x-20x slower in The statement was that directly (on the Cython level) calling cblas is 10x-20x slower than going through the (slow) SciPy wrapper routines. That makes a lot of sense if the matrices are smalle nough. the first place. That >> seems surprising, although I'm just learning about python numerics. > > The statement was that directly (on the Cython level) calling cblas is > 10x-20x slower than going through the (slow) SciPy wrapper routines. > That makes a lot of sense if the matrices are smalle nough. > Argh. I meant: The statement was that directly (on the Cython level) calling cblas is 10x-20x **faster** than going through the (slow) SciPy wrapper routines. That makes a lot of sense if the matrices are small enough. Dag Sverre > Dag Sverre > >> >> >> On Mon, Feb 18, 2013 at 7:38 AM, Sergio Callegari >> > wrote: >> >> Hi, >> >> I have a project that includes a cython script which in turn does >> some direct >> access to a couple of cblas functions. 
This is necessary, since some >> matrix >> multiplications need to be done inside a tight loop that gets called >> thousands >> of times. Speedup wrt calling scipy.linalg.blas.cblas routines is >> 10x to 20x. >> >> Now, all this is very nice on linux where the setup script can >> assure that the >> cython code gets linked with the atlas dynamic library, which is the >> same >> library that numpy and scipy link to on this platform. >> >> However, I now have trouble in providing easy ways to use my >> project in >> windows. All the free windows distros for scientific python that I >> have >> looked at (python(x,y) and winpython) seem to repackage the windows >> version of >> numpy/scipy as it is built in the numpy/scipy development sites. >> These appear >> to statically link atlas inside some pyd files. So I get no atlas >> to link >> against, and I have to ship an additional pre-built atlas with my >> project. >> >> All this seems somehow inconvenient. >> >> In the end, when my code runs, due to static linking I get 3 >> replicas of 2 >> slightly different atlas libs in memory. One coming with >> _dotblas.pyd in numpy, >> another one with cblas.pyd or fblas.pyd in scipy. And the last one >> as the one >> shipped in my code. >> >> Would it be possible to have a win distro of scipy which provides >> some >> pre built atlas dlls, and to have numpy and scipy dynamically link >> to them? >> This would save memory and also provide a decent blas to link to for >> things >> done in cython. But I believe there must be some problem since the >> scipy site >> says >> >> "IMPORTANT: NumPy and SciPy in Windows can currently only make use >> of CBLAS and >> LAPACK as static libraries - DLLs are not supported." >> >> Can someone please explain why or link to an explanation? >> >> Unfortunately, not having a good, prebuilt and cheap blas >> implementation in >> windows is really striking me as a severe limitation, since you >> loose the >> ability to prototype in python/scipy and then move to C or Cython >> the major >> bottlenecks to achieve speed. >> >> Many thanks in advance! >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > From rif at google.com Mon Feb 18 11:29:58 2013 From: rif at google.com (rif) Date: Mon, 18 Feb 2013 08:29:58 -0800 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: <512256B4.50206@astro.uio.no> References: <512256B4.50206@astro.uio.no> Message-ID: But I'd hope that the overhead for going through the wrappers is constant, rather than dependent on the size, so that for large matrices you'd get essentially equivalent performance? On Mon, Feb 18, 2013 at 8:28 AM, Dag Sverre Seljebotn < d.s.seljebotn at astro.uio.no> wrote: > On 02/18/2013 05:26 PM, rif wrote: > > I have no answer to the question, but I was curious as to why directly > > calling the cblas would be 10x-20x slower in the first place. That > > seems surprising, although I'm just learning about python numerics. > > The statement was that directly (on the Cython level) calling cblas is > 10x-20x slower than going through the (slow) SciPy wrapper routines. > That makes a lot of sense if the matrices are smalle nough. 
> > Dag Sverre > > > > > > > On Mon, Feb 18, 2013 at 7:38 AM, Sergio Callegari > > > wrote: > > > > Hi, > > > > I have a project that includes a cython script which in turn does > > some direct > > access to a couple of cblas functions. This is necessary, since some > > matrix > > multiplications need to be done inside a tight loop that gets called > > thousands > > of times. Speedup wrt calling scipy.linalg.blas.cblas routines is > > 10x to 20x. > > > > Now, all this is very nice on linux where the setup script can > > assure that the > > cython code gets linked with the atlas dynamic library, which is the > > same > > library that numpy and scipy link to on this platform. > > > > However, I now have trouble in providing easy ways to use my project > in > > windows. All the free windows distros for scientific python that I > have > > looked at (python(x,y) and winpython) seem to repackage the windows > > version of > > numpy/scipy as it is built in the numpy/scipy development sites. > > These appear > > to statically link atlas inside some pyd files. So I get no atlas > > to link > > against, and I have to ship an additional pre-built atlas with my > > project. > > > > All this seems somehow inconvenient. > > > > In the end, when my code runs, due to static linking I get 3 > > replicas of 2 > > slightly different atlas libs in memory. One coming with > > _dotblas.pyd in numpy, > > another one with cblas.pyd or fblas.pyd in scipy. And the last one > > as the one > > shipped in my code. > > > > Would it be possible to have a win distro of scipy which provides > some > > pre built atlas dlls, and to have numpy and scipy dynamically link > > to them? > > This would save memory and also provide a decent blas to link to for > > things > > done in cython. But I believe there must be some problem since the > > scipy site > > says > > > > "IMPORTANT: NumPy and SciPy in Windows can currently only make use > > of CBLAS and > > LAPACK as static libraries - DLLs are not supported." > > > > Can someone please explain why or link to an explanation? > > > > Unfortunately, not having a good, prebuilt and cheap blas > > implementation in > > windows is really striking me as a severe limitation, since you > > loose the > > ability to prototype in python/scipy and then move to C or Cython > > the major > > bottlenecks to achieve speed. > > > > Many thanks in advance! > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Mon Feb 18 12:10:44 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 18 Feb 2013 10:10:44 -0700 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: <512256B4.50206@astro.uio.no> References: <512256B4.50206@astro.uio.no> Message-ID: On Mon, Feb 18, 2013 at 9:28 AM, Dag Sverre Seljebotn < d.s.seljebotn at astro.uio.no> wrote: > On 02/18/2013 05:26 PM, rif wrote: > > I have no answer to the question, but I was curious as to why directly > > calling the cblas would be 10x-20x slower in the first place. That > > seems surprising, although I'm just learning about python numerics. > > The statement was that directly (on the Cython level) calling cblas is > 10x-20x slower than going through the (slow) SciPy wrapper routines. > That makes a lot of sense if the matrices are smalle nough. > For really small matrices, not using blas at all provides another speedup. Chuck > > Dag Sverre > > > > > > > On Mon, Feb 18, 2013 at 7:38 AM, Sergio Callegari > > > wrote: > > > > Hi, > > > > I have a project that includes a cython script which in turn does > > some direct > > access to a couple of cblas functions. This is necessary, since some > > matrix > > multiplications need to be done inside a tight loop that gets called > > thousands > > of times. Speedup wrt calling scipy.linalg.blas.cblas routines is > > 10x to 20x. > > > > Now, all this is very nice on linux where the setup script can > > assure that the > > cython code gets linked with the atlas dynamic library, which is the > > same > > library that numpy and scipy link to on this platform. > > > > However, I now have trouble in providing easy ways to use my project > in > > windows. All the free windows distros for scientific python that I > have > > looked at (python(x,y) and winpython) seem to repackage the windows > > version of > > numpy/scipy as it is built in the numpy/scipy development sites. > > These appear > > to statically link atlas inside some pyd files. So I get no atlas > > to link > > against, and I have to ship an additional pre-built atlas with my > > project. > > > > All this seems somehow inconvenient. > > > > In the end, when my code runs, due to static linking I get 3 > > replicas of 2 > > slightly different atlas libs in memory. One coming with > > _dotblas.pyd in numpy, > > another one with cblas.pyd or fblas.pyd in scipy. And the last one > > as the one > > shipped in my code. > > > > Would it be possible to have a win distro of scipy which provides > some > > pre built atlas dlls, and to have numpy and scipy dynamically link > > to them? > > This would save memory and also provide a decent blas to link to for > > things > > done in cython. But I believe there must be some problem since the > > scipy site > > says > > > > "IMPORTANT: NumPy and SciPy in Windows can currently only make use > > of CBLAS and > > LAPACK as static libraries - DLLs are not supported." > > > > Can someone please explain why or link to an explanation? > > > > Unfortunately, not having a good, prebuilt and cheap blas > > implementation in > > windows is really striking me as a severe limitation, since you > > loose the > > ability to prototype in python/scipy and then move to C or Cython > > the major > > bottlenecks to achieve speed. > > > > Many thanks in advance! 
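(One pure-numpy way to act on the "not using blas at all" remark above,
when there are many small matrices to multiply at once; a sketch that
needs numpy >= 1.6 for einsum:)

import numpy as np

# 10000 independent 3x3 products in a single vectorised call; no BLAS is
# involved, and there is no per-matrix Python or BLAS call overhead.
A = np.random.randn(10000, 3, 3)
B = np.random.randn(10000, 3, 3)
C = np.einsum('nij,njk->nik', A, B)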
> > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.s.seljebotn at astro.uio.no Mon Feb 18 12:20:42 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Mon, 18 Feb 2013 18:20:42 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> Message-ID: <512262EA.2000304@astro.uio.no> On 02/18/2013 05:29 PM, rif wrote: > But I'd hope that the overhead for going through the wrappers is > constant, rather than dependent on the size, so that for large matrices > you'd get essentially equivalent performance? That is correct. Ah, so then the quality of the BLAS matters much less in this situation. But if you have a code that is used with either many small or fewer large matrices, then a compiled loop over a good BLAS is a good compromise without splitting up the code paths. DS > > > On Mon, Feb 18, 2013 at 8:28 AM, Dag Sverre Seljebotn > > wrote: > > On 02/18/2013 05:26 PM, rif wrote: > > I have no answer to the question, but I was curious as to why > directly > > calling the cblas would be 10x-20x slower in the first place. That > > seems surprising, although I'm just learning about python numerics. > > The statement was that directly (on the Cython level) calling cblas is > 10x-20x slower than going through the (slow) SciPy wrapper routines. > That makes a lot of sense if the matrices are smalle nough. > > Dag Sverre > > > > > > > On Mon, Feb 18, 2013 at 7:38 AM, Sergio Callegari > > > >> wrote: > > > > Hi, > > > > I have a project that includes a cython script which in turn does > > some direct > > access to a couple of cblas functions. This is necessary, > since some > > matrix > > multiplications need to be done inside a tight loop that gets > called > > thousands > > of times. Speedup wrt calling scipy.linalg.blas.cblas routines is > > 10x to 20x. > > > > Now, all this is very nice on linux where the setup script can > > assure that the > > cython code gets linked with the atlas dynamic library, which > is the > > same > > library that numpy and scipy link to on this platform. > > > > However, I now have trouble in providing easy ways to use my > project in > > windows. All the free windows distros for scientific python > that I have > > looked at (python(x,y) and winpython) seem to repackage the > windows > > version of > > numpy/scipy as it is built in the numpy/scipy development sites. > > These appear > > to statically link atlas inside some pyd files. So I get no > atlas > > to link > > against, and I have to ship an additional pre-built atlas with my > > project. > > > > All this seems somehow inconvenient. > > > > In the end, when my code runs, due to static linking I get 3 > > replicas of 2 > > slightly different atlas libs in memory. One coming with > > _dotblas.pyd in numpy, > > another one with cblas.pyd or fblas.pyd in scipy. And the > last one > > as the one > > shipped in my code. 
> > > > Would it be possible to have a win distro of scipy which > provides some > > pre built atlas dlls, and to have numpy and scipy dynamically > link > > to them? > > This would save memory and also provide a decent blas to link > to for > > things > > done in cython. But I believe there must be some problem > since the > > scipy site > > says > > > > "IMPORTANT: NumPy and SciPy in Windows can currently only > make use > > of CBLAS and > > LAPACK as static libraries - DLLs are not supported." > > > > Can someone please explain why or link to an explanation? > > > > Unfortunately, not having a good, prebuilt and cheap blas > > implementation in > > windows is really striking me as a severe limitation, since you > > loose the > > ability to prototype in python/scipy and then move to C or Cython > > the major > > bottlenecks to achieve speed. > > > > Many thanks in advance! > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From pav at iki.fi Mon Feb 18 12:48:37 2013 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 18 Feb 2013 19:48:37 +0200 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: <512262EA.2000304@astro.uio.no> References: <512256B4.50206@astro.uio.no> <512262EA.2000304@astro.uio.no> Message-ID: 18.02.2013 19:20, Dag Sverre Seljebotn kirjoitti: > On 02/18/2013 05:29 PM, rif wrote: >> But I'd hope that the overhead for going through the wrappers is >> constant, rather than dependent on the size, so that for large matrices >> you'd get essentially equivalent performance? > > That is correct. > > Ah, so then the quality of the BLAS matters much less in this situation. > > But if you have a code that is used with either many small or fewer > large matrices, then a compiled loop over a good BLAS is a good > compromise without splitting up the code paths. I'm open to suggestions on providing low-level Cython interface to BLAS and LAPACK in scipy.linalg. I think this is possible with Cython --- we already have scipy.interpolate talking to scipy.spatial, so why not also 3rd party modules. Pull requests are accepted --- there are several interesting Cython BLAS/LAPACK projects though. -- Pauli Virtanen From d.s.seljebotn at astro.uio.no Mon Feb 18 13:41:58 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Mon, 18 Feb 2013 19:41:58 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> <512262EA.2000304@astro.uio.no> Message-ID: <512275F6.7000302@astro.uio.no> On 02/18/2013 06:48 PM, Pauli Virtanen wrote: > 18.02.2013 19:20, Dag Sverre Seljebotn kirjoitti: >> On 02/18/2013 05:29 PM, rif wrote: >>> But I'd hope that the overhead for going through the wrappers is >>> constant, rather than dependent on the size, so that for large matrices >>> you'd get essentially equivalent performance? 
>> >> That is correct. >> >> Ah, so then the quality of the BLAS matters much less in this situation. >> >> But if you have a code that is used with either many small or fewer >> large matrices, then a compiled loop over a good BLAS is a good >> compromise without splitting up the code paths. > > I'm open to suggestions on providing low-level Cython interface to BLAS > and LAPACK in scipy.linalg. I think this is possible with Cython --- we > already have scipy.interpolate talking to scipy.spatial, so why not also > 3rd party modules. > > Pull requests are accepted --- there are several interesting Cython > BLAS/LAPACK projects though. I think there should be a new project, pylapack or similar, for this, outside of NumPy and SciPy. NumPy and SciPy could try to import it, and if found, fetch a function pointer table. (If not found, just stay with what has been working for a decade.) The main motivation would be to decouple building NumPy from linking with BLAS and have that all happen at run-time. But a Cython interface would follow naturally too. I've wanted to start on this for some time but the Hashdist visions got bigger and my PhD more difficult... As for the interesting Cython BLAS/LAPACK projects, the ones I've seen (Tokyo, my own work on SciPy for .NET) isn't templated enough for my taste. I'd start with writing a YAML file describing BLAS/LAPACK, then generate the Cython code and wrappers from that, since the APIs are so regular.. Dag Sverre From pav at iki.fi Mon Feb 18 15:23:30 2013 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 18 Feb 2013 22:23:30 +0200 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: <512275F6.7000302@astro.uio.no> References: <512256B4.50206@astro.uio.no> <512262EA.2000304@astro.uio.no> <512275F6.7000302@astro.uio.no> Message-ID: 18.02.2013 20:41, Dag Sverre Seljebotn kirjoitti: [clip] > I think there should be a new project, pylapack or similar, for this, > outside of NumPy and SciPy. NumPy and SciPy could try to import it, and > if found, fetch a function pointer table. (If not found, just stay with > what has been working for a decade.) > > The main motivation would be to decouple building NumPy from linking > with BLAS and have that all happen at run-time. But a Cython interface > would follow naturally too. The main motivation for sticking it into Scipy would be a bit different --- since the build and distribution infra is in place for Scipy, putting it in scipy.linalg makes it more easily available for a larger number of people than some random 3-rd party module. We already ship low-level f2py bindings, so I don't see a reason for not shipping Cython ones too. -- Pauli Virtanen From d.s.seljebotn at astro.uio.no Mon Feb 18 16:12:10 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Mon, 18 Feb 2013 22:12:10 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> <512262EA.2000304@astro.uio.no> <512275F6.7000302@astro.uio.no> Message-ID: <5122992A.702@astro.uio.no> On 02/18/2013 09:23 PM, Pauli Virtanen wrote: > 18.02.2013 20:41, Dag Sverre Seljebotn kirjoitti: > [clip] >> I think there should be a new project, pylapack or similar, for this, >> outside of NumPy and SciPy. NumPy and SciPy could try to import it, and >> if found, fetch a function pointer table. (If not found, just stay with >> what has been working for a decade.) >> >> The main motivation would be to decouple building NumPy from linking >> with BLAS and have that all happen at run-time. 
But a Cython interface >> would follow naturally too. > > The main motivation for sticking it into Scipy would be a bit different > --- since the build and distribution infra is in place for Scipy, > putting it in scipy.linalg makes it more easily available for a larger > number of people than some random 3-rd party module. Right. In my case it's rather the case that the build and distribution infra is *not* in place for my needs :-). Yes, it was definitely from the POV of power-users on HPC clusters who want to tinker with the build, not as wide reach as possible. > > We already ship low-level f2py bindings, so I don't see a reason for not > shipping Cython ones too. Well, in that case (and for the information of anybody else interested, Pauli already knows this), the fwrap-generated files from the SciPy .NET port may be a good starting point, https://github.com/enthought/scipy-refactor/blob/refactor/scipy/linalg/flapack.pyx.in Dag Sverre From sole at esrf.fr Mon Feb 18 16:29:52 2013 From: sole at esrf.fr (V. Armando Sole) Date: Mon, 18 Feb 2013 22:29:52 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> <512262EA.2000304@astro.uio.no> <512275F6.7000302@astro.uio.no> Message-ID: <541193c3b71c8b7043bbb411b67f69ed@esrf.fr> On 18.02.2013 21:23, Pauli Virtanen wrote: > 18.02.2013 20:41, Dag Sverre Seljebotn kirjoitti: > [clip] >> I think there should be a new project, pylapack or similar, for >> this, >> outside of NumPy and SciPy. NumPy and SciPy could try to import it, >> and >> if found, fetch a function pointer table. (If not found, just stay >> with >> what has been working for a decade.) >> >> The main motivation would be to decouple building NumPy from linking >> with BLAS and have that all happen at run-time. But a Cython >> interface >> would follow naturally too. > > The main motivation for sticking it into Scipy would be a bit > different > --- since the build and distribution infra is in place for Scipy, > putting it in scipy.linalg makes it more easily available for a > larger > number of people than some random 3-rd party module. > > We already ship low-level f2py bindings, so I don't see a reason for > not > shipping Cython ones too. I find Dag's approach more appealing. SciPy can be problematic (windows 64-bit) and if one could offer access to the linear algebra functions without needing SciPy I would certainly prefer it. Armando From pav at iki.fi Mon Feb 18 16:47:59 2013 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 18 Feb 2013 23:47:59 +0200 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: <541193c3b71c8b7043bbb411b67f69ed@esrf.fr> References: <512256B4.50206@astro.uio.no> <512262EA.2000304@astro.uio.no> <512275F6.7000302@astro.uio.no> <541193c3b71c8b7043bbb411b67f69ed@esrf.fr> Message-ID: 18.02.2013 23:29, V. Armando Sole kirjoitti: [clip] > I find Dag's approach more appealing. > > SciPy can be problematic (windows 64-bit) and if one could offer access > to the linear algebra functions without needing SciPy I would certainly > prefer it. Well, the two approaches are not exclusive. Moreover, there already exist Cython wrappers for BLAS that you can just take and use. Windows 64-bit is probably problematic for everyone who wants to provide binaries --- I don't think there's a big difference in difficulty in making binaries for a light Cython wrapper to BLAS/LAPACK vs. 
providing the whole of Scipy :) -- Pauli Virtanen From sole at esrf.fr Tue Feb 19 01:05:20 2013 From: sole at esrf.fr (V. Armando Sole) Date: Tue, 19 Feb 2013 07:05:20 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> <512262EA.2000304@astro.uio.no> <512275F6.7000302@astro.uio.no> <541193c3b71c8b7043bbb411b67f69ed@esrf.fr> Message-ID: <91eebaa16e3708a7f9ccf2755cdc8389@esrf.fr> On 18.02.2013 22:47, Pauli Virtanen wrote: > 18.02.2013 23:29, V. Armando Sole kirjoitti: > [clip] >> I find Dag's approach more appealing. >> >> SciPy can be problematic (windows 64-bit) and if one could offer >> access >> to the linear algebra functions without needing SciPy I would >> certainly >> prefer it. > > Well, the two approaches are not exclusive. Moreover, there already > exist Cython wrappers for BLAS that you can just take and use. > Please correct me if I am wrong. I assume those wrappers force you to provide the shared libraries so the problem is still there. If not, I would really be interested on getting one of those wrappers :-) It is really nice to provide extensions receiving the pointer to the function to be used even under Linux: the extension does not need to be compiled each time the user changes/updates shared libraries. It is really nice to find your C extension is slow, you find ATLAS is not installed, you install it and your extension becomes very fast without needing to recompile. > Windows 64-bit is probably problematic for everyone who wants to > provide > binaries --- I don't think there's a big difference in difficulty in > making binaries for a light Cython wrapper to BLAS/LAPACK vs. > providing > the whole of Scipy :) I have an Intel Fortran compiler license just to be able to provide windows 64-bit frozen binaries and extension modules :-) but that is not enough: - If provide the MKL dll's a person willing to re-distribute the module also needs an MKL license - If I do not provide the MKL dll's the extension module is useless For the time being the best solution I have found is to use pointers to the wrapped functions in SciPy: the extension module use whatever library installed on the target system and I do not need to provide the shared libraries. It is just a pity that having the libraries in numpy, one cannot access them while one can do it in SciPy. Therefore I found Dag's approach quite nice: numpy and SciPy using the linear algebra functions via a third package providing all the needed pointers (or at least having that package available in first instance). Best regards, Armando From tladd at che.ufl.edu Tue Feb 19 10:00:25 2013 From: tladd at che.ufl.edu (Tony Ladd) Date: Tue, 19 Feb 2013 10:00:25 -0500 Subject: [Numpy-discussion] Array accumulation in numpy Message-ID: <51239389.6050409@che.ufl.edu> I want to accumulate elements of a vector (x) to an array (f) based on an index list (ind). For example: x=[1,2,3,4,5,6] ind=[1,3,9,3,4,1] f=np.zeros(10) What I want would be produced by the loop for i=range(6): f[ind[i]]=f[ind[i]]+x[i] The answer is f=array([ 0., 7., 0., 6., 5., 0., 0., 0., 0., 3.]) When I try to use implicit arguments f[ind]=f[ind]+x I get f=array([ 0., 6., 0., 4., 5., 0., 0., 0., 0., 3.]) So it takes the last value of x that is pointed to by ind and adds it to f, but its the wrong answer when there are repeats of the same entry in ind (e.g. 3 or 1) I realize my code is incorrect, but is there a way to make numpy accumulate without using loops? 
I would have thought so but I cannot find anything in the documentation. Would much appreciate any help - probably a really simple question. Thanks Tony -- Tony Ladd Chemical Engineering Department University of Florida Gainesville, Florida 32611-6005 USA Email: tladd-"(AT)"-che.ufl.edu Web http://ladd.che.ufl.edu Tel: (352)-392-6509 FAX: (352)-392-9514 From ben.root at ou.edu Tue Feb 19 10:04:18 2013 From: ben.root at ou.edu (Benjamin Root) Date: Tue, 19 Feb 2013 10:04:18 -0500 Subject: [Numpy-discussion] Array accumulation in numpy In-Reply-To: <51239389.6050409@che.ufl.edu> References: <51239389.6050409@che.ufl.edu> Message-ID: On Tue, Feb 19, 2013 at 10:00 AM, Tony Ladd wrote: > I want to accumulate elements of a vector (x) to an array (f) based on > an index list (ind). > > For example: > > x=[1,2,3,4,5,6] > ind=[1,3,9,3,4,1] > f=np.zeros(10) > > What I want would be produced by the loop > > for i=range(6): > f[ind[i]]=f[ind[i]]+x[i] > > The answer is f=array([ 0., 7., 0., 6., 5., 0., 0., 0., 0., 3.]) > > When I try to use implicit arguments > > f[ind]=f[ind]+x > > I get f=array([ 0., 6., 0., 4., 5., 0., 0., 0., 0., 3.]) > > > So it takes the last value of x that is pointed to by ind and adds it to > f, but its the wrong answer when there are repeats of the same entry in > ind (e.g. 3 or 1) > > I realize my code is incorrect, but is there a way to make numpy > accumulate without using loops? I would have thought so but I cannot > find anything in the documentation. > > Would much appreciate any help - probably a really simple question. > > Thanks > > Tony > > I believe you are looking for the equivalent of accumarray in Matlab? Try this: http://www.scipy.org/Cookbook/AccumarrayLike It is a bit touchy about lists and 1-D numpy arrays, but it does the job. Also, I think somebody posted an optimized version for simple sums recently to this list. Cheers! Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Tue Feb 19 10:16:54 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Tue, 19 Feb 2013 16:16:54 +0100 Subject: [Numpy-discussion] Array accumulation in numpy In-Reply-To: <51239389.6050409@che.ufl.edu> References: <51239389.6050409@che.ufl.edu> Message-ID: <1361287014.2572.5.camel@sebastian-laptop> On Tue, 2013-02-19 at 10:00 -0500, Tony Ladd wrote: > I want to accumulate elements of a vector (x) to an array (f) based on > an index list (ind). > > For example: > > x=[1,2,3,4,5,6] > ind=[1,3,9,3,4,1] > f=np.zeros(10) > > What I want would be produced by the loop > > for i=range(6): > f[ind[i]]=f[ind[i]]+x[i] > > The answer is f=array([ 0., 7., 0., 6., 5., 0., 0., 0., 0., 3.]) > > When I try to use implicit arguments > > f[ind]=f[ind]+x > > I get f=array([ 0., 6., 0., 4., 5., 0., 0., 0., 0., 3.]) > > > So it takes the last value of x that is pointed to by ind and adds it to > f, but its the wrong answer when there are repeats of the same entry in > ind (e.g. 3 or 1) > > I realize my code is incorrect, but is there a way to make numpy > accumulate without using loops? I would have thought so but I cannot > find anything in the documentation. > You might be interested in this: https://github.com/numpy/numpy/pull/2821 But anyway, you should however be able to do what you want to do using np.bincount with the weights keyword argument. Regards, Sebastian > Would much appreciate any help - probably a really simple question. 
> > Thanks > > Tony > From alan.isaac at gmail.com Tue Feb 19 10:17:32 2013 From: alan.isaac at gmail.com (Alan G Isaac) Date: Tue, 19 Feb 2013 10:17:32 -0500 Subject: [Numpy-discussion] Array accumulation in numpy Message-ID: <5123978C.2030402@gmail.com> x=[1,2,3,4,5,6] ind=[1,3,9,3,4,1] f=np.zeros(10) np.bincount(ind,x) hth, Alan Isaac From tladd at che.ufl.edu Tue Feb 19 10:22:56 2013 From: tladd at che.ufl.edu (Tony Ladd) Date: Tue, 19 Feb 2013 10:22:56 -0500 Subject: [Numpy-discussion] Array accumulation in numpy In-Reply-To: <5123978C.2030402@gmail.com> References: <5123978C.2030402@gmail.com> Message-ID: <512398D0.2000901@che.ufl.edu> Alan Thanks - I felt sure there had to be an easy idea. Best Tony On 02/19/2013 10:17 AM, Alan G Isaac wrote: > x=[1,2,3,4,5,6] > ind=[1,3,9,3,4,1] > f=np.zeros(10) > np.bincount(ind,x) > > hth, > Alan Isaac > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Tony Ladd Chemical Engineering Department University of Florida Gainesville, Florida 32611-6005 USA Email: tladd-"(AT)"-che.ufl.edu Web http://ladd.che.ufl.edu Tel: (352)-392-6509 FAX: (352)-392-9514 From tladd at che.ufl.edu Tue Feb 19 10:24:25 2013 From: tladd at che.ufl.edu (Tony Ladd) Date: Tue, 19 Feb 2013 10:24:25 -0500 Subject: [Numpy-discussion] Array accumulation in numpy In-Reply-To: References: <51239389.6050409@che.ufl.edu> Message-ID: <51239929.5040607@che.ufl.edu> Thanks to all for a very quick response. np.bincount does what I need. Tony On 02/19/2013 10:04 AM, Benjamin Root wrote: > > > On Tue, Feb 19, 2013 at 10:00 AM, Tony Ladd > wrote: > > I want to accumulate elements of a vector (x) to an array (f) based on > an index list (ind). > > For example: > > x=[1,2,3,4,5,6] > ind=[1,3,9,3,4,1] > f=np.zeros(10) > > What I want would be produced by the loop > > for i=range(6): > f[ind[i]]=f[ind[i]]+x[i] > > The answer is f=array([ 0., 7., 0., 6., 5., 0., 0., 0., > 0., 3.]) > > When I try to use implicit arguments > > f[ind]=f[ind]+x > > I get f=array([ 0., 6., 0., 4., 5., 0., 0., 0., 0., 3.]) > > > So it takes the last value of x that is pointed to by ind and adds > it to > f, but its the wrong answer when there are repeats of the same > entry in > ind (e.g. 3 or 1) > > I realize my code is incorrect, but is there a way to make numpy > accumulate without using loops? I would have thought so but I cannot > find anything in the documentation. > > Would much appreciate any help - probably a really simple question. > > Thanks > > Tony > > > I believe you are looking for the equivalent of accumarray in Matlab? > > Try this: > > http://www.scipy.org/Cookbook/AccumarrayLike > > It is a bit touchy about lists and 1-D numpy arrays, but it does the job. > Also, I think somebody posted an optimized version for simple sums > recently to this list. > > Cheers! 
> Ben Root > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -- Tony Ladd Chemical Engineering Department University of Florida Gainesville, Florida 32611-6005 USA Email: tladd-"(AT)"-che.ufl.edu Web http://ladd.che.ufl.edu Tel: (352)-392-6509 FAX: (352)-392-9514 From eraldo.pomponi at gmail.com Tue Feb 19 10:38:52 2013 From: eraldo.pomponi at gmail.com (Eraldo Pomponi) Date: Tue, 19 Feb 2013 16:38:52 +0100 Subject: [Numpy-discussion] Array accumulation in numpy In-Reply-To: <51239929.5040607@che.ufl.edu> References: <51239389.6050409@che.ufl.edu> <51239929.5040607@che.ufl.edu> Message-ID: Dear Tony, I would suggest to look at this post already mentioned by Benjamin ..... maybe it fits with your needs! http://numpy-discussion.10968.n7.nabble.com/Pre-allocate-array-td4870.html Cheers, Eraldo On Tue, Feb 19, 2013 at 4:24 PM, Tony Ladd wrote: > Thanks to all for a very quick response. np.bincount does what I need. > > Tony > > On 02/19/2013 10:04 AM, Benjamin Root wrote: > > > > > > On Tue, Feb 19, 2013 at 10:00 AM, Tony Ladd > > wrote: > > > > I want to accumulate elements of a vector (x) to an array (f) based > on > > an index list (ind). > > > > For example: > > > > x=[1,2,3,4,5,6] > > ind=[1,3,9,3,4,1] > > f=np.zeros(10) > > > > What I want would be produced by the loop > > > > for i=range(6): > > f[ind[i]]=f[ind[i]]+x[i] > > > > The answer is f=array([ 0., 7., 0., 6., 5., 0., 0., 0., > > 0., 3.]) > > > > When I try to use implicit arguments > > > > f[ind]=f[ind]+x > > > > I get f=array([ 0., 6., 0., 4., 5., 0., 0., 0., 0., 3.]) > > > > > > So it takes the last value of x that is pointed to by ind and adds > > it to > > f, but its the wrong answer when there are repeats of the same > > entry in > > ind (e.g. 3 or 1) > > > > I realize my code is incorrect, but is there a way to make numpy > > accumulate without using loops? I would have thought so but I cannot > > find anything in the documentation. > > > > Would much appreciate any help - probably a really simple question. > > > > Thanks > > > > Tony > > > > > > I believe you are looking for the equivalent of accumarray in Matlab? > > > > Try this: > > > > http://www.scipy.org/Cookbook/AccumarrayLike > > > > It is a bit touchy about lists and 1-D numpy arrays, but it does the job. > > Also, I think somebody posted an optimized version for simple sums > > recently to this list. > > > > Cheers! > > Ben Root > > > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -- > Tony Ladd > > Chemical Engineering Department > University of Florida > Gainesville, Florida 32611-6005 > USA > > Email: tladd-"(AT)"-che.ufl.edu > Web http://ladd.che.ufl.edu > > Tel: (352)-392-6509 > FAX: (352)-392-9514 > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rob.clewley at gmail.com Wed Feb 20 00:47:15 2013 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 20 Feb 2013 00:47:15 -0500 Subject: [Numpy-discussion] Seeking help and support for next-gen math modeling tools using Python Message-ID: Hi all, and apologies for a little cross-posting: First, thanks to those of you who have used and contributed to the PyDSTool math modeling environment [1]. This project has greatly benefitted from the underlying platform of numpy / scipy / matplotlib / ipython. Going forward I have three goals, for which I would like the urgent input and support of existing or potential users. (i) I have ideas for expanding PyDSTool with innovative tools in my research area, which is essentially for the reverse engineering of complex mechanisms in multi-scale dynamic systems [2]. These tools have already been prototyped and show promise, but they need a lot of work. (ii) I want to grow and develop the community of users who will help drive new ideas, provide feedback, and collaborate on writing and testing code for both the core and application aspects of PyDSTool. (iii) The first two goals will help me to expand the scientific / engineering applications and use cases of PyDSTool as well as further sustain the project in the long-term. I am applying for NSF funding to support these software and application goals over the next few years [3], but the proposal deadline is in just four weeks! If you are interested in helping in any way I would greatly appreciate your replies (off list) to either of the following queries: I need to better understand my existing and potential users, many of whom may not be registered on the sourceforge users list. Please tell me who you are and what you use PyDSTool for. If you are not using it yet but you?re interested in this area then please provide feedback regarding what you would like to see change. If you are interested in these future goals, even if you are not an existing user but may be in the future, please write a brief letter of support on a letterhead document that I will send in with the proposal as PDFs. I have sample text that I can send you, as well as my draft proposal?s introduction and specific aims. These letters can make a great deal of difference during review. Without funding, collaborators, user demand and community support, these more ambitious goals for PyDSTool will not happen, although I am committed to a basic level of maintenance. For instance, based on user feedback I am about to release an Ubuntu-based Live CD [4] that will allow users to try PyDSTool on any OS without having to install it. PyDSTool will also acquire an improved setup procedure and will be added to the NeuroDebian repository [5], among others. I am also finalizing an integrated interface to CUDA GPUs to perform fast parallel ODE solving [6]. Thanks for your time, Rob Clewley [1] http://pydstool.sourceforge.net [2] http://www.ni.gsu.edu/~rclewley/Research/index.html, and in particular http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1002628 [3] NSF Software Infrastructure for Sustained Innovation (SI2-SSE) program solicitation: http://www.nsf.gov/pubs/2013/nsf13525/nsf13525.htm [4] http://help.ubuntu.com/community/LiveCD [5] http://neuro.debian.net/ [6] http://www.nvidia.com/object/cuda_home_new.html -- Robert Clewley, Ph.D. 
Assistant Professor Neuroscience Institute and Department of Mathematics and Statistics Georgia State University PO Box 5030 Atlanta, GA 30302, USA tel: 404-413-6420 fax: 404-413-5446 http://neuroscience.gsu.edu/rclewley.html From sergio.callegari at gmail.com Wed Feb 20 04:18:24 2013 From: sergio.callegari at gmail.com (Sergio) Date: Wed, 20 Feb 2013 09:18:24 +0000 (UTC) Subject: [Numpy-discussion] Windows, blas, atlas and dlls References: <512256B4.50206@astro.uio.no> Message-ID: Dag Sverre Seljebotn astro.uio.no> writes: > > On 02/18/2013 05:26 PM, rif wrote: > > I have no answer to the question, but I was curious as to why directly > > calling the cblas would be 10x-20x slower in the first place. That > > seems surprising, although I'm just learning about python numerics. > > The statement was that directly (on the Cython level) calling cblas is > 10x-20x slower than going through the (slow) SciPy wrapper routines. > That makes a lot of sense if the matrices are smalle nough. > > Dag Sverre Soory for expressing myself badly. I need to call cblas directly from cython, because it is faster. I use matrix multiplication in a tight loop. Let the speed with the standard dot be 100, Speed using the scipy.linalg.blas routines is 200 And speed calling directly atlas from cython is 2000 Which is reasonable, since this avoids any type checking. The point is that I need to ship an extra atlas lib to do so in windows, notwithstanding the fact that numpy/scipy incorporate atlas in the windows build. I was wondering if there is a way to build numpy/scipy with atlas dynamically linked into it, in order to be able to share the atlas libs between my code and scipy. From sergio.callegari at gmail.com Wed Feb 20 05:05:25 2013 From: sergio.callegari at gmail.com (Sergio) Date: Wed, 20 Feb 2013 10:05:25 +0000 (UTC) Subject: [Numpy-discussion] Windows, blas, atlas and dlls References: <5122564A.6030006@esrf.fr> Message-ID: V. Armando Sol? esrf.fr> writes: > from scipy.linalg.blas import fblas > dgemm = fblas.dgemm._cpointer > sgemm = fblas.sgemm._cpointer > I'm going to try and benchmark it asap. Thanks From d.s.seljebotn at astro.uio.no Wed Feb 20 05:15:12 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 20 Feb 2013 11:15:12 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> Message-ID: <5124A230.9010308@astro.uio.no> On 02/20/2013 10:18 AM, Sergio wrote: > Dag Sverre Seljebotn astro.uio.no> writes: > >> >> On 02/18/2013 05:26 PM, rif wrote: >>> I have no answer to the question, but I was curious as to why directly >>> calling the cblas would be 10x-20x slower in the first place. That >>> seems surprising, although I'm just learning about python numerics. >> >> The statement was that directly (on the Cython level) calling cblas is >> 10x-20x slower than going through the (slow) SciPy wrapper routines. >> That makes a lot of sense if the matrices are smalle nough. >> >> Dag Sverre > > Soory for expressing myself badly. > > I need to call cblas directly from cython, because it is faster. > > I use matrix multiplication in a tight loop. > > Let the speed with the standard dot be 100, > > Speed using the scipy.linalg.blas routines is 200 > > And speed calling directly atlas from cython is 2000 > > Which is reasonable, since this avoids any type checking. > > The point is that I need to ship an extra atlas lib to do so in windows, > notwithstanding the fact that numpy/scipy incorporate atlas in the windows build. 
> > I was wondering if there is a way to build numpy/scipy with atlas dynamically > linked into it, in order to be able to share the atlas libs between my code and > scipy. You could also look into OpenBLAS, which is easier to build and generally faster than ATLAS. (But alas, not supported by NumPy/SciPY AFAIK.) Dag Sverre From ndbecker2 at gmail.com Wed Feb 20 08:25:30 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 20 Feb 2013 08:25:30 -0500 Subject: [Numpy-discussion] savetxt trouble Message-ID: I tried to save a vector as a csv, but it didn't work. The vector is: a[0,0] array([-0.70710678-0.70710678j, 0.70710678+0.70710678j, 0.70710678-0.70710678j, 0.70710678+0.70710678j, -0.70710678-0.70710678j, 0.70710678-0.70710678j, -0.70710678+0.70710678j, -0.70710678+0.70710678j, -0.70710678-0.70710678j, -0.70710678-0.70710678j, 0.70710678-0.70710678j, -0.70710678-0.70710678j, -0.70710678+0.70710678j, 0.70710678-0.70710678j, 0.70710678-0.70710678j, 0.70710678+0.70710678j, 0.70710678-0.70710678j, -0.70710678-0.70710678j, -0.70710678-0.70710678j, 0.70710678-0.70710678j, 0.70710678+0.70710678j, 0.70710678+0.70710678j, -0.70710678+0.70710678j, 0.70710678+0.70710678j, -0.70710678-0.70710678j, -0.70710678+0.70710678j, 0.70710678-0.70710678j, -0.70710678+0.70710678j, 0.70710678+0.70710678j, 0.70710678+0.70710678j, 0.70710678-0.70710678j, -0.70710678-0.70710678j, 0.70710678-0.70710678j, -0.70710678-0.70710678j, -0.70710678+0.70710678j, 0.70710678+0.70710678j, 0.70710678-0.70710678j, 0.70710678-0.70710678j, 0.70710678+0.70710678j, -0.70710678+0.70710678j]) np.savetxt ('test.out', a[0,0], delimiter=',') The saved txt file says: (-7.071067811865540120e-01+-7.071067811865411334e-01j) (7.071067811865535679e-01+7.071067811865415775e-01j) (7.071067811865422437e-01+-7.071067811865529018e-01j) (7.071067811865520136e-01+7.071067811865431318e-01j) ... From robert.kern at gmail.com Wed Feb 20 08:35:14 2013 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 20 Feb 2013 13:35:14 +0000 Subject: [Numpy-discussion] savetxt trouble In-Reply-To: References: Message-ID: On Wed, Feb 20, 2013 at 1:25 PM, Neal Becker wrote: > I tried to save a vector as a csv, but it didn't work. > > The vector is: > a[0,0] > array([-0.70710678-0.70710678j, 0.70710678+0.70710678j, > 0.70710678-0.70710678j, 0.70710678+0.70710678j, ... > np.savetxt ('test.out', a[0,0], delimiter=',') > > The saved txt file says: > (-7.071067811865540120e-01+-7.071067811865411334e-01j) > (7.071067811865535679e-01+7.071067811865415775e-01j) > (7.071067811865422437e-01+-7.071067811865529018e-01j) > (7.071067811865520136e-01+7.071067811865431318e-01j) > ... What were you expecting? A single row? savetxt() always writes out len(arr) rows. Reshape your vector into a (1,N) array if you want a single row. -- Robert Kern From ndbecker2 at gmail.com Wed Feb 20 08:46:41 2013 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 20 Feb 2013 08:46:41 -0500 Subject: [Numpy-discussion] savetxt trouble References: Message-ID: Robert Kern wrote: > On Wed, Feb 20, 2013 at 1:25 PM, Neal Becker wrote: >> I tried to save a vector as a csv, but it didn't work. >> >> The vector is: >> a[0,0] >> array([-0.70710678-0.70710678j, 0.70710678+0.70710678j, >> 0.70710678-0.70710678j, 0.70710678+0.70710678j, > ... 
>> np.savetxt ('test.out', a[0,0], delimiter=',') >> >> The saved txt file says: >> (-7.071067811865540120e-01+-7.071067811865411334e-01j) >> (7.071067811865535679e-01+7.071067811865415775e-01j) >> (7.071067811865422437e-01+-7.071067811865529018e-01j) >> (7.071067811865520136e-01+7.071067811865431318e-01j) >> ... > > What were you expecting? A single row? savetxt() always writes out > len(arr) rows. Reshape your vector into a (1,N) array if you want a > single row. > Ah, thanks! From nouiz at nouiz.org Wed Feb 20 09:08:27 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Wed, 20 Feb 2013 09:08:27 -0500 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: <5124A230.9010308@astro.uio.no> References: <512256B4.50206@astro.uio.no> <5124A230.9010308@astro.uio.no> Message-ID: Hi, We also have the same problem for Theano. Having one reusable blas on windows would be useful to many project. Also, if possible try to make it accesible from C,C++ too. Not just cython. Fred On Feb 20, 2013 5:15 AM, "Dag Sverre Seljebotn" wrote: > On 02/20/2013 10:18 AM, Sergio wrote: > > Dag Sverre Seljebotn astro.uio.no> writes: > > > >> > >> On 02/18/2013 05:26 PM, rif wrote: > >>> I have no answer to the question, but I was curious as to why directly > >>> calling the cblas would be 10x-20x slower in the first place. That > >>> seems surprising, although I'm just learning about python numerics. > >> > >> The statement was that directly (on the Cython level) calling cblas is > >> 10x-20x slower than going through the (slow) SciPy wrapper routines. > >> That makes a lot of sense if the matrices are smalle nough. > >> > >> Dag Sverre > > > > Soory for expressing myself badly. > > > > I need to call cblas directly from cython, because it is faster. > > > > I use matrix multiplication in a tight loop. > > > > Let the speed with the standard dot be 100, > > > > Speed using the scipy.linalg.blas routines is 200 > > > > And speed calling directly atlas from cython is 2000 > > > > Which is reasonable, since this avoids any type checking. > > > > The point is that I need to ship an extra atlas lib to do so in windows, > > notwithstanding the fact that numpy/scipy incorporate atlas in the > windows build. > > > > I was wondering if there is a way to build numpy/scipy with atlas > dynamically > > linked into it, in order to be able to share the atlas libs between my > code and > > scipy. > > You could also look into OpenBLAS, which is easier to build and > generally faster than ATLAS. (But alas, not supported by NumPy/SciPY > AFAIK.) > > Dag Sverre > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silva at lma.cnrs-mrs.fr Wed Feb 20 09:27:12 2013 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Wed, 20 Feb 2013 15:27:12 +0100 Subject: [Numpy-discussion] savetxt trouble In-Reply-To: References: Message-ID: <1361370432.5868.29.camel@laptop-101> Le mercredi 20 f?vrier 2013 ? 13:35 +0000, Robert Kern a ?crit : > On Wed, Feb 20, 2013 at 1:25 PM, Neal Becker wrote: > > I tried to save a vector as a csv, but it didn't work. > > > > The vector is: > > a[0,0] > > array([-0.70710678-0.70710678j, 0.70710678+0.70710678j, > > 0.70710678-0.70710678j, 0.70710678+0.70710678j, > ... 
> > np.savetxt ('test.out', a[0,0], delimiter=',') > > > > The saved txt file says: > > (-7.071067811865540120e-01+-7.071067811865411334e-01j) > > (7.071067811865535679e-01+7.071067811865415775e-01j) > > (7.071067811865422437e-01+-7.071067811865529018e-01j) > > (7.071067811865520136e-01+7.071067811865431318e-01j) > > ... > > What were you expecting? A single row? savetxt() always writes out > len(arr) rows. Reshape your vector into a (1,N) array if you want a > single row. The imaginary part seems broken, see the succession "+-" for negative imaginary parts. From oscar.villellas at continuum.io Wed Feb 20 13:19:41 2013 From: oscar.villellas at continuum.io (Oscar Villellas) Date: Wed, 20 Feb 2013 19:19:41 +0100 Subject: [Numpy-discussion] pull request: generalized ufunc signature fix and lineal algebra generalized ufuncs In-Reply-To: References: Message-ID: Hello, I've updated the pull request following feedback: https://github.com/numpy/numpy/pull/2954 It caters all the comments raised up-to-now On Thu, Jan 31, 2013 at 5:43 PM, Oscar Villellas wrote: > Hello, > > At Continuum Analytics we've been working on a submodule implementing > a set of lineal algebra operations as generalized ufuncs. This allows > specifying arrays of lineal algebra problems to be computed with a > single Python call, allowing broadcasting as well. As the > vectorization is handled in the kernel, this gives a speed edge on the > operations. We think this could be useful to the community and we want > to share the work done. > > I've created a couple of pull-requests: > > The first one contains a fix for a bug in the handling of certain > signatures in the gufuncs. This was found while building the > submodule. The fix was done by Mark Wiebe, so credit should go to him > :). > https://github.com/numpy/numpy/pull/2953 > > The second pull request contains the submodule itself and builds on > top of the previous fix. It contains a rst file that explains the > submodule, enumerates the functions implemented and details some > implementation bits. The entry point to the module is in written in > Python and contains detailed docstrings. > https://github.com/numpy/numpy/pull/2954 > > We are open to discussion and to make improvements to the code if > needed, in order to adapt to NumPy standards. > > Thanks, > Oscar. From oscar.villellas at continuum.io Wed Feb 20 13:31:28 2013 From: oscar.villellas at continuum.io (Oscar Villellas) Date: Wed, 20 Feb 2013 19:31:28 +0100 Subject: [Numpy-discussion] pull request: generalized ufunc signature fix and lineal algebra generalized ufuncs In-Reply-To: References: Message-ID: On Thu, Jan 31, 2013 at 9:35 PM, Robert Kern wrote: > On Thu, Jan 31, 2013 at 7:44 PM, Nathaniel Smith wrote: >> On Thu, Jan 31, 2013 at 8:43 AM, Oscar Villellas >> wrote: >>> Hello, >>> >>> At Continuum Analytics we've been working on a submodule implementing >>> a set of lineal algebra operations as generalized ufuncs. This allows >>> specifying arrays of lineal algebra problems to be computed with a >>> single Python call, allowing broadcasting as well. As the >>> vectorization is handled in the kernel, this gives a speed edge on the >>> operations. We think this could be useful to the community and we want >>> to share the work done. >> >> It certainly does look useful. My question is -- why do we need two >> complete copies of the linear algebra routine interfaces? Can we just >> replace the existing linalg functions with these new implementations? >> Or if not, what prevents it? 
> > The error reporting would have to be bodged back in. > Error reporting is one part of the equation. Right now the functions in the linalg module perform some baking of the data. For example, in linalg eigenvalue functions for real arrays check if all the eigenvalues/eigenvectors in the result are real. If that's the case it returns arrays with a real dtype. There is no way to handle that in a gufunc without a performance hit. The gufunc version always returns a matrix with complex dtype (it will involve copying otherwise). So this provides the flexibility of being able to use linalg functions as generalized ufuncs. It also provides some extra performance as there is little python wrapper code and when used as gufuncs buffer handling is quite efficient. IMO it would be great if this could evolve into a complete replacement for linalg, that's for sure, but right now it is missing functionality and is not a drop-in replacement. In the other hand it provides value, so it is a worthy addition. Cheers, Oscar From toddrjen at gmail.com Wed Feb 20 15:27:13 2013 From: toddrjen at gmail.com (Todd) Date: Wed, 20 Feb 2013 21:27:13 +0100 Subject: [Numpy-discussion] Seeking help and support for next-gen math modeling tools using Python In-Reply-To: References: Message-ID: On Feb 20, 2013 12:47 AM, "Rob Clewley" wrote: > > Hi all, and apologies for a little cross-posting: > > First, thanks to those of you who have used and contributed to the > PyDSTool math modeling environment [1]. This project has greatly > benefitted from the underlying platform of numpy / scipy / matplotlib > / ipython. Going forward I have three goals, for which I would like > the urgent input and support of existing or potential users. > > (i) I have ideas for expanding PyDSTool with innovative tools in my > research area, which is essentially for the reverse engineering of > complex mechanisms in multi-scale dynamic systems [2]. These tools > have already been prototyped and show promise, but they need a lot of > work. > (ii) I want to grow and develop the community of users who will help > drive new ideas, provide feedback, and collaborate on writing and > testing code for both the core and application aspects of PyDSTool. > (iii) The first two goals will help me to expand the scientific / > engineering applications and use cases of PyDSTool as well as further > sustain the project in the long-term. > > I am applying for NSF funding to support these software and > application goals over the next few years [3], but the proposal > deadline is in just four weeks! If you are interested in helping in > any way I would greatly appreciate your replies (off list) to either > of the following queries: > > I need to better understand my existing and potential users, many of > whom may not be registered on the sourceforge users list. Please tell > me who you are and what you use PyDSTool for. If you are not using it > yet but you?re interested in this area then please provide feedback > regarding what you would like to see change. > > If you are interested in these future goals, even if you are not an > existing user but may be in the future, please write a brief letter of > support on a letterhead document that I will send in with the proposal > as PDFs. I have sample text that I can send you, as well as my draft > proposal?s introduction and specific aims. These letters can make a > great deal of difference during review. 
> > Without funding, collaborators, user demand and community support, > these more ambitious goals for PyDSTool will not happen, although I am > committed to a basic level of maintenance. For instance, based on user > feedback I am about to release an Ubuntu-based Live CD [4] that will > allow users to try PyDSTool on any OS without having to install it. > PyDSTool will also acquire an improved setup procedure and will be > added to the NeuroDebian repository [5], among others. I am also > finalizing an integrated interface to CUDA GPUs to perform fast > parallel ODE solving [6]. > > Thanks for your time, > Rob Clewley > > [1] http://pydstool.sourceforge.net > [2] http://www.ni.gsu.edu/~rclewley/Research/index.html, and in > particular http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1002628 > [3] NSF Software Infrastructure for Sustained Innovation (SI2-SSE) > program solicitation: > http://www.nsf.gov/pubs/2013/nsf13525/nsf13525.htm > [4] http://help.ubuntu.com/community/LiveCD > [5] http://neuro.debian.net/ > [6] http://www.nvidia.com/object/cuda_home_new.html > I am looking at documentation now, but a couple things from what I seen: Are you particularly tied to sourceforge? It seems a lot of python development is moving to github, and it makes third party contribution much easier. You can still distribute releases through sourceforge even if you use github for revision control. Are you in touch with the neuroensemble mailing list? This seems relevant to it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.j.a.cock at googlemail.com Wed Feb 20 18:21:14 2013 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Wed, 20 Feb 2013 23:21:14 +0000 Subject: [Numpy-discussion] Seeking help and support for next-gen math modeling tools using Python In-Reply-To: References: Message-ID: On Wed, Feb 20, 2013 at 8:27 PM, Todd wrote: > > I am looking at documentation now, but a couple things from what I seen: > > Are you particularly tied to sourceforge? It seems a lot of python > development is moving to github, and it makes third party contribution much > easier. You can still distribute releases through sourceforge even if you > use github for revision control. That's what NumPy has been doing for some time now, the repo is here: https://github.com/numpy/numpy http://sourceforge.net/projects/numpy/files/ Is there some misleading documentation still around that gave you a different impression? Peter From robert.kern at gmail.com Wed Feb 20 18:23:24 2013 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 20 Feb 2013 23:23:24 +0000 Subject: [Numpy-discussion] Seeking help and support for next-gen math modeling tools using Python In-Reply-To: References: Message-ID: On Wed, Feb 20, 2013 at 11:21 PM, Peter Cock wrote: > On Wed, Feb 20, 2013 at 8:27 PM, Todd wrote: >> >> I am looking at documentation now, but a couple things from what I seen: >> >> Are you particularly tied to sourceforge? It seems a lot of python >> development is moving to github, and it makes third party contribution much >> easier. You can still distribute releases through sourceforge even if you >> use github for revision control. > > That's what NumPy has been doing for some time now, the repo is here: > https://github.com/numpy/numpy > http://sourceforge.net/projects/numpy/files/ > > Is there some misleading documentation still around that gave > you a different impression? 
Todd is responding to a message about PyDSTool, which is developed on
Sourceforge, not numpy.

--
Robert Kern

From p.j.a.cock at googlemail.com Wed Feb 20 18:30:18 2013
From: p.j.a.cock at googlemail.com (Peter Cock)
Date: Wed, 20 Feb 2013 23:30:18 +0000
Subject: [Numpy-discussion] Seeking help and support for next-gen math modeling tools using Python
In-Reply-To: References: Message-ID:

On Wed, Feb 20, 2013 at 11:23 PM, Robert Kern wrote:
> On Wed, Feb 20, 2013 at 11:21 PM, Peter Cock wrote:
>> On Wed, Feb 20, 2013 at 8:27 PM, Todd wrote:
>>>
>>> I am looking at documentation now, but a couple things from what I seen:
>>>
>>> Are you particularly tied to sourceforge? It seems a lot of python
>>> development is moving to github, and it makes third party contribution much
>>> easier. You can still distribute releases through sourceforge even if you
>>> use github for revision control.
>>
>> That's what NumPy has been doing for some time now, the repo is here:
>> https://github.com/numpy/numpy
>> http://sourceforge.net/projects/numpy/files/
>>
>> Is there some misleading documentation still around that gave
>> you a different impression?
>
> Todd is responding to a message about PyDSTool, which is developed on
> Sourceforge, not numpy.

Ah - apologies for the noise (and plus one for adopting github).

Peter

From logik at centrum.cz Thu Feb 21 07:16:39 2013
From: logik at centrum.cz (=?windows-1252?Q?Maty=E1=9A_Nov=E1k?=)
Date: Thu, 21 Feb 2013 13:16:39 +0100
Subject: [Numpy-discussion] Windows, blas, atlas and dlls
In-Reply-To: <5124A230.9010308@astro.uio.no>
References: <512256B4.50206@astro.uio.no> <5124A230.9010308@astro.uio.no>
Message-ID: <51261027.3010602@centrum.cz>

> You could also look into OpenBLAS, which is easier to build and
> generally faster than ATLAS. (But alas, not supported by NumPy/SciPY AFAIK.)

Hi, maybe not officially supported, so the integration into numpy is a bit tricky (after long tries I had success with exporting the BLAS and LAPACK environment variables prior to running setup.py, if I remember correctly), but IMHE one can use OpenBLAS (or, in my case, its older version GotoBLAS) with (sci|num)py without problems. So I recommend using Open/GotoBLAS too.

Best,
Matyas Novak

From pierre.haessig at crans.org Thu Feb 21 13:00:42 2013
From: pierre.haessig at crans.org (Pierre Haessig)
Date: Thu, 21 Feb 2013 19:00:42 +0100
Subject: [Numpy-discussion] another discussion on numpy correlate (and convolution)
Message-ID: <512660CA.4060603@crans.org>

Hi everybody,

(just coming from a discussion on the performance of Matplotlib's (x)corr function, which uses np.correlate)

There have already been many discussions on how to compute (cross-)correlations of time series in Python (like http://stackoverflow.com/questions/6991471/computing-cross-correlation-function). The discussion is spread between the various stakeholders (just to name some I have in mind: scipy, statsmodels, matplotlib, ...).

There are basically 2 implementations: time-domain and frequency-domain (using fft + multiplication). My discussion is only about the time-domain implementation. The key use case which I feel is not well addressed today is computing the (cross)correlation of two long time series at only a few lag points.

For the time domain, one can either write one's own implementation or rely on numpy.correlate. The latter relies on the fast C implementation _pyarray_correlate() in multiarraymodule.c (https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L1177)

Now, the signature of this function is meant to accept one of three *computation modes* ('valid', 'same', 'full'). Those modes make a lot of sense when using this function in the context of convolving two signals (say an "input array" and an "impulse response array"). In the context of computing the (cross)correlation of two long time series at only a few lag points, those computation modes are ill-suited and potentially lead to huge computational and memory waste (for example: https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/axes.py#L4332).

For some time, I was thinking the solution was to write a dedicated function in one of the stakeholder modules (but which one? statsmodels?), but now I have come to think that numpy is the best place to put such a change, and this would quickly benefit all stakeholders downstream.

Indeed, I looked more carefully at the C _pyarray_correlate() function, and I've come to the conclusion that these three computation modes are an unnecessary restriction, because the actual computation relies on a triple for loop (https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L1177) whose boundaries are governed by two integers, `n_left` and `n_right`, instead of the three modes.

Maybe I've misunderstood the inner workings of this function, but I have the feeling that the ability to set these two integers directly, instead of just the mode, would open up the possibility of computing the correlation at only a few lag points.

I'm fully aware that changing the signature of such a core numpy function is out of the question, but I've got the feeling that a reasonably small code refactor might lift the long-standing problem of (cross-)correlation computation. The Python np.correlate would require two additional keywords (`n_left` and `n_right`) which would override the `mode` keyword. Only the C function would need some more care to keep good backward compatibility in case it's used externally.

What do other people think?

Best,
Pierre

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 900 bytes
Desc: OpenPGP digital signature
URL:

From josef.pktd at gmail.com Fri Feb 22 10:12:59 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 22 Feb 2013 10:12:59 -0500
Subject: [Numpy-discussion] another discussion on numpy correlate (and convolution)
In-Reply-To: <512660CA.4060603@crans.org>
References: <512660CA.4060603@crans.org>
Message-ID:

On Thu, Feb 21, 2013 at 1:00 PM, Pierre Haessig wrote:
> Hi everybody,
>
> (just coming from a discussion on the performance of Matplotlib's
> (x)corr function which uses np.correlate)
>
> There have been already many discussions on how to compute
> (cross-)correlations of time-series in Python (like
> http://stackoverflow.com/questions/6991471/computing-cross-correlation-function).
> The discussion is spread between the various stakeholders (just to name
> some I've in mind : scipy, statsmodel, matplotlib, ...).
>
> There are basically 2 implementations : time-domain and frequency-domain
> (using fft + multiplication). My discussion is only on time-domain
> implementation. The key usecase which I feel is not well adressed today
> is when computing the (cross)correlation of two long timeseries on only
> a few lagpoints.
> > For time-domain, one can either write its own implementation or rely on > numpy.correlate. The latter rely on the fast C implementation > _pyarray_correlate() in multiarraymodule.c > (https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L1177) > > Now, the signature of this function is meant to accept one of three > *computation modes* ('valid', 'same', 'full'). Thoses modes make a lot > of sense when using this function in the context of convoluting two > signals (say an "input array" and a "impulse response array"). In the > context of computing the (cross)correlation of two long timeseries on > only a few lagpoints, those computation modes are unadapted and > potentially leads to huge computational and memory wastes > (for example : > https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/axes.py#L4332). > > > > For some time, I was thinking the solution was to write a dedicated > function in one of the stakeholder modules (but which ? statsmodel ?) > but now I came to think numpy is the best place to put a change and this > would quickly benefit to all stackeholders downstream. > > Indeed, I looked more carefully at the C _pyarray_correlate() function, > and I've come to the conclusion that these three computation modes are > an unnecessary restriction because the actual computation relies on a > triple for loop > (https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L1177) > which boundaries are governed by two integers `n_left` and `n_right` > instead of the three modes. > > Maybe I've misunderstood the inner-working of this function, but I've > the feeling that the ability to set these two integers directly instead > of just the mode would open up the possibility to compute the > correlation on only a few lagpoints. > > I'm fully aware that changing the signature of such a core numpy is > out-of-question but I've got the feeling that a reasonably small some > code refactor might lift the longgoing problem of (cross-)correlation > computation. The Python np.correlate would require two additional > keywords -`n_left` and `n_right`)which would override the `mode` > keyword. Only the C function would need some more care to keep good > backward compatibility in case it's used externally. > > > > What do other people think ? I think it's a good idea. I didn't look at the implementation details. In statsmodels we also use explicit loops (x[lag:] * y[:lag]).sum(0) at places where we expect that in most cases we only need a few correlations (<10 or <20). (the fft and correlate versions are mainly used where we expect or need a large number or the full number of valid correlations) Josef > > Best, > Pierre > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From jaakko.luttinen at aalto.fi Fri Feb 22 11:00:41 2013 From: jaakko.luttinen at aalto.fi (Jaakko Luttinen) Date: Fri, 22 Feb 2013 18:00:41 +0200 Subject: [Numpy-discussion] numpy.einsum bug? Message-ID: <51279629.9030605@aalto.fi> Hi, Is this a bug in numpy.einsum? >>> np.einsum(3, [], 2, [], []) ValueError: If 'op_axes' or 'itershape' is not NULL in theiterator constructor, 'oa_ndim' must be greater than zero I think it should return 6 (i.e., 3*2). 
Regards, Jaakko From matthew.brett at gmail.com Fri Feb 22 11:40:46 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 22 Feb 2013 08:40:46 -0800 Subject: [Numpy-discussion] another discussion on numpy correlate (and convolution) In-Reply-To: <512660CA.4060603@crans.org> References: <512660CA.4060603@crans.org> Message-ID: Hi, On Thu, Feb 21, 2013 at 10:00 AM, Pierre Haessig wrote: > Hi everybody, > > (just coming from a discussion on the performance of Matplotlib's > (x)corr function which uses np.correlate) > > There have been already many discussions on how to compute > (cross-)correlations of time-series in Python (like > http://stackoverflow.com/questions/6991471/computing-cross-correlation-function). > The discussion is spread between the various stakeholders (just to name > some I've in mind : scipy, statsmodel, matplotlib, ...). > > There are basically 2 implementations : time-domain and frequency-domain > (using fft + multiplication). My discussion is only on time-domain > implementation. The key usecase which I feel is not well adressed today > is when computing the (cross)correlation of two long timeseries on only > a few lagpoints. > > For time-domain, one can either write its own implementation or rely on > numpy.correlate. The latter rely on the fast C implementation > _pyarray_correlate() in multiarraymodule.c > (https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L1177) > > Now, the signature of this function is meant to accept one of three > *computation modes* ('valid', 'same', 'full'). Thoses modes make a lot > of sense when using this function in the context of convoluting two > signals (say an "input array" and a "impulse response array"). In the > context of computing the (cross)correlation of two long timeseries on > only a few lagpoints, those computation modes are unadapted and > potentially leads to huge computational and memory wastes > (for example : > https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/axes.py#L4332). > > > > For some time, I was thinking the solution was to write a dedicated > function in one of the stakeholder modules (but which ? statsmodel ?) > but now I came to think numpy is the best place to put a change and this > would quickly benefit to all stackeholders downstream. > > Indeed, I looked more carefully at the C _pyarray_correlate() function, > and I've come to the conclusion that these three computation modes are > an unnecessary restriction because the actual computation relies on a > triple for loop > (https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L1177) > which boundaries are governed by two integers `n_left` and `n_right` > instead of the three modes. > > Maybe I've misunderstood the inner-working of this function, but I've > the feeling that the ability to set these two integers directly instead > of just the mode would open up the possibility to compute the > correlation on only a few lagpoints. > > I'm fully aware that changing the signature of such a core numpy is > out-of-question but I've got the feeling that a reasonably small some > code refactor might lift the longgoing problem of (cross-)correlation > computation. The Python np.correlate would require two additional > keywords -`n_left` and `n_right`)which would override the `mode` > keyword. Only the C function would need some more care to keep good > backward compatibility in case it's used externally. 
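For concreteness, the "few lag points" computation discussed above can already be sketched in plain NumPy with explicit slicing, without building the whole 'full'-mode output. This is a minimal illustration only (it assumes equal-length 1-d real arrays and non-negative lags), not the proposed n_left/n_right API:

import numpy as np

def xcorr_at_lags(x, y, lags):
    # Time-domain cross-correlation sum_t x[t+k]*y[t], evaluated only at the
    # requested non-negative lags k, using one dot product per lag.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    return np.array([np.dot(x[k:], y[:n - k]) for k in lags])

x = np.random.randn(2000)
y = np.random.randn(2000)
lags = [0, 1, 2, 5, 10]
few = xcorr_at_lags(x, y, lags)

# The same numbers sit inside the (much larger) 'full' output of np.correlate:
full = np.correlate(x, y, mode='full')
assert np.allclose(few, full[[len(y) - 1 + k for k in lags]])
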
>From complete ignorance, do you think it is an option to allow a (n_left, n_right) tuple as a value for 'mode'? Cheers, Matthew From chris.barker at noaa.gov Fri Feb 22 11:52:37 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Fri, 22 Feb 2013 08:52:37 -0800 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: <51261027.3010602@centrum.cz> References: <512256B4.50206@astro.uio.no> <5124A230.9010308@astro.uio.no> <51261027.3010602@centrum.cz> Message-ID: On Thu, Feb 21, 2013 at 4:16 AM, Maty?? Nov?k wrote: >> You could also look into OpenBLAS, which is easier to build and >> generally faster than ATLAS. (But alas, not supported by NumPy/SciPY AFAIK.) It look slike OpenBLAS is BSD-licensed, and thus compatible with numpy/sciy. It there a reason (other than someone having to do the work) it could not be used as the "standard" BLAS for numpy? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From toddrjen at gmail.com Fri Feb 22 12:01:39 2013 From: toddrjen at gmail.com (Todd) Date: Fri, 22 Feb 2013 18:01:39 +0100 Subject: [Numpy-discussion] another discussion on numpy correlate (and convolution) In-Reply-To: References: <512660CA.4060603@crans.org> Message-ID: We don't actually want remove sensitive data, but this tutorial should still allow us to remove a file totally and completely from git history. It doesn't look that hard: https://help.github.com/articles/remove-sensitive-data It will require everyone to rebase, so if you want to do this it may be a good idea to schedule it for a specific day maybe 2-4 weeks in the future and warn people on the mailing list so nobody tries to commit anything while the process is underway. On Feb 22, 2013 5:40 PM, "Matthew Brett" wrote: > Hi, > > On Thu, Feb 21, 2013 at 10:00 AM, Pierre Haessig > wrote: > > Hi everybody, > > > > (just coming from a discussion on the performance of Matplotlib's > > (x)corr function which uses np.correlate) > > > > There have been already many discussions on how to compute > > (cross-)correlations of time-series in Python (like > > > http://stackoverflow.com/questions/6991471/computing-cross-correlation-function > ). > > The discussion is spread between the various stakeholders (just to name > > some I've in mind : scipy, statsmodel, matplotlib, ...). > > > > There are basically 2 implementations : time-domain and frequency-domain > > (using fft + multiplication). My discussion is only on time-domain > > implementation. The key usecase which I feel is not well adressed today > > is when computing the (cross)correlation of two long timeseries on only > > a few lagpoints. > > > > For time-domain, one can either write its own implementation or rely on > > numpy.correlate. The latter rely on the fast C implementation > > _pyarray_correlate() in multiarraymodule.c > > ( > https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L1177 > ) > > > > Now, the signature of this function is meant to accept one of three > > *computation modes* ('valid', 'same', 'full'). Thoses modes make a lot > > of sense when using this function in the context of convoluting two > > signals (say an "input array" and a "impulse response array"). 
In the > > context of computing the (cross)correlation of two long timeseries on > > only a few lagpoints, those computation modes are unadapted and > > potentially leads to huge computational and memory wastes > > (for example : > > > https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/axes.py#L4332 > ). > > > > > > > > For some time, I was thinking the solution was to write a dedicated > > function in one of the stakeholder modules (but which ? statsmodel ?) > > but now I came to think numpy is the best place to put a change and this > > would quickly benefit to all stackeholders downstream. > > > > Indeed, I looked more carefully at the C _pyarray_correlate() function, > > and I've come to the conclusion that these three computation modes are > > an unnecessary restriction because the actual computation relies on a > > triple for loop > > ( > https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/multiarraymodule.c#L1177 > ) > > which boundaries are governed by two integers `n_left` and `n_right` > > instead of the three modes. > > > > Maybe I've misunderstood the inner-working of this function, but I've > > the feeling that the ability to set these two integers directly instead > > of just the mode would open up the possibility to compute the > > correlation on only a few lagpoints. > > > > I'm fully aware that changing the signature of such a core numpy is > > out-of-question but I've got the feeling that a reasonably small some > > code refactor might lift the longgoing problem of (cross-)correlation > > computation. The Python np.correlate would require two additional > > keywords -`n_left` and `n_right`)which would override the `mode` > > keyword. Only the C function would need some more care to keep good > > backward compatibility in case it's used externally. > > >From complete ignorance, do you think it is an option to allow a > (n_left, n_right) tuple as a value for 'mode'? > > Cheers, > > Matthew > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From d.s.seljebotn at astro.uio.no Fri Feb 22 12:24:22 2013 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Fri, 22 Feb 2013 18:24:22 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> <5124A230.9010308@astro.uio.no> <51261027.3010602@centrum.cz> Message-ID: <5127A9C6.6080705@astro.uio.no> On 02/22/2013 05:52 PM, Chris Barker - NOAA Federal wrote: > On Thu, Feb 21, 2013 at 4:16 AM, Maty?? Nov?k wrote: >>> You could also look into OpenBLAS, which is easier to build and >>> generally faster than ATLAS. (But alas, not supported by NumPy/SciPY AFAIK.) > > It look slike OpenBLAS is BSD-licensed, and thus compatible with numpy/sciy. > > It there a reason (other than someone having to do the work) it could > not be used as the "standard" BLAS for numpy? This was discussed some weeks ago (I think the thread title contains openblas). IIRC it was just that somebody needs to do the work but don't take my word for it. 
Dag Svere From sergio.callegari at gmail.com Fri Feb 22 13:54:43 2013 From: sergio.callegari at gmail.com (Sergio Callegari) Date: Fri, 22 Feb 2013 18:54:43 +0000 (UTC) Subject: [Numpy-discussion] Windows, blas, atlas and dlls References: <5122564A.6030006@esrf.fr> Message-ID: > > from scipy.linalg.blas import fblas > dgemm = fblas.dgemm._cpointer > sgemm = fblas.sgemm._cpointer > OK, but this gives me a PyCObject. How do I make it a function pointer of the correct type in cython? Thanks again Sergio From sole at esrf.fr Fri Feb 22 14:21:45 2013 From: sole at esrf.fr (V. Armando Sole) Date: Fri, 22 Feb 2013 20:21:45 +0100 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <5122564A.6030006@esrf.fr> Message-ID: On 22.02.2013 19:54, Sergio Callegari wrote: >> >> from scipy.linalg.blas import fblas >> dgemm = fblas.dgemm._cpointer >> sgemm = fblas.sgemm._cpointer >> > > OK, but this gives me a PyCObject. How do I make it a function > pointer of the > correct type in cython? > In cython I do not know it. I coded it directly in C. In my code I receive the pointer in input2. The relevant part is: PyObject *input1; PyObject *input2 = NULL; /*pointer to dgemm function */ void * gemm_pointer = NULL; /** statements **/ if (!PyArg_ParseTuple(args, "OO", &input1, &input2)) return NULL; if (input2 != NULL){ #if PY_MAJOR_VERSION >= 3 if (PyCapsule_CheckExact(input2)) gemm_pointer = PyCapsule_GetPointer(input2, NULL); #else gemm_pointer = PyCObject_AsVoidPtr(input2); #endif if (gemm_pointer != NULL) { /* release the GIL */ Py_BEGIN_ALLOW_THREADS /* your function call here */ Py_END_ALLOW_THREADS } } Best regards, Armando From cournape at gmail.com Fri Feb 22 15:38:06 2013 From: cournape at gmail.com (David Cournapeau) Date: Fri, 22 Feb 2013 20:38:06 +0000 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> <5124A230.9010308@astro.uio.no> <51261027.3010602@centrum.cz> Message-ID: On 22 Feb 2013 16:53, "Chris Barker - NOAA Federal" wrote: > > On Thu, Feb 21, 2013 at 4:16 AM, Maty?? Nov?k wrote: > >> You could also look into OpenBLAS, which is easier to build and > >> generally faster than ATLAS. (But alas, not supported by NumPy/SciPY AFAIK.) > > It look slike OpenBLAS is BSD-licensed, and thus compatible with numpy/sciy. > > It there a reason (other than someone having to do the work) it could > not be used as the "standard" BLAS for numpy? no reason, and it actually works quite nicely. Bento supports it, at least on Mac/linux. David > > -Chris > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From nouiz at nouiz.org Fri Feb 22 16:04:26 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Fri, 22 Feb 2013 16:04:26 -0500 Subject: [Numpy-discussion] Windows, blas, atlas and dlls In-Reply-To: References: <512256B4.50206@astro.uio.no> <5124A230.9010308@astro.uio.no> <51261027.3010602@centrum.cz> Message-ID: I just read a web page on how to embed python in an application[1]. 
They explain that we can keep the symbol exported event if we statically link the BLAS library in scipy. This make me think we could just change how we compile the lib that link with BLAS and we will be able to reuse it for other project! But I didn't played much with this type of thing. Do someone have more information? Do you think it would be useful? Fred [1] http://docs.python.org/2/extending/embedding.html#linking-requirements On Fri, Feb 22, 2013 at 3:38 PM, David Cournapeau wrote: > > On 22 Feb 2013 16:53, "Chris Barker - NOAA Federal" > wrote: >> >> On Thu, Feb 21, 2013 at 4:16 AM, Maty?? Nov?k wrote: >> >> You could also look into OpenBLAS, which is easier to build and >> >> generally faster than ATLAS. (But alas, not supported by NumPy/SciPY >> >> AFAIK.) >> >> It look slike OpenBLAS is BSD-licensed, and thus compatible with >> numpy/sciy. >> >> It there a reason (other than someone having to do the work) it could >> not be used as the "standard" BLAS for numpy? > > no reason, and it actually works quite nicely. Bento supports it, at least > on Mac/linux. > > David > > >> >> -Chris >> >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From mesanthu at gmail.com Fri Feb 22 21:35:51 2013 From: mesanthu at gmail.com (santhu kumar) Date: Fri, 22 Feb 2013 20:35:51 -0600 Subject: [Numpy-discussion] Smart way to do this? Message-ID: Hi all, I dont want to run a loop for this but it should be possible using numpy "smart" ways. a = np.ones(30) idx = np.array([2,3,2]) # there is a duplicate index of 2 a += 2 >>>a array([ 1., 1., 3., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) But if we do this : for i in range(idx.shape[0]): a[idx[i]] += 2 >>> a array([ 1., 1., 5., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) How to achieve the second result without looping?? Thanks Santhosh -------------- next part -------------- An HTML attachment was scrubbed... URL: From mesanthu at gmail.com Fri Feb 22 21:38:15 2013 From: mesanthu at gmail.com (santhu kumar) Date: Fri, 22 Feb 2013 20:38:15 -0600 Subject: [Numpy-discussion] Smart way to do this? In-Reply-To: References: Message-ID: Sorry typo : a = np.ones(30) idx = np.array([2,3,2]) # there is a duplicate index of 2 a[idx] += 2 On Fri, Feb 22, 2013 at 8:35 PM, santhu kumar wrote: > Hi all, > > I dont want to run a loop for this but it should be possible using numpy > "smart" ways. 
> > a = np.ones(30) > idx = np.array([2,3,2]) # there is a duplicate index of 2 > a += 2 > > >>>a > array([ 1., 1., 3., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., > 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., > 1., 1., 1., 1.]) > > > But if we do this : > for i in range(idx.shape[0]): > a[idx[i]] += 2 > > >>> a > array([ 1., 1., 5., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., > 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., > 1., 1., 1., 1.]) > > How to achieve the second result without looping?? > Thanks > Santhosh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett.olsen at gmail.com Sat Feb 23 01:45:55 2013 From: brett.olsen at gmail.com (Brett Olsen) Date: Sat, 23 Feb 2013 00:45:55 -0600 Subject: [Numpy-discussion] Smart way to do this? In-Reply-To: References: Message-ID: a = np.ones(30) idx = np.array([2, 3, 2]) a += 2 * np.bincount(idx, minlength=len(a)) >>> a array([ 1., 1., 5., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) As for speed: def loop(a, idx): for i in idx: a[i] += 2 def count(a, idx): a += 2 * np.bincount(idx, minlength=len(a)) %timeit loop(np.ones(30), np.array([2, 3, 2])) 10000 loops, best of 3: 19.9 us per loop %timeit count(np.ones(30), np.array(2, 3, 2])) 100000 loops, best of 3: 19.2 us per loop So no big difference here. But go to larger systems and you'll see a huge difference: %timeit loop(np.ones(10000), np.random.randint(10000, size=100000)) 1 loops, best of 3: 260 ms per loop %timeit count(np.ones(10000), np.random.randint(10000, size=100000)) 100 loops, best of 3: 3.03 ms per loop. ~Brett On Fri, Feb 22, 2013 at 8:38 PM, santhu kumar wrote: > Sorry typo : > > a = np.ones(30) > idx = np.array([2,3,2]) # there is a duplicate index of 2 > a[idx] += 2 > > On Fri, Feb 22, 2013 at 8:35 PM, santhu kumar wrote: > >> Hi all, >> >> I dont want to run a loop for this but it should be possible using numpy >> "smart" ways. >> >> a = np.ones(30) >> idx = np.array([2,3,2]) # there is a duplicate index of 2 >> a += 2 >> >> >>>a >> array([ 1., 1., 3., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., >> 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., >> 1., 1., 1., 1.]) >> >> >> But if we do this : >> for i in range(idx.shape[0]): >> a[idx[i]] += 2 >> >> >>> a >> array([ 1., 1., 5., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., >> 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., >> 1., 1., 1., 1.]) >> >> How to achieve the second result without looping?? >> Thanks >> Santhosh >> > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergio.callegari at gmail.com Sat Feb 23 10:33:14 2013 From: sergio.callegari at gmail.com (Sergio Callegari) Date: Sat, 23 Feb 2013 15:33:14 +0000 (UTC) Subject: [Numpy-discussion] Calling scipy blas from cython is extremely slow Message-ID: Hi, following the excellent advice of V. Armando Sole, I have finally succeeded in calling the blas routines shipped with scipy from cython. I am doing this to avoid shipping an extra blas library for some project of mine that uses scipy but has some things coded in cython for extra speed. So far I managed getting things working on Linux. 
Here is what I do: The following code snippet gives me the dgemv pointer (which is a pointer to a fortran function, even if it comes from scipy.linalg.blas.cblas, weird). from cpython cimport PyCObject_AsVoidPtr import scipy as sp __import__('scipy.linalg.blas') ctypedef void (*dgemv_ptr) (char *trans, int *m, int *n,\ double *alpha, double *a, int *lda, double *x,\ int *incx,\ double *beta, double *y, int *incy) cdef dgemv_ptr dgemv=PyCObject_AsVoidPtr(\ sp.linalg.blas.cblas.dgemv._cpointer) Then, in a tight loop, I can call dgemv by first defining the constants and then calling dgemv inside the loop cdef int one=1 cdef double onedot = 1.0 cdef double zerodot = 0.0 cdef char trans = 'N' for i in xrange(N): dgemv(&trans, &nq, &order,\ &onedot, np.PyArray_DATA(C), &order, \ np.PyArray_DATA(c_x0), &one, \ &zerodot, np.PyArray_DATA(y0), &one) It works, but it is many many times slower than linking to the cblas that is available on the same system. Specifically, I have about 8 calls to blas in my tight loop, 4 of them are to dgemv and the others are to dcopy. Changing a single dgemv call from the system cblas to the blas function returned by scipy.linalg.blas.cblas.dgemv._cpointer makes the execution time of a test case jump from about 0.7 s to 1.25 on my system. Any clue about why is this happening? In the end, on linux, scipy dynamically link to atlas exactly as I link to atlas when I use the cblas functions. From mail.till at gmx.de Sat Feb 23 10:38:29 2013 From: mail.till at gmx.de (Till Stensitzki) Date: Sat, 23 Feb 2013 15:38:29 +0000 (UTC) Subject: [Numpy-discussion] Adding .abs() method to the array object Message-ID: Hello, i know that the array object is already crowded, but i would like to see the abs method added, especially doing work on the console. Considering that many much less used functions are also implemented as a method, i don't think adding one more would be problematic. greetings Till From sergio.callegari at gmail.com Sat Feb 23 10:53:07 2013 From: sergio.callegari at gmail.com (Sergio Callegari) Date: Sat, 23 Feb 2013 15:53:07 +0000 (UTC) Subject: [Numpy-discussion] Calling scipy blas from cython is extremely slow References: Message-ID: ... and it is not deterministic too... About 1 time over 6 the code calling the scipy blas gives a completely wrong result. How can this be? From ljmamoreira at gmail.com Sat Feb 23 11:51:24 2013 From: ljmamoreira at gmail.com (Jose Amoreira) Date: Sat, 23 Feb 2013 16:51:24 +0000 Subject: [Numpy-discussion] Smart way to do this? In-Reply-To: References: Message-ID: <1721315.o9skInkMbC@mu.site> On Saturday, February 23, 2013 00:45:55 Brett Olsen wrote: > a = np.ones(30) > idx = np.array([2, 3, 2]) > a += 2 * np.bincount(idx, minlength=len(a)) > > >>> a > > array([ 1., 1., 5., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., > 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., > 1., 1., 1., 1.]) > Hi! OK, but is there any reason why Santhu's first option doesn't work? Shouldn't it work? Cheers, Ze -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergio.callegari at gmail.com Sat Feb 23 13:31:42 2013 From: sergio.callegari at gmail.com (Sergio Callegari) Date: Sat, 23 Feb 2013 18:31:42 +0000 (UTC) Subject: [Numpy-discussion] =?utf-8?q?Calling_scipy_blas_from_cython_is_ex?= =?utf-8?q?tremely=09slow?= References: Message-ID: Partially fixed. I was messing the row, column order. For some reason this was working in some case. Now I've fixed it and it *always* works. 
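A side note on the row/column order trap, since it bites most people who hand NumPy buffers to Fortran BLAS routines: they expect column-major storage, so it is worth checking the layout before passing the data pointer. A tiny illustrative check in plain NumPy (not part of the recipe above):

import numpy as np
A = np.arange(6.0).reshape(2, 3)   # C-ordered (row-major) by default
A.flags['F_CONTIGUOUS']            # False
Af = np.asfortranarray(A)          # copies only when the layout has to change
Af.flags['F_CONTIGUOUS']           # True
# alternatively, keep C order and ask the routine itself to transpose ('T').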
However, it is still slower than the cblas cblas -> 0.69 sec scipy blas -> 0.74 sec Any clue why? From njs at pobox.com Sat Feb 23 14:22:42 2013 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 23 Feb 2013 19:22:42 +0000 Subject: [Numpy-discussion] Smart way to do this? In-Reply-To: <1721315.o9skInkMbC@mu.site> References: <1721315.o9skInkMbC@mu.site> Message-ID: On Sat, Feb 23, 2013 at 4:51 PM, Jose Amoreira wrote: > On Saturday, February 23, 2013 00:45:55 Brett Olsen wrote: > >> a = np.ones(30) > >> idx = np.array([2, 3, 2]) > >> a += 2 * np.bincount(idx, minlength=len(a)) > >> > >> >>> a > >> > >> array([ 1., 1., 5., 3., 1., 1., 1., 1., 1., 1., 1., 1., 1., > >> 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., > >> 1., 1., 1., 1.]) > >> > > > > > > Hi! > > OK, but is there any reason why Santhu's first option doesn't work? > Shouldn't it work? It can't -- python limitation, unfixable by numpy. The += gets divided into two operations, and there's no way for numpy doing the "+" part to "see" that the two locations you are adding the value to are really "the same" location. There's a pull request to add a new ufunc method to solve this: https://github.com/numpy/numpy/pull/2821 So if/when that gets merged you'll be able to do: np.add.at(a, idx, 2) But it still needs some tweaking and has been stalled for a bit, so might be a good opportunity for someone to take on if they're interested in seeing this functionality... -n From njs at pobox.com Sat Feb 23 14:25:32 2013 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 23 Feb 2013 19:25:32 +0000 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki wrote: > Hello, > i know that the array object is already crowded, but i would like > to see the abs method added, especially doing work on the console. > Considering that many much less used functions are also implemented > as a method, i don't think adding one more would be problematic. My gut feeling is that we have too many methods on ndarray, not too few, but in any case, can you elaborate? What's the rationale for why np.abs(a) is so much harder than a.abs(), and why this function and not other unary functions? -n From charlesr.harris at gmail.com Sat Feb 23 15:05:54 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 23 Feb 2013 13:05:54 -0700 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Sat, Feb 23, 2013 at 12:25 PM, Nathaniel Smith wrote: > On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki wrote: > > Hello, > > i know that the array object is already crowded, but i would like > > to see the abs method added, especially doing work on the console. > > Considering that many much less used functions are also implemented > > as a method, i don't think adding one more would be problematic. > > My gut feeling is that we have too many methods on ndarray, not too > few, but in any case, can you elaborate? What's the rationale for why > np.abs(a) is so much harder than a.abs(), and why this function and > not other unary functions? > > IIRC, there was a long thread about adding 'abs' back around 1.0.3-1.1.0 with Travis against it ;) I don't feel strongly one way or the other. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jerome.Kieffer at esrf.fr Sat Feb 23 15:29:01 2013 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Sat, 23 Feb 2013 21:29:01 +0100 Subject: [Numpy-discussion] Calling scipy blas f rom cython is extr emely slow In-Reply-To: References: Message-ID: <20130223212901.98b94bc06b95fe841b63b9d2@esrf.fr> On Sat, 23 Feb 2013 18:31:42 +0000 (UTC) Sergio Callegari wrote: > However, it is still slower than the cblas > > cblas -> 0.69 sec > scipy blas -> 0.74 sec if you are using scipy blas, the real question is which blas is underneath ? OpenBlas, GotoBlas, Atlas, MKL ? Under Debian I observed a x17 in speed from 35s to 2s with an "apt-get install atlas" on Armando's code. Cheers, -- J?r?me Kieffer Data analysis unit - ESRF From robert.kern at gmail.com Sat Feb 23 15:33:21 2013 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 23 Feb 2013 20:33:21 +0000 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Sat, Feb 23, 2013 at 7:25 PM, Nathaniel Smith wrote: > On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki wrote: >> Hello, >> i know that the array object is already crowded, but i would like >> to see the abs method added, especially doing work on the console. >> Considering that many much less used functions are also implemented >> as a method, i don't think adding one more would be problematic. > > My gut feeling is that we have too many methods on ndarray, not too > few, but in any case, can you elaborate? What's the rationale for why > np.abs(a) is so much harder than a.abs(), and why this function and > not other unary functions? Or even abs(a). -- Robert Kern From elaine.angelino at gmail.com Sat Feb 23 17:25:41 2013 From: elaine.angelino at gmail.com (Elaine Angelino) Date: Sat, 23 Feb 2013 17:25:41 -0500 Subject: [Numpy-discussion] Tabular package updated to v0.1 Message-ID: Hi there, We are writing to announce that we've updated "Tabular" to v0.1. Tabular is a package of Python modules for working with tabular data. Its main object is the tabarray class, a data structure for holding and manipulating tabular data. By putting data into a tabarray object, you?ll get a representation of the data that is more flexible and powerful than a native Python representation. More specifically, tabarray provides: -- ultra-fast filtering, selection, and numerical analysis methods, using convenient Matlab-style matrix operation syntax -- spreadsheet-style operations, including row & column operations, 'sort', 'replace', 'aggregate', 'pivot', and 'join' -- flexible load and save methods for a variety of file formats, including delimited text (CSV), binary, and HTML -- helpful inference algorithms for determining formatting parameters and data types of input files You can download Tabular from PyPI (http://pypi.python.org/pypi/tabular/) or alternatively clone our git repository from github (http://github.com/yamins81/tabular). We also have posted tutorial-style Sphinx documentation (http://web.mit.edu/yamins/www/tabular/). The tabarray object is based on the record array object from the Numerical Python package (NumPy), and Tabular is built to interface well with NumPy in general. Our intended audience is two-fold: (1) Python users who, though they may not be familiar with NumPy, are in need of a way to work with tabular data, and (2) NumPy users who would like to do spreadsheet-style operations on top of their more "numerical" work. We hope that some of you find Tabular useful! 
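For NumPy users who have not met the record/structured arrays that tabarray builds on, a tiny NumPy-only illustration of named-column access (this is plain NumPy, not the Tabular API):

import numpy as np
recs = np.array([('a', 1, 2.5), ('b', 3, 0.5)],
                dtype=[('name', 'S8'), ('count', int), ('score', float)])
recs['score'].mean()       # column access by name -> 1.5
recs[recs['count'] > 1]    # row selection with a boolean mask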
Best, Elaine and Dan From josef.pktd at gmail.com Sat Feb 23 20:20:47 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 23 Feb 2013 20:20:47 -0500 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Sat, Feb 23, 2013 at 3:33 PM, Robert Kern wrote: > On Sat, Feb 23, 2013 at 7:25 PM, Nathaniel Smith wrote: >> On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki wrote: >>> Hello, >>> i know that the array object is already crowded, but i would like >>> to see the abs method added, especially doing work on the console. >>> Considering that many much less used functions are also implemented >>> as a method, i don't think adding one more would be problematic. >> >> My gut feeling is that we have too many methods on ndarray, not too >> few, but in any case, can you elaborate? What's the rationale for why >> np.abs(a) is so much harder than a.abs(), and why this function and >> not other unary functions? > > Or even abs(a). my reason is that I often use arr.max() but then decide I want to us abs and need np.max(np.abs(arr)) instead of arr.abs().max() (and often I write that first to see the error message) I don't like np.abs(arr).max() because I have to concentrate to much on the braces, especially if arr is a calculation I wrote several times def maxabs(arr): return np.max(np.abs(arr)) silly, but I use it often and np.is_close is not useful (doesn't show how close) Just a small annoyance, but I think it's the method that I miss most often. Josef > > -- > Robert Kern > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From ben.root at ou.edu Sat Feb 23 21:34:34 2013 From: ben.root at ou.edu (Benjamin Root) Date: Sat, 23 Feb 2013 21:34:34 -0500 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Sat, Feb 23, 2013 at 8:20 PM, wrote: > On Sat, Feb 23, 2013 at 3:33 PM, Robert Kern > wrote: > > On Sat, Feb 23, 2013 at 7:25 PM, Nathaniel Smith wrote: > >> On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki > wrote: > >>> Hello, > >>> i know that the array object is already crowded, but i would like > >>> to see the abs method added, especially doing work on the console. > >>> Considering that many much less used functions are also implemented > >>> as a method, i don't think adding one more would be problematic. > >> > >> My gut feeling is that we have too many methods on ndarray, not too > >> few, but in any case, can you elaborate? What's the rationale for why > >> np.abs(a) is so much harder than a.abs(), and why this function and > >> not other unary functions? > > > > Or even abs(a). > > > my reason is that I often use > > arr.max() > but then decide I want to us abs and need > np.max(np.abs(arr)) > instead of arr.abs().max() (and often I write that first to see the > error message) > > I don't like > np.abs(arr).max() > because I have to concentrate to much on the braces, especially if arr > is a calculation > > I wrote several times > def maxabs(arr): > return np.max(np.abs(arr)) > > silly, but I use it often and np.is_close is not useful (doesn't show how > close) > > Just a small annoyance, but I think it's the method that I miss most often. > > Josef > > My issue is having to remember which ones are methods and which ones are functions. 
There doesn't seem to be a rhyme or reason for the choices, and I would rather like to see that a line is drawn, but I am not picky as to where it is drawn. Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From dynamicgl at gmail.com Sun Feb 24 03:08:34 2013 From: dynamicgl at gmail.com (Gelin Yan) Date: Sun, 24 Feb 2013 16:08:34 +0800 Subject: [Numpy-discussion] a question about freeze on numpy 1.7.0 Message-ID: Hi All When I used numpy 1.7.0 with cx_freeze 4.3.1 on windows, I quickly found out even a simple "import numpy" may lead to program failed with following exception: "AttributeError: 'module' object has no attribute 'sys' After a poking around some codes I noticed /numpy/core/__init__.py has a line 'del sys' at the bottom. After I commented this line, and repacked the whole program, It ran fine. I also noticed this 'del sys' didn't exist on numpy 1.6.2 I am curious why this 'del sys' should be here and whether it is safe to omit it. Thanks. Regards gelin yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sun Feb 24 07:16:49 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 24 Feb 2013 14:16:49 +0200 Subject: [Numpy-discussion] Calling scipy blas from cython is extremely slow In-Reply-To: References: Message-ID: 23.02.2013 20:31, Sergio Callegari kirjoitti: > Partially fixed. > > I was messing the row, column order. For some reason this was working in some > case. Now I've fixed it and it *always* works. > > However, it is still slower than the cblas > > cblas -> 0.69 sec > scipy blas -> 0.74 sec The possible explanations are that either the routine called is different in the two cases, or, the benchmark if somehow faulty. -- Pauli Virtanen From ondrej.certik at gmail.com Sun Feb 24 20:16:47 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Sun, 24 Feb 2013 17:16:47 -0800 Subject: [Numpy-discussion] a question about freeze on numpy 1.7.0 In-Reply-To: References: Message-ID: Hi Gelin, On Sun, Feb 24, 2013 at 12:08 AM, Gelin Yan wrote: > Hi All > > When I used numpy 1.7.0 with cx_freeze 4.3.1 on windows, I quickly > found out even a simple "import numpy" may lead to program failed with > following exception: > > "AttributeError: 'module' object has no attribute 'sys' > > After a poking around some codes I noticed /numpy/core/__init__.py has a > line 'del sys' at the bottom. After I commented this line, and repacked the > whole program, It ran fine. > I also noticed this 'del sys' didn't exist on numpy 1.6.2 > > I am curious why this 'del sys' should be here and whether it is safe to > omit it. Thanks. The "del sys" line was introduced in the commit: https://github.com/numpy/numpy/commit/4c0576fe9947ef2af8351405e0990cebd83ccbb6 and it seems to me that it is needed so that the numpy.core namespace is not cluttered by it. Can you post the full stacktrace of your program (and preferably some instructions how to reproduce the problem)? It should become clear where the problem is. 
Thanks, Ondrej From ben.root at ou.edu Sun Feb 24 20:25:55 2013 From: ben.root at ou.edu (Benjamin Root) Date: Sun, 24 Feb 2013 20:25:55 -0500 Subject: [Numpy-discussion] a question about freeze on numpy 1.7.0 In-Reply-To: References: Message-ID: On Sun, Feb 24, 2013 at 8:16 PM, Ond?ej ?ert?k wrote: > Hi Gelin, > > On Sun, Feb 24, 2013 at 12:08 AM, Gelin Yan wrote: > > Hi All > > > > When I used numpy 1.7.0 with cx_freeze 4.3.1 on windows, I quickly > > found out even a simple "import numpy" may lead to program failed with > > following exception: > > > > "AttributeError: 'module' object has no attribute 'sys' > > > > After a poking around some codes I noticed /numpy/core/__init__.py has a > > line 'del sys' at the bottom. After I commented this line, and repacked > the > > whole program, It ran fine. > > I also noticed this 'del sys' didn't exist on numpy 1.6.2 > > > > I am curious why this 'del sys' should be here and whether it is safe to > > omit it. Thanks. > > The "del sys" line was introduced in the commit: > > > https://github.com/numpy/numpy/commit/4c0576fe9947ef2af8351405e0990cebd83ccbb6 > > and it seems to me that it is needed so that the numpy.core namespace is > not > cluttered by it. > > Can you post the full stacktrace of your program (and preferably some > instructions > how to reproduce the problem)? It should become clear where the problem is. > > Thanks, > Ondrej > I have run into issues with doing "del sys" before, but usually with respect to my pythonrc file. Because the import of sys has already happened, python won't let you import the module again in the same namespace (in my case, the runtime environment). I don't know how the frozen binaries work, but maybe something along those lines is happening? Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From wfspotz at sandia.gov Mon Feb 25 00:12:37 2013 From: wfspotz at sandia.gov (Bill Spotz) Date: Sun, 24 Feb 2013 22:12:37 -0700 Subject: [Numpy-discussion] [EXTERNAL] swig interface file (numpy.i) warning In-Reply-To: References: <7F19AD02-EE44-4BF2-AC13-B4EF8A04117E@sandia.gov> Message-ID: <54B7C755-4C5A-4FF7-8F39-CA657DEEFEDD@sandia.gov> I wanted to take a stab at updating numpy.i to not use deprecated NumPy C/API code. Nothing in the git logs indicates this has already been done. I added #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION to numpy.i right before it includes numpy/arrayobject.h. The built-in tests for numpy.i should be sufficient to ferret out all of the deprecated calls. When I try to make the tests, the compiler tells me that it has included npy_deprecated_api.h and I get a compiler error when it includes old_defines.h, telling me that it is deprecated. Shouldn't my #define prevent this from happening? I'm confused. Any guidance would be appreciated. Thanks, Bill On Oct 9, 2012, at 9:18 PM, Tom Krauss wrote: > This code reproduces the error - I think it is small enough for email. (large) numpy.i not included, let me know if you want that too. Makefile will need to be tailored to your environment. > If it's more convenient, or you have trouble reproducing, I can create a branch on github - let me know. > > On Tue, Oct 9, 2012 at 1:47 PM, Tom Krauss wrote: > I can't attach the exact code but would be happy to provide something simple that has the same issue. I'll post something here when I can get to it. > - Tom > > > On Tue, Oct 9, 2012 at 10:52 AM, Bill Spotz wrote: > Tom, Charles, > > If you discuss this further, be sure to CC me. 
> > -Bill Spotz > > On Oct 9, 2012, at 8:50 AM, Charles R Harris wrote: > >> Hi Tom, >> >> On Tue, Oct 9, 2012 at 8:30 AM, Tom Krauss wrote: >> Hi, >> >> I've been happy to use numpy.i for generating SWIG interfaces to C++. >> >> For a while, I've noticed this warning while compiling: >> /Users/tkrauss/projects/dev_env/lib/python2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/npy_deprecated_api.h:11:2: warning: #warning "Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" >> >> and today tried to get rid of the warning. >> >> So, in numpy.i, I followed the warning's advice. I added the # def here: >> >> %{ >> #ifndef SWIG_FILE_WITH_INIT >> # define NO_IMPORT_ARRAY >> #endif >> #include "stdio.h" >> #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION >> #include >> %} >> >> SWIG was happy, but when compiling the C++ wrapper, there were many warnings followed by many errors. The warnings were for redefinition of NPY_MIN_BYTE and similar. The errors were for all kinds of stuff, excerpt here: >> native_wrap.cpp:3632: error: ?PyArray_NOTYPE? was not declared in this scope >> native_wrap.cpp:3633: error: cannot convert ?PyObject*? to ?const PyArrayObject*? for argument ?1? to ?int PyArray_TYPE(const PyArrayObject*)? >> native_wrap.cpp: At global scope: >> native_wrap.cpp:3877: error: ?intp? has not been declared >> native_wrap.cpp: In function ?int require_fortran(PyArrayObject*)?: >> native_wrap.cpp:3929: error: ?struct tagPyArrayObject? has no member named ?nd? >> native_wrap.cpp:3933: error: ?struct tagPyArrayObject? has no member named ?flags? >> native_wrap.cpp:3933: error: ?FARRAY? was not declared in this scope >> native_wrap.cpp:20411: error: ?struct tagPyArrayObject? has no member named ?data? >> >> It looks like there is a new C API for numpy, and the version of numpy.i that I have doesn't use it. >> >> Is there a new version of numpy.i available (or in development) that works with the new API? Short term it will just get rid of a warning but I am interested in a good long term solution in case I need to upgrade numpy. >> >> >> In the long term we would like to hide the ndarray internals, essentially making them private. There are still some incomplete areas, f2py and, apparently, numpy.i. Your feedback here is quite helpful and if you have some time we can try to get this straightened out. Could you attach the code you are trying to interface? If you have a github account you could also set up a branch where we could work on this. >> >> Chuck >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. 
Box 5800 Fax: (505)284-0154 ** ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** From dynamicgl at gmail.com Mon Feb 25 01:49:32 2013 From: dynamicgl at gmail.com (Gelin Yan) Date: Mon, 25 Feb 2013 14:49:32 +0800 Subject: [Numpy-discussion] a question about freeze on numpy 1.7.0 In-Reply-To: References: Message-ID: On Mon, Feb 25, 2013 at 9:16 AM, Ond?ej ?ert?k wrote: > Hi Gelin, > > On Sun, Feb 24, 2013 at 12:08 AM, Gelin Yan wrote: > > Hi All > > > > When I used numpy 1.7.0 with cx_freeze 4.3.1 on windows, I quickly > > found out even a simple "import numpy" may lead to program failed with > > following exception: > > > > "AttributeError: 'module' object has no attribute 'sys' > > > > After a poking around some codes I noticed /numpy/core/__init__.py has a > > line 'del sys' at the bottom. After I commented this line, and repacked > the > > whole program, It ran fine. > > I also noticed this 'del sys' didn't exist on numpy 1.6.2 > > > > I am curious why this 'del sys' should be here and whether it is safe to > > omit it. Thanks. > > The "del sys" line was introduced in the commit: > > > https://github.com/numpy/numpy/commit/4c0576fe9947ef2af8351405e0990cebd83ccbb6 > > and it seems to me that it is needed so that the numpy.core namespace is > not > cluttered by it. > > Can you post the full stacktrace of your program (and preferably some > instructions > how to reproduce the problem)? It should become clear where the problem is. > > Thanks, > Ondrej > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > Hi Ondrej I attached two files here for demonstration. you need cx_freeze to build a standalone executable file. simply running python setup.py build and try to run the executable file you may see this exception. This example works with numpy 1.6.2. Thanks. Regards gelin yan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpytest.py Type: application/octet-stream Size: 90 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: setup.py Type: application/octet-stream Size: 763 bytes Desc: not available URL: From brad.froehle at gmail.com Mon Feb 25 02:53:55 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Sun, 24 Feb 2013 23:53:55 -0800 Subject: [Numpy-discussion] a question about freeze on numpy 1.7.0 In-Reply-To: References: Message-ID: I can reproduce with NumPy 1.7.0, but I'm not convinced the bug lies within NumPy. The exception is not being raised on the `del sys` line. Rather it is being raised in numpy.__init__: File "/home/bfroehle/.local/lib/python2.7/site-packages/cx_Freeze/initscripts/Console.py", line 27, in exec code in m.__dict__ File "numpytest.py", line 1, in import numpy File "/home/bfroehle/.local/lib/python2.7/site-packages/numpy/__init__.py", line 147, in from core import * AttributeError: 'module' object has no attribute 'sys' This is because, somehow, `'sys' in numpy.core.__all__` returns True in the cx_Freeze context but False in the regular Python context. 
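A stripped-down sketch of that failure mode (a hypothetical two-module layout, not the real numpy source): a star-import looks up every name listed in __all__ on the module, so a name that was deleted but is still listed produces exactly this AttributeError.

# mini_core.py -- stand-in for numpy/core/__init__.py
import sys
_on_py3 = sys.version_info[0] >= 3   # sys is only needed at import time
__all__ = ['ones']                   # numpy assembles its __all__ from submodules
def ones(n):
    return [1.0] * n
del sys                              # keep the package namespace clean

# client.py
from mini_core import *   # fine: every name in __all__ still exists
# but if 'sys' somehow ends up listed in __all__ (as observed above in the
# frozen program), the same star-import raises
#   AttributeError: 'module' object has no attribute 'sys'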
-Brad On Sun, Feb 24, 2013 at 10:49 PM, Gelin Yan wrote: > > > On Mon, Feb 25, 2013 at 9:16 AM, Ond?ej ?ert?k wrote: > >> Hi Gelin, >> >> On Sun, Feb 24, 2013 at 12:08 AM, Gelin Yan wrote: >> > Hi All >> > >> > When I used numpy 1.7.0 with cx_freeze 4.3.1 on windows, I quickly >> > found out even a simple "import numpy" may lead to program failed with >> > following exception: >> > >> > "AttributeError: 'module' object has no attribute 'sys' >> > >> > After a poking around some codes I noticed /numpy/core/__init__.py has a >> > line 'del sys' at the bottom. After I commented this line, and repacked >> the >> > whole program, It ran fine. >> > I also noticed this 'del sys' didn't exist on numpy 1.6.2 >> > >> > I am curious why this 'del sys' should be here and whether it is safe to >> > omit it. Thanks. >> >> The "del sys" line was introduced in the commit: >> >> >> https://github.com/numpy/numpy/commit/4c0576fe9947ef2af8351405e0990cebd83ccbb6 >> >> and it seems to me that it is needed so that the numpy.core namespace is >> not >> cluttered by it. >> >> Can you post the full stacktrace of your program (and preferably some >> instructions >> how to reproduce the problem)? It should become clear where the problem >> is. >> >> Thanks, >> Ondrej >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > Hi Ondrej > > I attached two files here for demonstration. you need cx_freeze to > build a standalone executable file. simply running python setup.py build > and try to run the executable file you may see this exception. This > example works with numpy 1.6.2. Thanks. > > Regards > > gelin yan > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dynamicgl at gmail.com Mon Feb 25 03:06:19 2013 From: dynamicgl at gmail.com (Gelin Yan) Date: Mon, 25 Feb 2013 16:06:19 +0800 Subject: [Numpy-discussion] a question about freeze on numpy 1.7.0 In-Reply-To: References: Message-ID: On Mon, Feb 25, 2013 at 3:53 PM, Bradley M. Froehle wrote: > I can reproduce with NumPy 1.7.0, but I'm not convinced the bug lies > within NumPy. > > The exception is not being raised on the `del sys` line. Rather it is > being raised in numpy.__init__: > > File > "/home/bfroehle/.local/lib/python2.7/site-packages/cx_Freeze/initscripts/Console.py", > line 27, in > exec code in m.__dict__ > File "numpytest.py", line 1, in > import numpy > File > "/home/bfroehle/.local/lib/python2.7/site-packages/numpy/__init__.py", line > 147, in > from core import * > AttributeError: 'module' object has no attribute 'sys' > > This is because, somehow, `'sys' in numpy.core.__all__` returns True in > the cx_Freeze context but False in the regular Python context. 
> > -Brad > > > On Sun, Feb 24, 2013 at 10:49 PM, Gelin Yan wrote: > >> >> >> On Mon, Feb 25, 2013 at 9:16 AM, Ond?ej ?ert?k wrote: >> >>> Hi Gelin, >>> >>> On Sun, Feb 24, 2013 at 12:08 AM, Gelin Yan wrote: >>> > Hi All >>> > >>> > When I used numpy 1.7.0 with cx_freeze 4.3.1 on windows, I quickly >>> > found out even a simple "import numpy" may lead to program failed with >>> > following exception: >>> > >>> > "AttributeError: 'module' object has no attribute 'sys' >>> > >>> > After a poking around some codes I noticed /numpy/core/__init__.py has >>> a >>> > line 'del sys' at the bottom. After I commented this line, and >>> repacked the >>> > whole program, It ran fine. >>> > I also noticed this 'del sys' didn't exist on numpy 1.6.2 >>> > >>> > I am curious why this 'del sys' should be here and whether it is safe >>> to >>> > omit it. Thanks. >>> >>> The "del sys" line was introduced in the commit: >>> >>> >>> https://github.com/numpy/numpy/commit/4c0576fe9947ef2af8351405e0990cebd83ccbb6 >>> >>> and it seems to me that it is needed so that the numpy.core namespace is >>> not >>> cluttered by it. >>> >>> Can you post the full stacktrace of your program (and preferably some >>> instructions >>> how to reproduce the problem)? It should become clear where the problem >>> is. >>> >>> Thanks, >>> Ondrej >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >> >> Hi Ondrej >> >> I attached two files here for demonstration. you need cx_freeze to >> build a standalone executable file. simply running python setup.py build >> and try to run the executable file you may see this exception. This >> example works with numpy 1.6.2. Thanks. >> >> Regards >> >> gelin yan >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > Hi Bradley So is it supposed to be a bug of cx_freeze? Any work around for that except omit 'del sys'? If the answer is no, I may consider submit a ticket on cx_freeze site. Thanks Regards gelin yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From brad.froehle at gmail.com Mon Feb 25 03:09:08 2013 From: brad.froehle at gmail.com (Bradley M. Froehle) Date: Mon, 25 Feb 2013 00:09:08 -0800 Subject: [Numpy-discussion] a question about freeze on numpy 1.7.0 In-Reply-To: References: Message-ID: I submitted a bug report (and patch) to cx_freeze. You can follow up with them at http://sourceforge.net/p/cx-freeze/bugs/36/. -Brad On Mon, Feb 25, 2013 at 12:06 AM, Gelin Yan wrote: > > > On Mon, Feb 25, 2013 at 3:53 PM, Bradley M. Froehle < > brad.froehle at gmail.com> wrote: > >> I can reproduce with NumPy 1.7.0, but I'm not convinced the bug lies >> within NumPy. >> >> The exception is not being raised on the `del sys` line. 
Rather it is >> being raised in numpy.__init__: >> >> File >> "/home/bfroehle/.local/lib/python2.7/site-packages/cx_Freeze/initscripts/Console.py", >> line 27, in >> exec code in m.__dict__ >> File "numpytest.py", line 1, in >> import numpy >> File >> "/home/bfroehle/.local/lib/python2.7/site-packages/numpy/__init__.py", line >> 147, in >> from core import * >> AttributeError: 'module' object has no attribute 'sys' >> >> This is because, somehow, `'sys' in numpy.core.__all__` returns True in >> the cx_Freeze context but False in the regular Python context. >> >> -Brad >> >> >> On Sun, Feb 24, 2013 at 10:49 PM, Gelin Yan wrote: >> >>> >>> >>> On Mon, Feb 25, 2013 at 9:16 AM, Ond?ej ?ert?k wrote: >>> >>>> Hi Gelin, >>>> >>>> On Sun, Feb 24, 2013 at 12:08 AM, Gelin Yan >>>> wrote: >>>> > Hi All >>>> > >>>> > When I used numpy 1.7.0 with cx_freeze 4.3.1 on windows, I >>>> quickly >>>> > found out even a simple "import numpy" may lead to program failed with >>>> > following exception: >>>> > >>>> > "AttributeError: 'module' object has no attribute 'sys' >>>> > >>>> > After a poking around some codes I noticed /numpy/core/__init__.py >>>> has a >>>> > line 'del sys' at the bottom. After I commented this line, and >>>> repacked the >>>> > whole program, It ran fine. >>>> > I also noticed this 'del sys' didn't exist on numpy 1.6.2 >>>> > >>>> > I am curious why this 'del sys' should be here and whether it is safe >>>> to >>>> > omit it. Thanks. >>>> >>>> The "del sys" line was introduced in the commit: >>>> >>>> >>>> https://github.com/numpy/numpy/commit/4c0576fe9947ef2af8351405e0990cebd83ccbb6 >>>> >>>> and it seems to me that it is needed so that the numpy.core namespace >>>> is not >>>> cluttered by it. >>>> >>>> Can you post the full stacktrace of your program (and preferably some >>>> instructions >>>> how to reproduce the problem)? It should become clear where the problem >>>> is. >>>> >>>> Thanks, >>>> Ondrej >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>>> >>> >>> Hi Ondrej >>> >>> I attached two files here for demonstration. you need cx_freeze to >>> build a standalone executable file. simply running python setup.py build >>> and try to run the executable file you may see this exception. This >>> example works with numpy 1.6.2. Thanks. >>> >>> Regards >>> >>> gelin yan >>> >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > Hi Bradley > > So is it supposed to be a bug of cx_freeze? Any work around for that > except omit 'del sys'? If the answer is no, I may consider submit a ticket > on cx_freeze site. Thanks > > Regards > > gelin yan > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaakko.luttinen at aalto.fi Mon Feb 25 08:41:27 2013 From: jaakko.luttinen at aalto.fi (Jaakko Luttinen) Date: Mon, 25 Feb 2013 15:41:27 +0200 Subject: [Numpy-discussion] Leaking memory problem Message-ID: <512B6A07.8060701@aalto.fi> Hi! 
I was wondering if anyone could help me in finding a memory leak problem with NumPy. My project is quite massive and I haven't been able to construct a simple example which would reproduce the problem.. I have an iterative algorithm which should not increase the memory usage as the iteration progresses. However, after the first iteration, 1GB of memory is used and it steadily increases until at about 100-200 iterations 8GB is used and the program exits with MemoryError. I have a collection of objects which contain large arrays. In each iteration, the objects are updated in turns by re-computing the arrays they contain. The number of arrays and their sizes are constant (do not change during the iteration). So the memory usage should not increase, and I'm a bit confused, how can the program run out of memory if it can easily compute at least a few iterations.. I've tried to use Pympler, but I've understood that it doesn't show the memory usage of NumPy arrays.. ? I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing gc.garbage at each iteration, but that doesn't show anything. Does anyone have any ideas how to debug this kind of memory leak bug? And how to find out whether the bug is in my code, NumPy or elsewhere? Thanks for any help! Jaakko From sebastian at sipsolutions.net Mon Feb 25 10:10:16 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Mon, 25 Feb 2013 16:10:16 +0100 Subject: [Numpy-discussion] What should np.ndarray.__contains__ do Message-ID: <1361805016.32413.24.camel@sebastian-laptop> Hello all, currently the `__contains__` method or the `in` operator on arrays, does not return what the user would expect when in the operation `a in b` the `a` is not a single element (see "In [3]-[4]" below). The first solution coming to mind might be checking `all()` for all dimensions given in argument `a` (see line "In [5]" for a simplistic example). This does not play too well with broadcasting however, but one could maybe simply *not* broadcast at all (i.e. a.shape == b.shape[b.ndim-a.ndim:]) and raise an error/return False otherwise. On the other hand one could say broadcasting of `a` onto `b` should be "any" along that dimension (see "In [8]"). The other way should maybe raise an error though (see "In [9]" to understand what I mean). I think using broadcasting dimensions where `a` is repeated over `b` as the dimensions to use "any" logic on is the most general way for numpy to handle this consistently, while the other way around could be handled with an `all` but to me makes so little sense that I think it should be an error. Of course this is different to a list of lists, which gives False in these cases, but arrays are not list of lists... As a side note, since for loop, etc. use "for item in array", I do not think that vectorizing along `a` as np.in1d does is reasonable. `in` should return a single boolean. 
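To make that concrete, a rough sketch of the "all() over the dimensions covered by `a`, any() over the remaining dimensions of `b`" reading (illustrative only -- this is not what ndarray.__contains__ does today, and it assumes 1 <= a.ndim <= b.ndim with a.shape matching the trailing part of b.shape):

import numpy as np

def contains(b, a):
    a = np.asarray(a)
    b = np.asarray(b)
    if not 1 <= a.ndim <= b.ndim or a.shape != b.shape[b.ndim - a.ndim:]:
        raise ValueError("shapes do not line up for a containment test")
    eq = (b == a)                                     # a broadcasts against b
    covered = tuple(range(b.ndim - a.ndim, b.ndim))   # axes belonging to a
    return bool(eq.all(axis=covered).any())

b = np.arange(10).reshape(5, 2)
contains(b, [0, 2])   # False: [0, 2] is not one of the rows of b
contains(b, [2, 3])   # True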
I have opened an issue for it: https://github.com/numpy/numpy/issues/3016#issuecomment-14045545 Regards, Sebastian In [1]: a = np.array([0, 2]) In [2]: b = np.arange(10).reshape(5,2) In [3]: b Out[3]: array([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]) In [4]: a in b Out[4]: True In [5]: (b == a).any() Out[5]: True In [6]: (b == a).all(0).any() # the 0 could be multiple axes Out[6]: False In [7]: a_2d = a[None,:] In [8]: a_2d in b # broadcast dimension means "any" -> True Out[8]: True In [9]: [0, 1] in b[:,:1] # should not work (or be False, not True) Out[9]: True From mail.till at gmx.de Mon Feb 25 10:43:09 2013 From: mail.till at gmx.de (Till Stensitzki) Date: Mon, 25 Feb 2013 15:43:09 +0000 (UTC) Subject: [Numpy-discussion] Adding .abs() method to the array object References: Message-ID: First, sorry that i didnt search for an old thread, but because i disagree with conclusion i would at least address my reason: > I don't like > np.abs(arr).max() > because I have to concentrate to much on the braces, especially if arr > is a calculation This exactly, adding an abs into an old expression is always a little annoyance due to the parenthesis. The argument that np.abs() also works is true for (almost?) every other method. The fact that so many methods already exists, especially for most of the commonly used functions (min, max, dot, mean, std, argmin, argmax, conj, T) makes me missing abs. Of course, if one would redesign the api, one would drop most methods (i am looking at you ptp and byteswap). But the objected is already cluttered and adding abs is imo logical application of "practicality beats purity". greetings Till From nouiz at nouiz.org Mon Feb 25 10:47:52 2013 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Mon, 25 Feb 2013 10:47:52 -0500 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Sat, Feb 23, 2013 at 9:34 PM, Benjamin Root wrote: > > My issue is having to remember which ones are methods and which ones are > functions. There doesn't seem to be a rhyme or reason for the choices, and > I would rather like to see that a line is drawn, but I am not picky as to > where it is drawn. I like that. I think it would be a good idea to find a good line for NumPy 2.0. As we already will break the API, why not break it for another part at the same time. I don't have any idea what would be a good line... Do someone have a good idea? Do you agree that it would be a good idea for 2.0? Fred From thouis at gmail.com Mon Feb 25 10:50:11 2013 From: thouis at gmail.com (Thouis (Ray) Jones) Date: Mon, 25 Feb 2013 10:50:11 -0500 Subject: [Numpy-discussion] Leaking memory problem In-Reply-To: <512B6A07.8060701@aalto.fi> References: <512B6A07.8060701@aalto.fi> Message-ID: I added allocation tracking tools to numpy for exactly this reason. They are not very well documented, but you can see how to use them here: https://github.com/numpy/numpy/tree/master/tools/allocation_tracking Ray On Mon, Feb 25, 2013 at 8:41 AM, Jaakko Luttinen wrote: > Hi! > > I was wondering if anyone could help me in finding a memory leak problem > with NumPy. My project is quite massive and I haven't been able to > construct a simple example which would reproduce the problem.. > > I have an iterative algorithm which should not increase the memory usage > as the iteration progresses. 
However, after the first iteration, 1GB of > memory is used and it steadily increases until at about 100-200 > iterations 8GB is used and the program exits with MemoryError. > > I have a collection of objects which contain large arrays. In each > iteration, the objects are updated in turns by re-computing the arrays > they contain. The number of arrays and their sizes are constant (do not > change during the iteration). So the memory usage should not increase, > and I'm a bit confused, how can the program run out of memory if it can > easily compute at least a few iterations.. > > I've tried to use Pympler, but I've understood that it doesn't show the > memory usage of NumPy arrays.. ? > > I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing > gc.garbage at each iteration, but that doesn't show anything. > > Does anyone have any ideas how to debug this kind of memory leak bug? > And how to find out whether the bug is in my code, NumPy or elsewhere? > > Thanks for any help! > Jaakko > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From jsseabold at gmail.com Mon Feb 25 10:50:59 2013 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 25 Feb 2013 10:50:59 -0500 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Mon, Feb 25, 2013 at 10:43 AM, Till Stensitzki wrote: > > First, sorry that i didnt search for an old thread, but because i disagree with > conclusion i would at least address my reason: > >> I don't like >> np.abs(arr).max() >> because I have to concentrate to much on the braces, especially if arr >> is a calculation > > This exactly, adding an abs into an old expression is always a little annoyance > due to the parenthesis. The argument that np.abs() also works is true for > (almost?) every other method. The fact that so many methods already exists, > especially for most of the commonly used functions (min, max, dot, mean, std, > argmin, argmax, conj, T) makes me missing abs. Of course, if one would redesign > the api, one would drop most methods (i am looking at you ptp and byteswap). But > the objected is already cluttered and adding abs is imo logical application of > "practicality beats purity". > I tend to agree here. The situation isn't all that dire for the number of methods in an array. No scrolling at reasonably small terminal sizes. [~/] [3]: x. x.T x.copy x.getfield x.put x.std x.all x.ctypes x.imag x.ravel x.strides x.any x.cumprod x.item x.real x.sum x.argmax x.cumsum x.itemset x.repeat x.swapaxes x.argmin x.data x.itemsize x.reshape x.take x.argsort x.diagonal x.max x.resize x.tofile x.astype x.dot x.mean x.round x.tolist x.base x.dtype x.min x.searchsorted x.tostring x.byteswap x.dump x.nbytes x.setfield x.trace x.choose x.dumps x.ndim x.setflags x.transpose x.clip x.fill x.newbyteorder x.shape x.var x.compress x.flags x.nonzero x.size x.view x.conj x.flat x.prod x.sort x.conjugate x.flatten x.ptp x.squeeze I find myself typing things like arr.abs() and arr.unique() quite often. Skipper -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Mon Feb 25 11:03:48 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Feb 2013 11:03:48 -0500 Subject: [Numpy-discussion] Leaking memory problem In-Reply-To: <512B6A07.8060701@aalto.fi> References: <512B6A07.8060701@aalto.fi> Message-ID: On Mon, Feb 25, 2013 at 8:41 AM, Jaakko Luttinen wrote: > Hi! > > I was wondering if anyone could help me in finding a memory leak problem > with NumPy. My project is quite massive and I haven't been able to > construct a simple example which would reproduce the problem.. > > I have an iterative algorithm which should not increase the memory usage > as the iteration progresses. However, after the first iteration, 1GB of > memory is used and it steadily increases until at about 100-200 > iterations 8GB is used and the program exits with MemoryError. > > I have a collection of objects which contain large arrays. In each > iteration, the objects are updated in turns by re-computing the arrays > they contain. The number of arrays and their sizes are constant (do not > change during the iteration). So the memory usage should not increase, > and I'm a bit confused, how can the program run out of memory if it can > easily compute at least a few iterations.. There are some stories where pythons garbage collection is too slow to kick in. try to call gc.collect in the loop to see if it helps. roughly what I remember: collection works by the number of objects, if you have a few very large arrays, then memory increases, but garbage collection doesn't start yet. Josef > > I've tried to use Pympler, but I've understood that it doesn't show the > memory usage of NumPy arrays.. ? > > I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing > gc.garbage at each iteration, but that doesn't show anything. > > Does anyone have any ideas how to debug this kind of memory leak bug? > And how to find out whether the bug is in my code, NumPy or elsewhere? > > Thanks for any help! > Jaakko > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From njs at pobox.com Mon Feb 25 11:33:49 2013 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 25 Feb 2013 16:33:49 +0000 Subject: [Numpy-discussion] What should np.ndarray.__contains__ do In-Reply-To: <1361805016.32413.24.camel@sebastian-laptop> References: <1361805016.32413.24.camel@sebastian-laptop> Message-ID: On Mon, Feb 25, 2013 at 3:10 PM, Sebastian Berg wrote: > Hello all, > > currently the `__contains__` method or the `in` operator on arrays, does > not return what the user would expect when in the operation `a in b` the > `a` is not a single element (see "In [3]-[4]" below). True, I did not expect that! > The first solution coming to mind might be checking `all()` for all > dimensions given in argument `a` (see line "In [5]" for a simplistic > example). This does not play too well with broadcasting however, but one > could maybe simply *not* broadcast at all (i.e. a.shape == > b.shape[b.ndim-a.ndim:]) and raise an error/return False otherwise. > > On the other hand one could say broadcasting of `a` onto `b` should be > "any" along that dimension (see "In [8]"). The other way should maybe > raise an error though (see "In [9]" to understand what I mean). 
> > I think using broadcasting dimensions where `a` is repeated over `b` as > the dimensions to use "any" logic on is the most general way for numpy > to handle this consistently, while the other way around could be handled > with an `all` but to me makes so little sense that I think it should be > an error. Of course this is different to a list of lists, which gives > False in these cases, but arrays are not list of lists... > > As a side note, since for loop, etc. use "for item in array", I do not > think that vectorizing along `a` as np.in1d does is reasonable. `in` > should return a single boolean. Python effectively calls bool() on the return value from __contains__, so reasonableness doesn't even come into it -- the only possible behaviours for `in` are to return True, False, or raise an exception. I admit that I don't actually really understand any of this discussion of broadcasting. in's semantics are, "is this scalar in this container"? (And the scalarness is enforced by Python, as per above.) So I think we should find some approach where the left argument is treated as a scalar. The two approaches that I can see, and which generalize the behaviour of simple Python lists in natural ways, are: a) the left argument is coerced to a scalar of the appropriate type, then we check if that value appears anywhere in the array (basically raveling the right argument). b) for an array with shape (n1, n2, n3, ...), the left argument is treated as an array of shape (n2, n3, ...), and we check if that subarray (as a whole) appears anywhere in the array. Or in other words, 'A in B' is true iff there is some i such that np.array_equals(B[i], A). Question 1: are there any other sensible options that aren't on this list? Question 2: if not, then which should we choose? (Or we could choose both, I suppose, depending on what the left argument looks like.) Between these two options, I like (a) and don't like (b). The pretending-to-be-a-list-of-lists special case behaviour for multidimensional arrays is already weird and confusing, and besides, I'd expect equality comparison on arrays to use ==, not array_equals. So (b) feels pretty inconsistent with other numpy conventions to me. -n > I have opened an issue for it: > https://github.com/numpy/numpy/issues/3016#issuecomment-14045545 > > > Regards, > > Sebastian > > In [1]: a = np.array([0, 2]) > > In [2]: b = np.arange(10).reshape(5,2) > > In [3]: b > Out[3]: > array([[0, 1], > [2, 3], > [4, 5], > [6, 7], > [8, 9]]) > > In [4]: a in b > Out[4]: True > > In [5]: (b == a).any() > Out[5]: True > > In [6]: (b == a).all(0).any() # the 0 could be multiple axes > Out[6]: False > > In [7]: a_2d = a[None,:] > > In [8]: a_2d in b # broadcast dimension means "any" -> True > Out[8]: True > > In [9]: [0, 1] in b[:,:1] # should not work (or be False, not True) > Out[9]: True > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From toddrjen at gmail.com Mon Feb 25 12:01:53 2013 From: toddrjen at gmail.com (Todd) Date: Mon, 25 Feb 2013 18:01:53 +0100 Subject: [Numpy-discussion] What should np.ndarray.__contains__ do In-Reply-To: References: <1361805016.32413.24.camel@sebastian-laptop> Message-ID: The problem with b is that it breaks down if the two status have the same dimensionality. I think a better approach would be for a in b With a having n dimensions, it returns true if there is any subarray of b that matches a along the last n dimensions. 
So if a has 3 dimensions and b has 6, a in b is true iff there is any i, j, k, m, n, p such that a=b[i, j, k, m:m+a.shape[0], n:n+a.shape[1], p:p+a.shape[2]] ] This isn't a very clear way to describe it, but I think it is consistent with the concept of a being a subarray of b even when they have the same dimensionality. On Feb 25, 2013 5:34 PM, "Nathaniel Smith" wrote: > On Mon, Feb 25, 2013 at 3:10 PM, Sebastian Berg > wrote: > > Hello all, > > > > currently the `__contains__` method or the `in` operator on arrays, does > > not return what the user would expect when in the operation `a in b` the > > `a` is not a single element (see "In [3]-[4]" below). > > True, I did not expect that! > > > The first solution coming to mind might be checking `all()` for all > > dimensions given in argument `a` (see line "In [5]" for a simplistic > > example). This does not play too well with broadcasting however, but one > > could maybe simply *not* broadcast at all (i.e. a.shape == > > b.shape[b.ndim-a.ndim:]) and raise an error/return False otherwise. > > > > On the other hand one could say broadcasting of `a` onto `b` should be > > "any" along that dimension (see "In [8]"). The other way should maybe > > raise an error though (see "In [9]" to understand what I mean). > > > > I think using broadcasting dimensions where `a` is repeated over `b` as > > the dimensions to use "any" logic on is the most general way for numpy > > to handle this consistently, while the other way around could be handled > > with an `all` but to me makes so little sense that I think it should be > > an error. Of course this is different to a list of lists, which gives > > False in these cases, but arrays are not list of lists... > > > > As a side note, since for loop, etc. use "for item in array", I do not > > think that vectorizing along `a` as np.in1d does is reasonable. `in` > > should return a single boolean. > > Python effectively calls bool() on the return value from __contains__, > so reasonableness doesn't even come into it -- the only possible > behaviours for `in` are to return True, False, or raise an exception. > > I admit that I don't actually really understand any of this discussion > of broadcasting. in's semantics are, "is this scalar in this > container"? (And the scalarness is enforced by Python, as per above.) > So I think we should find some approach where the left argument is > treated as a scalar. > > The two approaches that I can see, and which generalize the behaviour > of simple Python lists in natural ways, are: > > a) the left argument is coerced to a scalar of the appropriate type, > then we check if that value appears anywhere in the array (basically > raveling the right argument). > > b) for an array with shape (n1, n2, n3, ...), the left argument is > treated as an array of shape (n2, n3, ...), and we check if that > subarray (as a whole) appears anywhere in the array. Or in other > words, 'A in B' is true iff there is some i such that > np.array_equals(B[i], A). > > Question 1: are there any other sensible options that aren't on this list? > > Question 2: if not, then which should we choose? (Or we could choose > both, I suppose, depending on what the left argument looks like.) > > Between these two options, I like (a) and don't like (b). The > pretending-to-be-a-list-of-lists special case behaviour for > multidimensional arrays is already weird and confusing, and besides, > I'd expect equality comparison on arrays to use ==, not array_equals. 
> So (b) feels pretty inconsistent with other numpy conventions to me. > > -n > > > I have opened an issue for it: > > https://github.com/numpy/numpy/issues/3016#issuecomment-14045545 > > > > > > Regards, > > > > Sebastian > > > > In [1]: a = np.array([0, 2]) > > > > In [2]: b = np.arange(10).reshape(5,2) > > > > In [3]: b > > Out[3]: > > array([[0, 1], > > [2, 3], > > [4, 5], > > [6, 7], > > [8, 9]]) > > > > In [4]: a in b > > Out[4]: True > > > > In [5]: (b == a).any() > > Out[5]: True > > > > In [6]: (b == a).all(0).any() # the 0 could be multiple axes > > Out[6]: False > > > > In [7]: a_2d = a[None,:] > > > > In [8]: a_2d in b # broadcast dimension means "any" -> True > > Out[8]: True > > > > In [9]: [0, 1] in b[:,:1] # should not work (or be False, not True) > > Out[9]: True > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Feb 25 12:07:21 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 25 Feb 2013 10:07:21 -0700 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Mon, Feb 25, 2013 at 8:50 AM, Skipper Seabold wrote: > On Mon, Feb 25, 2013 at 10:43 AM, Till Stensitzki > wrote: > > > > First, sorry that i didnt search for an old thread, but because i > disagree with > > conclusion i would at least address my reason: > > > >> I don't like > >> np.abs(arr).max() > >> because I have to concentrate to much on the braces, especially if arr > >> is a calculation > > > > This exactly, adding an abs into an old expression is always a little > annoyance > > due to the parenthesis. The argument that np.abs() also works is true for > > (almost?) every other method. The fact that so many methods already > exists, > > especially for most of the commonly used functions (min, max, dot, mean, > std, > > argmin, argmax, conj, T) makes me missing abs. Of course, if one would > redesign > > the api, one would drop most methods (i am looking at you ptp and > byteswap). But > > the objected is already cluttered and adding abs is imo logical > application of > > "practicality beats purity". > > > > I tend to agree here. The situation isn't all that dire for the number of > methods in an array. No scrolling at reasonably small terminal sizes. > > [~/] > [3]: x. > x.T x.copy x.getfield x.put x.std > x.all x.ctypes x.imag x.ravel x.strides > x.any x.cumprod x.item x.real x.sum > x.argmax x.cumsum x.itemset x.repeat x.swapaxes > x.argmin x.data x.itemsize x.reshape x.take > x.argsort x.diagonal x.max x.resize x.tofile > x.astype x.dot x.mean x.round x.tolist > x.base x.dtype x.min x.searchsorted x.tostring > x.byteswap x.dump x.nbytes x.setfield x.trace > x.choose x.dumps x.ndim x.setflags x.transpose > x.clip x.fill x.newbyteorder x.shape x.var > x.compress x.flags x.nonzero x.size x.view > x.conj x.flat x.prod x.sort > x.conjugate x.flatten x.ptp x.squeeze > > > I find myself typing things like > > arr.abs() > > and > > arr.unique() > > quite often. > > Somehow, this reminds me of I. N. Hersteins book, Topics in Algebra, where he did function composition left-to-right instead of right-to-left. 
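For concreteness, a minimal sketch of the two reading orders being compared; it assumes nothing beyond the ndarray.__abs__ hook already shown above, and is only one way to spell the same computation:

import numpy as np

arr = np.linspace(-1.0, 1.0, 5)

# right-to-left: each extra step wraps the whole expression in parentheses
peak = np.abs(arr - arr.mean()).max()

# left-to-right: steps are appended; __abs__ is the hook the abs()
# builtin already dispatches to, so no new method is needed for this
peak_chained = (arr - arr.mean()).__abs__().max()

assert peak == peak_chained
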
>From a practical reading and typing point of view, left-to-right makes sense, to add a new function you don't need to go way back to the beginning of a line and you don't need to read right-to-left to see what is going on. However, the line eventually gets too long, line breaks are inconvenient, and you eventually forget what was at the beginning. I think short left-to-right sequences are very nice. OTOH, one call/line is a bit like left-to-right, only up-to-down, and there is no right margin, but you do save the output generated on the previous line. Herstein's attempt to revolutionise old algebraic habits never caught on. But it was a bold attempt ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Mon Feb 25 12:09:48 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Mon, 25 Feb 2013 18:09:48 +0100 Subject: [Numpy-discussion] What should np.ndarray.__contains__ do In-Reply-To: References: <1361805016.32413.24.camel@sebastian-laptop> Message-ID: <1361812188.32413.47.camel@sebastian-laptop> On Mon, 2013-02-25 at 16:33 +0000, Nathaniel Smith wrote: > On Mon, Feb 25, 2013 at 3:10 PM, Sebastian Berg > wrote: > > Hello all, > > > > currently the `__contains__` method or the `in` operator on arrays, does > > not return what the user would expect when in the operation `a in b` the > > `a` is not a single element (see "In [3]-[4]" below). > > True, I did not expect that! > > The two approaches that I can see, and which generalize the behaviour > of simple Python lists in natural ways, are: > > a) the left argument is coerced to a scalar of the appropriate type, > then we check if that value appears anywhere in the array (basically > raveling the right argument). > > b) for an array with shape (n1, n2, n3, ...), the left argument is > treated as an array of shape (n2, n3, ...), and we check if that > subarray (as a whole) appears anywhere in the array. Or in other > words, 'A in B' is true iff there is some i such that > np.array_equals(B[i], A). > > Question 1: are there any other sensible options that aren't on this list? > > Question 2: if not, then which should we choose? (Or we could choose > both, I suppose, depending on what the left argument looks like.) > > Between these two options, I like (a) and don't like (b). The > pretending-to-be-a-list-of-lists special case behaviour for > multidimensional arrays is already weird and confusing, and besides, > I'd expect equality comparison on arrays to use ==, not array_equals. > So (b) feels pretty inconsistent with other numpy conventions to me. > I agree with rejecting (b). (a) seems a good way to think about the problem and I don't see other sensible options. The question is, lets say you have an array b = [[0, 1], [2, 3]] and a = [[0, 1]] since they are both 2d, should b be interpreted as two 2d elements? Another way of seeing this would be ignoring one sized dimensions in `a` for the sake of defining its "element". This would allow: In [1]: b = np.arange(10).reshape(5,2) In [2]: b Out[2]: array([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]) In [3]: a = np.array([[0, 1]]) # extra dimensions at the start In [4]: a in b Out[4]: True # But would also allow transpose, since now the last axes is a dummy: In [5]: a.T in b.T Out[5]: True Those two examples could also be a shape mismatch error, I tend to think they are reasonable enough to work, but then the user could just reshape/transpose to achieve the same. I also wondered about b having i.e. 
b.shape = (5,1) with a.shape = (1,2) being sensible enough to be not an error, but this element thinking is a good reasoning for rejecting it IMO. Maybe this is clearer, Sebastian > -n > > > I have opened an issue for it: > > https://github.com/numpy/numpy/issues/3016#issuecomment-14045545 > > > > > > Regards, > > > > Sebastian > > > > In [1]: a = np.array([0, 2]) > > > > In [2]: b = np.arange(10).reshape(5,2) > > > > In [3]: b > > Out[3]: > > array([[0, 1], > > [2, 3], > > [4, 5], > > [6, 7], > > [8, 9]]) > > > > In [4]: a in b > > Out[4]: True > > > > In [5]: (b == a).any() > > Out[5]: True > > > > In [6]: (b == a).all(0).any() # the 0 could be multiple axes > > Out[6]: False > > > > In [7]: a_2d = a[None,:] > > > > In [8]: a_2d in b # broadcast dimension means "any" -> True > > Out[8]: True > > > > In [9]: [0, 1] in b[:,:1] # should not work (or be False, not True) > > Out[9]: True > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From sebastian at sipsolutions.net Mon Feb 25 12:49:13 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Mon, 25 Feb 2013 18:49:13 +0100 Subject: [Numpy-discussion] What should np.ndarray.__contains__ do In-Reply-To: References: <1361805016.32413.24.camel@sebastian-laptop> Message-ID: <1361814553.32413.57.camel@sebastian-laptop> On Mon, 2013-02-25 at 18:01 +0100, Todd wrote: > The problem with b is that it breaks down if the two status have the > same dimensionality. > > I think a better approach would be for > > a in b > > With a having n dimensions, it returns true if there is any subarray > of b that matches a along the last n dimensions. > > So if a has 3 dimensions and b has 6, a in b is true iff there is any > i, j, k, m, n, p such that > > a=b[i, j, k, > m:m+a.shape[0], > n:n+a.shape[1], > p:p+a.shape[2]] ] > > This isn't a very clear way to describe it, but I think it is > consistent with the concept of a being a subarray of b even when they > have the same dimensionality. > Oh, great point. Guess this is the most general way, I completely missed this option. Allows [0, 3] in [1, 0, 3, 5] to be true. I am not sure if this kind of matching should be part of the in operator or not, though on the other hand it would only do something reasonable when otherwise an error would be thrown and it definitely is useful and compatible to what anyone else might expect. > On Feb 25, 2013 5:34 PM, "Nathaniel Smith" wrote: > On Mon, Feb 25, 2013 at 3:10 PM, Sebastian Berg > wrote: > > Hello all, > > > > currently the `__contains__` method or the `in` operator on > arrays, does > > not return what the user would expect when in the operation > `a in b` the > > `a` is not a single element (see "In [3]-[4]" below). > > True, I did not expect that! > > > The first solution coming to mind might be checking `all()` > for all > > dimensions given in argument `a` (see line "In [5]" for a > simplistic > > example). This does not play too well with broadcasting > however, but one > > could maybe simply *not* broadcast at all (i.e. a.shape == > > b.shape[b.ndim-a.ndim:]) and raise an error/return False > otherwise. > > > > On the other hand one could say broadcasting of `a` onto `b` > should be > > "any" along that dimension (see "In [8]"). 
The other way > should maybe > > raise an error though (see "In [9]" to understand what I > mean). > > > > I think using broadcasting dimensions where `a` is repeated > over `b` as > > the dimensions to use "any" logic on is the most general way > for numpy > > to handle this consistently, while the other way around > could be handled > > with an `all` but to me makes so little sense that I think > it should be > > an error. Of course this is different to a list of lists, > which gives > > False in these cases, but arrays are not list of lists... > > > > As a side note, since for loop, etc. use "for item in > array", I do not > > think that vectorizing along `a` as np.in1d does is > reasonable. `in` > > should return a single boolean. > > Python effectively calls bool() on the return value from > __contains__, > so reasonableness doesn't even come into it -- the only > possible > behaviours for `in` are to return True, False, or raise an > exception. > > I admit that I don't actually really understand any of this > discussion > of broadcasting. in's semantics are, "is this scalar in this > container"? (And the scalarness is enforced by Python, as per > above.) > So I think we should find some approach where the left > argument is > treated as a scalar. > > The two approaches that I can see, and which generalize the > behaviour > of simple Python lists in natural ways, are: > > a) the left argument is coerced to a scalar of the appropriate > type, > then we check if that value appears anywhere in the array > (basically > raveling the right argument). > > b) for an array with shape (n1, n2, n3, ...), the left > argument is > treated as an array of shape (n2, n3, ...), and we check if > that > subarray (as a whole) appears anywhere in the array. Or in > other > words, 'A in B' is true iff there is some i such that > np.array_equals(B[i], A). > > Question 1: are there any other sensible options that aren't > on this list? > > Question 2: if not, then which should we choose? (Or we could > choose > both, I suppose, depending on what the left argument looks > like.) > > Between these two options, I like (a) and don't like (b). The > pretending-to-be-a-list-of-lists special case behaviour for > multidimensional arrays is already weird and confusing, and > besides, > I'd expect equality comparison on arrays to use ==, not > array_equals. > So (b) feels pretty inconsistent with other numpy conventions > to me. 
> > -n > > > I have opened an issue for it: > > > https://github.com/numpy/numpy/issues/3016#issuecomment-14045545 > > > > > > Regards, > > > > Sebastian > > > > In [1]: a = np.array([0, 2]) > > > > In [2]: b = np.arange(10).reshape(5,2) > > > > In [3]: b > > Out[3]: > > array([[0, 1], > > [2, 3], > > [4, 5], > > [6, 7], > > [8, 9]]) > > > > In [4]: a in b > > Out[4]: True > > > > In [5]: (b == a).any() > > Out[5]: True > > > > In [6]: (b == a).all(0).any() # the 0 could be multiple axes > > Out[6]: False > > > > In [7]: a_2d = a[None,:] > > > > In [8]: a_2d in b # broadcast dimension means "any" -> True > > Out[8]: True > > > > In [9]: [0, 1] in b[:,:1] # should not work (or be False, > not True) > > Out[9]: True > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From raul at virtualmaterials.com Mon Feb 25 16:52:08 2013 From: raul at virtualmaterials.com (Raul Cota) Date: Mon, 25 Feb 2013 14:52:08 -0700 Subject: [Numpy-discussion] Leaking memory problem In-Reply-To: References: <512B6A07.8060701@aalto.fi> Message-ID: <512BDD08.8060204@virtualmaterials.com> Josef's suggestion is the first thing I'd try. Are you doing any of this in C ? It is easy to end up duplicating memory that you need to Py_DECREF . In the C debugger you should be able to monitor the ref count of your python objects. btw, for manual tracking of reference counts you can do, sys.getrefcount it has come handy for me every once in a while but usually the garbage collector is all I've needed besides patience. The way I usually run the gc is by doing gc.enable( ) gc.set_debug(gc.DEBUG_LEAK) as pretty much my first lines and then after everything is said and done I do something along the lines of, gc.collect( ) for x in gc.garbage: s = str(x) print type(x) you'd have to set up your program to quite before it runs out of memory of course but I understand you get to run for quite a few iterations before failure. Raul On 25/02/2013 9:03 AM, josef.pktd at gmail.com wrote: > On Mon, Feb 25, 2013 at 8:41 AM, Jaakko Luttinen > wrote: >> Hi! >> >> I was wondering if anyone could help me in finding a memory leak problem >> with NumPy. My project is quite massive and I haven't been able to >> construct a simple example which would reproduce the problem.. >> >> I have an iterative algorithm which should not increase the memory usage >> as the iteration progresses. However, after the first iteration, 1GB of >> memory is used and it steadily increases until at about 100-200 >> iterations 8GB is used and the program exits with MemoryError. >> >> I have a collection of objects which contain large arrays. In each >> iteration, the objects are updated in turns by re-computing the arrays >> they contain. The number of arrays and their sizes are constant (do not >> change during the iteration). So the memory usage should not increase, >> and I'm a bit confused, how can the program run out of memory if it can >> easily compute at least a few iterations.. > There are some stories where pythons garbage collection is too slow to kick in. 
> try to call gc.collect in the loop to see if it helps. > > roughly what I remember: collection works by the number of objects, if > you have a few very large arrays, then memory increases, but garbage > collection doesn't start yet. > > Josef > > >> I've tried to use Pympler, but I've understood that it doesn't show the >> memory usage of NumPy arrays.. ? >> >> I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing >> gc.garbage at each iteration, but that doesn't show anything. >> >> Does anyone have any ideas how to debug this kind of memory leak bug? >> And how to find out whether the bug is in my code, NumPy or elsewhere? >> >> Thanks for any help! >> Jaakko >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > From charlesr.harris at gmail.com Mon Feb 25 19:11:24 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 25 Feb 2013 17:11:24 -0700 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Sat, Feb 23, 2013 at 1:33 PM, Robert Kern wrote: > On Sat, Feb 23, 2013 at 7:25 PM, Nathaniel Smith wrote: > > On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki > wrote: > >> Hello, > >> i know that the array object is already crowded, but i would like > >> to see the abs method added, especially doing work on the console. > >> Considering that many much less used functions are also implemented > >> as a method, i don't think adding one more would be problematic. > > > > My gut feeling is that we have too many methods on ndarray, not too > > few, but in any case, can you elaborate? What's the rationale for why > > np.abs(a) is so much harder than a.abs(), and why this function and > > not other unary functions? > > Or even abs(a). > Well, that just calls a method: In [1]: ones(3).__abs__() Out[1]: array([ 1., 1., 1.]) Which shows the advantage of methods, they provide universal function hooks. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Feb 25 19:49:51 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Feb 2013 19:49:51 -0500 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Mon, Feb 25, 2013 at 7:11 PM, Charles R Harris wrote: > > > On Sat, Feb 23, 2013 at 1:33 PM, Robert Kern wrote: >> >> On Sat, Feb 23, 2013 at 7:25 PM, Nathaniel Smith wrote: >> > On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki >> > wrote: >> >> Hello, >> >> i know that the array object is already crowded, but i would like >> >> to see the abs method added, especially doing work on the console. >> >> Considering that many much less used functions are also implemented >> >> as a method, i don't think adding one more would be problematic. >> > >> > My gut feeling is that we have too many methods on ndarray, not too >> > few, but in any case, can you elaborate? What's the rationale for why >> > np.abs(a) is so much harder than a.abs(), and why this function and >> > not other unary functions? >> >> Or even abs(a). > > > Well, that just calls a method: > > In [1]: ones(3).__abs__() > Out[1]: array([ 1., 1., 1.]) > > Which shows the advantage of methods, they provide universal function hooks. 
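As a small illustration of that hook, the same abs() spelling reaches any object that defines __abs__; the Signal class below is invented purely for this example and is not a numpy type:

import numpy as np
from decimal import Decimal

class Signal(object):
    """Made-up class, only to show that abs() dispatches to __abs__."""
    def __init__(self, data):
        self.data = np.asarray(data)
    def __abs__(self):
        return Signal(np.abs(self.data))

# one spelling, dispatched through __abs__ wherever it is defined
print(abs(-3))                        # builtin int
print(abs(np.arange(-2, 3)))          # ndarray
print(abs(Decimal('-1.5')))           # Decimal
print(abs(Signal([-1.0, 2.0])).data)  # user-defined type
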
Maybe we should start to advertise magic methods. I only recently discovered I can use divmod instead of the numpy functions: >>> divmod(np.array([1.4]), 1) (array([ 1.]), array([ 0.4])) >>> np.array([1.4]).__divmod__(1) (array([ 1.]), array([ 0.4])) Josef > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Mon Feb 25 19:57:27 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Feb 2013 19:57:27 -0500 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Mon, Feb 25, 2013 at 7:49 PM, wrote: > On Mon, Feb 25, 2013 at 7:11 PM, Charles R Harris > wrote: >> >> >> On Sat, Feb 23, 2013 at 1:33 PM, Robert Kern wrote: >>> >>> On Sat, Feb 23, 2013 at 7:25 PM, Nathaniel Smith wrote: >>> > On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki >>> > wrote: >>> >> Hello, >>> >> i know that the array object is already crowded, but i would like >>> >> to see the abs method added, especially doing work on the console. >>> >> Considering that many much less used functions are also implemented >>> >> as a method, i don't think adding one more would be problematic. >>> > >>> > My gut feeling is that we have too many methods on ndarray, not too >>> > few, but in any case, can you elaborate? What's the rationale for why >>> > np.abs(a) is so much harder than a.abs(), and why this function and >>> > not other unary functions? >>> >>> Or even abs(a). >> >> >> Well, that just calls a method: >> >> In [1]: ones(3).__abs__() >> Out[1]: array([ 1., 1., 1.]) >> >> Which shows the advantage of methods, they provide universal function hooks. > > Maybe we should start to advertise magic methods. > I only recently discovered I can use divmod instead of the numpy functions: > >>>> divmod(np.array([1.4]), 1) > (array([ 1.]), array([ 0.4])) >>>> np.array([1.4]).__divmod__(1) > (array([ 1.]), array([ 0.4])) Thanks for the hint. my new favorite :) >>> (freq - nobs * probs).__abs__().max() 132.0 Josef > > Josef > > >> >> Chuck >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> From alan.isaac at gmail.com Mon Feb 25 20:23:15 2013 From: alan.isaac at gmail.com (Alan G Isaac) Date: Mon, 25 Feb 2013 20:23:15 -0500 Subject: [Numpy-discussion] drawing the line (was: Adding .abs() method to the array object) In-Reply-To: References: Message-ID: <512C0E83.7020906@gmail.com> I'm hoping this discussion will return to the drawing the line question. 
http://stackoverflow.com/questions/8108688/in-python-when-should-i-use-a-function-instead-of-a-method Alan Isaac From sebastian at sipsolutions.net Mon Feb 25 21:20:17 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Tue, 26 Feb 2013 03:20:17 +0100 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: <1361845217.4602.6.camel@sebastian-laptop> On Mon, 2013-02-25 at 10:50 -0500, Skipper Seabold wrote: > On Mon, Feb 25, 2013 at 10:43 AM, Till Stensitzki > wrote: > > > > First, sorry that i didnt search for an old thread, but because i > disagree with > > conclusion i would at least address my reason: > > > >> I don't like > >> np.abs(arr).max() > >> because I have to concentrate to much on the braces, especially if > arr > >> is a calculation > > > > This exactly, adding an abs into an old expression is always a > little annoyance > > due to the parenthesis. The argument that np.abs() also works is > true for > > (almost?) every other method. The fact that so many methods already > exists, > > especially for most of the commonly used functions (min, max, dot, > mean, std, > > argmin, argmax, conj, T) makes me missing abs. Of course, if one > would redesign > > the api, one would drop most methods (i am looking at you ptp and > byteswap). But > > the objected is already cluttered and adding abs is imo logical > application of > > "practicality beats purity". > > > > I tend to agree here. The situation isn't all that dire for the number > of methods in an array. No scrolling at reasonably small terminal > sizes. > > [~/] > [3]: x. > x.T x.copy x.getfield x.put x.std > x.all x.ctypes x.imag x.ravel > x.strides > x.any x.cumprod x.item x.real x.sum > x.argmax x.cumsum x.itemset x.repeat > x.swapaxes > x.argmin x.data x.itemsize x.reshape x.take > x.argsort x.diagonal x.max x.resize > x.tofile > x.astype x.dot x.mean x.round > x.tolist > x.base x.dtype x.min x.searchsorted > x.tostring > x.byteswap x.dump x.nbytes x.setfield > x.trace > x.choose x.dumps x.ndim x.setflags > x.transpose > x.clip x.fill x.newbyteorder x.shape x.var > x.compress x.flags x.nonzero x.size x.view > x.conj x.flat x.prod x.sort > x.conjugate x.flatten x.ptp x.squeeze > > Two small things (not sure if it matters much). But first almost all of these methods are related to the container and not the elements. Second actually using a method arr.abs() has a tiny pitfall, since abs would work on numpy types, but not on python types. This means that: np.array([1, 2, 3]).max().abs() works, but np.array([1, 2, 3], dtype=object).max().abs() breaks. Python has a safe name for abs already... > I find myself typing things like > > arr.abs() > > and > > arr.unique() > > quite often. 
> > Skipper > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From josef.pktd at gmail.com Mon Feb 25 21:58:01 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Feb 2013 21:58:01 -0500 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: <1361845217.4602.6.camel@sebastian-laptop> References: <1361845217.4602.6.camel@sebastian-laptop> Message-ID: On Mon, Feb 25, 2013 at 9:20 PM, Sebastian Berg wrote: > On Mon, 2013-02-25 at 10:50 -0500, Skipper Seabold wrote: >> On Mon, Feb 25, 2013 at 10:43 AM, Till Stensitzki >> wrote: >> > >> > First, sorry that i didnt search for an old thread, but because i >> disagree with >> > conclusion i would at least address my reason: >> > >> >> I don't like >> >> np.abs(arr).max() >> >> because I have to concentrate to much on the braces, especially if >> arr >> >> is a calculation >> > >> > This exactly, adding an abs into an old expression is always a >> little annoyance >> > due to the parenthesis. The argument that np.abs() also works is >> true for >> > (almost?) every other method. The fact that so many methods already >> exists, >> > especially for most of the commonly used functions (min, max, dot, >> mean, std, >> > argmin, argmax, conj, T) makes me missing abs. Of course, if one >> would redesign >> > the api, one would drop most methods (i am looking at you ptp and >> byteswap). But >> > the objected is already cluttered and adding abs is imo logical >> application of >> > "practicality beats purity". >> > >> >> I tend to agree here. The situation isn't all that dire for the number >> of methods in an array. No scrolling at reasonably small terminal >> sizes. >> >> [~/] >> [3]: x. >> x.T x.copy x.getfield x.put x.std >> x.all x.ctypes x.imag x.ravel >> x.strides >> x.any x.cumprod x.item x.real x.sum >> x.argmax x.cumsum x.itemset x.repeat >> x.swapaxes >> x.argmin x.data x.itemsize x.reshape x.take >> x.argsort x.diagonal x.max x.resize >> x.tofile >> x.astype x.dot x.mean x.round >> x.tolist >> x.base x.dtype x.min x.searchsorted >> x.tostring >> x.byteswap x.dump x.nbytes x.setfield >> x.trace >> x.choose x.dumps x.ndim x.setflags >> x.transpose >> x.clip x.fill x.newbyteorder x.shape x.var >> x.compress x.flags x.nonzero x.size x.view >> x.conj x.flat x.prod x.sort >> x.conjugate x.flatten x.ptp x.squeeze >> >> > Two small things (not sure if it matters much). But first almost all of > these methods are related to the container and not the elements. Second > actually using a method arr.abs() has a tiny pitfall, since abs would > work on numpy types, but not on python types. This means that: > > np.array([1, 2, 3]).max().abs() > > works, but > > np.array([1, 2, 3], dtype=object).max().abs() > > breaks. Python has a safe name for abs already... >>> (np.array([1, 2, 3], dtype=object)).max() 3 >>> (np.array([1, 2, 3], dtype=object)).__abs__().max() 3 >>> (np.array([1, 2, '3'], dtype=object)).__abs__() Traceback (most recent call last): File "", line 1, in TypeError: bad operand type for abs(): 'str' >>> map(abs, [1, 2, 3]) [1, 2, 3] >>> map(abs, [1, 2, '3']) Traceback (most recent call last): File "", line 1, in TypeError: bad operand type for abs(): 'str' I don't see a difference. (I don't expect to use max abs on anything else than numbers.) Josef > > >> I find myself typing things like >> >> arr.abs() >> >> and >> >> arr.unique() >> >> quite often. 
>> >> Skipper >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From josef.pktd at gmail.com Mon Feb 25 22:04:54 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Feb 2013 22:04:54 -0500 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: <1361845217.4602.6.camel@sebastian-laptop> Message-ID: On Mon, Feb 25, 2013 at 9:58 PM, wrote: > On Mon, Feb 25, 2013 at 9:20 PM, Sebastian Berg > wrote: >> On Mon, 2013-02-25 at 10:50 -0500, Skipper Seabold wrote: >>> On Mon, Feb 25, 2013 at 10:43 AM, Till Stensitzki >>> wrote: >>> > >>> > First, sorry that i didnt search for an old thread, but because i >>> disagree with >>> > conclusion i would at least address my reason: >>> > >>> >> I don't like >>> >> np.abs(arr).max() >>> >> because I have to concentrate to much on the braces, especially if >>> arr >>> >> is a calculation >>> > >>> > This exactly, adding an abs into an old expression is always a >>> little annoyance >>> > due to the parenthesis. The argument that np.abs() also works is >>> true for >>> > (almost?) every other method. The fact that so many methods already >>> exists, >>> > especially for most of the commonly used functions (min, max, dot, >>> mean, std, >>> > argmin, argmax, conj, T) makes me missing abs. Of course, if one >>> would redesign >>> > the api, one would drop most methods (i am looking at you ptp and >>> byteswap). But >>> > the objected is already cluttered and adding abs is imo logical >>> application of >>> > "practicality beats purity". >>> > >>> >>> I tend to agree here. The situation isn't all that dire for the number >>> of methods in an array. No scrolling at reasonably small terminal >>> sizes. >>> >>> [~/] >>> [3]: x. >>> x.T x.copy x.getfield x.put x.std >>> x.all x.ctypes x.imag x.ravel >>> x.strides >>> x.any x.cumprod x.item x.real x.sum >>> x.argmax x.cumsum x.itemset x.repeat >>> x.swapaxes >>> x.argmin x.data x.itemsize x.reshape x.take >>> x.argsort x.diagonal x.max x.resize >>> x.tofile >>> x.astype x.dot x.mean x.round >>> x.tolist >>> x.base x.dtype x.min x.searchsorted >>> x.tostring >>> x.byteswap x.dump x.nbytes x.setfield >>> x.trace >>> x.choose x.dumps x.ndim x.setflags >>> x.transpose >>> x.clip x.fill x.newbyteorder x.shape x.var >>> x.compress x.flags x.nonzero x.size x.view >>> x.conj x.flat x.prod x.sort >>> x.conjugate x.flatten x.ptp x.squeeze >>> >>> >> Two small things (not sure if it matters much). But first almost all of >> these methods are related to the container and not the elements. Second >> actually using a method arr.abs() has a tiny pitfall, since abs would >> work on numpy types, but not on python types. This means that: >> >> np.array([1, 2, 3]).max().abs() >> >> works, but >> >> np.array([1, 2, 3], dtype=object).max().abs() >> >> breaks. Python has a safe name for abs already... 
> >>>> (np.array([1, 2, 3], dtype=object)).max() > 3 >>>> (np.array([1, 2, 3], dtype=object)).__abs__().max() > 3 >>>> (np.array([1, 2, '3'], dtype=object)).__abs__() > Traceback (most recent call last): > File "", line 1, in > TypeError: bad operand type for abs(): 'str' > >>>> map(abs, [1, 2, 3]) > [1, 2, 3] >>>> map(abs, [1, 2, '3']) > Traceback (most recent call last): > File "", line 1, in > TypeError: bad operand type for abs(): 'str' or maybe more useful >>> from decimal import Decimal >>> d = [Decimal(str(k)) for k in np.linspace(-1, 1, 5)] >>> map(abs, d) [Decimal('1.0'), Decimal('0.5'), Decimal('0.0'), Decimal('0.5'), Decimal('1.0')] >>> np.asarray(d).__abs__() array([1.0, 0.5, 0.0, 0.5, 1.0], dtype=object) >>> np.asarray(d).__abs__()[0] Decimal('1.0') Josef > > I don't see a difference. > > (I don't expect to use max abs on anything else than numbers.) > > Josef >> >> >>> I find myself typing things like >>> >>> arr.abs() >>> >>> and >>> >>> arr.unique() >>> >>> quite often. >>> >>> Skipper >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion From njs at pobox.com Tue Feb 26 02:04:12 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 26 Feb 2013 07:04:12 +0000 Subject: [Numpy-discussion] Leaking memory problem In-Reply-To: <512B6A07.8060701@aalto.fi> References: <512B6A07.8060701@aalto.fi> Message-ID: Is this with 1.7? There see a few memory leak fixes in 1.7, so if you aren't using that you should try it to be sure. And if you are using it, then there is one known memory leak bug in 1.7 that you might want to check whether you're hitting: https://github.com/numpy/numpy/issues/2969 -n On 25 Feb 2013 13:41, "Jaakko Luttinen" wrote: > Hi! > > I was wondering if anyone could help me in finding a memory leak problem > with NumPy. My project is quite massive and I haven't been able to > construct a simple example which would reproduce the problem.. > > I have an iterative algorithm which should not increase the memory usage > as the iteration progresses. However, after the first iteration, 1GB of > memory is used and it steadily increases until at about 100-200 > iterations 8GB is used and the program exits with MemoryError. > > I have a collection of objects which contain large arrays. In each > iteration, the objects are updated in turns by re-computing the arrays > they contain. The number of arrays and their sizes are constant (do not > change during the iteration). So the memory usage should not increase, > and I'm a bit confused, how can the program run out of memory if it can > easily compute at least a few iterations.. > > I've tried to use Pympler, but I've understood that it doesn't show the > memory usage of NumPy arrays.. ? > > I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing > gc.garbage at each iteration, but that doesn't show anything. > > Does anyone have any ideas how to debug this kind of memory leak bug? > And how to find out whether the bug is in my code, NumPy or elsewhere? > > Thanks for any help! 
> Jaakko > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.haessig at crans.org Tue Feb 26 03:20:48 2013 From: pierre.haessig at crans.org (Pierre Haessig) Date: Tue, 26 Feb 2013 09:20:48 +0100 Subject: [Numpy-discussion] another discussion on numpy correlate (and convolution) In-Reply-To: References: <512660CA.4060603@crans.org> Message-ID: <512C7060.5060005@crans.org> Hi, Le 22/02/2013 17:40, Matthew Brett a ?crit : > >From complete ignorance, do you think it is an option to allow a > (n_left, n_right) tuple as a value for 'mode'? > That may be an option. Another one would be to add some kind of `bounds` option which would be set to None by default but would accept the (n_left, n_right) tuple and would override `mode`. I don't know which one is better. The only thing I've in mind which may cause confusion is that `mode` normally receives a string ('valid', 'same', 'full') but it also accept corresponding integer codes (undocumented, but I guess it corresponds to a old signature of that function). So I wonder if there could be a confusion between : * mode = n * mode = (n_left, n_right) What do you think ? Best, Pierre -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL: From jaakko.luttinen at aalto.fi Tue Feb 26 03:27:40 2013 From: jaakko.luttinen at aalto.fi (Jaakko Luttinen) Date: Tue, 26 Feb 2013 10:27:40 +0200 Subject: [Numpy-discussion] Leaking memory problem In-Reply-To: References: <512B6A07.8060701@aalto.fi> Message-ID: <512C71FC.30304@aalto.fi> Thanks for all the answers, they were helpful! I was using 1.7.0 and now installed from git: https://github.com/numpy/numpy/archive/master.zip And it looks like the memory leak is gone, so I guess I was hitting that known memory leak bug. Thanks! -Jaakko On 02/26/2013 09:04 AM, Nathaniel Smith wrote: > Is this with 1.7? There see a few memory leak fixes in 1.7, so if you > aren't using that you should try it to be sure. And if you are using it, > then there is one known memory leak bug in 1.7 that you might want to > check whether you're hitting: > https://github.com/numpy/numpy/issues/2969 > > -n > > On 25 Feb 2013 13:41, "Jaakko Luttinen" > wrote: > > Hi! > > I was wondering if anyone could help me in finding a memory leak problem > with NumPy. My project is quite massive and I haven't been able to > construct a simple example which would reproduce the problem.. > > I have an iterative algorithm which should not increase the memory usage > as the iteration progresses. However, after the first iteration, 1GB of > memory is used and it steadily increases until at about 100-200 > iterations 8GB is used and the program exits with MemoryError. > > I have a collection of objects which contain large arrays. In each > iteration, the objects are updated in turns by re-computing the arrays > they contain. The number of arrays and their sizes are constant (do not > change during the iteration). So the memory usage should not increase, > and I'm a bit confused, how can the program run out of memory if it can > easily compute at least a few iterations.. > > I've tried to use Pympler, but I've understood that it doesn't show the > memory usage of NumPy arrays.. ? 
> > I also tried gc.set_debug(gc.DEBUG_UNCOLLECTABLE) and then printing > gc.garbage at each iteration, but that doesn't show anything. > > Does anyone have any ideas how to debug this kind of memory leak bug? > And how to find out whether the bug is in my code, NumPy or elsewhere? > > Thanks for any help! > Jaakko > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From pierre.haessig at crans.org Tue Feb 26 03:35:06 2013 From: pierre.haessig at crans.org (Pierre Haessig) Date: Tue, 26 Feb 2013 09:35:06 +0100 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: <512C73BA.7020906@crans.org> Hi, Le 23/02/2013 20:25, Nathaniel Smith a ?crit : > My gut feeling is that we have too many methods on ndarray, not too > few, but in any case, can you elaborate? What's the rationale for why > np.abs(a) is so much harder than a.abs(), and why this function and > not other unary functions? (Just another usecase where I see the x.abs() notation useful) If x is a complex array, I feel that x.abs() and x.angle() would be natural complements to x.real and x.imag. Of course, x.angle() only make much sense for complex arrays while x.abs() makes sense for any numerical array. best, Pierre -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL: From matthew.brett at gmail.com Tue Feb 26 03:39:46 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 26 Feb 2013 00:39:46 -0800 Subject: [Numpy-discussion] another discussion on numpy correlate (and convolution) In-Reply-To: <512C7060.5060005@crans.org> References: <512660CA.4060603@crans.org> <512C7060.5060005@crans.org> Message-ID: Hi, On Tue, Feb 26, 2013 at 12:20 AM, Pierre Haessig wrote: > Hi, > > Le 22/02/2013 17:40, Matthew Brett a ?crit : >> >From complete ignorance, do you think it is an option to allow a >> (n_left, n_right) tuple as a value for 'mode'? >> > That may be an option. Another one would be to add some kind of `bounds` > option which would be set to None by default but would accept the > (n_left, n_right) tuple and would override `mode`. > > I don't know which one is better. Personally I try to avoid mutually incompatible arguments. I guess you'd have to raise and error if the bounds were defined as well as mode? > The only thing I've in mind which may cause confusion is that `mode` > normally receives a string ('valid', 'same', 'full') but it also accept > corresponding integer codes (undocumented, but I guess it corresponds to > a old signature of that function). So I wonder if there could be a > confusion between : > * mode = n > * mode = (n_left, n_right) Maybe deprecate the integer arguments? It doesn't seem too bad to me that the numbers in the tuple are different in meaning from a scalar, particularly if the scalar argument is undocumented and will soon go away. 
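To make the proposed (n_left, n_right) behaviour concrete, a rough sketch; correlate_bounded is a made-up helper for illustration only, not an existing or proposed numpy signature, and it does not try to reproduce np.correlate's exact output conventions:

import numpy as np

def correlate_bounded(a, v, n_left, n_right):
    # Hypothetical helper: evaluate the cross-correlation
    # c(k) = sum_n a[n + k] * v[n] only at lags k = -n_left ... +n_right,
    # instead of choosing one of the 'valid'/'same'/'full' string modes.
    a = np.asarray(a, dtype=float)
    v = np.asarray(v, dtype=float)
    lags = np.arange(-n_left, n_right + 1)
    out = np.zeros(len(lags))
    for i, k in enumerate(lags):
        if k >= 0:
            m = max(min(len(a) - k, len(v)), 0)
            out[i] = np.dot(a[k:k + m], v[:m])
        else:
            m = max(min(len(a), len(v) + k), 0)
            out[i] = np.dot(a[:m], v[-k:-k + m])
    return lags, out

# only 26 lags are computed, however long the inputs are
lags, c = correlate_bounded(np.sin(0.1 * np.arange(200)),
                            np.sin(0.1 * np.arange(50)), 5, 20)
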
Cheers, Matthew From sebastian at sipsolutions.net Tue Feb 26 04:58:03 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Tue, 26 Feb 2013 10:58:03 +0100 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: <1361845217.4602.6.camel@sebastian-laptop> Message-ID: <1361872683.4602.12.camel@sebastian-laptop> On Mon, 2013-02-25 at 22:04 -0500, josef.pktd at gmail.com wrote: > On Mon, Feb 25, 2013 at 9:58 PM, wrote: > > On Mon, Feb 25, 2013 at 9:20 PM, Sebastian Berg > > wrote: > >> On Mon, 2013-02-25 at 10:50 -0500, Skipper Seabold wrote: > >>> On Mon, Feb 25, 2013 at 10:43 AM, Till Stensitzki > >>> wrote: > >>> > > >>> > First, sorry that i didnt search for an old thread, but because i > >>> disagree with > >>> > conclusion i would at least address my reason: > >>> > > >> Two small things (not sure if it matters much). But first almost all of > >> these methods are related to the container and not the elements. Second > >> actually using a method arr.abs() has a tiny pitfall, since abs would > >> work on numpy types, but not on python types. This means that: > >> > >> np.array([1, 2, 3]).max().abs() > >> > >> works, but > >> > >> np.array([1, 2, 3], dtype=object).max().abs() > >> > >> breaks. Python has a safe name for abs already... > > > >>>> (np.array([1, 2, 3], dtype=object)).max() > > 3 > >>>> (np.array([1, 2, 3], dtype=object)).__abs__().max() > > 3 > >>>> (np.array([1, 2, '3'], dtype=object)).__abs__() > > Traceback (most recent call last): > > File "", line 1, in > > TypeError: bad operand type for abs(): 'str' > > > >>>> map(abs, [1, 2, 3]) > > [1, 2, 3] > >>>> map(abs, [1, 2, '3']) > > Traceback (most recent call last): > > File "", line 1, in > > TypeError: bad operand type for abs(): 'str' > > or maybe more useful > > >>> from decimal import Decimal > >>> d = [Decimal(str(k)) for k in np.linspace(-1, 1, 5)] > >>> map(abs, d) > [Decimal('1.0'), Decimal('0.5'), Decimal('0.0'), Decimal('0.5'), Decimal('1.0')] > > >>> np.asarray(d).__abs__() > array([1.0, 0.5, 0.0, 0.5, 1.0], dtype=object) > >>> np.asarray(d).__abs__()[0] > Decimal('1.0') > > Josef > > > > > I don't see a difference. > > > > (I don't expect to use max abs on anything else than numbers.) > > The difference is about scalars only. And of course __abs__ is fine, but if numpy adds an abs method, its scalars would logically have it too. But then you diverge from python scalars. That has exactly the downside that you may write code that suddenly stops working for python scalars without noticing. I turned around the abs and max order here, so that the abs works on the scalar, not useful but just as an example. > > Josef > >> > >> > >>> I find myself typing things like > >>> > >>> arr.abs() > >>> > >>> and > >>> > >>> arr.unique() > >>> > >>> quite often. 
> >>> > >>> Skipper > >>> _______________________________________________ > >>> NumPy-Discussion mailing list > >>> NumPy-Discussion at scipy.org > >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion > >> > >> > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at scipy.org > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From toddrjen at gmail.com Tue Feb 26 05:16:47 2013 From: toddrjen at gmail.com (Todd) Date: Tue, 26 Feb 2013 11:16:47 +0100 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: <1361872683.4602.12.camel@sebastian-laptop> References: <1361845217.4602.6.camel@sebastian-laptop> <1361872683.4602.12.camel@sebastian-laptop> Message-ID: On Tue, Feb 26, 2013 at 10:58 AM, Sebastian Berg wrote: > On Mon, 2013-02-25 at 22:04 -0500, josef.pktd at gmail.com wrote: > > On Mon, Feb 25, 2013 at 9:58 PM, wrote: > > > On Mon, Feb 25, 2013 at 9:20 PM, Sebastian Berg > > > wrote: > > >> On Mon, 2013-02-25 at 10:50 -0500, Skipper Seabold wrote: > > >>> On Mon, Feb 25, 2013 at 10:43 AM, Till Stensitzki > > >>> wrote: > > >>> > > > >>> > First, sorry that i didnt search for an old thread, but because i > > >>> disagree with > > >>> > conclusion i would at least address my reason: > > >>> > > > > >> Two small things (not sure if it matters much). But first almost all > of > > >> these methods are related to the container and not the elements. > Second > > >> actually using a method arr.abs() has a tiny pitfall, since abs would > > >> work on numpy types, but not on python types. This means that: > > >> > > >> np.array([1, 2, 3]).max().abs() > > >> > > >> works, but > > >> > > >> np.array([1, 2, 3], dtype=object).max().abs() > > >> > > >> breaks. Python has a safe name for abs already... > > > > > >>>> (np.array([1, 2, 3], dtype=object)).max() > > > 3 > > >>>> (np.array([1, 2, 3], dtype=object)).__abs__().max() > > > 3 > > >>>> (np.array([1, 2, '3'], dtype=object)).__abs__() > > > Traceback (most recent call last): > > > File "", line 1, in > > > TypeError: bad operand type for abs(): 'str' > > > > > >>>> map(abs, [1, 2, 3]) > > > [1, 2, 3] > > >>>> map(abs, [1, 2, '3']) > > > Traceback (most recent call last): > > > File "", line 1, in > > > TypeError: bad operand type for abs(): 'str' > > > > or maybe more useful > > > > >>> from decimal import Decimal > > >>> d = [Decimal(str(k)) for k in np.linspace(-1, 1, 5)] > > >>> map(abs, d) > > [Decimal('1.0'), Decimal('0.5'), Decimal('0.0'), Decimal('0.5'), > Decimal('1.0')] > > > > >>> np.asarray(d).__abs__() > > array([1.0, 0.5, 0.0, 0.5, 1.0], dtype=object) > > >>> np.asarray(d).__abs__()[0] > > Decimal('1.0') > > > > Josef > > > > > > > > I don't see a difference. > > > > > > (I don't expect to use max abs on anything else than numbers.) > > > > > The difference is about scalars only. And of course __abs__ is fine, but > if numpy adds an abs method, its scalars would logically have it too. > But then you diverge from python scalars. That has exactly the downside > that you may write code that suddenly stops working for python scalars > without noticing. > > I turned around the abs and max order here, so that the abs works on the > scalar, not useful but just as an example. > But doesn't this also apply to many existing methods? 
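A small sketch making the asymmetry concrete with current numpy/python behaviour; since no .abs() method exists, .max() stands in for it here:

import numpy as np

a = np.array([1, 2, 3])
o = np.array([1, 2, 3], dtype=object)

x = a.max()   # numpy scalar (np.int64)
y = o.max()   # plain Python int: object arrays hand back their elements

print(abs(x))   # 3 -- the builtin goes through __abs__ and works for both
print(abs(y))   # 3

print(x.max())  # works: numpy scalars carry the ndarray methods
try:
    y.max()
except AttributeError:
    print("plain Python int has no .max() method")
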
-------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Tue Feb 26 05:17:46 2013 From: toddrjen at gmail.com (Todd) Date: Tue, 26 Feb 2013 11:17:46 +0100 Subject: [Numpy-discussion] GSOC 2013 Message-ID: Is numpy planning to participate in GSOC this year, either on their own or as a part of another group? If so, should we start trying to get some project suggestions together? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Tue Feb 26 05:21:29 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Tue, 26 Feb 2013 11:21:29 +0100 Subject: [Numpy-discussion] What should np.ndarray.__contains__ do In-Reply-To: References: <1361805016.32413.24.camel@sebastian-laptop> Message-ID: <1361874089.4602.18.camel@sebastian-laptop> On Mon, 2013-02-25 at 16:33 +0000, Nathaniel Smith wrote: > On Mon, Feb 25, 2013 at 3:10 PM, Sebastian Berg > wrote: > > Hello all, > > > > currently the `__contains__` method or the `in` operator on arrays, does > > not return what the user would expect when in the operation `a in b` the > > `a` is not a single element (see "In [3]-[4]" below). > > True, I did not expect that! > > The two approaches that I can see, and which generalize the behaviour > of simple Python lists in natural ways, are: > > a) the left argument is coerced to a scalar of the appropriate type, > then we check if that value appears anywhere in the array (basically > raveling the right argument). > How did I misread that? I guess you mean element and never subarray matching. Actually I am starting to think that is best. Subarray matching may be useful, but would probably be better off inside its own function. That also might be best with object arrays, since it is difficult to know if the user means a tuple as an element or a two element subarray, unless you say "input is array-like", which is possible (or more sensible) for a function. That would mean just make the use cases that current give weird results into errors. And maybe those errors hint to np.in1d and if numpy would get it, some dedicated subarray matching function. -- Sebastian > b) for an array with shape (n1, n2, n3, ...), the left argument is > treated as an array of shape (n2, n3, ...), and we check if that > subarray (as a whole) appears anywhere in the array. Or in other > words, 'A in B' is true iff there is some i such that > np.array_equals(B[i], A). > > Question 1: are there any other sensible options that aren't on this list? > > Question 2: if not, then which should we choose? (Or we could choose > both, I suppose, depending on what the left argument looks like.) > > Between these two options, I like (a) and don't like (b). The > pretending-to-be-a-list-of-lists special case behaviour for > multidimensional arrays is already weird and confusing, and besides, > I'd expect equality comparison on arrays to use ==, not array_equals. > So (b) feels pretty inconsistent with other numpy conventions to me. 
> > -n > > > I have opened an issue for it: > > https://github.com/numpy/numpy/issues/3016#issuecomment-14045545 > > > > > > Regards, > > > > Sebastian > > > > In [1]: a = np.array([0, 2]) > > > > In [2]: b = np.arange(10).reshape(5,2) > > > > In [3]: b > > Out[3]: > > array([[0, 1], > > [2, 3], > > [4, 5], > > [6, 7], > > [8, 9]]) > > > > In [4]: a in b > > Out[4]: True > > > > In [5]: (b == a).any() > > Out[5]: True > > > > In [6]: (b == a).all(0).any() # the 0 could be multiple axes > > Out[6]: False > > > > In [7]: a_2d = a[None,:] > > > > In [8]: a_2d in b # broadcast dimension means "any" -> True > > Out[8]: True > > > > In [9]: [0, 1] in b[:,:1] # should not work (or be False, not True) > > Out[9]: True > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From sebastian at sipsolutions.net Tue Feb 26 05:43:45 2013 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Tue, 26 Feb 2013 11:43:45 +0100 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: <1361845217.4602.6.camel@sebastian-laptop> <1361872683.4602.12.camel@sebastian-laptop> Message-ID: <1361875425.4602.36.camel@sebastian-laptop> On Tue, 2013-02-26 at 11:16 +0100, Todd wrote: > On Tue, Feb 26, 2013 at 10:58 AM, Sebastian Berg > wrote: > On Mon, 2013-02-25 at 22:04 -0500, josef.pktd at gmail.com wrote: > > On Mon, Feb 25, 2013 at 9:58 PM, > wrote: > > > On Mon, Feb 25, 2013 at 9:20 PM, Sebastian Berg > > > wrote: > > >> On Mon, 2013-02-25 at 10:50 -0500, Skipper Seabold wrote: > > >>> On Mon, Feb 25, 2013 at 10:43 AM, Till Stensitzki > > > >>> wrote: > > >>> > > > >>> > First, sorry that i didnt search for an old thread, > but because i > > >>> disagree with > > >>> > conclusion i would at least address my reason: > > >>> > > > > > >> Two small things (not sure if it matters much). But first > almost all of > > >> these methods are related to the container and not the > elements. Second > > >> actually using a method arr.abs() has a tiny pitfall, > since abs would > > >> work on numpy types, but not on python types. This means > that: > > >> > > >> np.array([1, 2, 3]).max().abs() > > >> > > >> works, but > > >> > > >> np.array([1, 2, 3], dtype=object).max().abs() > > >> > > >> breaks. Python has a safe name for abs already... > > > > > >>>> (np.array([1, 2, 3], dtype=object)).max() > > > 3 > > >>>> (np.array([1, 2, 3], dtype=object)).__abs__().max() > > > 3 > > >>>> (np.array([1, 2, '3'], dtype=object)).__abs__() > > > Traceback (most recent call last): > > > File "", line 1, in > > > TypeError: bad operand type for abs(): 'str' > > > > > >>>> map(abs, [1, 2, 3]) > > > [1, 2, 3] > > >>>> map(abs, [1, 2, '3']) > > > Traceback (most recent call last): > > > File "", line 1, in > > > TypeError: bad operand type for abs(): 'str' > > > > or maybe more useful > > > > >>> from decimal import Decimal > > >>> d = [Decimal(str(k)) for k in np.linspace(-1, 1, 5)] > > >>> map(abs, d) > > [Decimal('1.0'), Decimal('0.5'), Decimal('0.0'), > Decimal('0.5'), Decimal('1.0')] > > > > >>> np.asarray(d).__abs__() > > array([1.0, 0.5, 0.0, 0.5, 1.0], dtype=object) > > >>> np.asarray(d).__abs__()[0] > > Decimal('1.0') > > > > Josef > > > > > > > > I don't see a difference. 
> > > > > > (I don't expect to use max abs on anything else than > numbers.) > > > > > > The difference is about scalars only. And of course __abs__ is > fine, but > if numpy adds an abs method, its scalars would logically have > it too. > But then you diverge from python scalars. That has exactly the > downside > that you may write code that suddenly stops working for python > scalars > without noticing. > > I turned around the abs and max order here, so that the abs > works on the > scalar, not useful but just as an example. > > > But doesn't this also apply to many existing methods? > I do not think that it does, or at a different quality. Almost all of those methods either logically work on the container not the array. I.e. reshape, etc. or are the most common reductions like mean/max/min (which are also only sensible for containers). Now those also work on numpy scalars but you should rarely use them on scalars. The only example I can see is round (there is no special method for that before python 3, so you could argue that python did not provide a canonical way for numpy). Now this is not a huge argument, obviously scalars can have a lot of specialized methods (like decimals for example), but in the case of abs, python provides a default way of doing it which always works, and it makes me tend to say that it is better to use that (even if I sometimes write .abs() too...). > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion From njs at pobox.com Tue Feb 26 05:44:53 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 26 Feb 2013 10:44:53 +0000 Subject: [Numpy-discussion] What should np.ndarray.__contains__ do In-Reply-To: <1361874089.4602.18.camel@sebastian-laptop> References: <1361805016.32413.24.camel@sebastian-laptop> <1361874089.4602.18.camel@sebastian-laptop> Message-ID: On Tue, Feb 26, 2013 at 10:21 AM, Sebastian Berg wrote: > On Mon, 2013-02-25 at 16:33 +0000, Nathaniel Smith wrote: >> On Mon, Feb 25, 2013 at 3:10 PM, Sebastian Berg >> wrote: >> > Hello all, >> > >> > currently the `__contains__` method or the `in` operator on arrays, does >> > not return what the user would expect when in the operation `a in b` the >> > `a` is not a single element (see "In [3]-[4]" below). >> >> True, I did not expect that! >> > > >> The two approaches that I can see, and which generalize the behaviour >> of simple Python lists in natural ways, are: >> >> a) the left argument is coerced to a scalar of the appropriate type, >> then we check if that value appears anywhere in the array (basically >> raveling the right argument). >> > > How did I misread that? I guess you mean element and never subarray > matching. Actually I am starting to think that is best. Subarray > matching may be useful, but would probably be better off inside its own > function. > That also might be best with object arrays, since it is difficult to > know if the user means a tuple as an element or a two element subarray, > unless you say "input is array-like", which is possible (or more > sensible) for a function. > > That would mean just make the use cases that current give weird results > into errors. And maybe those errors hint to np.in1d and if numpy would > get it, some dedicated subarray matching function. Yeah, I don't have a strong opinion on whether or not sub-array matching should work, but personally I lean towards "not". 
There's precedent both ways: In [2]: "bc" in "abcd" Out[2]: True In [3]: ["b", "c"] in ["a", "b", "c", "d"] Out[3]: False But arrays are much more like lists than they are like strings. And it's not clear to what extent anyone even wants this kind of sub-array matching. (I can't think of any use cases, really. I can for a version that returns all the match locations, or a similarity map, like cv.MatchTemplate, but not for this itself...) And there's a lot of ambiguity about which axes are matched with which axes. So maybe subarray matching is better off in its own function that can have some extra arguments and more flexibility in its return value and so forth. -n From robert.kern at gmail.com Tue Feb 26 05:50:20 2013 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 26 Feb 2013 10:50:20 +0000 Subject: [Numpy-discussion] Adding .abs() method to the array object In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 12:11 AM, Charles R Harris wrote: > > On Sat, Feb 23, 2013 at 1:33 PM, Robert Kern wrote: >> >> On Sat, Feb 23, 2013 at 7:25 PM, Nathaniel Smith wrote: >> > On Sat, Feb 23, 2013 at 3:38 PM, Till Stensitzki >> > wrote: >> >> Hello, >> >> i know that the array object is already crowded, but i would like >> >> to see the abs method added, especially doing work on the console. >> >> Considering that many much less used functions are also implemented >> >> as a method, i don't think adding one more would be problematic. >> > >> > My gut feeling is that we have too many methods on ndarray, not too >> > few, but in any case, can you elaborate? What's the rationale for why >> > np.abs(a) is so much harder than a.abs(), and why this function and >> > not other unary functions? >> >> Or even abs(a). > > > Well, that just calls a method: > > In [1]: ones(3).__abs__() > Out[1]: array([ 1., 1., 1.]) > > Which shows the advantage of methods, they provide universal function hooks. I'm not sure what point you are trying to make. It does not appear to be relevant to adding an ndarray.abs() method. -- Robert Kern From ben.root at ou.edu Tue Feb 26 09:39:48 2013 From: ben.root at ou.edu (Benjamin Root) Date: Tue, 26 Feb 2013 09:39:48 -0500 Subject: [Numpy-discussion] drawing the line (was: Adding .abs() method to the array object) In-Reply-To: <512C0E83.7020906@gmail.com> References: <512C0E83.7020906@gmail.com> Message-ID: On Mon, Feb 25, 2013 at 8:23 PM, Alan G Isaac wrote: > I'm hoping this discussion will return to the drawing the line question. > > http://stackoverflow.com/questions/8108688/in-python-when-should-i-use-a-function-instead-of-a-method > > Alan Isaac > Proposed line: Reduction methods only. Discuss... Ben Root -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Feb 26 10:03:53 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Feb 2013 10:03:53 -0500 Subject: [Numpy-discussion] drawing the line (was: Adding .abs() method to the array object) In-Reply-To: References: <512C0E83.7020906@gmail.com> Message-ID: On Tue, Feb 26, 2013 at 9:39 AM, Benjamin Root wrote: > > > On Mon, Feb 25, 2013 at 8:23 PM, Alan G Isaac wrote: >> >> I'm hoping this discussion will return to the drawing the line question. >> >> http://stackoverflow.com/questions/8108688/in-python-when-should-i-use-a-function-instead-of-a-method >> >> Alan Isaac > > > Proposed line: > > Reduction methods only. > > Discuss... arr.dot ? the 99 most common functions for which chaining looks nice. 
Josef > > > Ben Root > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From e.antero.tammi at gmail.com Tue Feb 26 12:33:18 2013 From: e.antero.tammi at gmail.com (eat) Date: Tue, 26 Feb 2013 19:33:18 +0200 Subject: [Numpy-discussion] drawing the line (was: Adding .abs() method to the array object) In-Reply-To: References: <512C0E83.7020906@gmail.com> Message-ID: Huh, On Tue, Feb 26, 2013 at 5:03 PM, wrote: > On Tue, Feb 26, 2013 at 9:39 AM, Benjamin Root wrote: > > > > > > On Mon, Feb 25, 2013 at 8:23 PM, Alan G Isaac > wrote: > >> > >> I'm hoping this discussion will return to the drawing the line question. > >> > >> > http://stackoverflow.com/questions/8108688/in-python-when-should-i-use-a-function-instead-of-a-method > >> > >> Alan Isaac > > > > > > Proposed line: > > > > Reduction methods only. > > > > Discuss... > > arr.dot ? > > the 99 most common functions for which chaining looks nice. > Care to elaborate more for us less uninitiated? Regards, -eat > > Josef > > > > > > > Ben Root > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Feb 26 13:11:01 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Feb 2013 13:11:01 -0500 Subject: [Numpy-discussion] drawing the line (was: Adding .abs() method to the array object) In-Reply-To: References: <512C0E83.7020906@gmail.com> Message-ID: On Tue, Feb 26, 2013 at 12:33 PM, eat wrote: > Huh, > > On Tue, Feb 26, 2013 at 5:03 PM, wrote: >> >> On Tue, Feb 26, 2013 at 9:39 AM, Benjamin Root wrote: >> > >> > >> > On Mon, Feb 25, 2013 at 8:23 PM, Alan G Isaac >> > wrote: >> >> >> >> I'm hoping this discussion will return to the drawing the line >> >> question. >> >> >> >> >> >> http://stackoverflow.com/questions/8108688/in-python-when-should-i-use-a-function-instead-of-a-method >> >> >> >> Alan Isaac >> > >> > >> > Proposed line: >> > >> > Reduction methods only. >> > >> > Discuss... >> >> arr.dot ? >> >> the 99 most common functions for which chaining looks nice. > > Care to elaborate more for us less uninitiated? partially a joke. I don't see any good and simple rule to restrict the number of methods. dot was added as a method a few numpy versions ago, because it is painful to write nested or sequential dot products. Alan was in favor of the dot method and of matrix algebra because it's much easier on new users who come from a background that has a dot product as "*" or similar operator. 
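[A small illustration of the chaining being referred to; the matrices here are just made-up examples.]

    import numpy as np

    a = np.random.rand(3, 4)
    b = np.random.rand(4, 5)
    c = np.random.rand(5, 2)

    r1 = np.dot(np.dot(a, b), c)   # nested calls read inside-out ...
    r2 = a.dot(b).dot(c)           # ... the method form chains left to right

    assert np.allclose(r1, r2)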
As Skipper, I think the number of methods is not really huge (especially not numerical operations) Josef > > Regards, > -eat >> >> >> Josef >> >> > >> > >> > Ben Root >> > >> > _______________________________________________ >> > NumPy-Discussion mailing list >> > NumPy-Discussion at scipy.org >> > http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From josef.pktd at gmail.com Tue Feb 26 13:29:45 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Feb 2013 13:29:45 -0500 Subject: [Numpy-discussion] drawing the line (was: Adding .abs() method to the array object) In-Reply-To: References: <512C0E83.7020906@gmail.com> Message-ID: On Tue, Feb 26, 2013 at 1:11 PM, wrote: > On Tue, Feb 26, 2013 at 12:33 PM, eat wrote: >> Huh, >> >> On Tue, Feb 26, 2013 at 5:03 PM, wrote: >>> >>> On Tue, Feb 26, 2013 at 9:39 AM, Benjamin Root wrote: >>> > >>> > >>> > On Mon, Feb 25, 2013 at 8:23 PM, Alan G Isaac >>> > wrote: >>> >> >>> >> I'm hoping this discussion will return to the drawing the line >>> >> question. >>> >> >>> >> >>> >> http://stackoverflow.com/questions/8108688/in-python-when-should-i-use-a-function-instead-of-a-method >>> >> >>> >> Alan Isaac >>> > >>> > >>> > Proposed line: >>> > >>> > Reduction methods only. >>> > >>> > Discuss... >>> >>> arr.dot ? >>> >>> the 99 most common functions for which chaining looks nice. >> >> Care to elaborate more for us less uninitiated? > > partially a joke. I don't see any good and simple rule to restrict the > number of methods. 99 was just the first number that came to mind that sounded exaggerated enough. (I like methods and will use __abs__ from now on, but I won't argue much in any direction of changes.) Josef http://en.wikipedia.org/wiki/99_Luftballons > > dot was added as a method a few numpy versions ago, because it is > painful to write nested or sequential dot products. Alan was in favor > of the dot method and of matrix algebra because it's much easier on > new users who come from a background that has a dot product as "*" or > similar operator. > > As Skipper, I think the number of methods is not really huge > (especially not numerical operations) > > Josef > >> >> Regards, >> -eat >>> >>> >>> Josef >>> >>> > >>> > >>> > Ben Root >>> > >>> > _______________________________________________ >>> > NumPy-Discussion mailing list >>> > NumPy-Discussion at scipy.org >>> > http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> > >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> From charlesr.harris at gmail.com Tue Feb 26 13:47:23 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 26 Feb 2013 11:47:23 -0700 Subject: [Numpy-discussion] 1.7.1 Message-ID: When should we put out 1.7.1? Discuss ;) Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alan.isaac at gmail.com Tue Feb 26 15:38:05 2013 From: alan.isaac at gmail.com (Alan G Isaac) Date: Tue, 26 Feb 2013 15:38:05 -0500 Subject: [Numpy-discussion] drawing the line In-Reply-To: References: <512C0E83.7020906@gmail.com> Message-ID: <512D1D2D.1070808@gmail.com> On 2/26/2013 1:11 PM, josef.pktd at gmail.com wrote: > Alan was in favor of the dot method I still really like this, and it probably violates any simple rule for drawing the line. Nevertheless it would be nice to have some principle(s) other than the squeaky wheel principle for thinking about such proposals. Cheers, Alan From ralf.gommers at gmail.com Tue Feb 26 15:41:59 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 26 Feb 2013 21:41:59 +0100 Subject: [Numpy-discussion] [EXTERNAL] swig interface file (numpy.i) warning In-Reply-To: <54B7C755-4C5A-4FF7-8F39-CA657DEEFEDD@sandia.gov> References: <7F19AD02-EE44-4BF2-AC13-B4EF8A04117E@sandia.gov> <54B7C755-4C5A-4FF7-8F39-CA657DEEFEDD@sandia.gov> Message-ID: On Mon, Feb 25, 2013 at 6:12 AM, Bill Spotz wrote: > I wanted to take a stab at updating numpy.i to not use deprecated NumPy > C/API code. Nothing in the git logs indicates this has already been done. > I added > > #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION > > to numpy.i right before it includes numpy/arrayobject.h. The built-in > tests for numpy.i should be sufficient to ferret out all of the deprecated > calls. When I try to make the tests, the compiler tells me that it has > included > > npy_deprecated_api.h > > and I get a compiler error when it includes old_defines.h, telling me that > it is deprecated. Shouldn't my #define prevent this from happening? I'm > confused. Any guidance would be appreciated. > Hi Bill, this works as expected for me. I did the following: 1. Build 1.7.0 release in-place 2. Set PYTHONPATH to in-place build. 3. $ cd doc/swig 4. $ make test # runs all tests 5. Add define in numpy.i like you did 6. $ make test # now fails because old_defines.h wasn't included First failure is: Array_wrap.cxx:5506:20: error: ?PyArrayObject? has no member named ?data? Array_wrap.cxx: In function ?PyObject* _wrap_new_Array2(PyObject*, PyObject*)?: Array_wrap.cxx:5635:55: error: cannot convert ?PyObject* {aka _object*}? to ?const PyArrayObject* {aka const tagPyArrayObject*}? for argument ?1? to ?int PyArray_TYPE(const PyArrayObject*)? error: command 'gcc' failed with exit status 1 If you're about to update numpy.i, maybe you could take along the divergence between the numpy and scipy versions of it? A ticket was just opened for that: http://projects.scipy.org/scipy/ticket/1825 I don't know how much work that would be, or why we even have two versions. Cheers, Ralf > > Thanks, > Bill > > On Oct 9, 2012, at 9:18 PM, Tom Krauss wrote: > > > This code reproduces the error - I think it is small enough for email. > (large) numpy.i not included, let me know if you want that too. Makefile > will need to be tailored to your environment. > > If it's more convenient, or you have trouble reproducing, I can create a > branch on github - let me know. > > > > On Tue, Oct 9, 2012 at 1:47 PM, Tom Krauss > wrote: > > I can't attach the exact code but would be happy to provide something > simple that has the same issue. I'll post something here when I can get to > it. > > - Tom > > > > > > On Tue, Oct 9, 2012 at 10:52 AM, Bill Spotz wrote: > > Tom, Charles, > > > > If you discuss this further, be sure to CC me. 
> > > > -Bill Spotz > > > > On Oct 9, 2012, at 8:50 AM, Charles R Harris wrote: > > > >> Hi Tom, > >> > >> On Tue, Oct 9, 2012 at 8:30 AM, Tom Krauss > wrote: > >> Hi, > >> > >> I've been happy to use numpy.i for generating SWIG interfaces to C++. > >> > >> For a while, I've noticed this warning while compiling: > >> > /Users/tkrauss/projects/dev_env/lib/python2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/npy_deprecated_api.h:11:2: > warning: #warning "Using deprecated NumPy API, disable it by #defining > NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" > >> > >> and today tried to get rid of the warning. > >> > >> So, in numpy.i, I followed the warning's advice. I added the # def > here: > >> > >> %{ > >> #ifndef SWIG_FILE_WITH_INIT > >> # define NO_IMPORT_ARRAY > >> #endif > >> #include "stdio.h" > >> #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION > >> #include > >> %} > >> > >> SWIG was happy, but when compiling the C++ wrapper, there were many > warnings followed by many errors. The warnings were for redefinition of > NPY_MIN_BYTE and similar. The errors were for all kinds of stuff, excerpt > here: > >> native_wrap.cpp:3632: error: ?PyArray_NOTYPE? was not declared in this > scope > >> native_wrap.cpp:3633: error: cannot convert ?PyObject*? to ?const > PyArrayObject*? for argument ?1? to ?int PyArray_TYPE(const PyArrayObject*)? > >> native_wrap.cpp: At global scope: > >> native_wrap.cpp:3877: error: ?intp? has not been declared > >> native_wrap.cpp: In function ?int require_fortran(PyArrayObject*)?: > >> native_wrap.cpp:3929: error: ?struct tagPyArrayObject? has no member > named ?nd? > >> native_wrap.cpp:3933: error: ?struct tagPyArrayObject? has no member > named ?flags? > >> native_wrap.cpp:3933: error: ?FARRAY? was not declared in this scope > >> native_wrap.cpp:20411: error: ?struct tagPyArrayObject? has no member > named ?data? > >> > >> It looks like there is a new C API for numpy, and the version of > numpy.i that I have doesn't use it. > >> > >> Is there a new version of numpy.i available (or in development) that > works with the new API? Short term it will just get rid of a warning but I > am interested in a good long term solution in case I need to upgrade numpy. > >> > >> > >> In the long term we would like to hide the ndarray internals, > essentially making them private. There are still some incomplete areas, > f2py and, apparently, numpy.i. Your feedback here is quite helpful and if > you have some time we can try to get this straightened out. Could you > attach the code you are trying to interface? If you have a github account > you could also set up a branch where we could work on this. > >> > >> Chuck > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at scipy.org > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > ** Bill Spotz ** > > ** Sandia National Laboratories Voice: (505)845-0170 ** > > ** P.O. 
Box 5800 Fax: (505)284-0154 ** > > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > ** Bill Spotz ** > ** Sandia National Laboratories Voice: (505)845-0170 ** > ** P.O. Box 5800 Fax: (505)284-0154 ** > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > > > > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Feb 26 16:03:25 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 26 Feb 2013 22:03:25 +0100 Subject: [Numpy-discussion] 1.7.1 In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 7:47 PM, Charles R Harris wrote: > When should we put out 1.7.1? Discuss ;) > When we have X times more fixes in maintenance/1.7.x than the one commit with a one-liner fix that we have now. Where X is >= 5 at least, unless there's a very high prio fix that needs releasing asap? Having at least 2 months between bugfix releases unless something very urgent comes up would also make sense to me. I think we do need to be diligent in backporting fixes quickly after they're merged into master, and not leaving that till right before the release candidate is scheduled. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From wfspotz at sandia.gov Tue Feb 26 16:04:43 2013 From: wfspotz at sandia.gov (Bill Spotz) Date: Tue, 26 Feb 2013 14:04:43 -0700 Subject: [Numpy-discussion] [EXTERNAL] swig interface file (numpy.i) warning In-Reply-To: References: <7F19AD02-EE44-4BF2-AC13-B4EF8A04117E@sandia.gov> <54B7C755-4C5A-4FF7-8F39-CA657DEEFEDD@sandia.gov> Message-ID: So the difference is that I was wanting to make changes in the git repository that is at version 1.8. I would expect it to still work, though. I can take a look at the scipy issue. -Bill On Feb 26, 2013, at 1:41 PM, Ralf Gommers wrote: > On Mon, Feb 25, 2013 at 6:12 AM, Bill Spotz wrote: > I wanted to take a stab at updating numpy.i to not use deprecated NumPy C/API code. Nothing in the git logs indicates this has already been done. I added > > #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION > > to numpy.i right before it includes numpy/arrayobject.h. The built-in tests for numpy.i should be sufficient to ferret out all of the deprecated calls. When I try to make the tests, the compiler tells me that it has included > > npy_deprecated_api.h > > and I get a compiler error when it includes old_defines.h, telling me that it is deprecated. Shouldn't my #define prevent this from happening? I'm confused. Any guidance would be appreciated. > > Hi Bill, this works as expected for me. I did the following: > > 1. Build 1.7.0 release in-place > 2. Set PYTHONPATH to in-place build. > 3. $ cd doc/swig > 4. $ make test # runs all tests > 5. Add define in numpy.i like you did > 6. $ make test # now fails because old_defines.h wasn't included > > First failure is: > > Array_wrap.cxx:5506:20: error: ?PyArrayObject? has no member named ?data? 
> Array_wrap.cxx: In function ?PyObject* _wrap_new_Array2(PyObject*, PyObject*)?: > Array_wrap.cxx:5635:55: error: cannot convert ?PyObject* {aka _object*}? to ?const PyArrayObject* {aka const tagPyArrayObject*}? for argument ?1? to ?int PyArray_TYPE(const PyArrayObject*)? > error: command 'gcc' failed with exit status 1 > > > If you're about to update numpy.i, maybe you could take along the divergence between the numpy and scipy versions of it? A ticket was just opened for that: http://projects.scipy.org/scipy/ticket/1825 > I don't know how much work that would be, or why we even have two versions. > > Cheers, > Ralf > > > > Thanks, > Bill > > On Oct 9, 2012, at 9:18 PM, Tom Krauss wrote: > > > This code reproduces the error - I think it is small enough for email. (large) numpy.i not included, let me know if you want that too. Makefile will need to be tailored to your environment. > > If it's more convenient, or you have trouble reproducing, I can create a branch on github - let me know. > > > > On Tue, Oct 9, 2012 at 1:47 PM, Tom Krauss wrote: > > I can't attach the exact code but would be happy to provide something simple that has the same issue. I'll post something here when I can get to it. > > - Tom > > > > > > On Tue, Oct 9, 2012 at 10:52 AM, Bill Spotz wrote: > > Tom, Charles, > > > > If you discuss this further, be sure to CC me. > > > > -Bill Spotz > > > > On Oct 9, 2012, at 8:50 AM, Charles R Harris wrote: > > > >> Hi Tom, > >> > >> On Tue, Oct 9, 2012 at 8:30 AM, Tom Krauss wrote: > >> Hi, > >> > >> I've been happy to use numpy.i for generating SWIG interfaces to C++. > >> > >> For a while, I've noticed this warning while compiling: > >> /Users/tkrauss/projects/dev_env/lib/python2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/npy_deprecated_api.h:11:2: warning: #warning "Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" > >> > >> and today tried to get rid of the warning. > >> > >> So, in numpy.i, I followed the warning's advice. I added the # def here: > >> > >> %{ > >> #ifndef SWIG_FILE_WITH_INIT > >> # define NO_IMPORT_ARRAY > >> #endif > >> #include "stdio.h" > >> #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION > >> #include > >> %} > >> > >> SWIG was happy, but when compiling the C++ wrapper, there were many warnings followed by many errors. The warnings were for redefinition of NPY_MIN_BYTE and similar. The errors were for all kinds of stuff, excerpt here: > >> native_wrap.cpp:3632: error: ?PyArray_NOTYPE? was not declared in this scope > >> native_wrap.cpp:3633: error: cannot convert ?PyObject*? to ?const PyArrayObject*? for argument ?1? to ?int PyArray_TYPE(const PyArrayObject*)? > >> native_wrap.cpp: At global scope: > >> native_wrap.cpp:3877: error: ?intp? has not been declared > >> native_wrap.cpp: In function ?int require_fortran(PyArrayObject*)?: > >> native_wrap.cpp:3929: error: ?struct tagPyArrayObject? has no member named ?nd? > >> native_wrap.cpp:3933: error: ?struct tagPyArrayObject? has no member named ?flags? > >> native_wrap.cpp:3933: error: ?FARRAY? was not declared in this scope > >> native_wrap.cpp:20411: error: ?struct tagPyArrayObject? has no member named ?data? > >> > >> It looks like there is a new C API for numpy, and the version of numpy.i that I have doesn't use it. > >> > >> Is there a new version of numpy.i available (or in development) that works with the new API? 
Short term it will just get rid of a warning but I am interested in a good long term solution in case I need to upgrade numpy. > >> > >> > >> In the long term we would like to hide the ndarray internals, essentially making them private. There are still some incomplete areas, f2py and, apparently, numpy.i. Your feedback here is quite helpful and if you have some time we can try to get this straightened out. Could you attach the code you are trying to interface? If you have a github account you could also set up a branch where we could work on this. > >> > >> Chuck > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at scipy.org > >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > ** Bill Spotz ** > > ** Sandia National Laboratories Voice: (505)845-0170 ** > > ** P.O. Box 5800 Fax: (505)284-0154 ** > > ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** From charlesr.harris at gmail.com Tue Feb 26 16:21:09 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 26 Feb 2013 14:21:09 -0700 Subject: [Numpy-discussion] 1.7.1 In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 2:03 PM, Ralf Gommers wrote: > > > > On Tue, Feb 26, 2013 at 7:47 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> When should we put out 1.7.1? Discuss ;) >> > > When we have X times more fixes in maintenance/1.7.x than the one commit > with a one-liner fix that we have now. Where X is >= 5 at least, unless > there's a very high prio fix that needs releasing asap? > > Having at least 2 months between bugfix releases unless something very > urgent comes up would also make sense to me. > > I think we do need to be diligent in backporting fixes quickly after > they're merged into master, and not leaving that till right before the > release candidate is scheduled. > > Ralph, the backports are PR's marked with the backport tag and there are more than one. It is up to Ondrej to decide whether to include them or not. Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue Feb 26 16:46:29 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 26 Feb 2013 21:46:29 +0000 Subject: [Numpy-discussion] 1.7.1 In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 9:21 PM, Charles R Harris wrote: > > > On Tue, Feb 26, 2013 at 2:03 PM, Ralf Gommers > wrote: >> >> >> >> >> On Tue, Feb 26, 2013 at 7:47 PM, Charles R Harris >> wrote: >>> >>> When should we put out 1.7.1? Discuss ;) >> >> >> When we have X times more fixes in maintenance/1.7.x than the one commit >> with a one-liner fix that we have now. Where X is >= 5 at least, unless >> there's a very high prio fix that needs releasing asap? There is, actually; we (probably I) broke np.diagonal()/ndarray.diagonal() so any array that has diagonal() called on it gets pinned in memory forever. That's why the scikits-learn folk are grumpy. >> Having at least 2 months between bugfix releases unless something very >> urgent comes up would also make sense to me. I'm not sure why -- bug-fixes don't age like wine or anything. Obviously there's a trade-off around how much effort we want to spend on managing releases versus other things, but I don't see why there'd be anything *wrong* with putting out a tiny point-release whenever we find a real bug in a stable series. No-one has to upgrade... 
>> I think we do need to be diligent in backporting fixes quickly after >> they're merged into master, and not leaving that till right before the >> release candidate is scheduled. > > Ralph, the backports are PR's marked with the backport tag and there are > more than one. It is up to Ondrej to decide whether to include them or not. Oh argh, we should probably document some of this stuff, I just merged a bunch of them myself (which all looked fine of course). Mea culpa. For the moment: Ondrej, heads up that I did this! For the future: I definitely see the benefit of having one person keeping track of what goes in and what doesn't as we get into the late stage of the release cycle, but in between, maybe it makes sense for us all to work together on keeping the basic backports branch up to date and taking some of the load off the release manager? (I'll try not to continue implementing this plan unless we agree on it, though...) -n From charlesr.harris at gmail.com Tue Feb 26 16:54:46 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 26 Feb 2013 14:54:46 -0700 Subject: [Numpy-discussion] 1.7.1 In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 2:46 PM, Nathaniel Smith wrote: > On Tue, Feb 26, 2013 at 9:21 PM, Charles R Harris > wrote: > > > > > > On Tue, Feb 26, 2013 at 2:03 PM, Ralf Gommers > > wrote: > >> > >> > >> > >> > >> On Tue, Feb 26, 2013 at 7:47 PM, Charles R Harris > >> wrote: > >>> > >>> When should we put out 1.7.1? Discuss ;) > >> > >> > >> When we have X times more fixes in maintenance/1.7.x than the one commit > >> with a one-liner fix that we have now. Where X is >= 5 at least, unless > >> there's a very high prio fix that needs releasing asap? > > There is, actually; we (probably I) broke > np.diagonal()/ndarray.diagonal() so any array that has diagonal() > called on it gets pinned in memory forever. That's why the > scikits-learn folk are grumpy. > > >> Having at least 2 months between bugfix releases unless something very > >> urgent comes up would also make sense to me. > > I'm not sure why -- bug-fixes don't age like wine or anything. > Obviously there's a trade-off around how much effort we want to spend > on managing releases versus other things, but I don't see why there'd > be anything *wrong* with putting out a tiny point-release whenever we > find a real bug in a stable series. No-one has to upgrade... > > >> I think we do need to be diligent in backporting fixes quickly after > >> they're merged into master, and not leaving that till right before the > >> release candidate is scheduled. > > > > Ralph, the backports are PR's marked with the backport tag and there are > > more than one. It is up to Ondrej to decide whether to include them or > not. > > Oh argh, we should probably document some of this stuff, I just merged > a bunch of them myself (which all looked fine of course). Mea culpa. > > For the moment: Ondrej, heads up that I did this! > That's OK, I think you did a fine job. > > For the future: I definitely see the benefit of having one person > keeping track of what goes in and what doesn't as we get into the late > stage of the release cycle, but in between, maybe it makes sense for > us all to work together on keeping the basic backports branch up to > date and taking some of the load off the release manager? (I'll try > not to continue implementing this plan unless we agree on it, > though...) > > You seem to have a pretty good eye on things. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.j.a.cock at googlemail.com Tue Feb 26 17:17:12 2013 From: p.j.a.cock at googlemail.com (Peter Cock) Date: Tue, 26 Feb 2013 22:17:12 +0000 Subject: [Numpy-discussion] numpy.scipy.org giving 404 error Message-ID: Hello all, http://numpy.scipy.org is giving a GitHub 404 error. As this used to be a widely used URL for the project, and likely appears in many printed references, could it be fixed to point to or redirect to the (relatively new) http://www.numpy.org site please? Thanks, Peter From ralf.gommers at gmail.com Tue Feb 26 17:34:51 2013 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 26 Feb 2013 23:34:51 +0100 Subject: [Numpy-discussion] 1.7.1 In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 10:54 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Tue, Feb 26, 2013 at 2:46 PM, Nathaniel Smith wrote: > >> On Tue, Feb 26, 2013 at 9:21 PM, Charles R Harris >> wrote: >> > >> > >> > On Tue, Feb 26, 2013 at 2:03 PM, Ralf Gommers >> > wrote: >> >> >> >> >> >> >> >> >> >> On Tue, Feb 26, 2013 at 7:47 PM, Charles R Harris >> >> wrote: >> >>> >> >>> When should we put out 1.7.1? Discuss ;) >> >> >> >> >> >> When we have X times more fixes in maintenance/1.7.x than the one >> commit >> >> with a one-liner fix that we have now. Where X is >= 5 at least, unless >> >> there's a very high prio fix that needs releasing asap? >> >> There is, actually; we (probably I) broke >> np.diagonal()/ndarray.diagonal() so any array that has diagonal() >> called on it gets pinned in memory forever. That's why the >> scikits-learn folk are grumpy. >> >> >> Having at least 2 months between bugfix releases unless something very >> >> urgent comes up would also make sense to me. >> >> I'm not sure why -- bug-fixes don't age like wine or anything. >> Obviously there's a trade-off around how much effort we want to spend >> on managing releases versus other things, but I don't see why there'd >> be anything *wrong* with putting out a tiny point-release whenever we >> find a real bug in a stable series. No-one has to upgrade... >> > Nothing wrong per se, just a waste of developer and packager resources if the frequency is too high. Of course if there's 10 bug fixes ready sooner after the previous release, do a bugfix release sooner. But then we should really look in the mirror and ask why there's so many fixes so soon.... >> >> I think we do need to be diligent in backporting fixes quickly after >> >> they're merged into master, and not leaving that till right before the >> >> release candidate is scheduled. >> > >> > Ralph, the backports are PR's marked with the backport tag and there are >> > more than one. It is up to Ondrej to decide whether to include them or >> not. >> >> Oh argh, we should probably document some of this stuff, I just merged >> a bunch of them myself (which all looked fine of course). Mea culpa. >> >> For the moment: Ondrej, heads up that I did this! >> > > That's OK, I think you did a fine job. > > >> >> For the future: I definitely see the benefit of having one person >> keeping track of what goes in and what doesn't as we get into the late >> stage of the release cycle, but in between, maybe it makes sense for >> us all to work together on keeping the basic backports branch up to >> date and taking some of the load off the release manager. > > +1 > (I'll try >> not to continue implementing this plan unless we agree on it, >> though...) >> >> > > You seem to have a pretty good eye on things. 
> +1 Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej.certik at gmail.com Tue Feb 26 19:43:10 2013 From: ondrej.certik at gmail.com (=?UTF-8?B?T25kxZllaiDEjGVydMOtaw==?=) Date: Tue, 26 Feb 2013 16:43:10 -0800 Subject: [Numpy-discussion] 1.7.1 In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 1:21 PM, Charles R Harris wrote: > > > On Tue, Feb 26, 2013 at 2:03 PM, Ralf Gommers > wrote: >> >> >> >> >> On Tue, Feb 26, 2013 at 7:47 PM, Charles R Harris >> wrote: >>> >>> When should we put out 1.7.1? Discuss ;) >> >> >> When we have X times more fixes in maintenance/1.7.x than the one commit >> with a one-liner fix that we have now. Where X is >= 5 at least, unless >> there's a very high prio fix that needs releasing asap? >> >> Having at least 2 months between bugfix releases unless something very >> urgent comes up would also make sense to me. >> >> I think we do need to be diligent in backporting fixes quickly after >> they're merged into master, and not leaving that till right before the >> release candidate is scheduled. >> > > Ralph, the backports are PR's marked with the backport tag and there are > more than one. It is up to Ondrej to decide whether to include them or not. I can't see the backport tag. How can I find it at github? Otherwise I think we should release sooner, I think this bug is quite annoying for scikit-learn: https://github.com/scikit-learn/scikit-learn/issues/1715 Ondrej From charlesr.harris at gmail.com Tue Feb 26 20:31:37 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 26 Feb 2013 18:31:37 -0700 Subject: [Numpy-discussion] 1.7.1 In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 5:43 PM, Ond?ej ?ert?k wrote: > On Tue, Feb 26, 2013 at 1:21 PM, Charles R Harris > wrote: > > > > > > On Tue, Feb 26, 2013 at 2:03 PM, Ralf Gommers > > wrote: > >> > >> > >> > >> > >> On Tue, Feb 26, 2013 at 7:47 PM, Charles R Harris > >> wrote: > >>> > >>> When should we put out 1.7.1? Discuss ;) > >> > >> > >> When we have X times more fixes in maintenance/1.7.x than the one commit > >> with a one-liner fix that we have now. Where X is >= 5 at least, unless > >> there's a very high prio fix that needs releasing asap? > >> > >> Having at least 2 months between bugfix releases unless something very > >> urgent comes up would also make sense to me. > >> > >> I think we do need to be diligent in backporting fixes quickly after > >> they're merged into master, and not leaving that till right before the > >> release candidate is scheduled. > >> > > > > Ralph, the backports are PR's marked with the backport tag and there are > > more than one. It is up to Ondrej to decide whether to include them or > not. > > I can't see the backport tag. How can I find it at github? > > Otherwise I think we should release sooner, I think this bug is quite > annoying > for scikit-learn: > > https://github.com/scikit-learn/scikit-learn/issues/1715 > > I don't know the best way, but you can go to the search box, type numpy/numpy issues, and when the results come up select the stable 1.7 backport milestone. Nathaniel committed a bunch, but it looks like there is still one to backport. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jorgesmbox-ml at yahoo.es Wed Feb 27 07:57:02 2013 From: jorgesmbox-ml at yahoo.es (Jorge Scandaliaris) Date: Wed, 27 Feb 2013 12:57:02 +0000 (UTC) Subject: [Numpy-discussion] Apply transform to many small matrices Message-ID: Hi, First of all excuse me if this is a trivial question. I have the feeling it is, but searching and looking through the docs has proven unsuccesful so far. I have an ndarray A of shape (M,2,2) representing M 2 x 2 matrices. Now I want to apply a transform T of shape (2,2) to each of matrix. The way I do this now is by iterating over all rows of A multiplying the matrices using numpy.dot(): for row in np.arange(A.shape[0]): A[row] = np.dot(A[row],T) but this seems to be slow when M is large and I have the feeling there must be a way of doing it better. Thanks, jorges From jorgesmbox-ml at yahoo.es Wed Feb 27 08:41:21 2013 From: jorgesmbox-ml at yahoo.es (Jorge Scandaliaris) Date: Wed, 27 Feb 2013 13:41:21 +0000 (UTC) Subject: [Numpy-discussion] Apply transform to many small matrices References: Message-ID: Jorge Scandaliaris yahoo.es> writes: <...> > I have an ndarray A of shape (M,2,2) representing M 2 x 2 matrices. > Now I want to apply a transform T of shape (2,2) to each of matrix. > The way I do this now is by iterating over all rows of A > multiplying the matrices using numpy.dot(): > > for row in np.arange(A.shape[0]): > A[row] = np.dot(A[row],T) > > but this seems to be slow when M is large and I have the feeling > there must be a way of doing it better. > Well, I think I getting close, but still don't understand exactly what I am doing: A = array([[[ 1, 2], [ 3, 4]], [[ 5, 6], [ 7, 8]], [[ 9, 10], [11, 12]]]) T = array([[1, 2], [3, 4]]) np.tensordot(a, T.T, axes=((2,),(1,))) gives array([[[ 7, 10], [15, 22]], [[23, 34], [31, 46]], [[39, 58], [47, 70]]]) which is what I want. The problem is that I only arrived at this result after trying many axes combinations, and the transpose in T was just intuition (The idea of using tensordot came from reading various posts in the list). Can someone help grasp tensordot, the doc is a bit cryptic to me. Thanks, Jorges From djpine at gmail.com Wed Feb 27 08:46:29 2013 From: djpine at gmail.com (David Pine) Date: Wed, 27 Feb 2013 08:46:29 -0500 Subject: [Numpy-discussion] polyfit in NumPy v1.7 Message-ID: As of NumPy v1.7, numpy.polyfit includes an option for providing weighting to data to be fit. It's a welcome addition, but the implementation seems a bit non-standard, perhaps even wrong, and I wonder if someone can enlighten me. 1. The documentation does not specify what the weighting array "w" is supposed to be. After some fooling around I figured out that it is 1/sigma, where sigma is the standard deviation uncertainty "sigma" in the y data. A better implementation, which would be consistent with how weighting is done in scipy.optimize.curve_fit, would be to specify "sigma" directly, rather than "w". This is typically what the user has at hand, this syntax is more transparent, and it would make polyfit consistent with the nonlinear fitting routine scipy.optimize.curve_fit. 2. The definition of the covariance matrix in polyfit is peculiar (or maybe wrong, I'm not sure). Towards the end of the code for polyfit, the standard covariance matrix is calculated and given the variable name "Vbase". However, instead of returning Vbase as the covariance matrix, polyfit returns the quantity Vbase * fac, where fac = resids / (len(x) - order - 2.0). 
fac scales as N, the number of data points, so the covariance matrix is about N times larger than it should be for large values of N; the increase is smaller for small N (of order 10). Perhaps the authors of polyfit want to use a more conservative error estimate, but the references given in the poyfit code make no mention of it. I think it would be much better to return the standard definition of the covariance matrix (see, for example, the book Numerical Recipes). David Pine -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Feb 27 09:03:27 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 27 Feb 2013 07:03:27 -0700 Subject: [Numpy-discussion] polyfit in NumPy v1.7 In-Reply-To: References: Message-ID: On Wed, Feb 27, 2013 at 6:46 AM, David Pine wrote: > As of NumPy v1.7, numpy.polyfit includes an option for providing weighting > to data to be fit. It's a welcome addition, but the implementation seems a > bit non-standard, perhaps even wrong, and I wonder if someone can enlighten > me. > > 1. The documentation does not specify what the weighting array "w" is > supposed to be. After some fooling around I figured out that it is > 1/sigma, where sigma is the standard deviation uncertainty "sigma" in the y > data. A better implementation, which would be consistent with how > weighting is done in scipy.optimize.curve_fit, would be to specify "sigma" > directly, rather than "w". This is typically what the user has at hand, > this syntax is more transparent, and it would make polyfit consistent with > the nonlinear fitting routine scipy.optimize.curve_fit. > Weights can be used for more things than sigma. Another common use is to set the weight to zero for bad data points. Zero is a nicer number than inf, although that would probably work for ieee numbers. Inf and 0/inf == 0 are modern innovations, so multiplication weights are somewhat traditional. Weights of zero do need to be taken into account in counting the degrees of freedom however. > > 2. The definition of the covariance matrix in polyfit is peculiar (or > maybe wrong, I'm not sure). Towards the end of the code for polyfit, the > standard covariance matrix is calculated and given the variable name > "Vbase". However, instead of returning Vbase as the covariance matrix, > polyfit returns the quantity Vbase * fac, where fac = resids / (len(x) - > order - 2.0). fac scales as N, the number of data points, so the > covariance matrix is about N times larger than it should be for large > values of N; the increase is smaller for small N (of order 10). Perhaps > the authors of polyfit want to use a more conservative error estimate, but > the references given in the poyfit code make no mention of it. I think it > would be much better to return the standard definition of the covariance > matrix (see, for example, the book Numerical Recipes). > The resids scale as N, so fac is approximately constant. The effect of more data points shows up in (A^T * A)^-1, which is probably what is called covariance, but really isn't until it is scaled by fac. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgabor79 at gmail.com Wed Feb 27 09:17:16 2013 From: kgabor79 at gmail.com (Gabor Kovacs) Date: Wed, 27 Feb 2013 14:17:16 +0000 Subject: [Numpy-discussion] flatnonzero / nonzero returned indices order Message-ID: Hello, for 1-D arrays, can one assume that the indices returned by flatnonzero is ordered? 
It looks so, but no such statement anywhere in the docs. (In general, order of indices in case of higher dimensions?) Thanks Gabor From djpine at gmail.com Wed Feb 27 09:40:40 2013 From: djpine at gmail.com (David Pine) Date: Wed, 27 Feb 2013 09:40:40 -0500 Subject: [Numpy-discussion] polyfit in NumPy v1.7 In-Reply-To: References: Message-ID: Chuck, Thanks for the quick reply. 1. I see your point about zero weights but the code in its current form doesn't take into account zero weights in counting the degrees of freedom, as you point out, so it seems to me like a moot point. More importantly, the documentation doesn't explain what the weights variable w is. In some implementations of least square fitting it is 1/sigma^2 while in this one it's 1/sigma. At the very least, the documentation should be more explicit. How does one go about changing the documentation? 2. I am sorry but I don't understand your response. The matrix Vbase in the code is already the covariance matrix, _before_ it is scaled by fac. Scaling it by fac and returning Vbase*fac as the covariance matrix is wrong, at least according to the references I know, including "Numerical Recipes", by Press et al, "Data Reduction and Error Analysis for the Physical Sciences" by Bevington, both standard works. Dave On Wed, Feb 27, 2013 at 9:03 AM, Charles R Harris wrote: > > > On Wed, Feb 27, 2013 at 6:46 AM, David Pine wrote: > >> As of NumPy v1.7, numpy.polyfit includes an option for providing >> weighting to data to be fit. It's a welcome addition, but the >> implementation seems a bit non-standard, perhaps even wrong, and I wonder >> if someone can enlighten me. >> >> 1. The documentation does not specify what the weighting array "w" is >> supposed to be. After some fooling around I figured out that it is >> 1/sigma, where sigma is the standard deviation uncertainty "sigma" in the y >> data. A better implementation, which would be consistent with how >> weighting is done in scipy.optimize.curve_fit, would be to specify "sigma" >> directly, rather than "w". This is typically what the user has at hand, >> this syntax is more transparent, and it would make polyfit consistent with >> the nonlinear fitting routine scipy.optimize.curve_fit. >> > > Weights can be used for more things than sigma. Another common use is to > set the weight to zero for bad data points. Zero is a nicer number than > inf, although that would probably work for ieee numbers. Inf and 0/inf == 0 > are modern innovations, so multiplication weights are somewhat traditional. > Weights of zero do need to be taken into account in counting the degrees of > freedom however. > > >> >> 2. The definition of the covariance matrix in polyfit is peculiar (or >> maybe wrong, I'm not sure). Towards the end of the code for polyfit, the >> standard covariance matrix is calculated and given the variable name >> "Vbase". However, instead of returning Vbase as the covariance matrix, >> polyfit returns the quantity Vbase * fac, where fac = resids / (len(x) - >> order - 2.0). fac scales as N, the number of data points, so the >> covariance matrix is about N times larger than it should be for large >> values of N; the increase is smaller for small N (of order 10). Perhaps >> the authors of polyfit want to use a more conservative error estimate, but >> the references given in the poyfit code make no mention of it. I think it >> would be much better to return the standard definition of the covariance >> matrix (see, for example, the book Numerical Recipes). 
>> > > The resids scale as N, so fac is approximately constant. The effect of > more data points shows up in (A^T * A)^-1, which is probably what is > called covariance, but really isn't until it is scaled by fac. > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaime.frio at gmail.com Wed Feb 27 10:18:59 2013 From: jaime.frio at gmail.com (=?ISO-8859-1?Q?Jaime_Fern=E1ndez_del_R=EDo?=) Date: Wed, 27 Feb 2013 07:18:59 -0800 Subject: [Numpy-discussion] Apply transform to many small matrices In-Reply-To: References: Message-ID: On Wed, Feb 27, 2013 at 5:41 AM, Jorge Scandaliaris wrote: > Jorge Scandaliaris yahoo.es> writes: > > I have an ndarray A of shape (M,2,2) representing M 2 x 2 matrices. > > Now I want to apply a transform T of shape (2,2) to each of matrix. > np.einsum makes a lot of these easier to figure out: In [7]: np.einsum('ijk, kl', A, T) Out[7]: array([[[ 7, 10], [15, 22]], [[23, 34], [31, 46]], [[39, 58], [47, 70]]]) Jaime -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ay?dale en sus planes de dominaci?n mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From xscript at gmx.net Wed Feb 27 10:31:53 2013 From: xscript at gmx.net (=?utf-8?Q?Llu=C3=ADs?=) Date: Wed, 27 Feb 2013 16:31:53 +0100 Subject: [Numpy-discussion] Plans for missing data Message-ID: <87fw0hixhy.fsf@fimbulvetr.bsc.es> Hi there, I haven't been able to find an answer after a quick search, but are there any plans regarding the (re)inclusion of missing data support into some future version of numpy? Thanks a lot! Lluis -- "And it's much the same thing with knowledge, for whenever you learn something new, the whole world becomes that much richer." -- The Princess of Pure Reason, as told by Norton Juster in The Phantom Tollbooth From charlesr.harris at gmail.com Wed Feb 27 11:23:12 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 27 Feb 2013 09:23:12 -0700 Subject: [Numpy-discussion] Plans for missing data In-Reply-To: <87fw0hixhy.fsf@fimbulvetr.bsc.es> References: <87fw0hixhy.fsf@fimbulvetr.bsc.es> Message-ID: On Wed, Feb 27, 2013 at 8:31 AM, Llu?s wrote: > Hi there, > > I haven't been able to find an answer after a quick search, but are there > any > plans regarding the (re)inclusion of missing data support into some future > version of numpy? > > Not at the moment, but it sits in the back of my mind. Mark's dynd work will include missing values at some point and I think we should see what that looks like. We also need to agree on how they should be interpreted. They won't be in 1.8, too little time for that. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Wed Feb 27 12:01:05 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 Feb 2013 12:01:05 -0500 Subject: [Numpy-discussion] polyfit in NumPy v1.7 In-Reply-To: References: Message-ID: Please post inline so we have the context. On Wed, Feb 27, 2013 at 9:40 AM, David Pine wrote: > Chuck, > > Thanks for the quick reply. > > 1. I see your point about zero weights but the code in its current form > doesn't take into account zero weights in counting the degrees of freedom, > as you point out, so it seems to me like a moot point. 
More importantly, > the documentation doesn't explain what the weights variable w is. In some > implementations of least square fitting it is 1/sigma^2 while in this one > it's 1/sigma. At the very least, the documentation should be more explicit. > How does one go about changing the documentation? weighted least squares, multiplied by weights, is useful and used in robust regression for example with downweighting of observations, so it's not "moot" (statsmodels RLM) If it's called weights, I would expect multiplication. If it's called sigma, I would expect division. > > 2. I am sorry but I don't understand your response. The matrix Vbase in > the code is already the covariance matrix, _before_ it is scaled by fac. > Scaling it by fac and returning Vbase*fac as the covariance matrix is > wrong, at least according to the references I know, including "Numerical > Recipes", by Press et al, "Data Reduction and Error Analysis for the > Physical Sciences" by Bevington, both standard works. Sounds like the same discussion we have about the scaling of the covariance of the parameter estimates in scipy.optimize.curvefit. https://github.com/scipy/scipy/pull/448 for the latest incarnation. (my impression is roughly: Some physical sciences know the scale of their sigma, the rest of the statistical world does not.) Josef > > Dave > > > On Wed, Feb 27, 2013 at 9:03 AM, Charles R Harris > wrote: >> >> >> >> On Wed, Feb 27, 2013 at 6:46 AM, David Pine wrote: >>> >>> As of NumPy v1.7, numpy.polyfit includes an option for providing >>> weighting to data to be fit. It's a welcome addition, but the >>> implementation seems a bit non-standard, perhaps even wrong, and I wonder if >>> someone can enlighten me. >>> >>> 1. The documentation does not specify what the weighting array "w" is >>> supposed to be. After some fooling around I figured out that it is 1/sigma, >>> where sigma is the standard deviation uncertainty "sigma" in the y data. A >>> better implementation, which would be consistent with how weighting is done >>> in scipy.optimize.curve_fit, would be to specify "sigma" directly, rather >>> than "w". This is typically what the user has at hand, this syntax is more >>> transparent, and it would make polyfit consistent with the nonlinear fitting >>> routine scipy.optimize.curve_fit. >> >> >> Weights can be used for more things than sigma. Another common use is to >> set the weight to zero for bad data points. Zero is a nicer number than inf, >> although that would probably work for ieee numbers. Inf and 0/inf == 0 are >> modern innovations, so multiplication weights are somewhat traditional. >> Weights of zero do need to be taken into account in counting the degrees of >> freedom however. >> >>> >>> >>> 2. The definition of the covariance matrix in polyfit is peculiar (or >>> maybe wrong, I'm not sure). Towards the end of the code for polyfit, the >>> standard covariance matrix is calculated and given the variable name >>> "Vbase". However, instead of returning Vbase as the covariance matrix, >>> polyfit returns the quantity Vbase * fac, where fac = resids / (len(x) - >>> order - 2.0). fac scales as N, the number of data points, so the covariance >>> matrix is about N times larger than it should be for large values of N; the >>> increase is smaller for small N (of order 10). Perhaps the authors of >>> polyfit want to use a more conservative error estimate, but the references >>> given in the poyfit code make no mention of it. 
I think it would be much >>> better to return the standard definition of the covariance matrix (see, for >>> example, the book Numerical Recipes). >> >> >> The resids scale as N, so fac is approximately constant. The effect of >> more data points shows up in (A^T * A)^-1, which is probably what is called >> covariance, but really isn't until it is scaled by fac. >> >> Chuck >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From pav at iki.fi Wed Feb 27 12:47:07 2013 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 27 Feb 2013 19:47:07 +0200 Subject: [Numpy-discussion] polyfit in NumPy v1.7 In-Reply-To: References: Message-ID: 27.02.2013 16:40, David Pine kirjoitti: [clip] > 2. I am sorry but I don't understand your response. The matrix Vbase > in the code is already the covariance matrix, _before_ it is scaled by > fac. Scaling it by fac and returning Vbase*fac as the covariance > matrix is wrong, at least according to the references I know, including > "Numerical Recipes", by Press et al, "Data Reduction and Error Analysis > for the Physical Sciences" by Bevington, both standard works. The covariance matrix is (A^T A)^-1 only if the data is weighed by its standard errors prior to lstsq. Polyfit estimates the standard errors from the fit itself, which results in the `fac` multiplication. This is apparently what some people expect. The way the weight parameters work is however confusing, as they are w[i]=sigma_estimate/sigma[i], rather than being absolute errors. Anyway, as Josef noted, it's the same problem that curve_fit in Scipy had and probably the same fix needs to be done here. -- Pauli Virtanen From jorgesmbox-ml at yahoo.es Wed Feb 27 13:32:00 2013 From: jorgesmbox-ml at yahoo.es (Jorge Scandaliaris) Date: Wed, 27 Feb 2013 18:32:00 +0000 (UTC) Subject: [Numpy-discussion] Apply transform to many small matrices References: Message-ID: Jaime Fern?ndez del R?o gmail.com> writes: > np.einsum makes a lot of these easier to figure out: > In [7]: np.einsum('ijk, kl', A, T) > Out[7]:? > array([[[ 7, 10], > ? ? ? ? [15, 22]], > > ? ? ? ?[[23, 34], > ? ? ? ? [31, 46]], > > ? ? ? ?[[39, 58], > ? ? ? ? [47, 70]]]) > Thanks, I will have a look at it. jorges From djpine at gmail.com Wed Feb 27 15:01:41 2013 From: djpine at gmail.com (David Pine) Date: Wed, 27 Feb 2013 15:01:41 -0500 Subject: [Numpy-discussion] polyfit in NumPy v1.7 In-Reply-To: References: Message-ID: Pauli, Josef, Chuck, I read over the discussion on curve_fit. I believe I now understand what people are trying to do when they write about scaling the weighting and/or covariance matrix. And I agree that what polyfit does in its current form is estimate the absolute errors in the data from the data and the fit. Unfortunately, in the case that you want to supply the absolute uncertainty estimates, polyfit doesn't leave the option of not making this "correction". This is a mistake, In my opinion, polyfit and curve_fit should be organized with the following arguments: ydata_err : array_like or float or None absolute_err : bool (True or False) ------------------------ If ydata_err is an array, then it has the same length as xdata and ydata and gives the uncertainty (absolute or relative) of ydata. 
If ydata_err is a float (a single value), it gives the uncertainty (absolute or relative) of ydata. That is, all ydata points have the same uncertainty. If ydata_err is None, then ydata_err is set equal to 1 internally. If absolute_err is True, then no correction is made to the covariance matrix. If absolute_err is False, a correction is made to the covariance matrix based on the square of residuals and the value(s) of ydata_err so that it gives useful estimates of the uncertainties in the fitting parameters. Finally, I have a quibble with the use of the extra factor of 2 subtracted in the denominator of fac (line 596). This is highly non-standard. Most of the literature does not use this factor, and for good reason. It leads to complete nonsense when there are only 3 or 4 data points, for one thing. In supplying software for general use, the most common standard should be used. Finally, the documentation needs to be improved so that it is clear. I would be happy to contribute both to improving the documentation and software. David On Wed, Feb 27, 2013 at 12:47 PM, Pauli Virtanen wrote: > 27.02.2013 16:40, David Pine kirjoitti: > [clip] > > 2. I am sorry but I don't understand your response. The matrix Vbase > > in the code is already the covariance matrix, _before_ it is scaled by > > fac. Scaling it by fac and returning Vbase*fac as the covariance > > matrix is wrong, at least according to the references I know, including > > "Numerical Recipes", by Press et al, "Data Reduction and Error Analysis > > for the Physical Sciences" by Bevington, both standard works. > > The covariance matrix is (A^T A)^-1 only if the data is weighed by its > standard errors prior to lstsq. Polyfit estimates the standard errors > from the fit itself, which results in the `fac` multiplication. > > This is apparently what some people expect. The way the weight > parameters work is however confusing, as they are > w[i]=sigma_estimate/sigma[i], rather than being absolute errors. > > Anyway, as Josef noted, it's the same problem that curve_fit in Scipy > had and probably the same fix needs to be done here. > > -- > Pauli Virtanen > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xscript at gmx.net Wed Feb 27 15:14:31 2013 From: xscript at gmx.net (=?utf-8?Q?Llu=C3=ADs?=) Date: Wed, 27 Feb 2013 21:14:31 +0100 Subject: [Numpy-discussion] Plans for missing data In-Reply-To: (Charles R. Harris's message of "Wed, 27 Feb 2013 09:23:12 -0700") References: <87fw0hixhy.fsf@fimbulvetr.bsc.es> Message-ID: <871uc15xaw.fsf@fimbulvetr.bsc.es> Charles R Harris writes: > On Wed, Feb 27, 2013 at 8:31 AM, Lluís wrote: > Hi there, > I haven't been able to find an answer after a quick search, but are there > any > plans regarding the (re)inclusion of missing data support into some future > version of numpy? > Not at the moment, but it sits in the back of my mind. Mark's dynd work will > include missing values at some point and I think we should see what that looks > like. We also need to agree on how they should be interpreted. > They won't be in 1.8, too little time for that. Thanks for the info. Lluis -- "And it's much the same thing with knowledge, for whenever you learn something new, the whole world becomes that much richer."
-- The Princess of Pure Reason, as told by Norton Juster in The Phantom Tollbooth From jrocher at enthought.com Wed Feb 27 17:17:33 2013 From: jrocher at enthought.com (Jonathan Rocher) Date: Wed, 27 Feb 2013 16:17:33 -0600 Subject: [Numpy-discussion] [ANN] SciPy2013: Call for abstracts Message-ID: [Apologies for cross-posts] Dear all, The annual SciPy Conference (Scientific Computing with Python) allows participants from academic, commercial, and governmental organizations to showcase their latest projects, learn from skilled users and developers, and collaborate on code development. *The deadline for abstract submissions is March 20th, 2013. * Submissions are welcome that address general Scientific Computing with Python, one of the two special themes for this years conference (machine learning & reproducible science), or the domain-specific mini-symposiaheld during the conference (Meteorology, climatology, and atmospheric and oceanic science, Astronomy and astrophysics, Medical imaging, Bio-informatics). Please submit your abstract at the SciPy 2013 website abstract submission form . Abstracts will be accepted for posters or presentations. Optional papers to be published in the conference proceedings will be requested following abstract submission. This year the proceedings will be made available prior to the conference to help attendees navigate the conference. We look forward to an exciting and interesting set of talks, posters, and discussions and hope to see you at the conference. The SciPy 2013 Program Committee Chairs Matt McCormick, Kitware, Inc. Katy Huff, University of Wisconsin-Madison and Argonne National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Feb 27 18:44:26 2013 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 27 Feb 2013 23:44:26 +0000 Subject: [Numpy-discussion] Apply transform to many small matrices In-Reply-To: References: Message-ID: On 27 Feb 2013 12:57, "Jorge Scandaliaris" wrote: > > Hi, > First of all excuse me if this is a trivial question. I have the feeling it is, > but searching and looking through the docs has proven unsuccesful so far. > > I have an ndarray A of shape (M,2,2) representing M 2 x 2 matrices. Now I want > to apply a transform T of shape (2,2) to each of matrix. The way I do this now > is by iterating over all rows of A multiplying the matrices using numpy.dot(): > > for row in np.arange(A.shape[0]): > A[row] = np.dot(A[row],T) > > but this seems to be slow when M is large and I have the feeling there must be a > way of doing it better. Pretty sure the code you wrote above is equivalent to np.dot(A, T, out=A) -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From wfspotz at sandia.gov Wed Feb 27 18:44:24 2013 From: wfspotz at sandia.gov (Bill Spotz) Date: Wed, 27 Feb 2013 16:44:24 -0700 Subject: [Numpy-discussion] [EXTERNAL] swig interface file (numpy.i) warning In-Reply-To: References: <7F19AD02-EE44-4BF2-AC13-B4EF8A04117E@sandia.gov> <54B7C755-4C5A-4FF7-8F39-CA657DEEFEDD@sandia.gov> Message-ID: Is there documentation for the new C API for numpy? Thanks, Bill On Feb 26, 2013, at 2:04 PM, Bill Spotz wrote: > So the difference is that I was wanting to make changes in the git repository that is at version 1.8. I would expect it to still work, though. > > I can take a look at the scipy issue. 
> > -Bill > > On Feb 26, 2013, at 1:41 PM, Ralf Gommers wrote: > >> On Mon, Feb 25, 2013 at 6:12 AM, Bill Spotz wrote: >> I wanted to take a stab at updating numpy.i to not use deprecated NumPy C/API code. Nothing in the git logs indicates this has already been done. I added >> >> #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION >> >> to numpy.i right before it includes numpy/arrayobject.h. The built-in tests for numpy.i should be sufficient to ferret out all of the deprecated calls. When I try to make the tests, the compiler tells me that it has included >> >> npy_deprecated_api.h >> >> and I get a compiler error when it includes old_defines.h, telling me that it is deprecated. Shouldn't my #define prevent this from happening? I'm confused. Any guidance would be appreciated. >> >> Hi Bill, this works as expected for me. I did the following: >> >> 1. Build 1.7.0 release in-place >> 2. Set PYTHONPATH to in-place build. >> 3. $ cd doc/swig >> 4. $ make test # runs all tests >> 5. Add define in numpy.i like you did >> 6. $ make test # now fails because old_defines.h wasn't included >> >> First failure is: >> >> Array_wrap.cxx:5506:20: error: ?PyArrayObject? has no member named ?data? >> Array_wrap.cxx: In function ?PyObject* _wrap_new_Array2(PyObject*, PyObject*)?: >> Array_wrap.cxx:5635:55: error: cannot convert ?PyObject* {aka _object*}? to ?const PyArrayObject* {aka const tagPyArrayObject*}? for argument ?1? to ?int PyArray_TYPE(const PyArrayObject*)? >> error: command 'gcc' failed with exit status 1 >> >> >> If you're about to update numpy.i, maybe you could take along the divergence between the numpy and scipy versions of it? A ticket was just opened for that: http://projects.scipy.org/scipy/ticket/1825 >> I don't know how much work that would be, or why we even have two versions. >> >> Cheers, >> Ralf >> >> >> >> Thanks, >> Bill >> >> On Oct 9, 2012, at 9:18 PM, Tom Krauss wrote: >> >>> This code reproduces the error - I think it is small enough for email. (large) numpy.i not included, let me know if you want that too. Makefile will need to be tailored to your environment. >>> If it's more convenient, or you have trouble reproducing, I can create a branch on github - let me know. >>> >>> On Tue, Oct 9, 2012 at 1:47 PM, Tom Krauss wrote: >>> I can't attach the exact code but would be happy to provide something simple that has the same issue. I'll post something here when I can get to it. >>> - Tom >>> >>> >>> On Tue, Oct 9, 2012 at 10:52 AM, Bill Spotz wrote: >>> Tom, Charles, >>> >>> If you discuss this further, be sure to CC me. >>> >>> -Bill Spotz >>> >>> On Oct 9, 2012, at 8:50 AM, Charles R Harris wrote: >>> >>>> Hi Tom, >>>> >>>> On Tue, Oct 9, 2012 at 8:30 AM, Tom Krauss wrote: >>>> Hi, >>>> >>>> I've been happy to use numpy.i for generating SWIG interfaces to C++. >>>> >>>> For a while, I've noticed this warning while compiling: >>>> /Users/tkrauss/projects/dev_env/lib/python2.7/site-packages/numpy-1.8.0.dev_f2f0ac0_20120725-py2.7-macosx-10.8-x86_64.egg/numpy/core/include/numpy/npy_deprecated_api.h:11:2: warning: #warning "Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" >>>> >>>> and today tried to get rid of the warning. >>>> >>>> So, in numpy.i, I followed the warning's advice. 
I added the # def here: >>>> >>>> %{ >>>> #ifndef SWIG_FILE_WITH_INIT >>>> # define NO_IMPORT_ARRAY >>>> #endif >>>> #include "stdio.h" >>>> #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION >>>> #include >>>> %} >>>> >>>> SWIG was happy, but when compiling the C++ wrapper, there were many warnings followed by many errors. The warnings were for redefinition of NPY_MIN_BYTE and similar. The errors were for all kinds of stuff, excerpt here: >>>> native_wrap.cpp:3632: error: ?PyArray_NOTYPE? was not declared in this scope >>>> native_wrap.cpp:3633: error: cannot convert ?PyObject*? to ?const PyArrayObject*? for argument ?1? to ?int PyArray_TYPE(const PyArrayObject*)? >>>> native_wrap.cpp: At global scope: >>>> native_wrap.cpp:3877: error: ?intp? has not been declared >>>> native_wrap.cpp: In function ?int require_fortran(PyArrayObject*)?: >>>> native_wrap.cpp:3929: error: ?struct tagPyArrayObject? has no member named ?nd? >>>> native_wrap.cpp:3933: error: ?struct tagPyArrayObject? has no member named ?flags? >>>> native_wrap.cpp:3933: error: ?FARRAY? was not declared in this scope >>>> native_wrap.cpp:20411: error: ?struct tagPyArrayObject? has no member named ?data? >>>> >>>> It looks like there is a new C API for numpy, and the version of numpy.i that I have doesn't use it. >>>> >>>> Is there a new version of numpy.i available (or in development) that works with the new API? Short term it will just get rid of a warning but I am interested in a good long term solution in case I need to upgrade numpy. >>>> >>>> >>>> In the long term we would like to hide the ndarray internals, essentially making them private. There are still some incomplete areas, f2py and, apparently, numpy.i. Your feedback here is quite helpful and if you have some time we can try to get this straightened out. Could you attach the code you are trying to interface? If you have a github account you could also set up a branch where we could work on this. >>>> >>>> Chuck >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> ** Bill Spotz ** >>> ** Sandia National Laboratories Voice: (505)845-0170 ** >>> ** P.O. Box 5800 Fax: (505)284-0154 ** >>> ** Albuquerque, NM 87185-0370 Email: wfspotz at sandia.gov ** > From josef.pktd at gmail.com Wed Feb 27 20:27:10 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 27 Feb 2013 20:27:10 -0500 Subject: [Numpy-discussion] polyfit in NumPy v1.7 In-Reply-To: References: Message-ID: On Wed, Feb 27, 2013 at 3:01 PM, David Pine wrote: > Pauli, Josef, Chuck, > > I read over the discussion on curve_fit. I believe I now understand what > people are trying to do when they write about scaling the weighting and/or > covariance matrix. And I agree that what polyfit does in its current form > is estimate the absolute errors in the data from the data and the fit. > > Unfortunately, in the case that you want to supply the absolute uncertainty > estimates, polyfit doesn't leave the option of not making this "correction". > This is a mistake, > > In my opinion, polyfit and curve_fit should be organized with the following > arguments: I'm fine with the structure (as long as estimating the error scale remains the default). I'm not a big fan of the names, but I'm not a developer or main user of polyfit or curve_fit, so whatever... > > ydata_err : array_like or float or None I still prefer weights or sigma because then I know what it means. 
> > absolute_err : bool (True or False) > > ------------------------ > > If ydata_err is an array, then it has the same length as xdata and ydata and > gives the uncertainty (absolute or relative) of ydata. uncertainty is vague, either standard deviation or "weights" multiplying y and x > > If ydata_err is a float (a single value), it gives the uncertainty (absolute > or relative) of ydata. That is, all ydata points have the same uncertainty fine, with the single value case, although it won't have any effect if absolute_err is False > > If ydata_err is None, the ydata_err is set equal to 1 internally. > > If absolute_err is True, then no correction is made to the covariance matrix > > If absolute_err is False, the a correction is made to the covariance matrix > based on the square of residuals and the value(s) of ydata_err so that it > gives useful estimates of the uncertainties in the fitting parameters. > > Finally, I have a quibble with the use of the extra factor of 2 subtracted > in the denominator of fac (line 596). This is highly non-standard. Most > of the literature does not use this factor, and for good reason. It leads > to complete nonsense when there are only 3 or 4 data points, for one thing. > In supplying software for general use, the most common standard should be > used. I've never heard of the -2 correction 592 # Some literature ignores the extra -2.0 factor in the denominator, but 593 # it is included here because the covariance of Multivariate Student-T 594 # (which is implied by a Bayesian uncertainty analysis) includes it. 595 # Plus, it gives a slightly more conservative estimate of uncertainty. 596 fac = resids / (len(x) - order - 2.0) ??? Josef > > Finally, the documentation needs to be improved so that it is clear. I > would be happy to contribute both to improving the documentation and > software. > > David > > If > > > > On Wed, Feb 27, 2013 at 12:47 PM, Pauli Virtanen wrote: >> >> 27.02.2013 16:40, David Pine kirjoitti: >> [clip] >> > 2. I am sorry but I don't understand your response. The matrix Vbase >> > in the code is already the covariance matrix, _before_ it is scaled by >> > fac. Scaling it by fac and returning Vbase*fac as the covariance >> > matrix is wrong, at least according to the references I know, including >> > "Numerical Recipes", by Press et al, "Data Reduction and Error Analysis >> > for the Physical Sciences" by Bevington, both standard works. >> >> The covariance matrix is (A^T A)^-1 only if the data is weighed by its >> standard errors prior to lstsq. Polyfit estimates the standard errors >> from the fit itself, which results in the `fac` multiplication. >> >> This is apparently what some people expect. The way the weight >> parameters work is however confusing, as they are >> w[i]=sigma_estimate/sigma[i], rather than being absolute errors. >> >> Anyway, as Josef noted, it's the same problem that curve_fit in Scipy >> had and probably the same fix needs to be done here. 
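For anyone who needs absolute parameter uncertainties in the meantime, a
minimal sketch (example data made up; it sidesteps polyfit's internal
rescaling rather than trying to undo the fac expression quoted above): weigh
the Vandermonde design matrix by 1/sigma yourself and invert the normal
matrix, which is the plain (A^T A)^-1 covariance described above:

import numpy as np

x = np.linspace(0., 10., 20)
sigma = 0.5 * np.ones_like(x)              # known absolute 1-sigma errors on y
y = 3.0 * x - 1.0 + np.random.normal(0., sigma)

deg = 1
w = 1.0 / sigma                            # polyfit's w is 1/sigma, not 1/sigma**2
p, cov_scaled = np.polyfit(x, y, deg, w=w, cov=True)

# Textbook covariance for known absolute sigmas: invert the normal matrix of
# the sigma-weighted Vandermonde design matrix; no rescaling by the residuals.
A = np.vander(x, deg + 1) * w[:, np.newaxis]
cov_abs = np.linalg.inv(np.dot(A.T, A))

print(np.sqrt(np.diag(cov_abs)))           # absolute 1-sigma parameter uncertainties
print(np.sqrt(np.diag(cov_scaled)))        # polyfit's residual-rescaled estimate

This is essentially what an absolute_err=True switch would return; the
rescaled matrix is the one you want when the sigmas are only known up to a
common factor.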
>> >> -- >> Pauli Virtanen >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> http://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > http://mail.scipy.org/mailman/listinfo/numpy-discussion > From sunghwanchoi91 at gmail.com Thu Feb 28 00:30:26 2013 From: sunghwanchoi91 at gmail.com (Sunghwan Choi) Date: Thu, 28 Feb 2013 14:30:26 +0900 Subject: [Numpy-discussion] embeding numpy in c++ Message-ID: <13df01ce1574$b967a490$2c36edb0$@gmail.com> Hi, I am now trying to embedding numpy in my c++ code (more correctly ase module, which is dependent to numpy). My test code is simple test.cpp #include "Python.h" #include #include extern "C" void Py_Initialize(); extern "C" void PyErr_Print(); using namespace std; int main(int argc, char* argv[]) { double answer = 0; PyObject *modname, *mod, *mdict, *func, *stringarg, *args, *rslt; Py_Initialize(); modname = PyString_FromString("numpy"); mod = PyImport_Import(modname); PyErr_Print(); cout << mod << endl; return 0; } shchoi at aims0:~/tmp/test2$ vi test.cpp shchoi at aims0:~/tmp/test2$ icpc -c test.cpp -I ~/program/epd/include/python2.7/ shchoi at aims0:~/tmp/test2$ icpc -o exe test.o -L ~/program/epd/lib/ -lpython2.7 shchoi at aims0:~/tmp/test2$ ./exe Traceback (most recent call last): File "/home/shchoi/program/epd/lib/python2.7/site-packages/numpy/__init__.py", line 137, in import add_newdocs File "/home/shchoi/program/epd/lib/python2.7/site-packages/numpy/add_newdocs.py", line 9, in from numpy.lib import add_newdoc File "/home/shchoi/program/epd/lib/python2.7/site-packages/numpy/lib/__init__.py" , line 4, in from type_check import * File "/home/shchoi/program/epd/lib/python2.7/site-packages/numpy/lib/type_check.p y", line 8, in import numpy.core.numeric as _nx File "/home/shchoi/program/epd/lib/python2.7/site-packages/numpy/core/__init__.py ", line 5, in import multiarray ImportError: /home/shchoi/program/epd/lib/python2.7/site-packages/numpy/core/multiarray.s o: undefined symbol: PyUnicodeUCS2_AsASCIIString How to solve this problem? Can anyone help me? If anyone know the solution, please let me know as soon as possible Sincerely Sunghwan Choi -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Feb 28 01:51:13 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 27 Feb 2013 23:51:13 -0700 Subject: [Numpy-discussion] interactive distutils Message-ID: It's there, it uses raw_input, does anyone use it? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From raghu330 at gmail.com Thu Feb 28 07:58:42 2013 From: raghu330 at gmail.com (raghu330) Date: Thu, 28 Feb 2013 04:58:42 -0800 (PST) Subject: [Numpy-discussion] scipy build errors Message-ID: <1362056322310-33032.post@n7.nabble.com> Dear All, I am trying to install iris, which needs scipy0.10 or above. My machine is RHEL6 with gfortran and g95 installed. I was able to install numpy without any trouble, but have issues with scipy. I have installed ATLAS, BLAS and LAPACK, but I still get this error. Kindly let me know how to go ahead.. Thanks in advance & Regards Raghu ----------------- /opt/python2.7.3/bin/python setup.py build --fcompiler=gfortran Running from scipy source directory. 
blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /opt/python2.7.3/lib libraries mkl,vml,guide not found in /usr/local/lib64 libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /opt/lib libraries mkl,vml,guide not found in /usr/lib64 libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /opt/python2.7.3/lib libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64 libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /opt/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/atlas libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/sse2 libraries ptf77blas,ptcblas,atlas not found in /usr/lib64 libraries ptf77blas,ptcblas,atlas not found in /usr/lib NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in /opt/python2.7.3/lib libraries f77blas,cblas,atlas not found in /usr/local/lib64 libraries f77blas,cblas,atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /opt/lib libraries f77blas,cblas,atlas not found in /usr/lib64/atlas libraries f77blas,cblas,atlas not found in /usr/lib64/sse2 libraries f77blas,cblas,atlas not found in /usr/lib64 libraries f77blas,cblas,atlas not found in /usr/lib NOT AVAILABLE /opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/system_info.py:1425 : UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in /opt/python2.7.3/lib libraries blas not found in /usr/local/lib64 libraries blas not found in /usr/local/lib libraries blas not found in /opt/lib libraries blas not found in /usr/lib64 libraries blas not found in /usr/lib NOT AVAILABLE /opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/system_info.py:1434 : UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE /opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/system_info.py:1437 : UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. 
warnings.warn(BlasSrcNotFoundError.__doc__) Traceback (most recent call last): File "setup.py", line 208, in setup_package() File "setup.py", line 199, in setup_package configuration=configuration ) File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/core.py", l ine 152, in setup config = configuration() File "setup.py", line 136, in configuration config.add_subpackage('scipy') File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/misc_util.p y", line 1002, in add_subpackage caller_level = 2) File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/misc_util.p y", line 971, in get_subpackage caller_level = caller_level + 1) File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/misc_util.p y", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/setup.py", line 8, in configuration config.add_subpackage('integrate') File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/misc_util.p y", line 1002, in add_subpackage caller_level = 2) File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/misc_util.p y", line 971, in get_subpackage caller_level = caller_level + 1) File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/misc_util.p y", line 908, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/integrate/setup.py", line 10, in configuration blas_opt = get_info('blas_opt',notfound_action=2) File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/system_info .py", line 321, in get_info return cl().get_info(notfound_action) File "/opt/python2.7.3/lib/python2.7/site-packages/numpy/distutils/system_info .py", line 472, in get_info raise self.notfounderror(self.notfounderror.__doc__) numpy.distutils.system_info.BlasNotFoundError: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. ------------------------------------ EOM -------------------------------------- -- View this message in context: http://numpy-discussion.10968.n7.nabble.com/scipy-build-errors-tp33032.html Sent from the Numpy-discussion mailing list archive at Nabble.com. From jrocher at enthought.com Thu Feb 28 08:31:45 2013 From: jrocher at enthought.com (Jonathan Rocher) Date: Thu, 28 Feb 2013 07:31:45 -0600 Subject: [Numpy-discussion] [ANN] SciPy2013 Tutorials: Call for Submissions Message-ID: [Apologies for cross-posts] Dear all, We are excited to kick off the SciPy2013 conferencewith two days of tutorials. This year we are proud to expand the session to include *THREE parallel tracks*: introductory, intermediate and advanced. Teachers will receive a stipend for their service. We are accepting tutorial proposals from individuals or teams until *April 1st*. Click here for more details and to submit applications . Looking forward to a very exciting conference! The SciPy 2013 Tutorial Chairs Francesc Alted, Continuum Analytics Inc. Dharhas Pothina, Texas Water Development Board -- Jonathan Rocher, PhD Scientific software developer Co-chair of SciPy2013 Conference Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jrocher at enthought.com Thu Feb 28 13:29:24 2013 From: jrocher at enthought.com (Jonathan Rocher) Date: Thu, 28 Feb 2013 12:29:24 -0600 Subject: [Numpy-discussion] [ANN] SciPy2013 Sponsorships: Apply Now! Message-ID: [Apologies for cross-posts] Dear all, The SciPy2013 conference will continue the tradition of offering sponsorships to attend the conference. These sponsorships provide funding for airfare, lodging, and conference registration. This year, these sponsorships will be *open to community members rather than just students*. Applications will be judged both on merit as well as need. If you would like to apply for yourself or a worthy candidate , please note our application due date of *Monday, March 25th*. Winners will be announced on April 22nd. Looking forward to a very exciting conference! The SciPy 2013 Financial Aid Chairs Jeff Daily, Pacific Northwest National Lab. John Wiggins, Enthought Inc. -- Jonathan Rocher, PhD Scientific software developer Co-chair of SciPy2013 Conference Enthought, Inc. jrocher at enthought.com 1-512-536-1057 http://www.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... URL: