From Sebastian.Wagner.fl at ait.ac.at  Mon Sep  2 11:10:22 2013
From: Sebastian.Wagner.fl at ait.ac.at (Wagner Sebastian)
Date: Mon, 2 Sep 2013 15:10:22 +0000
Subject: [SciPy-User] solve_banded usage (once again)
Message-ID:

Dear SciPy users,

I'm currently working with sparse matrices, in particular diagonal sparse matrices. While searching for solve functions I found one called solve_banded and some information about it in the docs (very little, which is why I kept searching), on StackOverflow (http://stackoverflow.com/questions/12978518/scipy-sparse-dia-matrix-solver), in a ticket on this problem (https://github.com/scipy/scipy/issues/2285) and on this mailing list. But nowhere did I find any information about how to use it with a diagonal sparse matrix.

The relevant thread is at http://thread.gmane.org/gmane.comp.python.scientific.user/10308, which states that "the docstring of solve_banded seems to be not correct/up-to-date, or at least unclear." That is entirely correct (the mail is from 2007!), and nothing has changed since.

Due to the lack of documentation I experimented, and got it running for contiguous offsets, like this:

import numpy as np
import scipy.sparse
import scipy.linalg

b = np.ones(5)  # example right-hand side (b was not shown in the original mail)
dia = scipy.sparse.dia_matrix(([[1,1,3,0,0],[1,1,1,1,1],[0,2,1,3,2]], [-2,0,1]), shape=(5,5))
offsets = [np.abs(dia.offsets[0]), dia.offsets[-1]]
data = np.flipud(dia.data)
scipy.linalg.solve_banded(offsets, data, b)

This gives me the same solutions as linalg.solve and sparse.linalg.spsolve. But with non-contiguous offsets it fails; I tried inserting a row of zeros for each missing diagonal:

dia = scipy.sparse.dia_matrix(([[1,1,3,0,0],[1,1,1,1,1],[0,2,1,3,2]], [-2,0,1]), shape=(5,5))
l = np.abs(dia.offsets[0])
u = dia.offsets[-1]
data = np.empty((l+u+1, dia.data.shape[1]), dtype=int)
missingCounter = 0
for i, d in enumerate(range(-l, u+1)):
    if d in dia.offsets:
        data[i] = dia.data[-(i+1-missingCounter)]  # flip the array
    else:
        data[i] = np.zeros(dia.data.shape[1])
        missingCounter += 1

It looks fine to me, but it does not give the same answer as linalg.solve and sparse.linalg.spsolve. Can anybody advise me on how to use solve_banded correctly?

Regards,
Sebastian

From cournape at gmail.com  Mon Sep  2 11:30:23 2013
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 2 Sep 2013 16:30:23 +0100
Subject: [SciPy-User] [Numpy-discussion] ANN: Scipy 0.13.0 beta 1 release
In-Reply-To:
References:
Message-ID:

On Fri, Aug 30, 2013 at 7:16 PM, David Cournapeau wrote:

> It looks like it broke the build with MKL as well (in, surprise, ARPACK).
> I will investigate this further this weekend.

Ok, I think the commit 5935030f8cced33e433804a21bdb15572d1d38e8 is quite wrong. It conflates the issue of dealing with Accelerate brokenness with that of using the g77 ABI. I would suggest reverting it for 0.13.0 (and re-disabling single precision), as fixing this correctly may require quite some time/testing.

David

> On Thu, Aug 22, 2013 at 2:12 PM, Ralf Gommers wrote:
>
>> Hi all,
>>
>> I'm happy to announce the availability of the first beta release of Scipy
>> 0.13.0. Please try this beta and report any issues on the scipy-dev mailing
>> list.
>>
>> Source tarballs and release notes can be found at
>> https://sourceforge.net/projects/scipy/files/scipy/0.13.0b1/. Windows
>> and OS X installers will follow later (we have a minor infrastructure issue
>> to solve, and I'm at EuroScipy now).
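Picking up the solve_banded question above: solve_banded expects the banded storage ab[u + i - j, j] == a[i, j], while a dia_matrix stores data[k, j] == a[j - o, j] for each offset o, so every stored diagonal simply moves to row u - o of ab with its column alignment unchanged. A minimal sketch along those lines (the helper name and the right-hand side are illustrative), with missing diagonals left as zero rows:

import numpy as np
import scipy.sparse
import scipy.linalg

def solve_dia_banded(dia, b):
    # convert a dia_matrix to the banded form ab[u + i - j, j] == a[i, j]
    offsets = np.asarray(dia.offsets)
    u = max(int(offsets.max()), 0)    # number of superdiagonals
    l = max(int(-offsets.min()), 0)   # number of subdiagonals
    ab = np.zeros((l + u + 1, dia.shape[1]), dtype=dia.dtype)
    for k, o in enumerate(offsets):
        # dia stores data[k, j] = A[j - o, j]; the banded form wants that
        # same value at ab[u - o, j], so each row copies over unchanged
        ab[u - o, :] = dia.data[k, :]
    return scipy.linalg.solve_banded((l, u), ab, b)

dia = scipy.sparse.dia_matrix(([[1,1,3,0,0],[1,1,1,1,1],[0,2,1,3,2]],
                               [-2,0,1]), shape=(5,5))
b = np.ones(5)
x = solve_dia_banded(dia, b)
# should agree with scipy.sparse.linalg.spsolve(dia.tocsc(), b)

Because the offsets are handled individually, non-contiguous offsets (a missing -1 diagonal in the example above) need no special casing: the corresponding row of ab just stays zero.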
>> Cheers,
>> Ralf

From cournape at gmail.com  Mon Sep  2 14:12:00 2013
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 2 Sep 2013 19:12:00 +0100
Subject: [SciPy-User] [Numpy-discussion] ANN: Scipy 0.13.0 beta 1 release
In-Reply-To:
References:
Message-ID:

On Mon, Sep 2, 2013 at 5:46 PM, Pauli Virtanen wrote:

> Hi,
>
> 02.09.2013 18:30, David Cournapeau kirjoitti:
> [clip]
> > Ok, I think the commit 5935030f8cced33e433804a21bdb15572d1d38e8 is
> > quite wrong.
> >
> > It conflates the issue of dealing with Accelerate brokenness with
> > that of using the g77 ABI. I would suggest reverting it for 0.13.0 (and
> > re-disabling single precision), as fixing this correctly may require
> > quite some time/testing.
>
> I'm -1 on returning to the previous situation where many routines on
> OSX are simply broken (which was the situation previously ---
> "disabling single precision" left several things still broken). I'd
> rather just postpone the 0.13.0 release until this issue is solved
> properly.

I see, I missed that it was more than just reverting to slower versions.

> Can you say what exactly is wrong -- as far as I know, Accelerate on
> OSX simply uses the g77 ABI. There were some bugs previously where it
> in places did things differently, but on recent OSX releases this is
> no longer the case?
>
> I guess you are trying to link with MKL? In that case, I would rather
> propose restoring the previous MKL wrappers (with trivial extensions
> for the missing w* symbols).

The problem is specific to MKL, yes, but I've found a workaround: in the case of MKL you just need to work around the g77 ABI of MKL vs. the gfortran ABI, while the LAPACK interface is the actual LAPACK, not CLAPACK. So for MKL you need non-dummy wrappers iff the function returns a complex. A dirty patch seems to confirm that this fixes the issue; I will prepare an actual patch tomorrow.

David

> --
> Pauli Virtanen

From david.mcminn at iwcdownhole.com  Tue Sep  3 07:59:13 2013
From: david.mcminn at iwcdownhole.com (David McMinn)
Date: Tue, 3 Sep 2013 11:59:13 +0000
Subject: [SciPy-User] argrelextrema returns tuple of 2 empty arrays when no peaks found in 1D vector
Message-ID: <463F3CF32EC88745AA8B81DA65C0564D015B55CE@Gia-Exch06.MessageExchange.com>

Hello,

I'm using the scipy.signal.argrelextrema function to find the peaks in a 1D numpy vector. While this is working for the most part, I found that when no peaks were present in the data I get a tuple with two empty arrays as a result. I can handle this, but it got me wondering whether I have misunderstood the return value?

The docs say it returns "Indices of the maxima, as an array of integers".
I would expect with no peaks it would be an empty array (or a tuple with one empty array) since my input data is a 1D array, but I get two arrays. Any reason for this? Checking the source for scipy, it does return "(np.array([]),) * 2" when there are no peaks, but that doesn't explain why it does that.

For example:

import numpy as np
import scipy.signal as sig
y = np.concatenate([np.arange(1, 0, -0.01), np.arange(0, 1, 0.01)])
peaks = sig.argrelextrema(0.5 - np.abs(y - 0.5), np.greater)
print peaks  # Prints '(array([ 50, 150]),)'
peaks = sig.argrelextrema(y, np.greater)  # No peaks
print peaks

This prints '(array([], dtype=float64), array([], dtype=float64))' for the second print statement, whereas I would have expected '(array([], dtype=float64),)', which would have been more along the lines of the first print statement.

So is this by design, and if so why? Have I misunderstood the return value?

Thanks.

From szebi at gmx.at  Tue Sep  3 13:09:06 2013
From: szebi at gmx.at (szebi)
Date: Tue, 03 Sep 2013 19:09:06 +0200
Subject: [SciPy-User] argrelextrema returns tuple of 2 empty arrays when no peaks found in 1D vector
In-Reply-To: <463F3CF32EC88745AA8B81DA65C0564D015B55CE@Gia-Exch06.MessageExchange.com>
References: <463F3CF32EC88745AA8B81DA65C0564D015B55CE@Gia-Exch06.MessageExchange.com>
Message-ID: <522617B2.1080104@gmx.at>

Hi David,

> Checking the source for scipy, it does return "(np.array([]),) * 2" when
> there are no peaks, but that doesn't explain why it does that.

It does explain it:

(np.array([]),) * 2

is the same as

(np.array([]), np.array([]))

Maybe it would make more sense if in this case the same number of arrays as the input has dimensions were returned:

(np.array([]),) * data.ndim

regards,
Sebastian

On 09/03/2013 01:59 PM, David McMinn wrote:
> Hello,
>
> I'm using the scipy.signal.argrelextrema function to find the peaks in a 1D numpy vector. While this is working for the most part, I found that when no peaks were present in the data I get a tuple with two empty arrays as a result. I can handle this, but it got me wondering whether I have misunderstood the return value?
>
> The docs say it returns "Indices of the maxima, as an array of integers". I would expect with no peaks it would be an empty array (or a tuple with one empty array) since my input data is a 1D array, but I get two arrays. Any reason for this? Checking the source for scipy, it does return "(np.array([]),) * 2" when there are no peaks, but that doesn't explain why it does that.
>
> For example:
>
> import numpy as np
> import scipy.signal as sig
> y = np.concatenate([np.arange(1, 0, -0.01), np.arange(0, 1, 0.01)])
> peaks = sig.argrelextrema(0.5 - np.abs(y - 0.5), np.greater)
> print peaks  # Prints '(array([ 50, 150]),)'
> peaks = sig.argrelextrema(y, np.greater)  # No peaks
> print peaks
>
> This prints '(array([], dtype=float64), array([], dtype=float64))' for the second print statement, whereas I would have expected '(array([], dtype=float64),)', which would have been more along the lines of the first print statement.
>
> So is this by design, and if so why? Have I misunderstood the return value?
>
> Thanks.
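A short reproduction of the behaviour under discussion, with the ndim-based normalization applied on the caller's side in the meantime (a sketch; the exact dtype printed is platform-dependent):

import numpy as np
import scipy.signal as sig

y = np.arange(100.0)   # strictly increasing, so no relative maxima
peaks = sig.argrelextrema(y, np.greater)

# normalize the (np.array([]),) * 2 return to one index array per
# input dimension, as suggested above
if all(p.size == 0 for p in peaks):
    peaks = (np.array([], dtype=int),) * y.ndim
print peaks   # (array([], dtype=int64),) on a 64-bit build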
From gabe at gabegaster.com  Tue Sep  3 13:18:34 2013
From: gabe at gabegaster.com (Gabriel Gaster)
Date: Tue, 3 Sep 2013 12:18:34 -0500
Subject: [SciPy-User] __sizeof__ seems wrong
Message-ID:

This might be a bug/fix that is more appropriate for the numpy list -- but it also relates to scipy.sparse...

Run the following code:

from scipy import sparse
x = sparse.rand(100, 10, .2)
print x.__sizeof__()
## 32 or 16, depending
print x.data.__sizeof__()
## 80 or 40, depending
print x.data.nbytes
## 1600

import numpy
print numpy.arange(200).__sizeof__()
## 80 or 40, depending
print numpy.arange(200).nbytes
## 1600

1. Why is nbytes different than __sizeof__? I believe nbytes and do not believe __sizeof__. __sizeof__ should just be rewritten to default to nbytes.
2. It seems that the __sizeof__ methods need to be updated for numpy.array AND for scipy.sparse.

I would be happy to do this -- but before I do, I wanted to sanity-check with the list.
Thanks.

Gabe

From david_baddeley at yahoo.com.au  Tue Sep  3 15:37:23 2013
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Tue, 3 Sep 2013 12:37:23 -0700 (PDT)
Subject: [SciPy-User] __sizeof__ seems wrong
In-Reply-To:
Message-ID: <1378237043.576.YahooMailNeo@web163905.mail.gq1.yahoo.com>

Hi Gabriel,

I believe that sizeof returns the size of the 'ndarray' object (i.e. a pointer to the data, plus information about dimensions, strides, data type, etc.), not the actual data. The full size in memory would be array.nbytes + array.__sizeof__(). The situation is, however, complicated in that the ndarray object doesn't necessarily own its memory, and with the possibility of having multiple ndarrays pointing at the same data (i.e. after slicing), it makes more sense to think of an ndarray object as being a kind of smart pointer rather than actually encompassing its data, making the current implementation of __sizeof__ correct. One could potentially make sizeof return the full size if the array owned its data and the 'pointer' size otherwise, but this is likely to be hard (if not impossible) to implement consistently over all possible use cases.

cheers,
David

________________________________
From: Gabriel Gaster
To: scipy-user at scipy.org
Sent: Tuesday, 3 September 2013 1:18 PM
Subject: [SciPy-User] __sizeof__ seems wrong

This might be a bug/fix that is more appropriate for the numpy list -- but it also relates to scipy.sparse...

Run the following code:

from scipy import sparse
x = sparse.rand(100, 10, .2)
print x.__sizeof__()
## 32 or 16, depending
print x.data.__sizeof__()
## 80 or 40, depending
print x.data.nbytes
## 1600

import numpy
print numpy.arange(200).__sizeof__()
## 80 or 40, depending
print numpy.arange(200).nbytes
## 1600

1. Why is nbytes different than __sizeof__? I believe nbytes and do not believe __sizeof__. __sizeof__ should just be rewritten to default to nbytes.
2. It seems that the __sizeof__ methods need to be updated for numpy.array AND for scipy.sparse.

I would be happy to do this -- but before I do, I wanted to sanity-check with the list.
Thanks.

Gabe
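A sketch illustrating the distinction drawn above between the array object and the buffer it points at (byte counts vary with platform and NumPy version; in the NumPy of this era the data buffer is never counted by __sizeof__):

import sys
import numpy as np

a = np.arange(200)
print a.nbytes           # 1600: the data buffer (200 * 8 bytes)
print sys.getsizeof(a)   # small: the ndarray "header" object only

v = a[::2]               # a view shares a's buffer
print v.nbytes           # 800 reported, yet no new buffer was allocated
print v.base is a        # True: v is effectively a smart pointer into a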
From david.mcminn at iwcdownhole.com  Wed Sep  4 05:11:18 2013
From: david.mcminn at iwcdownhole.com (David McMinn)
Date: Wed, 4 Sep 2013 09:11:18 +0000
Subject: [SciPy-User] argrelextrema returns tuple of 2 empty arrays when no peaks found in 1D vector
In-Reply-To: <522617B2.1080104@gmx.at>
References: <463F3CF32EC88745AA8B81DA65C0564D015B55CE@Gia-Exch06.MessageExchange.com> <522617B2.1080104@gmx.at>
Message-ID: <463F3CF32EC88745AA8B81DA65C0564D016B7B94@Gia-Exch06.MessageExchange.com>

Hi Sebastian,

Thanks for replying.

> > Checking the source for scipy, it does return "(np.array([]),) * 2" when
> > there are no peaks, but that doesn't explain why it does that.
>
> It does explain it:
> (np.array([]),) * 2
> is the same as
> (np.array([]), np.array([]))

I should have been clearer: I know that those two statements are equivalent; my question was why it returns 2 empty arrays instead of 1, since I get 1 array when a peak is found (the input array is 1D).

> Maybe it would make more sense if in this case the same number of
> arrays as the input has dimensions were returned:
> (np.array([]),) * data.ndim

That was my thought too, but I wasn't sure whether there was a reason for returning "* 2" instead of "* data.ndim".

Cheers.

From danielcarlminer at gmail.com  Wed Sep  4 05:24:54 2013
From: danielcarlminer at gmail.com (Daniel Miner)
Date: Wed, 4 Sep 2013 11:24:54 +0200
Subject: [SciPy-User] Color Lists in Dendrograms / Hierarchical Clustering
Message-ID:

Hi everyone,

I'd written over a month ago but never got any replies, so I'm trying again. Any help would be sincerely appreciated.

I'm trying to use hierarchical clustering to tease out some structure in data that I already know exists, as a sort of test case to (hopefully) show that it can be reliably done for the type of data I'm concerned with. To this end, knowing that the full call for dendrogram generation is:

scipy.cluster.hierarchy.dendrogram(Z, p=30, truncate_mode=None,
    color_threshold=None, get_leaves=True, orientation='top', labels=None,
    count_sort=False, distance_sort=False, show_leaf_counts=True,
    no_plot=False, no_labels=False, color_list=None, leaf_font_size=None,
    leaf_rotation=None, leaf_label_func=None, no_leaves=False,
    show_contracted=False, link_color_func=None)

I use one of the linkage algorithms to generate the linkage, manually create a list of colors "c_list" with a color corresponding to each known category of original data for each data point - i.e. if I have data [1,2,3] and know that 1 and 2 come from the same category but 3 is different, I make a list ['r','r','g'] - and try to use it as follows:

import matplotlib.pyplot as pt
import scipy.cluster.hierarchy as sc
[import other stuff]
[load data DAT, generate color list]
lw = sc.ward(DAT)
dw = sc.dendrogram(lw, color_list='c_list')
pt.show()

However, the colors seem to do nothing. I've tried listing them both numerically (i.e. [1,2,3]) and as characters (i.e. ['r','g','b']), and have tried making the call with c_list in single quotes as shown and with no quotes at all. There is no documentation at http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html regarding an expected format for the color list, and without this color labeling I can't check to see if the clustering is doing what I hope it does, as there are many data points and it would be prohibitively difficult to read through each of the tiny index labels at the bottom of the default dendrogram plot.
I really have no idea how to proceed in order to make this work and am hoping that someone here can provide some advice. Thanks. Best regards, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Wed Sep 4 07:53:31 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 4 Sep 2013 07:53:31 -0400 Subject: [SciPy-User] argrelextrema returns tuple of 2 empty arrays when no peaks found in 1D vector In-Reply-To: <463F3CF32EC88745AA8B81DA65C0564D016B7B94@Gia-Exch06.MessageExchange.com> References: <463F3CF32EC88745AA8B81DA65C0564D015B55CE@Gia-Exch06.MessageExchange.com> <522617B2.1080104@gmx.at> <463F3CF32EC88745AA8B81DA65C0564D016B7B94@Gia-Exch06.MessageExchange.com> Message-ID: On 9/4/13, David McMinn wrote: > Hi Sebastian, > > Thanks for replying. > >> > Checking the source for scipy it does return "(np.array([]),) * 2" when >> > there are no peaks, but that doesn't explain why it does that. >> >> It does explain it? >> (np.array([]),) * 2 >> is the same as >> (np.array([]),np.array([])) > > I should have been clearer, I know that those two statements are equivalent, > my question was why does it return 2 empty arrays instead of 1, since I get > 1 array when a peak is found (the input array is 1D). > >> Maybe it would make more sense if in this case the same number of >> dimensions would be returned: >> (np.array([]),) * data.ndim > > That was my thoughts but wasn't sure if there was a reason for returning "* > 2" instead of "* data.ndim". > Thanks for reporting the problem. That looks like a bug, and using `ndim` looks like the correct fix. Could you create an issue for this? Issues are managed on github: https://github.com/scipy/scipy/issues Warren > Cheers. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From david.mcminn at iwcdownhole.com Wed Sep 4 09:55:28 2013 From: david.mcminn at iwcdownhole.com (David McMinn) Date: Wed, 4 Sep 2013 13:55:28 +0000 Subject: [SciPy-User] argrelextrema returns tuple of 2 empty arrays when no peaks found in 1D vector In-Reply-To: References: <463F3CF32EC88745AA8B81DA65C0564D015B55CE@Gia-Exch06.MessageExchange.com> <522617B2.1080104@gmx.at> <463F3CF32EC88745AA8B81DA65C0564D016B7B94@Gia-Exch06.MessageExchange.com> Message-ID: <463F3CF32EC88745AA8B81DA65C0564D016B8E5B@Gia-Exch06.MessageExchange.com> Hi Warren, > Thanks for reporting the problem. That looks like a bug, and using `ndim` > looks like the correct fix. > > Could you create an issue for this? Issues are managed on github: > https://github.com/scipy/scipy/issues OK. I've created an issue for this on github. Cheers. From aronne.merrelli at gmail.com Wed Sep 4 10:23:13 2013 From: aronne.merrelli at gmail.com (Aronne Merrelli) Date: Wed, 4 Sep 2013 09:23:13 -0500 Subject: [SciPy-User] Color Lists in Dendrograms / Hierarchical Clustering In-Reply-To: References: Message-ID: On Wed, Sep 4, 2013 at 4:24 AM, Daniel Miner wrote: > Hi everyone, > > I'd written over a month ago but never got any replies, so I'm trying > again. Any help would be sincerely appreciated. > > I'm trying to use hierarchical clustering to tease out some structure in > data that I already know exists as a sort of test case to (hopefully) show > that it can be reliably done for the type of data I'm concerned with. 
To > this end, knowing that the full call for dendrogram generation is: > > scipy.cluster.hioerarchy.dendrogram(Z, p=30, truncate_mode=None, > color_threshold=None, get_leaves=True, orientation='top',labels=None, > count_sort=False, distance_sort=False, show_leaf_counts=True, > no_plot=False, no_labels=False, color_list=None,leaf_font_size=None, > leaf_rotation=None, leaf_label_func=None, no_leaves=False, > show_contracted=False,link_color_func=None) > > I don't know much about these algorithms, but looking at the code, the color_list keyword is indeed not used. It is initialized to an empty list and then used as an internal variable for some recursive algorithm to build the dendrogram. This is a sort of a documentation bug, I guess, since the color_list keyword argument should not be in the function definition (although the function's doc string is correct, in that it does not mention this keyword.) > I use one of the linkage algorithms to generate the linkage, manually > create a list of colors "c_list" as a list with a color corresponding to > each known category of original data for each data point - i.e. if I have > data [1,2,3] and know that 1 and 2 come from the same category but 3 is > different, I make a list ['r','r','g'] - and try to use it as follows: > > import matplotlib.pyplot as pt > import scipy.cluster.hierarchy as sc > [import other stuff] > [load data DAT, generate color list] > lw = sc.ward(DAT) > dw = sc.dendrogram(lw,color_list='c_list') > pt.show() > > However, the colors seem to do nothing. I've tried listing them both > numerically (i.e. [1,2,3]) and as characters (i.e. ['r','g','b']), and have > tried making the call with c_list in single quotes as shown and with no > quotes at all. There is no documentation at > http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html regarding > an expected format for the color list, and without this color labeling, I > can't check to see if the clustering is doing what I hope it does, as there > are many data points and it would be prohibitively difficult to read though > each of the tiny index labels at the bottom of the default dendrogram > plot. I really have no idea how to proceed in order to make this work and > am hoping that someone here can provide some advice. Thanks. > Again, I am not very familiar with this algorithm, but if I understand what you are trying to do, I think the "link_color_func" that is in the documentation does not do what you want either, since it is coloring by the node index. I don't think the node index maps in any obvious way back to the original data index (where the labels would be relevant). If you have only a couple group labels, then the labels might be usable through the "leaf_label_func". Here is an example: In [92]: import scipy.cluster.hierarchy as sc In [93]: import numpy as np In [94]: def foo(data, labels): ....: lw = sc.ward(data) ....: dw = sc.dendrogram(lw, leaf_label_func = lambda k: labels[k]) In [106]: labels = ['A',]*20 + ['BBB',]*20 In [107]: data1 = np.random.normal(0,1,(40,2)) In [108]: data2 = np.random.normal(0,1,(40,2)); data2[20:,:] += 5 # Data1 will not cluster according to labels In [110]: foo(data1, labels) # Data2 should cluster. 
In [113]: foo(data2, labels) Hope this helps, Aronne > Best regards, > Daniel > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Wed Sep 4 11:00:24 2013 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 4 Sep 2013 11:00:24 -0400 Subject: [SciPy-User] argrelextrema returns tuple of 2 empty arrays when no peaks found in 1D vector In-Reply-To: <463F3CF32EC88745AA8B81DA65C0564D016B8E5B@Gia-Exch06.MessageExchange.com> References: <463F3CF32EC88745AA8B81DA65C0564D015B55CE@Gia-Exch06.MessageExchange.com> <522617B2.1080104@gmx.at> <463F3CF32EC88745AA8B81DA65C0564D016B7B94@Gia-Exch06.MessageExchange.com> <463F3CF32EC88745AA8B81DA65C0564D016B8E5B@Gia-Exch06.MessageExchange.com> Message-ID: On 9/4/13, David McMinn wrote: > Hi Warren, > >> Thanks for reporting the problem. That looks like a bug, and using >> `ndim` >> looks like the correct fix. >> >> Could you create an issue for this? Issues are managed on github: >> https://github.com/scipy/scipy/issues > > OK. I've created an issue for this on github. > Thanks! Warren > Cheers. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From gabe at gabegaster.com Wed Sep 4 13:08:22 2013 From: gabe at gabegaster.com (Gabriel Gaster) Date: Wed, 4 Sep 2013 12:08:22 -0500 Subject: [SciPy-User] SciPy-User Digest, Vol 121, Issue 3 In-Reply-To: References: Message-ID: Thanks David, I hadn't thought about it that way, but that makes sense. Cheers, Gabe > Date: Tue, 3 Sep 2013 12:37:23 -0700 (PDT) > From: David Baddeley > Subject: Re: [SciPy-User] __sizeof__ seems wrong > To: SciPy Users List > Message-ID: <1378237043.576.YahooMailNeo at web163905.mail.gq1.yahoo.com (mailto:1378237043.576.YahooMailNeo at web163905.mail.gq1.yahoo.com)> > Content-Type: text/plain; charset="iso-8859-1" > > Hi Gabriel, > > I believe that sizeof returns the size of the 'ndarray' object (ie a pointer to the data, information about dimensions, strides, and data type etc ..), not the actual data. The full size in memory would be array.nbytes + array.__sizeof__(). The situation is, however, complicated in that the ndarray object doesn't necessarily own it's memory, and with the possibility of having multiple ndarrays pointing at the same data (ie after slicing), and it makes more sense to think of an ndarray object as being a kind of smart pointer rather than actually encompassing it's data, making the current implementation of __sizeof__ correct. One could potentially make sizeof return the full size if the array owned it's data, the 'pointer' size otherwise, but this is likely to be hard (if not impossible) to implement consistently over all possible use cases. > > cheers, > David > > > ________________________________ > From: Gabriel Gaster > To: scipy-user at scipy.org (mailto:scipy-user at scipy.org) > Sent: Tuesday, 3 September 2013 1:18 PM > Subject: [SciPy-User] __sizeof__ seems wrong > > > > This might be a bug/fix that is more appropriate for the numpy list -- but it also relates to scipy sparse... 
> > Run the following code: > > from scipy import sparse > x = sparse.rand(100,10,.2) > print x.__sizeof__() > ## 32 or 16, depending > print x.data.__sizeof__() > ## 80 or 40, depending > print x.data.nbytes > ## 1600 > > import numpy > print numpy.arange(200).__sizeof__() > ## 80 or 40, depending > print numpy.arange(200).nbytes > > ## 1600 > > 1. Why is nbytes different than __sizeof__ ? ?I believe nbytes and do not believe __sizeof__. __sizeof__ should just be rewritten to default to nbytes. > 2. It seems that the __sizeof__ methods need to be updated for numpy.array AND for scipy.sparse. > > I would be happy to do this -- but before I do, I wanted to sanity check with the list. > Thanks. > > Gabe > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org (mailto:SciPy-User at scipy.org) > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20130903/e352615a/attachment-0001.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From cweisiger at msg.ucsf.edu Wed Sep 4 16:35:42 2013 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Wed, 4 Sep 2013 13:35:42 -0700 Subject: [SciPy-User] Scaling up root-finding Message-ID: We have a somewhat elderly microscopy camera whose response curve is best-characterized as a degree-2 polynomial (starts out steep, then flattens out; we hit the camera's maximum bit depth before the function stops being monotonic though). That is: reported photons = a + b * (incident photons) + c * (incident photons)^2 Of course, each pixel is slightly different, giving 512*512 different polynomials. I want to linearize the camera's response in post-production, by inverting the (monotonic) function so that it maps reported photons to incident photons. The obvious method would be to use something from scipy.optimize's root-finding functions. I'm not remotely a numerical analysis expert, so I have no feel for which function would be best for this problem. Performance-wise, a quick test indicates that it would take about 3.5s on my laptop to process a single frame by manually iterating over every pixel in the frame and calling scipy.optimize.newton(that pixel's response function, that pixel's starting value). This isn't great, but it's well within the realm of feasibility. Is there some more efficient method to do this? In the ideal world we'd be able to handle arbitrary-degree polynomials, but I'd be willing to accept being restricted to degree-2. Of course, degree-1 is trivial to implement. Thanks for any advice you care to share. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Sep 4 16:45:04 2013 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 4 Sep 2013 21:45:04 +0100 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: If you know the polynomial coefficients a, b, c and the reported photons, then why not just use the textbook quadratic formula to get a (vectorised!) closed form for the incident photons? This only works for a few polynomial degrees of course, and there's a small subtlety in selecting the correct root, but it should be more efficient than your current approach... 
-n On 4 Sep 2013 21:39, "Chris Weisiger" wrote: > We have a somewhat elderly microscopy camera whose response curve is > best-characterized as a degree-2 polynomial (starts out steep, then > flattens out; we hit the camera's maximum bit depth before the function > stops being monotonic though). That is: > > reported photons = a + b * (incident photons) + c * (incident photons)^2 > > Of course, each pixel is slightly different, giving 512*512 different > polynomials. I want to linearize the camera's response in post-production, > by inverting the (monotonic) function so that it maps reported photons to > incident photons. > > The obvious method would be to use something from scipy.optimize's > root-finding functions. I'm not remotely a numerical analysis expert, so I > have no feel for which function would be best for this problem. > Performance-wise, a quick test indicates that it would take about 3.5s on > my laptop to process a single frame by manually iterating over every pixel > in the frame and calling scipy.optimize.newton(that pixel's response > function, that pixel's starting value). This isn't great, but it's well > within the realm of feasibility. > > Is there some more efficient method to do this? In the ideal world we'd be > able to handle arbitrary-degree polynomials, but I'd be willing to accept > being restricted to degree-2. Of course, degree-1 is trivial to implement. > > Thanks for any advice you care to share. > > -Chris > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cweisiger at msg.ucsf.edu Wed Sep 4 16:51:04 2013 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Wed, 4 Sep 2013 13:51:04 -0700 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: ...yes, sorry, I should have thought that through. For some reason I thought the quadratic formula wasn't amenable to vectorization, but on reflection that's not the case. The +- portion is a bit tricky, but given that we know the resulting value is positive and one of its roots is where x < 0, I think it's safe to say that we should always use +, not -. Thanks for the gut-check. Sometimes you just need someone to point out the obvious! I'm still interested in knowing if there's some efficient way to handle arbitrary polynomials, though. -Chris On Wed, Sep 4, 2013 at 1:45 PM, Nathaniel Smith wrote: > If you know the polynomial coefficients a, b, c and the reported photons, > then why not just use the textbook quadratic formula to get a (vectorised!) > closed form for the incident photons? > > This only works for a few polynomial degrees of course, and there's a > small subtlety in selecting the correct root, but it should be more > efficient than your current approach... > > -n > On 4 Sep 2013 21:39, "Chris Weisiger" wrote: > >> We have a somewhat elderly microscopy camera whose response curve is >> best-characterized as a degree-2 polynomial (starts out steep, then >> flattens out; we hit the camera's maximum bit depth before the function >> stops being monotonic though). That is: >> >> reported photons = a + b * (incident photons) + c * (incident photons)^2 >> >> Of course, each pixel is slightly different, giving 512*512 different >> polynomials. 
I want to linearize the camera's response in post-production, >> by inverting the (monotonic) function so that it maps reported photons to >> incident photons. >> >> The obvious method would be to use something from scipy.optimize's >> root-finding functions. I'm not remotely a numerical analysis expert, so I >> have no feel for which function would be best for this problem. >> Performance-wise, a quick test indicates that it would take about 3.5s on >> my laptop to process a single frame by manually iterating over every pixel >> in the frame and calling scipy.optimize.newton(that pixel's response >> function, that pixel's starting value). This isn't great, but it's well >> within the realm of feasibility. >> >> Is there some more efficient method to do this? In the ideal world we'd >> be able to handle arbitrary-degree polynomials, but I'd be willing to >> accept being restricted to degree-2. Of course, degree-1 is trivial to >> implement. >> >> Thanks for any advice you care to share. >> >> -Chris >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Sep 4 16:56:21 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 4 Sep 2013 14:56:21 -0600 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: On Wed, Sep 4, 2013 at 2:35 PM, Chris Weisiger wrote: > We have a somewhat elderly microscopy camera whose response curve is > best-characterized as a degree-2 polynomial (starts out steep, then > flattens out; we hit the camera's maximum bit depth before the function > stops being monotonic though). That is: > > reported photons = a + b * (incident photons) + c * (incident photons)^2 > > Of course, each pixel is slightly different, giving 512*512 different > polynomials. I want to linearize the camera's response in post-production, > by inverting the (monotonic) function so that it maps reported photons to > incident photons. > > The obvious method would be to use something from scipy.optimize's > root-finding functions. I'm not remotely a numerical analysis expert, so I > have no feel for which function would be best for this problem. > Performance-wise, a quick test indicates that it would take about 3.5s on > my laptop to process a single frame by manually iterating over every pixel > in the frame and calling scipy.optimize.newton(that pixel's response > function, that pixel's starting value). This isn't great, but it's well > within the realm of feasibility. > > Is there some more efficient method to do this? In the ideal world we'd be > able to handle arbitrary-degree polynomials, but I'd be willing to accept > being restricted to degree-2. Of course, degree-1 is trivial to implement. > > Thanks for any advice you care to share. > > Depends on the data you use to fit the curve. I always fit the inverse response rather than the response. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
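A vectorised sketch of the closed-form inversion suggested above, for per-pixel coefficient maps a, b, c and a measured frame (names are illustrative, and c is assumed nonzero everywhere). The '+' root is the one that maps reported = a back to zero incident photons, whatever the sign of c:

import numpy as np

def invert_quadratic(reported, a, b, c):
    # solve c*x**2 + b*x + (a - reported) = 0 for x, elementwise
    disc = b**2 - 4.0 * c * (a - reported)
    return (-b + np.sqrt(disc)) / (2.0 * c)

# e.g. with (512, 512) coefficient maps and a (512, 512) frame:
# incident = invert_quadratic(frame, a, b, c)

This is a handful of elementwise operations per frame, so it should be far faster than looping a root-finder over every pixel.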
URL: From cweisiger at msg.ucsf.edu Wed Sep 4 17:27:27 2013 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Wed, 4 Sep 2013 14:27:27 -0700 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: That...makes a lot of sense, actually. Why bother manually inverting the function when I can just switch the order of parameters I pass to numpy.polyfit? And then (when correcting actual data) just plug my measured values directly into the resulting polynomials? Intuitively this sounds great. In practice my brain's a bit fried right now so I'm going to put off really working on it until tomorrow. We'll see how it goes. Thanks for the advice! -Chris On Wed, Sep 4, 2013 at 1:56 PM, Charles R Harris wrote: > > > > On Wed, Sep 4, 2013 at 2:35 PM, Chris Weisiger wrote: > >> We have a somewhat elderly microscopy camera whose response curve is >> best-characterized as a degree-2 polynomial (starts out steep, then >> flattens out; we hit the camera's maximum bit depth before the function >> stops being monotonic though). That is: >> >> reported photons = a + b * (incident photons) + c * (incident photons)^2 >> >> Of course, each pixel is slightly different, giving 512*512 different >> polynomials. I want to linearize the camera's response in post-production, >> by inverting the (monotonic) function so that it maps reported photons to >> incident photons. >> >> The obvious method would be to use something from scipy.optimize's >> root-finding functions. I'm not remotely a numerical analysis expert, so I >> have no feel for which function would be best for this problem. >> Performance-wise, a quick test indicates that it would take about 3.5s on >> my laptop to process a single frame by manually iterating over every pixel >> in the frame and calling scipy.optimize.newton(that pixel's response >> function, that pixel's starting value). This isn't great, but it's well >> within the realm of feasibility. >> >> Is there some more efficient method to do this? In the ideal world we'd >> be able to handle arbitrary-degree polynomials, but I'd be willing to >> accept being restricted to degree-2. Of course, degree-1 is trivial to >> implement. >> >> Thanks for any advice you care to share. >> >> > Depends on the data you use to fit the curve. I always fit the inverse > response rather than the response. > > Chuck > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkhilmer at chemistry.montana.edu Thu Sep 5 00:01:29 2013 From: jkhilmer at chemistry.montana.edu (jkhilmer at chemistry.montana.edu) Date: Wed, 4 Sep 2013 22:01:29 -0600 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: Chris, Just as an academic exercise, don't forget: 1. This is a highly parallel problem. 2. You need to dramatically modify the parameters of the root-finding function: by default they will iterate far too many times. You probably only have ~10 bits of precision, correct? 3. Don't store 512x512 polynomials: break it down to maybe 10-20. Remember that excessive precision isn't necessary. I'd be willing to bet that #2 above makes a huge improvement. Jonathan On Wed, Sep 4, 2013 at 3:27 PM, Chris Weisiger wrote: > That...makes a lot of sense, actually. Why bother manually inverting the > function when I can just switch the order of parameters I pass to > numpy.polyfit? 
> And then (when correcting actual data) just plug my measured
> values directly into the resulting polynomials?
>
> Intuitively this sounds great. In practice my brain's a bit fried right
> now so I'm going to put off really working on it until tomorrow. We'll see
> how it goes.
>
> Thanks for the advice!
>
> -Chris
>
> On Wed, Sep 4, 2013 at 1:56 PM, Charles R Harris <charlesr.harris at gmail.com> wrote:
>
>> On Wed, Sep 4, 2013 at 2:35 PM, Chris Weisiger wrote:
>>
>>> We have a somewhat elderly microscopy camera whose response curve is
>>> best-characterized as a degree-2 polynomial (starts out steep, then
>>> flattens out; we hit the camera's maximum bit depth before the function
>>> stops being monotonic though). That is:
>>>
>>> reported photons = a + b * (incident photons) + c * (incident photons)^2
>>>
>>> Of course, each pixel is slightly different, giving 512*512 different
>>> polynomials. I want to linearize the camera's response in post-production,
>>> by inverting the (monotonic) function so that it maps reported photons to
>>> incident photons.
>>>
>>> The obvious method would be to use something from scipy.optimize's
>>> root-finding functions. I'm not remotely a numerical analysis expert, so I
>>> have no feel for which function would be best for this problem.
>>> Performance-wise, a quick test indicates that it would take about 3.5s on
>>> my laptop to process a single frame by manually iterating over every pixel
>>> in the frame and calling scipy.optimize.newton(that pixel's response
>>> function, that pixel's starting value). This isn't great, but it's well
>>> within the realm of feasibility.
>>>
>>> Is there some more efficient method to do this? In the ideal world we'd
>>> be able to handle arbitrary-degree polynomials, but I'd be willing to
>>> accept being restricted to degree-2. Of course, degree-1 is trivial to
>>> implement.
>>>
>>> Thanks for any advice you care to share.
>>
>> Depends on the data you use to fit the curve. I always fit the inverse
>> response rather than the response.
>>
>> Chuck

From Sebastian.Wagner.fl at ait.ac.at  Thu Sep  5 04:45:06 2013
From: Sebastian.Wagner.fl at ait.ac.at (Wagner Sebastian)
Date: Thu, 5 Sep 2013 08:45:06 +0000
Subject: [SciPy-User] eig_banded usage
Message-ID:

Hello,

After I got solve_banded running (I have in mind to do some improvements on the doc/code), I have the next problem with scipy.linalg.eig_banded, which does not work like scipy.linalg.solve_banded, as one might expect. The doc says the parameters are (among others): a_band and lower.

From the descriptions of these two I guess a_band is an array of the diagonals. Unlike with solve_banded, it is not necessary to give the lower and upper bounds, and the second argument, lower, lets me assume it is only possible to have either only lower diagonals or only upper ones. Is this correct? If it is, how can I compute the eigenvalues with data containing both lower and upper diagonals?
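(A sketch of the answer implicit in the signature: eig_banded handles real symmetric or complex Hermitian band matrices only, so storing just the lower or just the upper diagonals is enough; the other triangle follows by symmetry. Example values below are illustrative.)

import numpy as np
import scipy.linalg

# 4x4 symmetric tridiagonal matrix, upper form (lower=False):
# row 0 holds the superdiagonal padded at the front, row 1 the diagonal
d = np.array([2.0, 2.0, 2.0, 2.0])   # main diagonal
e = np.array([1.0, 1.0, 1.0])        # first superdiagonal
a_band = np.vstack([np.r_[0.0, e], d])
w, v = scipy.linalg.eig_banded(a_band, lower=False)

# a general non-symmetric banded matrix has no banded eigensolver in
# scipy.linalg; one fallback is dense scipy.linalg.eig(m.toarray())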
Regards, Sebastian From charlesr.harris at gmail.com Thu Sep 5 09:37:02 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 5 Sep 2013 07:37:02 -0600 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: On Wed, Sep 4, 2013 at 3:27 PM, Chris Weisiger wrote: > That...makes a lot of sense, actually. Why bother manually inverting the > function when I can just switch the order of parameters I pass to > numpy.polyfit? And then (when correcting actual data) just plug my measured > values directly into the resulting polynomials? > > Intuitively this sounds great. In practice my brain's a bit fried right > now so I'm going to put off really working on it until tomorrow. We'll see > how it goes. > > Thanks for the advice! > > Might want to use polyval from numpy.polynomial.polynomial to evaluate the result as it works with arrays and arrays of coefficiients. For a 2x2 fpa and x^2 for the polynomial correction at each pixel In [1]: from numpy.polynomial.polynomial import polyval In [2]: c = zeros((3,2,2)) In [3]: c[2] = 1 In [4]: x = arange(4).reshape(2,2) In [5]: polyval(x, c, tensor=False) Out[5]: array([[ 0., 1.], [ 4., 9.]]) Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From cweisiger at msg.ucsf.edu Thu Sep 5 16:49:22 2013 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Thu, 5 Sep 2013 13:49:22 -0700 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: On Wed, Sep 4, 2013 at 9:01 PM, jkhilmer at chemistry.montana.edu < jkhilmer at chemistry.montana.edu> wrote: > Chris, > > Just as an academic exercise, don't forget: > > 1. This is a highly parallel problem. > This is true, though on the flipside if I go multithreaded on my laptop then I can't do anything else while running processing. :) The final version probably ought to spawn a new thread for each image it processes. > 2. You need to dramatically modify the parameters of the root-finding > function: by default they will iterate far too many times. You probably > only have ~10 bits of precision, correct? > The cameras are typically providing 14 or 16 bits of precision. > 3. Don't store 512x512 polynomials: break it down to maybe 10-20. > Remember that excessive precision isn't necessary. > Each pixel has its own slightly-different response function that needs to be stored differently. We can do this readily by storing arrays of coefficients where each array is the same size as one camera frame. Precision in this sense (of having one polynomial per pixel) is absolutely required; if we tried to generalize polynomials to apply to blocks of pixels then we'd ruin our correction quality. Cameras can have "stuck pixels" (characteristically much higher/lower than their neighbors), 1-pixel-wide stripes of bad pixels, etc. > Jonathan > > -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From jkhilmer at chemistry.montana.edu Thu Sep 5 20:27:33 2013 From: jkhilmer at chemistry.montana.edu (jkhilmer at chemistry.montana.edu) Date: Thu, 5 Sep 2013 18:27:33 -0600 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: > > Each pixel has its own slightly-different response function that needs to > be stored differently. We can do this readily by storing arrays of > coefficients where each array is the same size as one camera frame. 
> Precision in this sense (of having one polynomial per pixel) is absolutely > required; if we tried to generalize polynomials to apply to blocks of > pixels then we'd ruin our correction quality. Cameras can have "stuck > pixels" (characteristically much higher/lower than their neighbors), > 1-pixel-wide stripes of bad pixels, etc. > Chris, Assuming that the last few bits of your camera are really usable (it's almost guaranteed they are not), then at best you need precision that is 1/100k to substantially exceed the camera capabilities. Looking at the scipy documentation, I see tolerances on the order of 1e-12, 1e-16: you need to be using 1e-5. As you said, your pixels are going to cluster into categories: there will be some continuum of behavior centering near "ideal", a few stuck pixels, some stripes (which are just arrays of stuck or semi-stuck), etc. Solve your 512x512 equations and cluster the 262k response vectors (ignore the parameters). Any response vectors which differ by less than 1e-5 are indistinguishable and redundant. This clustering is not specifically by xy of the pixel, although there will be trends. I may have underestimated a bit to say that you could condense the response functions to just 20, but I'm sure that you can condense it to something far closer to 2k than the full 262k. The target is 1e-5 precision, which is why you can tolerate a lot of approximation: it is NOT necessary to have a single polynomial per pixel, just a polynomial within 1e-5. Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Sep 5 22:10:44 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 5 Sep 2013 20:10:44 -0600 Subject: [SciPy-User] Scaling up root-finding In-Reply-To: References: Message-ID: On Thu, Sep 5, 2013 at 6:27 PM, jkhilmer at chemistry.montana.edu < jkhilmer at chemistry.montana.edu> wrote: > Each pixel has its own slightly-different response function that needs >> to be stored differently. We can do this readily by storing arrays of >> coefficients where each array is the same size as one camera frame. >> Precision in this sense (of having one polynomial per pixel) is absolutely >> required; if we tried to generalize polynomials to apply to blocks of >> pixels then we'd ruin our correction quality. Cameras can have "stuck >> pixels" (characteristically much higher/lower than their neighbors), >> 1-pixel-wide stripes of bad pixels, etc. >> > > > Chris, > > Assuming that the last few bits of your camera are really usable (it's > almost guaranteed they are not), then at best you need precision that is > 1/100k to substantially exceed the camera capabilities. > > Looking at the scipy documentation, I see tolerances on the order of > 1e-12, 1e-16: you need to be using 1e-5. > > As you said, your pixels are going to cluster into categories: there will > be some continuum of behavior centering near "ideal", a few stuck pixels, > some stripes (which are just arrays of stuck or semi-stuck), etc. Solve > your 512x512 equations and cluster the 262k response vectors (ignore the > parameters). Any response vectors which differ by less than 1e-5 are > indistinguishable and redundant. This clustering is not specifically by xy > of the pixel, although there will be trends. > > I may have underestimated a bit to say that you could condense the > response functions to just 20, but I'm sure that you can condense it to > something far closer to 2k than the full 262k. 
The target is 1e-5 > precision, which is why you can tolerate a lot of approximation: it is NOT > necessary to have a single polynomial per pixel, just a polynomial within > 1e-5. > It's a small array, the savings wouldn't be that much, especially if the inverse response is fitted, i.e, maybe three megs of correction data. There is also flatfielding needed due to the cos^4 law and vignetting. One thing that does help is that the nonlinearity tends to be a function of the readout amplifier and, at least on CCD arrays, there are a limited number of spigots, each of which should have the same nonlinearity. Hot/cold pixel problems are probably in the photodiodes, which can also be nonlinear if incorrectly biased. Much depends on the type of array, signal levels (bleeding) and other such factors. Some arrays do need pixelwise calibration, some not. Silicon CCD's should be fairly uniform, but there will still be some bad pixels. Chuck > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Sep 7 09:19:36 2013 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 7 Sep 2013 13:19:36 +0000 (UTC) Subject: [SciPy-User] Scaling up root-finding References: Message-ID: Chris Weisiger msg.ucsf.edu> writes: [clip] > I'm still interested in knowing if there's some efficient > way to handle arbitrary polynomials, though. Numpy 1.8 has vectorized eigenvalue solver, and the roots of a polynomial are the eigenvalues of the companion matrix. This then should give an easy way to solve a large number of polynomial equations. -- Pauli Virtanen From j.anderson at ambisonictoolkit.net Sat Sep 7 13:30:06 2013 From: j.anderson at ambisonictoolkit.net (Joseph Anderson) Date: Sat, 7 Sep 2013 18:30:06 +0100 Subject: [SciPy-User] =?windows-1252?q?test=85_please_ignore?= Message-ID: <564BA4F5-D714-4AF9-93FD-D5C5F4FEEDDC@ambisonictoolkit.net> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Joseph Anderson j.anderson at ambisonictoolkit.net http://www.ambisonictoolkit.net ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Sep 8 15:14:28 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 8 Sep 2013 13:14:28 -0600 Subject: [SciPy-User] ANN: 1.8.0b2 release. Message-ID: Hi all, I'm happy to announce the second beta release of Numpy 1.8.0. This release should solve the Windows problems encountered in the first beta. Many thanks to Christolph Gohlke and Julian Taylor for their hard work in getting those issues settled. It would be good if folks running OS X could try out this release and report any issues on the numpy-dev mailing list. Unfortunately the files still need to be installed from source as dmg files are not avalable at this time. Source tarballs and release notes can be found at https://sourceforge.net/projects/numpy/files/NumPy/1.8.0b2/. The Windows and OS X installers will follow when the infrastructure issues are dealt with. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesresearching at gmail.com Tue Sep 10 02:22:43 2013 From: jamesresearching at gmail.com (James) Date: Tue, 10 Sep 2013 15:22:43 +0900 Subject: [SciPy-User] Ipython parallel and PBS Message-ID: Dear all, I'm having a lot of trouble setting up IPython parallel on a PBS cluster, and I would really appreciate anybody helping. The architecture is a standard PBS cluster - a head node with slave nodes. 
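Back to the companion-matrix idea above, a sketch assuming NumPy >= 1.8, whose numpy.linalg.eigvals accepts stacked matrices (array names in the usage comment are illustrative; coefficients are low-order-first with a nonzero leading coefficient):

import numpy as np

def roots_many(c):
    # c: (m, d+1) polynomial coefficients, low order first, c[:, -1] != 0
    m, d = c.shape[0], c.shape[1] - 1
    comp = np.zeros((m, d, d))
    idx = np.arange(d - 1)
    comp[:, idx + 1, idx] = 1.0                 # ones on the subdiagonal
    comp[:, :, -1] = -c[:, :-1] / c[:, -1:]     # last column from coefficients
    return np.linalg.eigvals(comp)              # one eigvals call for all m

# e.g. stack per-pixel coefficients (low order first) into rows:
# roots_many(np.column_stack([c0.ravel(), c1.ravel(), c2.ravel()]))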
I connect to the head node from my laptop over ssh. The client (laptop) -> Head node connection seems simple enough. The problem is the engines. Ignoring the laptop for a moment, I'll just focus on running ipython on the head node, with the engines on a slave node. I assume this is a correct method of working? I did the following on the head node, following instructions at http://ipython.org/ipython-doc/stable/parallel/parallel_process.html#using-ipcluster-in-pbs-mode: $ ipython profile create --parallel --profile=pbs Files are as follows: $cat ipcluster_config.py c = get_config() c.IPClusterStart.controller_launcher_class = 'PBSControllerLauncher' c.IPClusterEngines.engine_launcher_class = 'PBSEngineSetLauncher' c.PBSLauncher.queue = 'long' c.IPClusterEngines.n = 2 # Run 2 cores on 1 node or 2 nodes with all cores? Not sure. $ cat ipengine_config.py c = get_config() Then execute on the head node: $ ipcluster start --profile=pbs -n 2 2013-09-10 15:02:46,771.771 [IPClusterStart] Using existing profile dir: u'/home/username/.ipython/profile_pbs' 2013-09-10 15:02:46.777 [IPClusterStart] Starting ipcluster with [daemon=False] 2013-09-10 15:02:46.778 [IPClusterStart] Creating pid file: /home/username/.ipython/profile_pbs/pid/ipcluster.pid 2013-09-10 15:02:46.778 [IPClusterStart] Starting Controller with PBSControllerLauncher 2013-09-10 15:02:46.792 [IPClusterStart] Job submitted with job id: '2830' 2013-09-10 15:02:47.793 [IPClusterStart] Starting 2 Engines with PBSEngineSetLauncher 2013-09-10 15:02:47.808 [IPClusterStart] Job submitted with job id: '2831' Then the queue shows $ qstat Job id Name User Time Use S Queue ------------------------- ---------------- --------------- -------- - ----- 2830[].master ipcontroller username 0 Q long 2831[].master ipengine username 0 Q long And they just hang there, queued forever. I assume the engines at least should be running? Full information through "qstat -f" doesn't give the reason for the queuing. Normally it would do. There are more than 4 nodes available. $qstat -f Job Id: 2831[].master.domain Job_Name = ipengine Job_Owner = username at master.domain job_state = Q queue = long server = [head node's domain address] Checkpoint = u ctime = Tue Sep 10 15:02:47 2013 Error_Path = master.domain:/home/username/ipengine.e2831 Hold_Types = n Join_Path = n Keep_Files = n Mail_Points = a mtime = Tue Sep 10 15:02:47 2013 Output_Path = master.domain:/home/username/ipengine.o2831 Priority = 0 qtime = Tue Sep 10 15:02:47 2013 Rerunable = True [...] etime = Tue Sep 10 15:02:47 2013 submit_args = ./pbs_engines job_array_request = 1-2 fault_tolerant = False submit_host = master.domain init_work_dir = /home/username It also seems strange that the ipcontroller is launched through PBS. I thought this should be on the head node, so I changed 'PBSControllerLauncher' to 'LocalControllerLauncher'. Then it doesn't queue, but I don't know if what I'm doing is correct. Any help would be really greatly appreciated. Thank you. James -------------- next part -------------- An HTML attachment was scrubbed... URL: From frantisek.jahoda at gmail.com Tue Sep 10 06:02:24 2013 From: frantisek.jahoda at gmail.com (fj) Date: Tue, 10 Sep 2013 03:02:24 -0700 (PDT) Subject: [SciPy-User] scipy.optimize.slsqp.fmin_slsqp - Inequality constraints incompatible Message-ID: <1378807344134-18637.post@n7.nabble.com> I am using SciPy 0.10.1 and the following code stops at Inequality constraints incompatible (Exit mode 4) error. Can I avoid it? 
I suspect that the error is connected with the bounds, because it works without them. Thank you.

import scipy.optimize

upper_bounds = [91680237, 87141351, 19183473, 84453111]
x0 = [1461516, 1230049, 0, 0]
ieqcons = [
    lambda x: 3139153 - (x[0] + x[1] + x[2] + x[3]),
    lambda x: 1461516 - (x[0] + x[2] + x[3]),
    lambda x: 1230049 - (x[1] + x[2]),
]
scipy.optimize.slsqp.fmin_slsqp(
    func=lambda x: sum((x[i] - upper_bounds[i])**2 for i in xrange(4)),
    fprime=lambda x: [2 * (x[i] - upper_bounds[i]) for i in xrange(4)],
    x0=x0,
    bounds=[(0, b) for b in upper_bounds],
    ieqcons=ieqcons,
    iprint=2,
)

--
View this message in context: http://scipy-user.10969.n7.nabble.com/scipy-optimize-slsqp-fmin-slsqp-Inequality-constraints-incompatible-tp18637.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From bfrenay at gmail.com Tue Sep 10 09:57:17 2013
From: bfrenay at gmail.com (bfrenay)
Date: Tue, 10 Sep 2013 06:57:17 -0700 (PDT)
Subject: [SciPy-User] Failed Scipy Install
Message-ID: <1378821437765-18638.post@n7.nabble.com>

Dear All,

I am trying to install Scipy on a CentOS machine. After installing Python 2.7.1 and Numpy 1.5.1, I tried to install Scipy 0.8.0 and got this message:

creating build/temp.linux-x86_64-2.7/scipy/sparse/sparsetools
compile options: '-I/linux/frenay/centos_lib/lib/numpy-1.5.1/lib/python2.7/site-packages/numpy/core/include -I/linux/frenay/centos_lib/lib/Python-2.7.1/include/python2.7 -c'
g++: scipy/sparse/sparsetools/csr_wrap.cxx
scipy/sparse/sparsetools/csr_wrap.cxx: In function 'int require_size(PyArrayObject*, npy_intp*, int)':
scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected ')' before 'PRIdPTR'
scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing '%' in format
scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format
scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected ')' before 'PRIdPTR'
scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing '%' in format
scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for format
scipy/sparse/sparsetools/csr_wrap.cxx: In function 'int require_size(PyArrayObject*, npy_intp*, int)':
scipy/sparse/sparsetools/csr_wrap.cxx:2910: error: expected ')' before 'PRIdPTR'
scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: spurious trailing '%' in format
scipy/sparse/sparsetools/csr_wrap.cxx:2910: warning: too many arguments for format
scipy/sparse/sparsetools/csr_wrap.cxx:2917: error: expected ')' before 'PRIdPTR'
scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: spurious trailing '%' in format
scipy/sparse/sparsetools/csr_wrap.cxx:2917: warning: too many arguments for format
error: Command "g++ -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/linux/frenay/centos_lib/lib/numpy-1.5.1/lib/python2.7/site-packages/numpy/core/include -I/linux/frenay/centos_lib/lib/Python-2.7.1/include/python2.7 -c scipy/sparse/sparsetools/csr_wrap.cxx -o build/temp.linux-x86_64-2.7/scipy/sparse/sparsetools/csr_wrap.o" failed with exit status 1

Any idea of the problem? Thank you for your help!

--
View this message in context: http://scipy-user.10969.n7.nabble.com/Failed-Scipy-Install-tp18638.html
Sent from the Scipy-User mailing list archive at Nabble.com.
From pav at iki.fi Tue Sep 10 13:18:10 2013 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 10 Sep 2013 20:18:10 +0300 Subject: [SciPy-User] Failed Scipy Install In-Reply-To: <1378821437765-18638.post@n7.nabble.com> References: <1378821437765-18638.post@n7.nabble.com> Message-ID: 10.09.2013 16:57, bfrenay kirjoitti: > I am trying to install Scipy on a CentOS machine. After installing Python > 2.7.1 and Numpy 1.5.1, I tried to install Scipy 0.8.0 and got this message You are trying to install an obsolete version of Scipy. Try a more recent release (latest is 0.12.0). If you really want to use the old version, apply changeset 1c3f75f8e1e0ed3900d0b5f72527172882c98b2e -- Pauli Virtanen From david_baddeley at yahoo.com.au Thu Sep 12 09:52:29 2013 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Thu, 12 Sep 2013 06:52:29 -0700 (PDT) Subject: [SciPy-User] Roll your own python distributions Message-ID: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> Hi all, I'm wondering if anyone knows of an easy (or relatively easy) way of putting together a scientific python distribution with a one-click installer. I've got a python package with _lots_ of dependencies and would like to give users (with relatively limited computer skills) a simple way of installing python, my package, and all the dependencies. I have previously told people to download EPD, upgrade wxpython, and install a couple of additional packages (which is already pushing it in terms of what the users are comfortable with). The switch to canopy (with the accompanying move to a package management system in which one has to manually select which packages to install) makes this infeasible. The alternative distributions (PythonXY, Anaconda etc ...) are all either 32 bit only, or lack many of the packages I need, meaning that I'd need to get users to download a much longer list of additional packages. I want a python distribution, rather than just a py2exed version as parts of my code don't work well with py2exe. Has anyone encountered this situation, and what did you do? many thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Thu Sep 12 10:01:56 2013 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 12 Sep 2013 15:01:56 +0100 Subject: [SciPy-User] Roll your own python distributions In-Reply-To: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> Message-ID: Hi David, The latest Canopy version comes with a one-click/offline installer. Cheers, Matthieu 2013/9/12 David Baddeley : > Hi all, > > I'm wondering if anyone knows of an easy (or relatively easy) way of putting > together a scientific python distribution with a one-click installer. I've > got a python package with _lots_ of dependencies and would like to give > users (with relatively limited computer skills) a simple way of installing > python, my package, and all the dependencies. I have previously told people > to download EPD, upgrade wxpython, and install a couple of additional > packages (which is already pushing it in terms of what the users are > comfortable with). The switch to canopy (with the accompanying move to a > package management system in which one has to manually select which packages > to install) makes this infeasible. The alternative distributions (PythonXY, > Anaconda etc ...) 
are all either 32 bit only, or lack many of the packages I > need, meaning that I'd need to get users to download a much longer list of > additional packages. I want a python distribution, rather than just a > py2exed version as parts of my code don't work well with py2exe. > > Has anyone encountered this situation, and what did you do? > > many thanks, > David > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From gyromagnetic at gmail.com Thu Sep 12 10:10:26 2013 From: gyromagnetic at gmail.com (Gyro Funch) Date: Thu, 12 Sep 2013 21:10:26 +0700 Subject: [SciPy-User] Roll your own python distributions In-Reply-To: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> Message-ID: On 9/12/2013 8:52 PM, David Baddeley wrote: > Hi all, > > I'm wondering if anyone knows of an easy (or relatively easy) way > of putting together a scientific python distribution with a > one-click installer. I've got a python package with _lots_ of > dependencies and would like to give users (with relatively > limited computer skills) a simple way of installing python, my > package, and all the dependencies. I have previously told people > to download EPD, upgrade wxpython, and install a couple of > additional packages (which is already pushing it in terms of what > the users are comfortable with). The switch to canopy (with the > accompanying move to a package management system in which one has > to manually select which packages to install) makes this > infeasible. The alternative distributions (PythonXY, Anaconda etc > ...) are all either 32 bit only, or lack many of the packages I > need, meaning that I'd need to get users to download a much > longer list of additional packages. I want a python distribution, > rather than just a py2exed version as parts of my code don't work > well with py2exe. > > Has anyone encountered this situation, and what did you do? > > many thanks, David > Hi David, I recently taught a class in which we used Python and a variety of scientific packages, and faced a similar problem. My approach was to get things downloaded and tested locally before providing a 'batteries and extras included' distribution to the students. I used Python(x,y), installed the necessary software locally into this distribution, did thorough testing of all of the packages to be used, zipped the whole enhanced distribution onto a few usb drives, and had these available for students to copy. Best regards, gyro From Jerome.Kieffer at esrf.fr Thu Sep 12 11:01:29 2013 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Thu, 12 Sep 2013 17:01:29 +0200 Subject: [SciPy-User] Roll your own python distributions In-Reply-To: References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> Message-ID: <20130912170129.8c451e07.Jerome.Kieffer@esrf.fr> On Thu, 12 Sep 2013 21:10:26 +0700 Gyro Funch wrote: > I used Python(x,y), installed the necessary software locally into > this distribution, did thorough testing of all of the packages to be > used, zipped the whole enhanced distribution onto a few usb drives, > and had these available for students to copy. WinPython is even better as it has 64-bits support. I add just a couple of .msi for the very specific stuff. 
Cheers,

--
Jérôme Kieffer
On-Line Data analysis / Software Group
ISDD / ESRF
tel +33 476 882 445

From jrocher at enthought.com Thu Sep 12 13:45:27 2013
From: jrocher at enthought.com (Jonathan Rocher)
Date: Thu, 12 Sep 2013 12:45:27 -0500
Subject: [SciPy-User] Roll your own python distributions
In-Reply-To: <20130912170129.8c451e07.Jerome.Kieffer@esrf.fr>
References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> <20130912170129.8c451e07.Jerome.Kieffer@esrf.fr>
Message-ID:

David, just to dwell on Matthieu's response, the move to Canopy hasn't removed the ability to use the enpkg command from a terminal/command prompt to install packages. You should think of Canopy as a GUI on top of EPD.

Hope this helps.
Jonathan

On Thu, Sep 12, 2013 at 10:01 AM, Jerome Kieffer wrote:
> On Thu, 12 Sep 2013 21:10:26 +0700
> Gyro Funch wrote:
>
> > I used Python(x,y), installed the necessary software locally into
> > this distribution, did thorough testing of all of the packages to be
> > used, zipped the whole enhanced distribution onto a few usb drives,
> > and had these available for students to copy.
>
> WinPython is even better as it has 64-bit support.
> I add just a couple of .msi for the very specific stuff.
>
> Cheers,
>
> --
> Jérôme Kieffer
> On-Line Data analysis / Software Group
> ISDD / ESRF
> tel +33 476 882 445
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Jonathan Rocher, PhD
Scientific software developer
Enthought, Inc.
jrocher at enthought.com
1-512-536-1057
http://www.enthought.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Thu Sep 12 13:52:02 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 12 Sep 2013 18:52:02 +0100
Subject: [SciPy-User] Roll your own python distributions
In-Reply-To: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com>
References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com>
Message-ID:

On Thu, Sep 12, 2013 at 2:52 PM, David Baddeley wrote:
> Hi all,
>
> I'm wondering if anyone knows of an easy (or relatively easy) way of putting
> together a scientific python distribution with a one-click installer.

IIRC Matthew Brett had something like this he put together, if he doesn't pop up in the thread you might want to poke him and see if it's available/usable.

-n

From takowl at gmail.com Thu Sep 12 16:51:18 2013
From: takowl at gmail.com (Thomas Kluyver)
Date: Thu, 12 Sep 2013 13:51:18 -0700
Subject: [SciPy-User] Roll your own python distributions
In-Reply-To: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com>
References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com>
Message-ID:

On 12 September 2013 06:52, David Baddeley wrote:
> I'm wondering if anyone knows of an easy (or relatively easy) way of
> putting together a scientific python distribution with a one-click
> installer. I've got a python package with _lots_ of dependencies and would
> like to give users (with relatively limited computer skills) a simple way
> of installing python, my package, and all the dependencies. I have
> previously told people to download EPD, upgrade wxpython, and install a
> couple of additional packages (which is already pushing it in terms of what
> the users are comfortable with). The switch to canopy (with the
> accompanying move to a package management system in which one has to
> manually select which packages to install) makes this infeasible.
The > alternative distributions (PythonXY, Anaconda etc ...) are all either 32 > bit only, or lack many of the packages I need, meaning that I'd need to get > users to download a much longer list of additional packages. I want a > python distribution, rather than just a py2exed version as parts of my code > don't work well with py2exe. I've recently put together a template for an NSIS installer for a Python application. Instead of freezing code with py2exe or similar tools, it installs a Python interpreter using an MSI from python.org, and then sets up the application to run with that. It would certainly be possible to extend this to install a set of packages. At the moment, it's quite rough, so I wouldn't call this an easy solution, but it might be "relatively easy" depending on what alternatives you're considering. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Thu Sep 12 18:56:30 2013 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Thu, 12 Sep 2013 15:56:30 -0700 (PDT) Subject: [SciPy-User] Roll your own python distributions In-Reply-To: References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> Message-ID: <1379026590.11529.YahooMailNeo@web163904.mail.gq1.yahoo.com> Many thanks for all the suggestions ... at this stage I think I might end up going with WinPython and just distributing a .zip of the folder that I've set everything up in. The NSIS template definitely looks like it might be a good option for take 2 ... cheers, David ________________________________ From: Thomas Kluyver To: David Baddeley ; SciPy Users List Sent: Thursday, 12 September 2013 4:51 PM Subject: Re: [SciPy-User] Roll your own python distributions On 12 September 2013 06:52, David Baddeley wrote: I'm wondering if anyone knows of an easy (or relatively easy) way of putting together a scientific python distribution with a one-click installer. I've got a python package with _lots_ of dependencies and would like to give users (with relatively limited computer skills) a simple way of installing python, my package, and all the dependencies. I have previously told people to download EPD, upgrade wxpython, and install a couple of additional packages (which is already pushing it in terms of what the users are comfortable with). The switch to canopy (with the accompanying move to a package management system in which one has to manually select which packages to install) makes this infeasible. The alternative distributions (PythonXY, Anaconda etc ...) are all either 32 bit only, or lack many of the packages I need, meaning that I'd need to get users to download a much longer list of additional packages. I want a python distribution, rather than just a py2exed version as parts of my code don't work well with py2exe. I've recently put together a template for an NSIS installer for a Python application. Instead of freezing code with py2exe or similar tools, it installs a Python interpreter using an MSI from python.org, and then sets up the application to run with that. It would certainly be possible to extend this to install a set of packages. At the moment, it's quite rough, so I wouldn't call this an easy solution, but it might be "relatively easy" depending on what alternatives you're considering. Thomas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gyromagnetic at gmail.com Thu Sep 12 19:54:53 2013 From: gyromagnetic at gmail.com (Gyro Funch) Date: Fri, 13 Sep 2013 06:54:53 +0700 Subject: [SciPy-User] Roll your own python distributions In-Reply-To: <20130912170129.8c451e07.Jerome.Kieffer@esrf.fr> References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> <20130912170129.8c451e07.Jerome.Kieffer@esrf.fr> Message-ID: On 9/12/2013 10:01 PM, Jerome Kieffer wrote: > On Thu, 12 Sep 2013 21:10:26 +0700 Gyro Funch > wrote: > >> I used Python(x,y), installed the necessary software locally >> into this distribution, did thorough testing of all of the >> packages to be used, zipped the whole enhanced distribution >> onto a few usb drives, and had these available for students to >> copy. > > WinPython is even better as it has 64-bits support. I add just a > couple of .msi for the very specific stuff. > > Cheers, > You are right. My 'extras-included' distribution had a generic name, but now that I check carefully, I ended up using WinPython as the base. Kind regards, gyro From matthew.brett at gmail.com Thu Sep 12 20:11:16 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 12 Sep 2013 19:11:16 -0500 Subject: [SciPy-User] Roll your own python distributions In-Reply-To: References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> Message-ID: Hi, On Thu, Sep 12, 2013 at 12:52 PM, Nathaniel Smith wrote: > On Thu, Sep 12, 2013 at 2:52 PM, David Baddeley > wrote: >> Hi all, >> >> I'm wondering if anyone knows of an easy (or relatively easy) way of putting >> together a scientific python distribution with a one-click installer. > > IIRC Matthew Brett had something like this he put together, if he > doesn't pop up in the thread you might want to poke him and see if > it's available/usable. For our course we recommended Python X Y for windows. As Nathaniel said, I built my own installer for OSX - I should automate it a bit - but it worked without hitch for I guess 20 participants with Macs: Installer: http://nipy.bic.berkeley.edu/practical_neuroimaging/reginald.dmg Notes on building the thing: https://github.com/matthew-brett/reginald Cheers, Matthew From travis at continuum.io Thu Sep 12 20:28:29 2013 From: travis at continuum.io (Travis Oliphant) Date: Thu, 12 Sep 2013 19:28:29 -0500 Subject: [SciPy-User] Roll your own python distributions In-Reply-To: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> References: <1378993949.38536.YahooMailNeo@web163905.mail.gq1.yahoo.com> Message-ID: Hi David, You might consider a different approach to this installation problem entirely. Rather than build your own installer, use conda and binstar to create your own "package collection". Conda is BSD and completely open and free. Binstar is a free service and you could replace it with your own conda repository if you wanted to (it's not difficult to make a conda repository --- just a directory of packages and an index file). The advantage is that users will then be able to easily manage the collection, create environments, and you will be able to more easily update their installation. This works right now. You could also do some work and make it even easier for your users. What you do: 1) Create conda packages for all libraries that are not already in repositories -- you could also copy the packages into your own repo. 2) Create a meta-package that is your "distribution" (that's all Anaconda is...) 
3) Create a binstar account and upload your conda packages and the meta-package to binstar (the binstar command-line client makes this easy). What users do: 1) Install miniconda: http://repo.continuum.io/miniconda/index.html (Just Python and conda the package manager) 2) conda config -f --add channels http://conda.binstar.org/ 3) conda create -n You could even wrap up the steps 1-3 in a simple NSIS installer (I'm sure Ilan could even give you the miniconda installer NSIS source so that you could just make your own installer that effectively does those things). You could also skip the environment creation, but creating the environment would let other meta-packages be installed and share resources without fighting over version numbers for competing packages. Ilan and I have been at this for a long while now, and we think we've found an approach that should scale with conda and binstar. The nice thing about using Miniconda is that your users now can install packages from many more people than just what you provide as well. If you have questions feel free to email and ask them to the anaconda at continuum.io mailing list or the conda mailing list. Best, -Travis On Thu, Sep 12, 2013 at 8:52 AM, David Baddeley wrote: > Hi all, > > I'm wondering if anyone knows of an easy (or relatively easy) way of > putting together a scientific python distribution with a one-click > installer. I've got a python package with _lots_ of dependencies and would > like to give users (with relatively limited computer skills) a simple way > of installing python, my package, and all the dependencies. I have > previously told people to download EPD, upgrade wxpython, and install a > couple of additional packages (which is already pushing it in terms of what > the users are comfortable with). The switch to canopy (with the > accompanying move to a package management system in which one has to > manually select which packages to install) makes this infeasible. The > alternative distributions (PythonXY, Anaconda etc ...) are all either 32 > bit only, or lack many of the packages I need, meaning that I'd need to get > users to download a much longer list of additional packages. I want a > python distribution, rather than just a py2exed version as parts of my code > don't work well with py2exe. > > Has anyone encountered this situation, and what did you do? > > many thanks, > David > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Travis Oliphant Continuum Analytics, Inc. http://www.continuum.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesresearching at gmail.com Fri Sep 13 03:50:37 2013 From: jamesresearching at gmail.com (James) Date: Fri, 13 Sep 2013 16:50:37 +0900 Subject: [SciPy-User] Ipython parallel and PBS In-Reply-To: References: Message-ID: Nobody knows? Alternatively: Can anyone suggest a better place to ask? Maybe the mailing-list activity here is a bit low. Thank you. James On 10 September 2013 15:22, James wrote: > Dear all, > > I'm having a lot of trouble setting up IPython parallel on a PBS cluster, > and I would really appreciate anybody helping. > > The architecture is a standard PBS cluster - a head node with slave nodes. > I connect to the head node from my laptop over ssh. > > The client (laptop) -> Head node connection seems simple enough. The > problem is the engines. 
> > Ignoring the laptop for a moment, I'll just focus on running ipython on > the head node, with the engines on a slave node. I assume this is a correct > method of working? > > I did the following on the head node, following instructions at > http://ipython.org/ipython-doc/stable/parallel/parallel_process.html#using-ipcluster-in-pbs-mode: > > $ ipython profile create --parallel --profile=pbs > > Files are as follows: > > $cat ipcluster_config.py > c = get_config() > c.IPClusterStart.controller_launcher_class = 'PBSControllerLauncher' > c.IPClusterEngines.engine_launcher_class = 'PBSEngineSetLauncher' > c.PBSLauncher.queue = 'long' > c.IPClusterEngines.n = 2 # Run 2 cores on 1 node or 2 nodes with all > cores? Not sure. > > $ cat ipengine_config.py > c = get_config() > > Then execute on the head node: > $ ipcluster start --profile=pbs -n 2 > 2013-09-10 15:02:46,771.771 [IPClusterStart] Using existing profile dir: > u'/home/username/.ipython/profile_pbs' > 2013-09-10 15:02:46.777 [IPClusterStart] Starting ipcluster with > [daemon=False] > 2013-09-10 15:02:46.778 [IPClusterStart] Creating pid file: > /home/username/.ipython/profile_pbs/pid/ipcluster.pid > 2013-09-10 15:02:46.778 [IPClusterStart] Starting Controller with > PBSControllerLauncher > 2013-09-10 15:02:46.792 [IPClusterStart] Job submitted with job id: '2830' > 2013-09-10 15:02:47.793 [IPClusterStart] Starting 2 Engines with > PBSEngineSetLauncher > 2013-09-10 15:02:47.808 [IPClusterStart] Job submitted with job id: '2831' > > Then the queue shows > $ qstat > Job id Name User Time Use S Queue > ------------------------- ---------------- --------------- -------- - ----- > 2830[].master ipcontroller username 0 Q > long > 2831[].master ipengine username 0 Q > long > > And they just hang there, queued forever. I assume the engines at least > should be running? Full information through "qstat -f" doesn't give the > reason for the queuing. Normally it would do. There are more than 4 nodes > available. > > $qstat -f > Job Id: 2831[].master.domain > Job_Name = ipengine > Job_Owner = username at master.domain > job_state = Q > queue = long > server = [head node's domain address] > Checkpoint = u > ctime = Tue Sep 10 15:02:47 2013 > Error_Path = master.domain:/home/username/ipengine.e2831 > Hold_Types = n > Join_Path = n > Keep_Files = n > Mail_Points = a > mtime = Tue Sep 10 15:02:47 2013 > Output_Path = master.domain:/home/username/ipengine.o2831 > Priority = 0 > qtime = Tue Sep 10 15:02:47 2013 > Rerunable = True > [...] > etime = Tue Sep 10 15:02:47 2013 > submit_args = ./pbs_engines > job_array_request = 1-2 > fault_tolerant = False > submit_host = master.domain > init_work_dir = /home/username > > It also seems strange that the ipcontroller is launched through PBS. I > thought this should be on the head node, so I changed > 'PBSControllerLauncher' to 'LocalControllerLauncher'. Then it doesn't > queue, but I don't know if what I'm doing is correct. > > Any help would be really greatly appreciated. > > Thank you. > > James > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Fri Sep 13 05:29:11 2013 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Sep 2013 10:29:11 +0100 Subject: [SciPy-User] Ipython parallel and PBS In-Reply-To: References: Message-ID: On Fri, Sep 13, 2013 at 8:50 AM, James wrote: > > Nobody knows? > > Alternatively: Can anyone suggest a better place to ask? Maybe the mailing-list activity here is a bit low. 
The best place for IPython questions would be the ipython-dev mailing list.

http://mail.scipy.org/mailman/listinfo/ipython-dev

IPython's website also has a link to a chat room where the devs hang out. They also pay attention to StackOverflow questions tagged "ipython".

http://ipython.org/

--
Robert Kern

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jamesresearching at gmail.com Fri Sep 13 06:39:25 2013
From: jamesresearching at gmail.com (James)
Date: Fri, 13 Sep 2013 19:39:25 +0900
Subject: [SciPy-User] Ipython parallel and PBS
In-Reply-To: References: Message-ID:

Thanks a lot for the pointer to the ipython-dev mailing list. I will try there.

Best regards,

James

On 13 September 2013 18:29, Robert Kern wrote:
> On Fri, Sep 13, 2013 at 8:50 AM, James wrote:
> >
> > Nobody knows?
> >
> > Alternatively: Can anyone suggest a better place to ask? Maybe the
> > mailing-list activity here is a bit low.
>
> The best place for IPython questions would be the ipython-dev mailing list.
>
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
> IPython's website also has a link to a chat room where the devs hang out.
> They also pay attention to StackOverflow questions tagged "ipython".
>
> http://ipython.org/
>
> --
> Robert Kern
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rjohns67 at gmail.com Fri Sep 13 16:48:59 2013
From: rjohns67 at gmail.com (Richard Johns)
Date: Fri, 13 Sep 2013 16:48:59 -0400
Subject: [SciPy-User] Z
Message-ID: <1CEE5CCC-A67D-4A90-9634-01A9174F0DBD@gmail.com>

Sent from my iPhone

From tmp50 at ukr.net Sun Sep 15 08:00:38 2013
From: tmp50 at ukr.net (Dmitrey)
Date: Sun, 15 Sep 2013 15:00:38 +0300
Subject: [SciPy-User] [ANN] OpenOpt suite v 0.51
Message-ID: <1379245965.95520674.hcvdp5xv@fmst-1.ukr.net>

Hi all,

new OpenOpt suite v 0.51 has been released:

Some improvements for FuncDesigner: automatic differentiation and QP
FuncDesigner now can model sparse (MI)(QC)QP
Octave QP solver has been connected
MATLAB solvers linprog (LP), quadprog (QP), lsqlin (LLSP), bintprog (MILP)
New NLP solver: knitro
Some elements of 2nd order interval analysis, mostly for interalg
Some interalg improvements
interalg can directly handle (MI)LP and (possibly nonconvex) (MI)(QC)QP
New classes: knapsack problem (KSP), bin packing problem (BPP), dominating set problem (DSP)
FuncDesigner can model SOCP
SpaceFuncs has been adjusted for recent versions of Python and NumPy

visit http://openopt.org for more details.

Regards, D.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From joseluismietta at yahoo.com.ar Sun Sep 1 18:50:19 2013
From: joseluismietta at yahoo.com.ar (=?iso-8859-1?Q?Jos=E8_Luis_Mietta?=)
Date: Sun, 01 Sep 2013 22:50:19 -0000
Subject: [SciPy-User] Stick intersection path algorithm
In-Reply-To: <1378076104.58147.YahooMailNeo@web142301.mail.bf1.yahoo.com>
References: <1378076104.58147.YahooMailNeo@web142301.mail.bf1.yahoo.com>
Message-ID: <1378076141.59494.YahooMailNeo@web142301.mail.bf1.yahoo.com>

Hi experts!

I want to study the intersections between line segments (sticks). I wrote an algorithm that generates a matrix, M, with N rows and N columns. The element Mij is 1 if stick number 'i' intersects stick number 'j' (obviously M is symmetric).

Given two arbitrary sticks, I need a simple and effective algorithm that determines whether those two sticks are connected by a path of intersecting sticks. Any idea for that?
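The brute-force approach I can think of is a breadth-first search over M, roughly like the sketch below (untested; `M` is the 0/1 intersection matrix from above), but I wonder whether scipy already provides something for this:

from collections import deque

def connected(M, i, j):
    # walk the stick-intersection graph outward from stick i
    n = len(M)
    seen = {i}
    queue = deque([i])
    while queue:
        k = queue.popleft()
        if k == j:
            return True
        for m in range(n):
            if M[k][m] == 1 and m not in seen:
                seen.add(m)
                queue.append(m)
    return False

If scipy is an option, I suppose scipy.sparse.csgraph.connected_components(M) would answer the same question in one call: two sticks are connected exactly when they get the same component label.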
Thanks a lot!

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Wed Sep 18 09:20:21 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 18 Sep 2013 14:20:21 +0100
Subject: [SciPy-User] RAM problem during code execution - Numpy arrays
In-Reply-To: <1377266329.36562.YahooMailNeo@web142306.mail.bf1.yahoo.com>
References: <1377266281.65599.YahooMailNeo@web142304.mail.bf1.yahoo.com> <1377266329.36562.YahooMailNeo@web142306.mail.bf1.yahoo.com>
Message-ID:

On Fri, Aug 23, 2013 at 2:58 PM, Josè Luis Mietta < joseluismietta at yahoo.com.ar> wrote:
>
> Hi experts. I need your help with a RAM problem during execution of my script.
> I wrote the next code. I use SAGE. In 1-2 hours of execution time the RAM
> of my laptop (8gb) is filled and the system crashes:

It could be memory fragmentation. np.append() creates a new array every time, and you are using it a lot. You should really never use np.append(). It is almost always the wrong thing to do. numpy arrays don't work like Python lists. One can append to lists efficiently because lists overallocate memory and thus do not need to reallocate on every append.

I won't review your code thoroughly, but try to determine if you can replace some of those arrays with lists or simply preallocate the arrays with the right size. With these modifications, your code will also probably run substantially faster.
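For example, a minimal sketch of the difference:

import numpy as np

# growing an array with np.append reallocates and copies everything
# on every iteration, so the loop as a whole is quadratic:
arr = np.array([])
for k in range(100000):
    arr = np.append(arr, k ** 2)

# accumulating in a list and converting once at the end is amortized
# linear, and usually much clearer:
values = []
for k in range(100000):
    values.append(k ** 2)
arr = np.asarray(values)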
--
Robert Kern

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Wed Sep 18 09:24:06 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 18 Sep 2013 14:24:06 +0100
Subject: [SciPy-User] Cannot subscribe to mailing lists
In-Reply-To: <1bf4e1ec-163c-4002-a857-5d1eb172e9c1@googlegroups.com>
References: <1bf4e1ec-163c-4002-a857-5d1eb172e9c1@googlegroups.com>
Message-ID:

On Thu, Aug 22, 2013 at 3:50 PM, Yuxiang Wang wrote:
>
> Hi all,
>
> I tried to subscribe to the mailing list on this page:
>
> http://www.scipy.org/scipylib/mailing-lists.html
>
> However, I think the links are broken. Would anyone have any idea about where to report it?

Exactly what did you try, and what results did you get? What is not working for you? All of the "Subscribe" links do go to the correct subscription pages for me.

--
Robert Kern

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vaggi.federico at gmail.com Wed Sep 18 10:17:37 2013
From: vaggi.federico at gmail.com (federico vaggi)
Date: Wed, 18 Sep 2013 16:17:37 +0200
Subject: [SciPy-User] solve_banded usage (once again) (Wagner Sebastian)
Message-ID:

Hey Wagner,

not sure how helpful this is, but if you dig into the SciPy source code, you'll see that solve_banded is a fairly thin wrapper around the gbsv function from LAPACK. If you go look at the documentation for that routine:

http://software.intel.com/sites/products/documentation/hpc/mkl/mklman/GUID-36E347A0-A1DF-48F0-A554-A98A3704A3B5.htm

you can get a better idea of the parameters there. Not sure if this is helpful - I've never really had to use that function myself, but at least it might give you some pointers.

Federico

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cimrman3 at ntc.zcu.cz Wed Sep 18 10:20:27 2013
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Wed, 18 Sep 2013 16:20:27 +0200
Subject: [SciPy-User] ANN: SfePy 2013.3
Message-ID: <5239B6AB.3050208@ntc.zcu.cz>

I am pleased to announce release 2013.3 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems of coupled partial differential equations by the finite element method. The code is based on the NumPy and SciPy packages. It is distributed under the new BSD license.

Home page: http://sfepy.org
Downloads, mailing list, wiki: http://code.google.com/p/sfepy/
Git (source) repository, issue tracker: http://github.com/sfepy

Highlights of this release
--------------------------

- implementation of Mesh topology data structures in C
- implementation of regions based on C Mesh (*)
- MultiProblem solver for conjugate solution of subproblems
- new advanced examples (vibro-acoustics, Stokes flow with slip conditions)

(*) Warning: region selection syntax has been changed in a principal way, see [1]. Besides the simple renaming, all regions meant for boundary conditions or boundary/surface integrals need to have their kind set explicitly to 'facet' (or 'edge' in 2D, 'face' in 3D).

[1] http://sfepy.org/doc-devel/users_guide.html#regions

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical).

Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order): Vladimír Lukeš

From Sebastian.Wagner.fl at ait.ac.at Wed Sep 18 10:35:00 2013
From: Sebastian.Wagner.fl at ait.ac.at (Wagner Sebastian)
Date: Wed, 18 Sep 2013 14:35:00 +0000
Subject: [SciPy-User] solve_banded usage (once again) (Wagner Sebastian)
In-Reply-To: References: Message-ID:

Hi Federico,

A few days after posting it at the beginning of September, I figured it out. Maybe I can improve the docs in October, or provide a conversion function for scipy.sparse.dia.
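Roughly along these lines (an untested sketch; it relies on solve_banded expecting ab[u + i - j, j] == a[i, j], so both formats index entries by column, and diagonals missing from the dia matrix simply stay zero):

import numpy as np

def dia_to_banded(A):
    # A: square scipy.sparse.dia_matrix
    # returns (l, u) and ab, ready for scipy.linalg.solve_banded
    offsets = np.asarray(A.offsets)
    u = max(0, int(offsets.max()))    # number of superdiagonals
    l = max(0, int(-offsets.min()))   # number of subdiagonals
    n = A.shape[1]
    ab = np.zeros((l + u + 1, n), dtype=A.dtype)
    for k, off in enumerate(offsets):
        # dia stores data[k, j] = a[j - off, j]; solve_banded wants
        # that same entry in row u - off, column j
        ab[u - off, :] = A.data[k, :n]
    return (l, u), ab

Used like:

(l, u), ab = dia_to_banded(dia)
x = scipy.linalg.solve_banded((l, u), ab, b)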
I was also digging in the Python code, but it only wraps the Fortran function, and the Fortran help itself is also very quiet.

Sebastian

From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On behalf of federico vaggi
Sent: Wednesday, 18 September 2013 16:18
To: scipy-user at scipy.org
Subject: Re: [SciPy-User] solve_banded usage (once again) (Wagner Sebastian)

Hey Wagner,

not sure how helpful this is, but if you dig into the SciPy source code, you'll see that solve_banded is a fairly thin wrapper around the gbsv function from LAPACK. If you go look at the documentation for that routine:

http://software.intel.com/sites/products/documentation/hpc/mkl/mklman/GUID-36E347A0-A1DF-48F0-A554-A98A3704A3B5.htm

you can get a better idea of the parameters there. Not sure if this is helpful - I've never really had to use that function myself, but at least it might give you some pointers.

Federico

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From szebi at gmx.at Wed Sep 18 14:12:46 2013
From: szebi at gmx.at (szebi)
Date: Wed, 18 Sep 2013 20:12:46 +0200
Subject: [SciPy-User] how to get the indexes of non zero rows of sparse matrix efficiently when using scipy?
In-Reply-To: References: Message-ID: <5239ED1E.3070005@gmx.at>

Hi,

That depends on which matrix you are using.

CSC (the row indices of the stored entries live in B.indices):

>>> A = np.arange(100).reshape(10,10)
>>> A[[3,7]] = 0
>>> B = scipy.sparse.csc_matrix(A)
>>> np.unique(B.indices)

CSR (B.indptr holds one slice per row, so empty rows have a zero-length slice):

>>> A = np.arange(100).reshape(10,10)
>>> A[[3,7]] = 0
>>> B = scipy.sparse.csr_matrix(A)
>>> np.diff(B.indptr).nonzero()[0]

Have Fun!

Sebastian

On 08/17/2013 05:26 AM, teccy wrote:
> for example, a 500x1000 sparse matrix with 300 non zero rows and 5000
> non zero items, what i want is the index for the 300 rows, not the
> index for 5000 items, how to do this efficiently, seems scipy haven't
> provicde such kind of API ...
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 555 bytes
Desc: OpenPGP digital signature
URL:

From ralf.gommers at gmail.com Thu Sep 19 15:48:59 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 19 Sep 2013 21:48:59 +0200
Subject: [SciPy-User] May I join this list?
In-Reply-To: References: Message-ID:

On Tue, Aug 20, 2013 at 8:02 AM, Yuxiang Wang wrote:
> Thank you so much!

You already did :) See http://scipy.org/scipylib/mailing-lists.html#before-you-post for the basics.

Cheers,
Ralf

> --
> Yuxiang "Shawn" Wang
> Gerling Research Lab
> University of Virginia
> yw5aj at virginia.edu
> +1 (434) 284-0836
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alejandro.weinstein at gmail.com Sun Sep 22 17:22:11 2013
From: alejandro.weinstein at gmail.com (Alejandro Weinstein)
Date: Sun, 22 Sep 2013 17:22:11 -0400
Subject: [SciPy-User] Small error in example of documentation of signal.freqz?
Message-ID:

Hi:

The documentation of signal.freqz [1] has an example that plots the frequency response of a filter. The y axis is plotted on a logarithmic scale and labeled as "decibels":

plt.semilogy(w, np.abs(h), 'b')
plt.ylabel('Amplitude (dB)', color='b')

I think that to be in dB, the plot should be on a linear scale and the magnitude of the response computed as 20*np.log10(np.abs(h)):

plt.plot(w, 20*np.log10(np.abs(h)), 'b')
plt.ylabel('Amplitude (dB)', color='b')

Any comments?

Alejandro

[1] http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.freqz.html

From warren.weckesser at gmail.com Sun Sep 22 17:44:32 2013
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Sun, 22 Sep 2013 17:44:32 -0400
Subject: [SciPy-User] Small error in example of documentation of signal.freqz?
In-Reply-To: References: Message-ID:

On 9/22/13, Alejandro Weinstein wrote:
> Hi:
>
> The documentation of signal.freqz [1] has an example that plots the
> frequency response of a filter. The y axis is plotted on a logarithmic
> scale and labeled as "decibels":
>
> plt.semilogy(w, np.abs(h), 'b')
> plt.ylabel('Amplitude (dB)', color='b')
>
> I think that to be in dB, the plot should be on a linear scale and the
> magnitude of the response computed as 20*np.log10(np.abs(h)):
>
> plt.plot(w, 20*np.log10(np.abs(h)), 'b')
> plt.ylabel('Amplitude (dB)', color='b')
>
> Any comments?

You are correct. The y axis has a log scale, but the labels have the form 10^{n}, which are not dB values. Your change is reasonable, or the label could be changed to just "Gain".

Warren

> Alejandro
>
> [1] http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.freqz.html
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

From josef.pktd at gmail.com Sun Sep 22 20:56:36 2013
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 22 Sep 2013 20:56:36 -0400
Subject: [SciPy-User] discrete integration code ?
Message-ID:

general question:

Is there something like integrate.quad for discrete integration, i.e. the sum of f(i) over the integers (with or without finite bounds)? The main point is to figure out some termination condition.

scipy.stats has a "home made" function, but we ran into a problem where the summation stops too soon. Are there any good rules for when to stop summing terms? Usually we just benefit from the `special` functions.

Our special case is f(i) = g(i) * p(i), where p(i) is the probability mass function, and g(i) could be any function.
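To be concrete, the kind of loop I mean is roughly this (just a sketch; the "several small terms in a row" guard is arbitrary and is exactly the sort of rule I am unsure about):

def expect_discrete(g, pmf, start=0, tol=1e-12, maxterms=10**6):
    # sum g(i) * pmf(i) upward from `start`; stopping on the first
    # small term is what bites us, since g(i)*pmf(i) can dip and
    # then grow again, so require several negligible terms in a row
    total = 0.0
    small = 0
    for i in range(start, start + maxterms):
        term = g(i) * pmf(i)
        total += term
        small = small + 1 if abs(term) < tol else 0
        if small > 10:
            break
    return total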
Josef

From msarahan at gmail.com Sun Sep 22 23:18:28 2013
From: msarahan at gmail.com (Michael Sarahan)
Date: Sun, 22 Sep 2013 20:18:28 -0700
Subject: [SciPy-User] Akima interpolation?
In-Reply-To: <521F6E90.1080406@uni-bremen.de>
References: <521F6E90.1080406@uni-bremen.de>
Message-ID:

Hello Andreas,

I can't say for certain whether it exists in SciPy as-is. It seems to me that these functions are often known by too many names. SciPy may perform a fitting that is similar in effect to an Akima spline, but I haven't tested it to see.

However, Christoph Gohlke has a nice Akima interpolation script, and an accompanying C module for speed: http://www.lfd.uci.edu/~gohlke/

I hope this helps.

Michael

On Thu, Aug 29, 2013 at 8:53 AM, Andreas Hilboll wrote:
> I'm looking for 1d Akima interpolation, and couldn't find it in Scipy.
>
> 1.) Did I miss something?
> 2.) Is there interest in having Akima interpolation in Scipy?
> 3.) Does anyone know of a good, fast interpolation I should investigate
> for inclusion in Scipy?
>
> Cheers, Andreas.
>
> --
> Andreas Hilboll
> PhD Student
>
> Institute of Environmental Physics
> University of Bremen
>
> U3145
> Otto-Hahn-Allee 1
> D-28359 Bremen
> Germany
>
> +49(0)421 218 62133 (phone)
> +49(0)421 218 98 62133 (fax)
> http://www.iup.uni-bremen.de/~hilboll
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From me at radimrehurek.com Tue Sep 24 07:21:53 2013
From: me at radimrehurek.com (=?UTF-8?Q?Radim_=C5=98eh=C5=AF=C5=99ek?=)
Date: Tue, 24 Sep 2013 04:21:53 -0700 (PDT)
Subject: [SciPy-User] [cython-users] scipy cblas return value
In-Reply-To: References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com>
Message-ID: <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>

It's actually more serious than I thought. This "double instead of float" apparently fails on some systems while manifesting itself as a weird slowdown: https://groups.google.com/d/msg/gensim/6B99kdtHOZo/FSss6GgzE5IJ

We've confirmed with the person who reported it that the problem is only in sdot. Changing the sdot return value to `float` fixes it.

SciPy version doesn't seem to affect this (tried both 0.12 and 0.13b on the failing machine).

CC to Scipy people: the code is using a protected pointer from scipy.linalg.blas.cblas, like so: https://gist.github.com/anonymous/6659007

Why is the required return value sometimes float and sometimes double? Is there a better/safer way to achieve calling numpy/scipy BLAS from cython?
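(The best I have come up with so far is a runtime sanity check at import time, roughly like this, so that a mismatched ABI fails loudly instead of silently returning garbage:

import numpy as np
import scipy.linalg.blas

x = np.array([10.0], dtype=np.float32)
y = np.array([0.01], dtype=np.float32)
# 10 * 0.01 == 0.1; if the wrapper and the underlying BLAS ABI
# disagree about the sdot return type, this comes out wrong
assert abs(scipy.linalg.blas.sdot(x, y) - 0.1) < 1e-6

but that feels like a workaround, not a fix.)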
Cheers,
Radim

On Tuesday, September 24, 2013 6:58:20 AM UTC+2, Robert Bradshaw wrote:
> _cpointer seems to be woefully undocumented, but it looks like it is a
> C function actually returning a double not a float.
>
> On Sun, Sep 22, 2013 at 4:54 AM, Radim Řehůřek wrote:
> > Hello all,
> >
> > I'm trying to call sdot from scipy.linalg.blas.cblas from Cython, but it
> > only works if I declare the return type as double (not float):
> >
> > https://gist.github.com/anonymous/6659007
> >
> > I don't know if this is a Cython issue or a SciPy issue or a BLAS (vecLib)
> > issue or a chair-keyboard issue. Could someone more knowledgeable elucidate
> > please?
> >
> > Cheers,
> > Radim, Cython 0.19.1, SciPy 0.12.0, Python 2.7.1, NumPy & SciPy BLAS from OS
> > X Accelerate
> >
> > --
> >
> > ---
> > You received this message because you are subscribed to the Google Groups
> > "cython-users" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to cython-users... at googlegroups.com.
> > For more options, visit https://groups.google.com/groups/opt_out.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no Tue Sep 24 12:24:02 2013
From: sturla at molden.no (Sturla Molden)
Date: Tue, 24 Sep 2013 18:24:02 +0200
Subject: [SciPy-User] [cython-users] scipy cblas return value
In-Reply-To: <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
Message-ID: <32E0551E-667B-4510-B7C6-2FBBCE69D62F@molden.no>

I believe the problem might be the cblas you are using. Apparently, cblas sdot returns float according to cblas.h. But the f2c reference version of cblas sdot returns double.

http://www.netlib.org/clapack/CLAPACK-3.1.1/BLAS/WRAP/cblas.h
http://www.netlib.org/clapack/cblas/sdot.c

Sturla

On Sep 24, 2013, at 1:21 PM, Radim Řehůřek wrote:

> It's actually more serious than I thought. This "double instead of float" apparently fails on some systems while manifesting itself as a weird slowdown: https://groups.google.com/d/msg/gensim/6B99kdtHOZo/FSss6GgzE5IJ
>
> We've confirmed with the person who reported it that the problem is only in sdot. Changing the sdot return value to `float` fixes it.
>
> SciPy version doesn't seem to affect this (tried both 0.12 and 0.13b on the failing machine).
>
> CC to Scipy people: the code is using a protected pointer from scipy.linalg.blas.cblas, like so: https://gist.github.com/anonymous/6659007
>
> Why is the required return value sometimes float and sometimes double? Is there a better/safer way to achieve calling numpy/scipy BLAS from cython?
>
> Cheers,
> Radim
>
> On Tuesday, September 24, 2013 6:58:20 AM UTC+2, Robert Bradshaw wrote:
> _cpointer seems to be woefully undocumented, but it looks like it is a
> C function actually returning a double not a float.
>
> On Sun, Sep 22, 2013 at 4:54 AM, Radim Řehůřek wrote:
> > Hello all,
> >
> > I'm trying to call sdot from scipy.linalg.blas.cblas from Cython, but it
> > only works if I declare the return type as double (not float):
> >
> > https://gist.github.com/anonymous/6659007
> >
> > I don't know if this is a Cython issue or a SciPy issue or a BLAS (vecLib)
> > issue or a chair-keyboard issue. Could someone more knowledgeable elucidate
> > please?
> >
> > Cheers,
> > Radim, Cython 0.19.1, SciPy 0.12.0, Python 2.7.1, NumPy & SciPy BLAS from OS
> > X Accelerate
> >
> > --
> >
> > ---
> > You received this message because you are subscribed to the Google Groups
> > "cython-users" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to cython-users+unsubscribe at googlegroups.com.
> > For more options, visit https://groups.google.com/groups/opt_out.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Tue Sep 24 13:45:29 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 24 Sep 2013 18:45:29 +0100
Subject: [SciPy-User] Baffling error: ndarray += csc_matrix -> "ValueError: setting an array element with a sequence"
Message-ID:

Hi all,

I'm getting a very strange error, and unfortunately I can't seem to reproduce it *except* when running on Travis-CI, so my debugging tools are really limited. I'm wondering if anyone else has seen anything like this?
The offending line of code is: File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pyrerp/incremental_ls.py", line 323, in append_bottom_half self.xtx += xtx ValueError: setting an array element with a sequence. And debug prints reveal that in the case that causes the error, 'self.xtx' is an ndarray that prints as: [[ 0. 0.] [ 0. 0.]] while 'xtx' is a scipy.sparse.csc.csc_matrix that prints as: (1, 0) 45.0 (0, 0) 10.0 (1, 1) 285.0 (0, 1) 45.0 On my laptop (Ubuntu 12.10, 64-bit, numpy 1.7.1, scipy 0.12.0), the same test passes fine. Travis is Ubuntu 12.04, 32-bit, numpy 1.7.1, scipy 0.12.0, so I guess in *principle* it could be a 32-bit/64-bit thing, but that's just a wild guess. Any ideas? -n P.S.: if anyone's curious, the full build log is here: https://s3.amazonaws.com/archive.travis-ci.org/jobs/11735709/log.txt and the code is here: https://github.com/njsmith/pyrerp/blob/master/pyrerp/incremental_ls.py#L323 From matthew.brett at gmail.com Tue Sep 24 13:51:43 2013 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 24 Sep 2013 13:51:43 -0400 Subject: [SciPy-User] Baffling error: ndarray += csc_matrix -> "ValueError: setting an array element with a sequence" In-Reply-To: References: Message-ID: Hi, On Tue, Sep 24, 2013 at 1:45 PM, Nathaniel Smith wrote: > Hi all, > > I'm getting a very strange error, and unfortunately I can't seem to > reproduce it *except* when running on Travis-CI, so my debugging tools > are really limited. I'm wondering if anyone else has seen anything > like this? > > The offending line of code is: > > File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pyrerp/incremental_ls.py", > line 323, in append_bottom_half > self.xtx += xtx > ValueError: setting an array element with a sequence. > > And debug prints reveal that in the case that causes the error, > 'self.xtx' is an ndarray that prints as: > > [[ 0. 0.] > [ 0. 0.]] > > while 'xtx' is a scipy.sparse.csc.csc_matrix that prints as: > > (1, 0) 45.0 > (0, 0) 10.0 > (1, 1) 285.0 > (0, 1) 45.0 > > On my laptop (Ubuntu 12.10, 64-bit, numpy 1.7.1, scipy 0.12.0), the > same test passes fine. Travis is Ubuntu 12.04, 32-bit, numpy 1.7.1, > scipy 0.12.0, so I guess in *principle* it could be a 32-bit/64-bit > thing, but that's just a wild guess. Any ideas? Did you try building on the Berkeley 12.04 laptop you have access to? I think it's the same (roughly) as the travis build system... Cheers, Matthew From pav at iki.fi Tue Sep 24 14:07:33 2013 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 Sep 2013 21:07:33 +0300 Subject: [SciPy-User] Baffling error: ndarray += csc_matrix -> "ValueError: setting an array element with a sequence" In-Reply-To: References: Message-ID: 24.09.2013 20:45, Nathaniel Smith kirjoitti: [clip] > On my laptop (Ubuntu 12.10, 64-bit, numpy 1.7.1, scipy 0.12.0), > the same test passes fine. Travis is Ubuntu 12.04, 32-bit, numpy > 1.7.1, scipy 0.12.0, so I guess in *principle* it could be a > 32-bit/64-bit thing, but that's just a wild guess. Any ideas? I'd try to set up a virtual machine (e.g. using Virtualbox) with a similar environment as in Travis CI, and try to reproduce the issue that way. No other ideas. The error message comes from Numpy, but it's not at all clear to me what goes wrong there... 
-- Pauli Virtanen From njs at pobox.com Tue Sep 24 14:19:20 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 24 Sep 2013 19:19:20 +0100 Subject: [SciPy-User] Baffling error: ndarray += csc_matrix -> "ValueError: setting an array element with a sequence" In-Reply-To: References: Message-ID: On Tue, Sep 24, 2013 at 7:07 PM, Pauli Virtanen wrote: > 24.09.2013 20:45, Nathaniel Smith kirjoitti: > [clip] >> On my laptop (Ubuntu 12.10, 64-bit, numpy 1.7.1, scipy 0.12.0), >> the same test passes fine. Travis is Ubuntu 12.04, 32-bit, numpy >> 1.7.1, scipy 0.12.0, so I guess in *principle* it could be a >> 32-bit/64-bit thing, but that's just a wild guess. Any ideas? > > I'd try to set up a virtual machine (e.g. using Virtualbox) with a > similar environment as in Travis CI, and try to reproduce the issue > that way. > > No other ideas. The error message comes from Numpy, but it's not at all > clear to me what goes wrong there... AFAICT the travis machine images aren't available to download, and I just tried reproducing on a 32-bit linode and still couldn't trigger the error myself :-/. Back to the print/frown cycle I guess... -n From pav at iki.fi Tue Sep 24 14:36:17 2013 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 24 Sep 2013 21:36:17 +0300 Subject: [SciPy-User] Baffling error: ndarray += csc_matrix -> "ValueError: setting an array element with a sequence" In-Reply-To: References: Message-ID: 24.09.2013 21:19, Nathaniel Smith kirjoitti: [clip] > AFAICT the travis machine images aren't available to download, and I > just tried reproducing on a 32-bit linode and still couldn't trigger > the error myself :-/. Back to the print/frown cycle I guess... Did you use the same Ubuntu version as they? http://about.travis-ci.org/docs/user/ci-environment/ They also say it's 64-bit, hmm... -- Pauli Virtanen From njs at pobox.com Tue Sep 24 14:40:39 2013 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 24 Sep 2013 19:40:39 +0100 Subject: [SciPy-User] Baffling error: ndarray += csc_matrix -> "ValueError: setting an array element with a sequence" In-Reply-To: References: Message-ID: On Tue, Sep 24, 2013 at 7:36 PM, Pauli Virtanen wrote: > 24.09.2013 21:19, Nathaniel Smith kirjoitti: > [clip] >> AFAICT the travis machine images aren't available to download, and I >> just tried reproducing on a 32-bit linode and still couldn't trigger >> the error myself :-/. Back to the print/frown cycle I guess... > > Did you use the same Ubuntu version as they? > > http://about.travis-ci.org/docs/user/ci-environment/ Yeah, my Linode happens to be running 12.04 LTS in any case. > They also say it's 64-bit, hmm... Oh, hah. The bottom of that same page says 32-bit -- I think they switched to 64-bit actually and messed up updating the page. But since I can't reproduce it either way, word size is probably a red herring... -n From me at radimrehurek.com Tue Sep 24 17:55:58 2013 From: me at radimrehurek.com (=?UTF-8?Q?Radim_=C5=98eh=C5=AF=C5=99ek?=) Date: Tue, 24 Sep 2013 14:55:58 -0700 (PDT) Subject: [SciPy-User] scipy cblas return value In-Reply-To: References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com> Message-ID: Thanks guys, these comments are very helpful. But I don't know what BLAS/compiler the users will be using. That was the whole point of trying to go through SciPy -- I thought there was a translation/interface that would allow me to call the standard routines in a stable way, no matter what idiosyncratic BLAS hid underneath... 
I figured SciPy itself must be doing that already. But `scipy.linalg.blas.sdot` gives me the wrong result in Python, too. The same as when called from C through _cpointer. So now I'm confused -- surely this issue has something to do with SciPy?

> Scipy itself uses a combination of options 1 and 2, but the wrapper
> functions are not exposed via _cpointer (they don't have the standard
> BLAS API), which points to the "raw" BLAS function.

Aha. Ok. I'm sifting through the SciPy sources now, to better understand what wraps what, where... it may take a while. In the meanwhile, does my original idea make any sense? To let SciPy do all the necessary wrapping/compiling/linking, but otherwise offer BLAS using the smallest call overhead possible, with a stable signature. I know the strides and all, I don't need any extra ifs or input checks or Python.

Cheers, Radim

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no  Wed Sep 25 11:22:23 2013
From: sturla at molden.no (Sturla Molden)
Date: Wed, 25 Sep 2013 17:22:23 +0200
Subject: [SciPy-User] [cython-users] Re: scipy cblas return value
In-Reply-To: References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
Message-ID:

On Sep 24, 2013, at 11:55 PM, Radim Řehůřek wrote:

> But I don't know what BLAS/compiler the users will be using.

Then you're screwed.

> That was the whole point of trying to go through SciPy -- I thought there was a translation/interface that would allow me to call the standard routines in a stable way, no matter what idiosyncratic BLAS hid underneath... I figured SciPy itself must be doing that already.

BLAS has a standard Fortran API. The exact ABI is compiler dependent. That is all you get to know. (Note the difference between API and ABI.) SciPy uses f2py to call Fortran. f2py knows the ABI of different compilers, and can thus generate wrappers for different Fortran compilers. It is the exposed Python interface that is "standard".

> Aha. Ok. I'm sifting through the SciPy sources now, to better understand what wraps what, where... it may take a while.

You don't see the C wrappers in the SciPy sources. They are generated at build-time by f2py. You only see the interface (.pyf) files for f2py. Or actually, you see the .src files used to generate the .pyf files for different dtypes.

> In the meanwhile, does my original idea make any sense? To let SciPy do all the necessary wrapping/compiling/linking, but otherwise offer BLAS using the smallest call overhead possible, with a stable signature. I know the strides and all, I don't need any extra ifs or input checks or Python.

It is a bad idea if you don't know which BLAS library will be used, due to the different ABIs. Use SciPy if the Python overhead is small compared to the computation, and then call through the Python interface. Otherwise I strongly recommend using a BLAS library you can control. The common ones:

- Intel MKL
- Apple Accelerate Framework
- OpenBLAS (or GotoBLAS2)
- AMD's ACML
- ATLAS

Sturla

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Wed Sep 25 11:34:56 2013 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Sep 2013 16:34:56 +0100 Subject: [SciPy-User] [cython-users] Re: scipy cblas return value In-Reply-To: References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com> Message-ID: On Wed, Sep 25, 2013 at 4:22 PM, Sturla Molden wrote: > > On Sep 24, 2013, at 11:55 PM, Radim ?eh??ek wrote: > >> But I don't know what BLAS/compiler the users will be using. > > Then you're screwed. > >> That was the whole point of trying to go through SciPy -- I thought there was a translation/interface that would allow me to call the standard routines in a stable way, no matter what idiosyncratic BLAS hid underneath... I figured SciPy itself must be doing that already. > > BLAS has a standard Fortran API. The exact ABI is compiler dependent. That is all you get to know. (Note the difference between API and ABI.) BLAS also has a standard C API implemented by most, if not all, reasonable implementations. No, I will go so far as to say that any modern implementation that excludes the CBLAS API is unreasonable and can be ignored. http://www.netlib.org/blas/blast-forum/cinterface.pdf For example, all of these implement the CBLAS interface: > Otherwise I strongly recommend to use a BLAS library you can control. The common ones: > > - Intel MKL > - Apple Accelerate Framework > - OpenBLAS (or GotoBLAS2) > - AMD's ACML > - ATLAS You don't have to pick just one. Just require the CBLAS interface, then you can let downstream users freely pick and choose among the many fine implementations of the CBLAS interface. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at radimrehurek.com Thu Sep 26 09:09:08 2013 From: me at radimrehurek.com (=?UTF-8?Q?Radim_=C5=98eh=C5=AF=C5=99ek?=) Date: Thu, 26 Sep 2013 06:09:08 -0700 (PDT) Subject: [SciPy-User] bug report + feature request (was: scipy cblas return value) In-Reply-To: References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com> Message-ID: <445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com> I guess I was too verbose, let me rephrase: 1. BUG REPORT `scipy.linalg.blas.sdot` gives wrong results on mac: >>> scipy.linalg.blas.sdot(np.array([ 10.], dtype=np.float32), np.array([ 0.01], dtype=np.float32)) -0.0 2. FEATURE REQUEST Expose the way scipy internally wraps whatever BLAS it uses in C, with a stable, python-less API. I already hacked around both points, because I had to. I don't know how much work #2 would be, whether f2py supports it or whatever magic, just that having it would be awesome indeed. -rr On Wednesday, September 25, 2013 5:22:23 PM UTC+2, Sturla Molden wrote: > > On Sep 24, 2013, at 11:55 PM, Radim ?eh??ek > > wrote: > > > But I don't know what BLAS/compiler the users will be using. > > > Then you're screwed. > > > That was the whole point of trying to go through SciPy -- I thought there > was a translation/interface that would allow me to call the standard > routines in a stable way, no matter what idiosyncratic BLAS hid > underneath... I figured SciPy itself must be doing that already. > > > > BLAS has a standard Fortran API. The exact ABI is compiler dependent. That > is all you get to know. (Note the difference between API and ABI.) > > SciPy uses f2py to call Fortran. 
f2py knows the ABI of different > compilers, and can thus generate wrappers for different Fortran compilers. > > It is the exposed Python interface that is "standard". > > > Aha. Ok. I'm sifting through the SciPy sources now, to better understand > what wraps what, where... it may take a while. > > > You don't see the C wrappers the the SciPy sources. They are generated at > build-time by f2py. > > You only see the interface (.pyf) files for f2py. Or actually, you see the > .src files used to generate the .pyf files for different dtypes. > > > > In the meanwhile, does my original idea make any sense? To let SciPy do > all the necessary wrapping/compiling/linking, but otherwise offer BLAS > using the smallest call overhead possible, with a stable signature. I know > the strides and all, I don't need any extra ifs or input checks or Python. > > > It is a bad idea if you don't know which BLAS library will be used due to > different ABIs. > > Use SciPy if the Python overhead is small compared to the computation, and > then call trough the Python interface. > > Otherwise I strongly recommend to use a BLAS library you can control. The > common ones: > > - Intel MKL > - Apple Accelerate Framework > - OpenBLAS (or GotoBLAS2) > - AMD's ACML > - ATLAS > > > Sturla > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Thu Sep 26 12:37:04 2013 From: sturla at molden.no (Sturla Molden) Date: Thu, 26 Sep 2013 18:37:04 +0200 Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value) In-Reply-To: <445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com> References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com> <445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com> Message-ID: <401ED18D-80E9-4304-9C21-411EEEA61096@molden.no> On Sep 26, 2013, at 3:09 PM, Radim ?eh??ek wrote: > I guess I was too verbose, let me rephrase: > > 1. BUG REPORT > `scipy.linalg.blas.sdot` gives wrong results on mac: > > >>> scipy.linalg.blas.sdot(np.array([ 10.], dtype=np.float32), np.array([ 0.01], dtype=np.float32)) > -0.0 > Let me submit in evidence to the contrary: In [20]: scipy.linalg.blas.sdot(np.array([10.],dtype=np.float32),np.array([0.01],dtype=np.float32)) Out[20]: 0.09999999403953552 In [21]: sys.platform Out[21]: 'darwin' It is your BLAS library that makes the error, not SciPy. Sturla From guziy.sasha at gmail.com Thu Sep 26 12:45:13 2013 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Thu, 26 Sep 2013 12:45:13 -0400 Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value) In-Reply-To: <401ED18D-80E9-4304-9C21-411EEEA61096@molden.no> References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com> <445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com> <401ED18D-80E9-4304-9C21-411EEEA61096@molden.no> Message-ID: 2013/9/26 Sturla Molden > > On Sep 26, 2013, at 3:09 PM, Radim ?eh??ek wrote: > > > I guess I was too verbose, let me rephrase: > > > > 1. 
BUG REPORT
> > `scipy.linalg.blas.sdot` gives wrong results on mac:
> >
> > >>> scipy.linalg.blas.sdot(np.array([ 10.], dtype=np.float32), np.array([ 0.01], dtype=np.float32))
> > -0.0
>
> Let me submit in evidence to the contrary:
>
> In [20]: scipy.linalg.blas.sdot(np.array([10.],dtype=np.float32),np.array([0.01],dtype=np.float32))
> Out[20]: 0.09999999403953552
>
> In [21]: sys.platform
> Out[21]: 'darwin'
>
> It is your BLAS library that makes the error, not SciPy.
>
> Sturla
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

Confirming Sturla's evidence:

In [9]: linalg.blas.sdot(np.array([10.],dtype=np.float32),np.array([0.01],dtype=np.float32))
Out[9]: 0.09999999403953552

In [10]: sys.platform
Out[10]: 'darwin'

In [11]: scipy.__version__
Out[11]: '0.12.0'

-- Sasha

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sturla at molden.no  Thu Sep 26 14:55:49 2013
From: sturla at molden.no (Sturla Molden)
Date: Thu, 26 Sep 2013 20:55:49 +0200
Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value)
In-Reply-To: References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com> <445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com> <401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
Message-ID:

On 26 Sep 2013, at 19:04, Arnaud Bergeron wrote:

> Or scipy is badly compiled. Since OS X does not ship with a Fortran compiler it is especially hard to compile and link it properly with a BLAS. If you use slightly incompatible compilers there is absolutely no compile error; you just get broken results out of the BLAS functions.

What OS ships with a Fortran compiler today? Windows? Surely not.

On a Mac, Apple GCC-LLVM is a PITA for scientific programming. It does not have gfortran, and trying to install it can produce conflicts. And as the GCC is stuck in version 4.2, we do not have C99 or C++11. There is no OpenMP due to LLVM, and Apple probably wants us to use the GCD instead of OpenMP.

Accelerate Framework is a bad choice of BLAS. It is very slow compared to Intel MKL -- often four to ten times slower. Accelerate Framework also uses the "Grand Central Dispatch". The GCD cannot be used on both sides of os.fork without an os.exec*. Thus, Python programs using multiprocessing (and most programs using posix fork) will randomly fail if they use the Accelerate Framework's BLAS functions. Basically, Apple is thumbing its nose at us.

The solution to most of these issues is to get the Intel compilers. Intel C++ Composer XE is binary compatible with Apple GCC-LLVM. It has C99, C++11, OpenMP, and also ships with the MKL. If we need Fortran it can be extended with Intel or Absoft Fortran compilers. The BLAS library in MKL uses Intel's OpenMP implementation, not the GCD, and does not randomly fail with multiprocessing or os.fork.

If we absolutely want to do scientific computing with free tools on a Mac, I'd recommend running Linux in a VirtualBox VM. Otherwise it might be painful.
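If you are unsure what your own NumPy build actually links against, and whether the fork problem bites you, a quick probe along these lines can help (just a sketch -- the config output format varies between builds, and on a broken setup the child call may hang or crash rather than raise):

import numpy as np
import multiprocessing as mp

np.__config__.show()  # prints the BLAS/LAPACK info recorded at build time

def child_dot(_):
    # a BLAS call in a forked child (no exec) is what trips up Accelerate
    a = np.ones((100, 100))
    return float(np.dot(a, a).sum())

if __name__ == '__main__':
    pool = mp.Pool(2)
    print(pool.map(child_dot, range(2)))
    pool.close()
    pool.join()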
Sturla

From Sebastian.Wagner.fl at ait.ac.at  Fri Sep 27 03:12:06 2013
From: Sebastian.Wagner.fl at ait.ac.at (Wagner Sebastian)
Date: Fri, 27 Sep 2013 07:12:06 +0000
Subject: [SciPy-User] Sparse dot differs from numpy dot by 10^5
Message-ID:

Dear Scipy-Users,

When changing some algorithms from dense matrices to sparse matrices I stumbled over a discrepancy between the sparse-version of dot and numpy's dot. All open issues mentioning 'dot' do not apply. The data-source is a square matrix with N=1470, which is originally created by the constructor of scipy.sparse.csc_matrix from a dense matrix (you can recreate it by converting the matrix first to dense and then back to csc). I compute the dot-product of the matrix with itself transposed, and some of the dot products (28 exactly) differ from the np.dot result by about 10^5. I saved the csc-matrix to a npz-file and attached it. A code sample to reproduce the effect is here:

sparsecsc = np.load('sparsecsc.npz')
K = sparsecsc['K'][()]
K[249].dot(K[:,251]).A   # gives -9.61216512e+08
np.dot(K[249].A, K[:,251].A)   # gives -9.61150976e+08

I located the diverging cells by using the equations behind np.allclose():

def close(a, b, rtol=1e-05, atol=1e-08):
    c = np.absolute(a-b) <= (atol + rtol * np.absolute(b))
    d = np.absolute(b-a) <= (atol + rtol * np.absolute(a))
    return c | d

c = close(K.dot(K.T).A, np.dot(K.A, K.T.A))
c = ~c
c.nonzero()

This gives pairs of 28 indices which diverge. Or to get all diverging cells:

K.dot(K.T).A[c]
np.dot(K.A, K.T.A)[c]
# or the derived differences:
K.dot(K.T).A[c] - np.dot(K.A, K.T.A)[c]
# average of them: 541257

Does anyone have an idea what's going on here?

Regards, Sebastian

-------------- next part --------------
A non-text attachment was scrubbed...
Name: sparsecsc.npz
Type: application/octet-stream
Size: 19540 bytes
Desc: sparsecsc.npz
URL:

From Jerome.Kieffer at esrf.fr  Fri Sep 27 03:31:39 2013
From: Jerome.Kieffer at esrf.fr (Jerome Kieffer)
Date: Fri, 27 Sep 2013 09:31:39 +0200
Subject: [SciPy-User] Sparse dot differs from numpy dot by 10^5
In-Reply-To: References:
Message-ID: <20130927093139.07611844.Jerome.Kieffer@esrf.fr>

On Fri, 27 Sep 2013 07:12:06 +0000 Wagner Sebastian wrote:

> When changing some algorithms from dense matrices to sparse matrices I stumbled over a discrepancy between the sparse-version of dot and numpy's dot. All open issues mentioning 'dot' do not apply.

Floating point addition is not associative: the order in which you sum is important (in silico), while it is not on paper.

-- Jérôme Kieffer
On-Line Data analysis / Software Group
ISDD / ESRF
tel +33 476 882 445

From emanuele at relativita.com  Fri Sep 27 03:36:27 2013
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Fri, 27 Sep 2013 09:36:27 +0200
Subject: [SciPy-User] Sparse dot differs from numpy dot by 10^5
In-Reply-To: <20130927093139.07611844.Jerome.Kieffer@esrf.fr>
References: <20130927093139.07611844.Jerome.Kieffer@esrf.fr>
Message-ID: <5245357B.7080402@relativita.com>

On 09/27/2013 09:31 AM, Jerome Kieffer wrote:
> On Fri, 27 Sep 2013 07:12:06 +0000
> Wagner Sebastian wrote:
>
>> When changing some algorithms from dense matrices to sparse matrices I stumbled over a discrepancy between the sparse-version of dot and numpy's dot. All open issues mentioning 'dot' do not apply.
> Floating point addition is not associative: the order in which you sum is important (in silico), while it is not on paper.
> If that is the problem, this is a detailed description: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html Best, Emanuele From Sebastian.Wagner.fl at ait.ac.at Fri Sep 27 04:21:44 2013 From: Sebastian.Wagner.fl at ait.ac.at (Wagner Sebastian) Date: Fri, 27 Sep 2013 08:21:44 +0000 Subject: [SciPy-User] Sparse dot differs from numpy dot by 10^5 Message-ID: So, I can't get rid of this issue, because this error does not lie within scipy or numpy but in the design of floating point arithmetic? At the end of the calculation (after 3 more operations) only two promille out of 1470 values do not differ, all others do differ from the third digit on (median of the difference). Is there any chance to know which one of the two results is "more correct"? Regards, Sebastian -----Urspr?ngliche Nachricht----- Von: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] Im Auftrag von Emanuele Olivetti Gesendet: Freitag, 27. September 2013 09:37 An: scipy-user at scipy.org Betreff: Re: [SciPy-User] Sparse dot differs from numpy dot by 10^5 On 09/27/2013 09:31 AM, Jerome Kieffer wrote: > On Fri, 27 Sep 2013 07:12:06 +0000 > Wagner Sebastian wrote: > >> When changing some algorithms from dense matrices to sparse matrices I stumbled over a discrepancy between the sparse-version of dot and numpy's dot. All open issues mentioning 'dot' do not apply. > Floating point addition is not commutable ... the order you are > summing is important (in silico) while it is not on the paper. > If that is the problem, this is a detailed description: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html Best, Emanuele _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From dimpase at gmail.com Fri Sep 27 04:59:37 2013 From: dimpase at gmail.com (Dima Pasechnik) Date: Fri, 27 Sep 2013 08:59:37 +0000 (UTC) Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value) References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com> <445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com> <401ED18D-80E9-4304-9C21-411EEEA61096@molden.no> Message-ID: On 2013-09-26, William Stein wrote: > On Thu, Sep 26, 2013 at 11:35 AM, Arnaud Bergeron wrote: >> 2013/9/26 William Stein >>> >>> On Thu, Sep 26, 2013 at 10:04 AM, Arnaud Bergeron >>> wrote: >>> > Or scipy is badly compiled. Since OS X does not ship with a fortran >>> > compiler it is especially hard to compile and link it properly with a >>> > blas. >>> > If you use slightly incompatible compilers there is absolutely no >>> > compile >>> > error you just get broken results out of the blas functions. >>> > >>> > On that note the only prebuilt scipy that is not broken is the one that >>> > comes with Enthough Canopy. >>> >>> Are you saying that the prebuilt scipy we ship with Sage is broken? >>> I'm curious, since I can file a bug report, and I thought we had >>> things sorted out by now, after putting a heck of a lot of work into >>> this platform over the last 6 years... Here's our Mac binaries: >>> >>> >>> http://boxen.math.washington.edu/home/sagemath/sage-mirror/osx/intel/index.html >> >> >> Sage isn't just a python distribution :) It's also not listed on the scipy >> install page. > > OK, good point :-) > >> >> But still, the scipy binaries that come with it are broken (at least for the >> 10.6 package I tried). 
If you try the command above it returns -0.0 rather >> than the correct 0.09999999403953552. > > Ugh. Many thanks for the bug report. could be due to using binary for wrong arch. With OSX 10.6 there were lots of issues whether it's a 32-bit or 64-bit... At least on my 32-bit OSX 10.6 system self-compiled Sage 5.12.beta4 works as expected: $ uname -a Darwin nash 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 sage: scipy.linalg.blas.sdot(np.array([10.],dtype=np.float32),np.array([0.01],dtype=np.float32)) 0.09999999403953552 sage: version() 'Sage Version 5.12.beta4, Release Date: 2013-08-30' > >> >> >>> >>> Here's the source, which I think automatically gets things right >>> (regarding building scipy), assuming one follows the instructions... >>> >>> >>> http://boxen.math.washington.edu/home/sagemath/sage-mirror/src/index.html >>> >>> --William >>> >>> >>> > The Anaconda one is broken. I haven't tried >>> > the standalone packages, though. >>> > >>> > >>> > 2013/9/26 Sturla Molden >>> >> >>> >> >>> >> On Sep 26, 2013, at 3:09 PM, Radim ?eh??ek wrote: >>> >> >>> >> > I guess I was too verbose, let me rephrase: >>> >> > >>> >> > 1. BUG REPORT >>> >> > `scipy.linalg.blas.sdot` gives wrong results on mac: >>> >> > >>> >> > >>> scipy.linalg.blas.sdot(np.array([ 10.], dtype=np.float32), >>> >> > >>> np.array([ 0.01], dtype=np.float32)) >>> >> > -0.0 >>> >> > >>> >> >>> >> Let me submit in evidence to the contrary: >>> >> >>> >> In [20]: >>> >> >>> >> scipy.linalg.blas.sdot(np.array([10.],dtype=np.float32),np.array([0.01],dtype=np.float32)) >>> >> Out[20]: 0.09999999403953552 >>> >> >>> >> In [21]: sys.platform >>> >> Out[21]: 'darwin' >>> >> >>> >> It is your BLAS library that makes the error, not SciPy. >>> >> >>> >> >>> >> Sturla >>> >> >>> >> -- >>> >> >>> >> --- >>> >> You received this message because you are subscribed to the Google >>> >> Groups >>> >> "cython-users" group. >>> >> To unsubscribe from this group and stop receiving emails from it, send >>> >> an >>> >> email to cython-users+unsubscribe at googlegroups.com. >>> >> For more options, visit https://groups.google.com/groups/opt_out. >>> > >>> > >>> > >>> > >>> > -- >>> > La brigade SnW veut vous recruter - http://www.brigadesnw.ca >>> > >>> > -- >>> > >>> > --- >>> > You received this message because you are subscribed to the Google >>> > Groups >>> > "cython-users" group. >>> > To unsubscribe from this group and stop receiving emails from it, send >>> > an >>> > email to cython-users+unsubscribe at googlegroups.com. >>> > For more options, visit https://groups.google.com/groups/opt_out. >>> >>> >>> >>> -- >>> William Stein >>> Professor of Mathematics >>> University of Washington >>> http://wstein.org >>> >>> -- >>> >>> --- >>> You received this message because you are subscribed to the Google Groups >>> "cython-users" group. >>> To unsubscribe from this group and stop receiving emails from it, send an >>> email to cython-users+unsubscribe at googlegroups.com. >>> For more options, visit https://groups.google.com/groups/opt_out. >> >> >> >> -- >> >> --- >> You received this message because you are subscribed to the Google Groups >> "cython-users" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to cython-users+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/groups/opt_out. 
> > > > -- > William Stein > Professor of Mathematics > University of Washington > http://wstein.org > From Jerome.Kieffer at esrf.fr Fri Sep 27 06:16:37 2013 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Fri, 27 Sep 2013 12:16:37 +0200 Subject: [SciPy-User] Sparse dot differs from numpy dot by 10^5 In-Reply-To: References: Message-ID: <20130927121637.42ebbe60.Jerome.Kieffer@esrf.fr> On Fri, 27 Sep 2013 08:21:44 +0000 Wagner Sebastian wrote: > So, I can't get rid of this issue, because this error does not lie within scipy or numpy but in the design of floating point arithmetic? > At the end of the calculation (after 3 more operations) only two promille out of 1470 values do not differ, all others do differ from the third digit on (median of the difference). > > Is there any chance to know which one of the two results is "more correct"? Perform the operation using Kahan summation (see the article, or wikipedia). It is "more" correct. Cheers, -- J?r?me Kieffer On-Line Data analysis / Software Group ISDD / ESRF tel +33 476 882 445 From vaggi.federico at gmail.com Fri Sep 27 08:49:58 2013 From: vaggi.federico at gmail.com (federico vaggi) Date: Fri, 27 Sep 2013 14:49:58 +0200 Subject: [SciPy-User] Minimization of a function that can return nan Message-ID: Hi everyone, I'm using fmin_l_bfgs_b to find the best fit parameters for a system of ordinary differential equations. Since the system has multiple local minima, I found that it's very convenient to divide the parameter space in hypercubes, and perform multiple local minimizations. Occasionally, for certain regions of the parameter space, the numerical integration routine fails. What's the correct way to handle this gracefully? Capture the exception, and return inf as the cost function value, and set the gradient to equal to all nan? Federico -------------- next part -------------- An HTML attachment was scrubbed... URL: From jjstickel at gmail.com Fri Sep 27 09:57:30 2013 From: jjstickel at gmail.com (Jonathan Stickel) Date: Fri, 27 Sep 2013 07:57:30 -0600 Subject: [SciPy-User] bug report + feature request (was: scipy cblas return value) In-Reply-To: References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com> <80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com> <445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com> <401ED18D-80E9-4304-9C21-411EEEA61096@molden.no> Message-ID: <52458ECA.20809@gmail.com> On 9/26/13 12:55 , Sturla Molden wrote: > Den 26. sep. 2013 kl. 19:04 skrev Arnaud Bergeron > : > >> Or scipy is badly compiled. Since OS X does not ship with a >> fortran compiler it is especially hard to compile and link it >> properly with a blas. If you use slightly incompatible compilers >> there is absolutely no compile error you just get broken results >> out of the blas functions. >> > > > What OS ships with a Fortran compiler today? Windows? Surely not. > > On a Mac, Apple GCC-LLVM is a PITA for scientific programming. It > does not have gfortran, and trying to install it can produce > conflicts. And as the GCC is stuck in version 4.2, we do not have C99 > or C++11. There is no OpenMP due to LLVM, and Apple probably wants us > to use the GCD instead of OpenMP. > > Accelerate Framework is a bad choice of BLAS. It is very slow > comparted to Intel MKL ? often four to ten times slower. Accelerate > Framework also uses the "Grand Central Dispatch". The GCD cannot be > used on both sides of os.fork without an os.exec*. 
Thus, Python
> programs using multiprocessing (and most programs using posix fork)
> will randomly fail if they use the Accelerate Framework's BLAS
> functions.
>
> Basically, Apple is thumbing its nose at us.
>
> The solution to most of these issues is to get the Intel compilers.
> Intel C++ Composer XE is binary compatible with Apple GCC-LLVM. It
> has C99, C++11, OpenMP, and also ships with the MKL. If we need
> Fortran it can be extended with Intel or Absoft Fortran compilers.
> The BLAS library in MKL uses Intel's OpenMP implementation, not the
> GCD, and does not randomly fail with multiprocessing or os.fork.
>
> If we absolutely want to do scientific computing with free tools on a
> Mac, I'd recommend running Linux in a VirtualBox VM. Otherwise it
> might be painful.

This is a rather narrow viewpoint. There are free-software package managers for Mac OS X, and MacPorts is a decent one. It has its critics, but I have found it to reliably provide a working python/numpy/scipy environment, including all the necessary associated libraries.

Jonathan

From njs at pobox.com  Fri Sep 27 12:33:56 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Fri, 27 Sep 2013 17:33:56 +0100
Subject: [SciPy-User] Baffling error: ndarray += csc_matrix -> "ValueError: setting an array element with a sequence"
In-Reply-To: References:
Message-ID:

[CC'ing scipy-dev because see below]

On Tue, Sep 24, 2013 at 6:45 PM, Nathaniel Smith wrote:
> Hi all,
>
> I'm getting a very strange error, and unfortunately I can't seem to
> reproduce it *except* when running on Travis-CI, so my debugging tools
> are really limited. I'm wondering if anyone else has seen anything
> like this?
>
> The offending line of code is:
>
>   File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/pyrerp/incremental_ls.py", line 323, in append_bottom_half
>     self.xtx += xtx
> ValueError: setting an array element with a sequence.
>
> And debug prints reveal that in the case that causes the error,
> 'self.xtx' is an ndarray that prints as:
>
> [[ 0.  0.]
>  [ 0.  0.]]
>
> while 'xtx' is a scipy.sparse.csc.csc_matrix that prints as:
>
>   (1, 0)  45.0
>   (0, 0)  10.0
>   (1, 1)  285.0
>   (0, 1)  45.0

In accordance with the cosmic law governing such things, this turns out to be a bug in numpy that I introduced in 2397c9d4, specifically this line, which introduces an unchecked DEPRECATE: https://github.com/numpy/numpy/commit/2397c9d4#L5R528 (Plus some complicated nonsense involving virtualenvs that sometimes fall back on system libraries even though the virtualenv contains a perfectly good version of the same library etc. to make reproduction more confusing.)

My test code was setting warnings to raise errors by default, and apparently ndarray += csc_matrix goes through some circuitous path that (AFAICT) involves creating an object ndarray containing the csc_matrix and calling the add ufunc, which somehow trips on the cast warning above. Then later on, at line 159 of arraytypes.c.src, the @TYPE@_setitem function does:

if (PyErr_Occurred()) {
    if (PySequence_Check(op) &&
            !PyString_Check(op) && !PyUnicode_Check(op)) {
        PyErr_Clear();
        PyErr_SetString(PyExc_ValueError,
                        "setting an array element with a sequence.");
    }
    return -1;
}

and the PyErr_Occurred here catches the real error and replaces it with some nonsense. So three points:

1) Maybe this will be useful to someone googling later.
2) I guess I'll file some horrible patch against numpy that just throws away exceptions caused by that deprecation, because there is no way for PyArray_CanCastTypeTo to raise an error :-(.

3) This script raises a DeprecationWarning with numpy 1.7.1 and scipy 0.12.0:

import warnings, numpy, scipy.sparse
warnings.filterwarnings("always")
a = numpy.zeros((2, 2))
b = scipy.sparse.csc_matrix([[0.0, 0.0], [0.0, 0.0]])
a += b

test-script3.py:5: DeprecationWarning: Implicitly casting between incompatible kinds. In a future numpy release, this will raise an error. Use casting="unsafe" if this is intentional.

I really don't understand what arcane magic is used to make ndarray += csc_matrix work at all, but my question is: is it going to break when we complete the casting transition described above? It was just supposed to catch things like int += float.

-n

From yw5aj at virginia.edu  Fri Sep 27 23:33:54 2013
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Fri, 27 Sep 2013 23:33:54 -0400
Subject: [SciPy-User] How do I cite SciPy and NumPy, when I used them for a paper?
Message-ID:

Dear all,

A quick question - how do I cite SciPy and NumPy, when I used them for a paper? Do I just mention them without referencing, or should I refer to www.scipy.org?

Thanks in advance :)

-Shawn

--
Yuxiang "Shawn" Wang
Gerling Research Lab
University of Virginia
yw5aj at virginia.edu
+1 (434) 284-0836

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com  Sat Sep 28 06:26:00 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 28 Sep 2013 11:26:00 +0100
Subject: [SciPy-User] How do I cite SciPy and NumPy, when I used them for a paper?
In-Reply-To: References:
Message-ID:

On Sat, Sep 28, 2013 at 4:33 AM, Yuxiang Wang wrote:
>
> Dear all,
>
> A quick question - how do I cite SciPy and NumPy, when I used them for a paper? Do I just mention them without referencing, or should I refer to www.scipy.org?

Whether you wish to cite them is up to you and the standards of your field/journal, but we do have some information if you do decide to.

http://www.scipy.org/scipylib/citing.html

Check your journal's guidelines for citing software. They may have a particular set of information that is expected. If they don't have software-specific guidelines, then it's hard to go wrong with citing one of the listed papers.

-- Robert Kern

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi  Sat Sep 28 06:49:39 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 28 Sep 2013 13:49:39 +0300
Subject: [SciPy-User] How do I cite SciPy and NumPy, when I used them for a paper?
In-Reply-To: References:
Message-ID:

On 28.09.2013 13:26, Robert Kern wrote:
[clip]
> Whether you wish to cite them is up to you and the standards of your
> field/journal, but we do have some information if you do decide to.
>
> http://www.scipy.org/scipylib/citing.html
>
> Check your journal's guidelines for citing software. They may have a
> particular set of information that is expected. If they don't have
> software-specific guidelines, then it's hard to go wrong with citing one of
> the listed papers.

This citation guide should perhaps also be amended to say that if a specific algorithm is of importance for your work, it may be good practice to cite the original papers introducing it. Appropriate references are often given in Scipy documentation for each method.
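For example, if your journal takes BibTeX, an entry along these lines is a reasonable starting point (adapt it to the journal's style; the citing page above has the canonical form):

@Misc{scipy,
  author = {Eric Jones and Travis Oliphant and Pearu Peterson and others},
  title  = {{SciPy}: Open source scientific tools for {Python}},
  year   = {2001--},
  url    = {http://www.scipy.org/}
}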
-- Pauli Virtanen

From robert.kern at gmail.com  Sat Sep 28 07:12:31 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 28 Sep 2013 12:12:31 +0100
Subject: [SciPy-User] How do I cite SciPy and NumPy, when I used them for a paper?
In-Reply-To: References:
Message-ID:

On Sat, Sep 28, 2013 at 11:49 AM, Pauli Virtanen wrote:
>
> On 28.09.2013 13:26, Robert Kern wrote:
> [clip]
> > Whether you wish to cite them is up to you and the standards of your
> > field/journal, but we do have some information if you do decide to.
> >
> > http://www.scipy.org/scipylib/citing.html
> >
> > Check your journal's guidelines for citing software. They may have a
> > particular set of information that is expected. If they don't have
> > software-specific guidelines, then it's hard to go wrong with citing one of
> > the listed papers.
>
> This citation guide should perhaps also be amended to say that if a
> specific algorithm is of importance for your work, it may be good
> practice to cite the original papers introducing it. Appropriate
> references are often given in Scipy documentation for each method.

+1. PRs accepted. :-)

https://github.com/scipy/scipy.org/blob/master/www/scipylib/citing.rst

-- Robert Kern

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From david.soukal at gmail.com  Sun Sep 29 16:55:17 2013
From: david.soukal at gmail.com (David Soukal)
Date: Sun, 29 Sep 2013 13:55:17 -0700
Subject: [SciPy-User] stats.gaussian_kde sensitive to outliers
Message-ID:

Hello folks,

I was studying kernel density estimation in "The Elements of Statistical Learning" and playing with stats.gaussian_kde() to confirm my understanding. I found that, contrary to my expectation, the estimates seem very sensitive to outliers.

The code at the end of this email shows this visually. It generates n=1000 points from a normal distribution (d) and adds an outlier to it (d_o). As you can see in the graph, the estimated density with the outlier looks very different. Inspecting the estimated density at 0 clearly shows that a lot of mass has shifted when the outlier is present:

print(kde(0))
print(kde_o(0))

[ 0.00389799]
[ 0.0009715]

I thought this could be because of the automatic bandwidth estimate, but it doesn't seem so. I was surprised at this behavior. I thought that an outlier should have little or no impact on the density elsewhere since the Gaussian kernel is exponentially decaying. Fitting this in R did produce a more stable estimate.

Can you, please, help me understand what I'm missing? Is gaussian_kde() doing something different than a standard smoothing where kernels are placed at the sample points and added up?

THANKS,
David

#
import matplotlib.pylab as plt
import numpy as np
import scipy.stats as stats

# generate data with and without an outlier
n = 1000
d = np.random.normal(0, 100, n)

outlier = 50000
d_o = np.insert(d, 0, outlier)

# estimate the densities
kde = stats.gaussian_kde(d)
kde_o = stats.gaussian_kde(d_o)

# plot
xs = np.linspace(-400, 400, 1000)

fg, ax = plt.subplots(nrows = 1, ncols = 1, figsize = (15, 5))
ax.hist(d, bins = 100, normed = True, alpha = 0.5, label = 'Histogram')
ax.plot(xs, kde(xs), linewidth = 2, alpha = 0.8, label = 'Gaussian KDE, no outlier')
ax.plot(xs, kde_o(xs), linewidth = 2, alpha = 0.8, label = 'Gaussian KDE, outlier')
ax.legend(loc = 'best')

R code for reference.
library(ggplot2) c1 <- rnorm(1000, 0, 100) d1 <- data.frame(x = c1) c2 <- c(c1, 50000) d2 <- data.frame(x = c2) kde1 <- density(d1$x, kernel = 'gaussian') kde2 <- density(d2$x, kernel = 'gaussian') ggplot() + geom_density(data = d1, aes(x = x), color = 'red') + geom_density(data = d2, aes(x = x), color = 'green') + coord_cartesian(xlim=c(-400, 400)) -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Sep 29 17:07:09 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 29 Sep 2013 17:07:09 -0400 Subject: [SciPy-User] stats.gaussian_kde sensitive to outliers In-Reply-To: References: Message-ID: On Sun, Sep 29, 2013 at 4:55 PM, David Soukal wrote: > Hello folks, > > I was studying kernel density estimation in "The Elements of Statistical > Learning" and playing with stats.gaussian_kde() to confirm my understanding. > > I found that, contrary to my expectation, the estimates seem very sensitive > to outliers. > > The code at the end of this email shows this visually. It generates n=1000 > points from normal distribution (d) and adds an outlier to it (d_o). As you > can see in the graph the estimated density with the outlier looks very > different. > > Inspecting the estimated density at 0 clearly shows that a lot of mass has > shifted when the outlier is present. > > print(kde(0)) > print(kde_o(0)) > > [ 0.00389799] > [ 0.0009715] > > > I thought this could be because of the automatic bandwidth estimate but it > doesn't seem so. > > I was surprised at this behavior. I thought that an outlier should have > little or no impact on the density elsewhere since the Gaussian kernel is > exponentially decaying. quick answer without having time to look at anything. IIRC: scipy gaussian_kde uses the sample variance or covariance matrix to scale the bandwidth. If you have outliers, then the variance covariance matrix can be heavily distorted. IIRC I have seen it mentioned that R also uses, or has an option to use, robust variance estimate (based on interquartile range). try to set the bandwidth in the outlier case to the same value as in the non-outlier case, then there should be only a minor influence on the kde except for the neighborhood of the outlier. Josef > > Fitting this in R did produce a more stable estimate. > > Can you, please, help me understand what I'm missing? Is gaussian_kde() > doing something different than a standard smoothing where kernels are placed > at the sample points and added up? > > THANKS, > David > > > # > import matplotlib.pylab as plt > import numpy as np > import scipy.stats as stats > > # generate data with and without an outlier > n = 1000 > d = np.random.normal(0, 100, n) > > outlier = 50000 > d_o = np.insert(d, 0, outlier) > > # estimate the densities > kde = stats.gaussian_kde(d) > kde_o = stats.gaussian_kde(d_o) > > # plot > xs = np.linspace(-400, 400, 1000) > > fg, ax = plt.subplots(nrows = 1, ncols = 1, figsize = (15, 5)) > ax.hist(d, bins = 100, normed = True, alpha = 0.5, label = 'Histogram') > ax.plot(xs, kde(xs), linewidth = 2, alpha = 0.8, label = 'Gaussian KDE, no > outlier') > ax.plot(xs, kde_o(xs), linewidth = 2, alpha = 0.8, label = 'Gaussian KDE, > outlier') > ax.legend(loc = 'best') > > > > R code for reference. 
> > library(ggplot2) > > c1 <- rnorm(1000, 0, 100) > d1 <- data.frame(x = c1) > > c2 <- c(c1, 50000) > d2 <- data.frame(x = c2) > > kde1 <- density(d1$x, kernel = 'gaussian') > kde2 <- density(d2$x, kernel = 'gaussian') > > ggplot() + > geom_density(data = d1, aes(x = x), color = 'red') + > geom_density(data = d2, aes(x = x), color = 'green') + > coord_cartesian(xlim=c(-400, 400)) > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From david.soukal at gmail.com Sun Sep 29 17:11:11 2013 From: david.soukal at gmail.com (David Soukal) Date: Sun, 29 Sep 2013 14:11:11 -0700 Subject: [SciPy-User] stats.gaussian_kde sensitive to outliers In-Reply-To: References: Message-ID: Thanks Josef. Seems that you're right, the covariance is indeed much bigger. I only checked the bandwidth factor before (they are the same). I'll study the source code to get a clearer understanding of the algorithm. Thanks! print(kde.covariance, kde.covariance_factor()) (array([[ 653.77896866]]), 0.25118864315095801) print(kde_o.covariance, kde_o.covariance_factor()) (array([[ 158142.83936064]]), 0.25113843554287807) On Sun, Sep 29, 2013 at 2:07 PM, wrote: > On Sun, Sep 29, 2013 at 4:55 PM, David Soukal > wrote: > > Hello folks, > > > > I was studying kernel density estimation in "The Elements of Statistical > > Learning" and playing with stats.gaussian_kde() to confirm my > understanding. > > > > I found that, contrary to my expectation, the estimates seem very > sensitive > > to outliers. > > > > The code at the end of this email shows this visually. It generates > n=1000 > > points from normal distribution (d) and adds an outlier to it (d_o). As > you > > can see in the graph the estimated density with the outlier looks very > > different. > > > > Inspecting the estimated density at 0 clearly shows that a lot of mass > has > > shifted when the outlier is present. > > > > print(kde(0)) > > print(kde_o(0)) > > > > [ 0.00389799] > > [ 0.0009715] > > > > > > I thought this could be because of the automatic bandwidth estimate but > it > > doesn't seem so. > > > > I was surprised at this behavior. I thought that an outlier should have > > little or no impact on the density elsewhere since the Gaussian kernel is > > exponentially decaying. > > quick answer without having time to look at anything. > > IIRC: > scipy gaussian_kde uses the sample variance or covariance matrix to > scale the bandwidth. > If you have outliers, then the variance covariance matrix can be > heavily distorted. > > IIRC I have seen it mentioned that R also uses, or has an option to > use, robust variance estimate (based on interquartile range). > > try to set the bandwidth in the outlier case to the same value as in > the non-outlier case, then there should be only a minor influence on > the kde except for the neighborhood of the outlier. > > Josef > > > > > > Fitting this in R did produce a more stable estimate. > > > > Can you, please, help me understand what I'm missing? Is gaussian_kde() > > doing something different than a standard smoothing where kernels are > placed > > at the sample points and added up? 
> > > > THANKS, > > David > > > > > > # > > import matplotlib.pylab as plt > > import numpy as np > > import scipy.stats as stats > > > > # generate data with and without an outlier > > n = 1000 > > d = np.random.normal(0, 100, n) > > > > outlier = 50000 > > d_o = np.insert(d, 0, outlier) > > > > # estimate the densities > > kde = stats.gaussian_kde(d) > > kde_o = stats.gaussian_kde(d_o) > > > > # plot > > xs = np.linspace(-400, 400, 1000) > > > > fg, ax = plt.subplots(nrows = 1, ncols = 1, figsize = (15, 5)) > > ax.hist(d, bins = 100, normed = True, alpha = 0.5, label = 'Histogram') > > ax.plot(xs, kde(xs), linewidth = 2, alpha = 0.8, label = 'Gaussian KDE, > no > > outlier') > > ax.plot(xs, kde_o(xs), linewidth = 2, alpha = 0.8, label = 'Gaussian KDE, > > outlier') > > ax.legend(loc = 'best') > > > > > > > > R code for reference. > > > > library(ggplot2) > > > > c1 <- rnorm(1000, 0, 100) > > d1 <- data.frame(x = c1) > > > > c2 <- c(c1, 50000) > > d2 <- data.frame(x = c2) > > > > kde1 <- density(d1$x, kernel = 'gaussian') > > kde2 <- density(d2$x, kernel = 'gaussian') > > > > ggplot() + > > geom_density(data = d1, aes(x = x), color = 'red') + > > geom_density(data = d2, aes(x = x), color = 'green') + > > coord_cartesian(xlim=c(-400, 400)) > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Sep 29 17:36:39 2013 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 29 Sep 2013 17:36:39 -0400 Subject: [SciPy-User] stats.gaussian_kde sensitive to outliers In-Reply-To: References: Message-ID: On Sun, Sep 29, 2013 at 5:11 PM, David Soukal wrote: > Thanks Josef. Seems that you're right, the covariance is indeed much bigger. > I only checked the bandwidth factor before (they are the same). > I'll study the source code to get a clearer understanding of the algorithm. > > Thanks! > > print(kde.covariance, kde.covariance_factor()) > > (array([[ 653.77896866]]), 0.25118864315095801) > > > print(kde_o.covariance, kde_o.covariance_factor()) > > (array([[ 158142.83936064]]), 0.25113843554287807) I think R uses the minimum of scaled interquartile range and standard deviation. However, that works only in the 1-d case. It would be good to enhance scipy gaussian_kde to get in some robustness. But I don't know of a quick way in the n-d case (without scikit-learn) Josef > > > > > > On Sun, Sep 29, 2013 at 2:07 PM, wrote: >> >> On Sun, Sep 29, 2013 at 4:55 PM, David Soukal >> wrote: >> > Hello folks, >> > >> > I was studying kernel density estimation in "The Elements of Statistical >> > Learning" and playing with stats.gaussian_kde() to confirm my >> > understanding. >> > >> > I found that, contrary to my expectation, the estimates seem very >> > sensitive >> > to outliers. >> > >> > The code at the end of this email shows this visually. It generates >> > n=1000 >> > points from normal distribution (d) and adds an outlier to it (d_o). As >> > you >> > can see in the graph the estimated density with the outlier looks very >> > different. >> > >> > Inspecting the estimated density at 0 clearly shows that a lot of mass >> > has >> > shifted when the outlier is present. 
>> > >> > print(kde(0)) >> > print(kde_o(0)) >> > >> > [ 0.00389799] >> > [ 0.0009715] >> > >> > >> > I thought this could be because of the automatic bandwidth estimate but >> > it >> > doesn't seem so. >> > >> > I was surprised at this behavior. I thought that an outlier should have >> > little or no impact on the density elsewhere since the Gaussian kernel >> > is >> > exponentially decaying. >> >> quick answer without having time to look at anything. >> >> IIRC: >> scipy gaussian_kde uses the sample variance or covariance matrix to >> scale the bandwidth. >> If you have outliers, then the variance covariance matrix can be >> heavily distorted. >> >> IIRC I have seen it mentioned that R also uses, or has an option to >> use, robust variance estimate (based on interquartile range). >> >> try to set the bandwidth in the outlier case to the same value as in >> the non-outlier case, then there should be only a minor influence on >> the kde except for the neighborhood of the outlier. >> >> Josef >> >> >> > >> > Fitting this in R did produce a more stable estimate. >> > >> > Can you, please, help me understand what I'm missing? Is gaussian_kde() >> > doing something different than a standard smoothing where kernels are >> > placed >> > at the sample points and added up? >> > >> > THANKS, >> > David >> > >> > >> > # >> > import matplotlib.pylab as plt >> > import numpy as np >> > import scipy.stats as stats >> > >> > # generate data with and without an outlier >> > n = 1000 >> > d = np.random.normal(0, 100, n) >> > >> > outlier = 50000 >> > d_o = np.insert(d, 0, outlier) >> > >> > # estimate the densities >> > kde = stats.gaussian_kde(d) >> > kde_o = stats.gaussian_kde(d_o) >> > >> > # plot >> > xs = np.linspace(-400, 400, 1000) >> > >> > fg, ax = plt.subplots(nrows = 1, ncols = 1, figsize = (15, 5)) >> > ax.hist(d, bins = 100, normed = True, alpha = 0.5, label = 'Histogram') >> > ax.plot(xs, kde(xs), linewidth = 2, alpha = 0.8, label = 'Gaussian KDE, >> > no >> > outlier') >> > ax.plot(xs, kde_o(xs), linewidth = 2, alpha = 0.8, label = 'Gaussian >> > KDE, >> > outlier') >> > ax.legend(loc = 'best') >> > >> > >> > >> > R code for reference. >> > >> > library(ggplot2) >> > >> > c1 <- rnorm(1000, 0, 100) >> > d1 <- data.frame(x = c1) >> > >> > c2 <- c(c1, 50000) >> > d2 <- data.frame(x = c2) >> > >> > kde1 <- density(d1$x, kernel = 'gaussian') >> > kde2 <- density(d2$x, kernel = 'gaussian') >> > >> > ggplot() + >> > geom_density(data = d1, aes(x = x), color = 'red') + >> > geom_density(data = d2, aes(x = x), color = 'green') + >> > coord_cartesian(xlim=c(-400, 400)) >> > >> > >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From charlesr.harris at gmail.com Mon Sep 30 11:17:14 2013 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 30 Sep 2013 09:17:14 -0600 Subject: [SciPy-User] 1.8.0rc1 Message-ID: Hi All, NumPy 1.8.0rc1 is up now on sourceforge.The binary builds are included except for Python 3.3 on windows, which will arrive later. Many thanks to Ralf for the binaries, and to those who found and fixed the bugs in the last beta. 
Any remaining bugs are all my fault ;) I hope this will be the last release before final, so please test it thoroughly.

Chuck

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com  Mon Sep 30 14:02:38 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 30 Sep 2013 19:02:38 +0100
Subject: [SciPy-User] [SciPy-Dev] 1.8.0rc1
In-Reply-To: References:
Message-ID:

Everyone please do actually test this! It is really in your best interest, and I think people don't always realize this. Here's how it works:

- If you test it *now*, and it breaks your code that worked with 1.7, and you *tell* us this now, then it's *our* problem and we hold up the release to fix the bug.

- If you test it *after* we release, and it breaks your code, then we are sad but you have to work around it (because we can't magically make that release not have happened, your users will be using it anyway), and we put it on the stack with all the other bugs. All of which we care about, but it's a large enough stack that it's not going to get any special priority -- because, as noted above, at this point you'll have had to work around it anyway.

-n

On Mon, Sep 30, 2013 at 4:17 PM, Charles R Harris wrote:
> Hi All,
>
> NumPy 1.8.0rc1 is up now on sourceforge. The binary builds are included
> except for Python 3.3 on windows, which will arrive later. Many thanks to
> Ralf for the binaries, and to those who found and fixed the bugs in the last
> beta. Any remaining bugs are all my fault ;) I hope this will be the last
> release before final, so please test it thoroughly.
>
> Chuck
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

From tmp50 at ukr.net  Mon Sep 30 16:32:41 2013
From: tmp50 at ukr.net (Dmitrey)
Date: Mon, 30 Sep 2013 23:32:41 +0300
Subject: [SciPy-User] [ANN] MATLAB fmincon now available in Python2
Message-ID: <1380572275.326208946.d9mh0o59@frv43.ukr.net>

Hi all,

The current state of Python <-> MATLAB connection software doesn't allow passing function handles; however, a workaround has been implemented via some tricks, so the MATLAB function fmincon is now available in the Python-written OpenOpt and FuncDesigner frameworks (with the possibility of automatic differentiation; see the example). Future plans include MATLAB fsolve, ode23, ode45 (unlike scipy fsolve and ode, they can handle sparse matrices), fgoalattain, and maybe the global optimization toolbox solvers.

I intend to post this message to several forums, so to keep discussion in a single place use the OpenOpt forum thread http://forum.openopt.org/viewtopic.php?id=769

Regards, D.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jonathan.guyer at nist.gov  Mon Sep 30 22:54:40 2013
From: jonathan.guyer at nist.gov (Guyer, Jonathan E. Dr.)
Date: Tue, 1 Oct 2013 02:54:40 +0000
Subject: [SciPy-User] ANN: FiPy 3.1
Message-ID: <703D8DBA-731C-4C15-AB59-0BFBF5D80D0E@nist.gov>

We are pleased to announce the release of FiPy 3.1. http://www.ctcms.nist.gov/fipy

The significant changes since version 3.0 are:

- Level sets are now handled by :ref:`LSMLIBDOC` or :ref:`SCIKITFMM` solver libraries. These libraries are orders of magnitude faster than the original, :term:`Python`-only prototype.

- The :term:`Matplotlib` :func:`streamplot()` function can be used to display vector fields.

- Version control was switched to the Git_ distributed version control system.
This system should make it much easier for :term:`FiPy` users to participate in development.

This release addresses 59 tickets. Windows and Debian distributions will be posted soon.

========================================================================

FiPy is an object oriented, partial differential equation (PDE) solver, written in Python, based on a standard finite volume (FV) approach. The framework has been developed in the Metallurgy Division and Center for Theoretical and Computational Materials Science (CTCMS), in the Material Measurement Laboratory (MML) at the National Institute of Standards and Technology (NIST).

The solution of coupled sets of PDEs is ubiquitous to the numerical simulation of science problems. Numerous PDE solvers exist, using a variety of languages and numerical approaches. Many are proprietary, expensive and difficult to customize. As a result, scientists spend considerable resources repeatedly developing limited tools for specific problems. Our approach, combining the FV method and Python, provides a tool that is extensible, powerful and freely available. A significant advantage to Python is the existing suite of tools for array calculations, sparse matrices and data rendering.

The FiPy framework includes terms for transient diffusion, convection and standard sources, enabling the solution of arbitrary combinations of coupled elliptic, hyperbolic and parabolic PDEs. Currently implemented models include phase field treatments of polycrystalline, dendritic, and electrochemical phase transformations as well as a level set treatment of the electrodeposition process.

From nils106 at googlemail.com  Fri Sep 20 08:30:35 2013
From: nils106 at googlemail.com (Nils Wagner)
Date: Fri, 20 Sep 2013 14:30:35 +0200
Subject: [SciPy-User] Strange results by scipy.spatial Delaunay
Message-ID:

Hi all,

I tried to create a convex hull of a set of points distributed on a cylindrical surface by the following script. The needed input file coor.dat is attached. How can I fix the problem with the distorted mesh (convex.png)?

Nils

from scipy.spatial import Delaunay
import numpy as np

points = np.loadtxt('coor.dat', usecols=(1,2,3))
nid = np.loadtxt('coor.dat', usecols=(0,))
m, n = points.shape
print 'Number of points m =', m

tri = Delaunay(points)
faces = []
for ia, ib, ic in tri.convex_hull:
    x1 = points[ia]
    x2 = points[ib]
    x3 = points[ic]
    area = 0.5*np.linalg.norm(np.cross(x2-x1, x1-x3))
    print 'Area of face', area
    faces.append(points[[ia, ib, ic]])

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

fig = plt.figure()
ax = fig.gca(projection='3d')
items = Poly3DCollection(faces, facecolors=[(0, 0, 0, 0.1)])
ax.add_collection(items)
ax.scatter(points[:,0], points[:,1], points[:,2], 'o')
ax.legend(loc=0, shadow=True)
plt.show()

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: coor.dat.gz
Type: application/x-gzip
Size: 1841 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: convex.png.gz
Type: application/x-gzip
Size: 280018 bytes
Desc: not available
URL:

From tesla.bamf at gmail.com  Tue Sep 24 20:10:37 2013
From: tesla.bamf at gmail.com (TFSM)
Date: Tue, 24 Sep 2013 17:10:37 -0700 (PDT)
Subject: [SciPy-User] Fitting data with optimize.curve_fit
Message-ID: <1380067837076-18692.post@n7.nabble.com>

lab1.py

I have a couple questions.
From nils106 at googlemail.com  Fri Sep 20 08:30:35 2013
From: nils106 at googlemail.com (Nils Wagner)
Date: Fri, 20 Sep 2013 14:30:35 +0200
Subject: [SciPy-User] Strange results by scipy.spatial Delaunay
Message-ID:

Hi all,

I tried to create a convex hull of a set of points distributed on a
cylindrical surface with the following script. The needed input file
coor.dat is attached. How can I fix the problem with the distorted mesh
(convex.png)?

Nils

from scipy.spatial import Delaunay
import numpy as np

points = np.loadtxt('coor.dat', usecols=(1, 2, 3))
nid = np.loadtxt('coor.dat', usecols=(0,))
m, n = points.shape
print 'Number of points m =', m

tri = Delaunay(points)
faces = []

for ia, ib, ic in tri.convex_hull:
    x1 = points[ia]
    x2 = points[ib]
    x3 = points[ic]
    area = 0.5 * np.linalg.norm(np.cross(x2 - x1, x1 - x3))
    print 'Area of face', area
    faces.append(points[[ia, ib, ic]])

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

fig = plt.figure()
ax = fig.gca(projection='3d')
items = Poly3DCollection(faces, facecolors=[(0, 0, 0, 0.1)])
ax.add_collection(items)
ax.scatter(points[:, 0], points[:, 1], points[:, 2], marker='o')
ax.legend(loc=0, shadow=True)
plt.show()
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: coor.dat.gz
Type: application/x-gzip
Size: 1841 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: convex.png.gz
Type: application/x-gzip
Size: 280018 bytes
Desc: not available
URL:
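
A related note on the script above: scipy 0.12 and later also expose the
hull directly as scipy.spatial.ConvexHull, which accepts qhull options.
The sketch below is illustrative only; it assumes the same coor.dat
layout as the script above, and 'QJ' (joggle the input) is just one
qhull option sometimes suggested for nearly coplanar or cospherical
point sets such as a sampled cylinder, not a confirmed fix for this
particular mesh:

import numpy as np
from scipy.spatial import ConvexHull

points = np.loadtxt('coor.dat', usecols=(1, 2, 3))  # assumed layout

# Build the hull directly instead of going through Delaunay.
hull = ConvexHull(points, qhull_options='QJ')
print hull.simplices.shape   # (n_facets, 3): vertex indices per facet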
From tesla.bamf at gmail.com  Tue Sep 24 20:10:37 2013
From: tesla.bamf at gmail.com (TFSM)
Date: Tue, 24 Sep 2013 17:10:37 -0700 (PDT)
Subject: [SciPy-User] Fitting data with optimize.curve_fit
Message-ID: <1380067837076-18692.post@n7.nabble.com>

lab1.py

I have a couple of questions. The data shown as counts are the total
number of counts in 60 seconds. When using the count rate instead of the
total counts as the y data, curve_fit does not want to give a meaningful
answer: it gives the covariance as infinity, and the cosine that is fit
does not match the data. Using the total counts, y*60, the covariance is
reasonable and the cosine fits the data. Why does increasing the counts
by 60 allow curve_fit to give a reasonable answer?

A similar problem happens when trying to fit the first harmonic to this
data, A11*cos(3x/pi) + A31*cos(3x/pi), but I must increase the counts
artificially by at least 10 times for curve_fit to give me a curve that
resembles the data being fit.

Is there a better way to fit this data? Is what I am doing here
legitimate: artificially increasing y to get a fit, then just dividing
by that amount to get the data back to a count rate?

Sorry for the noob questions, and thanks.

--
View this message in context: http://scipy-user.10969.n7.nabble.com/Fitting-data-with-optimize-curve-fit-tp18692.html
Sent from the Scipy-User mailing list archive at Nabble.com.
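
For reference, a minimal sketch of the curve_fit call pattern in
question. The model, numbers and initial guess below are synthetic
illustrations, not the lab1.py data. The point of the sketch is that an
explicit initial guess p0 often matters for periodic models: the default
guess (all parameters equal to 1) can strand the fit at a bad local
minimum, which is one common way to end up with an infinite covariance:

import numpy as np
from scipy.optimize import curve_fit

def model(x, A, phase):
    # Simple cosine model with an amplitude and a phase offset.
    return A * np.cos(x + phase)

x = np.linspace(0.0, 4.0 * np.pi, 60)
np.random.seed(0)
y = model(x, 200.0, 0.3) + np.random.normal(scale=5.0, size=x.size)

popt, pcov = curve_fit(model, x, y, p0=[150.0, 0.0])
print popt   # fitted [A, phase]
print pcov   # covariance of the fitted parameters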
From abergeron at gmail.com  Thu Sep 26 13:04:25 2013
From: abergeron at gmail.com (Arnaud Bergeron)
Date: Thu, 26 Sep 2013 13:04:25 -0400
Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value)
In-Reply-To: <401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com>
	<80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
	<445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com>
	<401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
Message-ID:

Or scipy is badly compiled. Since OS X does not ship with a Fortran
compiler, it is especially hard to compile scipy and link it properly
with a BLAS. If you use slightly incompatible compilers there is
absolutely no compile error; you just get broken results out of the
BLAS functions.

On that note, the only prebuilt scipy that is not broken is the one that
comes with Enthought Canopy. The Anaconda one is broken. I haven't tried
the standalone packages, though.

2013/9/26 Sturla Molden

> On Sep 26, 2013, at 3:09 PM, Radim Řehůřek wrote:
>
> > I guess I was too verbose, let me rephrase:
> >
> > 1. BUG REPORT
> > `scipy.linalg.blas.sdot` gives wrong results on mac:
> >
> > >>> scipy.linalg.blas.sdot(np.array([ 10.], dtype=np.float32),
> > ... np.array([ 0.01], dtype=np.float32))
> > -0.0
>
> Let me submit in evidence to the contrary:
>
> In [20]: scipy.linalg.blas.sdot(np.array([10.],dtype=np.float32),np.array([0.01],dtype=np.float32))
> Out[20]: 0.09999999403953552
>
> In [21]: sys.platform
> Out[21]: 'darwin'
>
> It is your BLAS library that makes the error, not SciPy.
>
> Sturla
>
> --
> You received this message because you are subscribed to the Google
> Groups "cython-users" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to cython-users+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.

--
The SnW brigade wants to recruit you - http://www.brigadesnw.ca
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wstein at gmail.com  Thu Sep 26 13:11:23 2013
From: wstein at gmail.com (William Stein)
Date: Thu, 26 Sep 2013 10:11:23 -0700
Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value)
In-Reply-To:
References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com>
	<80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
	<445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com>
	<401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
Message-ID:

On Thu, Sep 26, 2013 at 10:04 AM, Arnaud Bergeron wrote:
> Or scipy is badly compiled. Since OS X does not ship with a Fortran
> compiler, it is especially hard to compile scipy and link it properly
> with a BLAS. If you use slightly incompatible compilers there is
> absolutely no compile error; you just get broken results out of the
> BLAS functions.
>
> On that note, the only prebuilt scipy that is not broken is the one
> that comes with Enthought Canopy.

Are you saying that the prebuilt scipy we ship with Sage is broken?
I'm curious, since I can file a bug report, and I thought we had things
sorted out by now, after putting a heck of a lot of work into this
platform over the last 6 years... Here are our Mac binaries:

http://boxen.math.washington.edu/home/sagemath/sage-mirror/osx/intel/index.html

Here's the source, which I think automatically gets things right
(regarding building scipy), assuming one follows the instructions...

http://boxen.math.washington.edu/home/sagemath/sage-mirror/src/index.html

--William

> The Anaconda one is broken. I haven't tried
> the standalone packages, though.
>
> [...]

--
William Stein
Professor of Mathematics
University of Washington
http://wstein.org
From abergeron at gmail.com  Thu Sep 26 14:35:46 2013
From: abergeron at gmail.com (Arnaud Bergeron)
Date: Thu, 26 Sep 2013 14:35:46 -0400
Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value)
In-Reply-To:
References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com>
	<80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
	<445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com>
	<401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
Message-ID:

2013/9/26 William Stein

> [...]
>
> Are you saying that the prebuilt scipy we ship with Sage is broken?
> I'm curious, since I can file a bug report, and I thought we had things
> sorted out by now, after putting a heck of a lot of work into this
> platform over the last 6 years... Here are our Mac binaries:
>
> http://boxen.math.washington.edu/home/sagemath/sage-mirror/osx/intel/index.html

Sage isn't just a Python distribution :) It's also not listed on the
scipy install page.

But still, the scipy binaries that come with it are broken (at least for
the 10.6 package I tried). If you try the command above it returns -0.0
rather than the correct 0.09999999403953552.

> Here's the source, which I think automatically gets things right
> (regarding building scipy), assuming one follows the instructions...
>
> http://boxen.math.washington.edu/home/sagemath/sage-mirror/src/index.html
>
> --William
>
> [...]

--
The SnW brigade wants to recruit you - http://www.brigadesnw.ca
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From wstein at gmail.com  Thu Sep 26 15:22:00 2013
From: wstein at gmail.com (William Stein)
Date: Thu, 26 Sep 2013 12:22:00 -0700
Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value)
In-Reply-To:
References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com>
	<80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
	<445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com>
	<401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
Message-ID:

On Thu, Sep 26, 2013 at 11:35 AM, Arnaud Bergeron wrote:
> Sage isn't just a Python distribution :) It's also not listed on the
> scipy install page.

OK, good point :-)

> But still, the scipy binaries that come with it are broken (at least
> for the 10.6 package I tried). If you try the command above it returns
> -0.0 rather than the correct 0.09999999403953552.

Ugh. Many thanks for the bug report.

> [...]

--
William Stein
Professor of Mathematics
University of Washington
http://wstein.org
From vbraun.name at gmail.com  Thu Sep 26 18:35:32 2013
From: vbraun.name at gmail.com (Volker Braun)
Date: Thu, 26 Sep 2013 15:35:32 -0700 (PDT)
Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value)
In-Reply-To:
References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com>
	<80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
	<445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com>
	<401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
Message-ID: <1ae0a420-7271-43ea-b32c-29af8ef34ca9@googlegroups.com>

That is the expected result on OS X 10.6: a well-known bug in the OS X
Accelerate framework which Apple never fixed.

>> >> > 1. BUG REPORT
>> >> > `scipy.linalg.blas.sdot` gives wrong results on mac:
>> >> >
>> >> > >>> scipy.linalg.blas.sdot(np.array([ 10.], dtype=np.float32),
>> >> > ... np.array([ 0.01], dtype=np.float32))
>> >> > -0.0

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From abergeron at gmail.com  Thu Sep 26 23:46:34 2013
From: abergeron at gmail.com (Arnaud Bergeron)
Date: Thu, 26 Sep 2013 23:46:34 -0400
Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value)
In-Reply-To: <1ae0a420-7271-43ea-b32c-29af8ef34ca9@googlegroups.com>
References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com>
	<80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
	<445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com>
	<401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
	<1ae0a420-7271-43ea-b32c-29af8ef34ca9@googlegroups.com>
Message-ID:

2013/9/26 Volker Braun

> That is the expected result on OS X 10.6: a well-known bug in the OS X
> Accelerate framework which Apple never fixed.

No, that is not a bug in Accelerate. It is an ABI mismatch between what
scipy expects and what vecLib provides.

> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
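
To make the check in this thread easy to reproduce, here is a
self-contained version of the test being passed around. The expected
values come straight from the thread: roughly 0.1 on a good build, -0.0
on a build with the mismatched single-precision return ABI:

import sys
import numpy as np
import scipy
import scipy.linalg.blas

x = np.array([10.0], dtype=np.float32)
y = np.array([0.01], dtype=np.float32)

print scipy.linalg.blas.sdot(x, y)   # ~0.0999999940395 good, -0.0 broken
print sys.platform                   # 'darwin' on OS X
print scipy.__version__

# Shows which BLAS/LAPACK libraries this scipy build was linked against.
scipy.__config__.show()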
From abergeron at gmail.com  Mon Sep 30 17:11:53 2013
From: abergeron at gmail.com (Arnaud Bergeron)
Date: Mon, 30 Sep 2013 17:11:53 -0400
Subject: [SciPy-User] [cython-users] bug report + feature request (was: scipy cblas return value)
In-Reply-To:
References: <1e44591c-dc17-46b4-a998-cab29016d434@googlegroups.com>
	<80fbdffc-ed06-46b6-a372-eeb01736d248@googlegroups.com>
	<445edf58-50aa-463a-81cc-04c27e689c15@googlegroups.com>
	<401ED18D-80E9-4304-9C21-411EEEA61096@molden.no>
Message-ID:

Update on this issue: to get the right results for this call:

scipy.linalg.blas.sdot(np.array([10.], dtype=np.float32),
                       np.array([0.01], dtype=np.float32))

you can compile scipy using

export FOPT=-ff2c

But there are still some ABI mismatches in arpack, lapack and others.

Scipy 0.13 includes extended ABI wrappers which should fix all of the
issues related to this when linking to Accelerate. No need to use any
special flags or anything. So for people looking for an easy way to fix
these issues: upgrade scipy.
2013/9/27 Dima Pasechnik

> On 2013-09-26, William Stein wrote:
> > Ugh. Many thanks for the bug report.
>
> Could be due to using a binary for the wrong arch. With OS X 10.6 there
> were lots of issues around whether it's 32-bit or 64-bit...
> At least on my 32-bit OS X 10.6 system, a self-compiled Sage 5.12.beta4
> works as expected:
>
> $ uname -a
> Darwin nash 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT
> 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
>
> sage: scipy.linalg.blas.sdot(np.array([10.],dtype=np.float32),np.array([0.01],dtype=np.float32))
> 0.09999999403953552
> sage: version()
> 'Sage Version 5.12.beta4, Release Date: 2013-08-30'
>
> [...]

--
The SnW brigade wants to recruit you - http://www.brigadesnw.ca
-------------- next part --------------
An HTML attachment was scrubbed...
URL: