From jfosorio at gmail.com Sat Mar 1 08:16:01 2014 From: jfosorio at gmail.com (Juan Felipe Osorio) Date: Sat, 1 Mar 2014 14:16:01 +0100 Subject: [SciPy-User] ltisys and other ways to define a linear system in scipy Message-ID: Hello, I am a bit confused by the different implementations of LTI systems in scipy. On one hand there are the functions defined in scipy.signal that let you create the state-space matrices (actually a tuple in Python) and do things like converting a state space to a discrete state space (using, for example, scipy.signal.cont2discrete). On the other hand there is also the class scipy.signal.lti, which seems to be only for continuous-time LTI systems. So when I do something like:

import scipy.signal as sig
import numpy as np
import matplotlib.pyplot as plt

wp = 0.2
fs = 10
oversampling = 10
t = np.arange(100) * 1.0 / fs

sys_lti = sig.lti([1], [1, 1])
# this works
sig.step(sys_lti)
# this does not
sysd_lti = sig.cont2discrete(sys_lti, 1.0/10)

Is this a bug? It seems to me that the class scipy.signal.lti does not operate properly with the functions defined in scipy.signal, where systems can also be represented as state-space tuples. That is a bit inconsistent. Is it meant to be like that, or is there something I am missing? Could someone tell me how mature the community thinks these libraries are, and whether there are plans to make these two things work better together?
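A minimal sketch of a workaround, assuming the scipy API of that era: cont2discrete operates on the tuple system representations, so passing the transfer-function coefficients directly, rather than the lti instance, avoids the TypeError. The coefficient values below are illustrative, not from the thread.

```python
import numpy as np
from scipy.signal import cont2discrete

# H(s) = 1 / (s + 1), discretized with the default zero-order hold at fs = 10 Hz.
num, den = [1.0], [1.0, 1.0]
dt = 1.0 / 10
numd, dend, dt_out = cont2discrete((num, den), dt)  # tuple in, tuple out

# The discrete pole of a ZOH-discretized first-order lag is exp(-dt).
print(dend)  # approximately [1.0, -0.9048]
```

Equivalently, one could pass the coefficients stored on the instance, e.g. `(sys_lti.num, sys_lti.den)`, since lti objects expose those attributes.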
Best Regards, Juan

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
 in ()
----> 1 scipy.signal.cont2discrete(sys,1.0/10)

/Users/jfosorio/anaconda/lib/python2.7/site-packages/scipy/signal/cont2discrete.pyc in cont2discrete(sys, dt, method, alpha)
     75
     76     """
---> 77     if len(sys) == 2:
     78         sysd = cont2discrete(tf2ss(sys[0], sys[1]), dt, method=method,
     79                              alpha=alpha)

TypeError: object of type 'lti' has no len()
-------------- next part -------------- An HTML attachment was scrubbed... URL: From gb.gabrielebrambilla at gmail.com Tue Mar 4 15:00:55 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Tue, 4 Mar 2014 15:00:55 -0500 Subject: [SciPy-User] problems with numpy array assignment Message-ID: Hi, something strange happens in my code after I added the lines below the #... comment. MYMAP is a 3d numpy array:

PAS = MYMAP[:, hhh, :]

# multiplying by the energy bin width
PAS1 = PAS
for l in nIph:
    PAS1[l, :] = PAS1[l, :] * zz

But this affects the result I get when I later process the data from:

EXCT = MYMAP[:, hhh, :]

Is Python still modifying the original matrix?

Thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehermes at chem.wisc.edu Tue Mar 4 15:04:38 2014 From: ehermes at chem.wisc.edu (Eric Hermes) Date: Tue, 04 Mar 2014 14:04:38 -0600 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: References: Message-ID: <531631D6.4080805@chem.wisc.edu> On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: > Hi, > > Something strange happens in my code: I have added the lines after #... > > MYMAP is a 3d numpy array > PAS = MYMAP[:, hhh, :] > > #multiplying per energy bin width > PAS1 = PAS > for l in nIph: > PAS1[l,:] = PAS1[l,:]*zz > > But this affect the result I get when I elaborate the data from: > > EXCT = MYMAP[:, hhh, :] > > is python continuing to modify the original matrix??
> > Thanks > > Gabriele > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user Gabriele, Numpy arrays are passed as references, so in your code when you modify PAS1 (which inherits the reference to MYMAP from PAS), the changes propogates upwards into MYMAP. If you want to avoid this, you can explicitly create a copy of the array, either by doing: PAS = MYMAP[:,hhh,:].copy() or: PAS1 = PAS.copy() Eric -- Eric Hermes J.R. Schmidt Group Chemistry Department University of Wisconsin - Madison -------------- next part -------------- An HTML attachment was scrubbed... URL: From gb.gabrielebrambilla at gmail.com Tue Mar 4 15:07:11 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Tue, 4 Mar 2014 15:07:11 -0500 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: <531631D6.4080805@chem.wisc.edu> References: <531631D6.4080805@chem.wisc.edu> Message-ID: thanks. Gabriele 2014-03-04 15:04 GMT-05:00 Eric Hermes : > On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: > > Hi, > > Something strange happens in my code: I have added the lines after #... > > MYMAP is a 3d numpy array > PAS = MYMAP[:, hhh, :] > > #multiplying per energy bin width > PAS1 = PAS > for l in nIph: > PAS1[l,:] = PAS1[l,:]*zz > > > But this affect the result I get when I elaborate the data from: > > EXCT = MYMAP[:, hhh, :] > > is python continuing to modify the original matrix?? > > Thanks > > Gabriele > > > _______________________________________________ > SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user > > Gabriele, > > Numpy arrays are passed as references, so in your code when you modify > PAS1 (which inherits the reference to MYMAP from PAS), the changes > propogates upwards into MYMAP. 
If you want to avoid this, you can > explicitly create a copy of the array, either by doing: > > PAS = MYMAP[:,hhh,:].copy() > > or: > > PAS1 = PAS.copy() > > Eric > > -- > Eric Hermes > J.R. Schmidt Group > Chemistry Department > University of Wisconsin - Madison > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gb.gabrielebrambilla at gmail.com Tue Mar 4 15:13:05 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Tue, 4 Mar 2014 15:13:05 -0500 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: References: <531631D6.4080805@chem.wisc.edu> Message-ID: But the changes I make on PAS=MYMAP[:, hhh. :] will be applied to G=MYMAP[:, :, :]? Gabriele 2014-03-04 15:07 GMT-05:00 Gabriele Brambilla < gb.gabrielebrambilla at gmail.com>: > thanks. > > Gabriele > > > 2014-03-04 15:04 GMT-05:00 Eric Hermes : > > On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: >> >> Hi, >> >> Something strange happens in my code: I have added the lines after #... >> >> MYMAP is a 3d numpy array >> PAS = MYMAP[:, hhh, :] >> >> #multiplying per energy bin width >> PAS1 = PAS >> for l in nIph: >> PAS1[l,:] = PAS1[l,:]*zz >> >> >> But this affect the result I get when I elaborate the data from: >> >> EXCT = MYMAP[:, hhh, :] >> >> is python continuing to modify the original matrix?? >> >> Thanks >> >> Gabriele >> >> >> _______________________________________________ >> SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user >> >> Gabriele, >> >> Numpy arrays are passed as references, so in your code when you modify >> PAS1 (which inherits the reference to MYMAP from PAS), the changes >> propogates upwards into MYMAP. 
If you want to avoid this, you can >> explicitly create a copy of the array, either by doing: >> >> PAS = MYMAP[:,hhh,:].copy() >> >> or: >> >> PAS1 = PAS.copy() >> >> Eric >> >> -- >> Eric Hermes >> J.R. Schmidt Group >> Chemistry Department >> University of Wisconsin - Madison >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gb.gabrielebrambilla at gmail.com Tue Mar 4 15:18:14 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Tue, 4 Mar 2014 15:18:14 -0500 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: References: <531631D6.4080805@chem.wisc.edu> Message-ID: So if I do: B2 = PAS1 Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) B2 = Dx2 B1 = PAS1 Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene]) B1 = Dx1 Am I applying the smooth two times? 2014-03-04 15:13 GMT-05:00 Gabriele Brambilla < gb.gabrielebrambilla at gmail.com>: > But the changes I make on PAS=MYMAP[:, hhh. :] will be applied to > G=MYMAP[:, :, :]? > > Gabriele > > > 2014-03-04 15:07 GMT-05:00 Gabriele Brambilla < > gb.gabrielebrambilla at gmail.com>: > > thanks. >> >> Gabriele >> >> >> 2014-03-04 15:04 GMT-05:00 Eric Hermes : >> >> On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: >>> >>> Hi, >>> >>> Something strange happens in my code: I have added the lines after #... >>> >>> MYMAP is a 3d numpy array >>> PAS = MYMAP[:, hhh, :] >>> >>> #multiplying per energy bin width >>> PAS1 = PAS >>> for l in nIph: >>> PAS1[l,:] = PAS1[l,:]*zz >>> >>> >>> But this affect the result I get when I elaborate the data from: >>> >>> EXCT = MYMAP[:, hhh, :] >>> >>> is python continuing to modify the original matrix?? 
>>> >>> Thanks >>> >>> Gabriele >>> >>> >>> _______________________________________________ >>> SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> Gabriele, >>> >>> Numpy arrays are passed as references, so in your code when you modify >>> PAS1 (which inherits the reference to MYMAP from PAS), the changes >>> propogates upwards into MYMAP. If you want to avoid this, you can >>> explicitly create a copy of the array, either by doing: >>> >>> PAS = MYMAP[:,hhh,:].copy() >>> >>> or: >>> >>> PAS1 = PAS.copy() >>> >>> Eric >>> >>> -- >>> Eric Hermes >>> J.R. Schmidt Group >>> Chemistry Department >>> University of Wisconsin - Madison >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehermes at chem.wisc.edu Tue Mar 4 15:18:18 2014 From: ehermes at chem.wisc.edu (Eric Hermes) Date: Tue, 04 Mar 2014 14:18:18 -0600 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: References: <531631D6.4080805@chem.wisc.edu> Message-ID: <5316350A.10000@chem.wisc.edu> On 3/4/2014 2:13 PM, Gabriele Brambilla wrote: > But the changes I make on PAS=MYMAP[:, hhh. :] will be applied to > G=MYMAP[:, :, :]? > > Gabriele > If PAS is assigned as: PAS = MYMAP[:,hhh,:] and G is assigned as: G = MYMAP[:,:,:] then modification of one variable will affect the other, since PAS and G are references to slices of the array MYMAP (which will also be modified). Eric > > 2014-03-04 15:07 GMT-05:00 Gabriele Brambilla > >: > > thanks. > > Gabriele > > > 2014-03-04 15:04 GMT-05:00 Eric Hermes >: > > On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: >> Hi, >> >> Something strange happens in my code: I have added the lines >> after #... 
>> >> MYMAP is a 3d numpy array >> PAS = MYMAP[:, hhh, :] >> >> #multiplying per energy bin width >> PAS1 = PAS >> for l in nIph: >> PAS1[l,:] = PAS1[l,:]*zz >> >> But this affect the result I get when I elaborate the data from: >> >> EXCT = MYMAP[:, hhh, :] >> >> is python continuing to modify the original matrix?? >> >> Thanks >> >> Gabriele >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > Gabriele, > > Numpy arrays are passed as references, so in your code when > you modify PAS1 (which inherits the reference to MYMAP from > PAS), the changes propogates upwards into MYMAP. If you want > to avoid this, you can explicitly create a copy of the array, > either by doing: > > PAS = MYMAP[:,hhh,:].copy() > > or: > > PAS1 = PAS.copy() > > Eric > > -- > Eric Hermes > J.R. Schmidt Group > Chemistry Department > University of Wisconsin - Madison > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Eric Hermes J.R. Schmidt Group Chemistry Department University of Wisconsin - Madison -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ehermes at chem.wisc.edu Tue Mar 4 15:25:48 2014 From: ehermes at chem.wisc.edu (Eric Hermes) Date: Tue, 04 Mar 2014 14:25:48 -0600 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: References: <531631D6.4080805@chem.wisc.edu> Message-ID: <531636CC.4010406@chem.wisc.edu> On 3/4/2014 2:18 PM, Gabriele Brambilla wrote: > So if I do: > > B2 = PAS1 > Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) > B2 = Dx2 > > B1 = PAS1 > Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene]) > B1 = Dx1 > > Am I applying the smooth two times? > No. Based on the code snippets you've posted so far, this is what is happening, with comments: PAS = MYMAP[:,hhh,:] # PAS now refers to a slice of array MYMAP PAS1 = PAS # PAS1 now refers to a slice of array MYMAP B2 = PAS1 # B2 now refers to a slice of array MYMAP Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) # Dx2 is a NEW array B2 = Dx2 # B2 now refers to Dx2. PAS1 *still* refers to a slice of array MYMAP B1 = PAS1 # B1 now refers to a slice of array MYMAP Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene])# Dx1 is a NEW array B1 = Dx1 # B1 now refers to Dx1 Eric > 2014-03-04 15:13 GMT-05:00 Gabriele Brambilla > >: > > But the changes I make on PAS=MYMAP[:, hhh. :] will be applied to > G=MYMAP[:, :, :]? > > Gabriele > > > 2014-03-04 15:07 GMT-05:00 Gabriele Brambilla > >: > > thanks. > > Gabriele > > > 2014-03-04 15:04 GMT-05:00 Eric Hermes >: > > On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: >> Hi, >> >> Something strange happens in my code: I have added the >> lines after #... >> >> MYMAP is a 3d numpy array >> PAS = MYMAP[:, hhh, :] >> >> #multiplying per energy bin width >> PAS1 = PAS >> for l in nIph: >> PAS1[l,:] = PAS1[l,:]*zz >> >> But this affect the result I get when I elaborate the >> data from: >> >> EXCT = MYMAP[:, hhh, :] >> >> is python continuing to modify the original matrix?? 
>> >> Thanks >> >> Gabriele >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > Gabriele, > > Numpy arrays are passed as references, so in your code > when you modify PAS1 (which inherits the reference to > MYMAP from PAS), the changes propogates upwards into > MYMAP. If you want to avoid this, you can explicitly > create a copy of the array, either by doing: > > PAS = MYMAP[:,hhh,:].copy() > > or: > > PAS1 = PAS.copy() > > Eric > > -- > Eric Hermes > J.R. Schmidt Group > Chemistry Department > University of Wisconsin - Madison > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Eric Hermes J.R. Schmidt Group Chemistry Department University of Wisconsin - Madison -------------- next part -------------- An HTML attachment was scrubbed... URL: From gb.gabrielebrambilla at gmail.com Tue Mar 4 15:46:30 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Tue, 4 Mar 2014 15:46:30 -0500 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: <531636CC.4010406@chem.wisc.edu> References: <531631D6.4080805@chem.wisc.edu> <531636CC.4010406@chem.wisc.edu> Message-ID: Thanks, I think I have understood: it passes the reference only when I do something like A = B; if I do A = c() I create a new object. But why does Python work this way? What is the use of this behaviour?
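A short, self-contained demonstration of the distinction being discussed. The array names mirror the snippets in this thread, but the shapes and contents are made up for illustration:

```python
import numpy as np

# Hypothetical data standing in for the thread's 3-D MYMAP array.
MYMAP = np.ones((3, 4, 5))

PAS = MYMAP[:, 1, :]      # slicing a numpy array returns a *view*
PAS1 = PAS                # plain assignment just rebinds a name
print(PAS1 is PAS)                      # True: same object
print(np.may_share_memory(PAS, MYMAP))  # True: the view shares MYMAP's data

PAS1[0, :] *= 2.0         # in-place modification through the view...
print(MYMAP[0, 1, 0])     # ...changes MYMAP too (now 2.0)

EXCT = MYMAP[:, 1, :].copy()  # .copy() gives an independent array
EXCT[0, :] = -1.0
print(MYMAP[0, 1, 0])     # MYMAP is unaffected (still 2.0)
```

Rebinding a name (e.g. `B2 = Dx2`) never touches the underlying data; only in-place operations on a view do.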
Gabriele 2014-03-04 15:25 GMT-05:00 Eric Hermes : > On 3/4/2014 2:18 PM, Gabriele Brambilla wrote: > > So if I do: > > B2 = PAS1 > Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) > B2 = Dx2 > > B1 = PAS1 > Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene]) > B1 = Dx1 > > Am I applying the smooth two times? > > No. Based on the code snippets you've posted so far, this is what is > happening, with comments: > > PAS = MYMAP[:,hhh,:] # PAS now > refers to a slice of array MYMAP > PAS1 = PAS # PAS1 now > refers to a slice of array MYMAP > > B2 = PAS1 # B2 now > refers to a slice of array MYMAP > Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) # Dx2 is a NEW > array > B2 = Dx2 # B2 now > refers to Dx2. PAS1 *still* refers to a slice of array MYMAP > B1 = PAS1 # B1 now refers > to a slice of array MYMAP > Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene]) # Dx1 is a > NEW array > B1 = Dx1 # B1 now > refers to Dx1 > > Eric > > 2014-03-04 15:13 GMT-05:00 Gabriele Brambilla < > gb.gabrielebrambilla at gmail.com>: > >> But the changes I make on PAS=MYMAP[:, hhh. :] will be applied to >> G=MYMAP[:, :, :]? >> >> Gabriele >> >> >> 2014-03-04 15:07 GMT-05:00 Gabriele Brambilla < >> gb.gabrielebrambilla at gmail.com>: >> >> thanks. >>> >>> Gabriele >>> >>> >>> 2014-03-04 15:04 GMT-05:00 Eric Hermes : >>> >>> On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: >>>> >>>> Hi, >>>> >>>> Something strange happens in my code: I have added the lines after >>>> #... >>>> >>>> MYMAP is a 3d numpy array >>>> PAS = MYMAP[:, hhh, :] >>>> >>>> #multiplying per energy bin width >>>> PAS1 = PAS >>>> for l in nIph: >>>> PAS1[l,:] = PAS1[l,:]*zz >>>> >>>> >>>> But this affect the result I get when I elaborate the data from: >>>> >>>> EXCT = MYMAP[:, hhh, :] >>>> >>>> is python continuing to modify the original matrix?? 
>>>> >>>> Thanks >>>> >>>> Gabriele >>>> >>>> >>>> _______________________________________________ >>>> SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> Gabriele, >>>> >>>> Numpy arrays are passed as references, so in your code when you modify >>>> PAS1 (which inherits the reference to MYMAP from PAS), the changes >>>> propogates upwards into MYMAP. If you want to avoid this, you can >>>> explicitly create a copy of the array, either by doing: >>>> >>>> PAS = MYMAP[:,hhh,:].copy() >>>> >>>> or: >>>> >>>> PAS1 = PAS.copy() >>>> >>>> Eric >>>> >>>> -- >>>> Eric Hermes >>>> J.R. Schmidt Group >>>> Chemistry Department >>>> University of Wisconsin - Madison >>>> >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>> >> > > > _______________________________________________ > SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user > > > -- > Eric Hermes > J.R. Schmidt Group > Chemistry Department > University of Wisconsin - Madison > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehermes at chem.wisc.edu Tue Mar 4 15:52:04 2014 From: ehermes at chem.wisc.edu (Eric Hermes) Date: Tue, 04 Mar 2014 14:52:04 -0600 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: References: <531631D6.4080805@chem.wisc.edu> <531636CC.4010406@chem.wisc.edu> Message-ID: <53163CF4.9030204@chem.wisc.edu> On 3/4/2014 2:46 PM, Gabriele Brambilla wrote: > thanks, > I think I have understand: > It pass the references only when I do something like A=B. > if I do A=c() I create a new variable. > > But why Python works in this way? which utility has this behaviour? 
> > Gabriele > I am only speaking about numpy arrays. For example: B = np.array([...]) A = B makes A a reference to B. Modifications to A affect B, and vice versa. If, however, you do: B = np.array([...]) A = B.copy() A will instead be a copy of A, and they can be changed independently. This is not true of python datatypes in general. For example: B = [...] A = B In this case A and B can be independently modifed without affecting each other. This is because python lists are passed by value, unlike numpy arrays which are passed by reference. That is, saying "A = B" means something different depending on whether B is a list or a numpy array. I hope this cleared some things up, Eric > > 2014-03-04 15:25 GMT-05:00 Eric Hermes >: > > On 3/4/2014 2:18 PM, Gabriele Brambilla wrote: >> So if I do: >> >> B2 = PAS1 >> Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) >> B2 = Dx2 >> >> B1 = PAS1 >> Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene]) >> B1 = Dx1 >> >> Am I applying the smooth two times? >> > No. Based on the code snippets you've posted so far, this is what > is happening, with comments: > > PAS = MYMAP[:,hhh,:] # PAS now refers to a slice of array MYMAP > PAS1 = PAS # PAS1 now refers to a slice of array MYMAP > > B2 = PAS1 # B2 now refers to a slice of array MYMAP > Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) # Dx2 > is a NEW array > B2 = Dx2 # B2 now refers to Dx2. PAS1 *still* refers > to a slice of array MYMAP > B1 = PAS1 # B1 now refers to a slice of array MYMAP > Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene])# Dx1 > is a NEW array > B1 = Dx1 # B1 now refers to Dx1 > > Eric >> 2014-03-04 15:13 GMT-05:00 Gabriele Brambilla >> > >: >> >> But the changes I make on PAS=MYMAP[:, hhh. :] will be >> applied to G=MYMAP[:, :, :]? >> >> Gabriele >> >> >> 2014-03-04 15:07 GMT-05:00 Gabriele Brambilla >> > >: >> >> thanks. 
>> >> Gabriele >> >> >> 2014-03-04 15:04 GMT-05:00 Eric Hermes >> >: >> >> On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: >>> Hi, >>> >>> Something strange happens in my code: I have added >>> the lines after #... >>> >>> MYMAP is a 3d numpy array >>> PAS = MYMAP[:, hhh, :] >>> >>> #multiplying per energy bin width >>> PAS1 = PAS >>> for l in nIph: >>> PAS1[l,:] = PAS1[l,:]*zz >>> >>> But this affect the result I get when I elaborate >>> the data from: >>> >>> EXCT = MYMAP[:, hhh, :] >>> >>> is python continuing to modify the original matrix?? >>> >>> Thanks >>> >>> Gabriele >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> Gabriele, >> >> Numpy arrays are passed as references, so in your >> code when you modify PAS1 (which inherits the >> reference to MYMAP from PAS), the changes propogates >> upwards into MYMAP. If you want to avoid this, you >> can explicitly create a copy of the array, either by >> doing: >> >> PAS = MYMAP[:,hhh,:].copy() >> >> or: >> >> PAS1 = PAS.copy() >> >> Eric >> >> -- >> Eric Hermes >> J.R. Schmidt Group >> Chemistry Department >> University of Wisconsin - Madison >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > -- > Eric Hermes > J.R. 
Schmidt Group > Chemistry Department > University of Wisconsin - Madison > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Eric Hermes J.R. Schmidt Group Chemistry Department University of Wisconsin - Madison -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehermes at chem.wisc.edu Tue Mar 4 15:53:34 2014 From: ehermes at chem.wisc.edu (Eric Hermes) Date: Tue, 04 Mar 2014 14:53:34 -0600 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: <53163CF4.9030204@chem.wisc.edu> References: <531631D6.4080805@chem.wisc.edu> <531636CC.4010406@chem.wisc.edu> <53163CF4.9030204@chem.wisc.edu> Message-ID: <53163D4E.9000208@chem.wisc.edu> On Tuesday, March 04, 2014 2:52:04 PM, Eric Hermes wrote: > > On 3/4/2014 2:46 PM, Gabriele Brambilla wrote: >> thanks, >> I think I have understand: >> It pass the references only when I do something like A=B. >> if I do A=c() I create a new variable. >> >> But why Python works in this way? which utility has this behaviour? >> >> Gabriele >> > I am only speaking about numpy arrays. For example: > > B = np.array([...]) > A = B > > makes A a reference to B. Modifications to A affect B, and vice > versa. If, however, you do: > > B = np.array([...]) > A = B.copy() > > A will instead be a copy of A, and they can be changed independently. Correction: A will instead be a copy of *B* > > This is not true of python datatypes in general. For example: > > B = [...] > A = B > > In this case A and B can be independently modifed without affecting > each other. This is because python lists are passed by value, unlike > numpy arrays which are passed by reference.
That is, saying "A = B" > means something different depending on whether B is a list or a numpy > array. > > I hope this cleared some things up, > Eric >> >> 2014-03-04 15:25 GMT-05:00 Eric Hermes > >: >> >> On 3/4/2014 2:18 PM, Gabriele Brambilla wrote: >>> So if I do: >>> >>> B2 = PAS1 >>> Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) >>> B2 = Dx2 >>> >>> B1 = PAS1 >>> Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene]) >>> B1 = Dx1 >>> >>> Am I applying the smooth two times? >>> >> No. Based on the code snippets you've posted so far, this is what >> is happening, with comments: >> >> PAS = MYMAP[:,hhh,:] # PAS now refers to a slice of array MYMAP >> PAS1 = PAS # PAS1 now refers to a slice of array MYMAP >> >> B2 = PAS1 # B2 now refers to a slice of array MYMAP >> Dx2 = ndimage.gaussian_filter(B2, sigma=[nsmopha, nsmoene]) # Dx2 >> is a NEW array >> B2 = Dx2 # B2 now refers to Dx2. PAS1 *still* refers >> to a slice of array MYMAP >> B1 = PAS1 # B1 now refers to a slice of array MYMAP >> Dx1 = ndimage.gaussian_filter(B1, sigma=[nsmopha, nsmoene])# Dx1 >> is a NEW array >> B1 = Dx1 # B1 now refers to Dx1 >> >> Eric >>> 2014-03-04 15:13 GMT-05:00 Gabriele Brambilla >>> >> >: >>> >>> But the changes I make on PAS=MYMAP[:, hhh. :] will be >>> applied to G=MYMAP[:, :, :]? >>> >>> Gabriele >>> >>> >>> 2014-03-04 15:07 GMT-05:00 Gabriele Brambilla >>> >> >: >>> >>> thanks. >>> >>> Gabriele >>> >>> >>> 2014-03-04 15:04 GMT-05:00 Eric Hermes >>> >: >>> >>> On 3/4/2014 2:00 PM, Gabriele Brambilla wrote: >>>> Hi, >>>> >>>> Something strange happens in my code: I have added >>>> the lines after #... >>>> >>>> MYMAP is a 3d numpy array >>>> PAS = MYMAP[:, hhh, :] >>>> >>>> #multiplying per energy bin width >>>> PAS1 = PAS >>>> for l in nIph: >>>> PAS1[l,:] = PAS1[l,:]*zz >>>> >>>> But this affect the result I get when I elaborate >>>> the data from: >>>> >>>> EXCT = MYMAP[:, hhh, :] >>>> >>>> is python continuing to modify the original matrix?? 
>>>> >>>> Thanks >>>> >>>> Gabriele >>>> >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> Gabriele, >>> >>> Numpy arrays are passed as references, so in your >>> code when you modify PAS1 (which inherits the >>> reference to MYMAP from PAS), the changes propogates >>> upwards into MYMAP. If you want to avoid this, you >>> can explicitly create a copy of the array, either by >>> doing: >>> >>> PAS = MYMAP[:,hhh,:].copy() >>> >>> or: >>> >>> PAS1 = PAS.copy() >>> >>> Eric >>> >>> -- >>> Eric Hermes >>> J.R. Schmidt Group >>> Chemistry Department >>> University of Wisconsin - Madison >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> -- >> Eric Hermes >> J.R. Schmidt Group >> Chemistry Department >> University of Wisconsin - Madison >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > -- > Eric Hermes > J.R. Schmidt Group > Chemistry Department > University of Wisconsin - Madison -- Eric Hermes J.R. Schmidt Group Chemistry Department University of Wisconsin - Madison From jniehof at lanl.gov Tue Mar 4 15:55:18 2014 From: jniehof at lanl.gov (Jonathan T. 
Niehof) Date: Tue, 04 Mar 2014 13:55:18 -0700 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: <53163CF4.9030204@chem.wisc.edu> References: <531631D6.4080805@chem.wisc.edu> <531636CC.4010406@chem.wisc.edu> <53163CF4.9030204@chem.wisc.edu> Message-ID: <53163DB6.1050604@lanl.gov> On 03/04/2014 01:52 PM, Eric Hermes wrote: > This is not true of python datatypes in general. For example: > > B = [...] > A = B > > In this case A and B can be independently modifed without affecting each > other. This is because python lists are passed by value, unlike numpy > arrays which are passed by reference. >>> a = [1, 2, 3] >>> b = a >>> b[1] = 99 >>> a [1, 99, 3] Python variables are essentially references. -- Jonathan Niehof ISR-3 Space Data Systems Los Alamos National Laboratory MS-D466 Los Alamos, NM 87545 Phone: 505-667-9595 email: jniehof at lanl.gov Correspondence / Technical data or Software Publicly Available From ehermes at chem.wisc.edu Tue Mar 4 15:56:34 2014 From: ehermes at chem.wisc.edu (Eric Hermes) Date: Tue, 04 Mar 2014 14:56:34 -0600 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: <53163DB6.1050604@lanl.gov> References: <531631D6.4080805@chem.wisc.edu> <531636CC.4010406@chem.wisc.edu> <53163CF4.9030204@chem.wisc.edu> <53163DB6.1050604@lanl.gov> Message-ID: <53163E02.8040608@chem.wisc.edu> On 3/4/2014 2:55 PM, Jonathan T. Niehof wrote: > On 03/04/2014 01:52 PM, Eric Hermes wrote: >> This is not true of python datatypes in general. For example: >> >> B = [...] >> A = B >> >> In this case A and B can be independently modifed without affecting each >> other. This is because python lists are passed by value, unlike numpy >> arrays which are passed by reference. > >>> a = [1, 2, 3] > >>> b = a > >>> b[1] = 99 > >>> a > [1, 99, 3] > > Python variables are essentially references. > You are absolutely correct, I apologize for making such an egregious mistake. Eric -- Eric Hermes J.R. 
Schmidt Group Chemistry Department University of Wisconsin - Madison From jniehof at lanl.gov Tue Mar 4 16:01:46 2014 From: jniehof at lanl.gov (Jonathan T. Niehof) Date: Tue, 04 Mar 2014 14:01:46 -0700 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: <53163E02.8040608@chem.wisc.edu> References: <531631D6.4080805@chem.wisc.edu> <531636CC.4010406@chem.wisc.edu> <53163CF4.9030204@chem.wisc.edu> <53163DB6.1050604@lanl.gov> <53163E02.8040608@chem.wisc.edu> Message-ID: <53163F3A.9050202@lanl.gov> There is a difference in that Python list slicing usually *does* return copies, vs. views for numpy arrays: >>> a = [1,2,3] >>> b = a[1:] >>> b[0] = 99 >>> a [1, 2, 3] but: >>> a = numpy.array([1, 2, 3]) >>> b = a[1:] >>> b[0] = 99 >>> a array([ 1, 99, 3]) Gabriele, I hope we're being illuminating here rather than more obscure...these can be irritating points. -- Jonathan Niehof ISR-3 Space Data Systems Los Alamos National Laboratory MS-D466 Los Alamos, NM 87545 Phone: 505-667-9595 email: jniehof at lanl.gov Correspondence / Technical data or Software Publicly Available From gb.gabrielebrambilla at gmail.com Tue Mar 4 16:37:35 2014 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Tue, 4 Mar 2014 16:37:35 -0500 Subject: [SciPy-User] problems with numpy array assignment In-Reply-To: <53163F3A.9050202@lanl.gov> References: <531631D6.4080805@chem.wisc.edu> <531636CC.4010406@chem.wisc.edu> <53163CF4.9030204@chem.wisc.edu> <53163DB6.1050604@lanl.gov> <53163E02.8040608@chem.wisc.edu> <53163F3A.9050202@lanl.gov> Message-ID: Yes, thank you. I read the last 4 emails all together, so you have not confused me. I understand that I should pay attention to this aspect: I was used to passing everything by value... Thanks Gabriele 2014-03-04 16:01 GMT-05:00 Jonathan T. Niehof : > There is a difference in that Python list slicing usually *does* return > copies, vs.
views for numpy arrays:
> >>> a = [1,2,3]
> >>> b = a[1:]
> >>> b[0] = 99
> >>> a
> [1, 2, 3]
>
> but:
> >>> a = numpy.array([1, 2, 3])
> >>> b = a[1:]
> >>> b[0] = 99
> >>> a
> array([ 1, 99, 3])
>
> Gabriele, I hope we're being illuminating here rather than more > obscure...these can be irritating points. > > -- > Jonathan Niehof > ISR-3 Space Data Systems > Los Alamos National Laboratory > MS-D466 > Los Alamos, NM 87545 > > Phone: 505-667-9595 > email: jniehof at lanl.gov > > Correspondence / > Technical data or Software Publicly Available > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ralf.gommers at gmail.com Wed Mar 5 14:37:06 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 5 Mar 2014 20:37:06 +0100 Subject: [SciPy-User] EuroSciPy 2014 Call for Abstracts Message-ID: Dear all, EuroSciPy 2014, the Seventh Annual Conference on Python in Science, takes place in Cambridge, UK on 27 - 30 August 2014. The conference features two days of tutorials followed by two days of scientific talks. The day after the main conference, developer sprints will be organized on projects of interest to attendees. The topics presented at EuroSciPy are very diverse, with a focus on advanced software engineering and original uses of Python and its scientific libraries, either in theoretical or experimental research, from both academia and the industry. The program includes keynotes, contributed talks and posters. Submissions for talks and posters are welcome on our website (http://www.euroscipy.org/2014/). In your abstract, please provide details on what Python tools are being employed, and how. The deadline for submission is 14 April 2014. Also until 14 April 2014, you can apply for a sprint session on 31 August 2014. See https://www.euroscipy.org/2014/calls/sprints/ for details.
Important dates:
April 14th: Presentation abstracts, poster, tutorial submission deadline. Application for sponsorship deadline.
May 17th: Speakers selected
May 22nd: Sponsorship acceptance deadline
June 1st: Speaker schedule announced
June 6th, or 150 registrants: Early-bird registration ends
August 27-31st: 2 days of tutorials, 2 days of conference, 1 day of sprints

We look forward to an exciting conference and hope to see you in Cambridge in August! The EuroSciPy 2014 Team http://www.euroscipy.org/2014/

Conference Chairs -------------------------- Mark Hayes, Cambridge University, UK Didrik Pinte, Enthought Europe, UK Tutorial Chair ------------------- David Cournapeau, Enthought Europe, UK Program Chair -------------------- Ralf Gommers, ASML, The Netherlands Program Committee ----------------------------- Tiziano Zito, Humboldt-Universität zu Berlin, Germany Pierre de Buyl, Université libre de Bruxelles, Belgium Emmanuelle Gouillart, Joint Unit CNRS/Saint-Gobain, France Konrad Hinsen, Centre National de la Recherche Scientifique (CNRS), France Raphael Ritz, Garching Computing Centre of the Max Planck Society, Germany Stéfan van der Walt, Applied Mathematics, Stellenbosch University, South Africa Gaël Varoquaux, INRIA Parietal, Saclay, France Nelle Varoquaux, Mines ParisTech, France Pauli Virtanen, Aalto University, Finland Evgeni Burovski, Lancaster University, UK Robert Cimrman, New Technologies Research Centre, University of West Bohemia, Czech Republic Almar Klein, Cybermind, The Netherlands Organizing Committee ------------------------------ Simon Jagoe, Enthought Europe, UK Pierre de Buyl, Université libre de Bruxelles, Belgium -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anthony.j.mannucci at jpl.nasa.gov Thu Mar 6 20:54:51 2014 From: anthony.j.mannucci at jpl.nasa.gov (Mannucci, Anthony J (335G)) Date: Fri, 7 Mar 2014 01:54:51 +0000 Subject: [SciPy-User] Using scipy matrices for low-dimensional problems Message-ID: I want to write a least squares solver or Kalman filter that can handle varying numbers of parameters, including just one. I have found this fails in the scipy routines (e.g. linalg.lstsq). The reason appears to be that 1x1 matrices (or arrays) are not considered matrices. I believe a 1x1 matrix can be viewed as a legitimate square matrix. Is there a workaround that would permit me to use 1x1 matrices in my calculations? Scipy now complains that I have not passed in a matrix and will not apply the algorithm. Thank you. -Tony -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://scienceandtechnology.jpl.nasa.gov/people/a_mannucci/ Pasadena, CA 91109 -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Thu Mar 6 21:09:48 2014 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 6 Mar 2014 21:09:48 -0500 Subject: [SciPy-User] Using scipy matrices for low-dimensional problems In-Reply-To: References: Message-ID: On 3/6/14, Mannucci, Anthony J (335G) wrote: > I want to write a least squares solver or Kalman filter that can handle > varying numbers of parameters, including just one. I have found this fails > in the scipy routines (e.g. linalg.lstsq). The reason appears to be that 1x1 > matrices (or arrays) are not considered matrices. I believe a 1x1 matrix can > be viewed as a legitimate square matrix. Is there a workaround that would > permit me to use 1x1 matrices in my calculations? 
Scipy now complains that I > have not passed in a matrix and will not apply the algorithm. Thank you. > Tony, This works for me: In [28]: from scipy.linalg import lstsq In [29]: a = np.array([[2.0]]) In [30]: b = np.array([1.0]) In [31]: lstsq(a, b) Out[31]: (array([ 0.5]), array([], dtype=float64), 1, array([ 2.])) In [32]: import scipy In [33]: scipy.__version__ Out[33]: '0.13.2' It will not work if `a` is not explicitly a 2-dimensional array. E.g. In [38]: a = np.array([2.0]) # This is not a 2D array! In [39]: lstsq(a, b) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () ----> 1 lstsq(a, b) /home/warren/anaconda/lib/python2.7/site-packages/scipy/linalg/basic.pyc in lstsq(a, b, cond, overwrite_a, overwrite_b, check_finite) 509 a1,b1 = map(np.asarray, (a,b)) 510 if len(a1.shape) != 2: --> 511 raise ValueError('expected matrix') 512 m, n = a1.shape 513 if len(b1.shape) == 2: ValueError: expected matrix So be sure `a` is really 2-dimensional. Warren > -Tony > > -- > Tony Mannucci > Supervisor, Ionospheric and Atmospheric Remote Sensing Group > Mail-Stop 138-308, Tel > (818) 354-1699 > Jet Propulsion Laboratory, Fax > (818) 393-5115 > California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov > 4800 Oak Grove Drive, > http://scienceandtechnology.jpl.nasa.gov/people/a_mannucci/ > Pasadena, CA 91109 > > From jgomezdans at gmail.com Fri Mar 7 06:01:09 2014 From: jgomezdans at gmail.com (Jose Gomez-Dans) Date: Fri, 7 Mar 2014 11:01:09 +0000 Subject: [SciPy-User] Using scipy matrices for low-dimensional problems In-Reply-To: References: Message-ID: Hi, On 7 March 2014 02:09, Warren Weckesser wrote: > > It will not work if `a` is not explicitly a 2-dimensional array. E.g. > > > In [38]: a = np.array([2.0]) # This is not a 2D array! > > In [39]: lstsq(a, b) > I find that using np.atleast_2d (a) is a good and easy solution to this problem. 
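A minimal sketch of that suggestion (illustrative only; np.atleast_2d promotes the 1-D array to the 2-D shape that scipy.linalg.lstsq expects):

```python
import numpy as np
from scipy.linalg import lstsq

a = np.array([2.0])    # 1-D: passing this directly raises "expected matrix"
b = np.array([1.0])

a2 = np.atleast_2d(a)  # shape (1, 1) -- now a legitimate 1x1 matrix
x, residues, rank, sv = lstsq(a2, b)
# x is approximately [0.5], the least-squares solution of 2*x = 1
```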
Jose -------------- next part -------------- An HTML attachment was scrubbed... URL:

From josef.pktd at gmail.com Sat Mar 8 13:30:39 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 8 Mar 2014 13:30:39 -0500 Subject: [SciPy-User] non-linear function collection ? Message-ID: I'm trying out again some examples for nonlinear estimation. statsmodels still doesn't have nonlinear leastsquares, but now I'm trying out robust estimation, e.g. https://groups.google.com/d/msg/pystatsmodels/DPibQlUJmRA/arRlamlNivcJ Question since other packages have much more support for this: Is there a collection of frequently used non-linear functions? including analytical derivatives, and self-starting, automatically created starting values for numerical optimization? Josef

example based mostly on a few stackoverflow questions

def sigmoid(params, x):
    x0, y0, c, k = params
    y = c / (1. + np.exp(-k * (x - x0))) + y0
    return y

def sigmoid_deriv(params, x):
    x0, y0, c, k = params
    term = np.exp(-k * (x - x0))
    denom = 1. / (1 + term)
    denom2 = denom**2
    dx0 = - c * denom2 * term * k
    dy0 = np.ones(x.shape[0])
    dc = denom
    dk = c * denom2 * term * (x - x0)
    return np.column_stack([dx0, dy0, dc, dk])

def sig_start(y, x):
    return np.median(x), np.median(y), y.max(), np.corrcoef(x, y)[0, 1]

and my current usage (not fully cleaned up for example, in general some of the parameters will depend on explanatory variables eg. c = x dot beta) exog is lingo for x

class SigmoidNL(MEstimatorHD):

    def predict(self, params, exog=None, linear=False):
        if exog is None:
            exog = self.exog
        xb = np.dot(exog, params[:self.exog.shape[1]])
        if linear:
            return xb
        else:
            return sigmoid(params[:4], exog)

    def _predict_jac(self, params, exog=None):
        if exog is None:
            exog = self.exog
        from statsmodels.tools.numdiff import approx_fprime
        return approx_fprime(params[:4], self.predict)

    def predict_jac(self, params, exog=None, linear=False):
        if exog is None:
            exog = self.exog
        if linear:
            # doesn't make sense in this case
            return exog
        else:
            return sigmoid_deriv(params[:4], exog)

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From newville at cars.uchicago.edu Sat Mar 8 18:15:07 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Sat, 8 Mar 2014 17:15:07 -0600 Subject: [SciPy-User] non-linear function collection ? In-Reply-To: References: Message-ID: Hi Josef, On Sat, Mar 8, 2014 at 12:30 PM, wrote: > I'm trying out again some examples for nonlinear estimation. > > statsmodels still doesn't have nonlinear leastsquares, but now I'm trying > out robust estimation, e.g. > https://groups.google.com/d/msg/pystatsmodels/DPibQlUJmRA/arRlamlNivcJ > > Question since other packages have much more support for this: > > Is there a collection of frequently used non-linear functions? > including analytical derivatives, and self-starting, automatically created > starting values for numerical optimization? We're trying to do something along these lines with lmfit-py, with the goal of providing easy-to-use "simple fitting models". We haven't really settled yet on the best final design (and there is some duplicated efforts), but we'd be open for suggestions. Currently, you might find the code at https://github.com/lmfit/lmfit-py/blob/master/lmfit/model.py and https://github.com/lmfit/lmfit-py/blob/master/lmfit/models1d.py useful.
An attempt at 'canonical definitions' of such simple functions (inevitably incomplete) is at https://github.com/lmfit/lmfit-py/blob/rationalize_models/lmfit/utilfuncs.py (note: non-master branch) The code in models1d.py above does have automated initial guesses for parameter values. We haven't (yet?) added analytic derivatives, but that could be done. In the lmfit approach, analytic derivatives are made extra challenging since each Parameter may be fixed, bounded, or constrained as an expression of other Parameters. Hope that helps, --Matt

From projetmbc at gmail.com Mon Mar 10 05:39:20 2014 From: projetmbc at gmail.com (Christophe Bal) Date: Mon, 10 Mar 2014 10:39:20 +0100 Subject: [SciPy-User] SVG logo of SciPy Message-ID: Hello, is there an SVG version of the SciPy logos? It would be used on my website. Christophe BAL -------------- next part -------------- An HTML attachment was scrubbed... URL:

From giurrero at gmail.com Tue Mar 11 09:30:48 2014 From: giurrero at gmail.com (Ruggero) Date: Tue, 11 Mar 2014 14:30:48 +0100 Subject: [SciPy-User] itemfreq dtype Message-ID: Dear experts, I have a sparse array of integers: x = np.arange(10, dtype=int) (this is not sparse, but it doesn't matter). I would like to use itemfreq as freq = scipy.stats.itemfreq(x) The problem is that it returns a float64 Nx2 array (check freq.dtype). Why is this? Why does itemfreq not follow the dtype of the input? The documentation says that bincount works better for integers; the problem is that I have a sparse array. What about a dtype(input.dtype, float_64) for the output? Cheers, Ruggero

From thomas_unterthiner at web.de Tue Mar 11 10:00:54 2014 From: thomas_unterthiner at web.de (Thomas Unterthiner) Date: Tue, 11 Mar 2014 15:00:54 +0100 Subject: [SciPy-User] Using GEMM with specified output-array Message-ID: <531F1716.7090409@web.de> Hi there! I would like to use *GEMM without allocating an output-array.
So I used the functions from scipy.linalg.blas, however it seems as if the 'overwrite_c' argument gets completely ignored:

from scipy.linalg.blas import sgemm
X = np.random.randn(30, 50).astype(np.float32)
Y = np.random.randn(50, 30).astype(np.float32)
Z = np.zeros((X.shape[0], Y.shape[1]), dtype=np.float32)
res = sgemm(1.0, X, Y, c=Z, trans_a=0, trans_b=0, overwrite_c=1)
assert (res == np.dot(X, Y)).all()
print res is Z

prints "False", meaning a new array is allocated for the result. However, I want my result to be written into 'Z'. Before anyone asks: I can't use np.dot's 'out' argument since I also want to specify 'alpha'. Is there anything I'm missing here? (I'm using scipy 0.13.3 / numpy 1.8.0 with OpenBLAS on Ubuntu 13.10). All the best Thomas

From sturla.molden at gmail.com Tue Mar 11 14:23:27 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Tue, 11 Mar 2014 18:23:27 +0000 (UTC) Subject: [SciPy-User] Using GEMM with specified output-array References: <531F1716.7090409@web.de> Message-ID: <262222894416254307.684582sturla.molden-gmail.com@news.gmane.org> Thomas Unterthiner wrote:
> Hi there!
>
> I would like to use *GEMM without allocating an output-array. So I used
> the functions from scipy.linalg.blas, however it seems as if the
> 'overwrite_c' argument gets completely ignored:
>
> from scipy.linalg.blas import sgemm
> X = np.random.randn(30, 50).astype(np.float32)
> Y = np.random.randn(50, 30).astype(np.float32)
> Z = np.zeros((X.shape[0], Y.shape[1]), dtype=np.float32)
> res = sgemm(1.0, X, Y, c=Z, trans_a=0, trans_b=0, overwrite_c=1)
> assert (res == np.dot(X, Y)).all()
> print res is Z
>
> prints "False", meaning a new array is allocated for the result.
> However, I want my result to be written into 'Z'. Before anyone asks: I
> can't use np.dot's 'out' argument since I also want to specify 'alpha'.
>
> Is there anything I'm missing here? (I'm using scipy 0.13.3 / numpy
> 1.8.0 with OpenBLAS on Ubuntu 13.10).
You are passing C contiguous arrays to Fortran. f2py will copy and transpose them. If you want to avoid this, always pass Fortran order arrays to scipy.linalg.* functions. You create a Fortran order view of a C array by taking .T (transpose is O(1) in NumPy) or create the array with keyword argument order='F'. Check that .flags['F_CONTIGUOUS'] is True for all the arrays you pass to scipy.linalg. The "overwrite_c" option in GEMM is just a suggestion. It is not enforced. If f2py has to copy and transpose your C input array it does nothing. Sturla From ralf.gommers at gmail.com Tue Mar 11 15:22:35 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 11 Mar 2014 20:22:35 +0100 Subject: [SciPy-User] 2014 John Hunter Fellowship - Call for Applications Message-ID: Hi all, I'm excited to announce, on behalf of the Numfocus board, that applications for the 2014 John Hunter Technology Fellowship are now being accepted. This is the first fellowship Numfocus is able to offer, which we see as a significant milestone. The John Hunter Technology Fellowship aims to bridge the gap between academia and real-world, open-source scientific computing projects by providing a capstone experience for individuals coming from a scientific, engineering or mathematics background. The program consists of a 6 month project-based training program for postdoctoral scientists or senior graduate students. Fellows work on scientific computing open source projects under the guidance of mentors who are leading scientists and software engineers. The aim of the Fellowship is to enable Fellows to develop the skills needed to contribute to cutting-edge open source software projects while at the same time advancing or supporting the research program they and their mentor are involved in. 
While proposals in any area of science and engineering are welcome, the following areas are encouraged in particular: - Accessible and reproducible computing - Enabling technology for open access publishing - Infrastructural technology supporting open-source scientific software stacks - Core open-source projects promoted by NumFOCUS Eligible applicants are postdoctoral scientists or senior PhD students, or have equivalent experience in physics, mathematics, engineering, statistics, or a related science. The program is open to applicants from any nationality and can be performed at any university or institute world-wide (US export laws permitting). All applications are due May 15, 2014 by 11:59 p.m. Central Standard Time. For more details on the program see: http://numfocus.org/john_hunter_fellowship_2014.html (this call) http://numfocus.org/fellowships.html (program) And for some background see this blog post: http://numfocus.org/announcing-the-numfocus-technology-fellowship-program.html We're looking forward to receiving your applications! Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ziyuang at gmail.com Tue Mar 11 17:38:32 2014 From: ziyuang at gmail.com (ZiYuan) Date: Tue, 11 Mar 2014 23:38:32 +0200 Subject: [SciPy-User] Cannot find BLAS on a machine with MKL when installing scipy Message-ID: Dear all, I installed Intel MKL and other libraries for a customized numpy. Here is my ~/.numpy-site.cfg: [DEFAULT] library_dirs = /usr/lib:/usr/local/lib include_dirs = /usr/include:/usr/local/include [mkl] library_dirs = /opt/intel/mkl/lib/intel64/ include_dirs = /opt/intel/mkl/include/ mkl_libs = mkl_rt lapack_libs = [amd] amd_libs = amd [umfpack] umfpack_libs = umfpack [djbfft] include_dirs = /usr/local/djbfft/include library_dirs = /usr/local/djbfft/lib This configuration file seems OK during the installation of numpy. 
But when I was installing scipy via pip3 install scipy, it reported that numpy.distutils.system_info.BlasNotFoundError: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. In my mind MKL is an implementation of Blas so just mentioning MKL should be fine. I've tried 1. export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH?? 2. export BLAS=/opt/intel/mkl/lib/intel64 3. Copy the content in the [mkl] section and paste into the [blas] section in the file ~/.numpy-site.cfg But none of these works. So what is going wrong? Does scipy respect ~/.numpy-site.cfg? Thank you. Best, Ziyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas_unterthiner at web.de Wed Mar 12 06:21:34 2014 From: thomas_unterthiner at web.de (Thomas Unterthiner) Date: Wed, 12 Mar 2014 11:21:34 +0100 Subject: [SciPy-User] Using GEMM with specified output-array In-Reply-To: <262222894416254307.684582sturla.molden-gmail.com@news.gmane.org> References: <531F1716.7090409@web.de> <262222894416254307.684582sturla.molden-gmail.com@news.gmane.org> Message-ID: <5320352E.9020306@web.de> On 2014-03-11 19:23, Sturla Molden wrote: > Thomas Unterthiner wrote: >> Hi there! >> >> I would like to use *GEMM without allocating an output-array. So I used >> the functions from scipy.linalg.blas, however it seems as if the >> 'overwrite_c' argument gets completely ignored: >> >> from scipy.linalg.blas import sgemm >> X = np.random.randn(30, 50).astype(np.float32) >> Y = np.random.randn(50, 30).astype(np.float32) >> Z = np.zeros((X.shape[0], Y.shape[1]), dtype=np.float32) >> res = f(1.0, X, Y, c=Z, trans_a=0, trans_b=0, overwrite_c=1) >> assert res == np.dot(X, Y).all() >> print res is Z >> >> prints "False", meaning a new array is allocated for the result. >> However, I want my result to be written into 'Z'. 
Before anyone asks: I >> can't use np.dot's 'out' argument since I also want to specify 'alpha'. >> >> Is there anything I'm missing here? (I'm using scipy 0.13.3 / numpy >> 1.8.0 with OpenBLAS on Ubuntu 13.10). > > You are passing C contiguous arrays to Fortran. f2py will copy and > transpose them. If you want to avoid this, always pass Fortran order arrays > to scipy.linalg.* functions. You create a Fortran order view of a C array > by taking .T (transpose is O(1) in NumPy) or create the array with keyword > argument order='F'. Check that .flags['F_CONTIGUOUS'] is True for all the > arrays you pass to scipy.linalg. > > The "overwrite_c" option in GEMM is just a suggestion. It is not enforced. > If f2py has to copy and transpose your C input array it does nothing. > > Sturla > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user

Thanks, that was helpful! :) I tried:

from scipy.linalg.blas import sgemm
X = np.random.randn(30, 50).astype(np.float32).T
Y = np.random.randn(50, 30).astype(np.float32).T
Z = np.zeros((X.shape[0], Y.shape[1]), dtype=np.float32).T
res = sgemm(1.0, Y, X, c=Z, trans_a=0, trans_b=0, overwrite_c=1)
assert (res.T == np.dot(X.T, Y.T)).all(), "not equal"
assert (X.flags['F_CONTIGUOUS'], Y.flags['F_CONTIGUOUS'], Z.flags['F_CONTIGUOUS']) == (True, True, True)
print res is Z

However, this gives me "failed in converting 2nd keyword `c' of _fblas.sgemm to C/Fortran array". I assume this is because Z.flags['OWNDATA'] is False, due to the transpose. Is that correct? The problem is that other parts of my program expect all my matrices in row-major order, so using `order="F"` is not really an option for me. Is there any way around this?
Cheers Thomas From sturla.molden at gmail.com Wed Mar 12 09:36:20 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 12 Mar 2014 13:36:20 +0000 (UTC) Subject: [SciPy-User] Using GEMM with specified output-array References: <531F1716.7090409@web.de> <262222894416254307.684582sturla.molden-gmail.com@news.gmane.org> <5320352E.9020306@web.de> Message-ID: <436722578416322915.690962sturla.molden-gmail.com@news.gmane.org> Thomas Unterthiner wrote: > However, this gives me "failed in converting 2nd keyword `c' of > _fblas.sgemm to C/Fortran array". I assume this is because > Z.flags['OWNDATA'] is False, due to the transpose. Is that correct? I am not sure. Only Pearu knows all the details about f2py. In general f2py is written to produce correct results, not generate the most efficient interface code. > The problem is that other parts of my program expect all my matrices in > row-major order, so sing `order="F"` is not really an option for me. > Is there any way around this? Call cblas_sgemm from Cython or use numpy.dot would be the easiest. If you need to grab the BLAS SGEMM function from SciPy, sgemm._cpointer is a PyCObject that encapsulates a function pointer to the Fortran SGEMM subroutine. You can e.g. use this in Cython to call SGEMM without the f2py layer. This requires that you know the ABI of the Fortran compiler used to build BLAS. I would also like to add that if you use the Fortran 2003 ISO C bindings, it is almost impossible to avoid that the compiler generates temporary copies of your arrays. If you cast C pointers to Fortran pointers with f_c_pointer, the aliasing rules in Fortran mandates that copy-in copy-out is used when you call SGEMM with the Fortran pointers as dummy variables. 
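The contiguity points made earlier in this thread are easy to verify directly; a small illustrative numpy snippet (not part of the original exchange):

```python
import numpy as np

a = np.zeros((3, 4), dtype=np.float32)   # C (row-major) order by default
print(a.flags['F_CONTIGUOUS'])           # False

# .T is an O(1) view: same data buffer, now Fortran order
print(a.T.flags['F_CONTIGUOUS'])         # True

# or allocate in Fortran order from the start
b = np.zeros((3, 4), dtype=np.float32, order='F')
print(b.flags['F_CONTIGUOUS'])           # True
```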
Sturla

From thomas_unterthiner at web.de Wed Mar 12 10:25:20 2014 From: thomas_unterthiner at web.de (Thomas Unterthiner) Date: Wed, 12 Mar 2014 15:25:20 +0100 Subject: [SciPy-User] Using GEMM with specified output-array In-Reply-To: <436722578416322915.690962sturla.molden-gmail.com@news.gmane.org> References: <531F1716.7090409@web.de> <262222894416254307.684582sturla.molden-gmail.com@news.gmane.org> <5320352E.9020306@web.de> <436722578416322915.690962sturla.molden-gmail.com@news.gmane.org> Message-ID: <53206E50.4080007@web.de> On 2014-03-12 14:36, Sturla Molden wrote: > Thomas Unterthiner wrote: > >> However, this gives me "failed in converting 2nd keyword `c' of >> _fblas.sgemm to C/Fortran array". I assume this is because >> Z.flags['OWNDATA'] is False, due to the transpose. Is that correct? > I am not sure. Only Pearu knows all the details about f2py. > > In general f2py is written to produce correct results, not generate the > most efficient interface code. > >> The problem is that other parts of my program expect all my matrices in >> row-major order, so using `order="F"` is not really an option for me. >> Is there any way around this? > Call cblas_sgemm from Cython or use numpy.dot would be the easiest. numpy.dot was my starting point; I wanted to improve upon `np.dot(X, Y, out=Z); Z *= alpha` by calling SGEMM directly (and using the 'alpha' variable). I hadn't thought about going to Cython. Is it possible to access CBLAS functions instead of Fortran BLAS from within numpy/scipy, or do those have the same problems anyway (i.e., do
Cheers Thomas From sturla.molden at gmail.com Wed Mar 12 10:51:22 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 12 Mar 2014 14:51:22 +0000 (UTC) Subject: [SciPy-User] Using GEMM with specified output-array References: <531F1716.7090409@web.de> <262222894416254307.684582sturla.molden-gmail.com@news.gmane.org> <5320352E.9020306@web.de> <436722578416322915.690962sturla.molden-gmail.com@news.gmane.org> <53206E50.4080007@web.de> Message-ID: <353481097416327745.568300sturla.molden-gmail.com@news.gmane.org> Thomas Unterthiner wrote: > numpy.dot was my starting point, I wanted to improte upon `np.dot(X, Y > out=Z); Z *= alpha` by calling SGEMM directly (and using the 'alpha' > variable). Except for the alpha variable, this will gain you next to nothing. NumPy's _dotblas module is as fast as it gets. > I hadn't thought about going to Cython. > Is it possible to access CBLAS functions instead fortran BLAS from > within numpy/scipy, It is not possible. You must link the BLAS library directly (e.g. MKL, OpenBLAS, ACML, ATLAS, or Apple Accelerate Framework) to use cblas. If you don't have MKL or a Mac, building OpenBLAS is easy. Don't waste your time on ATLAS, ACML or Netlib reference BLAS. Here is a cblas_dgemm benchmark on my laptop: https://twitter.com/nedlom/status/437427557919891457 > or do those have the same problems anyway(i.e., do > they expect Fortran-order arrays)? cblas allows the data ordering to be specified. 
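For reference, the all-Fortran-order route suggested earlier in this thread can be sketched like this (an illustrative sketch against scipy.linalg.blas.sgemm, not code from the original exchange; whether the overwrite is honored depends on the arrays really being F-contiguous float32):

```python
import numpy as np
from scipy.linalg.blas import sgemm

# Allocate every operand in Fortran order so f2py should not need to copy.
a = np.asfortranarray(np.random.randn(30, 50).astype(np.float32))
b = np.asfortranarray(np.random.randn(50, 30).astype(np.float32))
c = np.zeros((30, 30), dtype=np.float32, order='F')

# Computes 0.5 * a.dot(b); with overwrite_c=1 the result should land in c's buffer.
res = sgemm(0.5, a, b, c=c, overwrite_c=1)
assert np.allclose(res, 0.5 * np.dot(a, b), rtol=1e-3, atol=1e-3)
```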
Sturla From njs at pobox.com Wed Mar 12 18:04:36 2014 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 12 Mar 2014 22:04:36 +0000 Subject: [SciPy-User] Matrix multiplication infix operator PEP nearly ready to go Message-ID: Hi all, The proposal to add an infix operator to Python for matrix multiplication is nearly ready for its debut on python-ideas; so if you want to look it over first, just want to check out where it's gone, then now's a good time: https://github.com/numpy/numpy/pull/4351 The basic idea here is to try to make the strongest argument we can for the simplest extension that we actually want, and then whether it gets accepted or rejected at least we'll know that's final. Absolutely all comments and feedback welcome. Cheers, -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org From josef.pktd at gmail.com Fri Mar 14 09:42:32 2014 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 14 Mar 2014 09:42:32 -0400 Subject: [SciPy-User] non-linear function collection ? In-Reply-To: References: Message-ID: On Sat, Mar 8, 2014 at 6:15 PM, Matt Newville wrote: > Hi Josef, > > On Sat, Mar 8, 2014 at 12:30 PM, wrote: > > I'm trying out again some examples for nonlinear estimation. > > > > statsmodels still doesn't have nonlinear leastsquares, but now I'm trying > > out robust estimation, e.g. > > https://groups.google.com/d/msg/pystatsmodels/DPibQlUJmRA/arRlamlNivcJ > > > > Question since other packages have much more support for this: > > > > Is there a collection of frequently used non-linear functions? > > including analytical derivatives, and self-starting, automatically > created > > starting values for numerical optimization? > > We're trying to do something along these lines with lmfit-py, with the > goal of providing easy-to-use "simple fitting models". We haven't > really settled yet on the best final design (and there is some > duplicated efforts), but we'd be open for suggestions. 
Currently, you > might find the code at > https://github.com/lmfit/lmfit-py/blob/master/lmfit/model.py > > and > https://github.com/lmfit/lmfit-py/blob/master/lmfit/models1d.py > > useful. > > An attempt at 'canonical definitions' of such simple functions > (inevitably incomplete) is at > > https://github.com/lmfit/lmfit-py/blob/rationalize_models/lmfit/utilfuncs.py > (note: non-master branch) > > The code in models1d.py above does have automated initial guesses for > parameter values. We haven't (yet?) added analytic derivatives, but > that could be done. In the lmfit approach, analytic derivatives are > made extra challenging since each Parameter may be fixed, bounded, or > constrained as an expression of other Parameters. > Thanks Matt, that's what I was looking for. Sorry for the late response, I'm getting too side-tracked these days. I'm still trying to figure out how to get non-linear models into all or many of the estimation models that statsmodels has or should get, and what the statistics of it are. My main interest right now is robust estimators. Visiting some older packages again: http://astropy.readthedocs.org/en/latest/modeling/#module-astropy.modeling.functional_models has also derivatives http://astropy.readthedocs.org/en/latest/api/astropy.modeling.functional_models.Beta1D.html#astropy.modeling.functional_models.Beta1D.fit_deriv zunzun/pyeq2 has the largest collection of functions that I know, but it's a bit hard to read because it supports the website and code generation. For example https://code.google.com/p/pyeq2/source/browse/trunk#trunk%2FModels_2D I was just looking at non-linear models again, and my preferred solution for statsmodels would be to free-ride on some of these function collections by adding a wrapper for compatibility. I don't know much about which kind of non-linear functions users are using.
I would be more interested in modelling when some of the parameters depend on explanatory variables, for example the maximum and the speed of growth in the sigmoid as a function of a linear combination of explanatory variables. For example: statsmodels has a collection of monotonic one-parameter functions/transformations that are used as link functions in generalized linear models, y = f(eta) where eta = x dot beta: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/families/links.py They define the function, the inverse function, plus both derivatives. For derivatives: I was using in my examples explicitly coded chain rules, and using numerical derivatives for those pieces for which I didn't want to figure out or hardcode the derivatives. I didn't look at parameter transformation for bounds yet, but I guess it can also be done by chaining, although that can get tricky. If my quick browsing is correct, you have the derivatives already: https://github.com/lmfit/lmfit-py/blob/35502f74e12a1f4155c2311d4530c38c7cc04293/lmfit/parameter.py#L156 I guess you use the derivatives of the bounding transformation in the covariance calculation. Josef > > Hope that helps, > > --Matt > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From tmp50 at ukr.net Sat Mar 15 08:44:48 2014 From: tmp50 at ukr.net (Dmitrey) Date: Sat, 15 Mar 2014 14:44:48 +0200 Subject: [SciPy-User] [ANN] OpenOpt Suite release 0.53: Stochastic programming addon now is BSD-licensed Message-ID: <1394887377.137996545.15dx925h@frv44.fwdcdn.com> hi all, I'm glad to inform you about the new OpenOpt Suite release 0.53:
- Stochastic programming addon is now available for free
- Some minor changes
-------------------------------------------------- Regards, D.
http://openopt.org/Dmitrey -------------- next part -------------- An HTML attachment was scrubbed... URL: From newville at cars.uchicago.edu Sat Mar 15 10:08:31 2014 From: newville at cars.uchicago.edu (Matt Newville) Date: Sat, 15 Mar 2014 09:08:31 -0500 Subject: [SciPy-User] non-linear function collection ? In-Reply-To: References: Message-ID: Hi Josef, On Fri, Mar 14, 2014 at 8:42 AM, wrote: > > > On Sat, Mar 8, 2014 at 6:15 PM, Matt Newville > wrote: >> >> Hi Josef, >> >> On Sat, Mar 8, 2014 at 12:30 PM, wrote: >> > I'm trying out again some examples for nonlinear estimation. >> > >> > statsmodels still doesn't have nonlinear leastsquares, but now I'm >> > trying >> > out robust estimation, e.g. >> > https://groups.google.com/d/msg/pystatsmodels/DPibQlUJmRA/arRlamlNivcJ >> > >> > Question since other packages have much more support for this: >> > >> > Is there a collection of frequently used non-linear functions? >> > including analytical derivatives, and self-starting, automatically >> > created >> > starting values for numerical optimization? >> >> We're trying to do something along these lines with lmfit-py, with the >> goal of providing easy-to-use "simple fitting models". We haven't >> really settled yet on the best final design (and there is some >> duplicated efforts), but we'd be open for suggestions. Currently, you >> might find the code at >> https://github.com/lmfit/lmfit-py/blob/master/lmfit/model.py >> >> and >> https://github.com/lmfit/lmfit-py/blob/master/lmfit/models1d.py >> >> useful. >> >> An attempt at 'canonical definitions' of such simple functions >> (inevitably incomplete) is at >> >> https://github.com/lmfit/lmfit-py/blob/rationalize_models/lmfit/utilfuncs.py >> (note: non-master branch) >> >> The code in models1d.py above does have automated initial guesses for >> parameter values. We haven't (yet?) added analytic derivatives, but >> that could be done. 
In the lmfit approach, analytic derivatives are >> made extra challenging since each Parameter may be fixed, bounded, or >> constrained as an expression of other Parameters. > > > Thanks Matt, that's what I was looking for. > > Sorry for the late response, I'm getting too side tracked these days. > I'm still trying to figure out how to get non-linear models into all or many > of the estimation models that statsmodels has or should get, and what the > statistics of it are. My main interest right now are robust estimators. No problem for the delayed response. > visiting some older packages again: > > http://astropy.readthedocs.org/en/latest/modeling/#module-astropy.modeling.functional_models > has also derivatives > http://astropy.readthedocs.org/en/latest /api/astropy.modeling.functional_models.Beta1D.html#astropy.modeling.functional_models.Beta1D.fit_deriv > This does look similar in aim, and worth further study. > zunzun/pyeq2 has the largest collection of functions that I know, but it's a > bit hard to read because it supports the website and code generation. > for example > https://code.google.com/p/pyeq2/source/browse/trunk#trunk%2FModels_2D > Yes, that's a very large collection. Personally, I would rather emphasize robust, canonical definitions of the most used functions (and a mechanism for adding more) over sheer quantity. Perhaps the zunzun/pyeq2 collection has grown that way and each of the functions has an important use case. The zunzun website is certainly useful and instructive, but I think I wouldn't want to support that many functions. > I' was just looking at non-linear models again, and my preferred solution > for statsmodels would be to free-ride on some of these functions collections > by adding a wrapper for compatibility. I don't know much about which kind of > non-linear functions users are using. 
> I would be more interested in modelling when some of the parameters depend > on explanatory variables, for example the maximum and the speed of growth in > the sigmoid as function of a linear combination of explanatory variables. > > for example: > statsmodels has a collection of monotonic one parameter > functions/transformations that are used as link functions in generalized > linear models. y = f(eta) where eta = x dot beta > https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/families/links.py > they define function, inverse function plus both derivatives. > > for derivatives: I was using in my examples explicitly coded chain rules, > and using numerical derivatives for those pieces for which I didn't want to > figure out or hardcode the derivatives. > I didn't look at parameter transformation for bounds yet, but I guess it can > also be done by chaining, although that can get tricky Agreed. > If my quick browsing is correct, you have the derivatives already > https://github.com/lmfit/lmfit-py/blob/35502f74e12a1f4155c2311d4530c38c7cc04293/lmfit/parameter.py#L156 > I guess you use the derivatives of the bounding transformation in the > covariance calculation. The code there (borrowed from JJ Helmus' leastsqbound) is to transform covariance matrix from unconstrained to box-constrained values. Lmfit does support user-provided derivative functions (the 'Dfun' argument for scipy.optimize.leastsq() and 'jac' argument for scipy.optimize.scalar_minimize()), including support for bounded parameters . I wouldn't call this really well tested (in a statistical sense), but I'm not aware of any problems with this. The 'built-in' models don't (yet?) use this, but that's definitely worth exploring. 
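The box-constraint handling discussed here (borrowed from leastsqbound) maps each bounded "external" parameter onto an unconstrained "internal" one; the derivative of that mapping is the factor used when rescaling the unconstrained covariance back to the bounded parameters. A minimal sketch of the MINPACK-style sine transform for two-sided bounds (hypothetical helper names, not lmfit's actual API):

```python
import numpy as np

def external_to_internal(val, lo, hi):
    """Map a value in (lo, hi) to an unconstrained internal parameter."""
    return np.arcsin(2.0 * (val - lo) / (hi - lo) - 1.0)

def internal_to_external(x, lo, hi):
    """Inverse map: any real x lands back inside the (lo, hi) box."""
    return lo + (np.sin(x) + 1.0) * (hi - lo) / 2.0

def grad_scale(x, lo, hi):
    """d(external)/d(internal): the chain-rule factor applied when
    transforming the unconstrained covariance to the bounded space."""
    return np.cos(x) * (hi - lo) / 2.0
```

Chaining user-supplied derivatives through `grad_scale` is the "tricky but doable" part mentioned above.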
--Matt From ralf.gommers at gmail.com Sat Mar 15 13:01:12 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 15 Mar 2014 18:01:12 +0100 Subject: [SciPy-User] SVG logo of SciPy In-Reply-To: References: Message-ID: On Mon, Mar 10, 2014 at 10:39 AM, Christophe Bal wrote: > > Hello, > is there a SVG version of the SciPy logos ? This would be to be used on my > website. > Someone's got to have them, but they're not in the scipy.org repo. Anyone know who created the original? The closest I could find is Tony Yu's ScipyCentral logo: https://github.com/tonysyu/SciPy-Central-Logo Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Mar 16 16:57:41 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 16 Mar 2014 21:57:41 +0100 Subject: [SciPy-User] ANN: Scipy 0.14.0 beta 1 release Message-ID: Hi, I'm pleased to announce the availability of the first beta release of Scipy0.14.0. Please try this beta and report any issues on the scipy-dev mailing list. Source tarballs, binaries and the full release notes can be found at http://sourceforge.net/projects/scipy/files/scipy/0.14.0b1/. Part of the release notes copied below. A big thank you to everyone who contributed to this release! Ralf SciPy 0.14.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.14.x branch, and on adding new features on the master branch. This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.5.1 or greater. 
New features ============ ``scipy.interpolate`` improvements ---------------------------------- A new wrapper function `scipy.interpolate.interpn` for interpolation on regular grids has been added. `interpn` supports linear and nearest-neighbor interpolation in arbitrary dimensions and spline interpolation in two dimensions. Faster implementations of piecewise polynomials in power and Bernstein polynomial bases have been added as `scipy.interpolate.PPoly` and `scipy.interpolate.BPoly`. New users should use these in favor of `scipy.interpolate.PiecewisePolynomial`. `scipy.interpolate.interp1d` now accepts non-monotonic inputs and sorts them. If performance is critical, sorting can be turned off by using the new ``assume_sorted`` keyword. Functionality for evaluation of bivariate spline derivatives in ``scipy.interpolate`` has been added. The new class `scipy.interpolate.Akima1DInterpolator` implements the piecewise cubic polynomial interpolation scheme devised by H. Akima. Functionality for fast interpolation on regular, unevenly spaced grids in arbitrary dimensions has been added as `scipy.interpolate.RegularGridInterpolator` . ``scipy.linalg`` improvements ----------------------------- The new function `scipy.linalg.dft` computes the matrix of the discrete Fourier transform. A condition number estimation function for matrix exponential, `scipy.linalg.expm_cond`, has been added. ``scipy.optimize`` improvements ------------------------------- A set of benchmarks for optimize, which can be run with ``optimize.bench()``, has been added. `scipy.optimize.curve_fit` now has more controllable error estimation via the ``absolute_sigma`` keyword. Support for passing custom minimization methods to ``optimize.minimize()`` and ``optimize.minimize_scalar()`` has been added, currently useful especially for combining ``optimize.basinhopping()`` with custom local optimizer routines. 
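To illustrate the new ``absolute_sigma`` keyword mentioned above, here is a minimal sketch with synthetic data (not taken from the release notes):

```python
import numpy as np
from scipy.optimize import curve_fit

def line(x, a, b):
    return a * x + b

x = np.linspace(0.0, 10.0, 50)
y = 3.0 * x + 1.0
sigma = np.full_like(x, 0.5)

# With absolute_sigma=True the entries of `sigma` are treated as true
# 1-sigma measurement errors, so the returned covariance `pcov` is NOT
# rescaled by the reduced chi-square of the fit.
popt, pcov = curve_fit(line, x, y, sigma=sigma, absolute_sigma=True)
```

Only the scaling of ``pcov`` changes; the fitted parameters in ``popt`` are the same with either setting.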
``scipy.stats`` improvements ---------------------------- A new class `scipy.stats.multivariate_normal` with functionality for multivariate normal random variables has been added. A lot of work on the ``scipy.stats`` distribution framework has been done. Moment calculations (skew and kurtosis mainly) are fixed and verified, all examples are now runnable, and many small accuracy and performance improvements for individual distributions were merged. The new function `scipy.stats.anderson_ksamp` computes the k-sample Anderson-Darling test for the null hypothesis that k samples come from the same parent population. ``scipy.signal`` improvements ----------------------------- ``scipy.signal.iirfilter`` and related functions to design Butterworth, Chebyshev, elliptical and Bessel IIR filters now all use pole-zero ("zpk") format internally instead of using transformations to numerator/denominator format. The accuracy of the produced filters, especially high-order ones, is improved significantly as a result. The new function `scipy.signal.vectorstrength` computes the vector strength, a measure of phase synchrony, of a set of events. ``scipy.special`` improvements ------------------------------ The functions `scipy.special.boxcox` and `scipy.special.boxcox1p`, which compute the Box-Cox transformation, have been added. ``scipy.sparse`` improvements ----------------------------- - Significant performance improvement in CSR, CSC, and DOK indexing speed. - When using Numpy >= 1.9 (to be released in MM 2014), sparse matrices function correctly when given to arguments of ``np.dot``, ``np.multiply`` and other ufuncs. With earlier Numpy and Scipy versions, the results of such operations are undefined and usually unexpected. - Sparse matrices are no longer limited to ``2^31`` nonzero elements. They automatically switch to using 64-bit index data type for matrices containing more elements. 
User code written assuming the sparse matrices use int32 as the index data type will continue to work, except for such large matrices. Code dealing with larger matrices needs to accept either int32 or int64 indices. Deprecated features =================== ``anneal`` ---------- The global minimization function `scipy.optimize.anneal` is deprecated. All users should use the `scipy.optimize.basinhopping` function instead. ``scipy.stats`` --------------- ``randwcdf`` and ``randwppf`` functions are deprecated. All users should use distribution-specific ``rvs`` methods instead. Probability calculation aliases ``zprob``, ``fprob`` and ``ksprob`` are deprecated. Use instead the ``sf`` methods of the corresponding distributions or the ``special`` functions directly. ``scipy.interpolate`` --------------------- ``PiecewisePolynomial`` class is deprecated. Backwards incompatible changes ============================== scipy.special.lpmn ------------------ ``lpmn`` no longer accepts complex-valued arguments. A new function ``clpmn`` with uniform complex analytic behavior has been added, and it should be used instead. scipy.sparse.linalg ------------------- Eigenvectors in the case of generalized eigenvalue problem are normalized to unit vectors in 2-norm, rather than following the LAPACK normalization convention. The deprecated UMFPACK wrapper in ``scipy.sparse.linalg`` has been removed due to license and install issues. If available, ``scikits.umfpack`` is still used transparently in the ``spsolve`` and ``factorized`` functions. Otherwise, SuperLU is used instead in these functions. scipy.stats ----------- The deprecated functions ``glm``, ``oneway`` and ``cmedian`` have been removed from ``scipy.stats``. ``stats.scoreatpercentile`` now returns an array instead of a list of percentiles. 
scipy.interpolate ----------------- The API for computing derivatives of a monotone piecewise interpolation has changed: if `p` is a ``PchipInterpolator`` object, `p.derivative(der)` returns a callable object representing the derivative of `p`. For in-place derivatives use the second argument of the `__call__` method: `p(0.1, der=2)` evaluates the second derivative of `p` at `x=0.1`. The method `p.derivatives` has been removed. Authors ======= * Marc Abramowitz + * andbo + * Vincent Arel-Bundock + * Petr Baudis + * Max Bolingbroke * François Boulogne * Matthew Brett * Lars Buitinck * Evgeni Burovski * CJ Carey + * Thomas A Caswell + * Pawel Chojnacki + * Phillip Cloud + * Stefano Costa + * David Cournapeau * Dapid + * Matthieu Dartiailh + * Christoph Deil + * Jörg Dietrich + * endolith * Francisco de la Peña + * Ben FrantzDale + * Jim Garrison + * André Gaul * Christoph Gohlke * Ralf Gommers * Robert David Grant * Alex Griffing * Blake Griffith * Yaroslav Halchenko * Andreas Hilboll * Kat Huang * Gert-Ludwig Ingold * jamestwebber + * Dorota Jarecka + * Todd Jennings + * Thouis (Ray) Jones * Juan Luis Cano Rodríguez * ktritz + * Jacques Kvam + * Eric Larson + * Justin Lavoie + * Denis Laxalde * Jussi Leinonen + * lemonlaug + * Tim Leslie * Alain Leufroy + * George Lewis + * Max Linke + * Brandon Liu + * Benny Malengier + * Matthias Kümmerer + * Cimarron Mittelsteadt + * Eric Moore * Andrew Nelson + * Niklas Hambüchen + * Joel Nothman + * Clemens Novak * Emanuele Olivetti + * Stefan Otte + * peb + * Josef Perktold * pjwerneck * Andrew Sczesnak + * poolio * Jérôme Roy + * Carl Sandrock + * Shauna + * Fabrice Silva * Daniel B. Smith * Patrick Snape + * Thomas Spura + * Jacob Stevenson * Julian Taylor * Tomas Tomecek * Richard Tsai * Joris Vankerschaver + * Pauli Virtanen * Warren Weckesser A total of 78 people contributed to this release. People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kulas at mpia.de Mon Mar 17 15:52:51 2014 From: kulas at mpia.de (Martin Kulas) Date: Mon, 17 Mar 2014 12:52:51 -0700 Subject: [SciPy-User] thread-safe scipy.optimize.leastsq Message-ID: <53275293.30403@mpia.de> Hi scipy users! My project is using scipy.optimize.leastsq() in a multi-threaded applications Several active objects invoke that methods concurrently. As a result the following errors occurrs: [...] TypeError: gaussian() takes exactly 7 arguments (2 given) SystemError: null argument to internal routine [...] File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 364, in leastsq gtol, maxfev, epsfcn, factor, diag) error: Error occurred while calling the Python function named File "/path/to/some_algo.py", line 219, in _fitpy ret= optimize.leastsq(errorfunction, params, full_output=1, ftol=1) ---------------------------------------------------------------------- File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 364, in leastsq gtol, maxfev, epsfcn, factor, diag) Ran 1 test in 0.262serror: Internal error constructing argument list. [...] Is it safe to invoke scipy.optimize.lastsq() from several threads? if not: Which scipy functions are thread-safe? Thank you in advance! Ciao Martin From sturla.molden at gmail.com Mon Mar 17 21:18:30 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Tue, 18 Mar 2014 01:18:30 +0000 (UTC) Subject: [SciPy-User] thread-safe scipy.optimize.leastsq References: <53275293.30403@mpia.de> Message-ID: <62686254416793844.582523sturla.molden-gmail.com@news.gmane.org> Hello Martin, The underlying Fortran 77 routines (MINPACK lmder.f and lmdif.f) are not reentrant, so the GIL cannot be released. (Thus no chance of parallel processing with threads.) But if the Python wrapper is not thread-safe, that would count as a bug. 
All of SciPy should be thread-safe from the Python side. Looking at the code it seems the MINPACK optimizers (not just leastsq) in SciPy are not thread-safe from Python. This is for two reasons: First, the MINPACK subroutines are not reentrant (due to static data). Second, there are three variables storing global states in the _minpack module: https://github.com/scipy/scipy/blob/master/scipy/optimize/minpack.h This means that function calls to _minpack.* from minpack.py can conflict with each other unless the access is synchronized. Are they synchronized? It seems the GIL is used for synchronization of _minpack consistently. There is no other synchronization except for the GIL. Possibly Travis thought this would be enough. It is not! A thread-switch can happen in the interpreter while running the Python callback functions. This will allow another Python thread to enter the non-reentrant MINPACK Fortran functions and also alter the global states in the _minpack module. Depending on when and where this happens, we can get a Python exception, an errorneous answer, a memory leak, or a segfault (yes, all of those are possible). The solution is very simple: Protect all calls to _minpack.* functions in the minpack.py module with a global threading.Lock mutex: https://github.com/scipy/scipy/blob/master/scipy/optimize/minpack.py That will prevent other Python threads from entering the _minpack module and MINPACK Fortran subroutines if a thread-switch happened while running the Python callbacks. This is a bug in SciPy. Report it on the bug tracker on Github. Sturla Martin Kulas wrote: > Hi scipy users! > > My project is using scipy.optimize.leastsq() in a multi-threaded > applications > Several active objects invoke that methods concurrently. > As a result the following errors occurrs: > > [...] > TypeError: gaussian() takes exactly 7 arguments (2 given) > SystemError: null argument to internal routine > > > [...] 
> File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", > line 364, in leastsq > gtol, maxfev, epsfcn, factor, diag) > error: Error occurred while calling the Python function named > File "/path/to/some_algo.py", line 219, in _fitpy > > ret= optimize.leastsq(errorfunction, params, full_output=1, ftol=1) > ---------------------------------------------------------------------- > File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line > 364, in leastsq > > gtol, maxfev, epsfcn, factor, diag) > Ran 1 test in 0.262serror: Internal error constructing argument list. > > [...] > > > Is it safe to invoke scipy.optimize.lastsq() from several threads? > if not: > Which scipy functions are thread-safe? > > Thank you in advance! > > Ciao > Martin From cjyxiaodi1 at gmail.com Thu Mar 20 11:04:06 2014 From: cjyxiaodi1 at gmail.com (Chia Jing Yi) Date: Thu, 20 Mar 2014 23:04:06 +0800 Subject: [SciPy-User] Scipy Installation Problem Asking Message-ID: Hi, I plan to plot a sashimi plot to view the alternative splicing event of my interested "gene" by using MISO. After follow the following link, http://genes.mit.edu/burgelab/miso/docs/and read through the forum. Unfortunately, I still face some problems to run the complete set of MISO successful. I can run some of the python script but I still fail to run some of it :( It shown the following error message when I try some of the MISO python script (compare_miso.py, run_miso.py, sashimi_plot.py, etc). *.* *.* *.* *ImportError: No module named _ufuncs* I can successful run some of the important MISO script (index_gff.py, sam_to_bam.py). I'm downloading everything by using Python-2.7. I suspect it might due to numpy or scipy installation problem. Unfortunately, I still fail to figure it out :( I have try to use Python-3.3. But it will show the following error message when I try to run all the python script under MISO : *.* *.* *SyntaxError: invalid syntax* I have browse through forum and seqanswer etc. 
Unfortunately I still fail to figure out why it happen :( I have to manual download all the package and install it separately as I don't have the purposely through access network through server. It is due to the network security of server at University. Thanks and looking forward to hear from any of you. best regards edge -------------- next part -------------- An HTML attachment was scrubbed... URL: From mutantturkey at gmail.com Thu Mar 20 12:43:55 2014 From: mutantturkey at gmail.com (Calvin Morrison) Date: Thu, 20 Mar 2014 12:43:55 -0400 Subject: [SciPy-User] [ANN] PyFeast v1.1 Released Message-ID: PyFeast, the feature selection toolkit for Python has just released a bugfix release v1.1: https://github.com/mutantturkey/PyFeast/releases/tag/v1.1 Changes: - Upstream changes to FSToolbox and MIToolbox. - Improved safe memory handling for large datasets. - native MIM algorithm instead of wrapping Beta Gamma - Mutual Information Maximization algorithm has improved performance If you're using a current version of PyFeast please upgrade. Algorithm Application: We're currently using PyFeast to identify bacteria (ie. taxa) to differentiate multiple phenotype in microbial ecology. https://github.com/EESI/Fizzy/ Thanks, Calvin Morrison -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Fri Mar 21 02:08:05 2014 From: tsyu80 at gmail.com (Tony Yu) Date: Fri, 21 Mar 2014 01:08:05 -0500 Subject: [SciPy-User] SVG logo of SciPy In-Reply-To: References: Message-ID: On Sat, Mar 15, 2014 at 12:01 PM, Ralf Gommers wrote: > > > > On Mon, Mar 10, 2014 at 10:39 AM, Christophe Bal wrote: > >> >> Hello, >> is there a SVG version of the SciPy logos ? This would be to be used on >> my website. >> > > Someone's got to have them, but they're not in the scipy.org repo. Anyone > know who created the original? 
> > The closest I could find is Tony Yu's ScipyCentral logo: > https://github.com/tonysyu/SciPy-Central-Logo > FWIW, I looked around for an SVG version of the SciPy logo about 3 years ago, while making the scikit-image logo. I ended up not finding anything better than a PNG of the logo and manually traced the geometry to get something for the logo. That said, it appears that Nicolas Rougier either created(-his-own) or found an SVG version of the SciPy logo (it's embedded in a larger image, but shouldn't be too hard to extract): http://webloria.loria.fr/~rougier/tmp/scipy.svg Cheers, -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From projetmbc at gmail.com Fri Mar 21 07:30:17 2014 From: projetmbc at gmail.com (Christophe Bal) Date: Fri, 21 Mar 2014 12:30:17 +0100 Subject: [SciPy-User] SVG logo of SciPy In-Reply-To: References: Message-ID: Thanks for this. I will try to colorize this. Christophe BAL Le 21 mars 2014 07:08, "Tony Yu" a ?crit : > > > > On Sat, Mar 15, 2014 at 12:01 PM, Ralf Gommers wrote: > >> >> >> >> On Mon, Mar 10, 2014 at 10:39 AM, Christophe Bal wrote: >> >>> >>> Hello, >>> is there a SVG version of the SciPy logos ? This would be to be used on >>> my website. >>> >> >> Someone's got to have them, but they're not in the scipy.org repo. >> Anyone know who created the original? >> >> The closest I could find is Tony Yu's ScipyCentral logo: >> https://github.com/tonysyu/SciPy-Central-Logo >> > > FWIW, I looked around for an SVG version of the SciPy logo about 3 years > ago, while making the scikit-image logo. I ended up not finding anything > better than a PNG of the logo and manually traced the geometry to get > something for the logo. 
> > That said, it appears that Nicolas Rougier either created(-his-own) or > found an SVG version of the SciPy logo (it's embedded in a larger image, > but shouldn't be too hard to extract): > > http://webloria.loria.fr/~rougier/tmp/scipy.svg > > Cheers, > -Tony > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Mon Mar 24 15:23:13 2014 From: jsseabold at gmail.com (Skipper Seabold) Date: Mon, 24 Mar 2014 15:23:13 -0400 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 Message-ID: Hi, More feeling around in the dark. The docs [1] state that the low-level functions scipy.linalg.blas are new in 0.12.0. However, >>> import scipy as sp >>> sp.__version__ '0.9.0' >>> import scipy.linalg.blas >>> scipy.linalg.blas.cblas.dgemm._cpointer What am I missing? Is this just my build and it's not guaranteed? Can I still reliably do things like this [2,3] with scipy < 0.12.0? Skipper [1] http://docs.scipy.org/doc/scipy/reference/linalg.blas.html [2] http://www.mail-archive.com/numpy-discussion at scipy.org/msg40554.html [3] https://github.com/ChadFulton/pykalman_filter/blob/master/kalman/kalman_filter.pyx#L28 From pav at iki.fi Mon Mar 24 16:17:59 2014 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 24 Mar 2014 22:17:59 +0200 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: References: Message-ID: 24.03.2014 21:23, Skipper Seabold kirjoitti: > More feeling around in the dark. The docs [1] state that the > low-level functions scipy.linalg.blas are new in 0.12.0. They're new as public, documented functions. Writing code targeting also earlier Scipy versions is possible in principle. 
> However, > >>>> import scipy as sp sp.__version__ > '0.9.0' >>>> import scipy.linalg.blas >>>> scipy.linalg.blas.cblas.dgemm._cpointer > This is backward and forward compatible: dgemm, = get_blas_funcs(['gemm',], [np.array([1.0])]) Note that `.cblas` is not available at all on all platforms, and the functions it contains vary with Scipy versions. Also, both `.cblas` and `.fblas` submodules are deprecated. -- Pauli Virtanen From pav at iki.fi Mon Mar 24 16:40:52 2014 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 24 Mar 2014 22:40:52 +0200 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: References: Message-ID: 24.03.2014 21:23, Skipper Seabold kirjoitti: [clip] > [3] https://github.com/ChadFulton/pykalman_filter/blob/master/kalman/kalman_filter.pyx#L28 cdef sdot_t *sdot = PyCObject_AsVoidPtr(scipy.linalg.blas.sdot._cpointer) Note that _cpointer gives you the raw function pointer of the BLAS routine. If you're on OSX, this is bad news, because the signature of SDOT for Apple's Accelerate is in G77 ABI and it's unfortunately *not* float sdot_(int *, float *, int *, float *, int *) on other platforms, the above is the correct signature. You can check from scipy/_build_utils/src which routines are expected to be problematic. -- Pauli Virtanen From sturla.molden at gmail.com Mon Mar 24 23:36:14 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Tue, 25 Mar 2014 03:36:14 +0000 (UTC) Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 References: Message-ID: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Pauli Virtanen wrote: > 24.03.2014 21:23, Skipper Seabold kirjoitti: > [clip] >> [3] https://github.com/ChadFulton/pykalman_filter/blob/master/kalman/kalman_filter.pyx#L28 > > cdef sdot_t *sdot = > PyCObject_AsVoidPtr(scipy.linalg.blas.sdot._cpointer) > > Note that _cpointer gives you the raw function pointer of the BLAS routine. 
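Spelled out, the `get_blas_funcs` route looks like this — a minimal sketch, using the precision of the arrays you pass to select the matching routine, and avoiding the private (and now deprecated) `cblas`/`fblas` submodules entirely:

```python
import numpy as np
from scipy.linalg import get_blas_funcs

a = np.ones((2, 2))  # float64 arrays, so the 'd' prefix (dgemm) is selected
b = np.ones((2, 2))

# Portable across scipy versions, unlike scipy.linalg.blas.cblas.dgemm.
gemm, = get_blas_funcs(('gemm',), (a, b))
c = gemm(1.0, a, b)  # c = 1.0 * a.dot(b)
```

Passing float32 arrays instead would select `sgemm`, with no change to the calling code.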
> > If you're on OSX, this is bad news, because the signature of SDOT for > Apple's Accelerate is in G77 ABI and it's unfortunately *not* > > float sdot_(int *, float *, int *, float *, int *) > > on other platforms, the above is the correct signature. I see a claim about g77 ABI in Accelerate here: https://github.com/scipy/scipy/pull/2695 However, this is what Apple's documentation says: https://developer.apple.com/library/mac/documentation/Accelerate/Reference/BLAS_Ref/Reference/reference.html#//apple_ref/c/func/SDOT Fortran Prototypes Entry points that can be called from Fortran programs. You should generally not call these from software written in C. (...) SDOT See cblas_sdot. float SDOT ( const int *N, const float *X, const int *incX, const float *Y, const int *incY ); Availability Available in OS X v10.0 and later. Declared In vBLAS.h This does not look like g77 ABI to me. Actually, based on the name wrangling I think it's ifort ABI. Regards, Sturla From pav at iki.fi Tue Mar 25 02:30:36 2014 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 25 Mar 2014 08:30:36 +0200 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: 25.03.2014 05:36, Sturla Molden kirjoitti: [clip] > However, this is what Apple's documentation says: > > https://developer.apple.com/library/mac/documentation/Accelerate/Reference/BLAS_Ref/Reference/reference.html#//apple_ref/c/func/SDOT The prototype with which the function *actually* is callable from C is double SDOT(int *, float *, int *, float *, int *) See for yourself. 
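In practice, the portable way around this ABI trap is to go through scipy's f2py wrapper rather than dereferencing the raw `_cpointer`: the wrapper compensates for the return-value convention at build time. A minimal sketch (assuming scipy >= 0.12 for the public `scipy.linalg.blas` interface):

```python
import numpy as np
from scipy.linalg import get_blas_funcs

x = np.array([1.0, 2.0, 3.0], dtype=np.float32)
y = np.array([4.0, 5.0, 6.0], dtype=np.float32)

# float32 inputs select the sdot wrapper; scipy's build handles the
# Accelerate return-value ABI, so this is safe even where calling
# sdot_ through its raw function pointer is not.
sdot, = get_blas_funcs(('dot',), (x, y))
print(sdot(x, y))  # 1*4 + 2*5 + 3*6 = 32.0
```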
-- Pauli Virtanen From jsseabold at gmail.com Tue Mar 25 10:53:38 2014 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 25 Mar 2014 10:53:38 -0400 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: References: Message-ID: On Mon, Mar 24, 2014 at 4:17 PM, Pauli Virtanen wrote: > 24.03.2014 21:23, Skipper Seabold kirjoitti: >> More feeling around in the dark. The docs [1] state that the >> low-level functions scipy.linalg.blas are new in 0.12.0. > > They're new as public, documented functions. > > Writing code targeting also earlier Scipy versions is possible in > principle. > >> However, >> >>>>> import scipy as sp sp.__version__ >> '0.9.0' >>>>> import scipy.linalg.blas >>>>> scipy.linalg.blas.cblas.dgemm._cpointer >> > > This is backward and forward compatible: > > dgemm, = get_blas_funcs(['gemm',], [np.array([1.0])]) > > Note that `.cblas` is not available at all on all platforms, and the > functions it contains vary with Scipy versions. Also, both `.cblas` > and `.fblas` submodules are deprecated. Ok, that makes sense. Thanks. Skipper From jsseabold at gmail.com Tue Mar 25 10:54:26 2014 From: jsseabold at gmail.com (Skipper Seabold) Date: Tue, 25 Mar 2014 10:54:26 -0400 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: References: Message-ID: On Mon, Mar 24, 2014 at 4:40 PM, Pauli Virtanen wrote: > 24.03.2014 21:23, Skipper Seabold kirjoitti: > [clip] >> [3] https://github.com/ChadFulton/pykalman_filter/blob/master/kalman/kalman_filter.pyx#L28 > > cdef sdot_t *sdot = > PyCObject_AsVoidPtr(scipy.linalg.blas.sdot._cpointer) > > Note that _cpointer gives you the raw function pointer of the BLAS routine. > > If you're on OSX, this is bad news, because the signature of SDOT for > Apple's Accelerate is in G77 ABI and it's unfortunately *not* > > float sdot_(int *, float *, int *, float *, int *) > > on other platforms, the above is the correct signature. I think I came across the same here reading through some source. 
https://github.com/Theano/Theano/blob/master/theano/tensor/blas_headers.py https://github.com/theano/theano/issues/1240 > > You can check from scipy/_build_utils/src which routines are expected to > be problematic. Will do. Thanks, again. Skipper From matthew.brett at gmail.com Tue Mar 25 23:47:53 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 25 Mar 2014 20:47:53 -0700 Subject: [SciPy-User] [Numpy-discussion] ANN: NumPy 1.8.1 release In-Reply-To: <5332135C.7040903@googlemail.com> References: <5332135C.7040903@googlemail.com> Message-ID: Hi, On Tue, Mar 25, 2014 at 4:38 PM, Julian Taylor wrote: > Hello, > > I'm happy to announce the of Numpy 1.8.1. > This is a bugfix only release supporting Python 2.6 - 2.7 and 3.2 - 3.4. > > More than 48 issues have been fixed, the most important issues are > listed in the release notes: > https://github.com/numpy/numpy/blob/maintenance/1.8.x/doc/release/1.8.1-notes.rst > > Compared to the last release candidate we have fixed a regression of the > 1.8 series that prevented using some gufunc based linalg functions on > larger matrices on 32 bit systems. This implied a few changes in the > NDIter C-API which might expose insufficient checks for error conditions > in third party applications. Please check the release notes for details. > > Source tarballs, windows installers and release notes can be found at > https://sourceforge.net/projects/numpy/files/NumPy/1.8.1 Thanks a lot for this. I've just posted OSX wheels for Pythons 2.7, 3.3, 3.4. It's a strange feeling doing this: $ pip install numpy Downloading/unpacking numpy Downloading numpy-1.8.1-cp27-none-macosx_10_6_intel.whl (3.6MB): 3.6MB downloaded Installing collected packages: numpy Successfully installed numpy Cleaning up... 5 seconds waiting on a home internet connection and a numpy install.... Nice. 
Cheers, Matthew From sturla.molden at gmail.com Wed Mar 26 06:57:45 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 26 Mar 2014 11:57:45 +0100 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: On 25/03/14 07:30, Pauli Virtanen wrote: > 25.03.2014 05:36, Sturla Molden kirjoitti: > [clip] >> However, this is what Apple's documentation says: >> >> https://developer.apple.com/library/mac/documentation/Accelerate/Reference/BLAS_Ref/Reference/reference.html#//apple_ref/c/func/SDOT > > The prototype with which the function *actually* is callable from C is > > double SDOT(int *, float *, int *, float *, int *) > > See for yourself. > This is what I find in vBLAS.h on my MBP. I am not sure what this is, but it is not g77 ABI. Sturla /* ================================================================================================= Prototypes for FORTRAN BLAS =========================== These are prototypes for the FORTRAN callable BLAS functions. They are implemented in C for Mac OS, as thin shims that simply call the C BLAS counterpart. These routines should never be called from C, but need to be included here so they will get output for the stub library. It won't hurt to call them from C, but who would want to since you can't pass literals for sizes? FORTRAN compilers are typically MPW tools and use PPCLink, so they will link with the official vecLib stub from Apple. 
================================================================================================= */ /* * SDOT() * * Availability: * Mac OS X: in version 10.0 and later in vecLib.framework * CarbonLib: not in Carbon, but vecLib is compatible with CarbonLib * Non-Carbon CFM: in vecLib 1.0.2 and later */ extern float SDOT( const int * /* N */, const float * /* X */, const int * /* incX */, const float * /* Y */, const int * /* incY */) AVAILABLE_MAC_OS_X_VERSION_10_0_AND_LATER; From pav at iki.fi Wed Mar 26 07:04:20 2014 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 26 Mar 2014 11:04:20 +0000 (UTC) Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: Sturla Molden gmail.com> writes: [clip] > This is what I find in vBLAS.h on my MBP. I am not sure what this is, > but it is not g77 ABI. The function is not callable with that signature (OSX 10.6). Did you try it? Apple's documentation, and the vBLAS.h they ship, does not match what is actually implemented in their library. -- Pauli Virtanen From pav at iki.fi Wed Mar 26 07:14:09 2014 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 26 Mar 2014 11:14:09 +0000 (UTC) Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: Pauli Virtanen iki.fi> writes: > Sturla Molden gmail.com> writes: > [clip] > > This is what I find in vBLAS.h on my MBP. I am not sure what this is, > > but it is not g77 ABI. > > The function is not callable with that signature (OSX 10.6). > Did you try it? > > Apple's documentation, and the vBLAS.h they ship, does not match > what is actually implemented in their library. This may be a bug in the versions of Accelerate in question, however, as the signature is different for gcc -m32 and gcc -m64. 
-- Pauli Virtanen From sturla.molden at gmail.com Wed Mar 26 07:56:49 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 26 Mar 2014 12:56:49 +0100 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: On 26/03/14 12:14, Pauli Virtanen wrote: > Pauli Virtanen iki.fi> writes: >> Sturla Molden gmail.com> writes: >> [clip] >>> This is what I find in vBLAS.h on my MBP. I am not sure what this is, >>> but it is not g77 ABI. >> >> The function is not callable with that signature (OSX 10.6). >> Did you try it? >> >> Apple's documentation, and the vBLAS.h they ship, does not match >> what is actually implemented in their library. > > This may be a bug in the versions of Accelerate in question, > however, as the signature is different for gcc -m32 and gcc -m64. Hm... On OSX 10.9 (with clang-500.2.79) either signature works with -m32. Only return type double works with -m64. Sturla From pav at iki.fi Wed Mar 26 08:12:55 2014 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 26 Mar 2014 12:12:55 +0000 (UTC) Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: Sturla Molden gmail.com> writes: > On 26/03/14 12:14, Pauli Virtanen wrote: > > Pauli Virtanen iki.fi> writes: > >> Sturla Molden gmail.com> writes: > >> [clip] > >>> This is what I find in vBLAS.h on my MBP. I am not sure what this is, > >>> but it is not g77 ABI. > >> > >> The function is not callable with that signature (OSX 10.6). > >> Did you try it? > >> > >> Apple's documentation, and the vBLAS.h they ship, does not match > >> what is actually implemented in their library. > > > > This may be a bug in the versions of Accelerate in question, > > however, as the signature is different for gcc -m32 and gcc -m64. > > Hm... > > On OSX 10.9 (with clang-500.2.79) either signature works with -m32. 
> > Only return type double works with -m64. Accelerate's ZDOTC/ZDOTU on the other hand work OK only with gfortran -ff2c, and crash without -ff2c, so they seem to be in g77 ABI. Anyway, these issues should be sorted out in Scipy 0.13.0. As you can see, it's a bit of a hassle to get this right, so the recommendation to use the C APIs is spot on. -- Pauli Virtanen From sturla.molden at gmail.com Wed Mar 26 09:04:46 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 26 Mar 2014 14:04:46 +0100 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: On 26/03/14 13:12, Pauli Virtanen wrote: > Accelerate's ZDOTC/ZDOTU on the other hand work OK only with > gfortran -ff2c, and crash without -ff2c, so they seem to > be in g77 ABI. > > Anyway, these issues should be sorted out in Scipy 0.13.0. > As you can see, it's a bit of a hassle to get this right, > so the recommendation to use the C APIs is spot on. What about LAPACK? Should SciPy scrap the one in Accelerate and compile its own on top of a wrapped cblas? But hopefully the official OS X binaries can use OpenBLAS instead, as Accelerate is not fork safe. Sturla From pav at iki.fi Wed Mar 26 09:41:12 2014 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 26 Mar 2014 13:41:12 +0000 (UTC) Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: Sturla Molden gmail.com> writes: [clip] > > Anyway, these issues should be sorted out in Scipy 0.13.0. > > As you can see, it's a bit of a hassle to get this right, > > so the recommendation to use the C APIs is spot on. > > What about LAPACK? > > Should SciPy scrap the one in Accelerate and compile its own on top of a > wrapped cblas? See scipy/_build_utils/src The FUNCTIONs in LAPACK are also wrapped in Scipy 0.13.0 when using Accelerate. 
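[A note for Python-level readers of this thread: the ABI pitfalls discussed above only bite when grabbing raw function pointers (e.g. via `_cpointer`). Going through `get_blas_funcs`, as suggested earlier in the thread, stays on the safe side, since scipy's wrappers absorb the return-value convention differences. A minimal sketch; the arrays and values are illustrative only:]

```python
import numpy as np
from scipy.linalg import get_blas_funcs

# The dtype of the sample arrays selects the precision/prefix:
# float64 -> dgemm, complex128 -> zdotc, and so on.
a = np.ones((2, 2), dtype=np.float64)
dgemm, = get_blas_funcs(('gemm',), (a,))
c = dgemm(1.0, a, a)            # C = 1.0 * A.dot(A)

x = np.array([1 + 2j, 3 + 4j])  # complex128 by default
y = np.array([5 + 6j, 7 + 8j])
zdotc, = get_blas_funcs(('dotc',), (x, y))
r = zdotc(x, y)                 # conj(x) . y, same value as np.vdot(x, y)
```

Unlike the deprecated `.cblas`/`.fblas` submodules, `get_blas_funcs` is available across scipy versions, and (per Pauli's note above) the 0.13.0 wrappers handle the SDOT/ZDOTC return-value issues internally.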
As far as I know, everything works properly at the moment. -- Pauli Virtanen From sturla.molden at gmail.com Wed Mar 26 10:05:45 2014 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 26 Mar 2014 15:05:45 +0100 Subject: [SciPy-User] scipy.linalg.blas before 0.12.0 In-Reply-To: References: <902085125417410706.990363sturla.molden-gmail.com@news.gmane.org> Message-ID: On 26/03/14 14:41, Pauli Virtanen wrote: > See scipy/_build_utils/src > > The FUNCTIONs in LAPACK are also wrapped in Scipy 0.13.0 when using > Accelerate. As far as I know, everything works properly at the moment. Thanks. I use SciPy linked to MKL so I wouldn't know. But I could try to install it in a venv. Sturla From peter.zamb at gmail.com Wed Mar 26 10:19:38 2014 From: peter.zamb at gmail.com (Pietro) Date: Wed, 26 Mar 2014 14:19:38 +0000 Subject: [SciPy-User] resample and aggregate a numpy array Message-ID: Dear All, I would like to resample a numpy array using a function to aggregate the subtiles. I try to clarify the problem with a figure: # defined an array, like: [[1, 1, 2, 2, 3], [1, 1, 2, 2, 3], [1, 1, 2, 2, 3], [1, 1, 2, 2, 3], [1, 1, 2, 2, 3]] # I would like to split, like: +------+------+------+ | 1, 1 | 2, 2 | 3 | | 1, 1 | 2, 2 | 3 | +------+------+------+ | 1, 1 | 2, 2 | 3 | | 1, 1 | 2, 2 | 3 | +------+------+------+ | 1, 1 | 2, 2 | 3 | | | | | +------+------+------+ # and aggregate using a function like: np.sum, to get: [[4, 8, 6], [4, 8, 6], [2, 4, 3]] I wrote two functions: resample_array, which reshapes the 2D array into a 3D array, so that I can apply whatever function along axis=2; and resample, which takes the function directly, to be more efficient from a memory point of view. The code is available here: https://gist.github.com/zarch/9782938 Do you think that there is any better/efficient/faster approach? I've looked at the numpy/scipy functions but I was not able to find anything to do this kind of thing, have I missed something? All the best!
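[A pure-NumPy sketch of the block aggregation asked about above. This is not the code from the gist; zero-padding the partial edge tiles is an assumption here, and it only reproduces the expected output for sum-like reductions (it would skew np.mean, for instance):]

```python
import numpy as np

def block_sum(a, bh, bw):
    """Sum over bh-by-bw tiles, zero-padding the right/bottom edges
    so that partial tiles are aggregated too."""
    h, w = a.shape
    H = -(-h // bh) * bh          # round shape up to a multiple of the block
    W = -(-w // bw) * bw
    p = np.zeros((H, W), dtype=a.dtype)
    p[:h, :w] = a
    # Reshape so each tile gets its own pair of axes, then reduce them.
    return p.reshape(H // bh, bh, W // bw, bw).sum(axis=(1, 3))

a = np.array([[1, 1, 2, 2, 3]] * 5)
print(block_sum(a, 2, 2))
# [[4 8 6]
#  [4 8 6]
#  [2 4 3]]
```

The reshape-based view avoids any copy beyond the padded buffer, and `sum(axis=(1, 3))` needs numpy >= 1.7 for the axis tuple.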
Pietro From lasagnadavide at gmail.com Thu Mar 27 03:52:04 2014 From: lasagnadavide at gmail.com (Davide Lasagna) Date: Thu, 27 Mar 2014 07:52:04 +0000 Subject: [SciPy-User] resample and aggregate a numpy array In-Reply-To: References: Message-ID: Hi Pietro, I do not fully understand what is going on in your code. However, I remember doing something like that a few months ago, and I used numpy stride tricks to "virtually reshape" a 1d array into a 2d array with elements repeating along the rows. Then I could easily apply function over the rows of it, (for instance moving average). To start see this link: http://scipy-lectures.github.io/advanced/advanced_numpy/#indexing-scheme-strides In your case you have a 2d array and want to get a 3d out of it, so the strides pattern might be more complicated but with a bit of trial and error you do it. Regards, Davide On 26 Mar 2014 14:19, "Pietro" wrote: > Dear All, > > I would like to resample a numpy array using a function to aggregate > the subtiles. > I try to clarify the problem with a figure: > > # defineed an array, like: > [[1, 1, 2, 2, 3], > [1, 1, 2, 2, 3], > [1, 1, 2, 2, 3], > [1, 1, 2, 2, 3], > [1, 1, 2, 2, 3]] > > # I would like to split, like: > > +------+------+------+ > | 1, 1 | 2, 2 | 3 | > | 1, 1 | 2, 2 | 3 | > +------+------+------+ > | 1, 1 | 2, 2 | 3 | > | 1, 1 | 2, 2 | 3 | > +------+------+------+ > | 1, 1 | 2, 2 | 3 | > | | | | > +------+------+------+ > > # and aggregate using a function like: np.sum, to get: > [[4, 8, 6], > [4, 8, 6], > [2, 4, 3]] > > I wrote two functions: > > resample_array that reshape the 2D array into a 3D array, and then I > can apply whatever function along axis=2; > resample that takes directly the function to be more efficient from a > memory point. > > The code is available here: https://gist.github.com/zarch/9782938 > > Do you think that there is any better/efficient/faster approach? 
I've > looked on numpy/scipy functions but I was not able to find anything to > do this kind of things, have I missed something? > > All the best! > > Pietro > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.zamb at gmail.com Thu Mar 27 05:34:20 2014 From: peter.zamb at gmail.com (Pietro) Date: Thu, 27 Mar 2014 09:34:20 +0000 Subject: [SciPy-User] resample and aggregate a numpy array In-Reply-To: References: Message-ID: Hi Davide, On Thu, Mar 27, 2014 at 7:52 AM, Davide Lasagna wrote: > I do not fully understand what is going on in your code. However, I remember > doing something like that a few months ago, and I used numpy stride tricks > to "virtually reshape" a 1d array into a 2d array with elements repeating > along the rows. Then I could easily apply function over the rows of it, (for > instance moving average). > > To start see this link: > > http://scipy-lectures.github.io/advanced/advanced_numpy/#indexing-scheme-strides Wow, I didn't know this thing of the numpy arrays, certainly I will look into it. But at least it seems that a function like this does not already exist in the numpy/scipy library. Thanks for the tip! Pietro From schut at sarvision.nl Thu Mar 27 07:57:42 2014 From: schut at sarvision.nl (Vincent Schut) Date: Thu, 27 Mar 2014 12:57:42 +0100 Subject: [SciPy-User] resample and aggregate a numpy array References: Message-ID: <20140327125742.5618097f@sarvision.nl> On Thu, 27 Mar 2014 09:34:20 +0000 Pietro wrote: > Hi Davide, > > On Thu, Mar 27, 2014 at 7:52 AM, Davide Lasagna > wrote: > > I do not fully understand what is going on in your code. However, I > > remember doing something like that a few months ago, and I used > > numpy stride tricks to "virtually reshape" a 1d array into a 2d > > array with elements repeating along the rows. 
Then I could easily > > apply function over the rows of it, (for instance moving average). > > > > To start see this link: > > > > http://scipy-lectures.github.io/advanced/advanced_numpy/#indexing-scheme-strides > > Wow, I didn't know this thing of the numpy arrays, certainly I will > look into it. > > But at least it seems that a function like this does not already exist > in the numpy/scipy library. > > Thanks for the tip! > > Pietro FYI, scikit-image (skimage) has a handy function for that: http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_blocks VS. From sole155 at tiscali.it Mon Mar 31 17:15:52 2014 From: sole155 at tiscali.it (Syry) Date: Mon, 31 Mar 2014 21:15:52 +0000 (UTC) Subject: [SciPy-User] Connect Input File Pysces Message-ID: Hi! I'm using Pysces to reproduce cell metabolism. I want to know if it is possible to connect Pysces input files. I need to connect the input files because the output of the first input file is the input of the second. Is there anyone who can help me? Thanks