From delphine.tardif-becquet5 at etu.univ-lorraine.fr Thu Aug 1 09:24:34 2013
From: delphine.tardif-becquet5 at etu.univ-lorraine.fr (Delphine t)
Date: Thu, 1 Aug 2013 06:24:34 -0700 (PDT)
Subject: [SciPy-User] optimize_leastsq : error calling the function
Message-ID: <1375363474923-18547.post@n7.nabble.com>

Hello everyone,

I'm having trouble using the leastsq optimisation function -- could anyone
help me? I have a list of drillings, each one with its own parameters. All
these parameters are concatenated in an array called VARIABLES. The
dictionary dico_taille_variable associates to each drilling the size of its
"parameters" vector, so that the VARIABLES array can be deconcatenated and a
function calculating the residuals of each drilling can be applied inside
residuals_global, the function that I'm trying to optimize.

residuals_global calls the function residuals, which belongs to a class
Drilling and looks like this:

    def residuals(self, vars):
        m = self.model(vars)
        f = interpolate.interp1d(self.depth, self.age)
        resi_age = (f(self.icemarkers_depth) - self.icemarkers_age) / self.icemarkers_sigma
        k = interpolate.interp1d(self.depth, self.gage)
        resi_gage = (k(self.gasmarkers_depth) - self.gasmarkers_age) / self.gasmarkers_sigma
        l = interpolate.interp1d(self.depth, self.Ddepth)
        resi_Ddepth = (l(self.Ddepth_depth) - self.Ddepth_Ddepth) / self.Ddepth_sigma
        resi_corr_tau = self.corr_tau
        resi_corr_a = self.corr_a
        resi_corr_LIDIE = self.corr_LIDIE
        self.retresi = np.concatenate((resi_age, resi_gage, resi_Ddepth,
                                       resi_corr_tau, resi_corr_a, resi_corr_LIDIE))
        return self.retresi

_______The function to optimize_____

    def residuals_global(VARIABLES):
        index = 0
        vecteur = []
        RESIDUALS = np.array([])
        for key in dico_taille_variable:
            t = dico_taille_variable[key]
            vecteur = VARIABLES[index + 0:index + t]
            D.residuals(vecteur)
            RESIDUALS = np.concatenate((RESIDUALS, D.retresi))
            index = index + t
        print 'RESIDUALS', RESIDUALS
        return RESIDUALS

_______The call_________

    def optimisation():
        vars2, hess, infodict, mesg, ier = leastsq(residuals_global, VARIABLES, full_output=1)
        print mesg
        print 'vars', vars2
        print 'hess', hess
        return vars2, hess

    optimisation()

_______The result_______

    error                          Traceback (most recent call last)
    /usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
        176         else:
        177             filename = fname
    --> 178         __builtin__.execfile(filename, *where)

    /home/dtardif/Téléchargements/python/Main.py in ()
         64     return vars2, hess
         65
    ---> 66 optimisation()
         67
         68 D.sigma(hess, vars2)

    /home/dtardif/Téléchargements/python/Main.py in optimisation()
         58
         59 def optimisation() :
    ---> 60     vars2, hess, infodict, mesg, ier = leastsq(residuals_global, VARIABLES, full_output=1)
         61     print mesg
         62     print 'vars', vars2

    /usr/lib/python2.7/dist-packages/scipy/optimize/minpack.pyc in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
        281         maxfev = 200*(n + 1)
        282         retval = _minpack._lmdif(func, x0, args, full_output, ftol, xtol,
    --> 283                                  gtol, maxfev, epsfcn, factor, diag)
        284     else:
        285         if col_deriv:

    error: Error occurred while calling the Python function named residuals_global

I hope I put in all the data needed to understand my problem. Any idea?

Thanks!

--
View this message in context: http://scipy-user.10969.n7.nabble.com/optimize-leastsq-error-calling-the-function-tp18547.html
Sent from the Scipy-User mailing list archive at Nabble.com.
From srey at asu.edu Thu Aug 1 09:57:46 2013
From: srey at asu.edu (Serge Rey)
Date: Thu, 1 Aug 2013 06:57:46 -0700
Subject: [SciPy-User] ANN: PySAL 1.6 released
Message-ID:

On behalf of the PySAL development team, I'm happy to announce the official
release of PySAL 1.6.

PySAL is a library of tools for spatial data analysis and geocomputation
written in Python. PySAL 1.6, the seventh official release of PySAL, brings
the following key enhancements:

### Spatial weights (weights)

* Optimized contiguity builder
* Explicit checks for disconnected observations (islands)
* Lightweight sparse weights class
* Handle coincident points in construction of distance based weights
* Optimized construction of knn weights

### Spatial regression (spreg)

* Chow test on spatial autoregressive coefficient in error, lag and
  combination regime models
* Kernel based weights specialized for HAC estimators
* Group-wise heteroskedasticity correction for OLS models with regimes
* Optimal GMM estimator to account for heteroskedasticity in TSLS models
  with regimes

### Spatial inequality (inequality)

* Spatial decomposition of the Gini coefficient

### Computational geometry (cg)

* Robust segment intersection tests

These are among the 246 commits and bug fixes since the last release, six
months ago. In addition, 1.6 marks the first release since PySAL moved from
Google Code to GitHub.

## PySAL modules

* pysal.core - Core Data Structures and IO
* pysal.cg - Computational Geometry
* pysal.esda - Exploratory Spatial Data Analysis
* pysal.inequality - Spatial Inequality Analysis
* pysal.spatial_dynamics - Spatial Dynamics
* pysal.spreg - Regression and Diagnostics
* pysal.region - Spatially Constrained Clustering
* pysal.weights - Spatial Weights
* pysal.FileIO - PySAL FileIO: Module for reading and writing various file
  types in a Pythonic way

## Downloads

Source distributions are available at http://pypi.python.org/pypi/PySAL
Binary installers are available from the [GeoDa Center for Geospatial
Analysis and Computation](https://geodacenter.asu.edu/projects/pysal).
PySAL can also be installed with pip or easy_install.

## Documentation

The documentation site is here: http://pythonhosted.org/PySAL/

## Web sites

PySAL's home is here: http://pysal.org/
The developer's site is here: https://github.com/pysal/pysal

## Mailing Lists

Please see the developer's list here: http://groups.google.com/group/pysal-dev
Help for users is here: http://groups.google.com/group/openspace-list

## Bug reports and feature requests

To search for or report bugs, as well as request enhancements, please see
https://github.com/pysal/pysal/issues

## License information

See the file "LICENSE.txt" for information on the history of this software,
terms & conditions for usage, and a DISCLAIMER OF ALL WARRANTIES.

Many thanks to [all who contributed!](https://github.com/pysal/pysal/blob/master/THANKS.txt)

Serge, on behalf of the PySAL development team.

--
Sergio (Serge) Rey
Professor, School of Geographical Sciences and Urban Planning
GeoDa Center for Geospatial Analysis and Computation
Arizona State University
http://geoplan.asu.edu/rey

Editor, International Regional Science Review
http://irx.sagepub.com
From joferkington at gmail.com Thu Aug 1 13:24:11 2013
From: joferkington at gmail.com (Joe Kington)
Date: Thu, 1 Aug 2013 12:24:11 -0500
Subject: [SciPy-User] Fwd: Python Session at AGU 2013
Message-ID:

For anyone attending the AGU (American Geophysical Union) fall meeting this
year, there will be a session on Python and "big data" in the earth
sciences. Abstract submission is still open until Aug. 6th. See below for
more info.

Cheers,
-Joe

---------- Forwarded message ----------
From: IRIS Webmaster
Date: Thu, Aug 1, 2013 at 11:18 AM
Subject: [iris-bulk] Python Session at AGU 2013
To: bulkmail at iris.washington.edu

Forwarded on behalf of:
Lion Krischer
LMU Munich
krischer at geophysik.uni-muenchen.de

Dear members of the IRIS community,

With the deadline for abstract submission to the AGU Fall Meeting 2013
approaching fast, I wanted to point out a session revolving around the
Python programming language. If you will be attending the meeting and are
using Python for your research or workflows, please consider submitting an
abstract to the IN-034 session until *next week Tuesday, August 6th*.

https://fallmeeting.agu.org/2013/scientific-program/session-search/sessions/in034-ultra-scale-earth-systems-analyses-using-python/

It aims to promote the use of Python in the earth-science community.

All the best,

Lion Krischer and Thomas Lecocq

From jgomezdans at gmail.com Fri Aug 2 14:54:40 2013
From: jgomezdans at gmail.com (Jose Gomez-Dans)
Date: Fri, 2 Aug 2013 19:54:40 +0100
Subject: [SciPy-User] Quick "grid" creation
Message-ID:

Hi,

I'd like to quickly create a labelled grid to overlay on an array and use
as labels (e.g. scipy.ndimage.sum takes a labels option to calculate
statistics using a different array as a mask). My attempts to quickly
produce said masks work, but are a bit embarrassing...

    a = np.zeros((100, 100), dtype=np.int)
    cnt = 0
    for i in xrange(20):
        for j in xrange(20):
            cnt += 1
            a[(i*5):((i+1)*5), (j*5):((j+1)*5)] = cnt

So the above works, but it quickly gets cumbersome with large arrays, and
with arrays where the ratio of cells in the finer array to cells in the
coarser array is large. Is there some "keys in hand" solution for this
problem?

Thanks!
Jose

From cweisiger at msg.ucsf.edu Fri Aug 2 15:06:54 2013
From: cweisiger at msg.ucsf.edu (Chris Weisiger)
Date: Fri, 2 Aug 2013 12:06:54 -0700
Subject: [SciPy-User] Quick "grid" creation
In-Reply-To:
References:
Message-ID:

You could use arange and then blow the array up using some scaling
function, maybe? Google suggests the Kronecker product:
http://stackoverflow.com/questions/7525214/how-to-scale-a-numpy-array
http://docs.scipy.org/doc/numpy/reference/generated/numpy.kron.html

    a = numpy.arange(20*20).reshape(20, 20)
    b = numpy.kron(a, numpy.ones((100, 100)))

-Chris
From zachary.pincus at yale.edu Fri Aug 2 15:10:07 2013
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Fri, 2 Aug 2013 15:10:07 -0400
Subject: [SciPy-User] Quick "grid" creation
In-Reply-To:
References:
Message-ID: <21C003AB-3B25-4881-9CA2-1AC27B95DA18@yale.edu>

This is cheesy, but you could make a grid with
numpy.arange(20*20).reshape(20,20), and then enlarge it by a factor of 5
with ndimage.zoom:

    grid = numpy.arange(20*20).reshape(20, 20)
    large = scipy.ndimage.zoom(grid, 5, order=0)

(order=0 gives nearest-neighbor interpolation.)

Maybe there's a nicer way, though -- perhaps something too-clever with
stride tricks?

Zach

From zachary.pincus at yale.edu Fri Aug 2 15:16:50 2013
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Fri, 2 Aug 2013 15:16:50 -0400
Subject: [SciPy-User] Quick "grid" creation
In-Reply-To:
References:
Message-ID: <0C419A67-4CCF-4942-84E4-B8FEF1F673BA@yale.edu>

On Aug 2, 2013, at 3:06 PM, Chris Weisiger wrote:

> You could use arange and then blow the array up using some scaling
> function, maybe? Google suggests the Kronecker product:
>
>     a = numpy.arange(20*20).reshape(20, 20)
>     b = numpy.kron(a, numpy.ones((100, 100)))

Oh, using kron is a good tip! Just for the record, you'd want:

    b = numpy.kron(grid, numpy.ones((5,5)))

Also for the record, it turns out that kron is a bit faster than
ndimage.zoom, but surprisingly not that much:

    In: timeit numpy.kron(grid, numpy.ones((5,5)))
    1000 loops, best of 3: 212 us per loop

    In: timeit scipy.ndimage.zoom(grid, 5, order=0)
    1000 loops, best of 3: 238 us per loop

And as the zoom factor gets larger, zoom catches up... interesting:

    In: timeit numpy.kron(grid, numpy.ones((50,50)))
    10 loops, best of 3: 23.5 ms per loop

    In: timeit scipy.ndimage.zoom(grid, 50, order=0)
    10 loops, best of 3: 20.5 ms per loop

Zach
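For completeness, the same labelled grid can also be built without kron or
interpolation by repeating rows and columns directly; a minimal sketch, not
from the thread, using only plain numpy:

    import numpy as np

    tile, zoom = 20, 5
    labels = np.arange(tile * tile).reshape(tile, tile)
    # repeat each row, then each column, ``zoom`` times: the result is a
    # (100, 100) array of constant 5x5 blocks labelled 0..399
    big = np.repeat(np.repeat(labels, zoom, axis=0), zoom, axis=1)
    assert big.shape == (tile * zoom, tile * zoom)

Unlike the original double loop, this stays vectorized however large the
ratio between the fine and coarse grids becomes.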
From lily.stat at gmail.com Fri Aug 2 16:51:28 2013
From: lily.stat at gmail.com (Jinghua Li)
Date: Fri, 2 Aug 2013 16:51:28 -0400
Subject: [SciPy-User] Silly question -- how to install scipy on cygwin
Message-ID:

Hi,

Hopefully my question is appropriate here. If not, please direct me to
where I should ask, thanks~

I tried to install scipy on my "Windows 7 cygwin" setup using the command
"pip install scipy", but it failed with many error messages.

thanks

Liang

From mij.rafee.1344 at gmail.com Fri Aug 2 17:22:36 2013
From: mij.rafee.1344 at gmail.com (mij.rafee.1344 at gmail.com)
Date: Sat, 3 Aug 2013 03:22:36 +0600
Subject: [SciPy-User] Silly question -- how to install scipy on cygwin
In-Reply-To:
References:
Message-ID:

Can you tell us more about the error messages you are facing?

From newville at cars.uchicago.edu Sat Aug 3 08:41:40 2013
From: newville at cars.uchicago.edu (Matt Newville)
Date: Sat, 3 Aug 2013 07:41:40 -0500
Subject: [SciPy-User] optimize_leastsq : error calling the function
In-Reply-To: <1375363474923-18547.post@n7.nabble.com>
References: <1375363474923-18547.post@n7.nabble.com>
Message-ID:

Hi,

I didn't see any other responses to your question, so I'll try. Though the
error messages you're getting aren't very helpful, I believe the basic
issue is that you're expecting several global variables to be accessible
in your objective function residuals_global(). I would recommend that you
instead add dico_taille_variable as an argument to residuals_global, and
pass it in with

    scipy.optimize.leastsq(residuals_global, VARIABLES,
                           args=(dico_taille_variable,))

Also, in your objective function it wasn't clear how D.residuals() and
D.retresi were supposed to be found. I think you may want to pass in a
list or dictionary of the individual instances of your Drilling class.

Hope that helps,

--Matt
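To make the suggestion concrete, here is a minimal sketch of threading the
extra objects through ``args`` instead of relying on globals. The
``drillings`` dictionary of Drilling instances is a hypothetical name, not
something from the original post:

    import numpy as np
    from scipy.optimize import leastsq

    def residuals_global(variables, dico_taille_variable, drillings):
        pieces = []
        index = 0
        for key, size in dico_taille_variable.items():
            # each Drilling instance consumes its own slice of the
            # concatenated parameter vector
            pieces.append(drillings[key].residuals(variables[index:index + size]))
            index += size
        return np.concatenate(pieces)

    # vars2, cov, infodict, mesg, ier = leastsq(
    #     residuals_global, VARIABLES,
    #     args=(dico_taille_variable, drillings), full_output=1)

The final call is left commented out because VARIABLES, the dictionary and
the Drilling instances live in the poster's own code.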
From joris.vankerschaver at gmail.com Sat Aug 3 12:55:36 2013
From: joris.vankerschaver at gmail.com (Joris Vankerschaver)
Date: Sat, 3 Aug 2013 17:55:36 +0100
Subject: [SciPy-User] Pdf for multivariate normal distribution
Message-ID: <3896F451-BC9F-46B3-B813-ABC0D440CBD6@gmail.com>

Hi all,

Is there an implementation of the PDF/CDF/etc. for the multivariate normal
distribution somewhere in SciPy? If not, how useful would it be to
implement this? I have some snippets that I could put in the distribution
framework used in scipy.stats.

All the best,
Joris
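For reference, the density being asked about is straightforward to evaluate
stably via a Cholesky factor; a minimal sketch of the kind of snippet under
discussion (an illustration, not an existing scipy.stats API):

    import numpy as np
    from scipy import linalg

    def mvn_logpdf(x, mean, cov):
        # log-density of a multivariate normal N(mean, cov) at point x
        L = linalg.cholesky(cov, lower=True)
        dev = linalg.solve_triangular(L, np.asarray(x) - mean, lower=True)
        k = len(mean)
        return (-0.5 * (k * np.log(2 * np.pi) + dev.dot(dev))
                - np.log(np.diag(L)).sum())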
From llewelr at gmail.com Wed Aug 7 15:15:31 2013
From: llewelr at gmail.com (Richard Llewellyn)
Date: Wed, 7 Aug 2013 13:15:31 -0600
Subject: [SciPy-User] scipy.io.loadmat throws TypeError with large files
Message-ID:

Hi,

I get this or a similar exception (with a different integer than 75724 in
the error) when loading a sparse matrix (CSC) saved with savemat, all
default options.

    >>> m = loadmat('my_large_mat.mat')
    Traceback (most recent call last):
      File "", line 1, in
      File "/home/richard/venv3.3/lib/python3.3/site-packages/scipy/io/matlab/mio.py", line 176, in loadmat
        matfile_dict = MR.get_variables(variable_names)
      File "/home/richard/venv3.3/lib/python3.3/site-packages/scipy/io/matlab/mio5.py", line 274, in get_variables
        hdr, next_position = self.read_var_header()
      File "/home/richard/venv3.3/lib/python3.3/site-packages/scipy/io/matlab/mio5.py", line 236, in read_var_header
        raise TypeError('Expecting miMATRIX type here, got %d' % mdtype)
    TypeError: Expecting miMATRIX type here, got 75724

Here the matrix was:

    >> matrix
    <400000x4176 sparse matrix of type ''
        with 934099575 stored elements in Compressed Sparse Column format>

and looks fine before saving.

It looks as if this only occurs when the saved matrix file size is > 4GB --
at least I haven't seen it with files in the 3GB range.

64-bit Linux.

Not a crisis, as I am chunking anyway, so I can just chunk smaller, but
when I get more RAM it would be nice to bump it up to 8 GB files or so.

Thanks.

From sudipta.mml at gmail.com Wed Aug 7 18:30:46 2013
From: sudipta.mml at gmail.com (sudipta sinha)
Date: Wed, 7 Aug 2013 18:30:46 -0400
Subject: [SciPy-User] constrained least square fitting using scipy.optimize.fmin_slsqp() function
Message-ID:

Hi All,

I am facing a problem of constrained linear least squares fitting. In my
case the matrix equation looks like [Y]nX1 = [X]nXm [P]mX1, where Y and P
are vectors, X is a matrix, and n, m are the dimensions of the matrix.
Further, there is an equality constraint on P, which is Sum(P(i)) = 0.0.
How do I proceed to solve that? Which Python function is suitable for
this? I saw a few discussions of the scipy.optimize.fmin_slsqp() function,
but the implementation of this function is not very straightforward.
Therefore, I need your help. I am new to SciPy. Please help me out in this
regard.
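A minimal sketch of how fmin_slsqp can express exactly this problem, with
toy data standing in for the poster's Y and X (the equality constraint is
handed over via f_eqcons and must return an array that is zero at any
feasible point):

    import numpy as np
    from scipy.optimize import fmin_slsqp

    rng = np.random.RandomState(0)
    n, m = 50, 4
    X = rng.randn(n, m)    # stand-in design matrix
    Y = rng.randn(n)       # stand-in observations

    def objective(P):
        r = Y - X.dot(P)
        return 0.5 * r.dot(r)          # least-squares cost

    def eq_constraint(P):
        return np.array([P.sum()])     # enforces Sum(P(i)) = 0

    P_opt = fmin_slsqp(objective, np.zeros(m), f_eqcons=eq_constraint)

For a purely linear problem like this one, the constraint can also be
eliminated algebraically, as discussed later in this thread.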
From matthew.brett at gmail.com Wed Aug 7 18:47:41 2013
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 7 Aug 2013 15:47:41 -0700
Subject: [SciPy-User] scipy.io.loadmat throws TypeError with large files
In-Reply-To:
References:
Message-ID:

Hi,

On Wed, Aug 7, 2013 at 12:15 PM, Richard Llewellyn wrote:
> It looks as if this only occurs when the saved matrix file size is > 4GB --
> at least I haven't seen it with files in the 3GB range.

Ugh. I hesitate to ask, but do you get the same error for a very large
non-sparse matrix?

Thanks,

Matthew

From llewelr at gmail.com Thu Aug 8 00:04:09 2013
From: llewelr at gmail.com (Richard Llewellyn)
Date: Wed, 7 Aug 2013 22:04:09 -0600
Subject: [SciPy-User] scipy.io.loadmat throws TypeError with large files
In-Reply-To:
References:
Message-ID:

Thanks Matthew for the thought.

This may not fully answer your question, but the same values saved as a
large sparse matrix (CSC) at 4.9GB fail to load, with the same TypeError;
saved as a numpy 2D array and matrix, which are less than half the file
size (1.8GB) when saved with savemat, they load without issue.

I also noticed that a sparse (CSC) matrix that saved at 3.9GB loaded
without issue, again suggesting that 4GB is a trigger.

Again, this is not an immediate problem for me.

Thanks,
Richard

PS scipy 0.12

From matthew.brett at gmail.com Thu Aug 8 04:12:51 2013
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 8 Aug 2013 01:12:51 -0700
Subject: [SciPy-User] scipy.io.loadmat throws TypeError with large files
In-Reply-To:
References:
Message-ID:

Hi,

On Wed, Aug 7, 2013 at 9:04 PM, Richard Llewellyn wrote:
> This may not fully answer your question, but the same values saved as a
> large sparse matrix (CSC) at 4.9GB fail to load, with the same TypeError;
> saved as a numpy 2D array and matrix, which are less than half the file
> size (1.8GB) when saved with savemat, they load without issue.

Do the dimensions of the arrays (M, N) make a difference? Or are they all
the same (M, N) shape, with more or less non-zeros?

Can you make a script that will replicate the problem for me?

Thanks a lot,

Matthew

From nchlsearl at gmail.com Thu Aug 8 17:09:16 2013
From: nchlsearl at gmail.com (Nicholas Earl)
Date: Thu, 8 Aug 2013 14:09:16 -0700
Subject: [SciPy-User] Built-in FAP for Lomb-Scargle?
Message-ID:

Hello everyone,

I was just curious whether there are any built-in functions to provide the
false alarm probability of the Lomb-Scargle routine in SciPy?

Thanks for your time!

Nicholas Earl
Graduate Research Assistant
San Diego State University

From llewelr at gmail.com Thu Aug 8 19:41:57 2013
From: llewelr at gmail.com (Richard Llewellyn)
Date: Thu, 8 Aug 2013 17:41:57 -0600
Subject: [SciPy-User] scipy.io.loadmat throws TypeError with large files
In-Reply-To:
References:
Message-ID:

Hi Matthew,

A short script below shows that increasing the density triggers the error,
on my machine, at file sizes over 4GB. Originally I had increased either M
or N to trigger the error as well.

I suspect you'll run into a problem with available RAM. I run this on my
32GB machine with 64GB swap, and it swaps, so this takes several minutes
to process at least.
Pain, I know. Once I get more RAM it would be easier for me to test
various permutations, but that will be awhile.

Maybe a generator could be used to build the matrix? Still, I think RAM
will be an issue.

Richard

####################################

    import numpy as np
    import scipy
    from scipy import sparse
    from scipy.io import loadmat, savemat

    no_ones = 1000  # this fails, but 800 yields 3.6GB and passes
    filename = "test_csc"

    # the number of columns corresponds to my original problem, more or less
    z = np.zeros(4250)
    z[np.arange(no_ones)] += 1

    # increasing the number of rows during chunking is where I first ran
    # into the error
    m = sparse.csc_matrix(np.array([z] * 400000))

    savemat(filename, {'mat': m})

    # fails here with TypeError
    m = loadmat(filename)['mat']

From fperez.net at gmail.com Thu Aug 8 21:36:53 2013
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 8 Aug 2013 18:36:53 -0700
Subject: [SciPy-User] [ANN] IPython 1.0 is finally released, nearly 12 years in the making!
Message-ID:

Hi all,

I am incredibly thrilled, on behalf of the amazing IPython Dev Team, to
announce the official release of IPython 1.0 today, an effort nearly 12
years in the making.

The previous version (0.13) was released on June 30, 2012, and in this
development cycle we had:

* ~12 months of work
* ~700 pull requests merged
* ~600 issues closed (non-pull requests)
* contributions from ~150 authors
* ~4000 commits

# A little context

What does "1.0" mean for IPython? Obviously IPython has been a staple of
the scientific Python community for years, and we've made every effort to
make it a robust and production-ready tool for a long time, so what
exactly do we mean by tagging this particular release as 1.0?

Basically, we feel that the core design of IPython, and the scope of the
project, is where we want it to be. What we have today is what we consider
a reasonably complete, design- and scope-wise, IPython 1.0: an
architecture for interactive computing that can drive kernels in a number
of ways using a well-defined protocol, and rich and powerful clients that
let users control those kernels effectively.

Our different clients serve different needs, with the old workhorse of the
terminal still being very useful, but much of our current development
energy going into the Notebook, obviously. The Notebook enables
interactive exploration to become Literate Computing, bridging the gaps
from individual work to collaboration and publication, all with an open
file format that is a direct record of the underlying communication
protocol.
There are obviously plenty of open issues (many of them very important)
that need fixing, and large and ambitious new lines of development for the
years to come. But the work of the last four years, since the summer of
2009 when Brian Granger was able to devote a summer (thanks to funding
from the NiPy project - nipy.org) to refactoring the old IPython core
code, finally opened up our infrastructure for real innovation. By
disentangling what was a useful but impenetrable codebase, it became
possible for us to start building a flexible, modern system for
interactive computing that abstracted the old REPL model into a generic
protocol that kernels could use to talk to clients. This led at first to
the creation of the Qt console, and then to the Notebook and the
out-of-process terminal client. It also allowed us to (finally!) unify our
parallel computing machinery with the rest of the interactive system,
which Min Ragan-Kelley pulled off in a development tour de force that
involved rewriting, in a few weeks, a huge and complex Twisted-based
system.

We are very happy with how the Notebook work has turned out, and it seems
the entire community agrees with us, as the uptake has been phenomenal.
Back in the very first "IPython 0.0.1" that I started in 2001
(https://gist.github.com/fperez/1579699) there were already hints of tools
like Mathematica: it was my everyday workhorse as a theoretical physicist
and I found its Notebook environment invaluable. But as a grad student
trying out "just an afternoon hack" (IPython was my very first Python
program as I was learning the language), I didn't have the resources,
skills or vision to attempt building an entire notebook system, and to be
honest the tools of the day would have made that enterprise a miserable
one. But those ideas were always driving our efforts, and as IPython
started becoming a project with a team, we made multiple attempts to get a
good Notebook built around IPython. Those interested can read an old blog
post of mine with the history
(http://blog.fperez.org/2012/01/ipython-notebook-historical.html). The
short story is that in 2011, on our sixth attempt, Brian was again able to
devote a focused summer to using our client-server architecture and, with
the stack of the modern web (Javascript, CSS, websockets, Tornado, ...),
finally build a robust system for Literate Computing across programming
languages.

Today, thanks to the generous support and vision of Josh Greenberg at the
Alfred P. Sloan Foundation, we are working very hard on building the
notebook infrastructure, and this release contains major advances on that
front. We have high hopes for what we'll do next; as a glimpse of the
future that this enables, now there is a native Julia kernel that speaks
to our clients, notebook included: https://github.com/JuliaLang/IJulia.jl

# Team

I can't stress enough how impressed I am with the work people are doing in
IPython, and what a privilege it is to work with colleagues like these.
Brian Granger and Min Ragan-Kelley joined IPython around 2005, initially
working on the parallel machinery, but since ~2009 they have become the
heart of the project. Today Min is our top committer and knows our
codebase better than anyone else, and I can't imagine better partners for
an effort like this.
And from regulars in our core team like Thomas Kluyver, Matthias
Bussonnier, Brad Froehle and Paul Ivanov to newcomers like Jonathan
Frederic and Zach Sailer, in addition to the many more whose names are in
our logs, we have a crazy amount of energy being poured into IPython. I
hope we'll continue to harness it productively!

The full list of contributors to this release can be seen here:
http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html

# Release highlights

* nbconvert: this is the major piece of new functionality in this cycle,
and was an explicit part of our roadmap
(https://github.com/ipython/ipython/wiki/Roadmap:-IPython). nbconvert is
now an IPython subcommand to convert notebooks into other formats such as
HTML or LaTeX, but more importantly, it's a very flexible system that lets
you write custom templates to generate new output with arbitrary control
over the formatting and transformations that are applied to the input.

  We want to stress that despite the fact that a huge amount of work went
into nbconvert, this should be considered a *tech preview* release. We've
come to realize how complex this problem is, and while we'll make every
effort to keep the high-level command-line syntax and APIs as stable as
possible, it is quite likely that the internals will continue to evolve,
possibly in backwards-incompatible ways. So if you start building services
and libraries that make heavy use of the nbconvert internals, please be
prepared for some turmoil in the months to come, and ping us on the dev
list with questions or concerns.

* Notebook improvements: there has been a ton of polish work in the
notebook at many levels, though the file format remains unchanged from
0.13, so you shouldn't have any problems sharing notebooks with colleagues
still using 0.13.

  - Autosave: probably the most oft-requested feature, the notebook server
    now autosaves your files! You can still hit Ctrl-S to force a manual
    save (which also creates a special 'checkpoint' you can come back to).
  - The notebook supports raw_input(), and thus also %debug. This was
    probably the main deficiency of the notebook as a client compared to
    the terminal/qtconsole, and it has been finally fixed.
  - Add %%html, %%svg, %%javascript, and %%latex cell magics for writing
    raw output in notebook cells.
  - Fix an issue parsing LaTeX in markdown cells, which required users to
    type \\\\ instead of \\.
  - Images support width and height metadata, and thereby 2x scaling
    (retina support).
  - %%file has been renamed %%writefile (%%file is deprecated).

* The input transformation code has been updated and rationalized. This is
a somewhat specialized part of IPython, but of importance to projects that
build upon it for custom environments, like Sympy and Sage.

Our full release notes are here:
http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/version1.0.html

and the gory details are here:
http://ipython.org/ipython-doc/rel-1.0.0/whatsnew/github-stats-1.0.html

# Installation

Installation links and instructions are at: http://ipython.org/install.html

And IPython is also on PyPI: http://pypi.python.org/pypi/ipython

# Requirements

IPython 1.0 requires Python >= 2.6.5 or >= 3.2.1. It does not support
Python 3.0, 3.1, or 2.5.

# Acknowledgments

Last but not least, we'd like to acknowledge the generous support of those
who make it possible for us to spend our time working on IPython. In
particular, the Alfred P.
Sloan Foundation today lets us have a solid team working full-time on the
project, and without the support of Enthought Inc at multiple points in
our history, we wouldn't be where we are today. The full list of our
support is here: http://ipython.org/index.html#support

Thanks to everyone! Please enjoy IPython 1.0, and report all bugs as usual!

Fernando, on behalf of the IPython Dev Team.

--
Fernando Perez (@fperez_org; http://fperez.org)
fperez.net-at-gmail: mailing lists only (I ignore this when swamped!)
fernando.perez-at-berkeley: contact me here for any direct mail

From mirpurisuraj1 at gmail.com Fri Aug 9 03:03:25 2013
From: mirpurisuraj1 at gmail.com (Suraj Mirpuri)
Date: Fri, 9 Aug 2013 00:03:25 -0700
Subject: [SciPy-User] Optimize.Leastsq Issues
Message-ID:

Hello All,

I think I've been having some difficulty with optimize.leastsq the past
few hours, because for the life of me I cannot seem to get it to work.

Specifically, my issue is that when I use it without a Jacobian function
it estimates a Jacobian of all 0's, and when I give it a real Jacobian and
use full_output=1 it also returns all 0's. However, this cannot be the
case, as at least at the initial point that I feed in the Jacobian is
definitely non-zero. It just ends up giving me the initial parameters each
time.

I am not quite sure I have it set up right, so I have attached a Pastebin
link to my code if anyone could take a look at it; the output of leastsq
is at the end of it. It's been sort of cobbled together late at night, so
I recognize it's not exactly efficient code -- I just need to get it
working, I guess. Also, I tried both col_deriv=1 and 0 with the
appropriate return value in grad_func, as I found a thread that suggests
this makes a difference. I couldn't see that it did.

Code: http://pastebin.com/qa3gw1AD

Thanks in advance!

--
Raj Mirpuri
Research Assistant
Roberts Group
Caltech

From pav at iki.fi Fri Aug 9 08:26:40 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 09 Aug 2013 15:26:40 +0300
Subject: [SciPy-User] Optimize.Leastsq Issues
In-Reply-To:
References:
Message-ID:

09.08.2013 10:03, Suraj Mirpuri kirjoitti:
> Hello All,
[clip]
> I think I've been having some difficulty with optimize.leastsq the
> past few hours because for the life of me I cannot seem to get it to
> work.

The bug is in your own code: your resid() function always returns the
same result:

    >>> print(randomtest.resid(numpy.array([60., 180., 22., 22., 0.2]),
    ...       [randomtest.sorted_list]))
    [ 1.9845187  32.64266682  26.29888414 -16.90617247  -5.73287752
     -0.58050037   2.75023576]

    >>> print(randomtest.resid(numpy.array([6., 18., 2., 2., 0.]),
    ...       [randomtest.sorted_list]))
    [ 1.9845187  32.64266682  26.29888414 -16.90617247  -5.73287752
     -0.58050037   2.75023576]
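The diagnosis suggests a check worth running before blaming the optimizer:
evaluate the residual function at two different parameter vectors and
confirm that the output actually changes. A self-contained sketch, with a
trivial stand-in for the poster's resid():

    import numpy as np

    def resid(params, data):
        # stand-in residual function; a correct one must really use params
        return data - params[0] * np.arange(data.size)

    data = np.linspace(0., 10., 7)
    p0 = np.array([1.0])
    r0 = resid(p0, data)
    r1 = resid(p0 * 1.1, data)
    assert not np.allclose(r0, r1), "residuals do not depend on the parameters"

If the assertion fires, leastsq sees a constant objective, estimates a zero
Jacobian, and returns the initial parameters -- exactly the symptoms
described above.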
From mirpurisuraj1 at gmail.com Fri Aug 9 08:53:02 2013
From: mirpurisuraj1 at gmail.com (Suraj Mirpuri)
Date: Fri, 9 Aug 2013 05:53:02 -0700
Subject: [SciPy-User] Optimize.Leastsq Issues
In-Reply-To:
References:
Message-ID:

Well, now I feel stupid. I don't even know what I did to correct that bug,
but I just tested it and it works like a charm now. I guess it really
helps having another eye to look at your obvious mistakes. Thanks so much
for your time!

Regards,
Raj Mirpuri

From njs at pobox.com Sat Aug 10 12:34:47 2013
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 10 Aug 2013 17:34:47 +0100
Subject: [SciPy-User] [ANN] patsy version 0.2.0 released
Message-ID:

Hi all,

I've just released version 0.2.0 of patsy, a Python library for describing
statistical models (especially linear models, or models that have a linear
component) and building design matrices. Patsy brings the convenience of R
"formulas" to Python.

-- New in this release
(for more details: https://patsy.readthedocs.org/en/latest/changes.html#v0-2-0)

* New support for automatically detecting and (optionally) removing
  missing values.
* New stateful transform for B-spline regression: bs(). (Requires scipy.)
* Added a core API to make it possible to run predictions on only a subset
  of model terms. (This is particularly useful for e.g. plotting the
  isolated effect of a single fitted spline term.) See
  DesignMatrixBuilder.subset().
* LookupFactor now allows users to mark variables as categorical directly.
* pandas.Categorical objects are now recognized as representing
  categorical data and handled appropriately.
* Better error reporting for exceptions raised by user code inside
  formulas. We now, whenever possible, tag the generated exception with
  information about which factor's code raised it, and use this
  information to give better error reporting.
* EvalEnvironment.capture() now takes a reference argument, to make it
  easier to implement new dmatrix()-like functions.
* Miscellaneous doc improvements and bug fixes.

-- General information

Documentation: http://patsy.readthedocs.org/
Downloads: http://pypi.python.org/pypi/patsy/

Dependencies:
* numpy
* Python 2.5 or later (Python 3 works)

Optional dependencies:
* nose: needed to run tests
* scipy: needed for spline-related functions like 'bs'

Install: pip install patsy (or python setup.py install)

Code and bug tracker: https://github.com/pydata/patsy

Mailing list:
* pydata at googlegroups.com
* http://groups.google.com/group/pydata
* http://news.gmane.org/gmane.comp.python.pydata

License: 2-clause BSD, see LICENSE.txt for details.

Share and enjoy,
-n

From matthew.brett at gmail.com Sat Aug 10 21:28:39 2013
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sat, 10 Aug 2013 18:28:39 -0700
Subject: [SciPy-User] scipy.io.loadmat throws TypeError with large files
In-Reply-To:
References:
Message-ID:

Hi,

On Thu, Aug 8, 2013 at 4:41 PM, Richard Llewellyn wrote:
> Hi Matthew,
>
> A short script below shows that increasing the density triggers the
> error, on my machine, at file sizes over 4GB.
> Originally I had increased either M or N to trigger the error as well.
>
> I suspect you'll run into a problem with available RAM. I run this on my
> 32GB machine with 64GB swap, and it swaps, so this takes several minutes
> to process at least.
>
> Maybe a generator could be used to build the matrix? Still, I think RAM
> will be an issue.

Aha - thanks for tracking that down a little further.

The problem is that the MATLAB 5-7 file format (non-HDF) has a uint32 to
store the number of bytes that the matrix takes up on disk. Your matrices
causing the error are a little larger than 2**32 bytes, hence the error.
Here's a relevant thread:

http://www.mathworks.de/matlabcentral/newsreader/view_thread/307845

It's not hard to reproduce the error on non-sparse matrices (appended
script). We certainly need a better error for this - I'll try putting one
in.

Cheers,

Matthew

    from io import BytesIO

    import numpy as np

    from scipy.io import loadmat, savemat

    fobj = BytesIO()
    m = np.empty(2**32, dtype=np.int8)
    n = np.arange(10).reshape((2, 5))
    savemat(fobj, {'mat': m, 'n': n})
    # fails here with TypeError
    m = loadmat(fobj)['mat']

From ralf.gommers at gmail.com Sun Aug 11 08:05:18 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 11 Aug 2013 14:05:18 +0200
Subject: [SciPy-User] Built-in FAP for Lomb-Scargle?
In-Reply-To:
References:
Message-ID:

On Thu, Aug 8, 2013 at 11:09 PM, Nicholas Earl wrote:
> Hello everyone,
>
> I was just curious whether there are any built-in functions to provide
> the false alarm probability of the Lomb-Scargle routine in SciPy?

There are no such functions in scipy.signal, but it would be a useful
enhancement. Contributions welcome!

Cheers,
Ralf
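Until such an enhancement lands, the classical estimate is easy to compute
by hand. A sketch under the usual Scargle (1982) assumptions: the peak
power has been normalized by the data variance (scipy.signal.lombscargle
itself returns unnormalized power), and the number of independent
frequencies M is approximated, often by the number of samples:

    import numpy as np

    def lomb_scargle_fap(peak_power, n_independent):
        # probability that pure noise produces at least one peak this
        # high among n_independent independent frequencies
        return 1.0 - (1.0 - np.exp(-peak_power)) ** n_independent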
From mdornfe1 at gmail.com Mon Aug 12 16:12:28 2013
From: mdornfe1 at gmail.com (Matthew Dornfeld)
Date: Mon, 12 Aug 2013 16:12:28 -0400
Subject: [SciPy-User] vti file animation
Message-ID: <520941AC.4080308@gmail.com>

Hi,

I have a bunch of vti files. Each one corresponds to the output of a
simulation for one time step. I want to combine these all together and
make an animation. I'm trying to do this with Mayavi, but I'm having
trouble finding documentation on how. Can anyone help?

Matt

From John.Floyd at environment.nsw.gov.au Mon Aug 12 18:38:04 2013
From: John.Floyd at environment.nsw.gov.au (John Floyd)
Date: Mon, 12 Aug 2013 22:38:04 +0000
Subject: [SciPy-User] scipy sparse efficiency questions
Message-ID:

Scipy sparse direct solvers use SuperLU as the backend by default.

SuperLU supports reusing previous reordering data, which is particularly
useful when time-stepping through solutions for systems where the matrix
structure does not change with time. Looking through the scipy code
indicates that the internal variables do keep the c_perm and r_perm
variables. I cannot see a way to use this feature with this wrapper. Have
I missed something? Can I rerun the solver without re-calculating the
reordering?

This also applies if I use spilu as a preconditioner - it appears to
recalculate the reordering on each call. This is numerically expensive.

I realise that PETSc and Trilinos offer a more complete solution, but for
the time being I just want a simplified approach that I can move easily
from Linux/Unix to Windows. The latter causes plenty of problems when I
have to compile my own modules - I develop on Linux.

Thanks
John

John Floyd
Water, Wetlands and Coasts Science
Office of Environment and Heritage
Department of Premier and Cabinet
john.floyd at environment.nsw.gov.au
Ph: 02-9995-5656

From parrenin.ujf at gmail.com Tue Aug 13 08:06:43 2013
From: parrenin.ujf at gmail.com (Frédéric Parrenin)
Date: Tue, 13 Aug 2013 14:06:43 +0200
Subject: [SciPy-User] problem with the leastsq function
Message-ID:

Dear all,

I am currently building a code which uses the leastsq function for a
minimization problem. I apply leastsq to a function named residuals, which
takes as input a vector named variables.

For some configurations of the problem, I get the following error message:

    error: Error occurred while calling the Python function named residuals

This is not very meaningful. Is there a way to debug this leastsq function
and to really know where the problem is?

Best regards,

Frédéric

From fx.thomas at gmail.com Tue Aug 13 09:02:44 2013
From: fx.thomas at gmail.com (François-Xavier Thomas)
Date: Tue, 13 Aug 2013 15:02:44 +0200
Subject: [SciPy-User] problem with the leastsq function
In-Reply-To:
References:
Message-ID:

Frédéric,

If I'm not mistaken, you definitely should be getting a full stack trace
in your console output, not just the error. Additionally, have you tried
using the Python debugger[1]?

Cheers,
François-Xavier

[1] http://docs.python.org/2/library/pdb.html

From pav at iki.fi Tue Aug 13 10:39:49 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 13 Aug 2013 17:39:49 +0300
Subject: [SciPy-User] scipy sparse efficiency questions
In-Reply-To:
References:
Message-ID:

13.08.2013 01:38, John Floyd kirjoitti:
> Scipy sparse direct solvers use SuperLU as the backend by default.
>
> SuperLU supports reusing previous reordering data, which is
> particularly useful when time-stepping through solutions for
> systems where the matrix structure does not change with time.
> Looking through the scipy code indicates that the internal variables
> do keep the c_perm and r_perm variables. I cannot see a way to use
> this feature with this wrapper.

splu and spilu accept many of SuperLU's internal parameters. However, I
don't think they support keeping the sparsity structure the same.

If you know how to do this via SuperLU's C API, it is probably not a huge
effort to extend Scipy's wrappers to support this.

--
Pauli Virtanen

From guziy.sasha at gmail.com Wed Aug 14 16:35:54 2013
From: guziy.sasha at gmail.com (Oleksandr Huziy)
Date: Wed, 14 Aug 2013 16:35:54 -0400
Subject: [SciPy-User] Read file with comma decimal separator
In-Reply-To: <4930843.uyAzFXlt8I@horus>
References: <4930843.uyAzFXlt8I@horus>
Message-ID:

Hi,

I do not know if this is still relevant, but I just saw a function in
pandas which has an option to specify the decimal separator and the
header. This gives you a DataFrame object; you can get the values using
the DataFrame.values property:

    pandas.io.parsers.read_table(filepath_or_buffer, ..., decimal=',', header=None)

See this example:

    from StringIO import StringIO  # Python 2; use io.StringIO on Python 3
    import pandas as pd

    In [31]: X = StringIO('1,1\t1,2\t1,3\n2,1\t2,2\t2,3')

    In [32]: table = pd.io.parsers.read_table(X, decimal=",", header=None)

    In [33]: table
    Out[33]:
         0    1    2
    0  1.1  1.2  1.3
    1  2.1  2.2  2.3

    In [34]: table.values
    Out[34]:
    array([[ 1.1,  1.2,  1.3],
           [ 2.1,  2.2,  2.3]])

--
Sasha

2013/7/31 Florian Lindner
> Hello,
>
> I have a file that uses a comma as the decimal separator. How can I read
> a file like that using loadtxt or genfromtxt?
>
> Thanks,
> Florian
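Since the original question asked about loadtxt/genfromtxt, one plain-numpy
alternative is to rewrite the decimal commas before parsing. A sketch that
assumes tab-separated fields, so that the comma never doubles as a field
separator:

    import numpy as np
    from io import BytesIO

    raw = b"1,1\t1,2\t1,3\n2,1\t2,2\t2,3"
    # replace decimal commas with dots, then parse as usual
    arr = np.genfromtxt(BytesIO(raw.replace(b",", b".")), delimiter="\t")
    # arr is now [[1.1, 1.2, 1.3], [2.1, 2.2, 2.3]]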
From jsseabold at gmail.com Wed Aug 14 19:56:14 2013
From: jsseabold at gmail.com (Skipper Seabold)
Date: Wed, 14 Aug 2013 19:56:14 -0400
Subject: [SciPy-User] [ANN] statsmodels version 0.5.0 released
Message-ID:

Hi all,

I'm happy to announce on behalf of the statsmodels development team the
release of 0.5.0. It's been a little over a year in the making. The
release includes over 2000 commits from 38 authors. All users are
encouraged to upgrade to this version.

Highlights
========

* Support for fitting models via R-like formulas
* ANOVA modeling
* Multivariate kernel density estimators
* Nonparametric regression
* Empirical likelihood estimators
* Quantile regression
* Negative binomial maximum likelihood estimator
* l1-penalized discrete choice models
* New and improved graphics
* Power and sample size calculations
* And much more

Full Release Notes:
http://statsmodels.sourceforge.net/stable/release/version0.5.html

Issues Closed:
http://statsmodels.sourceforge.net/stable/release/github-stats-0.5.html#issues-list-05

Installers: https://pypi.python.org/pypi/statsmodels

Thanks to
========

git log v0.4.3..v0.5.0 --format='* %aN ' | sort -u

* Ana Martinez Pardo * anov * avishaylivne * Bruno Rodrigues * Carl Vogel
* Chad Fulton * Christian Prinoth * Daniel B. Smith * dengemann
* Dieter Vandenbussche * Dougal Sutherland * Enrico Giampieri
* evelynmitchell * George Panterov * Grayson * Jan Schulz * Jeff Reback
* Josef Perktold * Justin Grana * langmore * Matthew Brett
* Nathaniel J. Smith * otterb * padarn * Paul Hobson * Pietro Battiston
* Ralf Gommers * Richard T. Guy * Robert Cimrman * Skipper Seabold
* Thomas Haslwanter * timmie * Tom Augspurger * Trent Hauck * tylerhartley
* Vincent Arel-Bundock * VirgileFritsch * Zhenya

Let me know if you want to update your Mailmap name.

What is it
========

Statsmodels is a Python module that allows users to explore data, estimate
statistical models, and perform statistical tests. An extensive list of
descriptive statistics, statistical tests, plotting functions, and result
statistics is available for different types of data and each estimator.
Researchers across fields may find that statsmodels fully meets their
needs for statistical computing and data analysis in Python.

Dependencies
===========

Required dependencies:

* python >= 2.6 (including Python 3.x)
* numpy >= 1.5.1
* scipy >= 0.7
* pandas >= 0.7.1
* patsy >= 0.1.0

Build dependencies:

* cython >= 0.15.1 (if building from the github repo)
* C compiler

Optional dependencies:

* matplotlib >= 1.0.0: needed for plotting
* sphinx >= 1.0.0: needed to build the docs
* ipython >= 1.0: needed to build the notebook examples
* nose >= 1.0.0: needed to run the tests

Links
======

Documentation: http://statsmodels.sourceforge.net/
Mailing List: https://groups.google.com/forum/#!forum/pystatsmodels
PyPi: https://pypi.python.org/pypi/statsmodels
Github: https://github.com/statsmodels/statsmodels/
Bug Tracker: https://github.com/statsmodels/statsmodels/issues

Cheers,
Skipper

From John.Floyd at environment.nsw.gov.au Thu Aug 8 22:26:30 2013
From: John.Floyd at environment.nsw.gov.au (John Floyd)
Date: Fri, 9 Aug 2013 02:26:30 +0000
Subject: [SciPy-User] sparse.linalg.splu can I use option Fact=SamePattern
Message-ID:

I am using scipy in a time-variable flow model. The matrix pattern does
not change over time, so the matrix reordering does not need to be
recalculated each timestep. SuperLU supports this with the
Fact=SamePattern option - can it be done using the Python interface? If
so, please give an example.

Thanks
John

John Floyd
Water, Wetlands and Coasts Science
Office of Environment and Heritage
Department of Premier and Cabinet
john.floyd at environment.nsw.gov.au
Ph: 02-9995-5656
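While the SamePattern flag itself is not exposed, the related pattern of
"factor once, solve many times" is already expressible through the object
returned by splu; a minimal sketch with a made-up small system:

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import splu

    # toy system standing in for the flow-model matrix (splu wants CSC)
    A = sparse.csc_matrix(np.array([[4., 1., 0.],
                                    [1., 3., 1.],
                                    [0., 1., 2.]]))
    lu = splu(A)              # ordering and factorization happen once

    b = np.ones(3)
    for step in range(10):    # reuse the factorization every timestep
        x = lu.solve(b)
        b = b + 0.1 * x       # hypothetical right-hand-side update

This only avoids refactorization when the matrix values themselves are
unchanged; when only the sparsity pattern is fixed, as in the question
above, the SamePattern option would still need wrapper support.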
Josef

> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> Hi
>
> Did you have a look at the lmfit package
> (http://newville.github.io/lmfit-py/)? I am trying some minimizations
> with boundaries and constraints, and it seems that this is the way to
> go. I do not know if somebody here has ever used it.
>
> Best
>
> Jose
>
> --
> Jose Guzman
> http://www.ist.ac.at/~jguzman/
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From pav at iki.fi  Sat Aug 17 05:57:51 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 17 Aug 2013 12:57:51 +0300
Subject: [SciPy-User] sparse.linalg.splu can I use option Fact=SamePattern
In-Reply-To:
References:
Message-ID:

09.08.2013 05:26, John Floyd wrote:
[clip]
> SuperLU supports this with the Fact=SamePattern option - can it be
> done using the python interface - if so please give an example.

I'm sure you are the first to try to use the Python wrapper for this
purpose, so the answers to your two questions here are not known.
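For reference, what the wrapper does expose today looks roughly like this
(a sketch from memory, untested; the options dict is forwarded to SuperLU,
and Fact/SamePattern is not among the supported entries):

import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
# factor once, then reuse the factorization for many right-hand sides
lu = splu(A, permc_spec="COLAMD", options=dict(SymmetricMode=True))
for b in np.eye(3):
    x = lu.solve(b)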
If it is simply a matter of passing in this flag to SuperLU, then it will
probably work, but if you also need to pass in additional data telling
SuperLU about the previous permutation, then it probably won't work out of
the box.

--
Pauli Virtanen

From michal.romaniuk06 at imperial.ac.uk  Mon Aug 19 10:44:48 2013
From: michal.romaniuk06 at imperial.ac.uk (Michal Romaniuk)
Date: Mon, 19 Aug 2013 15:44:48 +0100
Subject: [SciPy-User] Maximum file size for savemat?
Message-ID: <52122F60.8040607@imperial.ac.uk>

Hi,

I'm saving a large batch of data using savemat and although I get no
errors, the files produced are not readable by either Matlab or SciPy.
Is there a limit on file size?

Thanks,
Michal

From matthew.brett at gmail.com  Mon Aug 19 11:56:02 2013
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 19 Aug 2013 08:56:02 -0700
Subject: [SciPy-User] Maximum file size for savemat?
In-Reply-To: <52122F60.8040607@imperial.ac.uk>
References: <52122F60.8040607@imperial.ac.uk>
Message-ID:

Hi,

On Mon, Aug 19, 2013 at 7:44 AM, Michal Romaniuk wrote:
> Hi,
>
> I'm saving a large batch of data using savemat and although I get no
> errors, the files produced are not readable by either Matlab or SciPy.
> Is there a limit on file size?

Ah - yes there is - the individual matrices in the mat file cannot be
larger than 4GB. Is it possible you hit this limit?

Sorry, I only realized this when Richard Llewellyn pointed this out a
couple of weeks ago on the list:

http://scipy-user.10969.n7.nabble.com/SciPy-User-scipy-io-loadmat-throws-TypeError-with-large-files-td18558.html

The current scipy code has an error message for matrices that are too
large.

Cheers,

Matthew

From michal.romaniuk06 at imperial.ac.uk  Tue Aug 20 10:39:24 2013
From: michal.romaniuk06 at imperial.ac.uk (Michal Romaniuk)
Date: Tue, 20 Aug 2013 15:39:24 +0100
Subject: [SciPy-User] SciPy-User Digest, Vol 120, Issue 12
In-Reply-To:
References:
Message-ID: <52137F9C.5000802@imperial.ac.uk>

Hi,

> Ah - yes there is - the individual matrices in the mat file cannot be
> larger than 4GB. Is it possible you hit this limit?
>
> Sorry, I only realized this when Richard Llewellyn pointed this out a
> couple of weeks ago on the list:
>
> http://scipy-user.10969.n7.nabble.com/SciPy-User-scipy-io-loadmat-throws-TypeError-with-large-files-td18558.html
>
> The current scipy code has an error message for matrices that are too
> large.
>
> Cheers,
>
> Matthew

Well, I managed to work around the problem to some extent by setting
do_compression=True. Now Matlab can read those files (so they must be
valid to some extent) but SciPy can't (even though they were written with
SciPy).
I get this error:

PATH/lib/python2.6/site-packages/scipy/io/matlab/mio.pyc in loadmat(file_name, mdict, appendmat, **kwargs)
    173         variable_names = kwargs.pop('variable_names', None)
    174         MR = mat_reader_factory(file_name, appendmat, **kwargs)
--> 175         matfile_dict = MR.get_variables(variable_names)
    176         if mdict is not None:
    177             mdict.update(matfile_dict)

PATH/lib/python2.6/site-packages/scipy/io/matlab/mio5.pyc in get_variables(self, variable_names)
    290                 continue
    291             try:
--> 292                 res = self.read_var_array(hdr, process)
    293             except MatReadError, err:
    294                 warnings.warn(

PATH/lib/python2.6/site-packages/scipy/io/matlab/mio5.pyc in read_var_array(self, header, process)
    253         `process`.
    254         '''
--> 255         return self._matrix_reader.array_from_header(header, process)
    256
    257     def get_variables(self, variable_names=None):

PATH/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy/io/matlab/mio5_utils.c:5401)()

PATH/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy/io/matlab/mio5_utils.c:4849)()

PATH/lib/python2.6/site-packages/scipy/io/matlab/mio5_utils.so in scipy.io.matlab.mio5_utils.VarReader5.read_real_complex (scipy/io/matlab/mio5_utils.c:5602)()

ValueError: total size of new array must be unchanged

The size of the main array is about 9 GB before compression, but the
compressed files are less than 500 MB and closer to 400 MB. There are
some other arrays in the file too, but they are much smaller.

Any ideas on how I could get SciPy to read this data back? Right now I
can only think of storing the data in single precision format...

Thanks,
Michal

From michal.romaniuk06 at imperial.ac.uk  Tue Aug 20 13:33:37 2013
From: michal.romaniuk06 at imperial.ac.uk (Michal Romaniuk)
Date: Tue, 20 Aug 2013 18:33:37 +0100
Subject: [SciPy-User] Maximum file size for savemat?
In-Reply-To:
References:
Message-ID: <5213A871.3050209@imperial.ac.uk>

Hi,

With some further work, I found that the file produced by SciPy isn't
actually correct. Matlab can read it, but at some point in the array the
rest of the data is just zeros. I'm surprised that SciPy doesn't throw an
error when writing data that is too big...

Are there any good alternatives to .mat files? (Preferably something that
Matlab could read too.) PyTables?

The data consists of one large array (around 9 GB), one list containing a
few smaller arrays, and a few other arrays and scalars.

Thanks,
Michal

> [earlier messages and traceback quoted in full; snipped]
From pav at iki.fi  Tue Aug 20 14:00:05 2013
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 20 Aug 2013 21:00:05 +0300
Subject: [SciPy-User] Maximum file size for savemat?
In-Reply-To: <5213A871.3050209@imperial.ac.uk>
References: <5213A871.3050209@imperial.ac.uk>
Message-ID:

20.08.2013 20:33, Michal Romaniuk wrote:
[clip]
> Are there any good alternatives to .mat files? (Preferably
> something that Matlab could read too.) PyTables?
>
> The data consists of one large array (around 9 GB), one list
> containing a few smaller arrays, and a few other arrays and
> scalars.

Use HDF5 --- both Matlab and Python can work with it. On the Python side,
use either h5py or PyTables, depending on which one you like more.

As you found out, the .mat file format simply does not support data
bigger than 4 GB. The development version of Scipy (to be 0.13.0) should
throw a warning AFAIK.
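Something along these lines (an untested sketch with h5py; on the Matlab
side, h5read('big.h5', '/data') pulls the dataset back in):

import numpy as np
import h5py

data = np.random.rand(1000, 1000)   # stand-in for the 9 GB array
with h5py.File('big.h5', 'w') as f:
    # chunked, gzip-compressed dataset; no 4 GB limit
    f.create_dataset('data', data=data, compression='gzip')

with h5py.File('big.h5', 'r') as f:
    back = f['data'][:]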
--
Pauli Virtanen

From matthew.brett at gmail.com  Tue Aug 20 14:24:29 2013
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 20 Aug 2013 11:24:29 -0700
Subject: [SciPy-User] Maximum file size for savemat?
In-Reply-To: <5213A871.3050209@imperial.ac.uk>
References: <5213A871.3050209@imperial.ac.uk>
Message-ID:

Hi,

On Tue, Aug 20, 2013 at 10:33 AM, Michal Romaniuk wrote:
> Hi,
>
> With some further work, I found that the file produced by SciPy isn't
> actually correct. Matlab can read it, but at some point in the array the
> rest of the data is just zeros. I'm surprised that SciPy doesn't throw
> an error when writing data that is too big...

Yes, that was a bug. It should be fixed in the current code and the next
release.

The problem is that there is a length-of-array entry in the matfile that
is a uint32, so there is no way of storing matrices longer than 4GB.
Because I hadn't considered the case of very large matrices, this length
value was silently overflowing, so the pointer to the next matrix in the
mat file will be garbage, and the effect is unpredictable.

> Are there any good alternatives to .mat files? (Preferably something
> that Matlab could read too.) PyTables?

I have no experience with the Matlab HDF5 (7.3) format - but I guess that
is a reasonable option.

> The data consists of one large array (around 9 GB), one list containing
> a few smaller arrays, and a few other arrays and scalars.

I guess you could save everything but the large array in a mat file, and
save the large array as simple binary data?
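For example, a rough sketch (raw native-endian float64, which Matlab's
fread can also digest):

import numpy as np

big = np.random.rand(500, 500)   # stand-in for the large array
big.tofile('big.dat')            # plain binary dump, no 4GB limit

# read it back lazily, without pulling everything into memory:
back = np.memmap('big.dat', dtype=np.float64, mode='r', shape=(500, 500))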
Cheers,

Matthew

From hiei.arhiva at gmail.com  Wed Aug 21 10:00:06 2013
From: hiei.arhiva at gmail.com (ha)
Date: Wed, 21 Aug 2013 16:00:06 +0200
Subject: [SciPy-User] possible bug in OpenCV python module (cv2)
In-Reply-To:
References:
Message-ID:

When I run:

import numpy as np
import cv2

img = np.uint8([[0,255,255,0],[0,255,255,0],[0,255,255,0],[0,255,255,0]])
aaa, bbb = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
img

I get the following output:

array([[  0,   0,   0,   0],
       [  0,   2, 130,   0],
       [  0,   2, 130,   0],
       [  0,   0,   0,   0]], dtype=uint8)

This does not seem to me to be the expected behavior, as the input image
has changed. Should this be treated as a bug?

What's your experience with the cv2 module?
Is this the right place to discuss such problems?

From juanlu001 at gmail.com  Wed Aug 21 10:53:59 2013
From: juanlu001 at gmail.com (Juan Luis Cano)
Date: Wed, 21 Aug 2013 16:53:59 +0200
Subject: [SciPy-User] possible bug in OpenCV python module (cv2)
In-Reply-To:
References:
Message-ID: <5214D487.2080703@gmail.com>

On 08/21/2013 04:00 PM, ha wrote:
> When I run:
> img = np.uint8([[0,255,255,0],[0,255,255,0],[0,255,255,0],[0,255,255,0]])
> aaa, bbb = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
> img
>
> I get the following output:
> array([[  0,   0,   0,   0],
>        [  0,   2, 130,   0],
>        [  0,   2, 130,   0],
>        [  0,   0,   0,   0]], dtype=uint8)
>
> This does not seem to me to be the expected behavior, as the input
> image has changed. Should this be treated as a bug?
>
> What's your experience with the cv2 module?
> Is this the right place to discuss such problems?

Probably this is the right place:

http://code.opencv.org/projects/opencv/issues

From pierre.raybaut at gmail.com  Wed Aug 21 14:56:43 2013
From: pierre.raybaut at gmail.com (Pierre Raybaut)
Date: Wed, 21 Aug 2013 20:56:43 +0200
Subject: [SciPy-User] ANN: Spyder v2.2.3 released
Message-ID:

Hi all,

On behalf of Spyder's development team
(http://code.google.com/p/spyderlib/people/list), I'm pleased to announce
that Spyder v2.2.3 has been released and is available for Windows
XP/Vista/7/8, GNU/Linux and MacOS X: http://code.google.com/p/spyderlib/.

This is a maintenance release of the v2.2 branch, which is the last
release line to support Python 2.5:
* Spyder 2.2 supports Python 2.5 to 2.7
* Spyder 2.3 will support Python 2.7 to 3.3
* (Spyder 2.3.0dev6 is also available as of today: this is an
experimental but quite stable release which already supports Python 3)

See also https://code.google.com/p/spyderlib/downloads/list.

Since v2.2.2:
* Several bugs have been fixed (see the changelog for further details:
https://code.google.com/p/spyderlib/wiki/ChangeLog)
* New features have been added, like full support for a MATLAB-like cell
mode (see the "Run" menu) and the Optional Dependencies dialog box (see
the "?" menu), which gives the user a status of Spyder's dependencies

Spyder is a free, open-source (MIT license) interactive development
environment for the Python language with advanced editing, interactive
testing, debugging and introspection features. Originally designed to
provide MATLAB-like features (integrated help, interactive console,
variable explorer with GUI-based editors for dictionaries, NumPy arrays,
...), it is strongly oriented towards scientific computing and software
development.

Thanks to the `spyderlib` library, Spyder also provides powerful
ready-to-use widgets: embedded Python console (example:
http://packages.python.org/guiqwt/_images/sift3.png), NumPy array editor
(example: http://packages.python.org/guiqwt/_images/sift2.png),
dictionary editor, source code editor, etc.

Description of key features with tasty screenshots can be found at:
http://code.google.com/p/spyderlib/wiki/Features

Don't forget to follow Spyder updates/news:
* on the project website: http://code.google.com/p/spyderlib/
* and on our official blog: http://spyder-ide.blogspot.com/

Last, but not least, we welcome any contribution that helps make Spyder
an efficient scientific development/computing environment. Join us to
help create your favourite environment!
(http://code.google.com/p/spyderlib/wiki/NoteForContributors)

Enjoy!
-Pierre
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com  Thu Aug 22 09:12:21 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 22 Aug 2013 15:12:21 +0200
Subject: [SciPy-User] ANN: Scipy 0.13.0 beta 1 release
Message-ID:

Hi all,

I'm happy to announce the availability of the first beta release of Scipy
0.13.0. Please try this beta and report any issues on the scipy-dev
mailing list.

Source tarballs and release notes can be found at
https://sourceforge.net/projects/scipy/files/scipy/0.13.0b1/. Windows and
OS X installers will follow later (we have a minor infrastructure issue
to solve, and I'm at EuroScipy now).

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dragontse at hotmail.com  Fri Aug 23 16:49:33 2013
From: dragontse at hotmail.com (Cy. T.)
Date: Fri, 23 Aug 2013 13:49:33 -0700
Subject: [SciPy-User] Optimization on the subspace of a domain
Message-ID:

Hello there,

As I am learning how to do minimization using SciPy, I wonder if I can do
the minimization on just a (proper) subspace of the domain of a function.

For instance, if I have a multivariate function f(x,y,z), can I ask SciPy
to find a minimum by changing only x and y and keeping z fixed?

It would work if I define a new function: f(x,y,z_0) = h(x,y) for a given
z_0, but I wonder if there is an option for doing that without defining
another function.

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From davidmenhur at gmail.com  Fri Aug 23 16:58:44 2013
From: davidmenhur at gmail.com (Daπid)
Date: Fri, 23 Aug 2013 22:58:44 +0200
Subject: [SciPy-User] Optimization on the subspace of a domain
In-Reply-To:
References:
Message-ID:

On 23 August 2013 22:49, Cy. T.
wrote:
> It would work if I define a new function: f(x,y,z_0) = h(x,y) for a
> given z_0, but I wonder if there is an option for doing that without
> defining another function.

A lambda function will do the trick, avoiding having to give it a new
name (minimize passes a single array of the free parameters):

minimize(lambda p: f(p[0], p[1], z0), ...)

This only works for very simple subdomains of the function, though.
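Fully spelled out, using args to hold z fixed (a small self-contained
sketch with a made-up f):

import numpy as np
from scipy.optimize import minimize

def f(p, z):                    # p bundles the free variables x and y
    x, y = p
    return (x - 1.0)**2 + (y + 2.0)**2 + z**2

z0 = 3.0
res = minimize(f, x0=np.zeros(2), args=(z0,))   # z is held fixed at z0
# res.x is approximately [1.0, -2.0]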
David.

From newville at cars.uchicago.edu  Fri Aug 23 18:03:39 2013
From: newville at cars.uchicago.edu (Matt Newville)
Date: Fri, 23 Aug 2013 17:03:39 -0500
Subject: [SciPy-User] Optimization on the subspace of a domain
In-Reply-To:
References:
Message-ID:

Hi,

On Fri, Aug 23, 2013 at 3:49 PM, Cy. T. wrote:
> Hello there,
>
> As I am learning how to do minimization using SciPy, I wonder if I can
> do the minimization on just a (proper) subspace of the domain of a
> function.
>
> For instance, if I have a multivariate function f(x,y,z), can I ask
> SciPy to find a minimum by changing only x and y and keeping z fixed?
>
> It would work if I define a new function: f(x,y,z_0) = h(x,y) for a
> given z_0, but I wonder if there is an option for doing that without
> defining another function.
>
> Thanks.

Depending on your needs, you might find the lmfit-py package
(https://github.com/newville/lmfit-py) useful. With this approach, you
would write your function once in terms of a set of Parameters:

import lmfit

def f(params, *args, **kwargs):
    x = params['x'].value
    y = params['y'].value
    z = params['z'].value
    # do calculation ...
    return value_to_minimize

params = lmfit.Parameters()
params.add('x', value=10.0)
params.add('y', value=2.0, min=0)
params.add('z', value=0, vary=False)

lmfit.minimize(f, params)
print(lmfit.report_fit(params))

Each Parameter can be varied or fixed (vary=False), have upper and/or
lower bounds placed on its value, or have its value evaluated as a
function of other Parameters. All of these parameter settings can be
changed independently of the implementation of the objective function.

By default, the least-squares minimization from scipy.optimize.leastsq()
is used (and so value_to_minimize should be an array that will be
minimized in the least-squares sense), but the scalar minimization
methods like Nelder-Mead can also be used. For leastsq(), uncertainties
and correlations in the variables are reported from the covariance
matrix, and confidence levels can also be determined more explicitly.

Cheers,

--Matt Newville
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dragontse at hotmail.com  Sun Aug 25 02:25:55 2013
From: dragontse at hotmail.com (Cy. T.)
Date: Sat, 24 Aug 2013 23:25:55 -0700
Subject: [SciPy-User] Optimization on the subspace of a domain
In-Reply-To:
References:
Message-ID:

Thank you both for these great solutions!

> [Matt Newville's reply quoted in full; snipped]

_______________________________________________
SciPy-User mailing list
SciPy-User at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com  Sun Aug 25 16:11:29 2013
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 25 Aug 2013 22:11:29 +0200
Subject: [SciPy-User] sprint mini-report
Message-ID:

Hi all,

Today we had 18 people in the room at the EuroScipy sprint - many of whom
made their first contribution to scipy today - plus Josef joining
remotely and David C hiding out in the scikit-learn room.

Some statistics on what we managed to do:
- 21 scipy PRs
- 9 scipy PRs merged
- 5 scipy issues closed, and many more discussed
- 2 numpy commits
- 2 scipy.org PRs

Thanks everyone for joining (keep those PRs coming... :) ), and Pierre
for the coffee runs and pizza!

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From johannesradinger at gmail.com  Tue Aug 27 03:54:13 2013
From: johannesradinger at gmail.com (Johannes Radinger)
Date: Tue, 27 Aug 2013 09:54:13 +0200
Subject: [SciPy-User] Populate an array with integers (randomly, weighted)
Message-ID:

Hi,

I'd like to randomly populate a numpy array (100,100) so that the
numpy.sum() of the array equals exactly 300 (e.g. distribute 300 rice
grains on a chess board). The final array should only contain integers
(so no floats) from 0 to X (max 300). Furthermore I'd like to use a
weighting factor, so that some of my array cells are more likely to be
populated than others. Therefore I have an array of the same size
(100,100) whose numpy.sum() equals 1:

x = numpy.random.random_sample((100,100))
y = x/numpy.sum(x)

Some side conditions: Of course if one cell has exactly 0 as weighting
factor, the result also has to be 0 for that cell. If only one cell has a
weighting factor at all (all others are 0), the entire 300 has to be
assigned to that cell.

Can that easily be achieved with Scipy/Numpy?

Thank you so much for your help,

Johannes
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com  Tue Aug 27 05:03:36 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 27 Aug 2013 10:03:36 +0100
Subject: [SciPy-User] Populate an array with integers (randomly, weighted)
In-Reply-To:
References:
Message-ID:

On Tue, Aug 27, 2013 at 8:54 AM, Johannes Radinger wrote:
>
> Hi,
>
> I'd like to randomly populate a numpy array (100,100) so that the
> numpy.sum() of the array equals exactly 300 (e.g. distribute 300 rice
> grains on a chess board). The final array should only contain integers
> (so no floats) from 0 to X (max 300). Furthermore I'd like to use a
> weighting factor, so that some of my array cells are more likely to be
> populated than others. Therefore I have an array of the same size
> (100,100) whose numpy.sum() equals 1:
>
> x = numpy.random.random_sample((100,100))
> y = x/numpy.sum(x)
>
> Some side conditions: Of course if one cell has exactly 0 as weighting
> factor, the result also has to be 0 for that cell. If only one cell has
> a weighting factor at all (all others are 0), the entire 300 has to be
> assigned to that cell.

You are describing a multinomial distribution:

http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multinomial.html

--
Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alan.isaac at gmail.com  Tue Aug 27 13:50:59 2013
From: alan.isaac at gmail.com (Alan G Isaac)
Date: Tue, 27 Aug 2013 13:50:59 -0400
Subject: [SciPy-User] Populate an array with integers (randomly, weighted)
In-Reply-To:
References:
Message-ID: <521CE703.5060300@gmail.com>

On 8/27/2013 3:54 AM, Johannes Radinger wrote:
> I'd like to randomly populate a numpy array (100,100) so that the
> numpy.sum() of the array equals exactly 300 (e.g. distribute 300 rice
> grains on a chess board).

One possibility below.
Alan Isaac

>>> wts = np.array([[3,2,0],[1,0,2]])
>>> a = np.zeros(wts.shape)
>>> accwts = np.cumsum(wts.flat)
>>> draws = np.random.random((300)) * accwts[-1]
>>> cts = np.bincount(np.digitize(draws,accwts),minlength=wts.size)
>>> a.flat += cts
>>> a
array([[ 107.,   83.,    0.],
       [  36.,    0.,   74.]])

From travis at continuum.io  Tue Aug 27 14:27:23 2013
From: travis at continuum.io (Travis Oliphant)
Date: Tue, 27 Aug 2013 13:27:23 -0500
Subject: [SciPy-User] Populate an array with integers (randomly, weighted)
In-Reply-To:
References:
Message-ID:

To Robert's point, the following should give you the result. Let y be
this array that provides the "weighting" factor you were talking about.

result = numpy.random.multinomial(300, y.ravel()).reshape(y.shape)

This should be the array you are looking for.
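A quick check of the bookkeeping (a small sketch using your x and y):

import numpy

x = numpy.random.random_sample((100, 100))
y = x / numpy.sum(x)
result = numpy.random.multinomial(300, y.ravel()).reshape(y.shape)

assert result.sum() == 300         # always exactly 300 grains
assert result.dtype.kind == 'i'    # integer counts, no floats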
-Travis

On Tue, Aug 27, 2013 at 4:03 AM, Robert Kern wrote:
> [Johannes's question and Robert's reply quoted in full; snipped]

--
Travis Oliphant
Continuum Analytics, Inc.
http://www.continuum.io
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From magnetotellurics at gmail.com  Tue Aug 27 20:17:05 2013
From: magnetotellurics at gmail.com (Karl Kappler)
Date: Tue, 27 Aug 2013 17:17:05 -0700
Subject: [SciPy-User] Small bug? in matplotlib.pyplot.plot_date()
Message-ID:

Hello,

Wanted to share some unexpected behaviour encountered with
matplotlib.pyplot.plot_date(). This method allows easy plotting of data
with datetime abscissa values. A code snippet for usage is here:

import datetime
import matplotlib as mpl
import numpy as np

# <Make some data and metadata>
N = 899  # nObservations
data = np.random.rand(N)
dt = 86400./N  # set timeStep in seconds
t0 = datetime.datetime.now()
#t0 = datetime.datetime(1,1,1,0,0,0)
#t0 = datetime.datetime(2000,1,1,0,0,0)
# <\Make some data and metadata>

# <Reformat x-axis consistent for plot_date>
list_of_datetimes = [t0 + datetime.timedelta(seconds=i*dt) for i in range(N)]
dates = mpl.dates.date2num(list_of_datetimes)
# <\Reformat x-axis consistent for plot_date>

mpl.pyplot.plot_date(dates, np.real(data), tz=None, fmt='b-', hold=None)
mpl.pyplot.show()

The above snippet creates a plot with the time axis that I want.
Uncommenting the second definition of t0, however, results in a plot with
no x-axis labels. Output to the console (I am working in Spyder) is
pasted below. Uncommenting the third definition of t0 results in
everything working fine again.

So I am speculating that there is a problem in plot_date() having to do
with string-casting the year, perhaps an assumption that it has four
digits? For my use cases I can work around this, but is it a (small) bug?
Karl

________________________________
CONSOLE OUTPUT:

Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/matplotlib/backends/backend_qt4.py", line 220, in resizeEvent
    self.draw()
  File "/usr/lib/pymodules/python2.7/matplotlib/backends/backend_qt4agg.py", line 125, in draw
    FigureCanvasAgg.draw(self)
  File "/usr/lib/pymodules/python2.7/matplotlib/backends/backend_agg.py", line 421, in draw
    self.figure.draw(self.renderer)
  File "/usr/lib/pymodules/python2.7/matplotlib/artist.py", line 55, in draw_wrapper
    draw(artist, renderer, *args, **kwargs)
  File "/usr/lib/pymodules/python2.7/matplotlib/figure.py", line 898, in draw
    func(*args)
  File "/usr/lib/pymodules/python2.7/matplotlib/artist.py", line 55, in draw_wrapper
    draw(artist, renderer, *args, **kwargs)
  File "/usr/lib/pymodules/python2.7/matplotlib/axes.py", line 1997, in draw
    a.draw(renderer)
  File "/usr/lib/pymodules/python2.7/matplotlib/artist.py", line 55, in draw_wrapper
    draw(artist, renderer, *args, **kwargs)
  File "/usr/lib/pymodules/python2.7/matplotlib/axis.py", line 1041, in draw
    ticks_to_draw = self._update_ticks(renderer)
  File "/usr/lib/pymodules/python2.7/matplotlib/axis.py", line 931, in _update_ticks
    tick_tups = [ t for t in self.iter_ticks()]
  File "/usr/lib/pymodules/python2.7/matplotlib/axis.py", line 878, in iter_ticks
    majorLocs = self.major.locator()
  File "/usr/lib/pymodules/python2.7/matplotlib/dates.py", line 750, in __call__
    return self._locator()
  File "/usr/lib/pymodules/python2.7/matplotlib/dates.py", line 571, in __call__
    start = dmin - delta
  File "/usr/lib/python2.7/dist-packages/dateutil/relativedelta.py", line 289, in __rsub__
    return self.__neg__().__radd__(other)
  File "/usr/lib/python2.7/dist-packages/dateutil/relativedelta.py", line 276, in __radd__
    microseconds=self.microseconds))
OverflowError: date value out of range
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From johannesradinger at gmail.com  Wed Aug 28 03:34:35 2013
From: johannesradinger at gmail.com (Johannes Radinger)
Date: Wed, 28 Aug 2013 09:34:35 +0200
Subject: [SciPy-User] SciPy-User Digest, Vol 120, Issue 19
In-Reply-To:
References:
Message-ID:

Thank you Robert, that sounds like a very straightforward solution to the
task. A working example following the given data can look like:

np.random.multinomial(300, y.flat, size=1).reshape(y.shape)

However, I had a similar discussion a while ago on Stack Overflow, where
a different solution was proposed:

np.bincount(np.searchsorted(np.cumsum(y), np.random.random(300)), minlength=y.size).reshape(y.shape)

or, for numpy >= 1.7,

np.bincount(np.random.choice(y.size, 300, p=y.flat), minlength=y.size).reshape(y.shape)

So what I am wondering is: what is the actual difference in the meaning
behind both approaches? Do they actually provide results based on a
totally different meaning of "weighted random"? And what are the
consequences then? As I am not really familiar with statistics, maybe
someone can clarify that?

Thanks a lot!
cheers,
Johannes

On Tue, Aug 27, 2013 at 7:00 PM, wrote:
> [the full SciPy-User Digest, Vol 120, Issue 19 was quoted here; snipped]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From robert.kern at gmail.com  Wed Aug 28 05:05:02 2013
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 28 Aug 2013 10:05:02 +0100
Subject: [SciPy-User] SciPy-User Digest, Vol 120, Issue 19
In-Reply-To:
References:
Message-ID:

On Wed, Aug 28, 2013 at 8:34 AM, Johannes Radinger wrote:
>
> Thank you Robert, that sounds like a very straightforward solution to
> the task. A working example following the given data can look like:
>
> np.random.multinomial(300, y.flat, size=1).reshape(y.shape)
>
> However, I had a similar discussion a while ago on Stack Overflow,
> where a different solution was proposed:
>
> np.bincount(np.searchsorted(np.cumsum(y), np.random.random(300)), minlength=y.size).reshape(y.shape)
>
> or, for numpy >= 1.7,
>
> np.bincount(np.random.choice(y.size, 300, p=y.flat), minlength=y.size).reshape(y.shape)
>
> So what I am wondering is: what is the actual difference in the meaning
> behind both approaches? Do they actually provide results based on a
> totally different meaning of "weighted random"? And what are the
> consequences then? As I am not really familiar with statistics, maybe
> someone can clarify that?

First, if you would like to participate in threads, we would really
appreciate it if you subscribe to the mailing list normally instead of
using the digest. If you feel you must use the digest, please trim your
replies and adjust the Subject: line. Thanks.

As for the different approaches, the only minor benefit of the second
approach is that it does not need normalized weights.
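That is, something like this works with any nonnegative weights (a small
sketch):

import numpy as np

w = np.array([3.0, 2.0, 0.0, 1.0])    # unnormalized weights
edges = np.cumsum(w)
draws = np.random.random(300) * edges[-1]
counts = np.bincount(np.searchsorted(edges, draws), minlength=w.size)
# cells with zero weight can never be drawn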
Otherwise, all of the approaches are sampling the same thing, just rather
less efficiently and clearly than multinomial() does.

--
Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From johannesradinger at gmail.com  Thu Aug 29 00:58:41 2013
From: johannesradinger at gmail.com (Johannes Radinger)
Date: Thu, 29 Aug 2013 06:58:41 +0200
Subject: [SciPy-User] Populate an array with integers (randomly, weighted)
Message-ID:

Hi,

> > Thank you Robert, that sounds like a very straightforward solution to
> > the task. [...]
>
> First, if you would like to participate in threads, we would really
> appreciate it if you subscribe to the mailing list normally instead of
> using the digest. If you feel you must use the digest, please trim your
> replies and adjust the Subject: line. Thanks.

Oh sorry, I totally forgot that the digest was included in the mail; I
was just faster writing the email than I was thinking ;), sorry for that.
Now the mail should be trimmed and the subject is adjusted.

> As for the different approaches, the only minor benefit of the second
> approach is that it does not need normalized weights. Otherwise, all of
> the approaches are sampling the same thing, just rather less efficiently
> and clearly than multinomial() does.

Thank you for that answer. As normalizing weights is an easy step and the
multinomial() approach is much clearer to me, I'll stay with that.

/Johannes
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sarifkin at ucsd.edu  Thu Aug 29 23:37:56 2013
From: sarifkin at ucsd.edu (scott rifkin)
Date: Thu, 29 Aug 2013 20:37:56 -0700
Subject: [SciPy-User] Bug? in scipy.interpolate.splprep
Message-ID: <52201394.7030904@ucsd.edu>

I have been getting an error running splprep in the interpolate module
with task=-1. This is reproducible across different distributions of
Python (Enthought, regular), although all 2.7.x. I have pasted the error
message below. In all cases it ends up aborting or killing the kernel
(for the IPython notebook).

The call is:

tck, u = spi.splprep(xyz, k=deg, u=uu, task=-1, t=knots)

where spi is scipy.interpolate, xyz is the data (a 3x216 numpy array),
deg is 3 here, uu is a 1x216 numpy array, and knots is also a numpy
array. In this case, knots started with four 0s, ended with four 1s, and
had numbers about every 0.05.

I don't get this error when task=1. Sometimes (although not very often)
task=-1 doesn't give me an error, but I need to explore this more to see
if there are any patterns (like a particular set of data).
Thanks much for any help,
Scott

*** glibc detected *** python: free(): invalid next size (fast): 0x0bbd9bb8 ***
======= Backtrace: =========
/lib/i386-linux-gnu/libc.so.6(+0x75ee2)[0xb73e4ee2]
/usr/lib/python2.7/dist-packages/numpy/core/multiarray.so(+0x8f729)[0xb6348729]
python[0x81669e3]
python[0x816653b]
python(PyEval_EvalCodeEx+0x16a)[0x81a3bca]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalCode+0x57)[0x81a4b57]
python(PyEval_EvalFrameEx+0x2b1e)[0x819fe5e]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyEval_EvalFrameEx+0x7c4)[0x819db04]
python(PyEval_EvalCodeEx+0x150)[0x81a3bb0]
python(PyRun_FileExFlags+0xe1)[0x80db871]
python(PyRun_SimpleFileExFlags+0x21a)[0x80dbb2a]
python(Py_Main+0x84c)[0x80dcf0c]
python(main+0x1b)[0x805e9fb]
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0xb73884d3]
python[0x805ea21]
======= Memory map: ========
[process memory map omitted]
[1]+  Killed                  idle -n
Aborted (core dumped)

From cournape at gmail.com  Fri Aug 30 14:16:05 2013
From: cournape at gmail.com (David Cournapeau)
Date: Fri, 30 Aug 2013 19:16:05 +0100
Subject: [SciPy-User] [Numpy-discussion] ANN: Scipy 0.13.0 beta 1 release
In-Reply-To:
References:
Message-ID:

It looks like it broke the build with MKL as well (in, surprise, ARPACK).
I will investigate this further this weekend.

On Thu, Aug 22, 2013 at 2:12 PM, Ralf Gommers wrote:
> Hi all,
>
> I'm happy to announce the availability of the first beta release of
> Scipy 0.13.0. Please try this beta and report any issues on the
> scipy-dev mailing list.
>
> Source tarballs and release notes can be found at
> https://sourceforge.net/projects/scipy/files/scipy/0.13.0b1/. Windows
> and OS X installers will follow later (we have a minor infrastructure
> issue to solve, and I'm at EuroScipy now).
>
> Cheers,
> Ralf
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From teccy at qq.com  Fri Aug 16 23:21:13 2013
From: teccy at qq.com (teccy)
Date: Sat, 17 Aug 2013 03:21:13 -0000
Subject: [SciPy-User] how to get the indexes of non-zero rows of a sparse
 matrix efficiently when using scipy?
From joseluismietta at yahoo.com.ar Mon Aug 19 12:20:03 2013
From: joseluismietta at yahoo.com.ar (Josè Luis Mietta)
Date: Mon, 19 Aug 2013 16:20:03 -0000
Subject: [SciPy-User] At.: question about RAM problem in numpy array code
In-Reply-To: <1372287629.89197.YahooMailNeo@web142306.mail.bf1.yahoo.com>
References: <1372287629.89197.YahooMailNeo@web142306.mail.bf1.yahoo.com>
Message-ID: <1376929522.8308.YahooMailNeo@web142302.mail.bf1.yahoo.com>

Hi experts! I have a Toshiba laptop with a Core i3 and 3 GB of RAM. I'm a newbie Ubuntu 13.04, Python and Sage user. I notice that RAM usage grows while the script runs (it starts at 50% occupation and climbs to 100%), which makes the operating system slower... In the code a few numpy arrays are generated (each one with ~700 elements or more). The script looks like this:

    OUTPUT = []
    number_of_elements = [n1, n2, ...]
    number_of_executions = N
    for j in number_of_elements:
        lalala = []
        for i in srange(N):
            # The algorithm is executed N times, appending one value to
            # 'lalala' each time. In each execution three 300x300
            # (= 90000-element) numpy matrices, called 'M', 'M_v' and
            # 'M_h', participate; they are re-generated for each 'i'
            # index of the loop.
        # Doing math on all N elements of 'lalala', a 'pipipi' element
        # is generated and then appended to the OUTPUT array.

When execution ends, the OUTPUT array has the same length as the 'number of elements' array. In each 'for' cycle other arrays participate as well. What can I do to troubleshoot this? Is it possible that old arrays from previous iterations are kept in RAM? If so, how can I delete them?

Waiting for your answers. Thanks a lot!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
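One pattern worth trying, as a minimal sketch (it assumes the three matrices really are dead after each iteration; the value of N and the matrix contents are stand-ins for the real script):

    import gc
    import numpy as np

    N = 150  # stand-in for the script's 'number of executions'

    for i in xrange(N):
        M = np.zeros((300, 300))
        M_v = np.zeros((300, 300))
        M_h = np.zeros((300, 300))
        # ... run the algorithm on M, M_v and M_h ...
        # Each 300x300 float64 matrix holds ~0.7 MB; deleting the names
        # lets that memory be reused, and gc.collect() additionally
        # breaks reference cycles that would otherwise delay the release.
        del M, M_v, M_h
        gc.collect()

If RAM still grows after that, the usual culprit is a container that keeps per-iteration results alive, or references held by an interactive session (e.g. Sage's output history).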
From yw5aj at virginia.edu Tue Aug 20 01:56:43 2013
From: yw5aj at virginia.edu (Yuxiang Wang)
Date: Tue, 20 Aug 2013 05:56:43 -0000
Subject: [SciPy-User] May I join this list?
Message-ID: 

Thank you so much!

-- 
Yuxiang "Shawn" Wang
Gerling Research Lab
University of Virginia
yw5aj at virginia.edu
+1 (434) 284-0836

From chris.barker at noaa.gov Wed Aug 21 12:10:07 2013
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Wed, 21 Aug 2013 16:10:07 -0000
Subject: [SciPy-User] [Image-SIG] possible bug in OpenCV python module (cv2)
In-Reply-To: 
References: 
Message-ID: 

On Wed, Aug 21, 2013 at 7:00 AM, ha wrote:
> Is this the right place to discuss such problems?

Not really -- this list is for more general image processing tools with Python, mostly the PIL. There must be an OpenCV list somewhere. If not, I'd try the scipy list; I expect there are users on there.

-CHB

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chris.Barker at noaa.gov

From wangyuxiang.uva at gmail.com Thu Aug 22 10:44:46 2013
From: wangyuxiang.uva at gmail.com (Yuxiang Wang)
Date: Thu, 22 Aug 2013 14:44:46 -0000
Subject: [SciPy-User] Cannot subscribe to mailing lists
Message-ID: <1bf4e1ec-163c-4002-a857-5d1eb172e9c1@googlegroups.com>

Hi all,

I tried to subscribe to the mailing lists on this page: http://www.scipy.org/scipylib/mailing-lists.html

However, I think the links are broken. Would anyone have an idea of where to report this? Thanks!

-Shawn
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From joseluismietta at yahoo.com.ar Fri Aug 23 09:53:35 2013
From: joseluismietta at yahoo.com.ar (Josè Luis Mietta)
Date: Fri, 23 Aug 2013 13:53:35 -0000
Subject: [SciPy-User] RAM problem during code execution - Numpy arrays
In-Reply-To: <1377266281.65599.YahooMailNeo@web142304.mail.bf1.yahoo.com>
References: <1377266281.65599.YahooMailNeo@web142304.mail.bf1.yahoo.com>
Message-ID: <1377266329.36562.YahooMailNeo@web142306.mail.bf1.yahoo.com>

Hi experts. I need your help with a RAM problem during the execution of my script. I wrote the code below. I use SAGE. After 1-2 hours of execution time the RAM of my laptop (8 GB) is full and the system crashes:

    from scipy.stats import uniform
    import numpy as np
    import time

    cant_de_cadenas = [700, 800, 900]
    cantidad_de_cadenas = np.array([])
    for kkkkk in cant_de_cadenas:
        cantidad_de_cadenas = np.append(cantidad_de_cadenas, kkkkk)
    cantidad_de_cadenas = np.transpose(cantidad_de_cadenas)

    b = 10
    h = b
    Longitud = 1
    numero_experimentos = 150
    densidad_de_cadenas = cantidad_de_cadenas/(b**2)

    prob_perc = np.array([])
    tiempos = np.array([])
    S_int = np.array([])
    S_medio = np.array([])
    desviacion_standard = np.array([])
    desviacion_standard_nuevo = np.array([])
    anisotropia_macroscopica_porcentual = np.array([])
    componente_y = np.array([])
    componente_x = np.array([])

    for N in cant_de_cadenas:
        empieza = time.clock()
        PERCOLACION = np.array([])
        size_medio_intuitivo = np.array([])
        size_medio_nuevo = np.array([])
        std_dev_size_medio_intuitivo = np.array([])
        std_dev_size_medio_nuevo = np.array([])
        comp_y = np.array([])
        comp_x = np.array([])
        for u in xrange(numero_experimentos):
            perco = False
            array_x1 = uniform.rvs(loc=-b/2, scale=b, size=N)
            array_y1 = uniform.rvs(loc=-h/2, scale=h, size=N)
            array_angle = uniform.rvs(loc=-0.5*(np.pi), scale=np.pi, size=N)
            array_pendiente_x = 1./np.tan(array_angle)
            random = uniform.rvs(loc=-1, scale=2, size=N)
            lambda_sign = np.zeros([N])
            for t in xrange(N):
                if random[t] < 0:
                    lambda_sign[t] = -1
                else:
                    lambda_sign[t] = 1
            array_lambdas = (lambda_sign*Longitud)/np.sqrt(1 + array_pendiente_x**2)
            array_x2 = array_x1 + array_lambdas*array_pendiente_x
            array_y2 = array_y1 + array_lambdas*1
            array_x1 = np.append(array_x1, [-b/2, b/2, -b/2, -b/2])
            array_y1 = np.append(array_y1, [-h/2, -h/2, -h/2, h/2])
            array_x2 = np.append(array_x2, [-b/2, b/2, b/2, b/2])
            array_y2 = np.append(array_y2, [h/2, h/2, -h/2, h/2])
            M = np.zeros([N+4, N+4])
            for j in xrange(N+4):
                if j > 0:
                    x_A1B1 = array_x2[j] - array_x1[j]
                    y_A1B1 = array_y2[j] - array_y1[j]
                    x_A1A2 = array_x1[0:j] - array_x1[j]
                    y_A1A2 = array_y1[0:j] - array_y1[j]
                    x_A2A1 = -1*x_A1A2
                    y_A2A1 = -1*y_A1A2
                    x_A2B2 = array_x2[0:j] - array_x1[0:j]
                    y_A2B2 = array_y2[0:j] - array_y1[0:j]
                    x_A1B2 = array_x2[0:j] - array_x1[j]
                    y_A1B2 = array_y2[0:j] - array_y1[j]
                    x_A2B1 = array_x2[j] - array_x1[0:j]
                    y_A2B1 = array_y2[j] - array_y1[0:j]
                    p1 = x_A1B1*y_A1A2 - y_A1B1*x_A1A2
                    p2 = x_A1B1*y_A1B2 - y_A1B1*x_A1B2
                    p3 = x_A2B2*y_A2B1 - y_A2B2*x_A2B1
                    p4 = x_A2B2*y_A2A1 - y_A2B2*x_A2A1
                    condicion_1 = p1*p2
                    condicion_2 = p3*p4
                    for k in xrange(j):
                        if condicion_1[k] <= 0 and condicion_2[k] <= 0:
                            M[j, k] = 1
                    del condicion_1
                    del condicion_2
                if j+1
    [message truncated in the archive]

From joseluismietta at yahoo.com.ar Fri Aug 23 18:04:53 2013
From: joseluismietta at yahoo.com.ar (Josè Luis Mietta)
Date: Fri, 23 Aug 2013 22:04:53 -0000
Subject: [SciPy-User] Stick (line segments) percolation algorithm - graph theory?
In-Reply-To: <1377295771.604.YahooMailNeo@web142305.mail.bf1.yahoo.com>
References: <1377295771.604.YahooMailNeo@web142305.mail.bf1.yahoo.com>
Message-ID: <1377295813.56600.YahooMailNeo@web142306.mail.bf1.yahoo.com>

Hi experts! I wrote an algorithm to study stick percolation (i.e. networks of line segments that intersect each other). In my algorithm N sticks (line segments) are created inside a rectangular box of sides 'b' and 'h' and then, one by one, the algorithm explores the intersections between all line segments. This is a Monte Carlo simulation, so the 'experiment' is executed many times (no fewer than 100 times). Written like this, it consumes a lot of RAM:

    array_x1 = uniform.rvs(loc=-b/2, scale=b, size=N)
    array_y1 = uniform.rvs(loc=-h/2, scale=h, size=N)
    array_x2 = uniform.rvs(loc=-b/2, scale=b, size=N)
    array_y2 = uniform.rvs(loc=-h/2, scale=h, size=N)

    M = np.zeros([N, N])

    for u in xrange(100):  # ----> this '100' is the number of experiments
        for j in xrange(N):
            if j > 0:
                x_A1B1 = array_x2[j] - array_x1[j]
                y_A1B1 = array_y2[j] - array_y1[j]
                x_A1A2 = array_x1[0:j] - array_x1[j]
                y_A1A2 = array_y1[0:j] - array_y1[j]
                x_A2A1 = -1*x_A1A2
                y_A2A1 = -1*y_A1A2
                x_A2B2 = array_x2[0:j] - array_x1[0:j]
                y_A2B2 = array_y2[0:j] - array_y1[0:j]
                x_A1B2 = array_x2[0:j] - array_x1[j]
                y_A1B2 = array_y2[0:j] - array_y1[j]
                x_A2B1 = array_x2[j] - array_x1[0:j]
                y_A2B1 = array_y2[j] - array_y1[0:j]
                p1 = x_A1B1*y_A1A2 - y_A1B1*x_A1A2
                p2 = x_A1B1*y_A1B2 - y_A1B1*x_A1B2
                p3 = x_A2B2*y_A2B1 - y_A2B2*x_A2B1
                p4 = x_A2B2*y_A2A1 - y_A2B2*x_A2A1
                condicion_1 = p1*p2
                condicion_2 = p3*p4
                for k in xrange(j):
                    if condicion_1[k] <= 0 and condicion_2[k] <= 0:
                        M[j, k] = 1
            if j+1
    [message truncated in the archive]
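Since the subject line asks about graph theory: the cluster search itself can be handed to scipy.sparse.csgraph, which runs in compiled code and is far lighter on RAM than nested Python loops. A minimal sketch, with a tiny stand-in for the 0/1 intersection matrix M built above:

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    # Stand-in intersection matrix: 5 sticks, where stick 1 crosses
    # stick 0 and stick 3 crosses stick 2 (two clusters plus a loner).
    M = np.array([[0, 0, 0, 0, 0],
                  [1, 0, 0, 0, 0],
                  [0, 0, 0, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 0, 0, 0]])

    # Sticks are nodes and each intersection M[j, k] = 1 is an edge;
    # directed=False treats the lower-triangular M as symmetric.
    n_clusters, labels = connected_components(csr_matrix(M), directed=False)

    sizes = np.bincount(labels)  # cluster sizes, for mean-size statistics

    # In the variant from the previous message, which appends the four box
    # walls as segments N..N+3 (left wall first, then right), left-to-right
    # percolation would mean: labels[N] == labels[N + 1].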
From hilboll at uni-bremen.de Thu Aug 29 11:48:31 2013
From: hilboll at uni-bremen.de (Andreas Hilboll)
Date: Thu, 29 Aug 2013 15:48:31 -0000
Subject: [SciPy-User] Akima interpolation?
Message-ID: <521F6E90.1080406@uni-bremen.de>

I'm looking for 1d Akima interpolation, and couldn't find it in Scipy.

1.) Did I miss something?
2.) Is there interest in having Akima interpolation in Scipy?
3.) Does anyone know of a good, fast implementation I should investigate for inclusion in Scipy?

Cheers, Andreas.

-- 
Andreas Hilboll
PhD Student

Institute of Environmental Physics
University of Bremen
U3145, Otto-Hahn-Allee 1
D-28359 Bremen
Germany

+49(0)421 218 62133 (phone)
+49(0)421 218 98 62133 (fax)
http://www.iup.uni-bremen.de/~hilboll
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4640 bytes
Desc: S/MIME Kryptografische Unterschrift
URL: 
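A minimal usage sketch of the interface such an interpolator would need, modeled on the Akima1DInterpolator class that ships in later SciPy releases (0.14+), so it is not available in the versions discussed in this thread:

    import numpy as np
    from scipy.interpolate import Akima1DInterpolator  # SciPy >= 0.14

    x = np.arange(10.0)       # breakpoints must be strictly increasing
    y = np.sin(x)
    akima = Akima1DInterpolator(x, y)

    x_new = np.linspace(0.0, 9.0, 41)
    y_new = akima(x_new)      # piecewise cubic; resists the overshoot
                              # that plain cubic splines show near outliers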
From gabe.gaster at gmail.com Thu Aug 29 19:06:35 2013
From: gabe.gaster at gmail.com (Gabriel Gaster)
Date: Thu, 29 Aug 2013 23:06:35 -0000
Subject: [SciPy-User] __sizeof__ disagrees with .nbytes
Message-ID: <99DB777F76514A3B9B69636E627EF348@gmail.com>

Run the following code:

    from scipy import sparse
    x = sparse.rand(100, 10, .2)
    print x.__sizeof__()       ## 32 or 16, depending
    print x.data.__sizeof__()  ## 80 or 40, depending
    print x.data.nbytes        ## 1600

    import numpy
    print numpy.arange(200).__sizeof__()  ## 80 or 40, depending
    print numpy.arange(200).nbytes        ## 1600

1. Why is nbytes different than __sizeof__? I believe nbytes and do not believe __sizeof__. __sizeof__ should just be rewritten to default to nbytes.

2. It seems that the __sizeof__ methods need to be updated for numpy.array AND for scipy.sparse. I'm happy to do this -- but before I do, I wanted to sanity check with the list.

Thanks.

Gabe
-- 
gabe
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
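The mismatch can be seen with sys.getsizeof, which calls __sizeof__: for an ndarray it reports (roughly) only the Python object struct, not the data buffer, while nbytes reports the buffer itself. A short sketch (exact struct sizes vary with platform and pointer width):

    import sys
    import numpy

    a = numpy.arange(200)    # 200 items of the default integer dtype
    print a.nbytes           # 1600 on a 64-bit build: 200 * a.itemsize
    print sys.getsizeof(a)   # small: only the object header is counted,
                             # not the 1600-byte data buffer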
From ljetibo at gmail.com Fri Aug 30 14:19:25 2013
From: ljetibo at gmail.com (Dino Bektešević)
Date: Fri, 30 Aug 2013 18:19:25 -0000
Subject: [SciPy-User] Scipy installation seemingly went ok, but some modules missing
Message-ID: 

Hello,

After a little mishap with Ubuntu 12.04, after which I reinstalled the OS, I installed the entire scipy stack:

>sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose

as per the instructions on scipy.org. Running numpy.test() returns OK (knownfail=3 skip=4), but running scipy.test() returns 'failed', and it seems the ndimage module wasn't even installed -- at the very least it doesn't appear as a module in IDLE's autocomplete, and I kind of need it.
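Two things are worth checking here, as a quick diagnostic sketch: which SciPy installation Python actually imports, and the fact that SciPy subpackages are not loaded by a bare 'import scipy', so ndimage will not show up in IDLE's autocomplete until it is imported explicitly:

    import scipy
    print scipy.__version__   # which release is really in use
    print scipy.__file__      # and which directory it is imported from

    import scipy.ndimage      # subpackages need an explicit import
    print scipy.ndimage.__file__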
Using apt-get again doesn't detect any new updates:

>sudo apt-get install python-scipy
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Neither does pip manage to update the library. (I understand this is a lot of text, but I don't know what is important and what isn't. The results of scipy.test() are given below, so if this is not important just scroll down below the > signs.)

>Downloading/unpacking scipy
>  Downloading scipy-0.12.0.zip (10.2Mb): 10.2Mb downloaded
>  Running setup.py egg_info for package scipy
>    Running from scipy source directory.
>    blas_opt_info:
>    blas_mkl_info:
>      libraries mkl,vml,guide not found in /usr/local/lib
>      libraries mkl,vml,guide not found in /usr/lib
>      libraries mkl,vml,guide not found in /usr/lib/x86_64-linux-gnu
>      NOT AVAILABLE
>
>    atlas_blas_threads_info:
>    Setting PTATLAS=ATLAS
>      libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
>      libraries ptf77blas,ptcblas,atlas not found in /usr/lib
>      libraries ptf77blas,ptcblas,atlas not found in /usr/lib/x86_64-linux-gnu
>      NOT AVAILABLE
>
>    atlas_blas_info:
>      libraries f77blas,cblas,atlas not found in /usr/local/lib
>      libraries f77blas,cblas,atlas not found in /usr/lib
>      libraries f77blas,cblas,atlas not found in /usr/lib/x86_64-linux-gnu
>      NOT AVAILABLE
>
>    /usr/lib/python2.7/dist-packages/numpy/distutils/system_info.py:1423: UserWarning:
>      Atlas (http://math-atlas.sourceforge.net/) libraries not found.
>      Directories to search for the libraries can be specified in the
>      numpy/distutils/site.cfg file (section [atlas]) or by setting
>      the ATLAS environment variable.
>      warnings.warn(AtlasNotFoundError.__doc__)
>    blas_info:
>      libraries blas not found in /usr/local/lib
>      libraries blas not found in /usr/lib
>      libraries blas not found in /usr/lib/x86_64-linux-gnu
>      NOT AVAILABLE
>
>    /usr/lib/python2.7/dist-packages/numpy/distutils/system_info.py:1432: UserWarning:
>      Blas (http://www.netlib.org/blas/) libraries not found.
>      Directories to search for the libraries can be specified in the
>      numpy/distutils/site.cfg file (section [blas]) or by setting
>      the BLAS environment variable.
>      warnings.warn(BlasNotFoundError.__doc__)
>    blas_src_info:
>      NOT AVAILABLE
>
>    /usr/lib/python2.7/dist-packages/numpy/distutils/system_info.py:1435: UserWarning:
>      Blas (http://www.netlib.org/blas/) sources not found.
>      Directories to search for the sources can be specified in the
>      numpy/distutils/site.cfg file (section [blas_src]) or by setting
>      the BLAS_SRC environment variable.
>      warnings.warn(BlasSrcNotFoundError.__doc__)
>    Traceback (most recent call last):
>      File "", line 14, in
>      File "/home/dino/build/scipy/setup.py", line 165, in
>        setup_package()
>      File "/home/dino/build/scipy/setup.py", line 161, in setup_package
>        configuration=configuration)
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/core.py", line 152, in setup
>        config = configuration()
>      File "/home/dino/build/scipy/setup.py", line 136, in configuration
>        config.add_subpackage('scipy')
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/misc_util.py", line 1002, in add_subpackage
>        caller_level = 2)
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/misc_util.py", line 971, in get_subpackage
>        caller_level = caller_level + 1)
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/misc_util.py", line 908, in _get_configuration_from_setup_py
>        config = setup_module.configuration(*args)
>      File "scipy/setup.py", line 9, in configuration
>        config.add_subpackage('integrate')
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/misc_util.py", line 1002, in add_subpackage
>        caller_level = 2)
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/misc_util.py", line 971, in get_subpackage
>        caller_level = caller_level + 1)
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/misc_util.py", line 908, in _get_configuration_from_setup_py
>        config = setup_module.configuration(*args)
>      File "scipy/integrate/setup.py", line 11, in configuration
>        blas_opt = get_info('blas_opt',notfound_action=2)
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/system_info.py", line 320, in get_info
>        return cl().get_info(notfound_action)
>      File "/usr/lib/python2.7/dist-packages/numpy/distutils/system_info.py", line 471, in get_info
>        raise self.notfounderror(self.notfounderror.__doc__)
>    numpy.distutils.system_info.BlasNotFoundError:
>      Blas (http://www.netlib.org/blas/) libraries not found.
>      Directories to search for the libraries can be specified in the
>      numpy/distutils/site.cfg file (section [blas]) or by setting
>      the BLAS environment variable.
>    Complete output from command python setup.py egg_info:
>    [the same output, from "Running from scipy source directory." through the
>    BlasNotFoundError traceback, is repeated here verbatim with an extra level
>    of > quoting]
>----------------------------------------
>Command python setup.py egg_info failed with error code 1
>Storing complete log in /home/dino/.pip/pip.log

HERE ARE THE RESULTS OF SCIPY.TEST():
________________________________________________________________
________________________________________________________________

Python 2.7.3 (default, Apr 10 2013, 06:20:15)
[GCC 4.6.3] on linux2
Type "copyright", "credits" or "license()" for more information.
>>> import scipy
>>> scipy.test()
Running unit tests for scipy
NumPy version 1.6.1
NumPy is installed in /usr/lib/python2.7/dist-packages/numpy
SciPy version 0.9.0
SciPy is installed in /usr/lib/python2.7/dist-packages/scipy
Python version 2.7.3 (default, Apr 10 2013, 06:20:15) [GCC 4.6.3]
nose version 1.1.2
[several thousand test-progress characters ('.', 'K', 'S', 'F') omitted]

Warning (from warnings module):
  File "/usr/lib/python2.7/dist-packages/scipy/special/tests/testutils.py", line 196
    diff = np.absolute(x - y)
RuntimeWarning: invalid value encountered in subtract
....
Warning (from warnings module):
  File "/usr/lib/python2.7/dist-packages/scipy/special/tests/testutils.py", line 199
    rdiff = diff / np.absolute(y)
RuntimeWarning: invalid value encountered in divide
[more test-progress characters omitted]

Warning (from warnings module):
  File "/usr/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 3955
    vals = 2*(bt+1.0)*sqrt(b-2.0)/((b-3.0)*sqrt(b))
RuntimeWarning: invalid value encountered in sqrt
[more test-progress characters omitted]

======================================================================
FAIL: test_io.test_imread
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/scipy/ndimage/tests/test_io.py", line 16, in test_imread
    assert_array_equal(img.shape, (300, 420, 3))
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 707, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 600, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not equal

(shapes (2,), (3,) mismatch)
 x: array([300, 420])
 y: array([300, 420, 3])

======================================================================
FAIL: test_expon (test_morestats.TestAnderson)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/scipy/stats/tests/test_morestats.py", line 72, in test_expon
    assert_array_less(crit[:-1], A)
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 869, in assert_array_less
    header='Arrays are not less-ordered')
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 613, in assert_array_compare
    chk_same_position(x_id, y_id, hasval='inf')
  File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 588, in chk_same_position
    raise AssertionError(msg)
AssertionError:
Arrays are not less-ordered

x and y inf location mismatch:
 x: array([ 0.911,  1.065,  1.325,  1.587])
 y: array(inf)
----------------------------------------------------------------------
Ran 3780 tests in 57.542s

FAILED (KNOWNFAIL=11, SKIP=13, failures=2)
>>>

Help is greatly appreciated,
Dino Bektešević