From david at ar.media.kyoto-u.ac.jp Mon Jul 2 00:45:14 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 02 Jul 2007 13:45:14 +0900
Subject: [SciPy-dev] [scikits] setuptools, tests and subpackages
Message-ID: <468882DA.6040602@ar.media.kyoto-u.ac.jp>

Hi,

    I would like to know if anyone knowledgeable about setuptools knows the best way to tell setuptools how to call unittests in a package containing subpackages. In a scipy package, having a test function in __init__ is enough to get the unittests added to the whole scipy test suite, but with setuptools, how do I do that?

cheers,

David

From robert.kern at gmail.com Mon Jul 2 12:21:43 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 02 Jul 2007 11:21:43 -0500
Subject: [SciPy-dev] [scikits] setuptools, tests and subpackages
In-Reply-To: <468882DA.6040602@ar.media.kyoto-u.ac.jp>
References: <468882DA.6040602@ar.media.kyoto-u.ac.jp>
Message-ID: <46892617.6050709@gmail.com>

David Cournapeau wrote:
> Hi,
>
> I would like to know if anyone knowledgeable about setuptools knows
> the best way to tell setuptools how to call unittests in a package
> containing subpackages. In a scipy package, having a test function in
> __init__ is enough to get the unittests added to the whole scipy test
> suite, but with setuptools, how do I do that?

http://peak.telecommunity.com/DevCenter/setuptools#test-build-package-and-run-a-unittest-suite

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From niels.ellegaard at gmail.com Mon Jul 2 15:31:20 2007
From: niels.ellegaard at gmail.com (Niels L. Ellegaard)
Date: Mon, 02 Jul 2007 21:31:20 +0200
Subject: [SciPy-dev] equivalent to online Octave calculator?
References: <467F89F8.4070202@ukr.net>
Message-ID: <87wsxizjp3.fsf@gmail.com>

dmitrey writes:
> does the numpy/scipy project have something equivalent to the online Octave calculator?
> http://www.online-utility.org/math/math_calculator.jsp

I found a webpage that allows you to try Python online, but it doesn't provide scipy.

http://www.mired.org/home/mwm/try_python/

Niels

From david at ar.media.kyoto-u.ac.jp Tue Jul 3 02:16:38 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 03 Jul 2007 15:16:38 +0900
Subject: [SciPy-dev] Implementing a distance matrix between two sets of vectors concept
Message-ID: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp>

Hi,

    for my machine learning toolbox, I need the concept of a distance matrix: that is, for two sets of vectors u and v (N vectors u and M vectors v) of dimension d, I want to compute the matrix D such that D(i,j) = distance(v_i, u_j). This is easy to do in numpy, but for big datasets this becomes difficult without a significant loss of efficiency or large memory consumption.
    So I am thinking about implementing it in C. I think the overall concept is useful for other people, so before implementing something, I was wondering if other people would need/use it, and what they would need:
    - several distances (Euclidean, Mahalanobis, etc.), each handled by a separate object with its own set of parameters.
    - a C API?
    - which datatypes? Layout? Contiguity?
    - handling NaN?
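For context while reading the thread, a minimal pure-numpy sketch of the Euclidean case (illustrative only -- the function name, the chunk size, and the API are not part of David's proposal). It processes one set of vectors in blocks so that peak temporary storage stays bounded instead of scaling with N*M*d, the same blocking idea Anne Archibald suggests later in this digest:

import numpy as np

def euclidean_distance_matrix(u, v, chunk=256):
    """Distance matrix D with D[i, j] = ||u[i] - v[j]||.

    u is (N, d), v is (M, d); the result is (N, M).  Rows of u are
    processed `chunk` at a time, so peak temporary storage is
    O(chunk * M) rather than O(N * M * d).
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    N = u.shape[0]
    vv = (v * v).sum(axis=1)                 # squared norms of v, shape (M,)
    D = np.empty((N, v.shape[0]))
    for start in range(0, N, chunk):
        block = u[start:start + chunk]       # (<=chunk, d) rows of u
        uu = (block * block).sum(axis=1)[:, np.newaxis]
        # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
        sq = uu - 2.0 * np.dot(block, v.T) + vv
        np.clip(sq, 0.0, np.inf, out=sq)     # guard tiny negative rounding errors
        D[start:start + chunk] = np.sqrt(sq)
    return D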
cheers, David From wbaxter at gmail.com Tue Jul 3 05:36:16 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 3 Jul 2007 18:36:16 +0900 Subject: [SciPy-dev] Implementing a distance matrix between two sets of vectors concept In-Reply-To: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp> References: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp> Message-ID: I would use it. I only need Euclidean distance, Python API only is ok. Data-types: float and double would do it for me. Double only if it's too much effort to do both. Order -- all combos of F and C both would be nice, but not critical Strides -- with strides better than without, but not critical nan -- don't need it. --bb On 7/3/07, David Cournapeau wrote: > > Hi, > > for my machine learning toolbox, I need the concept of distance > matrix, that is for two sets of vectors v and u (N u and M v), of > dimension d, I want to compute the matrix D such as d(i,j) = > distance(v_i, u_j). This is easy to do in numpy, but for big datasets, > this becomes difficult without a significance loss of efficiency or big > memory consumption. > So I am thinking about implementing it in C. I think the overall > concept is useful for other people, so before implementing something, I > was wondering if other people would need/use it, and what would they need: > - several distance (Euclidian, Mahalanobis, etc...), which would be > a separate object to handle different sets of parameters. > - C Api ? > - datatypes ? Layout ? Contiguity ? > - handling Nan ? > > cheers, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chanley at stsci.edu Tue Jul 3 12:25:55 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Tue, 03 Jul 2007 12:25:55 -0400 Subject: [SciPy-dev] ndimage problems Message-ID: <468A7893.1050600@stsci.edu> Hi, We have found two problems with ndimage. I have filed a ticket #455 on the scipy trac page. The first problem can be seen with this example: > import numpy as n > from scipy import ndimage as nd > a = n.ones((10,5),dtype=n.float32) * 12.3 > x = nd.rotate(a,90.0) > x Out[17]: array([[ 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 0. , 0. ], [ 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019], [ 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019], [ 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019], [ 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019, 12.30000019]], dtype=float32) }}} Notice that the last two entries of the first row are now 0. The second problem has to do with the reversing of byte order if you have big-endian data on a little endian machine. Please see the example below: >>> a = N.ones ((2,3), dtype=N.float32) * 12.3 >>> a = a.byteswap() >>> a.dtype = a.dtype.newbyteorder (">") >>> print a [[ 12.30000019 12.30000019 12.30000019] [ 12.30000019 12.30000019 12.30000019]] >>> print ndimage.rotate (a, 90.) 
[[ 0.00000000e+00 -4.28378144e+08] [ 0.00000000e+00 -4.28378144e+08] [ -4.28378144e+08 -4.28378144e+08]] I have taken a look at the ndimage python code and cannot find any explicit calls to byteswap. I'm guessing something is funny in one of the c-api calls. I haven't been able to track it down yet. Chris From peter.skomoroch at gmail.com Tue Jul 3 14:10:02 2007 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Tue, 3 Jul 2007 14:10:02 -0400 Subject: [SciPy-dev] Implementing a distance matrix between two sets of vectors concept In-Reply-To: References: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp> Message-ID: I've rolled my own in the past. If the vectors are really large and you are holding a collection of them, you probably want to use a sparse matrix data structure in either numpy or C. On 7/3/07, Bill Baxter wrote: > > I would use it. > > I only need Euclidean distance, > Python API only is ok. > Data-types: float and double would do it for me. Double only if it's too > much effort to do both. > Order -- all combos of F and C both would be nice, but not critical > Strides -- with strides better than without, but not critical > nan -- don't need it. > > --bb > > On 7/3/07, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > > > > Hi, > > > > for my machine learning toolbox, I need the concept of distance > > matrix, that is for two sets of vectors v and u (N u and M v), of > > dimension d, I want to compute the matrix D such as d(i,j) = > > distance(v_i, u_j). This is easy to do in numpy, but for big datasets, > > this becomes difficult without a significance loss of efficiency or big > > memory consumption. > > So I am thinking about implementing it in C. I think the overall > > concept is useful for other people, so before implementing something, I > > was wondering if other people would need/use it, and what would they > > need: > > - several distance (Euclidian, Mahalanobis, etc...), which would be > > a separate object to handle different sets of parameters. > > - C Api ? > > - datatypes ? Layout ? Contiguity ? > > - handling Nan ? > > > > cheers, > > > > David > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > -- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From l.mastrodomenico at gmail.com Tue Jul 3 19:22:46 2007 From: l.mastrodomenico at gmail.com (Lino Mastrodomenico) Date: Wed, 4 Jul 2007 01:22:46 +0200 Subject: [SciPy-dev] PEP 368: Standard image protocol and class Message-ID: [Sorry for the cross-posting, but I think this may be relevant for both NumPy and ndimage.] Hello everyone, I have submitted to the Python core developers a new PEP (Python Enhancement Proposal): http://www.python.org/dev/peps/pep-0368/ It proposes two things: * the creation of a standard image protocol/interface that can be hopefully implemented interoperably by most Python libraries that manipulate images; * the addition to the Python standard library of a basic implementation of the new protocol. The new image protocol is heavily inspired by a subset of the NumPy array interface, with a few image-specific additions and changes (e.g. the "size" attribute of an image is a tuple (width, height)). 
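To give a feel for the proposal, here is a toy object in the spirit of the protocol, together with the kind of numpy interoperability it is meant to enable. The attribute names below are illustrative guesses, not quoted from the PEP text:

import array
import numpy as np

class TinyImage(object):
    """Toy image object; attribute names are illustrative only."""
    def __init__(self, width, height):
        self.mode = "L"                      # single-band 8-bit greyscale
        self.size = (width, height)          # image convention: (width, height)
        self.buffer = array.array("B", [0] * (width * height))

img = TinyImage(640, 480)
# numpy thinks in (rows, cols) = (height, width), hence the swap below.
a = np.frombuffer(img.buffer, dtype=np.uint8).reshape(img.size[1], img.size[0])
assert a.shape == (480, 640)

The (width, height) vs. (rows, cols) swap is exactly the kind of image-specific deviation from the plain array interface that the previous paragraph describes.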
Of course it would be wonderful if these new image objects could interoperate out-of-the-box with numpy arrays and ndimage functions. There is another proposal that would be very useful for that, PEP 3118 by Travis Oliphant and Carl Banks: http://www.python.org/dev/peps/pep-3118/ The image PEP (368) currently lists only modes based on uint8/16/32 numbers, but the final version will probably also include modes based on float32 and float16 (converted in software to/from float32/64 when necessary). A discussion about it is currently going on in the python-3000 mailing list: Any suggestion, comment or criticism from the NumPy/SciPy people would be very useful, but IMHO keeping the discussion only on the python-3000 ML may be a good idea, to avoid duplicating answers on different mailing lists. Thanks in advance. -- Lino Mastrodomenico E-mail: l.mastrodomenico at gmail.com From wbaxter at gmail.com Tue Jul 3 21:24:19 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 4 Jul 2007 10:24:19 +0900 Subject: [SciPy-dev] PEP 368: Standard image protocol and class In-Reply-To: References: Message-ID: I'm not subscribed to the main Python list, so I'll just ask here. It looks like the protocol doesn't support any floating point image formats, judging from the big table of formats in the PEP. These are becoming more important these days in computer graphics as a way to pass around high dynamic range images. OpenEXR is the main example of such a format: http://www.openexr.com/. I think a PEP that aims to be a generic image protocol should support at least 32 bit floats if not 64-bit doubles and 16 bit "Half"s used by some GPUs (and supported by the OpenEXR format). ---bb On 7/4/07, Lino Mastrodomenico wrote: > > [Sorry for the cross-posting, but I think this may be relevant for > both NumPy and ndimage.] > > Hello everyone, > > I have submitted to the Python core developers a new PEP (Python > Enhancement Proposal): > > http://www.python.org/dev/peps/pep-0368/ > > It proposes two things: > > * the creation of a standard image protocol/interface that can be > hopefully implemented interoperably by most Python libraries that > manipulate images; > > * the addition to the Python standard library of a basic > implementation of the new protocol. > > The new image protocol is heavily inspired by a subset of the NumPy > array interface, with a few image-specific additions and changes (e.g. > the "size" attribute of an image is a tuple (width, height)). > > Of course it would be wonderful if these new image objects could > interoperate out-of-the-box with numpy arrays and ndimage functions. > There is another proposal that would be very useful for that, PEP 3118 > by Travis Oliphant and Carl Banks: > > http://www.python.org/dev/peps/pep-3118/ > > The image PEP (368) currently lists only modes based on uint8/16/32 > numbers, but the final version will probably also include modes based > on float32 and float16 (converted in software to/from float32/64 when > necessary). > > A discussion about it is currently going on in the python-3000 mailing > list: > > > > Any suggestion, comment or criticism from the NumPy/SciPy people would > be very useful, but IMHO keeping the discussion only on the > python-3000 ML may be a good idea, to avoid duplicating answers on > different mailing lists. > > Thanks in advance. 
> > -- > Lino Mastrodomenico > E-mail: l.mastrodomenico at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Jul 4 03:52:02 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 04 Jul 2007 16:52:02 +0900 Subject: [SciPy-dev] Implementing a distance matrix between two sets of vectors concept In-Reply-To: References: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp> Message-ID: <468B51A2.4010306@ar.media.kyoto-u.ac.jp> Peter Skomoroch wrote: > I've rolled my own in the past. If the vectors are really large and > you are holding a collection of them, you probably want to use a > sparse matrix data structure in either numpy or C. Mmm, not sure to understand what you mean. The problem is that you have {u_1, ... , u_N} and {v_1, ..., v_M} vectors, and you want the distance for any possible combination {u_i, v_j}, which is a real (eg the actual size of the matrix in memory does not depends on the dimension of the data, only on N and M). I don't see how sparsity can help help here ? David From openopt at ukr.net Wed Jul 4 04:14:48 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 04 Jul 2007 11:14:48 +0300 Subject: [SciPy-dev] numpy.cumproduct() documentation: bug? Message-ID: <468B56F8.6050708@ukr.net> help(cumproduct) says: Help on function cumproduct in module numpy.core.fromnumeric: cumproduct(x, axis=None, dtype=None, out=None) Sum the array over the given axis. however, cumproduct([1,2,3,4]) yields array([ 1, 2, 6, 24]) Btw, what is the difference between cumproduct and cumprod, prod and product? also, I think it would be nice to add cumsum, cumprod, diff to http://www.scipy.org/NumPy_for_Matlab_Users page D. P.S. doc for numpy.i0(x) and numpy.require() is missing. All the data above has been taken from http://www.scipy.org/Numpy_Example_List_With_Doc From matthieu.brucher at gmail.com Wed Jul 4 04:16:29 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 4 Jul 2007 10:16:29 +0200 Subject: [SciPy-dev] numpy.cumproduct() documentation: bug? In-Reply-To: <468B56F8.6050708@ukr.net> References: <468B56F8.6050708@ukr.net> Message-ID: Hi, For those discussion on numpy, you should send the mail to a numpy ML ;) Matthieu 2007/7/4, dmitrey : > > help(cumproduct) says: > Help on function cumproduct in module numpy.core.fromnumeric: > > cumproduct(x, axis=None, dtype=None, out=None) > Sum the array over the given axis. > > however, cumproduct([1,2,3,4]) yields > array([ 1, 2, 6, 24]) > > Btw, what is the difference between cumproduct and cumprod, prod and > product? > > also, I think it would be nice to add cumsum, cumprod, diff to > http://www.scipy.org/NumPy_for_Matlab_Users page > > D. > > P.S. doc for numpy.i0(x) and numpy.require() is missing. > > All the data above has been taken from > http://www.scipy.org/Numpy_Example_List_With_Doc > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peridot.faceted at gmail.com Wed Jul 4 11:54:19 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 4 Jul 2007 11:54:19 -0400 Subject: [SciPy-dev] Implementing a distance matrix between two sets of vectors concept In-Reply-To: <468B51A2.4010306@ar.media.kyoto-u.ac.jp> References: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp> <468B51A2.4010306@ar.media.kyoto-u.ac.jp> Message-ID: On 04/07/07, David Cournapeau wrote: > Mmm, not sure to understand what you mean. The problem is that you have > {u_1, ... , u_N} and {v_1, ..., v_M} vectors, and you want the distance > for any possible combination {u_i, v_j}, which is a real (eg the actual > size of the matrix in memory does not depends on the dimension of the > data, only on N and M). I don't see how sparsity can help help here ? If the problem is that an N by M by R matrix is too big, then it's probably perfectly fine to build the N by M matrix by looping over one axis (so you only construct new N by R matrices, say). Anne From strawman at astraw.com Wed Jul 4 12:51:56 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 04 Jul 2007 09:51:56 -0700 Subject: [SciPy-dev] numpy.cumproduct() documentation: bug? In-Reply-To: <468B56F8.6050708@ukr.net> References: <468B56F8.6050708@ukr.net> Message-ID: <468BD02C.2060009@astraw.com> dmitrey wrote: > also, I think it would be nice to add cumsum, cumprod, diff to > http://www.scipy.org/NumPy_for_Matlab_Users page Go for it! From aisaac at american.edu Wed Jul 4 13:07:34 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 4 Jul 2007 13:07:34 -0400 Subject: [SciPy-dev] numpy.cumproduct() documentation: bug? In-Reply-To: <468BD02C.2060009@astraw.com> References: <468B56F8.6050708@ukr.net><468BD02C.2060009@astraw.com> Message-ID: > dmitrey wrote: >> also, I think it would be nice to add cumsum, cumprod, diff to >> http://www.scipy.org/NumPy_for_Matlab_Users page On Wed, 04 Jul 2007, Andrew Straw apparently wrote: > Go for it! Since this type of interaction is fairly frequent, it may be useful to say a bit more. You do not need permission to edit the SciPy Wiki. Unfortunately new users who click the "edit" icon get this message: You are not allowed to edit this page. Is this configurable? A more helpful and accurate message would be: You need to log in to edit this page. Cheers, Alan Isaac From openopt at ukr.net Wed Jul 4 13:20:45 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 04 Jul 2007 20:20:45 +0300 Subject: [SciPy-dev] numpy.cumproduct() documentation: bug? In-Reply-To: <468BD02C.2060009@astraw.com> References: <468B56F8.6050708@ukr.net> <468BD02C.2060009@astraw.com> Message-ID: <468BD6ED.7050601@ukr.net> I started to modify the page, but I noticed that python cumsum, cumprod, diff behavior differs from MATLAB one. for example, a array([[1, 2], [3, 4], [5, 6]]) cumsum(a) array([ 1, 3, 6, 10, 15, 21]) octave> cumsum([1 2; 3 4; 5 6]) ans = 1 2 4 6 9 12 diff(a) array([[1], [1], [1]]) octave> diff([1 2; 3 4; 5 6]) ans = 2 2 2 2 Also, cumsum missed "out" description: help(cumsum): cumsum(x, axis=None, dtype=None, out=None) Sum the array over the given axis. So let the description in the matlab webpage will be done by someone else, ok? Or just let it be scipped for now. Regards, D. Andrew Straw wrote: > dmitrey wrote: > > >> also, I think it would be nice to add cumsum, cumprod, diff to >> http://www.scipy.org/NumPy_for_Matlab_Users page >> > > Go for it! 
> _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From otto at tronarp.se Wed Jul 4 13:36:24 2007 From: otto at tronarp.se (Otto Tronarp) Date: Wed, 4 Jul 2007 19:36:24 +0200 Subject: [SciPy-dev] numpy.cumproduct() documentation: bug? In-Reply-To: <468BD6ED.7050601@ukr.net> References: <468B56F8.6050708@ukr.net> <468BD02C.2060009@astraw.com> <468BD6ED.7050601@ukr.net> Message-ID: <4682D443-627D-4345-A206-D5CB0FD61697@tronarp.se> You want to specify the axis argument: In [14]: a Out[14]: array([[1, 2], [3, 4], [5, 6]]) In [15]: cumsum(a, axis=0) Out[15]: array([[ 1, 2], [ 4, 6], [ 9, 12]]) In [16]: diff(a, axis=0) Out[16]: array([[2, 2], [2, 2]]) Regards, Otto On Jul 4, 2007, at 7:20 PM, dmitrey wrote: > I started to modify the page, but I noticed that python cumsum, > cumprod, > diff behavior differs from MATLAB one. > for example, > > a > array([[1, 2], > [3, 4], > [5, 6]]) > > cumsum(a) > array([ 1, 3, 6, 10, 15, 21]) > > octave> cumsum([1 2; 3 4; 5 6]) > ans = > > 1 2 > 4 6 > 9 12 > > diff(a) > array([[1], > [1], > [1]]) > > octave> diff([1 2; 3 4; 5 6]) > ans = > > 2 2 > 2 2 > > Also, cumsum missed "out" description: > help(cumsum): > cumsum(x, axis=None, dtype=None, out=None) > Sum the array over the given axis. > > So let the description in the matlab webpage will be done by someone > else, ok? Or just let it be scipped for now. > Regards, D. > > > Andrew Straw wrote: >> dmitrey wrote: >> >> >>> also, I think it would be nice to add cumsum, cumprod, diff to >>> http://www.scipy.org/NumPy_for_Matlab_Users page >>> >> >> Go for it! >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> >> >> >> > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From aisaac at american.edu Wed Jul 4 13:42:48 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 4 Jul 2007 13:42:48 -0400 Subject: [SciPy-dev] numpy.cumproduct() In-Reply-To: <468BD6ED.7050601@ukr.net> References: <468B56F8.6050708@ukr.net> <468BD02C.2060009@astraw.com><468BD6ED.7050601@ukr.net> Message-ID: On Wed, 04 Jul 2007, dmitrey apparently wrote: > I started to modify the page, but I noticed that python cumsum, cumprod, > diff behavior differs from MATLAB one. 
> for example, >>>> a >>>> array([[1, 2], >>>> [3, 4], >>>> [5, 6]]) >>>> cumsum(a) >>>> array([ 1, 3, 6, 10, 15, 21]) > octave> cumsum([1 2; 3 4; 5 6]) > ans = > 1 2 > 4 6 > 9 12 That is just a matter of the axis argument: >>> numpy.cumsum(a, axis=0) array([[ 1, 2], [ 4, 6], [ 9, 12]]) Cheers, Alan Isaac From peter.skomoroch at gmail.com Wed Jul 4 15:17:49 2007 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Wed, 4 Jul 2007 15:17:49 -0400 Subject: [SciPy-dev] Implementing a distance matrix between two sets of vectors concept In-Reply-To: <468B51A2.4010306@ar.media.kyoto-u.ac.jp> References: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp> <468B51A2.4010306@ar.media.kyoto-u.ac.jp> Message-ID: You're right, I was thinking the sparse data structures would help with storing the input vectors themselves during the computation rather than the final matrix (which will need to be 1/2 M*N if the distance is symmetric)...this comes up a lot in collaborative filtering where the dimensionality of the vectors is high, but most of the vector entries are missing. On 7/4/07, David Cournapeau wrote: > > Peter Skomoroch wrote: > > I've rolled my own in the past. If the vectors are really large and > > you are holding a collection of them, you probably want to use a > > sparse matrix data structure in either numpy or C. > Mmm, not sure to understand what you mean. The problem is that you have > {u_1, ... , u_N} and {v_1, ..., v_M} vectors, and you want the distance > for any possible combination {u_i, v_j}, which is a real (eg the actual > size of the matrix in memory does not depends on the dimension of the > data, only on N and M). I don't see how sparsity can help help here ? > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Jul 4 23:38:11 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 05 Jul 2007 12:38:11 +0900 Subject: [SciPy-dev] Implementing a distance matrix between two sets of vectors concept In-Reply-To: References: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp> <468B51A2.4010306@ar.media.kyoto-u.ac.jp> Message-ID: <468C67A3.80306@ar.media.kyoto-u.ac.jp> Peter Skomoroch wrote: > You're right, I was thinking the sparse data structures would help > with storing the input vectors themselves during the computation > rather than the final matrix (which will need to be 1/2 M*N if the > distance is symmetric)...this comes up a lot in collaborative > filtering where the dimensionality of the vectors is high, but most of > the vector entries are missing. Ok, that this basically means supporting sparse input, right ? I have to say that I don't know anything about sparse implementations issues in numpy (or any other language for that matter). I guess that performances mainly depend on the flexibility between matrix representation and data storage. Are sparse arrays directly supported in numpy ? 
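Sparse matrices live in scipy.sparse rather than in numpy itself -- Peter's link in the next message covers this. A minimal sketch of the usual build-then-convert workflow, with sizes chosen arbitrarily for illustration:

import numpy as np
from scipy import sparse

# Build incrementally in LIL format, which has cheap per-element writes...
A = sparse.lil_matrix((1000, 1000))
A[0, 100] = 1.0
A.setdiag(np.ones(1000))

# ...then convert to CSR for fast arithmetic and matrix-vector products.
B = A.tocsr()
x = np.ones(1000)
y = B * x          # sparse matrix-vector product; y is a dense array

This LIL-for-construction, CSR/CSC-for-arithmetic split is the same one that comes up in Nathan Bell's message further down in this digest.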
David From peter.skomoroch at gmail.com Thu Jul 5 00:00:11 2007 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Thu, 5 Jul 2007 00:00:11 -0400 Subject: [SciPy-dev] Implementing a distance matrix between two sets of vectors concept In-Reply-To: <468C67A3.80306@ar.media.kyoto-u.ac.jp> References: <4689E9C6.2000700@ar.media.kyoto-u.ac.jp> <468B51A2.4010306@ar.media.kyoto-u.ac.jp> <468C67A3.80306@ar.media.kyoto-u.ac.jp> Message-ID: The sparse functionality I use is actually in scipy, this page describes how it works: http://www.scipy.org/SciPy_Tutorial#head-d074c4e5a3ef51a7e0456ae966669c7807dee904 On 7/4/07, David Cournapeau wrote: > > Peter Skomoroch wrote: > > You're right, I was thinking the sparse data structures would help > > with storing the input vectors themselves during the computation > > rather than the final matrix (which will need to be 1/2 M*N if the > > distance is symmetric)...this comes up a lot in collaborative > > filtering where the dimensionality of the vectors is high, but most of > > the vector entries are missing. > Ok, that this basically means supporting sparse input, right ? I have to > say that I don't know anything about sparse implementations issues in > numpy (or any other language for that matter). I guess that performances > mainly depend on the flexibility between matrix representation and data > storage. Are sparse arrays directly supported in numpy ? > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Fri Jul 6 08:05:30 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 06 Jul 2007 15:05:30 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <468E300A.4090609@ukr.net> Hi all, today's topics: 1. Automatic gradient checks had been implemented 2. f,c,h patterns: need more time & efforts (I describe the problems here according to Alan Isaac's request) 3. Letters from ALGENCAN developers. 4. My optimization department staff have explained me their own opinion about OpenOpt renaming, that is insisted by Jarrod Millman and Alan Isaac (and maybe some other scipy developers?). 1. This week automatic gradient checks for f->min, c[i](x)<=0, h[j](x)=0 had been implemented. I guess it's done in much more convenient way that MATLAB fmincon do, isn't it? See http://openopt.blogspot.com/2007/07/automatic-gradient-check-ready.html for more details, and/or see the example below: ... 
OpenOpt checks user-supplied gradient df (size: (15,)) according to: prob.diffInt = 1e-07# step for numerical gradient obtaining prob.check.maxViolation = 1e-07# lines where difference is less than the number will not be shown, default 1e-5 df num user-supplied numerical difference 0 +6.000e+00 +6.000e+00 -2.122e-07 2 -1.666e+01 -1.666e+01 +1.048e-06 3 -2.584e+01 -2.584e+01 +1.400e-06 4 -2.046e+01 -2.046e+01 +9.434e-07 5 -5.461e+00 -5.461e+00 +1.512e-06 6 +5.363e+00 +5.363e+00 +4.905e-07 9 -2.458e+01 -2.458e+01 +5.861e-07 10 -2.343e+01 -2.343e+01 +1.162e-06 11 -9.929e+00 -9.929e+00 +1.588e-07 12 +3.502e+00 +3.502e+00 +7.354e-07 14 -7.812e+00 -7.812e+00 -8.602e-07 max(abs(df_user - df_numerical)) = 1.51219384126e-06 (is registered in df number 5) sum(abs(df_user - df_numerical)) = 9.37081698371e-06 ======================== OpenOpt checks user-supplied gradient dc (size: (15, 2)) according to: prob.diffInt = 1e-07 prob.check.maxViolation = 1e-07 dc num i,j:dc[i]/dx[j] user-supplied numerical difference 0 0 / 0 +4.096e+03 +4.096e+03 -4.787e-05 2 1 / 0 +0.000e+00 -3.638e-05 +3.638e-05 3 1 / 1 +8.645e+00 +8.645e+00 -1.301e-07 4 2 / 0 +0.000e+00 -3.638e-05 +3.638e-05 5 2 / 1 -6.658e+00 -6.658e+00 -1.350e-07 6 3 / 0 +0.000e+00 -3.638e-05 +3.638e-05 8 4 / 0 +0.000e+00 -3.638e-05 +3.638e-05 10 5 / 0 +0.000e+00 -3.638e-05 +3.638e-05 12 6 / 0 +0.000e+00 -3.638e-05 +3.638e-05 14 7 / 0 +0.000e+00 -3.638e-05 +3.638e-05 16 8 / 0 +0.000e+00 -3.638e-05 +3.638e-05 18 9 / 0 +0.000e+00 -3.638e-05 +3.638e-05 20 10 / 0 +0.000e+00 -3.638e-05 +3.638e-05 22 11 / 0 +0.000e+00 -3.638e-05 +3.638e-05 24 12 / 0 +0.000e+00 -3.638e-05 +3.638e-05 26 13 / 0 +0.000e+00 -3.638e-05 +3.638e-05 28 14 / 0 +0.000e+00 -3.638e-05 +3.638e-05 max(abs(dc_user - dc_numerical)) = 4.78662564092e-05 (is registered in dc number 0) sum(abs(dc_user - dc_numerical)) = 0.000557448428236 ======================== OpenOpt checks user-supplied gradient dh (size: (15, 2)) according to: prob.diffInt = 1e-07 prob.check.maxViolation = 1e-07 dh num i,j:dh[i]/dx[j] user-supplied numerical difference 27 13 / 1 +7.642e+02 +7.642e+02 -2.108e-05 28 14 / 0 +3.312e+03 +3.312e+03 -5.292e-03 max(abs(dh_user - dh_numerical)) = 0.00529207172031 (is registered in dh number 28) sum(abs(dh_user - dh_numerical)) = 0.00531315227033 ======================== (to see the messages you need to turn prob.check.df=1, prob.check.dh=1, prob.check.dc=1) 2. Patterns: I should decide which way of implementing those ones is the best. For example, consider the problem n,m,s = 10000,1000,1000 ((x+15)^2).sum() -> min# size(x)=n acoording to constraints x1^2 + x2^2 + ... + xm^2 <=c1 x2^2 + x3^2 + ... + x[m+1]^2 <=c2 ... xs^2+x[s+1]^2+... + x[m+s]^2 <= c[s] so cPattern will look like 1 1 1...1 0 0 0 0 0 1 1 ..1 1 0 0 0 0 0 1 ..1 1 1 0 0 ... 0 0 0 ..0 0 0 0 0 0 0 0 ..0 0 0 0 0 if I will store dc as 1) dense matrix 2) sparse matrix it will require 1) O(n x s) ~= 10000*1000*sizeof(float) memory bytes 2) O(m x s) = 1000*1000*(sizeof(float) + 2*sizeof(int)) memory bytes However, lots of solvers require just dL/dx = df/dx + dc/dx + dh/dx = grad(f) + sum(dot(mu,c(x))) + sum(dot(lambda,h(x))) so, if dc/dx will be obtained sequentially as dc[i]/dx or dc/dx[i], we can just summarize these ones step by step and amount of memory needed will be O(n) , approximately sizeof(float)*n However, unfortunately not all solvers need just dL/dx, or L is not just f + + , like for example in Augmented Lagrangian - related solvers. 
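For a Lagrangian of the form L = f + <mu, c> + <lambda, h>, the quantity in question written out in full is dL/dx = grad f(x) + sum_i mu_i * grad c_i(x) + sum_j lambda_j * grad h_j(x). A sketch of the O(n)-memory accumulation described above (a sketch only, not OpenOpt code; the per-constraint gradient callables are assumed purely for illustration):

import numpy as np

def lagrangian_gradient(grad_f, c_grads, mu, h_grads, lam, x):
    """dL/dx = grad f(x) + sum_i mu[i]*grad c_i(x) + sum_j lam[j]*grad h_j(x).

    c_grads and h_grads are sequences of callables, each returning a
    single constraint gradient (a length-n vector), so the full
    Jacobians dc/dx and dh/dx are never materialized: only O(n)
    memory is live at any time.
    """
    g = np.array(grad_f(x), dtype=float)     # own, writable copy of grad f
    for mu_i, grad_ci in zip(mu, c_grads):
        g += mu_i * grad_ci(x)               # one row of dc/dx at a time
    for lam_j, grad_hj in zip(lam, h_grads):
        g += lam_j * grad_hj(x)
    return g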
On the other hand, seems like it should always work with fPattern, because f = Fwhole is always just mere sum of f[j] One more problem is how to implement data storing. The most obvious approach is of course double cycle for i in xrange(n) for j in xrange(len(c))#number of c inequalities dc[indexes related to the vector (and its' length) returned from c[j](x) evaluation] = (c[j](x)-c0[j])/p.diffInt ('cause c[j] can return several numbers, if we consider the most general case) However, double cycle is too slow for Python. I intend to write single cycle, based on for i in xrange(n) j = where(cPattern[i]) C = dc[some ind] = (C[some ind] - C0[some ind])/p.diffInt here some ind could be something like [1,2,3] + [4,5] + [8,9,10] + ... + [100, 101] - values obtained from those j: cPattern[j]=1 of couse in code I will divide dcmatrix only once, using ufunc /, when whole matrix dc is obtained However, some problems still exist. 3.This week I've received (and answered to) some letters from ALGENCAN developers, they are interested in connecting their solvers to Python and OO. Their solvers often works better than IPOPT, some results are attached in their articles http://www.ime.usp.br/~egbirgin/tango/publications.php (Their software is Augmented Lagrangian - based) + They informed me that they had changed lots of their solvers licenses from "free for noncommercial usage" (that are not OSI-approved) to GPL (some their software remain commercial). 4. So, I have informed my department about your intention to change OpenOpt name. They have answered me: Dmitrey, we had agreed to make OpenOpt our department's vendor, like TomOpt, PENOPT, IPOPT, SolvOpt, CVXOPT are, as well as construct (in future) a website like openopt.net and create a banner like "member of OpenOpt project" for link exchange and/or our partners. So you propose to turn it into something like "member of scikits.optimize project"? So it's your own decision, but please, don't annoy with your questions anymore - we have our own work, and we do not see any benefits of spending our time with your one, moreover, for free. Please, just finish your education as quickly as it can be and go away - we need people working in our projects. Particularly, it means that I will not be able to implement the QP/QPQC solver that I told about (this one is needed as default lincher QP subproblem solver with BSD license). Also, unfortunately it means problems with consulting about global GRASP-based solver (and implementing this one into scikits.optimization), because I still misunderstood some dr. Shilo's algorithm steps. In brief that's all for the week. BTW it's last one before 1st GSoC milestone (July 9th) and I had done all mentioned in http://projects.scipy.org/scipy/scipy/wiki/OptimizationProblems except patterns. Regards, D. From travis at enthought.com Fri Jul 6 08:09:41 2007 From: travis at enthought.com (Travis Vaught) Date: Fri, 6 Jul 2007 07:09:41 -0500 Subject: [SciPy-dev] ANN: SciPy Conference Early Registration Reminder Message-ID: <5A9C5264-C85B-4DD8-836D-E63582058720@enthought.com> Greetings, The *SciPy 2007 Conference on Scientific Computing with Python* early registration deadline is July 15, 2007. After this date, the price for registration will increase from $150 to $200. More information on the Conference is here: http://www.scipy.org/ SciPy2007 The registration page is here: https://www.enthought.com/scipy07/ The Conference is to be held on August 16-17. 
Tutorial Sessions are being offered on August 14-15 (http://www.scipy.org/SciPy2007/ Tutorials). The price to attend Tutorials is $75. The Saturday following the Conference will hold a Sprint session for those interested in pitching in on particular development efforts. (suggestions welcome: http://www.scipy.org/SciPy2007/Sprints) Today is the deadline for abstract submissions for those wanting to present at the conference. Please email to abstracts at scipy.org by midnight US Central Time. From the conference web page: "If you are using Python in Scientific Computing, we'd love to hear from you. If you are interested in presenting at the conference, you may submit an abstract in Plain Text, PDF or MS Word formats to abstracts at scipy.org -- the deadline for abstract submission is July 6, 2007. Papers and/or presentation slides are acceptable and are due by August 3, 2007. Presentations will be allowed 30-35 minutes, depending on the final schedule." We're looking forward to another great gathering. Best, Travis From aisaac at american.edu Fri Jul 6 10:10:18 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 6 Jul 2007 10:10:18 -0400 Subject: [SciPy-dev] GSoC weekly report In-Reply-To: <468E300A.4090609@ukr.net> References: <468E300A.4090609@ukr.net> Message-ID: I hope potential users or contributors will provide Dmitrey some feedback on this design decision. Thank you, Alan Isaac On Fri, 06 Jul 2007, dmitrey apparently wrote: > 2. Patterns: I should decide which way of implementing those ones is the > best. > For example, consider the problem > n,m,s = 10000,1000,1000 > ((x+15)^2).sum() -> min# size(x)=n > acoording to constraints > x1^2 + x2^2 + ... + xm^2 <=c1 > x2^2 + x3^2 + ... + x[m+1]^2 <=c2 > ... > xs^2+x[s+1]^2+... + x[m+s]^2 <= c[s] > so cPattern will look like > 1 1 1...1 0 0 0 0 > 0 1 1 ..1 1 0 0 0 > 0 0 1 ..1 1 1 0 0 > ... > 0 0 0 ..0 0 0 0 0 > 0 0 0 ..0 0 0 0 0 > if I will store dc as > 1) dense matrix > 2) sparse matrix > it will require > 1) O(n x s) ~= 10000*1000*sizeof(float) memory bytes > 2) O(m x s) = 1000*1000*(sizeof(float) + 2*sizeof(int)) memory bytes > However, lots of solvers require just dL/dx = df/dx + dc/dx + dh/dx = > grad(f) + sum(dot(mu,c(x))) + sum(dot(lambda,h(x))) > so, if dc/dx will be obtained sequentially as dc[i]/dx or dc/dx[i], we > can just summarize these ones step by step and amount of memory needed > will be O(n) , approximately sizeof(float)*n > However, unfortunately not all solvers need just dL/dx, or L is not just > f + + , like for example in Augmented Lagrangian - > related solvers. On the other hand, seems like it should always work > with fPattern, because f = Fwhole is always just mere sum of f[j] > One more problem is how to implement data storing. The most obvious > approach is of course double cycle > for i in xrange(n) > for j in xrange(len(c))#number of c inequalities > dc[indexes related to the vector (and its' length) returned from > c[j](x) evaluation] = (c[j](x)-c0[j])/p.diffInt > ('cause c[j] can return several numbers, if we consider the most general > case) > However, double cycle is too slow for Python. I intend to write single > cycle, based on > for i in xrange(n) > j = where(cPattern[i]) > C = > dc[some ind] = (C[some ind] - C0[some ind])/p.diffInt > here some ind could be something like [1,2,3] + [4,5] + [8,9,10] + ... 
+ > [100, 101] - values obtained from those j: cPattern[j]=1 > of couse in code I will divide dcmatrix only once, using ufunc /, when > whole matrix dc is obtained From ondrej at certik.cz Fri Jul 6 15:15:59 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Fri, 6 Jul 2007 21:15:59 +0200 Subject: [SciPy-dev] GSoC weekly report In-Reply-To: <468E300A.4090609@ukr.net> References: <468E300A.4090609@ukr.net> Message-ID: <85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com> > So it's your own decision, but please, don't annoy with your questions > anymore - we have our own work, and we do not see any benefits of > spending our time with your one, moreover, for free. Please, just finish > your education as quickly as it can be and go away - we need people > working in our projects. Wow, so the chief of your department is telling you to go abroad as soon as possible and don't "annoy" him with any questions? Not a nice attitude to students. Ondrej From openopt at ukr.net Fri Jul 6 15:34:29 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 06 Jul 2007 22:34:29 +0300 Subject: [SciPy-dev] GSoC weekly report In-Reply-To: <85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com> References: <468E300A.4090609@ukr.net> <85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com> Message-ID: <468E9945.8080501@ukr.net> Ondrej Certik wrote: >> So it's your own decision, but please, don't annoy with your questions >> anymore - we have our own work, and we do not see any benefits of >> spending our time with your one, moreover, for free. Please, just finish >> your education as quickly as it can be and go away - we need people >> working in our projects. >> > > Wow, so the chief of your department is telling you to go abroad as > soon as possible and don't "annoy" him with any questions? Not a nice > attitude to students. > > Ondrej > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > At 1st, this opinion is a sum of some department workers, not just only chief. 2nd, they do not insist me to go abroad, they know that I'm working on-line. 3rd, I perfectly understand the opinion. And I think they are right. Here in Ukraine you should work very hard for to earn some money, so they have no time to do any additional job, moreover for free. And there is no sense to hire me for work in their department (and hence assist me) if I will spend most of time working for someone else, than their projects. Regards, D. From aisaac at american.edu Fri Jul 6 16:23:05 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 6 Jul 2007 16:23:05 -0400 Subject: [SciPy-dev] GSoC weekly report In-Reply-To: <468E9945.8080501@ukr.net> References: <468E300A.4090609@ukr.net><85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com><468E9945.8080501@ukr.net> Message-ID: On Fri, 06 Jul 2007, dmitrey apparently wrote: > Here in Ukraine you should work very hard for to earn some > money, so they have no time to do any additional job, > moreover for free. And there is no sense to hire me for > work in their department (and hence assist me) if I will > spend most of time working for someone else, than their > projects. This is drifting OT, but I will add one comment, since many students may face such situations. You are mixing together a few different issues. 1. What will you spend your time on. 2. How will it benefit you. 3. How will it benefit them. 4. Will they pay you. 
It is still the case that many businesses have trouble understanding how paying some staff work on free and open source software can be profitable to the company. There a many ways this can work, and it is likely to be very firm specific. If you have the dual goals - would like to work with these people, and - would like to work on FOSS not just as a hobby then they will need to understand how your FOSS work can be good for them. Cheers, Alan Isaac From l.mastrodomenico at gmail.com Fri Jul 6 16:47:31 2007 From: l.mastrodomenico at gmail.com (Lino Mastrodomenico) Date: Fri, 6 Jul 2007 22:47:31 +0200 Subject: [SciPy-dev] [Numpy-discussion] PEP 368: Standard image protocol and class In-Reply-To: References: Message-ID: 2007/7/4, Bill Baxter : > I think a PEP that aims to be a generic image protocol should > support at least 32 bit floats if not 64-bit doubles and 16 bit > "Half"s used by some GPUs (and supported by the OpenEXR format). Yes, the next version of the PEP will include float16 and float32 versions of both the L and the RGBA modes. The float16 type is the IEEE 754r one, implemented in software and compatible with OpenGL and OpenEXR. -- Lino Mastrodomenico E-mail: l.mastrodomenico at gmail.com From matthieu.brucher at gmail.com Fri Jul 6 17:59:23 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 6 Jul 2007 23:59:23 +0200 Subject: [SciPy-dev] [Numpy-discussion] PEP 368: Standard image protocol and class In-Reply-To: References: Message-ID: Hi, What are the gains when compared to Numpy ? Numpy already supports 2D arrays as well as matrices, and it should be enough to support images, shouldn't it ? Matthieu 2007/7/6, Lino Mastrodomenico : > > 2007/7/4, Bill Baxter : > > I think a PEP that aims to be a generic image protocol should > > support at least 32 bit floats if not 64-bit doubles and 16 bit > > "Half"s used by some GPUs (and supported by the OpenEXR format). > > Yes, the next version of the PEP will include float16 and float32 > versions of both the L and the RGBA modes. The float16 type is the > IEEE 754r one, implemented in software and compatible with OpenGL and > OpenEXR. > > -- > Lino Mastrodomenico > E-mail: l.mastrodomenico at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Sat Jul 7 02:34:21 2007 From: openopt at ukr.net (dmitrey) Date: Sat, 07 Jul 2007 09:34:21 +0300 Subject: [SciPy-dev] GSoC weekly report In-Reply-To: References: <468E300A.4090609@ukr.net><85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com><468E9945.8080501@ukr.net> Message-ID: <468F33ED.7060204@ukr.net> Alan G Isaac wrote: > This is drifting OT, but I will add one comment, since many students may face such situations. > > You are mixing together a few different issues. > 1. What will you spend your time on. > 2. How will it benefit you. > 3. How will it benefit them. > 4. Will they pay you. > > > Cheers, > Alan Isaac > However, they (especially chief) doesn't believe in languages like python at all and don't respect them. In whole department only 2-3 persons know something different from C/Fortran (afaik they had deal with MATLAB, in SolvOpt project). Chief says "there were LOTS of languages, but Fortran is the most powerfull, it's faster than C++ in a factor of 5 or so and faster than C in a factor of 1.5-2! 
(however, I found these numbers far from realistic; maybe the custom benchmark written in our department was too specialized). All serious organizations that I have contacted use either Fortran or C/C++. And Fortran has recently implemented all the features that are needed - garbage collection, object-oriented programming, etc." (I guess he means the f2003 or f95 standards).
I explained to them that numpy has compiled C/Fortran libraries such as ATLAS, BLAS and LAPACK, but they still don't believe in Python (as an interpreted language).
As for the name, they said "so we must explain to everyone, every time, that OpenOpt and scikits.optimize are the same? How will scikits.optimize users know that scikits.optimize is the same as OpenOpt? We see no benefits here."

> It is still the case that many businesses have trouble understanding
> how paying staff to work on free and open source software can be
> profitable to the company. There are many ways this can work, and it
> is likely to be very firm specific.

All our department's software and other solutions are open source, and they understand the open source community, licenses, etc. perfectly well. But, as I have already mentioned, they don't take Python seriously, nor MATLAB. They have spent many years working with Fortran (or, some of them, with C/C++), and many of them have only a few years left before retirement, so they certainly will not switch to any other language. BTW, it was THEY who persuaded me to remain in the open source sector when I proposed collaborating with TOMLAB (tomopt.com). So, if I mention anything about open source, they will just answer me "OK, here are some of our (Fortran/C) projects, they are fully open source - you can freely copy, spread, modify them, etc." (however, the salaries are significantly smaller there, even though they have some grants from abroad).

> If you have the dual goals
> - would like to work with these people, and
> - would like to work on FOSS not just as a hobby
> then they will need to understand how your FOSS work can be
> good for them.

The work on the scikits package can't give them anything, because I am just implementing in Python some code that has already been written in Fortran (or C) by our dept, ASAI, or other partners of our dept. And they have no intention of switching from Fortran to Python. They say "who knows, maybe 3-4 years from now all your numpy/scipy/scikits will have become unused (another language may appear, like Fortress, or Python/scipy may be supplanted by Ruby/Rnum (a Ruby numerical library)), and several years will have been spent for nothing. Also, this type of work (implementing solvers and working on the OO kernel) will not yield you any scientific results; you are doing the same things that were already done a long time ago".

Regards, Dmitrey.

From matthieu.brucher at gmail.com Sat Jul 7 03:37:19 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 7 Jul 2007 09:37:19 +0200
Subject: [SciPy-dev] GSoC weekly report
In-Reply-To: <468F33ED.7060204@ukr.net>
References: <468E300A.4090609@ukr.net> <85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com> <468E9945.8080501@ukr.net> <468F33ED.7060204@ukr.net>
Message-ID:

> As for the name, they said "so we must explain to everyone, every time,
> that OpenOpt and scikits.optimize are the same? How will scikits.optimize
> users know that scikits.optimize is the same as OpenOpt? We see no
> benefits here."

I don't understand this. Are they advertising OpenOpt? Why couldn't they advertise it under another name?

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openopt at ukr.net Sat Jul 7 03:49:13 2007
From: openopt at ukr.net (dmitrey)
Date: Sat, 07 Jul 2007 10:49:13 +0300
Subject: [SciPy-dev] GSoC weekly report
In-Reply-To:
References: <468E300A.4090609@ukr.net> <85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com> <468E9945.8080501@ukr.net> <468F33ED.7060204@ukr.net>
Message-ID: <468F4579.10808@ukr.net>

Matthieu Brucher wrote:
>> As for the name, they said "so we must explain to everyone, every time,
>> that OpenOpt and scikits.optimize are the same? How will scikits.optimize
>> users know that scikits.optimize is the same as OpenOpt? We see no
>> benefits here."
>
> I don't understand this. Are they advertising OpenOpt? Why couldn't
> they advertise it under another name?
>
> Matthieu

I guess they don't want to have to explain every time what "scikits.optimize" (or whatever it would be called) means. They want their work to belong 100% to their own department, and they consider only *OPT an appropriate vendor name for the product, as PENOPT and our SolvOpt are. Also, they want to keep the possibility of retaining the same name for any other language (they say: who knows, maybe Ruby or Fortress or something else will make Python/scipy obsolete very soon, and the effort spent on the brand (or vendor? excuse my English) will be just a waste of time?)

D.

From millman at berkeley.edu Sat Jul 7 04:42:37 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 7 Jul 2007 01:42:37 -0700
Subject: [SciPy-dev] GSoC weekly report
In-Reply-To: <468F4579.10808@ukr.net>
References: <468E300A.4090609@ukr.net> <85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com> <468E9945.8080501@ukr.net> <468F33ED.7060204@ukr.net> <468F4579.10808@ukr.net>
Message-ID:

On 7/7/07, dmitrey wrote:
> I guess they don't want to have to explain every time what
> "scikits.optimize" (or whatever it would be called) means. They want
> their work to belong 100% to their own department, and they consider
> only *OPT an appropriate vendor name for the product, as PENOPT and our
> SolvOpt are. Also, they want to keep the possibility of retaining the
> same name for any other language (they say: who knows, maybe Ruby or
> Fortress or something else will make Python/scipy obsolete very soon,
> and the effort spent on the brand (or vendor? excuse my English) will
> be just a waste of time?)

Hey Dmitrey,

First, I want to say that from what I can see it looks like your SoC project is going very well. I may well be misunderstanding some of the issues with your Department, and I am not sure what the outcome of that discussion was, so please forgive me if that is the case. Anyway, I will add my comments to make my position clear.

1. I think it is great that your Department has released so much open source code, and it certainly makes sense that they want to brand their code.
2. Your Summer of Code project was accepted by the Python Software Foundation for you to work on the SciPy project, not on your Department's OpenOpt project. OpenOpt is not a Google mentoring organization and it isn't a Python project.
3. Your project, which isn't being paid for by your Department, is a SciKit. You are the main developer of a new SciKit optimization project, which is based on your experience with OpenOpt as well as other optimization projects.
4. It doesn't seem that the OpenOpt project has any interest in including any Python code anyway.
5. Your SciKit is not being developed or maintained by OpenOpt developers.
6. You are already planning to include Matthieu's Python optimization code in your scikit. That is clearly not related to OpenOpt.
Given all this, I think it is reasonable to conclude that your SciKit is technically not part of your Department's OpenOpt project. As such, code design and naming conventions for your SciKit should conform to SciPy standards, not OpenOpt ones.

I am very happy with the work you are doing and have no interest in harming your academic career or your standing in your Department. At this point it seems like there are some personal and professional issues that need to be resolved between you and your Department. I want to make absolutely sure that your Department doesn't feel they are being taken advantage of, and that you aren't put in bad standing with them either. At this point, I think we should move the discussion of these political issues into a private conversation with your SoC mentors.

Thanks,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From openopt at ukr.net Sat Jul 7 06:19:44 2007
From: openopt at ukr.net (dmitrey)
Date: Sat, 07 Jul 2007 13:19:44 +0300
Subject: [SciPy-dev] GSoC weekly report
In-Reply-To:
References: <468E300A.4090609@ukr.net> <85b5c3130707061215l37fd1355ve4a58e5c0f6051@mail.gmail.com> <468E9945.8080501@ukr.net> <468F33ED.7060204@ukr.net> <468F4579.10808@ukr.net>
Message-ID: <468F68C0.8030803@ukr.net>

Jarrod Millman wrote:
> I may well be misunderstanding some of
> the issues with your Department, and I am not sure what the outcome of
> that discussion was, so please forgive me if that is the case.
Yes, you did. The history and the current situation are as follows:
1. My dept staff accepted me as a PhD student and expected that I would assist them in their projects (after graduation or even earlier) with 100% of my work time (or so).
2. In time free from work and study I wrote the "OpenOpt for MATLAB/Octave" toolkit. Some of the dept staff assisted me with advice and provided me with literature and some Fortran code (the non-smooth ralg solver).
3. I told them about my GSoC participation, which would allow me to earn essential (for Ukraine) money (PhD students hired in Ukrainian National Science Academy institutes (like mine) earn, as junior scientific workers, ~120$/month from the state; however, some lucky ones get grants from abroad (and/or from Ukrainian commercial organizations and/or foundations), as my dept does). Although they do not take Python seriously as a scientific language, they promised to help me from time to time. However, during our discussions I always referred to the project as the "OpenOpt project, a creation of the UNSA cybernetics institute, optimization department", with the vendor name belonging to our dept. Also, we intended to create a new dept web-site (vs the obsolete web-page: http://www.icyb.kiev.ua/Web_120.htm), something like "openopt.net".
4. So, when they heard it will be just a "scikits.optimize" project, and our vendor name will not be used (not even "scikits.openopt"), they lost any reason to assist me. So, I need somehow to win back their interest (otherwise, 4-5 months from now, when I am due to graduate, I will not be hired into the department and will not be able to consult with them), and I suspect bringing back "openopt" is the minimum required.

D.

From wnbell at gmail.com Sat Jul 7 21:49:43 2007
From: wnbell at gmail.com (Nathan Bell)
Date: Sat, 7 Jul 2007 18:49:43 -0700
Subject: [SciPy-dev] sparse comments and questions
Message-ID:

Currently the sparse matrix storage formats (e.g.
sparse.csr_matrix ) only support integer indices ( numpy.intc ) and floating point data (float32, float64, complex64, and complex128). While this addresses the common usage, it would be nice to support other index and data formats also. For example, when using a csr_matrix to represent a graph the dtype is mostly irrelevant, so smaller types like uint8 would be desirable. Likewise, on a 64-bit machine, one might like to use 32-bit indices (instead of the default C-int size) to economize on space. The sparsetools routines are fully templated, so it's a simple matter to support the other types there. However, I don't know whether other subpackages that use sparse matrices would require changes. Any thoughts? I have a few other questions and comments regarding the sparse package. 1) Does anyone depend on the .ftype attribute of the sparse formats? Can it be eliminated? If not, can it be trapped by __getattr__() so that it doesn't become unsynchronized from data.dtype? 2) When operating on multiple types, what's the best way to get the correct upcast type? For example int32 + float32 -> float64. I know this can be done experimentally ( http://www.scipy.org/Tentative_NumPy_Tutorial/Discussion ). Is there a better way? 3) Should we allow modifications to CSR and CSC matrices that force O(N) updates/reallocations. For instance, adding a new value into a CSR matrix is essentially as expensive as building a new one from scratch. Could we instead prohibit such modifications and raise an exception informing the user that lil_matrix is the proper format for such operations? Note that changing an existing matrix value is not so costly (typically O(1)), so not all calls to __setitem__ would be banned. 4) lil_matrix now supports more sophisticated slicing, but the implementation is pretty hairy and not especially fast. Is anyone looking at this? Does the slicing work correctly? I fixed a bug related to negative indices, but I'm not completely confident in the robustness of my fix. 5) Scipy's spdiags() works differently than the MATLAB version. Should it be changed to coincide? If we later support a sparse diagonal format, should the output of spdiags change to dia_matrix instead of the present csc_matrix? 6) When working with sparse matrices, do people expect the result to be of the same type? For example, it's far more efficient to make transpose(csr_matrix) output a csc_matrix (as is currently done in SciPy) since that's a O(1) operation (as opposed to an O(n) cost otherwise). So in general, should a method to return the result in most efficient output format, relying on the user to convert it to another format if so desired? I.e. if you need a csr_matrix at a given point, then you should call .tocsr() to be sure. If it's already a csr_matix, then no conversion will be done. 7) Where do functions that operate on sparse matrices belong? For example, suppose I wrote a function to extract the lower diagonal entries of a matrix. Should it be added to sparse.py (like spdiags) or somewhere else? I'd like to add some functionality to allow the extraction of submatrices from CSR/CSC matrices. This could be used when removing the DOFs corresponding to boundary conditions from a bilinear form. Does anyone have a suggestion for an interface for this functionality? It's possible to implement this via sparse matrix multiplication, albeit with some additional overhead. 
From peter.skomoroch at gmail.com Sat Jul 7 22:06:43 2007
From: peter.skomoroch at gmail.com (Peter Skomoroch)
Date: Sat, 7 Jul 2007 22:06:43 -0400
Subject: [SciPy-dev] sparse comments and questions
In-Reply-To:
References:
Message-ID:

Nathan,

Do you have any plans to implement sparse matrix division? That is something I've found lacking in the sparse matrix support...

-Pete

On 7/7/07, Nathan Bell wrote:
> Currently the sparse matrix storage formats (e.g. sparse.csr_matrix)
> only support integer indices (numpy.intc) and floating point data.
> [rest of the quoted message trimmed]
--
Peter N. Skomoroch
peter.skomoroch at gmail.com
http://www.datawrangling.com

From edschofield at gmail.com Sun Jul 8 06:51:40 2007
From: edschofield at gmail.com (Ed Schofield)
Date: Sun, 8 Jul 2007 11:51:40 +0100
Subject: [SciPy-dev] sparse comments and questions
In-Reply-To:
References:
Message-ID: <1b5a37350707080351w6c23e0c1q126818f89c2b319@mail.gmail.com>

On 7/8/07, Nathan Bell wrote:
> Currently the sparse matrix storage formats (e.g. sparse.csr_matrix)
> only support integer indices (numpy.intc) and floating point data. [...]
> The sparsetools routines are fully templated, so it's a simple matter
> to support the other types there. However, I don't know whether other
> subpackages that use sparse matrices would require changes. Any thoughts?

Yes, good idea. I don't imagine preserving compatibility with existing packages would be difficult.

> 1) Does anyone depend on the .ftype attribute of the sparse formats?
> Can it be eliminated? If not, can it be trapped by __getattr__() so
> that it doesn't become unsynchronized from data.dtype?

As far as I'm concerned it can be eliminated. But it's been around for years (before I started working on sparse) and I have no idea what it does. Travis?

> 3) Should we allow modifications to CSR and CSC matrices that force
> O(N) updates/reallocations? [...]

Good point. There's an argument to be made for protecting users from themselves. Might there be a need for the occasional O(N) tweak to an existing CSR/CSC matrix? I don't know. I suggest we spin off this functionality from __setitem__ into a separate module-level function called modify_cs_matrix_for_masochists().

> 4) lil_matrix now supports more sophisticated slicing, but the
> implementation is pretty hairy and not especially fast. Is anyone
> looking at this? Does the slicing work correctly?
> I fixed a bug related to negative indices, but I'm not completely
> confident in the robustness of my fix.

The implementation seems cleaner now as a result of your work. To have more confidence in the slicing we also want more unit tests...

> 5) SciPy's spdiags() works differently than the MATLAB version.
> Should it be changed to coincide? [...]

Yes, I think so.

> 6) When working with sparse matrices, do people expect the result to
> be of the same type? [...]

Yes, definitely.

> 7) Where do functions that operate on sparse matrices belong? [...]

Yes, sparse.py is a good place for it.

-- Ed

From matthieu.brucher at gmail.com Mon Jul 9 07:47:03 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 9 Jul 2007 13:47:03 +0200
Subject: [SciPy-dev] Current SVN seems to be broken
Message-ID:

Hi,

I just updated scipy to HEAD, and when building I got this:

Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'int SWIG_Python_ConvertPtr(PyObject*, void**, swig_type_info*, int)':
Lib/sparse/sparsetools/sparsetools_wrap.cxx:1231: error: invalid conversion from 'const char*' to 'char*'
Lib/sparse/sparsetools/sparsetools_wrap.cxx: In function 'void SWIG_Python_FixMethods(PyMethodDef*, swig_const_info*, swig_type_info**, swig_type_info**)':
Lib/sparse/sparsetools/sparsetools_wrap.cxx:20206: error: invalid conversion from 'const char*' to 'char*'
[the same two errors are repeated once more]
error: Command "g++ -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -fPIC -ILib/sparse/sparsetools -I/home/brucher/local/lib/python2.5/site-packages/numpy/core/include -I/home/brucher/local/include/python2.5 -c Lib/sparse/sparsetools/sparsetools_wrap.cxx -o build/temp.linux-i686-2.5/Lib/sparse/sparsetools/sparsetools_wrap.o" failed with exit status 1

This is with gcc 4.1.1 on FC6. The file was updated since my last checkout; can it be repaired?

Matthieu
From openopt at ukr.net Mon Jul 9 09:11:53 2007
From: openopt at ukr.net (dmitrey)
Date: Mon, 09 Jul 2007 16:11:53 +0300
Subject: [SciPy-dev] restructuredtext en docformat
Message-ID: <46923419.7080300@ukr.net>

I noticed that http://twistedmatrix.com/pipermail/twisted-bugs/2006-October/001609.html says:

    epydoc has an epytext variant that's valid restructured text. You can
    either set `__docformat__ = "restructuredtext en"` in your module, or
    pass `--docformat restructuredtext` to `epydoc` to use it.

So maybe we can omit putting that line in every module (and just use the latter option)? Also, a more appropriate format may appear in the future, in which case we would have to make that change in every source file.

D.

From aisaac at american.edu Mon Jul 9 09:50:33 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 9 Jul 2007 09:50:33 -0400
Subject: [SciPy-dev] restructuredtext en docformat
In-Reply-To: <46923419.7080300@ukr.net>
References: <46923419.7080300@ukr.net>
Message-ID:

On Mon, 09 Jul 2007, dmitrey apparently wrote:
> http://twistedmatrix.com/pipermail/twisted-bugs/2006-October/001609.html
> epydoc has an epytext variant that's valid restructured
> text. You can either set `__docformat__
> = "restructuredtext en"` in your module, or pass
> `--docformat restructuredtext` to `epydoc` to use it.

In my opinion, including the docformat declaration in each module is important:

- It is the friendly thing to do. (I.e., it provides useful information that is otherwise missing.)
- If doc standards change, not all modules will be changed at once, so it will be crucial for modules to carry this information.
- The docformat declaration can be used by other processors (not just epydoc).
- epydoc users can always override the module-level docformat declaration at the command line, if desired.

All that said, it does seem like it would be appropriate to be able to declare a docformat for an entire package in its __init__.py file. I do not think this is currently possible.

Cheers,
Alan Isaac
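[Concretely, the per-module declaration Alan and dmitrey are discussing is a one-line, module-level assignment. A minimal sketch, with a hypothetical module name:]

    # example_module.py  (hypothetical module, for illustration)
    """Example module whose docstrings are written in reStructuredText.

    Tools such as epydoc read the declaration below to decide how to
    render these docstrings.
    """
    __docformat__ = "restructuredtext en"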
From jeremit0 at gmail.com Mon Jul 9 11:24:54 2007
From: jeremit0 at gmail.com (Jeremy Conlin)
Date: Mon, 9 Jul 2007 11:24:54 -0400
Subject: [SciPy-dev] Sorted Eigenvalues/vectors
Message-ID: <3db594f70707090824n15bc4a1avf18d50f2a43b7a88@mail.gmail.com>

I need the eigenvalues/vectors calculated in my code to be returned sorted by eigenvalue. Of course this necessitates sorting the eigenvectors accordingly. If I understand correctly, a method for this doesn't exist in the current distribution, so I wrote a simple one myself. Is this something the community is interested in? Should a method like this be included in the scipy distribution? It is small, but I think very useful.

Thanks,
Jeremy Conlin

From wnbell at gmail.com Mon Jul 9 12:48:41 2007
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 9 Jul 2007 09:48:41 -0700
Subject: [SciPy-dev] Current SVN seems to be broken
In-Reply-To:
References:
Message-ID:

On 7/9/07, Matthieu Brucher wrote:
> i686-2.5/Lib/sparse/sparsetools/sparsetools_wrap.o" failed
> with exit status 1
>
> With gcc 4.1.1 on FC6
>
> This file was updated since my last checkout, can it be repaired?

I've updated to the latest SWIG (from SVN) and regenerated sparsetools_wrap.cxx. Let me know if the problem persists.

I don't have any problems on:
gcc version 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)

We've seen this very problem before (see the thread titled "sparsetools_wrap.cxx" from Jan 6th 2007), so it's likely that I used the wrong SWIG to generate the file.

--
Nathan Bell wnbell at gmail.com

From matthieu.brucher at gmail.com Mon Jul 9 12:54:03 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 9 Jul 2007 18:54:03 +0200
Subject: [SciPy-dev] Current SVN seems to be broken
In-Reply-To:
References:
Message-ID:

Thanks, it works now ;)

Matthieu

2007/7/9, Nathan Bell:
> I've updated to the latest SWIG (from SVN) and regenerated
> sparsetools_wrap.cxx. [...]

From wbaxter at gmail.com Mon Jul 9 15:12:53 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Tue, 10 Jul 2007 04:12:53 +0900
Subject: [SciPy-dev] Sorted Eigenvalues/vectors
In-Reply-To: <3db594f70707090824n15bc4a1avf18d50f2a43b7a88@mail.gmail.com>
References: <3db594f70707090824n15bc4a1avf18d50f2a43b7a88@mail.gmail.com>
Message-ID:

Sounds like you're looking for argsort. Something like:

ix = numpy.argsort(eigvals)
sorted_eigvals = eigvals[ix]
sorted_eigvecs = eigvecs[:,ix]

--bb

On 7/10/07, Jeremy Conlin wrote:
> I need the eigenvalues/vectors calculated in my code to be returned
> sorted by eigenvalue. [...]

From wnbell at gmail.com Mon Jul 9 18:54:24 2007
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 9 Jul 2007 15:54:24 -0700
Subject: [SciPy-dev] sparse comments and questions
In-Reply-To:
References:
Message-ID:

On 7/7/07, Peter Skomoroch wrote:
> Do you have any plans to implement sparse matrix division? That is
> something I've found lacking in the sparse matrix support...

Sorry, I'm not sure what you mean by sparse matrix division. Can you elaborate?

--
Nathan Bell wnbell at gmail.com

From peter.skomoroch at gmail.com Mon Jul 9 19:33:51 2007
From: peter.skomoroch at gmail.com (Peter Skomoroch)
Date: Mon, 9 Jul 2007 19:33:51 -0400
Subject: [SciPy-dev] sparse comments and questions
In-Reply-To:
References:
Message-ID:

Nathan,

In Matlab and some C++ libraries, I've used the following "sparse division" functionality: when you attempt to divide one sparse matrix by another of the same size, return the result of elementwise division of the entries in the matrices. This assumes that the two input matrices have the same sparsity structure.

This is useful when implementing some algorithms, like NMF, with sparse matrices. Right now in scipy, only division of a sparse matrix by a scalar is supported.
If you look at sparse.py in trunk:

    def __truediv__(self, other):
        if isscalarlike(other):
            return self * (1./other)
        else:
            raise NotImplementedError, "sparse matrix division not yet supported"

    def __div__(self, other):
        # Always do true division
        if isscalarlike(other):
            return self * (1./other)
        else:
            raise NotImplementedError, "sparse matrix division not yet supported"

Here is a C implementation of sparse matrix division (a MEX file called from MATLAB; apologies for the formatting):

source: http://journalclub.mit.edu/jclub/message?com_id=2;publication_id=21;message_id=58;session_id=2E87004B582D814FFB7D7DC3E64C3789;seq_no=54958

    /* spdotdiv.c
       c = spdotdiv(a,b)
       Performs matrix element division c = a./b, but evaluated only at the
       sparse locations. (a and b must have the same sparsity structure.) */

    #include "mex.h"
    #include <string.h>
    #include <math.h>

    #define C (plhs[0])
    #define A (prhs[0])
    #define B (prhs[1])

    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        int m, n, nzmax, nnz;
        int i;
        double *apr, *bpr, *cpr;

        if (nrhs != 2)
            mexErrMsgTxt("Two input arguments required.");
        if (!mxIsSparse(A) || !mxIsSparse(B))
            mexErrMsgTxt("Input arguments must be sparse.");

        m = mxGetM(A);
        n = mxGetN(A);
        nzmax = mxGetNzmax(A);
        nnz = *(mxGetJc(A) + n);
        if ((mxGetM(B) != m) || (mxGetN(B) != n) || (mxGetNzmax(B) != nzmax))
            mexErrMsgTxt("Input matrices must have same sparsity structure.");

        apr = mxGetPr(A);
        bpr = mxGetPr(B);
        if ((C = mxCreateSparse(m, n, nzmax, mxREAL)) == NULL)
            mexErrMsgTxt("Could not allocate sparse matrix.");
        cpr = mxGetPr(C);
        memcpy(mxGetIr(C), mxGetIr(A), nnz*sizeof(int));
        memcpy(mxGetJc(C), mxGetJc(A), (n+1)*sizeof(int));
        /* divide entry by entry at the shared sparse locations */
        for (i = 0; i < nnz; i++)
            cpr[i] = apr[i] / bpr[i];
    }

-Pete

On 7/9/07, Nathan Bell wrote:
> Sorry, I'm not sure what you mean by sparse matrix division. Can you
> elaborate?

--
Peter N. Skomoroch
peter.skomoroch at gmail.com
http://www.datawrangling.com

From david at ar.media.kyoto-u.ac.jp Tue Jul 10 00:11:32 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 10 Jul 2007 13:11:32 +0900
Subject: [SciPy-dev] scikits infrastructure: ticket Mailing list, list of projects
Message-ID: <469306F4.4020605@ar.media.kyoto-u.ac.jp>

Hi,

Since several projects are now under the scikits umbrella (mlabwrap, learn, the optimization toolbox, and audio I/O tools), I would like to know if some infrastructure could be set up. In particular, having ticket/commit mailing lists would be useful.

cheers,

David

From openopt at ukr.net Tue Jul 10 00:31:17 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 10 Jul 2007 07:31:17 +0300
Subject: [SciPy-dev] scikits infrastructure: ticket Mailing list, list of projects
In-Reply-To: <469306F4.4020605@ar.media.kyoto-u.ac.jp>
References: <469306F4.4020605@ar.media.kyoto-u.ac.jp>
Message-ID: <46930B95.3030205@ukr.net>

As for me, I don't think the idea of shared svn revision numbers (i.e., numbers like 178 and 181 spanning all the scikits) is a good one. I think numbering should be separate for each scikit; it would make it easier for someone interested in a single scikit to follow its svn changelog.
Besides, what is the sense of receiving emails about an mlabwrap svn update when I work with, for example, pyaudiolab and have no relation to mlabwrap at all? It's just a waste of time.

Just my 2 cents,

D.

David Cournapeau wrote:
> Since several projects are now under the scikits umbrella (mlabwrap,
> learn, the optimization toolbox, and audio I/O tools), I would like to
> know if some infrastructure could be set up. [...]

From robert.kern at gmail.com Tue Jul 10 00:34:45 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 09 Jul 2007 23:34:45 -0500
Subject: [SciPy-dev] scikits infrastructure: ticket Mailing list, list of projects
In-Reply-To: <46930B95.3030205@ukr.net>
References: <469306F4.4020605@ar.media.kyoto-u.ac.jp> <46930B95.3030205@ukr.net>
Message-ID: <46930C65.4070708@gmail.com>

dmitrey wrote:
> As for me, I don't think the idea of shared svn revision numbers
> is a good one. [...]

Sorry, we're not going to set up a separate SVN repository for each scikit.

--
Robert Kern

From david at ar.media.kyoto-u.ac.jp Tue Jul 10 00:33:32 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 10 Jul 2007 13:33:32 +0900
Subject: [SciPy-dev] scikits infrastructure: ticket Mailing list, list of projects
In-Reply-To: <46930B95.3030205@ukr.net>
References: <469306F4.4020605@ar.media.kyoto-u.ac.jp> <46930B95.3030205@ukr.net>
Message-ID: <46930C1C.7020107@ar.media.kyoto-u.ac.jp>

dmitrey wrote:
> As for me, I don't think the idea of shared svn revision numbers
> is a good one.

I don't see any way to avoid it, since all projects share the same subversion repository. I agree this is not ideal, but it was done for a reason (I don't claim accuracy on this point, but if I remember correctly, the current layout was preferred over a per-project {trunk,branches,tags} layout for performance reasons). If it were up to me, I wouldn't have used svn as the RCS anyway... but this is not my project nor my decision.

> I think numbering should be separate for each scikit; it would make
> it easier for someone interested in a single scikit to follow its
> svn changelog. Besides, what is the sense of receiving emails about
> an mlabwrap svn update when I work with, for example, pyaudiolab?

Maybe something could be done, like adding a tag to the subject line of the emails for each scikit, enabling easy filtering. I don't know how trac works, nor do I know if this is possible.
David

From openopt at ukr.net Tue Jul 10 00:57:28 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 10 Jul 2007 07:57:28 +0300
Subject: [SciPy-dev] scikits infrastructure: ticket Mailing list, list of projects
In-Reply-To: <46930C1C.7020107@ar.media.kyoto-u.ac.jp>
References: <469306F4.4020605@ar.media.kyoto-u.ac.jp> <46930B95.3030205@ukr.net> <46930C1C.7020107@ar.media.kyoto-u.ac.jp>
Message-ID: <469311B8.5010605@ukr.net>

I had thought about a filter for the scikits svn tickets too. I have also been thinking about filtering for mailing lists in general. For example, an "ignore topic" feature would be a good extension to Mozilla Thunderbird. Suppose someone writes to the mailing list about "problems with Mac install scipy". Since I have nothing to do with the problem, I press the right mouse button and choose "ignore topic", and the message is automatically removed, along with all later messages in the thread whose subject matches

*problems with Mac install scipy

where '*' stands for any prefix such as re:, re[2], Re:, etc. Such messages would not be downloaded from the mail server at all, so no new-mail beep or other announcement would distract me. The extension's settings could control how long a topic is ignored: a week, a month, a year, etc.

Or maybe you know of an email client where something like this is already implemented?

D.

David Cournapeau wrote:
> Maybe something could be done, like adding a tag to the subject line
> of the emails for each scikit, enabling easy filtering. I don't know
> how trac works, nor do I know if this is possible.

From wbaxter at gmail.com Tue Jul 10 01:58:24 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Tue, 10 Jul 2007 14:58:24 +0900
Subject: [SciPy-dev] scikits infrastructure: ticket Mailing list, list of projects
In-Reply-To: <469311B8.5010605@ukr.net>
References: <469306F4.4020605@ar.media.kyoto-u.ac.jp> <46930B95.3030205@ukr.net> <46930C1C.7020107@ar.media.kyoto-u.ac.jp> <469311B8.5010605@ukr.net>
Message-ID:

On 7/10/07, dmitrey wrote:
> Or maybe you know of an email client where something like this is
> already implemented?

Gmail: http://mail.google.com/support/bin/answer.py?hl=en&answer=47787

Would you like me to send you an invitation? ;-)

--bb
From jeremit0 at gmail.com Tue Jul 10 12:00:13 2007
From: jeremit0 at gmail.com (Jeremy Conlin)
Date: Tue, 10 Jul 2007 12:00:13 -0400
Subject: [SciPy-dev] Sorted Eigenvalues/vectors
In-Reply-To:
References: <3db594f70707090824n15bc4a1avf18d50f2a43b7a88@mail.gmail.com>
Message-ID: <3db594f70707100900p3892449ar7cc61e0b1c4a9f3@mail.gmail.com>

I have used argsort in my code, but certainly not as efficiently as you have. That's nice, thanks. I guess my question is: should this be more obvious in scipy/numpy, with a method specifically for sorting eigenpairs?

Thanks,
Jeremy

On 7/9/07, Bill Baxter wrote:
> Sounds like you're looking for argsort. Something like:
>
> ix = numpy.argsort(eigvals)
> sorted_eigvals = eigvals[ix]
> sorted_eigvecs = eigvecs[:,ix]

From wbaxter at gmail.com Tue Jul 10 15:37:56 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Wed, 11 Jul 2007 04:37:56 +0900
Subject: [SciPy-dev] Sorted Eigenvalues/vectors
In-Reply-To: <3db594f70707100900p3892449ar7cc61e0b1c4a9f3@mail.gmail.com>
References: <3db594f70707090824n15bc4a1avf18d50f2a43b7a88@mail.gmail.com> <3db594f70707100900p3892449ar7cc61e0b1c4a9f3@mail.gmail.com>
Message-ID:

Generally the answer is "no" when the functionality in question requires so little code. I think once you get more familiar with how numpy works, you'll be writing things like that in your sleep.

By the way, maybe you actually wanted to sort by magnitude, not value?

ix = numpy.argsort(abs(eigvals))
sorted_eigvals = eigvals[ix]
sorted_eigvecs = eigvecs[:,ix]

--bb

On 7/11/07, Jeremy Conlin wrote:
> I guess my question is: should this be more obvious in scipy/numpy,
> with a method specifically for sorting eigenpairs? [...]
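[For completeness, here is Bill's recipe assembled into a self-contained snippet; numpy.linalg.eig is used here for the decomposition, though scipy.linalg.eig would work the same way.]

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

    ix = np.argsort(abs(eigvals))         # ascending by magnitude
    sorted_eigvals = eigvals[ix]
    sorted_eigvecs = eigvecs[:, ix]       # reorder the columns to match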
From wnbell at gmail.com Wed Jul 11 12:47:57 2007
From: wnbell at gmail.com (Nathan Bell)
Date: Wed, 11 Jul 2007 09:47:57 -0700
Subject: [SciPy-dev] sparse comments and questions
In-Reply-To:
References:
Message-ID:

On 7/9/07, Peter Skomoroch wrote:
> In Matlab and some C++ libraries, I've used the following "sparse
> division" functionality: when you attempt to divide one sparse matrix
> by another of the same size, return the result of elementwise
> division of the entries in the matrices. [...]

I see what you mean now. I'll try to implement that this weekend. We currently support elementwise multiplication (via A**B, I believe), so you could use that in the meantime.

--
Nathan Bell wnbell at gmail.com

From saintmlx at apstat.com Wed Jul 11 22:37:10 2007
From: saintmlx at apstat.com (saintmlx)
Date: Wed, 11 Jul 2007 22:37:10 -0400
Subject: [SciPy-dev] proposed patches: reviewers welcome
Message-ID: <469593D6.5060208@apstat.com>

Hello SciPy Developers,

I recently submitted patches that, I hope, will help close two open tickets: the conversion of scalars using int() (#549, opened by myself), and... the notorious "agitation" around the "standard" standard deviation (#502 et al.).

Any reviews would be welcome. If these could make it into the svn repository, I'd be more than happy :) I've attached diff files to both tickets.

Thanks for any help,

Xavier Saint-Mleux

From wnbell at gmail.com Fri Jul 13 05:37:17 2007
From: wnbell at gmail.com (Nathan Bell)
Date: Fri, 13 Jul 2007 02:37:17 -0700
Subject: [SciPy-dev] sparse comments and questions
In-Reply-To:
References:
Message-ID:

On 7/9/07, Peter Skomoroch wrote:
> This is useful when implementing some algorithms, like NMF, with
> sparse matrices. Right now in scipy, only division of a sparse matrix
> by a scalar is supported.

I've added support for elementwise division in the latest SVN. I designed it so you get the same result as a dense A/B on the union of the sparsity structures. For example, if A[i,j] is nonzero while B[i,j] is zero, you get +/- inf. If A[i,j] is zero while B[i,j] is nonzero, you'll get an implicit 0 in the result (i.e., it will not appear in the output matrix). This holds true for explicit zeros in the matrices as well. I think this is the most faithful analog of the dense case.

Let me know if you discover any problems.

--
Nathan Bell wnbell at gmail.com
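[To spell out the rule Nathan describes, here is a tiny pure-Python model of the semantics, with dictionaries standing in for the stored (row, col) -> value entries of two sparse matrices. The helper is purely illustrative and is not how the SVN implementation is written.]

    def sparse_divide(a, b):
        """Elementwise a/b over the union of two sparsity patterns.

        a and b map (row, col) -> value.  Entries stored only in a are
        divided by an implicit zero and become +/- inf (or nan for an
        explicit zero divided by zero); entries stored only in b give
        0/value, which stays implicitly zero and is simply omitted.
        """
        c = {}
        for ij, va in a.items():
            vb = b.get(ij, 0.0)
            if vb == 0.0:
                if va > 0.0:
                    c[ij] = float('inf')
                elif va < 0.0:
                    c[ij] = float('-inf')
                else:
                    c[ij] = float('nan')   # explicit 0 divided by 0
            else:
                c[ij] = va / vb
        return c

    a = {(0, 0): 1.0, (1, 1): -2.0}
    b = {(0, 0): 4.0, (2, 2): 5.0}
    print(sparse_divide(a, b))   # {(0, 0): 0.25, (1, 1): -inf}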
From peter.skomoroch at gmail.com Fri Jul 13 14:36:51 2007
From: peter.skomoroch at gmail.com (Peter Skomoroch)
Date: Fri, 13 Jul 2007 14:36:51 -0400
Subject: [SciPy-dev] sparse comments and questions
In-Reply-To:
References:
Message-ID:

Nathan,

Thanks for putting this in, I'll give it a try.

-Pete

On 7/13/07, Nathan Bell wrote:
> I've added support for elementwise division in the latest SVN. [...]
> Let me know if you discover any problems.

--
Peter N. Skomoroch
peter.skomoroch at gmail.com
http://www.datawrangling.com

From stefan at sun.ac.za Sat Jul 14 11:10:16 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 14 Jul 2007 17:10:16 +0200
Subject: [SciPy-dev] sparse comments and questions
In-Reply-To:
References:
Message-ID: <20070714151016.GC7182@mentat.za.net>

On Sat, Jul 07, 2007 at 06:49:43PM -0700, Nathan Bell wrote:
> 3) Should we allow modifications to CSR and CSC matrices that force
> O(N) updates/reallocations? For instance, adding a new value into a
> CSR matrix is essentially as expensive as building a new one from
> scratch. Could we instead prohibit such modifications and raise an
> exception informing the user that lil_matrix is the proper format for
> such operations? [...]

Instead of throwing an error, you could also make use of the warnings module, i.e.

In [1]: import warnings

In [2]: class SparseOrderWarning(Warning): pass

In [3]: warnings.simplefilter('always', SparseOrderWarning)

In [4]: warnings.warn("Attempted O(n) operation, please use lil-format", SparseOrderWarning)
/home/stefan/lib/python2.5/site-packages/IPython/FakeModule.py:1: SparseOrderWarning: Attempted O(n) operation, please use lil-format

Then the user can always turn these into errors by doing:

In [6]: warnings.simplefilter('error', SparseOrderWarning)

In [7]: warnings.warn("Attempted O(n) operation, please use lil-format", SparseOrderWarning)
---------------------------------------------------------------------------
SparseOrderWarning                        Traceback (most recent call last)
...

> 6) When working with sparse matrices, do people expect the result to
> be of the same type? [...]
I don't think users should make any assumptions about the output type, since we can't provide both optimal behaviour and identical input/output types. On the other hand, we should guarantee that an operation is done in the fewest steps necessary. Like you said, the user is still free to explicitly force the output type.

> 7) Where do functions that operate on sparse matrices belong? [...]

Should we also look at adding iterative sparse solvers?

Regards
Stéfan

From travis at enthought.com Sun Jul 15 13:40:42 2007
From: travis at enthought.com (Travis Vaught)
Date: Sun, 15 Jul 2007 12:40:42 -0500
Subject: [SciPy-dev] ANN: SciPy 2007 Conference Updates
Message-ID: <95D2E93A-0102-4BBF-BEDA-2BAE0CC20654@enthought.com>

Greetings,

We're excited to have Ivan Krstić, the director of security architecture for the One Laptop Per Child project, as our keynote speaker this year. The planning for the SciPy 2007 Conference is moving along. Please see below for some important updates.

Schedule Available
------------------
The full schedule of talks has been posted here:
http://www.scipy.org/SciPy2007/ConferenceSchedule

Early Registration Extended
---------------------------
If you haven't yet registered for the conference, the early registration deadline has been extended to Wednesday, July 18th, 2007. For more information on the conference see:
http://www.scipy.org/SciPy2007

Student Sponsorship
-------------------
Enthought, Inc. (http://www.enthought.com) is sponsoring the registration fees for up to 5 college or graduate students to attend the conference. To apply, please send a short description of what you are studying and why you'd like to attend to info at enthought.com. Please include telephone contact information.

BOFs & Sprints
--------------
If you're planning to attend and are interested in selecting BOF or Sprint session topics, please weigh in at:
BOFs: http://www.scipy.org/SciPy2007/BoFs
Sprints: http://www.scipy.org/SciPy2007/Sprints

We're looking forward to a great conference this year!

Best,

Travis

From jdh2358 at gmail.com Mon Jul 16 16:32:48 2007
From: jdh2358 at gmail.com (John Hunter)
Date: Mon, 16 Jul 2007 15:32:48 -0500
Subject: [SciPy-dev] gcc bzconfig.h
Message-ID: <88e473830707161332m2a1f6bd4i964d21cf41d35016@mail.gmail.com>

On our x86 Solaris platform with gcc

johnh at flag:svn> gcc --version
gcc (GCC) 3.4.1
johnh at flag:svn> uname -a
SunOS flag 5.10 Generic_118855-15 i86pc i386 i86pc

we are getting an error with blitz inline (e.g., in Prabhu's laplace.py example), because the gnu/bzconfig.h macro

/* define if the compiler has isnan function in namespace std */
#ifndef BZ_ISNAN_IN_NAMESPACE_STD
#define BZ_ISNAN_IN_NAMESPACE_STD
#endif

gets defined, followed by a compiler error that isnan is not in the std:: namespace. We tried to undef_macros this variable, but that was no help, so we patched the file with the scipy svn patch below.
This shouldn't adversely affect anyone, because the default behavior is the same, but it allows us to compile weave blitz code with

define_macros=[('NO_BZ_ISNAN_IN_NAMESPACE_STD', 1)]

Although the double-negative syntax is a bit irksome, I didn't know a better way to turn off this macro. If this, or something like it, could be included in scipy, that would be great. Thanks.

JDH

Index: Lib/weave/blitz/blitz/gnu/bzconfig.h
===================================================================
--- Lib/weave/blitz/blitz/gnu/bzconfig.h (revision 3166)
+++ Lib/weave/blitz/blitz/gnu/bzconfig.h (working copy)
@@ -298,9 +298,11 @@
 #endif

 /* define if the compiler has isnan function in namespace std */
+#ifndef NO_BZ_ISNAN_IN_NAMESPACE_STD
 #ifndef BZ_ISNAN_IN_NAMESPACE_STD
 #define BZ_ISNAN_IN_NAMESPACE_STD
 #endif
+#endif

 /* define if the compiler has C math functions in namespace std */
 #ifndef BZ_MATH_FN_IN_NAMESPACE_STD

From openopt at ukr.net Wed Jul 18 07:30:51 2007
From: openopt at ukr.net (dmitrey)
Date: Wed, 18 Jul 2007 14:30:51 +0300
Subject: [SciPy-dev] ticket 390 ("scipy.optimize.fmin_bfgs fails without warning", Reported by: ycopin)
Message-ID: <469DF9EB.5070207@ukr.net>

hi all,

Ticket 390 (http://projects.scipy.org/scipy/scipy/ticket/390) says "fmin_bfgs fails without warning". However, I believe the user simply chose an inappropriate solver: fmin_bfgs is intended for finding local minima, ideally of convex functions. The user's objective function is a chi-square fit of the model

def model(pars, x):
    a, b, c = pars
    return a*S.sin(x/b)*S.exp(x/c)  # M

where a, b, c are the variables to be estimated (i.e., the solution required from fmin_bfgs). Since the model contains sin(const/b), the objective function is (1) non-convex and (2) has lots of local minima.

So I see nothing unusual in fmin_bfgs sometimes yielding a different solution (from a different start point) than the one the user expected. I think the ticket should be closed; however, I don't know how to close tickets.

D.

From aisaac at american.edu Wed Jul 18 08:17:01 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 18 Jul 2007 08:17:01 -0400
Subject: [SciPy-dev] ticket 390 ("scipy.optimize.fmin_bfgs fails without warning", Reported by: ycopin)
In-Reply-To: <469DF9EB.5070207@ukr.net>
References: <469DF9EB.5070207@ukr.net>
Message-ID:

On Wed, 18 Jul 2007, dmitrey apparently wrote:
> http://projects.scipy.org/scipy/scipy/ticket/390
> Since the model contains sin(const/b), the objective
> function is (1) non-convex and (2) has lots of local
> minima. So I see nothing unusual in fmin_bfgs sometimes
> yielding a different solution (from a different start
> point) than the one the user expected. I think the
> ticket should be closed.

Yannick, can you comment please? Nils too. I read the ticket as concerning not the particular point found, but that stopping occurred away from a local minimum without warning, even though the Jacobian was ok.

Alan Isaac

PS Dmitrey is working on these tickets this week and needs very quick feedback.
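[As an aside on the underlying issue: when an objective is this multimodal, one pragmatic workaround is a multistart wrapper around the local optimizer. The following is a rough sketch only, not a scipy API; the helper name, the box-bounds sampling, and the default settings are all assumptions, and of course no global optimality is guaranteed.]

    import numpy as np
    from scipy.optimize import fmin_bfgs

    def multistart_bfgs(f, lower, upper, n_starts=20, seed=0):
        """Run fmin_bfgs from several random points inside the box
        [lower, upper] and keep the best local minimum found."""
        rng = np.random.RandomState(seed)
        lower = np.asarray(lower, dtype=float)
        upper = np.asarray(upper, dtype=float)
        best_x, best_f = None, np.inf
        for _ in range(n_starts):
            x0 = lower + rng.rand(len(lower)) * (upper - lower)
            x = fmin_bfgs(f, x0, disp=0)   # one local minimization
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x, best_f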
From nwagner at iam.uni-stuttgart.de Wed Jul 18 08:30:07 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 18 Jul 2007 14:30:07 +0200
Subject: [SciPy-dev] ticket 390 ("scipy.optimize.fmin_bfgs fails without warning", Reported by: ycopin)
In-Reply-To:
References: <469DF9EB.5070207@ukr.net>
Message-ID: <469E07CF.1020501@iam.uni-stuttgart.de>

Alan G Isaac wrote:
> Yannick, can you comment please? Nils too.
> I read the ticket as concerning not the particular point
> found, but that stopping occurred away from a local
> minimum without warning, even though the Jacobian was ok.

Hi Alan,

I am not an expert w.r.t. optimization. The following link might be useful for gaining further insight into the local and global convergence properties of BFGS:
http://eom.springer.de/b/b120510.htm

Cheers,
Nils

From aisaac at american.edu Wed Jul 18 08:46:37 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 18 Jul 2007 08:46:37 -0400
Subject: [SciPy-dev] ticket 390 ("scipy.optimize.fmin_bfgs fails without warning", Reported by: ycopin)
In-Reply-To: <469E07CF.1020501@iam.uni-stuttgart.de>
References: <469DF9EB.5070207@ukr.net> <469E07CF.1020501@iam.uni-stuttgart.de>
Message-ID:

Hi Nils,

You had commented on the ticket previously. I am just trying to get clear about the issue the ticket is addressing, as it seems to be a bit ambiguous. I read it as requesting a warning at failure. Is that the right reading?
http://projects.scipy.org/scipy/scipy/ticket/390

Cheers,
Alan

From openopt at ukr.net Wed Jul 18 08:51:26 2007
From: openopt at ukr.net (dmitrey)
Date: Wed, 18 Jul 2007 15:51:26 +0300
Subject: [SciPy-dev] ticket 390 ("scipy.optimize.fmin_bfgs fails without warning", Reported by: ycopin)
In-Reply-To: <469E07CF.1020501@iam.uni-stuttgart.de>
References: <469DF9EB.5070207@ukr.net> <469E07CF.1020501@iam.uni-stuttgart.de>
Message-ID: <469E0CCE.6000205@ukr.net>

Thank you; however, there is no need to read anything to understand that classic BFGS is for local minima only. The first two lines of your link already say so:

    Broyden-Fletcher-Goldfarb-Shanno method, BFGS method: The unconstrained
    optimization problem is to minimize a real-valued function of variables,
    that is, to find a *local* minimizer, i.e. a point such that ...

We can talk about global convergence of some BFGS modifications ONLY under certain (usually very strong) assumptions about the objective function, for example that f is Lipschitz and twice continuously differentiable. BFGS uses the gradient and is intended for medium and rather large-scale objective functions, which is already enough to see that it targets local minima only (without further assumptions about f, any global solver can handle only small-scale functions, with nVars up to 10-12).
By the way, in ticket 344 you were right about the bug, but on the other hand, if you intend to find a global minimum there, then you too are applying the wrong solvers (fmin_cg and fmin_powell) to your objective function, which is non-convex, has lots of local minima, and (because d(x) is symmetric about zero) has two global minimizers, x = 1.32733353855 and x = -1.32733353855:

def g(x): return 1./(1-cos(x))
def d(x): return sqrt(x**2+(g(x)-1.0)**2)

HTH,
D.

Nils Wagner wrote:
> I am not an expert w.r.t. optimization. The following link might be
> useful for gaining further insight into the local and global
> convergence properties of BFGS:
> http://eom.springer.de/b/b120510.htm

From openopt at ukr.net Wed Jul 18 08:51:32 2007
From: openopt at ukr.net (dmitrey)
Date: Wed, 18 Jul 2007 15:51:32 +0300
Subject: [SciPy-dev] ticket 344: bugfix committed to svn
In-Reply-To:
References: <469DD954.4030208@ukr.net>
Message-ID: <469E0CD4.1080001@ukr.net>

hi all,

I committed a bugfix for ticket 344, as well as for ticket 234, to svn, so they should now be closed by someone.

P.S. (to Alan Isaac) Yes, I meant exactly the same thing you mentioned about tnc.

From aisaac at american.edu Wed Jul 18 08:59:58 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 18 Jul 2007 08:59:58 -0400
Subject: [SciPy-dev] ticket 344: bugfix committed to svn
In-Reply-To: <469E0CD4.1080001@ukr.net>
References: <469DD954.4030208@ukr.net> <469E0CD4.1080001@ukr.net>
Message-ID:

On Wed, 18 Jul 2007, dmitrey apparently wrote:
> I meant exactly the same thing you mentioned about tnc.

OK, so the issue is how to respond to people who use the old API. Two possible approaches:

1. Trap it and raise a ValueError.
2. Translate to the new API and issue a warning.

I favor the latter if it's easy enough. Comments?

Cheers,
Alan Isaac

From matthieu.brucher at gmail.com Wed Jul 18 09:06:05 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 18 Jul 2007 15:06:05 +0200
Subject: [SciPy-dev] ticket 344: bugfix committed to svn
In-Reply-To:
References: <469DD954.4030208@ukr.net> <469E0CD4.1080001@ukr.net>
Message-ID:

Hi,

If the repository uses a post-commit hook with Trac, you can put "refs #number" in a commit message to indicate that the commit references ticket #number, and "fixes #number" to close ticket #number when the commit fixes it and the ticket should be closed.

Matthieu
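[For instance, assuming such a hook is configured for the scipy Trac, a commit made as svn commit -m "cobyla: copy x0 before use; fixes #389" would both record the fix and close ticket #389 automatically. The message text and ticket pairing here are only an illustration of the syntax Matthieu describes.]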
From aisaac at american.edu Wed Jul 18 09:12:05 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 18 Jul 2007 09:12:05 -0400
Subject: [SciPy-dev] ticket 344: bugfix committed to svn
In-Reply-To: <469E0CD4.1080001@ukr.net>
References: <469DD954.4030208@ukr.net> <469E0CD4.1080001@ukr.net>
Message-ID:

On Wed, 18 Jul 2007, dmitrey apparently wrote:
> I committed a bugfix for ticket 344, as well as for ticket 234,
> to svn, so they should now be closed by someone.

Please add a comment to each ticket saying what you did and noting that this fixes the issue. The closer will examine this comment.

Since you will be closing several tickets this week, keep a list, and post the list of closed tickets at the end of the week. When you post the list, include a note for each item (identical to your note on the ticket is fine) saying what you did to close it. The tickets will probably all be closed in one go.

But *additionally* keep posting the individual closures as they take place.

Thank you,
Alan Isaac

From openopt at ukr.net Wed Jul 18 09:14:40 2007
From: openopt at ukr.net (dmitrey)
Date: Wed, 18 Jul 2007 16:14:40 +0300
Subject: [SciPy-dev] ticket 344: bugfix committed to svn
In-Reply-To:
References: <469DD954.4030208@ukr.net> <469E0CD4.1080001@ukr.net>
Message-ID: <469E1240.6030405@ukr.net>

Ok; note that a minute ago I also added a note about one more ticket (389) to svn. Of course, I will mention all finished tickets in my weekly report at the end of the week.

Regards,
D.

From aisaac at american.edu Wed Jul 18 09:28:14 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 18 Jul 2007 09:28:14 -0400
Subject: [SciPy-dev] ticket 344: bugfix committed to svn
In-Reply-To: <469E1240.6030405@ukr.net>
References: <469DD954.4030208@ukr.net> <469E0CD4.1080001@ukr.net> <469E1240.6030405@ukr.net>
Message-ID:

On Wed, 18 Jul 2007, dmitrey apparently wrote:
> ticket 389

So you have closed this out?
http://projects.scipy.org/scipy/scipy/ticket/389
Great!
Again, please add a comment to the ticket saying what you have done, and if possible, contact the submitter to confirm that you have addressed their issue. (Nils?)

Cheers,
Alan Isaac

From openopt at ukr.net Wed Jul 18 09:31:33 2007
From: openopt at ukr.net (dmitrey)
Date: Wed, 18 Jul 2007 16:31:33 +0300
Subject: [SciPy-dev] ticket 344: bugfix committed to svn
In-Reply-To:
References: <469DD954.4030208@ukr.net> <469E0CD4.1080001@ukr.net> <469E1240.6030405@ukr.net>
Message-ID: <469E1635.3050704@ukr.net>

Alan G Isaac wrote:
> So you have closed this out?
> http://projects.scipy.org/scipy/scipy/ticket/389
> Great!

Ok, though all I did was change "x0" to "copy(x0)" in the cobyla.py file; I don't think it's so great. Maybe you meant another ticket. I am sending a copy of this message to Nils, though I think he reads the scipy-dev mailing list anyway.

Regards,
D.

From aisaac at american.edu Wed Jul 18 09:34:45 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 18 Jul 2007 09:34:45 -0400
Subject: [SciPy-dev] ticket 390 ("scipy.optimize.fmin_bfgs fails without warning", Reported by: ycopin)
In-Reply-To: <469E13E0.8010001@ipnl.in2p3.fr>
References: <469DF9EB.5070207@ukr.net> <469E13E0.8010001@ipnl.in2p3.fr>
Message-ID:

On Wed, 18 Jul 2007, Yannick Copin apparently wrote:
> I actually did not check formally if the solution found is
> indeed a local minimum, but that would surprise me. So
> indeed the problem is not so much that the algo could fall
> on some secondary minima, but that it claims convergence
> while the requirements for convergence are probably not
> met (are they in this special case?)

Hmm, I am a bit confused. This ticket suggested a convergence failure, but you are not sure there was one? What is your evidence of a failure? Could you please add a note to the ticket as well as explaining here? We cannot pursue this without a good reason to think there was a failure.

Thank you for your help,
Alan Isaac

From openopt at ukr.net Wed Jul 18 09:50:03 2007
From: openopt at ukr.net (dmitrey)
Date: Wed, 18 Jul 2007 16:50:03 +0300
Subject: [SciPy-dev] ticket 390 ("scipy.optimize.fmin_bfgs fails without warning", Reported by: ycopin)
In-Reply-To:
References: <469DF9EB.5070207@ukr.net> <469E13E0.8010001@ipnl.in2p3.fr>
Message-ID: <469E1A8B.7040304@ukr.net>

On Wed, 18 Jul 2007, Yannick Copin apparently wrote:
> I actually did not check formally if the solution found is
> indeed a local minimum, but that would surprise me. [...]

I think it is better for you to show that the convergence requirements are not met than for us to investigate your objective function to check whether the stopping criteria were or were not satisfied; that is rather time-consuming, and it's not really our duty. You should report a bug only when you are sure the problem is not your own (please don't take offense at this). Using a solver that relies on the gradient is, in your case, already a mistake.
Lots of (local) solvers are just *enable* to found *any* minimum of sin(1000*x), for example, Naum Z.Shor ralg implementation. They will fail to solve line-search subproblem from certain points. So you can't demand from local solver to obtain ANY solution, local or global (in the case of non-convex func with lots of local minima). I can't understand, since you have just nVars = 3, why don't you use a global solver, like anneal? Regards, D. From david at ar.media.kyoto-u.ac.jp Thu Jul 19 03:18:53 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 19 Jul 2007 16:18:53 +0900 Subject: [SciPy-dev] Where to add special functions ? Message-ID: <469F105D.3060602@ar.media.kyoto-u.ac.jp> Hi, I would like to add a special function (implemented in python), but not sure where I should put it for a patch: it looks like scipy.special is implemented in C/Fortran except for orthgonal function. The function I would like to add is the generalized gamma: http://planetmath.org/encyclopedia/GammaFunctionMultivariateReal.html The function is useful to multi variate analysis (appears for example when one needs to compute the KL divergence of Wishart distributions, which appear quite often in Bayesian statistics). cheers, David From millman at berkeley.edu Thu Jul 19 04:16:49 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 19 Jul 2007 01:16:49 -0700 Subject: [SciPy-dev] [SciPy-user] changes in scipy.optimization.tnc (v 1.2 to 1.3) In-Reply-To: References: <469DD954.4030208@ukr.net> Message-ID: Hey Dmitrey, (I moved this discussion over to the developer's list.) Dmitrey is working on upgrading scipy.optimization.tnc from v1.2 to v1.3: http://projects.scipy.org/scipy/scipy/ticket/296 I started working on this awhile ago and never finished it. The code I worked on is in scipy.sandbox.newoptimize: http://projects.scipy.org/scipy/scipy/changeset/2530 Dmitrey has pointed out that the API has changed between 1.2 and 1.3; however, I don't think it will make much of a difference to anyone who is using scipy.optimize.tnc anyway. I was trying to change the external API of the tnc.py that Jean-Sebastien provides to match the scipy.optimize interface. Travis did the same thing when he originally added tnc version 1.2. I started making the changes in revision 2535: http://projects.scipy.org/scipy/scipy/changeset/2535 I assume this is still what we want to do. Is there something I am missing? Was there some specific change in the tnc v1.3 API that make it impossible to provide the same interface? I assume everyone using scipy should be using scipy.optimize.tnc.fmin_tnc if they want to use the tnc solver. And I think I was able to get the arguments to fmin_tnc to be same as before. Also make sure to double-check what I did in the sandbox.newoptimize code, because I was basing it off old code. There have been some changes since I worked on it, for instance: http://projects.scipy.org/scipy/scipy/ticket/423 Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Thu Jul 19 04:32:38 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 19 Jul 2007 01:32:38 -0700 Subject: [SciPy-dev] Scikits to be read-only 10am Central (7/19) Message-ID: Hello everyone, As part of David Cournapeau's SoC machine learning project, we will be moving a few packages out of scipy.sanbox to scikits.learn. 
As previously discussed on the list, the specific packages are: scipy.sandbox.ann scipy.sandbox.ga scipy.sandbox.pyem scipy.sandbox.svm I am going to try to make this as painless for everyone as possible, so we are going to make this transition in a few steps. The first step is that tomorrow, I am going to copy all 4 packages (with their histories) over to the machine learning scikit. To make sure I don't mess anything up, the scikit repository will be read-only starting at 10am central time. The changes won't take very long, so by 11am central the scikits repository will be read-write again. I will send out an email before and after the changes. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Thu Jul 19 11:30:21 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 19 Jul 2007 08:30:21 -0700 Subject: [SciPy-dev] Scikits to be read-only 10am Central (7/19) In-Reply-To: References: Message-ID: The scikits repository is writable again. Thanks, Jarrod On 7/19/07, Jarrod Millman wrote: > Hello everyone, > > As part of David Cournapeau's SoC machine learning project, we will be > moving a few packages out of scipy.sanbox to scikits.learn. As > previously discussed on the list, the specific packages are: > scipy.sandbox.ann > scipy.sandbox.ga > scipy.sandbox.pyem > scipy.sandbox.svm > > I am going to try to make this as painless for everyone as possible, > so we are going to make this transition in a few steps. The first > step is that tomorrow, I am going to copy all 4 packages (with their > histories) over to the machine learning scikit. > > To make sure I don't mess anything up, the scikit repository will be > read-only starting at 10am central time. The changes won't take very > long, so by 11am central the scikits repository will be read-write > again. I will send out an email before and after the changes. > > Thanks, > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From aisaac at american.edu Thu Jul 19 11:33:56 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 19 Jul 2007 11:33:56 -0400 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) Message-ID: Dmitrey has raised some questions about ticket 285: http://projects.scipy.org/scipy/scipy/ticket/285 The changes proposed are not complex to implement, but they increase the interface complexity. They also raise some design considerations. For example, according to the ticket, the signature for brent would change from def brent(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500): to def brent(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500, bracket_keywords = {} ): In my personal view, this signature is already messy. If it is to grow an argument, I suggest that the argument be a parameters object. This would work something like the following. (No deep thought on this yet.) 
class OptimizationParams:
    def __init__(self, **kwargs):
        self.set_defaults()
        for k,v in kwargs.iteritems():
            if hasattr(self, k):
                setattr(self, k, v)
            else:
                raise AttributeError
    def set_defaults(self):
        self.args = ()
        self.tol = 1.48e-8
        self.full_output = 0
        self.maxiter = 500
        self.bracket_interval = None
        self.bracket_grow_limit = None

The signature for brent would become

def brent(func, params=OptimizationParams(), **kwargs):

Any kwargs provided to `brent` would override the values in the OptimizationParams instance. This should be fully backwards compatible and, looking forward, very flexible. Comments? Cheers, Alan Isaac From peridot.faceted at gmail.com Thu Jul 19 11:36:12 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 19 Jul 2007 11:36:12 -0400 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On 19/07/07, Alan G Isaac wrote:
> For example, according to the ticket, the signature for brent would change from
> def brent(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500):
> to
> def brent(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500, bracket_keywords = {} ):
>
> In my personal view, this signature is already messy.
> If it is to grow an argument, I suggest that the argument
> be a parameters object.
>
> This would work something like the following.
> (No deep thought on this yet.)
>
> class OptimizationParams:
>     def __init__(self, **kwargs):
>         self.set_defaults()
>         for k,v in kwargs.iteritems():
>             if hasattr(self, k):
>                 setattr(self, k, v)
>             else:
>                 raise AttributeError
>     def set_defaults(self):
>         self.args = ()
>         self.tol = 1.48e-8
>         self.full_output = 0
>         self.maxiter = 500
>         self.bracket_interval = None
>         self.bracket_grow_limit = None
>
> The signature for brent would become
> def brent(func, params=OptimizationParams(), **kwargs):
>
> Any kwargs provided to `brent` would override the values in
> the OptimizationParams instance. This should be fully backwards
> compatible and, looking forward, very flexible.

Please don't do this! This renders the signature useless in figuring out how to call the function. This is something I really hate about ufuncs, which don't even have useful docstrings. The issue is not that the signature is changing - who cares? - but that the calling API is changing. That's not something we can do much about; APIs do change. Then again, I'm not sure I see what this opaque OptimizationParams is supposed to accomplish. (Remember that you can, if you feel the urge, call brent(F,x0,**optimization_parameters) even with the current API). Anne From aisaac at american.edu Thu Jul 19 12:27:44 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 19 Jul 2007 12:27:44 -0400 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On Thu, 19 Jul 2007, Anne Archibald apparently wrote: > Please don't do this! This renders the signature useless > in figuring out how to call the function... > Then again, I'm not sure I see what this opaque > OptimizationParams is supposed to accomplish. It is supposed to 1. be a less ugly way to get what the ticket is seeking, and 2. eventually be a way of ensuring some uniformity across the optimization functions. I am not pushing for it (really! it is just a quick exploration), but it seems that your core concern could be addressed by a docstring. I still need to know: do you like the change proposed in the ticket?
Thanks, Alan From peridot.faceted at gmail.com Thu Jul 19 13:22:14 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 19 Jul 2007 13:22:14 -0400 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On 19/07/07, Alan G Isaac wrote: > On Thu, 19 Jul 2007, Anne Archibald apparently wrote: > > Please don't do this! This renders the signature useless > > in figuring out how to call the function... > > Then again, I'm not sure I see what this opaque > > OptimizationParams is supposed to accomplish. > > It is supposed to > > 1. be a less ugly way to get what the ticket is seeking, and > > 2. eventually be a way of ensuring some uniformity across > the optimization functions > > I am not pushing for it (really! it is just a quick > exploration), but it seems that your core concern could be > addressed by a docstring. > > I still need to know: do you like the change proposed in the > ticket? Hmm. I like the ability to control bracketing, but I'd rather see the bracket options passed in individually rather than as a dictionary. If that means the functions have far too many options, hiding them in an object isn't going to help; a better interface is a better solution. Perhaps making objects to represent optimization problems might make it easier to suppy as many or as few parameters as desired? Anne From aisaac at american.edu Thu Jul 19 14:03:28 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 19 Jul 2007 14:03:28 -0400 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On Thu, 19 Jul 2007, Anne Archibald apparently wrote: > Perhaps making objects to represent optimization problems might make > it easier to suppy as many or as few parameters as desired? That is a goal: https://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/solvers/optimizers/optimizer BUT the immediate issue is: how to handle this ticket? http://projects.scipy.org/scipy/scipy/ticket/285 Are you saying, handle it like this... Keep the `brent` signature unchanged, but move the core code into a class `Brent`. The class can expose additional parameters as desired. The `brent` function will now create an instance of the `Brent` class and run it. Individuals wanting finer control over optimization parameters can create their own instance and set parameters to their heart's desire. Is that the idea? I think I like it. It would mesh with the longer term goal. Cheers, Alan Isaac PS Sorry not to know, but who is 'timl'? He should have a say here. From millman at berkeley.edu Thu Jul 19 14:13:42 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 19 Jul 2007 11:13:42 -0700 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On 7/19/07, Alan G Isaac wrote: > PS Sorry not to know, but who is 'timl'? He should have > a say here. Hey Alan, That would be Tim Leslie. He was working for me and I asked him to take care of this ticket. It looks like all he did is take the ticket. So unless he states otherwise, I would assume that he never worked on it. 
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Thu Jul 19 15:08:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 19 Jul 2007 14:08:18 -0500 Subject: [SciPy-dev] Where to add special functions ? In-Reply-To: <469F105D.3060602@ar.media.kyoto-u.ac.jp> References: <469F105D.3060602@ar.media.kyoto-u.ac.jp> Message-ID: <469FB6A2.60301@gmail.com> David Cournapeau wrote: > Hi, > > I would like to add a special function (implemented in python), but > not sure where I should put it for a patch: it looks like scipy.special > is implemented in C/Fortran except for orthgonal function. > The function I would like to add is the generalized gamma: > > http://planetmath.org/encyclopedia/GammaFunctionMultivariateReal.html > > The function is useful to multi variate analysis (appears for example > when one needs to compute the KL divergence of Wishart distributions, > which appear quite often in Bayesian statistics). Put your Python file in Lib/special and import from it in __init__.py. Use your judgment as to its name, but you might want to make it sufficiently general such that it will be the obvious place to put more pure Python special functions. Thanks! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mforbes at physics.ubc.ca Fri Jul 20 01:02:43 2007 From: mforbes at physics.ubc.ca (Michael McNeil Forbes) Date: Thu, 19 Jul 2007 22:02:43 -0700 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On 19 Jul 2007, at 11:03 AM, Alan G Isaac wrote: > On Thu, 19 Jul 2007, Anne Archibald apparently wrote: >> Perhaps making objects to represent optimization problems might make >> it easier to suppy as many or as few parameters as desired? > > BUT the immediate issue is: > how to handle this ticket? > http://projects.scipy.org/scipy/scipy/ticket/285 > > Are you saying, handle it like this... > > Keep the `brent` signature unchanged, > but move the core code into a class `Brent`. > The class can expose additional parameters as desired. > The `brent` function will now create an instance > of the `Brent` class and run it. > Individuals wanting finer control over optimization > parameters can create their own instance and set > parameters to their heart's desire. Perhaps the opposite idea would work: keep the brent function call with explicit arguments close to the underlying C or Fortran code and make a Brent class that provides users with a unified and simplified interface to this function along the line of Matthieu's proposal. My idea is to keep the workhorse functions with minimal argument processing so that they can be used when efficiency is a concern (keeping the signature close to the underlying C functions allows one to inline the code if speed is really needed and possibly maintain backward compatibility, though argument names should at least be consistent!) The class interface could then do more extensive argument and option processing that is slow, but user friendly and "smart": suitable for use outside of loops or interactively. --------------------- As for dealing with options, I have been thinking of several ideas, but have not had time to put them together in a proposal yet. 
I am just throwing out my ideas here since people are thinking about this. I have the following long-term "goals": User Oriented: 1) Consistent optimization parameters across all methods (same names and semantics where possible) 2) Easy access to access the parameters in docstrings. 3) Ability to use functions with a minimal number of parameters (good default values). 4) Easy to get and manipulate the default parameters programatically (i.e. the function/class could provide a dictionary filled with the default parameters for the user to modify in subsequent calls.) 4a) It would be nice if this "dictionary" of parameters was also "documented" in some way to allow easy access to the descriptions of the options. 6) Access to efficient underlying functions, possibly allowing for inlining with minimal code changes. Developer Oriented: 7) Options should be easily specified with a description in a single place. 8) Docstrings and other option documentation should be generated automatically from this specification so that there is no duplication of code/documentation. 9) Along the lines of Matthieu's proposal, if various optimization "modules" are grouped together, then it would be nice if the containing module could dynamically query the components and present a complete list of relevant parameters to the user both in the docstring and through a default parameter object. Ideas for implementation: A) Python's introspection tools provide ample ways for dynamically modifying docstrings, so as long as the development interface is kept clean, this should be easily implemented. B) Function decorators could provide a very nice syntactic mechanism for specifying options and arguments. The decorators can modify the docstring as required, and parameters/default values etc. I thought this would be a great way of dealing with these issues but there a big problems with pickling/archiving decorated functions and pickling functions can be very useful. Decorators also require Python 2.4 which may be a problem. C) Attributes can be used to nicely provide documented option containers. D) One could even dynamically create subclasses of the option values so the values themselves are documented (this may be abusing the flexibility of python's introspection abilities but I think it is cute... It is probably dangerous though.) I wish I had a more concrete proposal ready, but I simply have not had the time;-( Michael. From david at ar.media.kyoto-u.ac.jp Fri Jul 20 02:33:45 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 20 Jul 2007 15:33:45 +0900 Subject: [SciPy-dev] Making numpy test work in scikits Message-ID: <46A05749.80308@ar.media.kyoto-u.ac.jp> Hi, I had a hard time to get the usual numpy tests working inside a scikits (because of setuptools): I wanted to keep the usual numpy tests infrastructure (each package has a tests directory where all the tests are), while being able to use them with python setup.py test. Below is a an quick explanation how I did it: - foo/scikits/foo -> this is a typical "scipy package", in the sense that you can use the same layout for tests as in a scipy package. - foo/setup.py -> use the test_suite argument of the setup function, with the value "tester" - foo/tester.py -> this defines a function additional_tests, which is supposed to return a unittest.TestSuite. I use NumpyTest to find all the tests in the package foo, and make a TestSuite from it; I had to use a private function, though, but maybe adding public API in numpy would be possible. 
The module tester boils down to this:

from numpy.testing import NumpyTest

def additional_tests():
    # XXX: does this guarantee that the package is the one in the dev trunk,
    # and not scikits.foo installed somewhere else ?
    import scikits.foo
    np = NumpyTest(scikits.foo)
    return np._test_suite_from_all_tests(np.package, level = 1, verbosity = 1)

With this, you can call all the tests with python setup.py test (which is the point of all this, actually), but: - I don't know much about unittest, though, so this may not be ideal, but it is the only way I found to make it work without touching anything else than the setup.py file. - There may be a way to make this a bit more automatic (I don't like the fact that I have to give the name of the package in additional_tests, for example) - I didn't find a way to give arguments for tests on the command line (setuptools does not seem to support it for the test command). cheers, David From matthieu.brucher at gmail.com Fri Jul 20 03:34:55 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 20 Jul 2007 09:34:55 +0200 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <46A05749.80308@ar.media.kyoto-u.ac.jp> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> Message-ID: Hi, I have to say that the fact that not all test_*.py files are parsed for tests annoys me in Numpy. If you want to test a file something.py, you have to create a tests/test_something.py; you can't use another name. And when you want to test more than just a part of your code - like for the optimizers, I want to test several modules together -, those tests are not part of the test suite that Numpy creates. Thus, I created a file that executes all test_*.py, but it cannot work with setuptools. But I thought that this worked (I think I've used it before):

def test():
    from numpy.testing import NumpyTest
    return NumpyTest().test()

in the __init__.py of a module that must be tested, doesn't it ? Matthieu 2007/7/20, David Cournapeau :
>
> Hi,
>
> I had a hard time to get the usual numpy tests working inside a
> scikits (because of setuptools): I wanted to keep the usual numpy tests
> infrastructure (each package has a tests directory where all the tests
> are), while being able to use them with python setup.py test. Below is
> a quick explanation how I did it:
>
> - foo/scikits/foo -> this is a typical "scipy package", in the
> sense that you can use the same layout for tests as in a scipy package.
> - foo/setup.py -> use the test_suite argument of the setup function,
> with the value "tester"
> - foo/tester.py -> this defines a function additional_tests, which
> is supposed to return a unittest.TestSuite. I use NumpyTest to find all
> the tests in the package foo, and make a TestSuite from it; I had to use
> a private function, though, but maybe adding public API in numpy would
> be possible. The module tester boils down to this:
>
> from numpy.testing import NumpyTest
>
> def additional_tests():
>     # XXX: does this guarantee that the package is the one in the dev trunk,
>     # and not scikits.foo installed somewhere else ?
>     import scikits.foo
>     np = NumpyTest(scikits.foo)
>     return np._test_suite_from_all_tests(np.package, level = 1, verbosity = 1)
>
> With this, you can call all the tests with python setup.py test (which
> is the point of all this, actually), but:
> - I don't know much about unittest, though, so this may not be ideal,
> but it is the only way I found to make it work without touching anything
> else than the setup.py file.
> - There may be a way to make this a bit more automatic (I don't like
> the fact that I have to give the name of the package in
> additional_tests, for example)
> - I didn't find a way to give arguments for tests on the command line
> (setuptools does not seem to support it for the test command).
>
> cheers,
>
> David
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Jul 20 06:54:19 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 20 Jul 2007 19:54:19 +0900 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: References: <46A05749.80308@ar.media.kyoto-u.ac.jp> Message-ID: <46A0945B.3050502@ar.media.kyoto-u.ac.jp> Matthieu Brucher wrote: > Hi, > > I have to say that the fact that not all test_*.py files are parsed for > tests annoys me in Numpy. If you want to test a file something.py, you > have to create a tests/test_something.py; you can't use another name. > And when you want to test more than just a part of your code - like > for the optimizers, I want to test several modules together -, those > tests are not part of the test suite that Numpy creates. You can change the convention for the names of the NumpyTest modules (at least, it looks like it, having briefly read the code), but I guess everybody does as I did: copy the code from another toolbox, without thinking too much. I don't understand what you mean by tests which are not part of the numpy test suite, though. For me, the problem is that there is no way (or more precisely, I didn't find a way) to always get a testsuite from NumpyTest. Typically, my method works if you test everything, but does not work if you want to test some part of it. For me, the point of numpy tests is to have common conventions: - every package defines a test function which runs all the tests (from scipy.foo import test) - all the tests are in well defined places in the file layout. Maybe there is room for improvement here: simplifying NumpyTestCase (why define check_ functions, for example ?), and making it a bit more flexible so that we can request a subset of the testsuite for a big package (for the package foo, getting a TestSuite containing all the tests in foo.bar.toto).

> Thus, I created a file that executes all test_*.py, but it cannot work
> with setuptools.
>
> But I thought that this worked (I think I've used it before):
>
> def test():
>     from numpy.testing import NumpyTest
>     return NumpyTest().test()
>
> in the __init__.py of a module that must be tested, doesn't it ?

I don't think it works because the function passed to setuptools needs to return a testsuite (that's the core problem: NumpyTest sometimes runs tests, sometimes returns the testsuite). David From stefan at sun.ac.za Fri Jul 20 07:58:38 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 20 Jul 2007 13:58:38 +0200 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <46A0945B.3050502@ar.media.kyoto-u.ac.jp> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> Message-ID: <20070720115838.GB7290@mentat.za.net> On Fri, Jul 20, 2007 at 07:54:19PM +0900, David Cournapeau wrote: > to return a testsuite (that's the core problem: NumpyTest sometimes runs > tests, sometimes returns the testsuite). When level < 0, NumpyTest.test always returns a testsuite, i.e.
In [9]: print NumpyTest(numpy.linalg).test(level=-1) Found 32 tests for numpy.linalg Found 0 tests for __main__ , ... , ]> Regards St?fan From stefan at sun.ac.za Fri Jul 20 07:49:22 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 20 Jul 2007 13:49:22 +0200 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <46A0945B.3050502@ar.media.kyoto-u.ac.jp> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> Message-ID: <20070720114922.GA7290@mentat.za.net> On Fri, Jul 20, 2007 at 07:54:19PM +0900, David Cournapeau wrote: > Matthieu Brucher wrote: > I don't understand what you mean by tests which are not part of the > numpy test suite, though. For me, the problem is that there is no way > (or more precisely, I didn't a find a way) to always get a testsuite > from NumpyTest. Typically, my method works if you test everything, but > does not work if you want to test some part of it. The following should work to test, for example, only numpy.linalg: from numpy.testing import NumpyTest import numpy suite = NumpyTest(numpy.linalg) suite.test() The test_*.py pattern is specified as NumpyTest.testfile_patterns (default is ['test_%(modulename)s.py']), but because of the way NumpyTest is written, there is a one-to-one correspondence between module names and test files. Maybe that should change. Regards St?fan From david at ar.media.kyoto-u.ac.jp Fri Jul 20 08:05:51 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 20 Jul 2007 21:05:51 +0900 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <20070720115838.GB7290@mentat.za.net> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> Message-ID: <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > On Fri, Jul 20, 2007 at 07:54:19PM +0900, David Cournapeau wrote: > >> to return a testsuite (that's the core problem: NumpyTest sometimes run >> tests, sometimes returns the testsuite). >> > > When level < 0, NumpyTest.test always returns a testsuite, i.e. > > In [9]: print NumpyTest(numpy.linalg).test(level=-1) > Found 32 tests for numpy.linalg > Found 0 tests for __main__ > testMethod=check_cdouble>, ... , ]> > > Can you do the same but without executing the testsuite ? I don't want to run anything, just get the testsuite, which will be run by setuptools. David From david at ar.media.kyoto-u.ac.jp Fri Jul 20 08:23:11 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 20 Jul 2007 21:23:11 +0900 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> Message-ID: <46A0A92F.6080208@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Stefan van der Walt wrote: > >> On Fri, Jul 20, 2007 at 07:54:19PM +0900, David Cournapeau wrote: >> >> >>> to return a testsuite (that's the core problem: NumpyTest sometimes run >>> tests, sometimes returns the testsuite). >>> >>> >> When level < 0, NumpyTest.test always returns a testsuite, i.e. >> >> In [9]: print NumpyTest(numpy.linalg).test(level=-1) >> Found 32 tests for numpy.linalg >> Found 0 tests for __main__ >> > testMethod=check_cdouble>, ... , ]> >> >> >> > Can you do the same but without executing the testsuite ? 
I don't want > to run anything, just get the testsuite, which will be run by setuptools. > Well, I asked my question too quickly. The problem is that I want to define a function test_suite which returns the test suite of the package. If I do: def test_suite(*args): return NumpyTest().test(level = -1) Then while executing the tests with setuptools, I have some infinite recursion (which is due to the fact that NumpyTest is importing the current package I guess ?). I am kind of stuck, and I feel like I already wasted way too much time on this. cheers, David From stefan at sun.ac.za Fri Jul 20 08:32:55 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 20 Jul 2007 14:32:55 +0200 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> Message-ID: <20070720123255.GD7290@mentat.za.net> On Fri, Jul 20, 2007 at 09:05:51PM +0900, David Cournapeau wrote: > Stefan van der Walt wrote: > > On Fri, Jul 20, 2007 at 07:54:19PM +0900, David Cournapeau wrote: > > > >> to return a testsuite (that's the core problem: NumpyTest sometimes run > >> tests, sometimes returns the testsuite). > >> > > > > When level < 0, NumpyTest.test always returns a testsuite, i.e. > > > > In [9]: print NumpyTest(numpy.linalg).test(level=-1) > > Found 32 tests for numpy.linalg > > Found 0 tests for __main__ > > > testMethod=check_cdouble>, ... , ]> > > > > > Can you do the same but without executing the testsuite ? I don't want > to run anything, just get the testsuite, which will be run by > setuptools. The test suite isn't executed above. Cheers St?fan From stefan at sun.ac.za Fri Jul 20 08:34:48 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 20 Jul 2007 14:34:48 +0200 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <46A0A92F.6080208@ar.media.kyoto-u.ac.jp> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> <46A0A92F.6080208@ar.media.kyoto-u.ac.jp> Message-ID: <20070720123448.GE7290@mentat.za.net> On Fri, Jul 20, 2007 at 09:23:11PM +0900, David Cournapeau wrote: > def test_suite(*args): > return NumpyTest().test(level = -1) > > Then while executing the tests with setuptools, I have some infinite > recursion (which is due to the fact that NumpyTest is importing the > current package I guess ?). I am kind of stuck, and I feel like I > already wasted way too much time on this. What happens if you specify the name of the module you wish to test, i.e. def test_suite(*args): return NumpyTest("mymodule").test(level=-1) ? 
Cheers St?fan From david at ar.media.kyoto-u.ac.jp Fri Jul 20 08:30:01 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 20 Jul 2007 21:30:01 +0900 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <20070720123448.GE7290@mentat.za.net> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> <46A0A92F.6080208@ar.media.kyoto-u.ac.jp> <20070720123448.GE7290@mentat.za.net> Message-ID: <46A0AAC9.2080607@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > On Fri, Jul 20, 2007 at 09:23:11PM +0900, David Cournapeau wrote: > >> def test_suite(*args): >> return NumpyTest().test(level = -1) >> >> Then while executing the tests with setuptools, I have some infinite >> recursion (which is due to the fact that NumpyTest is importing the >> current package I guess ?). I am kind of stuck, and I feel like I >> already wasted way too much time on this. >> > > What happens if you specify the name of the module you wish to test, > i.e. > > def test_suite(*args): > return NumpyTest("mymodule").test(level=-1) > I thought about it too, but it did not change anything. Actually, I don't understand why NumpyTest tries to import the modules at all. Why not just getting the list of all test_* files in tests, and getting their corresponding TestCase instead ? David From aisaac at american.edu Fri Jul 20 13:05:52 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 20 Jul 2007 13:05:52 -0400 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On Thu, 19 Jul 2007, Michael McNeil Forbes apparently wrote: > keep the brent function call with explicit arguments close > to the underlying C or Fortran code and make a Brent class > that provides users with a unified and simplified > interface to this function along the line of Matthieu's > proposal. I am understanding `brent` to be pure Python. I'm looking at http://svn.scipy.org/svn/scipy/trunk/Lib/optimize/optimize.py I am also thinking that (aside from issues outside the algorithm, like the evaluation of the function to be minimized) this is fine and will not be changed. Cheers, Alan Isaac From aisaac at american.edu Fri Jul 20 13:23:32 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 20 Jul 2007 13:23:32 -0400 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: OK, I need to finalize the decision on this. http://projects.scipy.org/scipy/scipy/ticket/285 Here are the current alternatives. 1. Implement the patch attached to the ticket. (I dislike this, as discussed earlier.) 2. Change nothing (ticket becomes "won't fix") (I am ok with this.) 3. Move the brent `code` into a `Brent` class. The `brent` function (with unchanged signature) remains as a convenient user interface to the `Brent` class, which can be used directly when more refined control is needed. The class will include bracketing parameters as a response to the ticket. (I think this is best.) Cheers, Alan Isaac From peridot.faceted at gmail.com Fri Jul 20 13:43:58 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 20 Jul 2007 13:43:58 -0400 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On 20/07/07, Alan G Isaac wrote: > OK, I need to finalize the decision on this. 
> http://projects.scipy.org/scipy/scipy/ticket/285 > Here are the current alternatives. > > 1. Implement the patch attached to the ticket. > (I dislike this, as discussed earlier.) > 2. Change nothing (ticket becomes "won't fix") > (I am ok with this.) > 3. Move the brent `code` into a `Brent` class. > The `brent` function (with unchanged signature) > remains as a convenient user interface to the `Brent` > class, which can be used directly when more refined > control is needed. The class will include bracketing > parameters as a response to the ticket. > (I think this is best.) I like option 3 as well. Anne From matthieu.brucher at gmail.com Fri Jul 20 13:47:07 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 20 Jul 2007 19:47:07 +0200 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: 2007/7/20, Anne Archibald : > > On 20/07/07, Alan G Isaac wrote: > > OK, I need to finalize the decision on this. > > http://projects.scipy.org/scipy/scipy/ticket/285 > > Here are the current alternatives. > > > > 1. Implement the patch attached to the ticket. > > (I dislike this, as discussed earlier.) > > 2. Change nothing (ticket becomes "won't fix") > > (I am ok with this.) > > 3. Move the brent `code` into a `Brent` class. > > The `brent` function (with unchanged signature) > > remains as a convenient user interface to the `Brent` > > class, which can be used directly when more refined > > control is needed. The class will include bracketing > > parameters as a response to the ticket. > > (I think this is best.) > > I like option 3 as well. I third this option as well Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From mforbes at physics.ubc.ca Fri Jul 20 13:52:40 2007 From: mforbes at physics.ubc.ca (Michael McNeil Forbes) Date: Fri, 20 Jul 2007 10:52:40 -0700 Subject: [SciPy-dev] change scipy.optimize function signatures?? FEEDBACK NEEDED!! (ticket 285) In-Reply-To: References: Message-ID: On 20 Jul 2007, Alan G Isaac wrote: > I am understanding `brent` to be pure Python. Good point (I was actually looking at brentq etc.) > OK, I need to finalize the decision on this. > http://projects.scipy.org/scipy/scipy/ticket/285 > Here are the current alternatives. ... > 3. Move the brent `code` into a `Brent` class. > The `brent` function (with unchanged signature) > remains as a convenient user interface to the `Brent` > class, which can be used directly when more refined > control is needed. The class will include bracketing > parameters as a response to the ticket. > (I think this is best.) In this case, 3. sounds like the best option to me too. Michael. From openopt at ukr.net Fri Jul 20 15:50:59 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 20 Jul 2007 22:50:59 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <46A11223.3060905@ukr.net> hi all, this week I was asked to close some tickets. Unfortunately, I got to know that I should mention number of svn commits to scipy too late, when I had already done the commits. Next time I will wrote the svn commit numbers. So > http://projects.scipy.org/scipy/scipy/ticket/234 > implementing possibility to turn off warnings about failing to solve in leastsq and fsolve done, same way as proposed > http://projects.scipy.org/scipy/scipy/ticket/285 > (bracket parameters) this feature is still under discussion. Most of feedbacks lead to chapter 3 of Alan Isaac proposition: 3. 
Move the brent `code` into a `Brent` class. The `brent` function (with unchanged signature) remains as a convenient user interface to the `Brent` class, which can be used directly when more refined control is needed. The class will include bracketing parameters as a response to the ticket. > http://projects.scipy.org/scipy/scipy/ticket/296 > connecting tnc 1.3. In progress. > http://projects.scipy.org/scipy/scipy/ticket/377 > fmin_bfgs incorrectly expects a ZeroDivisonError had fixed > http://projects.scipy.org/scipy/scipy/ticket/390 > scipy.optimize.fmin_bfgs fails without warning I mentioned that the user func is incorrect for the solver - it's non-convex and has lots of local minima, so obtaining different results from different x0 is nothing special > http://projects.scipy.org/scipy/scipy/ticket/399 > connecting PSwarm. Since it requires connecting C code to python, Alan agreed to freeze the ticket (I have no chance to do the ticket in required time). > http://projects.scipy.org/scipy/scipy/ticket/416 > the ticket propose to modify something in C code (__minpack.h), something related to transpose. it says "MATRIXC2F transposing the wrong way in optimize.leastsq" here's the defenition of MATRIXC2F from minpack.h (line 97), it comes w/o any description & I can't understand what does it do: #define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, *p3=(double *)(data);\ int i,j;\ for (j=0;j<(m);p3++,j++) \ for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \ *p1 = *p2; } AJennings (author of the ticket) proposed to change the line 152 MATRIXC2F(fjac, result_array->data, *n, *ldfjac) to MATRIXC2F(fjac, result_array->data, *ldfjac, *n) However, line 94 is the same as line 152, so, maybe, it also requires the replacement. Unfortunately, lots of other c-funcs are also undocumented, so I guess it will take a significant time to fix the ticket. > http://projects.scipy.org/scipy/scipy/ticket/344 ValueError: objects are not aligned in optimize.fmin_cg The problem here was in dot([[a,b],[c,d]], [e,f]) statement (a,b,c, etc are 1x1 ndarrays). It formed 2x2x1 matrix (at least array(...).shape yields the answer (2,2,1)), so multiplying it with [c,d] yields error. I found nothing better than arr = empty(2,2) arr[0,0]=a arr[0,1]=... So now it works. > http://projects.scipy.org/scipy/scipy/ticket/384 > "tnc requires Python list, not numpy.ndarray". I think it's better to connect tnc 1.3 before (i.e. close another ticket before doing this one) > http://projects.scipy.org/scipy/scipy/ticket/389 > initial guess to minimum is overwritten by the minimum So I just set copy(x0) instead of x0 in cobyla.py, now it works. > http://projects.scipy.org/scipy/scipy/ticket/388 > remove uses of apply in optimize.py done, apply(func, ...) is replaced by func(*(...)) Regards, D. From cookedm at physics.mcmaster.ca Fri Jul 20 16:03:18 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 20 Jul 2007 16:03:18 -0400 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: (Matthieu Brucher's message of "Fri, 20 Jul 2007 09:34:55 +0200") References: <46A05749.80308@ar.media.kyoto-u.ac.jp> Message-ID: "Matthieu Brucher" writes: > Hi, > > I have to say that the fact that not all test_*.py are parsed for tests annoys > me in Numpy. If you want to test a file something.py, you have to create a > tests/test_something.py, you can't use an other name. 
And when you want to test > more than just a part of your code - like for the optimizers, I want to test > several modules together -, those tests are not part of the test suite that > Numpy creates. Yes, by default only tests/test_*.py are looked at when * is a module. However, if you use either NumpyTest().testall() or NumpyTest().test(all=True) or NumpyTest.test(level=(some number >10)), then all the the *.py files in tests/ will be run. Argubably, this should be the default. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From matthieu.brucher at gmail.com Fri Jul 20 17:35:32 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 20 Jul 2007 23:35:32 +0200 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: References: <46A05749.80308@ar.media.kyoto-u.ac.jp> Message-ID: > > Yes, by default only tests/test_*.py are looked at when * is a module. > However, if you use either NumpyTest().testall() or > NumpyTest().test(all=True) or NumpyTest.test(level=(some number >10)), > then all the the *.py files in tests/ will be run. > > Argubably, this should be the default. Thank you for the tip ! Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Sat Jul 21 01:47:45 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 21 Jul 2007 14:47:45 +0900 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <20070720123448.GE7290@mentat.za.net> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> <46A0A92F.6080208@ar.media.kyoto-u.ac.jp> <20070720123448.GE7290@mentat.za.net> Message-ID: <46A19E01.2020709@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > On Fri, Jul 20, 2007 at 09:23:11PM +0900, David Cournapeau wrote: > >> def test_suite(*args): >> return NumpyTest().test(level = -1) >> >> Then while executing the tests with setuptools, I have some infinite >> recursion (which is due to the fact that NumpyTest is importing the >> current package I guess ?). I am kind of stuck, and I feel like I >> already wasted way too much time on this. >> > > What happens if you specify the name of the module you wish to test, > i.e. > > def test_suite(*args): > return NumpyTest("mymodule").test(level=-1) > > Ok, I managed to make it work by doing the following: def test_suite(*args): # XXX: this is to avoid recursive call to itself. This is an horrible hack, # I have no idea why infinite recursion happens otherwise. if len(args) > 0: import unittest return unittest.TestSuite() return NumpyTest().test(level = -10) Now, I have to understand why it does not work when I give the all=True argument to NumpyTest().test. I am starting to think it would be much easier to build test suite by hand for each test script... David From david at ar.media.kyoto-u.ac.jp Sat Jul 21 03:56:55 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 21 Jul 2007 16:56:55 +0900 Subject: [SciPy-dev] [scikits] svm and pyem now available in scikits.learn Message-ID: <46A1BC47.2010104@ar.media.kyoto-u.ac.jp> Hi, In short: for people who need to use svm (SVM) and pyem (Gaussian Mixture Models) from the scipy.sandbox, they are now available and working in scikits.learn package. 
- Getting the code : svn co http://svn.scipy.org/svn/scikits/trunk/learn - Importing the toolboxes : they both reside in the scikits.learn.machine namespace, that is "from scipy.sandbox import svm" becomes "from scikits.learn.machine import svm" and so on. Anything which does not work as before is a bug, and should be filled as such on the scikits trac system (http://projects.scipy.org/scipy/scikits/). For the curious, the learn namespace will soon contain some code to load/pre process/manipulate datasets and some basic learner based on the above algoritms, cheers, David From mauger at physics.ucdavis.edu Sun Jul 22 17:16:52 2007 From: mauger at physics.ucdavis.edu (Matthew Auger) Date: Sun, 22 Jul 2007 14:16:52 -0700 (PDT) Subject: [SciPy-dev] nan functions still broken Message-ID: The stats nanstd() and nanmedian() functions are still broken. I submitted a patch associated with ticket 337, and an updated patch associated with ticket 368 (with an improved nanstd() algorithm, a fix for the gmean() function, some docstrings, and a new function sigmaclip()). Could someone please apply this patch. Thanks! Matt From david at ar.media.kyoto-u.ac.jp Sun Jul 22 22:55:37 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 23 Jul 2007 11:55:37 +0900 Subject: [SciPy-dev] nan functions still broken In-Reply-To: References: Message-ID: <46A418A9.5070905@ar.media.kyoto-u.ac.jp> Matthew Auger wrote: > The stats nanstd() and nanmedian() functions are still broken. I submitted > a patch associated with ticket 337, and an updated patch associated with > ticket 368 (with an improved nanstd() algorithm, a fix for the > gmean() function, some docstrings, and a new function sigmaclip()). Could > someone please apply this patch. > > Okay, I put the concerned tickets for the 0.5.3 milestone, and will try to take a look at it within today, cheers, David From matthieu.brucher at gmail.com Mon Jul 23 03:47:57 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 23 Jul 2007 09:47:57 +0200 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <46A19E01.2020709@ar.media.kyoto-u.ac.jp> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> <46A0A92F.6080208@ar.media.kyoto-u.ac.jp> <20070720123448.GE7290@mentat.za.net> <46A19E01.2020709@ar.media.kyoto-u.ac.jp> Message-ID: > > Ok, I managed to make it work by doing the following: > > def test_suite(*args): > # XXX: this is to avoid recursive call to itself. This is an > horrible hack, > # I have no idea why infinite recursion happens > otherwise. > if len(args) > > 0: > import unittest > return > unittest.TestSuite() > return NumpyTest().test(level = -10) > > Now, I have to understand why it does not work when I give the all=True > argument to NumpyTest().test. I am starting to think it would be much > easier to build test suite by hand for each test script... I just tried to use this for openopt scikit, and strangely the first time it didn't work with testall(), the second time, it worked... I found something strange though (latest svn). When the level is positive, the tests are found and run, but when the level is negative, the files are found, but not the tests (it indicates Found 0 tests for ....py). Is that correct ? I thought the same number of tests should have been found. 
What is more, if specific data is needed for the test (and is in the same folder), you have to specify the full path in your test file because the current path is set to the folder where the test suite is run, not where the current test is placed. It's not due to Numpy, I know, but it adds a burden :| Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From chanley at stsci.edu Mon Jul 23 12:42:53 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 23 Jul 2007 12:42:53 -0400 Subject: [SciPy-dev] SciPy 2007 Conference BOF Message-ID: <46A4DA8D.6060109@stsci.edu> Greetings, I have a question for those planning on attending the SciPy 2007 conference in a few weeks. I have been approached regarding the possibility about having a BOF meeting to discuss PyFITS. I was wondering if there would be an interest in this, or more generally, about astronomy software development. We at the Space Telescope Science Institute could talk about our future plans. Please let me know if you are interested in having a BOF meeting. If there is enough interest I will arrange a session for Thursday evening. Cheers, Chris Hanley -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From chanley at stsci.edu Mon Jul 23 21:24:11 2007 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 23 Jul 2007 21:24:11 -0400 Subject: [SciPy-dev] Astronomy / PyFITS BOF at SciPy Message-ID: <46A554BB.6090908@stsci.edu> Hi, It looks like we have more than enough interest for an astronomy BOF Thursday evening after dinner. I look forward to seeing everyone. Cheers, Chris From david at ar.media.kyoto-u.ac.jp Mon Jul 23 22:05:01 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 24 Jul 2007 11:05:01 +0900 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> <46A0A92F.6080208@ar.media.kyoto-u.ac.jp> <20070720123448.GE7290@mentat.za.net> <46A19E01.2020709@ar.media.kyoto-u.ac.jp> Message-ID: <46A55E4D.8020306@ar.media.kyoto-u.ac.jp> Matthieu Brucher wrote: > > Ok, I managed to make it work by doing the following: > > def test_suite(*args): > # XXX: this is to avoid recursive call to itself. This is an > horrible hack, > # I have no idea why infinite recursion happens > otherwise. > if len(args) > > 0: > import unittest > return > unittest.TestSuite() > return NumpyTest().test(level = -10) > > Now, I have to understand why it does not work when I give the > all=True > argument to NumpyTest().test. I am starting to think it would be much > easier to build test suite by hand for each test script... > > > I just tried to use this for openopt scikit, and strangely the first > time it didn't work with testall(), the second time, it worked... > I found something strange though (latest svn). When the level is > positive, the tests are found and run, but when the level is negative, > the files are found, but not the tests (it indicates Found 0 tests for > ....py). Is that correct ? It is intended if you read the docstring (I totally missed it too, at first). If level is negative, then it is equivalent to returning all the test corresponding to abs(level) in a test suite, without running them. 
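In other words, the intended usage would be something like this (just a sketch; 'scikits.foo' is only a placeholder for whatever package you are testing):

import unittest
from numpy.testing import NumpyTest

# a negative level only collects the tests, it does not run them
suite = NumpyTest('scikits.foo').test(level = -10)
# the returned unittest.TestSuite can then be run explicitly
unittest.TextTestRunner(verbosity = 1).run(suite)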
As I do not want to run the tests, just the testsuite, I passed level -10 (which is supposed to return all the tests at all levels, if I understand correctly). I agree this is not really intuitive. > I thought the same number of tests should have been found. > What is more, if specific data is needed for the test (and is in the > same folder), you have to specify the full path in your test file > because the current path is set to the folder where the test suite is > run, not where the current test is placed. It's not due to Numpy, I > know, but it adds a burden :| Are you aware of set_local_path ? I use it when I need to use the same functions/global variables across several test files. You can see an example in the learn scikits (scikits/learn/machine/em/test_densities):

set_local_path()
# import modules that are located in the same directory as this file.
from testcommon import DEF_DEC
restore_path()

This is basically a hack to put the current path (or any other relative path you give to the set_local_path function) in sys.path so that the modules are found by the python interpreter. cheers, David From matthieu.brucher at gmail.com Tue Jul 24 03:41:05 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 24 Jul 2007 09:41:05 +0200 Subject: [SciPy-dev] Making numpy test work in scikits In-Reply-To: <46A55E4D.8020306@ar.media.kyoto-u.ac.jp> References: <46A05749.80308@ar.media.kyoto-u.ac.jp> <46A0945B.3050502@ar.media.kyoto-u.ac.jp> <20070720115838.GB7290@mentat.za.net> <46A0A51F.5090603@ar.media.kyoto-u.ac.jp> <46A0A92F.6080208@ar.media.kyoto-u.ac.jp> <20070720123448.GE7290@mentat.za.net> <46A19E01.2020709@ar.media.kyoto-u.ac.jp> <46A55E4D.8020306@ar.media.kyoto-u.ac.jp> Message-ID:
> It is intended if you read the docstring (I totally missed it too, at
> first). If level is negative, then it is equivalent to returning all the
> test corresponding to abs(level) in a test suite, without running them.

Yes, I read it, but it does not find tests at all, for instance:

Found 0 tests for Components.IO.ipb.tests.test_ipbreader
Found 0 tests for Components.IO.nifti.tests.test_niftiimage

instead of:

Found 8 tests for Components.IO.ipb.tests.test_ipbreader
Found 3 tests for Components.IO.nifti.tests.test_niftiimage

for a positive level.

> As I do not want to run the tests, just the testsuite, I passed level
> -10 (which is supposed to return all the tests at all levels, if I
> understand correctly). I agree this is not really intuitive.

The problem is that it returns an empty test suite (I pass it to a TextTestRunner instance, like in the example on python.org, but nothing happens).

> Are you aware of set_local_path ? I use it when I need to use the same
> functions/global variables across several test files. You can see an
> example in the learn scikits (scikits/learn/machine/em/test_densities):
>
> set_local_path()
> # import modules that are located in the same directory as this file.
> from testcommon import DEF_DEC
> restore_path()
>
> This is basically a hack to put the current path (or any other relative
> path you give to the set_local_path function) in sys.path so that the
> modules are found by the python interpreter.

Thank you for the explanation on this one, but I can't use it, I need to modify os.cwd so that it points to the correct path. In fact what I do is create a subprocess for each test file as if it was started from its parent folder. This way, I do not have to write load('folder1/folder2/folder3/tests/mydata'), only load('mydata').
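The runner is essentially something like this (a rough sketch; the glob pattern and directory layout are made up for illustration):

import os
import sys
import glob
import subprocess

# start each test file from its own folder, so that relative data
# paths like load('mydata') resolve correctly
for test_file in glob.glob('Components/*/*/tests/test_*.py'):
    subprocess.call([sys.executable, os.path.basename(test_file)],
                    cwd = os.path.dirname(test_file))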
I could put everything in a global data folder, but if I only want to execute the tests in a specific file, I would not be able to do so, because the current working directory is different. But I don't think there is a solution to this problem anyway.

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From david at ar.media.kyoto-u.ac.jp  Tue Jul 24 03:42:25 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 24 Jul 2007 16:42:25 +0900
Subject: [SciPy-dev] Making numpy test work in scikits
In-Reply-To: 
References: <46A05749.80308@ar.media.kyoto-u.ac.jp>
	<46A0945B.3050502@ar.media.kyoto-u.ac.jp>
	<20070720115838.GB7290@mentat.za.net>
	<46A0A51F.5090603@ar.media.kyoto-u.ac.jp>
	<46A0A92F.6080208@ar.media.kyoto-u.ac.jp>
	<20070720123448.GE7290@mentat.za.net>
	<46A19E01.2020709@ar.media.kyoto-u.ac.jp>
	<46A55E4D.8020306@ar.media.kyoto-u.ac.jp>
Message-ID: <46A5AD61.2090803@ar.media.kyoto-u.ac.jp>

Matthieu Brucher wrote:
>> It is intended if you read the docstring (I totally missed it too, at
>> first). If level is negative, then it is equivalent to returning all the
>> tests corresponding to abs(level) in a test suite, without running them.
>
> Yes, I read it, but it does not find tests at all, for instance :
>
> Found 0 tests for Components.IO.ipb.tests.test_ipbreader
> Found 0 tests for Components.IO.nifti.tests.test_niftiimage
>
> instead of :
>
> Found 8 tests for Components.IO.ipb.tests.test_ipbreader
> Found 3 tests for Components.IO.nifti.tests.test_niftiimage
>
> for a positive level.

I also have similar problems: for example, using testall does not work at all for me in some cases. The problem is that setuptools is already kind of magic (many things going on behind your back), and the numpy testing facilities feel a bit too voodoo for me too (a lot of introspection, dynamic loading of test scripts and source parsing to build the test suites). I kind of gave up on this one: I think I will convert all my tests to standard unittest and build my own, simpler test runner: the only feature I need from the NumpyTest class is the ability to find tests in test directories. I don't think it is worth trying to make NumpyTest work as I want just to avoid building a TestSuite by hand.

David

From openopt at ukr.net  Tue Jul 24 04:38:31 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 24 Jul 2007 11:38:31 +0300
Subject: [SciPy-dev] connecting tnc 1.3
Message-ID: <46A5BA87.4060601@ukr.net>

hi all,
So now I'm trying to make all the tests for tnc 1.3 run ok. But I have encountered a problem:

here's code from tnc.py, lines 210-213:

for i in range(n):
    l,u = bounds[i]
    if l is None:
        low[i] = -HUGE_VAL

So if the bounds are for example ([-inf, -1.5], None), as written in test1fg(x), it yields the error "None object is not iterable"
(because it tries to do
lb, ub = None
).
Do you think it's a sort of bug that should be fixed in svn or not?
Regards, D.

From openopt at ukr.net  Tue Jul 24 06:42:30 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 24 Jul 2007 13:42:30 +0300
Subject: [SciPy-dev] connecting tnc 1.3
In-Reply-To: <46A5BA87.4060601@ukr.net>
References: <46A5BA87.4060601@ukr.net>
Message-ID: <46A5D796.5030601@ukr.net>

so I commit the tnc 1.3 to svn (rev. 3185); the code related to the problem mentioned (about lb-ub) I decided to implement in the following way: if the user passes bounds as (for example)
((-1,1), (-1, None), None, (3,4))
then it means the 3rd variable is unconstrained.
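A sketch of how such bounds could be normalized before being passed on (illustrative only; numpy.inf stands in for HUGE_VAL):

from numpy import inf

def normalize_bounds(bounds):
    low, up = [], []
    for b in bounds:
        if b is None:
            l, u = -inf, inf   # unconstrained variable
        else:
            l, u = b
            if l is None: l = -inf
            if u is None: u = inf
        low.append(l)
        up.append(u)
    return low, up

print normalize_bounds(((-1,1), (-1, None), None, (3,4)))
# expected: ([-1, -1, -inf, 3], [1, inf, inf, 4])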
also I think ticket 384 (tnc:argument 2 must be python list, not numpy.ndarray) is ready to be closed (scipy svn rev. 3186) D. From nwagner at iam.uni-stuttgart.de Tue Jul 24 07:19:14 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Jul 2007 13:19:14 +0200 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: <46A5D796.5030601@ukr.net> References: <46A5BA87.4060601@ukr.net> <46A5D796.5030601@ukr.net> Message-ID: <46A5E032.8020906@iam.uni-stuttgart.de> dmitrey wrote: > so I commit the tnc 1.3 to svn (rev. 3185); the code related to the > problem mentioned (about lb-ub) I decided to implement in the following way: > if user pass bounds as (for example) > ((-1,1), (-1, None), None, (3,4)) > then it means 3rd variable is unconstrained. > > also I think ticket 384 (tnc:argument 2 must be python list, not > numpy.ndarray) is ready to be closed (scipy svn rev. 3186) > > D. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > Dmitrey, scipy.test(1) results in one error connected with tnc ====================================================================== ERROR: test_tnc (scipy.tests.test_optimize.test_tnc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/optimize/tests/test_optimize.py ", line 220, in test_tnc err = "Failed optimization of %s.\n" \ TypeError: list objects are unhashable Nils From nwagner at iam.uni-stuttgart.de Tue Jul 24 07:55:55 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Jul 2007 13:55:55 +0200 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: <46A5D796.5030601@ukr.net> References: <46A5BA87.4060601@ukr.net> <46A5D796.5030601@ukr.net> Message-ID: <46A5E8CB.5010803@iam.uni-stuttgart.de> dmitrey wrote: > so I commit the tnc 1.3 to svn (rev. 3185); the code related to the > problem mentioned (about lb-ub) I decided to implement in the following way: > if user pass bounds as (for example) > ((-1,1), (-1, None), None, (3,4)) > then it means 3rd variable is unconstrained. > > also I think ticket 384 (tnc:argument 2 must be python list, not > numpy.ndarray) is ready to be closed (scipy svn rev. 3186) > > D. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Dmitrey, You have also reverted the order of the output of optimize.fmin_tnc Now it returns (rc, nfeval, x). It was x, nfeval, rc before. Is this wanted ? Nils From nwagner at iam.uni-stuttgart.de Tue Jul 24 08:37:12 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Jul 2007 14:37:12 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg Message-ID: <46A5F278.1040801@iam.uni-stuttgart.de> Hi all, I have some trouble with fmin_ncg. IIRC the attached code worked for me before (line 44-48) but it hangs with recent svn. Any pointer would be appreciated. Nils -------------- next part -------------- A non-text attachment was scrubbed... 
Name: unconstrained.py
Type: text/x-python
Size: 2454 bytes
Desc: not available
URL: 

From aisaac at american.edu  Tue Jul 24 09:33:01 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 24 Jul 2007 09:33:01 -0400
Subject: [SciPy-dev] connecting tnc 1.3
In-Reply-To: <46A5D796.5030601@ukr.net>
References: <46A5BA87.4060601@ukr.net><46A5D796.5030601@ukr.net>
Message-ID: 

On Tue, 24 Jul 2007, dmitrey apparently wrote:
> so I commit the tnc 1.3 to svn (rev. 3185); the code related to the
> problem mentioned (about lb-ub) I decided to implement in the following way:
> if the user passes bounds as (for example)
> ((-1,1), (-1, None), None, (3,4))
> then it means the 3rd variable is unconstrained.

This seems right to me.
Nils, do you see any problems with this approach?

Cheers,
Alan Isaac

From openopt at ukr.net  Tue Jul 24 09:31:07 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 24 Jul 2007 16:31:07 +0300
Subject: [SciPy-dev] connecting tnc 1.3
In-Reply-To: <46A5E032.8020906@iam.uni-stuttgart.de>
References: <46A5BA87.4060601@ukr.net> <46A5D796.5030601@ukr.net>
	<46A5E032.8020906@iam.uni-stuttgart.de>
Message-ID: <46A5FF1B.7020804@ukr.net>

Nils Wagner wrote:
> dmitrey wrote:
>> so I commit the tnc 1.3 to svn (rev. 3185); the code related to the
>> problem mentioned (about lb-ub) I decided to implement in the following way:
>> if the user passes bounds as (for example)
>> ((-1,1), (-1, None), None, (3,4))
>> then it means the 3rd variable is unconstrained.
>>
>> also I think ticket 384 (tnc:argument 2 must be python list, not
>> numpy.ndarray) is ready to be closed (scipy svn rev. 3186)
>>
>> D.
>
> Dmitrey,
>
> scipy.test(1) results in one error connected with tnc
>
> ======================================================================
> ERROR: test_tnc (scipy.tests.test_optimize.test_tnc)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/lib64/python2.4/site-packages/scipy/optimize/tests/test_optimize.py",
>   line 220, in test_tnc
>     err = "Failed optimization of %s.\n" \
> TypeError: list objects are unhashable
>
> Nils
>
> Dmitrey,
> You have also reverted the order of the output of optimize.fmin_tnc
> Now it returns (rc, nfeval, x).
> It was x, nfeval, rc before. Is this wanted ?

Afaik Robert Kern or someone else had mentioned the problem, but I forgot to check it. The C-compiled tnc module returns the values (x, funevals, rc) in reverse order; I have fixed this in svn.

About the failing tests: I see that test38fg returns
x: array([ 0.99327293, 0.98673917, 1.00557938, 1.01114086])
y: array([ 1., 1., 1., 1.])
Since these differ by more than numpy's assert_array_almost_equal allows, the tnc test fails. In scipy.sandbox.newoptimize there is only a check for ||f_final - f_opt|| <= 1e-8; that's why all the tests in tnc.py run successfully (and I didn't know that something related to tnc fails). So now I decided to just remove the check for ||x_final-x_opt|| (I will commit it some minutes later). On the one hand, having ||f_final - f_opt|| is enough - we don't care how far x_final is from x_opt if our objfunc values differ very little. On the other, it should be checked that all constraints are ok in x_final.
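A sketch of such a feasibility check, following the bounds convention above (illustrative, not the actual test code):

def is_feasible(x, bounds, tol=1e-8):
    # check that every component of x_final respects its bounds;
    # each entry of bounds is (l, u), with None meaning "no bound"
    # (or the whole pair None for an unconstrained variable)
    for xi, b in zip(x, bounds):
        if b is None:
            continue
        l, u = b
        if l is not None and xi < l - tol:
            return False
        if u is not None and xi > u + tol:
            return False
    return True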
So I think in the future, maybe, it's worth adding checks to the scipy.optimize tests that verify whether the solution is feasible or not (but since it would take too much time (for all those fmin_cobyla, l_bfgs_b, tnc), I can't provide it now).

> I have some trouble with fmin_ncg.
Ok, I will take a look now. afaik the scipy tests pass fmin_ncg ok.
Regards, D.

From nwagner at iam.uni-stuttgart.de  Tue Jul 24 09:33:44 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 24 Jul 2007 15:33:44 +0200
Subject: [SciPy-dev] connecting tnc 1.3
In-Reply-To: 
References: <46A5BA87.4060601@ukr.net><46A5D796.5030601@ukr.net>
Message-ID: <46A5FFB8.20409@iam.uni-stuttgart.de>

Alan G Isaac wrote:
> On Tue, 24 Jul 2007, dmitrey apparently wrote:
>> so I commit the tnc 1.3 to svn (rev. 3185); the code related to the
>> problem mentioned (about lb-ub) I decided to implement in the following way:
>> if the user passes bounds as (for example)
>> ((-1,1), (-1, None), None, (3,4))
>> then it means the 3rd variable is unconstrained.
>
> This seems right to me.
> Nils, do you see any problems with this approach?
>
No.
> Cheers,
> Alan Isaac

From aisaac at american.edu  Tue Jul 24 09:48:12 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 24 Jul 2007 09:48:12 -0400
Subject: [SciPy-dev] connecting tnc 1.3
In-Reply-To: <46A5FF1B.7020804@ukr.net>
References: <46A5BA87.4060601@ukr.net> <46A5D796.5030601@ukr.net><46A5E032.8020906@iam.uni-stuttgart.de><46A5FF1B.7020804@ukr.net>
Message-ID: 

On Tue, 24 Jul 2007, dmitrey apparently wrote:
> So now I decided to just remove the check for ||x_final-x_opt||

Hmmm. I am not completely comfortable with this.
Loosening the test would be better than eliminating it.
But can some others please chime in on how to approach this?
Also, Nils, any idea why this used to pass and now does not?

> it should be checked that all constraints are ok in
> x_final. So I think in the future, maybe, it's worth
> adding checks to the scipy.optimize tests that verify
> whether the solution is feasible or not

This seems right; please open a ticket for this.

Thank you,
Alan

From openopt at ukr.net  Tue Jul 24 09:56:50 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 24 Jul 2007 16:56:50 +0300
Subject: [SciPy-dev] connecting tnc 1.3
In-Reply-To: 
References: <46A5BA87.4060601@ukr.net> <46A5D796.5030601@ukr.net><46A5E032.8020906@iam.uni-stuttgart.de><46A5FF1B.7020804@ukr.net>
Message-ID: <46A60522.90902@ukr.net>

Alan G Isaac wrote:
> On Tue, 24 Jul 2007, dmitrey apparently wrote:
>> So now I decided to just remove the check for ||x_final-x_opt||
>
> Hmmm. I am not completely comfortable with this.
>
But the idea is not mine - it had already been done by someone in scipy/sandbox/newoptimize/tnc.py, in the function test:

ex = pow(enorm/norm, 0.5)
print "X Error =", ex
ef = abs(fg(xopt)[0] - fg(x)[0])
print "F Error =", ef
if ef > 1e-8:
    raise "Test "+fg.__name__+" failed"

So, as you see, the check for ex is absent (and I think that's right; otherwise we would have to provide a different error tolerance for EACH test func among the many related to the scipy.optimize tests). But in scipy.optimize the check of ||x-x_opt|| is still present. I intend just to move the changes to scipy.optimize.
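For what such a check could look like with numpy.testing (a toy objective in the tnc fg style, returning (f, gradient); not the actual test suite code):

from numpy import asarray
from numpy.testing import assert_almost_equal

def fg(x):
    x = asarray(x)
    return (x**2).sum(), 2.0*x

xopt = [0.0, 0.0]
x = [1e-9, -1e-9]   # x may be "far" from xopt in ill-conditioned problems,
                    # but the objective values should still agree closely
assert_almost_equal(fg(x)[0], fg(xopt)[0], decimal=8)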
BTW, if someone runs one of the well-known test funcs, Hilbert(50) (or another rather big n), the typical ||x-x_opt|| (for different solvers) could be something like 100-500, while ||f-f_opt|| can be very small (like 1e-6). This is due to the ill-conditioned matrix A in the unconstrained problem (A = hilb(50), b = sum(A), objfunc = x'Ax - bx).
HTH, D.

From nwagner at iam.uni-stuttgart.de  Tue Jul 24 10:27:39 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 24 Jul 2007 16:27:39 +0200
Subject: [SciPy-dev] connecting tnc 1.3
In-Reply-To: 
References: <46A5BA87.4060601@ukr.net> <46A5D796.5030601@ukr.net><46A5E032.8020906@iam.uni-stuttgart.de><46A5FF1B.7020804@ukr.net>
Message-ID: <46A60C5B.2070907@iam.uni-stuttgart.de>

Alan G Isaac wrote:
> On Tue, 24 Jul 2007, dmitrey apparently wrote:
>> So now I decided to just remove the check for ||x_final-x_opt||
>
> Hmmm. I am not completely comfortable with this.
> Loosening the test would be better than eliminating it.
> But can some others please chime in on how to approach this?
> Also, Nils, any idea why this used to pass and now does not?
>
No idea. Probably Jean-Sébastien Roy could respond to your question.
http://www.jeannot.org/~js/code/index.en.html#TNC

Nils

>> it should be checked that all constraints are ok in
>> x_final. So I think in the future, maybe, it's worth
>> adding checks to the scipy.optimize tests that verify
>> whether the solution is feasible or not
>
> This seems right; please open a ticket for this.
>
> Thank you,
> Alan

From openopt at ukr.net  Tue Jul 24 10:36:48 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 24 Jul 2007 17:36:48 +0300
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: <46A5F278.1040801@iam.uni-stuttgart.de>
References: <46A5F278.1040801@iam.uni-stuttgart.de>
Message-ID: <46A60E80.7030401@ukr.net>

You have the line
A = io.mmread('nos4.mtx') # clustered eigenvalues
but you didn't provide the nos4.mtx file; please send it to me.
Regards, D.

Nils Wagner wrote:
> Hi all,
>
> I have some trouble with fmin_ncg. IIRC the attached code worked for me
> before (line 44-48) but it hangs with recent svn.
>
> Any pointer would be appreciated.
> > Nils > > > ------------------------------------------------------------------------ > > from scipy import * > from scipy.sparse import speye > from pylab import plot, show, semilogy, xlabel, ylabel, figure, savefig, legend > > def R(v): > rq = dot(v.T,A*v)/dot(v.T,B*v) > res = (A*v-rq*B*v)/linalg.norm(B*v) > data.append(linalg.norm(res)) > return rq > > def Rp(v): > """ Gradient """ > return 2*(A*v-R(v)*B*v)/dot(v.T,B*v) > > def Rpp(v): > """ Hessian """ > return 2*(A-R(v)*B-outer(B*v,Rp(v))-outer(Rp(v),B*v))/dot(v.T,B*v) > > > A = io.mmread('nos4.mtx') # clustered eigenvalues > #B = io.mmread('bcsstm02.mtx.gz') > #A = io.mmread('bcsstk06.mtx.gz') # clustered eigenvalues > #B = io.mmread('bcsstm06.mtx.gz') > n = A.shape[0] > B = speye(n,n) > random.seed(1) > v_0=random.rand(n) > > if n < 1000: > > w,vr = linalg.eig(A.todense(),B.todense()) > ind = argsort(w.real) > print 'Least eigenvalue',w[ind[0]] > > > data=[] > v,fopt, gopt, Hopt, func_calls, grad_calls, warnflag,allvecs = optimize.fmin_bfgs(R,v_0,fprime=Rp,full_output=1,retall=1) > if warnflag == 0: > semilogy(arange(0,len(data)),data) > print 'Rayleigh quotient BFGS',R(v) > # > # The program hangs if fmin_ncg is active > # > #data=[] > #v,fopt, fcalls, gcalls, hcalls, warnflag,allvecs = optimize.fmin_ncg(R,v_0,fprime=Rp,fhess=Rpp,full_output=1,retall=1) > #if warnflag==0: > # semilogy(arange(0,len(data)),data) > # print 'Rayleigh quotient NCG',R(v) > > data=[] > v,fopt, func_calls, grad_calls, warnflag,allvecs = optimize.fmin_cg(R,v_0,fprime=Rp,full_output=1,retall=1) > semilogy(arange(0,len(data)),data) > print 'Rayleigh quotient CG ',R(v),fopt > > data=[] > rc,nfeval,v = optimize.fmin_tnc(R,list(v_0),fprime=Rp) > #print optimize.tnc.RCSTRINGS[rc] > v = array(v) > semilogy(arange(0,len(data)),data) > print 'Rayleigh quotient',R(v) > > data=[] > v,f,d = optimize.fmin_l_bfgs_b(R,v_0,fprime=Rp) > semilogy(arange(0,len(data)),data) > print 'Rayleigh quotient l_bfgs_b',R(v),f > > xlabel('Function evaluation') > ylabel(r'Residual $\|Ax-\lambda\,Bx\|/\|Bx\|$') > #legend(('fmin\_bfgs','fmin\_ncg','fmin\_tnc','fmin\_l\_bfgs\_b'),shadow=True) > legend(('fmin\_bfgs','fmin\_cg','fmin\_tnc','fmin\_l\_bfgs\_b'),shadow=True) > #legend(('fmin\_bfgs','fmin\_ncg','fmin\_tnc','fmin\_l\_bfgs\_b'),shadow=True) > #savefig('leastR.png') > > figure(2) > > semilogy([0],[R(v)],'ro') > semilogy(arange(0,n),w[ind].real) > xlabel(r'$i$') > ylabel(r'$\lambda_i$') > legend((r'least eigenvalue $\lambda_1$','Spectrum'),shadow=True,loc=4) > > show() > > ------------------------------------------------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From nwagner at iam.uni-stuttgart.de Tue Jul 24 10:39:01 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Jul 2007 16:39:01 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: <46A60E80.7030401@ukr.net> References: <46A5F278.1040801@iam.uni-stuttgart.de> <46A60E80.7030401@ukr.net> Message-ID: <46A60F05.2050905@iam.uni-stuttgart.de> dmitrey wrote: > you have line > A = io.mmread('nos4.mtx') # clustered eigenvalues > but you didn't provide nos4.mtx file > please send it to me. > Regards, D. > > Sorry for that. 
The matrix is available at
http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lanpro/nos4.html

Cheers,
Nils

From openopt at ukr.net  Tue Jul 24 11:33:51 2007
From: openopt at ukr.net (dmitrey)
Date: Tue, 24 Jul 2007 18:33:51 +0300
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: <46A60F05.2050905@iam.uni-stuttgart.de>
References: <46A5F278.1040801@iam.uni-stuttgart.de> <46A60E80.7030401@ukr.net>
	<46A60F05.2050905@iam.uni-stuttgart.de>
Message-ID: <46A61BDF.3070209@ukr.net>

Nils, are you sure the trouble arose after the last svn changes? All my changes are in the func _cubicmin from optimize.py, but when I placed a breakpoint there, the hanging loop didn't reach it. Can't you do the same trick?
scipy/optimize/optimize.py
line 309,
d1 = empty((2,2))

I have found the hanging loop (optimize.py, line 1030,
while numpy.add.reduce(abs(ri)) > termcond: )
but numpy.add.reduce(abs(ri)) is constantly growing here.

Maybe you had changed x0 and now it's too far from x_opt?

btw, if 2nd derivatives are not supplied, then another loop hangs:
line 1013:
while (numpy.add.reduce(abs(update)) > xtol) and (k < maxiter):
I don't know how to fix the problem. Please inform me about the breakpoint.

BTW, your func seems very suspicious to me:
def R(v):
    rq = dot(v.T,A*v)/dot(v.T,B*v)
    res = (A*v-rq*B*v)/linalg.norm(B*v)
    data.append(linalg.norm(res))
    return rq

Are you sure that the func(v)=dot(v.T,A*v)/dot(v.T,B*v) is convex? I'm not. So using 2nd derivatives (or their approximation by fmin_ncg, if the user didn't provide them, in line 1033:
Ap = approx_fhess_p(xk,psupi,fprime,epsilon)
) will handle non-convex funcs much worse than 1st-order methods do.

HTH, D.

Nils Wagner wrote:
> dmitrey wrote:
>> You have the line
>> A = io.mmread('nos4.mtx') # clustered eigenvalues
>> but you didn't provide the nos4.mtx file; please send it to me.
>> Regards, D.
>
> Sorry for that. The matrix is available at
> http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lanpro/nos4.html
>
> Cheers,
> Nils

From nwagner at iam.uni-stuttgart.de  Tue Jul 24 15:52:37 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 24 Jul 2007 21:52:37 +0200
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: <46A61BDF.3070209@ukr.net>
References: <46A5F278.1040801@iam.uni-stuttgart.de> <46A60E80.7030401@ukr.net>
	<46A60F05.2050905@iam.uni-stuttgart.de> <46A61BDF.3070209@ukr.net>
Message-ID: 

On Tue, 24 Jul 2007 18:33:51 +0300
dmitrey wrote:
> Nils, are you sure the trouble arose after the last svn changes?

Well, I believe it is less than 3 months ago that ncg worked as expected.

> All my changes are in the func _cubicmin from optimize.py, but when I
> placed a breakpoint there, the hanging loop didn't reach it.
> Can't you do the same trick?
> scipy/optimize/optimize.py
> line 309,
> d1 = empty((2,2))
>
> I have found the hanging loop (optimize.py, line 1030,
> while numpy.add.reduce(abs(ri)) > termcond: )
> but numpy.add.reduce(abs(ri)) is constantly growing here.
>
> Maybe you had changed x0 and now it's too far from x_opt?

Even if I start with a vector near x_opt, ncg hangs.

Nils

> btw, if 2nd derivatives are not supplied, then another loop hangs:
> line 1013:
> while (numpy.add.reduce(abs(update)) > xtol) and (k < maxiter):
> I don't know how to fix the problem.
> Please inform me about the breakpoint.
> > BTW your func seems to be very suspicious to me > def R(v): > rq = dot(v.T,A*v)/dot(v.T,B*v) > res = (A*v-rq*B*v)/linalg.norm(B*v) > data.append(linalg.norm(res)) > return rq > > are you sure that the func(v)=dot(v.T,A*v)/dot(v.T,B*v) >is convex? > I'm not. > So using 2nd derivatives (or their approximating by >fmin_ncg (if user > didn't provide that ones) , in line 1033: > Ap = >approx_fhess_p(xk,psupi,fprime,epsilon) > ) > will handle non-convex funcs much more bad than 1-st >order do. > > HTH, D. > > From openopt at ukr.net Tue Jul 24 16:23:26 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 24 Jul 2007 23:23:26 +0300 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: References: <46A5F278.1040801@iam.uni-stuttgart.de> <46A60E80.7030401@ukr.net> <46A60F05.2050905@iam.uni-stuttgart.de> <46A61BDF.3070209@ukr.net> Message-ID: <46A65FBE.1090509@ukr.net> I still have no idea... I checked small example (below), it works ok I don't know, maybe I did something wrong, but I removed /site-packages/scipy, re-installed the package from http://packages.ubuntu.com/feisty/python/python-scipy (ver 0.5.2), and your problem still makes my CPU hanging on. can anyone who didn't update scipy from svn during last 10-11 days run Nils example? Regards, D. >>> f = lambda x: ((x-arange(len(x)))**2).sum() >>> fprime = lambda x: 2*(x-arange(len(x))) >>> r = optimize.fmin_ncg(f, [10]*50, fprime) Optimization terminated successfully. Current function value: 0.000000 Iterations: 2 Function evaluations: 3 Gradient evaluations: 4 Hessian evaluations: 0 >>> r array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35., 36., 37., 38., 39., 40., 41., 42., 43., 44., 45., 46., 47., 48., 49.]) Nils Wagner wrote: > On Tue, 24 Jul 2007 18:33:51 +0300 > dmitrey wrote: > >> Nils, are you sure that troubles raised after last svn >> changes? >> > > Well, I believe it is less than 3 month ago that > ncg worked as expected. > > >> All my changes are in func _cubicmin from optimize.py >> but when I placed a breakpoint there, the hanging cycle >> didn't reached >> the one. >> Can't you do the same trick? >> scipy/optimize/optimize.py >> line 309, >> d1 = empty((2,2)) >> >> I have found the hanging cycle (optimize.py, line 1030, >> while numpy.add.reduce(abs(ri)) > termcond: ) >> but numpy.add.reduce(abs(ri)) is constantly growing >> here. >> >> maybe you had changed x0 and now it's too far from >> x_opt? >> >> > Even if I start with a vector near x_opt ncg hangs. > > Nils > > >> btw if 2nd derivatives are not supplied, then other >> cycle is hanging: >> line 1013: >> while (numpy.add.reduce(abs(update)) > xtol) and (k < >> maxiter): >> I don't know howto fix the problem. >> Please inform me about the breakpoint. >> >> BTW your func seems to be very suspicious to me >> def R(v): >> rq = dot(v.T,A*v)/dot(v.T,B*v) >> res = (A*v-rq*B*v)/linalg.norm(B*v) >> data.append(linalg.norm(res)) >> return rq >> >> are you sure that the func(v)=dot(v.T,A*v)/dot(v.T,B*v) >> is convex? >> I'm not. >> So using 2nd derivatives (or their approximating by >> fmin_ncg (if user >> didn't provide that ones) , in line 1033: >> Ap = >> approx_fhess_p(xk,psupi,fprime,epsilon) >> ) >> will handle non-convex funcs much more bad than 1-st >> order do. >> >> HTH, D. 
From aisaac at american.edu  Tue Jul 24 16:31:38 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 24 Jul 2007 16:31:38 -0400
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: <46A65FBE.1090509@ukr.net>
References: <46A5F278.1040801@iam.uni-stuttgart.de><46A60E80.7030401@ukr.net>
	<46A60F05.2050905@iam.uni-stuttgart.de><46A61BDF.3070209@ukr.net>
	<46A65FBE.1090509@ukr.net>
Message-ID: 

On Tue, 24 Jul 2007, Nils wrote:
> Well, I believe it is less than 3 months ago that ncg
> worked as expected.

On the present example, or some other problem?
If some other problem that you can test, please do.

Thank you,
Alan Isaac

From fperez.net at gmail.com  Tue Jul 24 17:05:37 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 24 Jul 2007 15:05:37 -0600
Subject: [SciPy-dev] Launchpad hosting: an option for scipy?
Message-ID: 

Hi all,

I just wanted to mention something I ran across:

http://arstechnica.com/news.ars/post/20070724-ars-at-ubuntu-live-new-launchpad-service-automatically-builds-and-hosts-user-packages.html

It sounds like it could provide an interesting, low-overhead solution for hosting .debs. I know Andrew (at least) has hosted his privately, but this appears to lower the logistical bar for others to do the same without having to know quite as much about deb packaging as Andrew does.

Just a thought.

f

From david at ar.media.kyoto-u.ac.jp  Tue Jul 24 18:19:12 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 25 Jul 2007 07:19:12 +0900
Subject: [SciPy-dev] Launchpad hosting: an option for scipy?
In-Reply-To: 
References: 
Message-ID: <46A67AE0.1030007@ar.media.kyoto-u.ac.jp>

Fernando Perez wrote:
> Hi all,
>
> I just wanted to mention something I ran across:
>
> http://arstechnica.com/news.ars/post/20070724-ars-at-ubuntu-live-new-launchpad-service-automatically-builds-and-hosts-user-packages.html
>
> It sounds like it could provide an interesting, low-overhead solution
> for hosting .debs. I know Andrew (at least) has hosted his privately,
> but this appears to lower the logistical bar for others to do the same
> without having to know quite as much about deb packaging as Andrew
> does.

I have not used the system, just read the description in your link. My quick thoughts:
- The scripts (debian folder) to build numpy and scipy packages are already done, since numpy and scipy are available, so this part is not really important.
- Automatically building debian packages regularly (say once a week) would be useful, but this needs some kind of SVN tarball, or anything else which can provide the sources. Launchpad normally integrates through bzr, and a plugin to import svn into bzr is available (I use it myself, and I have been importing the numpy and scipy repositories into bzr for something like 6 months now; having a public bzr mirror of numpy and scipy would be great: it is so much better than svn in every way for a middle-sized project like scipy).

The above point, a regular release tarball, would be a great help for the packaging I've done using the OpenSuse build system (http://software.opensuse.org/download/home:/ashigabou/ for the packages, http://en.opensuse.org/Build_Service for a description), since it would avoid backporting features from svn to keep released scipy buildable with the latest numpy.
To go further, I think the meta-data to build deb and rpm packages should be part of numpy and scipy (e.g. releases and repository). I am in the camp of people who believe packaging is better done by the people involved in the development of the said software.

cheers,

David

From aisaac at american.edu  Tue Jul 24 23:12:23 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 24 Jul 2007 23:12:23 -0400
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: <46A65FBE.1090509@ukr.net>
References: <46A5F278.1040801@iam.uni-stuttgart.de><46A60E80.7030401@ukr.net>
	<46A60F05.2050905@iam.uni-stuttgart.de><46A61BDF.3070209@ukr.net>
	<46A65FBE.1090509@ukr.net>
Message-ID: 

On Tue, 24 Jul 2007, dmitrey apparently wrote:
> can anyone who didn't update scipy from svn during last
> 10-11 days run Nils example?

The core of Nils' example is below.
Get the data from here:
http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lanpro/nos4.html

Cheers,
Alan Isaac

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
from numpy import dot, outer, random, argsort, arange
from pylab import semilogy
from scipy import io, linalg, optimize
from scipy.sparse import speye

def R(v):
    rq = dot(v.T,A*v)/dot(v.T,B*v)
    res = (A*v-rq*B*v)/linalg.norm(B*v)
    data.append(linalg.norm(res))
    return rq

def Rp(v):
    """ Gradient """
    result = 2*(A*v-R(v)*B*v)/dot(v.T,B*v)
    print "Rp: ", result
    return result

def Rpp(v):
    """ Hessian """
    result = 2*(A-R(v)*B-outer(B*v,Rp(v))-outer(Rp(v),B*v))/dot(v.T,B*v)
    print "Rpp: ", result
    return result

A = io.mmread('nos4.mtx') # clustered eigenvalues
#B = io.mmread('bcsstm02.mtx.gz')
#A = io.mmread('bcsstk06.mtx.gz') # clustered eigenvalues
#B = io.mmread('bcsstm06.mtx.gz')
n = A.shape[0]
B = speye(n,n)
random.seed(1)
v_0=random.rand(n)

print "try fmin_bfgs"
data=[]
v,fopt, gopt, Hopt, func_calls, grad_calls, warnflag,allvecs = optimize.fmin_bfgs(R,v_0,fprime=Rp,full_output=1,retall=1)
if warnflag == 0:
    semilogy(arange(0,len(data)),data)
    print 'Rayleigh quotient BFGS',R(v)
print "fmin_bfgs OK"

print "try fmin_ncg"
#
# WARNING: the program may hang if fmin_ncg is used
#
data=[]
v,fopt, fcalls, gcalls, hcalls, warnflag,allvecs = optimize.fmin_ncg(R,v_0,fprime=Rp,fhess=Rpp,full_output=1,retall=1)
if warnflag==0:
    semilogy(arange(0,len(data)),data)
    print 'Rayleigh quotient NCG',R(v)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

From stefan at sun.ac.za  Wed Jul 25 04:45:58 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Wed, 25 Jul 2007 10:45:58 +0200
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: 
References: <46A65FBE.1090509@ukr.net>
Message-ID: <20070725084558.GE8728@mentat.za.net>

The optimisation converges under scipy r3116:

$ python test_tnc.py
try fmin_bfgs
Optimization terminated successfully.
         Current function value: 0.000540
         Iterations: 263
         Function evaluations: 266
         Gradient evaluations: 266
Rayleigh quotient BFGS 0.000540272922563
fmin_bfgs OK
try fmin_ncg
Optimization terminated successfully.
         Current function value: 0.000538
         Iterations: 13
         Function evaluations: 33
         Gradient evaluations: 13
         Hessian evaluations: 13
Rayleigh quotient NCG 0.000537952836927

Regards
Stéfan

On Tue, Jul 24, 2007 at 11:12:23PM -0400, Alan G Isaac wrote:
> On Tue, 24 Jul 2007, dmitrey apparently wrote:
> > can anyone who didn't update scipy from svn during last
> > 10-11 days run Nils example?
>
> The core of Nils' example is below.
> Get the data from here: > http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lanpro/nos4.html > Cheers, > Alan Isaac From nwagner at iam.uni-stuttgart.de Wed Jul 25 04:59:41 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Jul 2007 10:59:41 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: <20070725084558.GE8728@mentat.za.net> References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> Message-ID: <46A710FD.10905@iam.uni-stuttgart.de> Stefan van der Walt wrote: Stefan, Thank you very much for running the test. Is there an easy way to check the differences between r3116 and r3189 wrt. scipy.optimize ? Nils P.S I have send two additional tests to Alan Isaac off-list. They exhibit similar "failures" of ncg. > The optimisation converges under scipy r3116: > > $ python test_tnc.py > try fmin_bfgs > Optimization terminated successfully. > Current function value: 0.000540 > Iterations: 263 > Function evaluations: 266 > Gradient evaluations: 266 > Rayleigh quotient BFGS 0.000540272922563 > fmin_bfgs OK > try fmin_ncg > Optimization terminated successfully. > Current function value: 0.000538 > Iterations: 13 > Function evaluations: 33 > Gradient evaluations: 13 > Hessian evaluations: 13 > Rayleigh quotient NCG 0.000537952836927 > > Regards > St?fan > > On Tue, Jul 24, 2007 at 11:12:23PM -0400, Alan G Isaac wrote: > >> On Tue, 24 Jul 2007, dmitrey apparently wrote: >> >>> can anyone who didn't update scipy from svn during last >>> 10-11 days run Nils example? >>> >> The core of Nil's example is below. >> Get the data from here: >> http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lanpro/nos4.html >> Cheers, >> Alan Isaac >> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From matthieu.brucher at gmail.com Wed Jul 25 05:20:09 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 25 Jul 2007 11:20:09 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: <46A710FD.10905@iam.uni-stuttgart.de> References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> Message-ID: Hi, Here is a link to the diff : http://projects.scipy.org/scipy/scipy/changeset?old_path=trunk%2FLib%2Foptimize&old=3116&new_path=trunk%2FLib%2Foptimize&new=3189 Matthieu 2007/7/25, Nils Wagner : > > Stefan van der Walt wrote: > > Stefan, > > Thank you very much for running the test. > Is there an easy way to check the differences between r3116 and r3189 > wrt. scipy.optimize ? > > Nils > > P.S I have send two additional tests to Alan Isaac off-list. They > exhibit similar "failures" of ncg. > > > > The optimisation converges under scipy r3116: > > > > $ python test_tnc.py > > try fmin_bfgs > > Optimization terminated successfully. > > Current function value: 0.000540 > > Iterations: 263 > > Function evaluations: 266 > > Gradient evaluations: 266 > > Rayleigh quotient BFGS 0.000540272922563 > > fmin_bfgs OK > > try fmin_ncg > > Optimization terminated successfully. 
> > Current function value: 0.000538
> > Iterations: 13
> > Function evaluations: 33
> > Gradient evaluations: 13
> > Hessian evaluations: 13
> > Rayleigh quotient NCG 0.000537952836927
> >
> > Regards
> > Stéfan
> >
> > On Tue, Jul 24, 2007 at 11:12:23PM -0400, Alan G Isaac wrote:
> >> On Tue, 24 Jul 2007, dmitrey apparently wrote:
> >>> can anyone who didn't update scipy from svn during last
> >>> 10-11 days run Nils example?
> >>
> >> The core of Nils' example is below.
> >> Get the data from here:
> >> http://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lanpro/nos4.html
> >> Cheers,
> >> Alan Isaac
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nwagner at iam.uni-stuttgart.de  Wed Jul 25 05:22:13 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 25 Jul 2007 11:22:13 +0200
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: <46A61BDF.3070209@ukr.net>
References: <46A5F278.1040801@iam.uni-stuttgart.de> <46A60E80.7030401@ukr.net>
	<46A60F05.2050905@iam.uni-stuttgart.de> <46A61BDF.3070209@ukr.net>
Message-ID: <46A71645.3050503@iam.uni-stuttgart.de>

dmitrey wrote:
> Nils, are you sure the trouble arose after the last svn changes?
> All my changes are in the func _cubicmin from optimize.py, but when I
> placed a breakpoint there, the hanging loop didn't reach it.
> Can't you do the same trick?
> scipy/optimize/optimize.py
> line 309,
> d1 = empty((2,2))
>
> I have found the hanging loop (optimize.py, line 1030,
> while numpy.add.reduce(abs(ri)) > termcond: )
> but numpy.add.reduce(abs(ri)) is constantly growing here.
>
> Maybe you had changed x0 and now it's too far from x_opt?
>
> btw, if 2nd derivatives are not supplied, then another loop hangs:
> line 1013:
> while (numpy.add.reduce(abs(update)) > xtol) and (k < maxiter):
> I don't know how to fix the problem.
> Please inform me about the breakpoint.
>
> BTW, your func seems very suspicious to me:
> def R(v):
>     rq = dot(v.T,A*v)/dot(v.T,B*v)
>     res = (A*v-rq*B*v)/linalg.norm(B*v)
>     data.append(linalg.norm(res))
>     return rq
>
> Are you sure that the func(v)=dot(v.T,A*v)/dot(v.T,B*v) is convex?

The function is the well-known Rayleigh quotient of the symmetric definite matrix pair (A,B). The theory of the unconstrained optimization problem is described in

Giles Auchmuty, Globally and rapidly convergent algorithms for symmetric eigenproblems, SIAM J. Matrix Anal. Appl., Vol. 12, Issue 4, pp. 690-706 (1991).

Another useful paper in this context is

M. Mongeau, M. Torki, Computing eigenelements of real symmetric matrices via optimization, Computational Optimization and Applications, Vol. 29, pp. 263-287 (2004).

Nils

> I'm not. So using 2nd derivatives (or their approximation by fmin_ncg,
> if the user didn't provide them, in line 1033:
> Ap = approx_fhess_p(xk,psupi,fprime,epsilon)
> ) will handle non-convex funcs much worse than 1st-order methods do.
>
> HTH, D.
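For reference, the R, Rp and Rpp definitions used in the scripts of this thread amount to the following (restated here in LaTeX notation directly from the code, not derived independently):

R(v) = \frac{v^T A v}{v^T B v}, \qquad
\nabla R(v) = \frac{2\,(A v - R(v)\,B v)}{v^T B v}, \qquad
\nabla^2 R(v) = \frac{2\,\bigl(A - R(v)\,B - (Bv)\,\nabla R(v)^T - \nabla R(v)\,(Bv)^T\bigr)}{v^T B v}.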
> From nwagner at iam.uni-stuttgart.de Wed Jul 25 05:33:20 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Jul 2007 11:33:20 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> Message-ID: <46A718E0.8020104@iam.uni-stuttgart.de> Matthieu Brucher wrote: > Hi, > > Here is a link to the diff : > http://projects.scipy.org/scipy/scipy/changeset?old_path=trunk%2FLib%2Foptimize&old=3116&new_path=trunk%2FLib%2Foptimize&new=3189 > > > Matthieu > Since the other optimizers are not affected, we may cut it down to optimize.py. Is it possible that http://projects.scipy.org/scipy/scipy/changeset/3176 http://projects.scipy.org/scipy/scipy/changeset/3177 are responsible for the trouble with ncg ? Nils From openopt at ukr.net Wed Jul 25 05:55:59 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 25 Jul 2007 12:55:59 +0300 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: <46A718E0.8020104@iam.uni-stuttgart.de> References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> <46A718E0.8020104@iam.uni-stuttgart.de> Message-ID: <46A71E2F.3040204@ukr.net> Nils Wagner wrote: > Matthieu Brucher wrote: > >> Hi, >> >> Here is a link to the diff : >> http://projects.scipy.org/scipy/scipy/changeset?old_path=trunk%2FLib%2Foptimize&old=3116&new_path=trunk%2FLib%2Foptimize&new=3189 >> >> >> Matthieu >> >> > Since the other optimizers are not affected, we may cut it down to > optimize.py. > > Is it possible that > > http://projects.scipy.org/scipy/scipy/changeset/3176 > http://projects.scipy.org/scipy/scipy/changeset/3177 > I have missed one bracket fun(*(x,) +args) instead of fun(*((x,) +args)) (and now I fixed it, lines 1481, 1482,1027) however, seems like it work without additional brackets as well: >>> f = lambda x,y,z,t: x+y+z+t >>> f(*((4,)+(5,)+(6,)+(7,))) 22 >>> f(*(4,)+(5,)+(6,)+(7,)) 22 But since your funcs take no additional arguments (only x is present), it's not the matter. I checked - the problem remains. I took a look at other changes and still can't find what's the problem. the changes [A,B] = numpy.dot([[dc**2, -db**2],[-dc**3, db**3]],[fb-fa-C*db,fc-fa-C*dc]) 309 d1 = empty((2,2)) 310 d1[0,0] = dc**2 311 d1[0,1] = -db**2 312 d1[1,0] = -dc**3 313 d1[1,1] = db**3 314 [A,B] = numpy.dot(d1,asarray([fb-fa-C*db,fc-fa-C*dc]).flatten()) are not related to the problem, because my debugger don't reach a breakpoint placed here. the changes 710 try: 711 rhok = 1 / (numpy.dot(yk,sk)) 712 except ZeroDivisionError: 713 rhok = 1000. 715 try: # this was handled in numeric, let it remaines for more safety 716 rhok = 1.0 / (numpy.dot(yk,sk)) 717 except ZeroDivisionError: 718 rhok = 1000.0 719 print "Divide-by-zero encountered: rhok assumed large" 720 if isinf(rhok): # this is patch for numpy 721 rhok = 1000.0 are related to fmin_bfgs code. > are responsible for the trouble with ncg ? 
> > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From openopt at ukr.net Wed Jul 25 06:02:34 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 25 Jul 2007 13:02:34 +0300 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: <46A718E0.8020104@iam.uni-stuttgart.de> References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> <46A718E0.8020104@iam.uni-stuttgart.de> Message-ID: <46A71FBA.5060604@ukr.net> I copy-paste optimize.py from http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/optimize/optimize.py?rev=3116&format=raw but the problem still remains. D. From nwagner at iam.uni-stuttgart.de Wed Jul 25 07:17:08 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Jul 2007 13:17:08 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: <46A71FBA.5060604@ukr.net> References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> <46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net> Message-ID: <46A73134.9060005@iam.uni-stuttgart.de> dmitrey wrote: > I copy-paste optimize.py from > > http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/optimize/optimize.py?rev=3116&format=raw > > but the problem still remains. > > D. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Dmitrey, Thank you for your additional tests. Finally I converted the sparse matrices to dense arrays. Now it works ! See trouble.py So, the problem seems to be connected with scipy.sparse. Nathan, do you have a clue ? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: trouble.py Type: text/x-python Size: 1518 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Wed Jul 25 09:51:31 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Jul 2007 15:51:31 +0200 Subject: [SciPy-dev] Trouble with fmin_ncg Message-ID: <46A75563.6080209@iam.uni-stuttgart.de> Hi, I guess I traced the problem with ncg down to the Hessian. The results between the sparse and the dense version differ. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: sparse_dense.py Type: text/x-python Size: 1319 bytes Desc: not available URL: From openopt at ukr.net Wed Jul 25 10:19:29 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 25 Jul 2007 17:19:29 +0300 Subject: [SciPy-dev] Trouble with fmin_ncg In-Reply-To: <46A75563.6080209@iam.uni-stuttgart.de> References: <46A75563.6080209@iam.uni-stuttgart.de> Message-ID: <46A75BF1.4000901@ukr.net> Nils, I'm not sure that developers related to scipy.sparse read the messages with "Trouble with fmin_ncg" topic. You better to rename the one and/or call them directly. HTH, D. Nils Wagner wrote: > Hi, > > I guess I traced the problem with ncg down to the Hessian. > The results between the sparse and the dense version differ. 
> > Nils > > > ------------------------------------------------------------------------ > > from numpy import dot, outer, random, argsort, arange, array > from scipy import io, linalg, optimize > from scipy.sparse import speye > > def Rd(v): > """ Rayleigh quotient with dense arrays """ > rq = dot(v,dot(A1,v))/dot(v,dot(B1,v)) > return rq > > def Rs(v): > """ Rayleigh quotient with sparse matrices """ > rq = dot(v,A*v)/dot(v,B*v) > return rq > > def Rppd(v): > """ Hessian with dense arrays """ > result = 2*(A1-Rd(v)*B1-outer(dot(B1,v),Rp(v))-outer(Rp(v),dot(B1,v)))/dot(v.T,dot(B1,v)) > return result > > def Rpps(v): > """ Hessian with sparse matrices""" > result = 2*(A-Rs(v)*B-outer(B*v,Rps(v))-outer(Rps(v),B*v))/dot(v,B*v) > return result > > def Rp(v): > """ Gradient with dense arrays""" > result = 2*(dot(A1,v)-Rd(v)*dot(B1,v))/dot(v.T,dot(B1,v)) > return result > > def Rps(v): > """ Gradient with sparse matrices """ > result = 2*(A*v-Rs(v)*B*v)/dot(v.T,B*v) > return result > > A = io.mmread('nos4.mtx') > n = A.shape[0] > A1 = A.todense() > > B = speye(n,n) > > B1 = B.todense() > > A1 = array(A1) > B1 = array(B1) > > for i in arange(0,2): > > v_0=random.rand(n) > print > print > print > print Rd(v_0)-Rs(v_0) > print linalg.norm(Rp(v_0)-Rps(v_0)) > print linalg.norm(array(Rpps(v_0)) - Rppd(v_0)) > > > > ------------------------------------------------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From stefan at sun.ac.za Wed Jul 25 10:28:01 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 25 Jul 2007 16:28:01 +0200 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: <46A5BA87.4060601@ukr.net> References: <46A5BA87.4060601@ukr.net> Message-ID: <20070725142801.GL8728@mentat.za.net> Hi Dmitrey On Tue, Jul 24, 2007 at 11:38:31AM +0300, dmitrey wrote: > hi all, > So now I'm trying to make all the tests for tnc 1.3 running ok. > But there is a problem encountered: > > here's a code from tnc.py, lines 210-213: > > for i in range(n): > l,u = bounds[i] > if l is None: > low[i] = -HUGE_VAL > > > So if bounds are for example ([-inf, -1.5], None), as it is written in > test1fg(x), it yields error "None object is not iterable". > (because it tries to get > lb, ub = None > ) You may want to replace all instances of HUGE_VAL with inf, as was done in http://projects.scipy.org/scipy/scipy/changeset/3037 That changeset also updated the documentation and refactored the tests. Regards St?fan From wnbell at gmail.com Wed Jul 25 13:36:03 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 25 Jul 2007 10:36:03 -0700 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: <46A73134.9060005@iam.uni-stuttgart.de> References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> <46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net> <46A73134.9060005@iam.uni-stuttgart.de> Message-ID: On 7/25/07, Nils Wagner wrote: > So, the problem seems to be connected with scipy.sparse. Nathan, do you > have a clue ? Sorry about that, there was a bug when performing subtraction between sparse and dense matrices. It should be fixed in revision 3190. 
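A minimal check for that subtraction bug (a sketch of a regression test, not the actual unittest committed to scipy):

from numpy import eye, allclose
from scipy.sparse import speye

n = 4
S = speye(n, n)        # sparse identity
D = 2.0 * eye(n)       # plain dense array
# sparse - dense should agree with the all-dense computation
print allclose(S - D, eye(n) - D)   # expect True after r3190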
-- Nathan Bell wnbell at gmail.com From nwagner at iam.uni-stuttgart.de Wed Jul 25 13:43:41 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Jul 2007 19:43:41 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> <46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net> <46A73134.9060005@iam.uni-stuttgart.de> Message-ID: On Wed, 25 Jul 2007 10:36:03 -0700 "Nathan Bell" wrote: > On 7/25/07, Nils Wagner >wrote: >> So, the problem seems to be connected with scipy.sparse. >>Nathan, do you >> have a clue ? > > Sorry about that, there was a bug when performing >subtraction between > sparse and dense matrices. It should be fixed in >revision 3190. > Nathan, Thank you very much! It took me a while to locate the bug. Now it works fine again ! BTW, is there a way to support sparse vectors such that the outer product of sparse vectors yields a sparse matrix ? Nils > -- > Nathan Bell wnbell at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From wnbell at gmail.com Wed Jul 25 14:33:09 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 25 Jul 2007 11:33:09 -0700 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> <46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net> <46A73134.9060005@iam.uni-stuttgart.de> Message-ID: On 7/25/07, Nils Wagner wrote: > Thank you very much! It took me a while to locate > the bug. Now it works fine again ! > BTW, is there a way to support sparse vectors such > that the outer product of sparse vectors yields > a sparse matrix ? If the vector is represented as an Nx1 sparse matrix V, then the outer product is simply V*V.T. Likewise if V is 1xN then the outer product is V.T*V. The existing sparse matrix multiplication code should do these operations efficiently. -- Nathan Bell wnbell at gmail.com From nwagner at iam.uni-stuttgart.de Wed Jul 25 14:58:53 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Jul 2007 20:58:53 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net> <46A710FD.10905@iam.uni-stuttgart.de> <46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net> <46A73134.9060005@iam.uni-stuttgart.de> Message-ID: On Wed, 25 Jul 2007 11:33:09 -0700 "Nathan Bell" wrote: > On 7/25/07, Nils Wagner >wrote: >> Thank you very much! It took me a while to locate >> the bug. Now it works fine again ! >> BTW, is there a way to support sparse vectors such >> that the outer product of sparse vectors yields >> a sparse matrix ? > > If the vector is represented as an Nx1 sparse matrix V, >then the outer > product is simply V*V.T. Likewise if V is 1xN then the >outer product > is V.T*V. The existing sparse matrix multiplication >code should do > these operations efficiently. > Great ! And how can I assign complex entries ? from scipy import * from scipy.sparse import * n = 10 A = sparse.lil_matrix((n,1),complex) A[4,0] = 2. 
A[-1,0] = -4.+1j
print (A*A.T).todense()

Nils

A[-1,0] = -4.+1j
  File "/usr/local/lib64/python2.5/site-packages/scipy/sparse/sparse.py", line 2370, in __setitem__
    x = self.dtype.type(x)
TypeError: can't convert complex to float; use abs(z)

From aisaac at american.edu  Wed Jul 25 15:34:37 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 25 Jul 2007 15:34:37 -0400
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: 
References: <46A65FBE.1090509@ukr.net><20070725084558.GE8728@mentat.za.net><46A710FD.10905@iam.uni-stuttgart.de><46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net><46A73134.9060005@iam.uni-stuttgart.de>
Message-ID: 

Many thanks to Nils for finding and localizing the bug,
Dmitrey for helping to localize it,
and Nathan for fixing it!

It seems like a good idea to keep this around as
a test case. (A stripped down version.)
Nils, any objections?

Cheers,
Alan Isaac

From nwagner at iam.uni-stuttgart.de  Wed Jul 25 15:44:31 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 25 Jul 2007 21:44:31 +0200
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: 
References: <46A65FBE.1090509@ukr.net> <20070725084558.GE8728@mentat.za.net>
	<46A710FD.10905@iam.uni-stuttgart.de> <46A718E0.8020104@iam.uni-stuttgart.de>
	<46A71FBA.5060604@ukr.net> <46A73134.9060005@iam.uni-stuttgart.de>
Message-ID: 

On Wed, 25 Jul 2007 15:34:37 -0400
Alan G Isaac wrote:
> Many thanks to Nils for finding and localizing the bug,
> Dmitrey for helping to localize it,
> and Nathan for fixing it!
>
> It seems like a good idea to keep this around as
> a test case. (A stripped down version.)
> Nils, any objections?

Please feel free to use it. However, the theory for unconstrained optimization connected with eigenproblems can be found in the papers by Auchmuty, Mongeau and Torki. The regression test communicated by Stefan in a previous email is even shorter.

Nils

From openopt at ukr.net  Wed Jul 25 16:08:00 2007
From: openopt at ukr.net (dmitrey)
Date: Wed, 25 Jul 2007 23:08:00 +0300
Subject: [SciPy-dev] Trouble with optimize.fmin_ncg
In-Reply-To: 
References: <46A65FBE.1090509@ukr.net><20070725084558.GE8728@mentat.za.net><46A710FD.10905@iam.uni-stuttgart.de><46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net><46A73134.9060005@iam.uni-stuttgart.de>
Message-ID: <46A7ADA0.80504@ukr.net>

I'm not sure it's a good idea, because it requires having the file containing the matrix + a correct load by scipy.io. I think constructing the matrices explicitly is better. Also, I suspect the func v'Av / v'Bv is non-convex even for B=I. But if you want, I can add the test case (for fmin_ncg) after closing all the other currently open tickets assigned to me.
Regards, D.

Alan G Isaac wrote:
> Many thanks to Nils for finding and localizing the bug,
> Dmitrey for helping to localize it,
> and Nathan for fixing it!
>
> It seems like a good idea to keep this around as
> a test case. (A stripped down version.)
> Nils, any objections?
> > Cheers, > Alan Isaac > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From aisaac at american.edu Wed Jul 25 16:56:37 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 25 Jul 2007 16:56:37 -0400 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: <46A7ADA0.80504@ukr.net> References: <46A65FBE.1090509@ukr.net><20070725084558.GE8728@mentat.za.net><46A710FD.10905@iam.uni-stuttgart.de><46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net><46A73134.9060005@iam.uni-stuttgart.de> <46A7ADA0.80504@ukr.net> Message-ID: On Wed, 25 Jul 2007, dmitrey apparently wrote: > I'm not sure it's a good idea, because it requires having > the file containing matrix + correct load by scipy.io There is another reason not to do this. This was not really a scipy.optimize bug. I suppose the need is rather for a test in scipy.sparse, so Nathan can decide what if anything to do about this. Cheers, Alan Isaac From matthieu.brucher at gmail.com Wed Jul 25 17:03:20 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 25 Jul 2007 23:03:20 +0200 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: References: <46A65FBE.1090509@ukr.net> <46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net> <46A73134.9060005@iam.uni-stuttgart.de> <46A7ADA0.80504@ukr.net> Message-ID: 2007/7/25, Alan G Isaac : > > On Wed, 25 Jul 2007, dmitrey apparently wrote: > > I'm not sure it's a good idea, because it requires having > > the file containing matrix + correct load by scipy.io > > There is another reason not to do this. > This was not really a scipy.optimize bug. > I suppose the need is rather for a test in scipy.sparse, > so Nathan can decide what if anything to do about this. > > Cheers, > Alan Isaac It could also be turned into a more functional test, using several modules. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Wed Jul 25 17:15:50 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 25 Jul 2007 14:15:50 -0700 Subject: [SciPy-dev] Trouble with optimize.fmin_ncg In-Reply-To: References: <46A65FBE.1090509@ukr.net> <46A718E0.8020104@iam.uni-stuttgart.de> <46A71FBA.5060604@ukr.net> <46A73134.9060005@iam.uni-stuttgart.de> <46A7ADA0.80504@ukr.net> Message-ID: On 7/25/07, Alan G Isaac wrote: > There is another reason not to do this. > This was not really a scipy.optimize bug. > I suppose the need is rather for a test in scipy.sparse, > so Nathan can decide what if anything to do about this. The specific error in sparse can (and should) be tested by something simpler than the script Nil's used to demonstrate the problem. I'll add the appropriate unittests for this bug to sparse later today. I'll defer to the optimization people regarding whether to include it in their battery of tests. Personally, I find that testing against a few small, but non-trivial real-world data sets is a good thing. The matrix in question is rather small (9KB) so it's not unreasonable to include it in the source tree. -- Nathan Bell wnbell at gmail.com From wnbell at gmail.com Wed Jul 25 20:31:29 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 25 Jul 2007 17:31:29 -0700 Subject: [SciPy-dev] copy on write? Message-ID: Is there any support for copy on write behavior in numpy? I haven't seen anything to suggest that it does or doesn't exist. 
The particular application I have in mind is as follows: Sparse matrices, such as csr_matrix, are represented by three dense 1D arrays. Two of these arrays encode the structure of the matrix, while one records the nonzero values. Many operations like __abs__ and __neg__ require only a change to the data array and leave the structure arrays unchanged. Therefore, when performing B = abs(A), the two sparse matrices could share the same structure arrays. This sort of optimization is nearly always desirable, especially when B is short-lived (e.g. abs(A).sum(0)). Unfortunately, the user may occasionally decide to change A or B's sparsity structure directly (e.g. an inplace sort on the column indices). Hence, in general one must copy for complete safety. If the CSR structure data were COW, then one could apply such optimizations frequently without fear of atypical usage. Short of COW, can one easily get the refcount of an ndarray (in Python)? -- Nathan Bell wnbell at gmail.com

From robert.kern at gmail.com Wed Jul 25 20:35:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Jul 2007 19:35:37 -0500 Subject: [SciPy-dev] copy on write? In-Reply-To: References: Message-ID: <46A7EC59.6040508@gmail.com> Nathan Bell wrote: > Is there any support for copy on write behavior in numpy? No. > Short of COW, can one easily get the refcount of an ndarray (in Python)? sys.getrefcount(obj). Note that calling sys.getrefcount() adds a reference to obj while it's running, so the usual value is 2. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
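A small sketch of the refcount check Robert describes, applied to the structure-sharing idea from Nathan's post. The arrays and names here are invented stand-ins for illustration, not scipy's actual csr_matrix internals:

    import sys
    import numpy as N

    indptr  = N.array([0, 1, 2])   # stand-ins for csr_matrix structure arrays
    indices = N.array([1, 0])
    data    = N.array([2.0, 3.0])

    shared = data                  # B = abs(A) could alias the structure arrays
    print sys.getrefcount(indptr)  # 2: one real reference + getrefcount's own
    print sys.getrefcount(data)    # 3: `data`, `shared`, + getrefcount's own

A refcount above the baseline of 2 would signal that some other object still holds the array, i.e. that an in-place structure change is unsafe without a copy.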
From david at ar.media.kyoto-u.ac.jp Wed Jul 25 23:07:40 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 26 Jul 2007 12:07:40 +0900 Subject: [SciPy-dev] scipy.stats: test for handling nan values Message-ID: <46A80FFC.6020807@ar.media.kyoto-u.ac.jp> Hi, Trying to solve a few tickets related to nanmean and co, I wanted to add tests for those functions, as well as for the general behaviour of basic statistics functions with nan. Part of the test suite (in test_stats.py) is based on the Statistical quiz for Wilkinson; missing values are not supported. If I finish the test suite by implementing MISSING as nan values, is this conceptually correct or not? I wanted to be sure before committing the change in the test suite (actually, only adding originally disabled tests) cheers, David

From prabhu at aero.iitb.ac.in Tue Jul 24 17:25:37 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Wed, 25 Jul 2007 02:55:37 +0530 Subject: [SciPy-dev] BoF on 3D visualization Message-ID: <18086.28241.432931.478493@gargle.gargle.HOWL> Hello, Gael Varoquaux and myself are planning on holding a "Pythonic 3D visualization" BoF at SciPy07 on Thursday evening. We'd like to know if any of you would be interested in this. If you are, please do let us know. Please also let us know if you have anything in particular you'd like to discuss. Thanks. Cheers, -- Prabhu Ramachandran http://www.aero.iitb.ac.in/~prabhu

From fperez.net at gmail.com Thu Jul 26 01:22:15 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Jul 2007 23:22:15 -0600 Subject: [SciPy-dev] [SciPy-user] BoF on 3D visualization In-Reply-To: <18086.28241.432931.478493@gargle.gargle.HOWL> References: <18086.28241.432931.478493@gargle.gargle.HOWL> Message-ID: On 7/24/07, Prabhu Ramachandran wrote: > Hello, > > Gael Varoquaux and myself are planning on holding a "Pythonic 3D > visualization" BoF at SciPy07 on Thursday evening. We'd like to know > if any of you would be interested in this. If you are, please do let > us know. Please also let us know if you have anything in particular > you'd like to discuss. Count me in. Cheers, f

From openopt at ukr.net Thu Jul 26 03:37:38 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 26 Jul 2007 10:37:38 +0300 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: <20070725142801.GL8728@mentat.za.net> References: <46A5BA87.4060601@ukr.net> <20070725142801.GL8728@mentat.za.net> Message-ID: <46A84F42.4020206@ukr.net> Ok, I committed scipy rev 3193 with +/-numpy.inf instead of +/-HUGE_VAL. I checked it in Python 2.5 and all is OK, but I'm not sure that all OSes, C compilers and hardware platforms will handle numpy.inf correctly. Updates to the documentation and new tests have been brought in from scipy.newoptimize; they seem to be more detailed there. Also, test38fg contains arg2 as a numpy.array, not a Python list, and works correctly (committed earlier). However, maybe x in the tnc output is also better returned as numpy.ndarray, not a Python list? Regards, D. Stefan van der Walt wrote: > Hi Dmitrey > > On Tue, Jul 24, 2007 at 11:38:31AM +0300, dmitrey wrote: > >> hi all, >> So now I'm trying to make all the tests for tnc 1.3 run ok. >> But there is a problem I encountered: >> >> here's the code from tnc.py, lines 210-213: >>
>> for i in range(n):
>>     l,u = bounds[i]
>>     if l is None:
>>         low[i] = -HUGE_VAL
>>
>> So if bounds are for example ([-inf, -1.5], None), as written in >> test1fg(x), it yields the error "None object is not iterable" >> (because it tries to do >> lb, ub = None >> ) >> > > You may want to replace all instances of HUGE_VAL with inf, as was > done in > > http://projects.scipy.org/scipy/scipy/changeset/3037 > > That changeset also updated the documentation and refactored the tests. > > Regards > Stéfan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > >
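A sketch of the None-tolerant bounds normalization dmitrey is after, with numpy.inf in place of the old HUGE_VAL convention. The function name and structure are invented for illustration; only the convention (a bound pair may be None, or contain None entries) comes from the thread:

    from numpy import inf

    def normalize_bounds(bounds, n):
        low, up = [0]*n, [0]*n
        for i in range(n):
            if bounds[i] is None:
                l, u = -inf, inf        # a missing pair means unbounded
            else:
                l, u = bounds[i]
                if l is None: l = -inf  # replaces the old -HUGE_VAL
                if u is None: u = inf
            low[i], up[i] = l, u
        return low, up

This avoids the "None object is not iterable" failure, since a bare None pair is never unpacked.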
From openopt at ukr.net Thu Jul 26 07:44:30 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 26 Jul 2007 14:44:30 +0300 Subject: [SciPy-dev] ticket 416 (gradient in leastsq): some questions Message-ID: <46A8891E.7080608@ukr.net> Hi all, ticket 416 was assigned to me. There are some problems:

1. What's the difference between these 2 funcs from __minpack.h:

int jac_multipack_calling_function(int *n, double *x, double *fvec, double *fjac, int *ldfjac, int *iflag)
int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec, double *fjac, int *ldfjac, int *iflag)

They have the same description:

/* This is the function called from the Fortran code it should
      -- use call_python_function to get a multiarrayobject result
      -- check for errors and return -1 if any
      -- otherwise place result of calculation in *fvec or *fjac.

   If iflag = 1 this should compute the function.
   If iflag = 2 this should compute the jacobian (derivative matrix)
*/

2. So the patch attached to the ticket proposes to rewrite line 152

MATRIXC2F(fjac, result_array->data, *n, *ldfjac)

as

MATRIXC2F(fjac, result_array->data, *ldfjac, *n)

however, line 92 is the same, so maybe it needs the same patch.

3. MATRIXC2F is defined in minpack.h w/o any description:

#define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, *p3=(double *)(data);\
    int i,j;\
    for (j=0;j<(m);p3++,j++) \
        for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \
            *p1 = *p2; }

I have no idea what it does. So I replaced (jac,data,n,m) by (jac,data,m,n), and the user's example works correctly for all cases 1-3:
1) w/o gradient info
2) with gradient info, col_deriv=0
3) with gradient info, col_deriv=1 (in this case I modified the user's gradient func so that it returns the transposed gradient)

scipy.test(1) also didn't yield any bugs related to leastsq. Do you agree to submit the changes to svn? Regards, D.

From openopt at ukr.net Thu Jul 26 10:51:17 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 26 Jul 2007 17:51:17 +0300 Subject: [SciPy-dev] ticket 464 Message-ID: <46A8B4E5.2010806@ukr.net> hi all, Nils proposed that I fix ticket 464 http://projects.scipy.org/scipy/scipy/ticket/464 I found the problem is here:

rank = len(x0.shape)  # So matrix(0.3) yields shape (1,1) here => rank = 2
if not -1 < rank < 2:
    raise ValueError, "Initial guess must be a scalar or rank-1 sequence."

I propose to fix it in this way:

rank = len(x.shape)
if not -1 < rank < 2:
    if x.shape == (1,1): x = x.flatten()  # matrix(0.3) for example
    else: raise ValueError, "Initial guess must be a scalar or rank-1 sequence."

If you don't mind (i.e. no other opinions arrive within several hours), I will commit the change to svn. I found that the same change should be made for optimize.fmin Regards, D.

From openopt at ukr.net Thu Jul 26 10:54:18 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 26 Jul 2007 17:54:18 +0300 Subject: [SciPy-dev] ticket 464 In-Reply-To: <46A8B4E5.2010806@ukr.net> References: <46A8B4E5.2010806@ukr.net> Message-ID: <46A8B59A.4030905@ukr.net> also, I think rank is better replaced by x.ndim. Am I right? Regards, D. dmitrey wrote: > hi all, > Nils proposed that I fix ticket 464 > http://projects.scipy.org/scipy/scipy/ticket/464 > > I found the problem is here: >
> rank = len(x0.shape)  # So matrix(0.3) yields shape (1,1) here => rank = 2
> if not -1 < rank < 2:
>     raise ValueError, "Initial guess must be a scalar or rank-1 sequence."
>
> I propose to fix it in this way: >
> rank = len(x.shape)
> if not -1 < rank < 2:
>     if x.shape == (1,1): x = x.flatten()  # matrix(0.3) for example
>     else: raise ValueError, "Initial guess must be a scalar or rank-1 sequence."
>
> If you don't mind (i.e. no other opinions arrive within several hours), I will commit > the change to svn. > I found that the same change should be made for optimize.fmin > Regards, D. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > >
From matthieu.brucher at gmail.com Thu Jul 26 10:56:30 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 26 Jul 2007 16:56:30 +0200 Subject: [SciPy-dev] ticket 464 In-Reply-To: <46A8B59A.4030905@ukr.net> References: <46A8B4E5.2010806@ukr.net> <46A8B59A.4030905@ukr.net> Message-ID: Instead of flatten(), you can use ravel(), which does not copy the data if it is not needed (perhaps squeeze() is a good candidate too?) Matthieu 2007/7/26, dmitrey : > > also, I think rank is better replaced by x.ndim. > Am I right? > Regards, D. > > > dmitrey wrote: > > hi all, > > Nils proposed that I fix ticket 464 > > http://projects.scipy.org/scipy/scipy/ticket/464 > > > > I found the problem is here: > >
> > rank = len(x0.shape)  # So matrix(0.3) yields shape (1,1) here => rank = 2
> > if not -1 < rank < 2:
> >     raise ValueError, "Initial guess must be a scalar or rank-1 sequence."
> >
> > I propose to fix it in this way: > >
> > rank = len(x.shape)
> > if not -1 < rank < 2:
> >     if x.shape == (1,1): x = x.flatten()  # matrix(0.3) for example
> >     else: raise ValueError, "Initial guess must be a scalar or rank-1 sequence."
> >
> > If you don't mind (i.e. no other opinions arrive within several hours), I will commit > > the change to svn. > > I found that the same change should be made for optimize.fmin > > Regards, D. > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:
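An invented interactive session illustrating the flatten/ravel/squeeze distinction under discussion, plus the flatiter trick Alan suggests below; it assumes numpy's matrix/ndarray subclass semantics of this era:

    >>> import numpy as N
    >>> x = N.matrix(0.3)
    >>> x.shape
    (1, 1)
    >>> x.flatten().shape            # still a matrix: flatten() keeps the subclass
    (1, 1)
    >>> x.ravel().shape              # ravel() avoids the copy, but is still 2-d here
    (1, 1)
    >>> N.asarray(x).squeeze().shape # squeezing a plain array gives a true scalar shape
    ()
    >>> x.flat.next()                # the flatiter yields the underlying scalar
    0.29999999999999999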
From aisaac at american.edu Thu Jul 26 11:15:40 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 26 Jul 2007 11:15:40 -0400 Subject: [SciPy-dev] BUG gradient in leastsq (http://projects.scipy.org/scipy/scipy/ticket/416) In-Reply-To: <46A8891E.7080608@ukr.net> References: <46A8891E.7080608@ukr.net> Message-ID: On Thu, 26 Jul 2007, dmitrey apparently wrote: > 1. What's the difference between these 2 funcs from __minpack.h: > int jac_multipack_calling_function(int *n, double *x, double *fvec, > double *fjac, int *ldfjac, int *iflag) > int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec, > double *fjac, int *ldfjac, int *iflag) > They have the same description. > /* This is the function called from the Fortran code it should > -- use call_python_function to get a multiarrayobject result > -- check for errors and return -1 if any > -- otherwise place result of calculation in *fvec or *fjac. > If iflag = 1 this should compute the function. > If iflag = 2 this should compute the jacobian (derivative matrix) > */ > 2. So the patch attached to the ticket proposes to rewrite line 152 > MATRIXC2F(fjac, result_array->data, *n, *ldfjac) > as > MATRIXC2F(fjac, result_array->data, *ldfjac, *n) > however, line 92 is the same, so maybe it needs the same patch. > 3. MATRIXC2F is defined in minpack.h w/o any description: >
> #define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, *p3=(double *)(data);\
>     int i,j;\
>     for (j=0;j<(m);p3++,j++) \
>         for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \
>             *p1 = *p2; }
>
> I have no idea what it does. > So I replaced (jac,data,n,m) by (jac,data,m,n), and the user's example works > correctly for all cases 1-3: > 1) w/o gradient info > 2) with gradient info, col_deriv=0 > 3) with gradient info, col_deriv=1 (in this case I modified the user's > gradient func so that it returns the transposed gradient) > scipy.test(1) also didn't yield any bugs related to leastsq. Andy, and others, can you comment on this? Also, I would hope for Travis to comment before committing these changes, since he contributed the code. Here is the situation: http://projects.scipy.org/scipy/scipy/ticket/416 Andy identified a bug and a possible fix. (But see details above.) Neither Dmitrey nor I are familiar with this code. Dmitrey is willing to apply the patch if there is support for his doing so, but it will be done "mechanically". In any case, he will add a unit test exposing the problem. Cheers, Alan Isaac

From aisaac at american.edu Thu Jul 26 11:33:15 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 26 Jul 2007 11:33:15 -0400 Subject: [SciPy-dev] ticket 464 In-Reply-To: <46A8B4E5.2010806@ukr.net> References: <46A8B4E5.2010806@ukr.net> Message-ID: On Thu, 26 Jul 2007, dmitrey apparently wrote: > rank = len(x.shape) > if not -1 < rank < 2: >     if x.shape == (1,1): x = x.flatten()  # matrix(0.3) for example >     else: raise ValueError, "Initial guess must be a scalar or rank-1 sequence." That won't work. A flattened matrix is still a matrix. But you can work with the flatiter representation.

if max(x.shape) == 1: x = x.flat.next()

Cheers, Alan Isaac

From aisaac at american.edu Thu Jul 26 11:44:30 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 26 Jul 2007 11:44:30 -0400 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: <46A84F42.4020206@ukr.net> References: <46A5BA87.4060601@ukr.net> <20070725142801.GL8728@mentat.za.net><46A84F42.4020206@ukr.net> Message-ID: On Thu, 26 Jul 2007, dmitrey apparently wrote: > maybe x in the tnc output is also better returned as numpy.ndarray That seems right to me. Nils? Cheers, Alan Isaac

From nwagner at iam.uni-stuttgart.de Thu Jul 26 11:43:08 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Jul 2007 17:43:08 +0200 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: References: <46A5BA87.4060601@ukr.net> <20070725142801.GL8728@mentat.za.net><46A84F42.4020206@ukr.net> Message-ID: <46A8C10C.6060406@iam.uni-stuttgart.de> Alan G Isaac wrote: > On Thu, 26 Jul 2007, dmitrey apparently wrote: > >> maybe x in the tnc output is also better returned as numpy.ndarray >> > > That seems right to me. > Nils? > > +1 Nils

From bsouthey at gmail.com Thu Jul 26 12:13:43 2007 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 26 Jul 2007 11:13:43 -0500 Subject: [SciPy-dev] scipy.stats: test for handling nan values In-Reply-To: <46A80FFC.6020807@ar.media.kyoto-u.ac.jp> References: <46A80FFC.6020807@ar.media.kyoto-u.ac.jp> Message-ID: Hi, Yes, I would agree that treating missing as nan would achieve the desired results if the stats functions are set to ignore nan. But it is not technically correct to treat missing as nan, because nan can also arise from valid operations (like 0/0), not only from missing data. Treating missing as nan really makes the bad assumption that the person using those functions 'knows' this difference. One solution is actually using something like masked arrays, because a user can set their own coding for missing values. I think there is a very related thread that was discussed some time ago on the Numpy list with the title 'Re: ndarray.fill and ma.array.filled' by Sasha: http://projects.scipy.org/pipermail/numpy-discussion/2006-April/007438.html Regards Bruce On 7/25/07, David Cournapeau wrote: > Hi, > > Trying to solve a few tickets related to nanmean and co, I wanted to > add tests for those functions, as well as for the general behaviour of basic > statistics functions with nan. Part of the test suite (in test_stats.py) > is based on the Statistical quiz for Wilkinson; missing values are not > supported. If I finish the test suite by implementing MISSING as nan > values, is this conceptually correct or not? I wanted to be sure before > committing the change in the test suite (actually, only adding > originally disabled tests) > > cheers, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev >
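A minimal sketch of the masked-array alternative Bruce mentions, assuming the ma module shipped with numpy of this era (numpy.core.ma, later numpy.ma); the data values and the -999.0 missing code are made up:

    import numpy.core.ma as ma    # the masked-array module of this era

    x = ma.masked_values([1.0, -999.0, 3.0], -999.0)  # -999.0 codes "missing"
    print ma.average(x)   # 2.0 -- the masked entry is simply excluded

Unlike nan, the mask cannot be produced accidentally by arithmetic, so "missing" and "invalid result" stay distinguishable.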
From openopt at ukr.net Thu Jul 26 13:30:58 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 26 Jul 2007 20:30:58 +0300 Subject: [SciPy-dev] ticket 464 In-Reply-To: References: <46A8B4E5.2010806@ukr.net> Message-ID: <46A8DA52.5080503@ukr.net> Alan G Isaac wrote: > On Thu, 26 Jul 2007, dmitrey apparently wrote: > >> rank = len(x.shape) >> if not -1 < rank < 2: >>     if x.shape == (1,1): x = x.flatten()  # matrix(0.3) for example >>     else: raise ValueError, "Initial guess must be a scalar or rank-1 sequence." >> > > That won't work. A flattened matrix is still a matrix. > the matrix is converted to an array before the check:

def fmin_powell(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None,
                maxfun=None, full_output=0, disp=1, retall=0, callback=None,
                direc=None):
    ...
    x = asfarray(x0).copy()
    ...
    rank = len(x.shape)
    if not -1 < rank < 2:
        if x.shape == (1,1): x = x.flatten()  # matrix(0.3) for example
        else: raise ValueError, "Initial guess must be a scalar or rank-1 sequence."

So I checked: it works. Regards, D > But you can work with the flatiter representation. > > if max(x.shape) == 1: x = x.flat.next() > > Cheers, > Alan Isaac > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > >

From aisaac at american.edu Thu Jul 26 14:39:21 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 26 Jul 2007 14:39:21 -0400 Subject: [SciPy-dev] ticket 464 In-Reply-To: <46A8DA52.5080503@ukr.net> References: <46A8B4E5.2010806@ukr.net> <46A8DA52.5080503@ukr.net> Message-ID: > Alan G Isaac wrote: >> That won't work. A flattened matrix is still a matrix. >> But you can work with the flatiter representation. >> >> if max(x.shape) == 1: x = x.flat.next() On Thu, 26 Jul 2007 20:30:58 +0300 dmitrey apparently wrote: > the matrix is converted to an array before the check: > So I checked: it works OK, I see. But I still like the above solution: it's more general and in the context less confusing. Or so I claim. ;-) Anyway, pick what you like. Cheers, Alan Isaac

From openopt at ukr.net Thu Jul 26 13:54:39 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 26 Jul 2007 20:54:39 +0300 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: <46A8C10C.6060406@iam.uni-stuttgart.de> References: <46A5BA87.4060601@ukr.net> <20070725142801.GL8728@mentat.za.net><46A84F42.4020206@ukr.net> <46A8C10C.6060406@iam.uni-stuttgart.de> Message-ID: <46A8DFDF.9080008@ukr.net> Nils Wagner wrote: > Alan G Isaac wrote: > >> On Thu, 26 Jul 2007, dmitrey apparently wrote: >> >>> maybe x in the tnc output is also better returned as numpy.ndarray >>> >> That seems right to me. >> Nils? >> >> > +1 > > Nils > > committed (revision 3198). D
From david at ar.media.kyoto-u.ac.jp Fri Jul 27 03:57:04 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 27 Jul 2007 16:57:04 +0900 Subject: [SciPy-dev] Deprecating packages Message-ID: <46A9A550.3080400@ar.media.kyoto-u.ac.jp> Hi, In the context of releasing a new scipy tarball soon, I was wondering what is the most appropriate approach to deprecate toolboxes (in sandbox, not standard packages of course). 4 packages in scipy.sandbox are now in scikits.learn (pyem, svm, ann and ga). What I did for pyem is to throw an import error in the __init__ with instructions on where to get the new code. Can I do the same for the other packages (those were not "mine", so I didn't want to change anything before feedback)? cheers, David

From robert.kern at gmail.com Fri Jul 27 04:49:54 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Jul 2007 03:49:54 -0500 Subject: [SciPy-dev] Deprecating packages In-Reply-To: <46A9A550.3080400@ar.media.kyoto-u.ac.jp> References: <46A9A550.3080400@ar.media.kyoto-u.ac.jp> Message-ID: <46A9B1B2.1020902@gmail.com> David Cournapeau wrote: > Hi, > > In the context of releasing a new scipy tarball soon, I was > wondering what is the most appropriate approach to deprecate toolboxes > (in sandbox, not standard packages of course). 4 packages in > scipy.sandbox are now in scikits.learn (pyem, svm, ann and ga). What I > did for pyem is to throw an import error in the __init__ with > instructions on where to get the new code. Can I do the same for the other > packages (those were not "mine", so I didn't want to change anything > before feedback)? Sandbox packages have no guarantees of stability, buildability, or continued existence. You can just move them without bothering with any kind of deprecation step. That said, it would be a good idea to talk to the authors of the packages that aren't yours. As I haven't heard from Fred Mailhot since the end of the SoC project for ann, I'll stand in and say go for it. I'd kinda like to see ga go back to scipy.ga if it's working again instead of moving into scikits.learn.machine (I'm sorry I missed this aspect of the previous discussion). I don't think a general-purpose function optimizer should be "hidden" in a machine learning package. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Fri Jul 27 04:48:27 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 27 Jul 2007 17:48:27 +0900 Subject: [SciPy-dev] Deprecating packages In-Reply-To: <46A9B1B2.1020902@gmail.com> References: <46A9A550.3080400@ar.media.kyoto-u.ac.jp> <46A9B1B2.1020902@gmail.com> Message-ID: <46A9B15B.3040001@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: >> Hi, >> >> In the context of releasing a new scipy tarball soon, I was >> wondering what is the most appropriate approach to deprecate toolboxes >> (in sandbox, not standard packages of course). 4 packages in >> scipy.sandbox are now in scikits.learn (pyem, svm, ann and ga). What I >> did for pyem is to throw an import error in the __init__ with >> instructions on where to get the new code. Can I do the same for the other >> packages (those were not "mine", so I didn't want to change anything >> before feedback)? > > Sandbox packages have no guarantees of stability, buildability, or continued > existence. You can just move them without bothering with any kind of deprecation > step. That said, it would be a good idea to talk to the authors of the packages > that aren't yours. > > As I haven't heard from Fred Mailhot since the end of the SoC project for ann, > I'll stand in and say go for it. I'd kinda like to see ga go back to scipy.ga if > it's working again ga does not work, I think. I can remove ga from scikits.learn; it is still in the sandbox. So I will contact A. Strasheim for svm. David
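A minimal sketch of the deprecation stub David describes for pyem; the exact message and the new package location given here are assumptions, not his actual code:

    # scipy/sandbox/pyem/__init__.py -- hypothetical deprecation stub
    raise ImportError(
        "pyem has moved to the scikits.learn machine-learning package. "
        "Please get the new code from the scikits repository; "
        "this sandbox copy is no longer maintained.")

Since any "import scipy.sandbox.pyem" runs __init__.py, the raise guarantees old client code fails loudly with pointers to the new home rather than silently using stale code.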
From stefan at sun.ac.za Fri Jul 27 08:57:35 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 27 Jul 2007 14:57:35 +0200 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: <46A84F42.4020206@ukr.net> References: <46A5BA87.4060601@ukr.net> <20070725142801.GL8728@mentat.za.net> <46A84F42.4020206@ukr.net> Message-ID: <20070727125735.GA7447@mentat.za.net> Hi Dmitrey On Thu, Jul 26, 2007 at 10:37:38AM +0300, dmitrey wrote: > Updates to the documentation and new tests have been brought in from > scipy.newoptimize; they seem to be more detailed there. Also, > test38fg contains arg2 as a numpy.array, not a Python list, and works > correctly (committed earlier). Your patch caused two regressions: 1. The patch I referred to reformatted the documentation to use restructured text, which you changed back to plain text. 2. The patch corrected the order of the returned arguments, which now again does not correspond to the documentation. Regards Stéfan

From openopt at ukr.net Fri Jul 27 13:21:42 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 27 Jul 2007 20:21:42 +0300 Subject: [SciPy-dev] connecting tnc 1.3 In-Reply-To: <20070727125735.GA7447@mentat.za.net> References: <46A5BA87.4060601@ukr.net> <20070725142801.GL8728@mentat.za.net> <46A84F42.4020206@ukr.net> <20070727125735.GA7447@mentat.za.net> Message-ID: <46AA29A6.8060409@ukr.net> please check rev. 3204; is everything correct now? Regards, D Stefan van der Walt wrote: > Hi Dmitrey > > On Thu, Jul 26, 2007 at 10:37:38AM +0300, dmitrey wrote: > >> Updates to the documentation and new tests have been brought in from >> scipy.newoptimize; they seem to be more detailed there. Also, >> test38fg contains arg2 as a numpy.array, not a Python list, and works >> correctly (committed earlier). >> > > Your patch caused two regressions: > > 1. The patch I referred to reformatted the documentation to use > restructured text, which you changed back to plain text. > > 2. The patch corrected the order of the returned arguments, which now > again does not correspond to the documentation. > > Regards > Stéfan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > >
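For reference, a usage sketch of the return convention the thread is arguing about, assuming the documented order (x, nfeval, rc) for fmin_tnc and the interface where the objective returns both its value and gradient; the objective itself is invented:

    from scipy.optimize import fmin_tnc

    def f(x):
        fx = x[0]**2        # a simple quadratic objective
        g = [2*x[0]]        # its gradient
        return fx, g

    x, nfeval, rc = fmin_tnc(f, [3.0], bounds=[(-10, 10)])
    print x, nfeval, rc     # solution, evaluation count, return code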
From openopt at ukr.net Fri Jul 27 15:30:30 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 27 Jul 2007 22:30:30 +0300 Subject: [SciPy-dev] GSoC weekly report + need discuss of ticket 285 Message-ID: <46AA47D6.2010408@ukr.net> hi all, so http://projects.scipy.org/scipy/scipy/ticket/464 (optimize.fmin_powell doesn't accept a matrix input for the initial guess) is fixed in rev 3199. For http://projects.scipy.org/scipy/scipy/ticket/416 (MATRIXC2F transposing the wrong way in optimize.leastsq) I will commit my patch if nothing better is proposed within a few hours. One more change in tnc: the returned x value (the optimum point) is now a numpy.array, not a Python list. Also, it now consumes x0 as a numpy.array, not a Python list (so http://projects.scipy.org/scipy/scipy/ticket/384 should be closed). About a day was spent on a bug related to fmin_ncg, but it finally turned out to be related to sparse matrices. So the last ticket left is the bracket parameters one (http://projects.scipy.org/scipy/scipy/ticket/285). Alan Isaac proposed a new format for brent (very preliminary, of course), see below. So before implementing something like that I want to hear your opinions: is this the best way, or should something be implemented in another way? As for me, I consider the benefits of these changes very questionable. Regards, D.

def brent2(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500):
    """ copy brent docstring here """
    brent = Brent(func=func, args=args, tol=tol, maxiter=maxiter)
    brent.set_bracket(brack=brack)
    brent.optimize()
    return brent.get_result(full_output=full_output)

class Brent:
    #need to rethink design of __init__
    def __init__(self, func, args=(), tol=1.48e-8, maxiter=500):
        self.func = func
        self.args = args
        self.tol = tol
        self.maxiter = maxiter
        self._mintol = 1.0e-11
        self._cg = 0.3819660
        self.xmin = None
        self.fval = None
        self.iter = 0
        self.funcalls = 0
        #etc......

    #need to rethink design of set_bracket (new options, etc)
    def set_bracket(self, brack=None):
        self.brack = brack

    def get_bracket_info(self):
        #set up
        func = self.func
        args = self.args
        brack = self.brack
        ### BEGIN core bracket_info code ###
        ### carefully DOCUMENT any CHANGES in core ##
        if brack is None:
            xa,xb,xc,fa,fb,fc,funcalls = bracket(func, args=args)
        elif len(brack) == 2:
            xa,xb,xc,fa,fb,fc,funcalls = bracket(func, xa=brack[0], xb=brack[1], args=args)
        elif len(brack) == 3:
            xa,xb,xc = brack
            if (xa > xc):  # swap so xa < xc can be assumed
                dum = xa; xa=xc; xc=dum
            assert ((xa < xb) and (xb < xc)), "Not a bracketing interval."
            fa = func(*((xa,)+args))
            fb = func(*((xb,)+args))
            fc = func(*((xc,)+args))
            assert ((fb < fa) and (fb < fc)), "Not a bracketing interval."
            funcalls = 3
        else:
            raise ValueError, "Bracketing interval must be length 2 or 3 sequence."
        ### END core bracket_info code ###
        return xa,xb,xc,fa,fb,fc,funcalls

    def optimize(self):
        #set up for optimization
        func = self.func
        args = self.args
        xa,xb,xc,fa,fb,fc,funcalls = self.get_bracket_info()
        _mintol = self._mintol
        _cg = self._cg
        #################################
        #BEGIN CORE ALGORITHM
        #################################
        x=w=v=xb
        fw=fv=fx=func(*((x,)+args))
        if (xa < xc):
            a = xa; b = xc
        else:
            a = xc; b = xa
        deltax = 0.0
        funcalls = 1
        iter = 0
        while (iter < self.maxiter):
            tol1 = self.tol*abs(x) + _mintol
            tol2 = 2.0*tol1
            xmid = 0.5*(a+b)
            if abs(x-xmid) < (tol2-0.5*(b-a)):  # check for convergence
                xmin=x; fval=fx
                break
            if (abs(deltax) <= tol1):
                if (x>=xmid): deltax=a-x        # do a golden section step
                else: deltax=b-x
                rat = _cg*deltax
            else:                               # do a parabolic step
                tmp1 = (x-w)*(fx-fv)
                tmp2 = (x-v)*(fx-fw)
                p = (x-v)*tmp2 - (x-w)*tmp1;
                tmp2 = 2.0*(tmp2-tmp1)
                if (tmp2 > 0.0): p = -p
                tmp2 = abs(tmp2)
                dx_temp = deltax
                deltax = rat
                # check parabolic fit
                if ((p > tmp2*(a-x)) and (p < tmp2*(b-x)) and (abs(p) < abs(0.5*tmp2*dx_temp))):
                    rat = p*1.0/tmp2            # if parabolic step is useful.
                    u = x + rat
                    if ((u-a) < tol2 or (b-u) < tol2):
                        if xmid-x >= 0: rat = tol1
                        else: rat = -tol1
                else:
                    if (x>=xmid): deltax=a-x    # if it's not do a golden section step
                    else: deltax=b-x
                    rat = _cg*deltax

            if (abs(rat) < tol1):               # update by at least tol1
                if rat >= 0: u = x + tol1
                else: u = x - tol1
            else:
                u = x + rat
            fu = func(*((u,)+args))             # calculate new output value
            funcalls += 1

            if (fu > fx):                       # if it's bigger than current
                if (u < x): a = u
                else: b = u
                if (fu <= fw) or (w == x):
                    v=w; w=u; fv=fw; fw=fu
                elif (fu <= fv) or (v == x) or (v == w):
                    v=u; fv=fu
            else:
                if (u >= x): a = x
                else: b = x
                v=w; w=x; x=u
                fv=fw; fw=fx; fx=fu

            iter += 1
        #################################
        #END CORE ALGORITHM
        #################################
        self.xmin = x
        self.fval = fx
        self.iter = iter
        self.funcalls = funcalls

    def get_result(self, full_output=False):
        if full_output:
            return self.xmin, self.fval, self.iter, self.funcalls
        else:
            return self.xmin

From strawman at astraw.com Mon Jul 30 01:19:48 2007 From: strawman at astraw.com (Andrew Straw) Date: Sun, 29 Jul 2007 22:19:48 -0700 Subject: [SciPy-dev] Launchpad hosting: an option for scipy? In-Reply-To: References: Message-ID: <46AD74F4.7010104@astraw.com> Thanks for the tip. From my reading of the press release, one will still have to know just as much about making .debs, it's just that they'll be automatically built on the target architectures and hosted on launchpad.net. (Which is nevertheless very cool.) Anyhow, I've applied to participate in the beta system and will hopefully soon be hosting my .debs on their server. -Andrew Fernando Perez wrote: > Hi all, > > I just wanted to mention this I ran across: > > http://arstechnica.com/news.ars/post/20070724-ars-at-ubuntu-live-new-launchpad-service-automatically-builds-and-hosts-user-packages.html > > It sounds like it could provide an interesting, low-overhead solution > for hosting .debs. I know Andrew (at least) has hosted his privately,
I know Andrew (at least) has hosted his privately, > but this appears to lower the logistical bar for others to do the same > without having to know quite as much about deb packaging as Andrew > does. > > Just a thought. > > f > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From scipy2mdjhs78c at jenningsstory.com Mon Jul 30 01:58:01 2007 From: scipy2mdjhs78c at jenningsstory.com (Andy Jennings) Date: Sun, 29 Jul 2007 22:58:01 -0700 Subject: [SciPy-dev] BUG gradient in leastsq (http://projects.scipy.org/scipy/scipy/ticket/416) In-Reply-To: References: <46A8891E.7080608@ukr.net> Message-ID: <149ddc5e0707292258w88c1c59k7696e1c7f5aa7f54@mail.gmail.com> On 7/26/07, Alan G Isaac wrote: > On Thu, 26 Jul 2007, dmitrey apparently wrote: > > 1. What's the difference between these 2 funcs from __minpack.h: > > int jac_multipack_calling_function(int *n, double *x, double *fvec, > > double *fjac, int *ldfjac, int *iflag) > > int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec, > > double *fjac, int *ldfjac, int *iflag) > > > They have same description. > > > /* This is the function called from the Fortran code it should > > -- use call_python_function to get a multiarrayobject result > > -- check for errors and return -1 if any > > -- otherwise place result of calculation in *fvec or *fjac. > > > If iflag = 1 this should compute the function. > > If iflag = 2 this should compute the jacobian (derivative matrix) > > */ > > > 2. So patch assigned to the ticket proposes to rewrite the line 152 > > MATRIXC2F(fjac, result_array->data, *n, *ldfjac) > > as > > MATRIXC2F(fjac, result_array->data, *ldfjac, *n) > > > however, line 92 is the same. so maybe it needs same patch. > > > 3. The MATRIXC2F is defined in minpack.h w/o any description: > > #define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, > > *p3=(double *)(data);\ > > int i,j;\ > > for (j=0;j<(m);p3++,j++) \ > > for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \ > > *p1 = *p2; } > > > I have no idea what does it do. > > So I replaced (jac,data,n,m) by (jac,data,m,n), and user's example works > > correctly for all cases 1-3: > > 1) w/o gradient info > > 2) with gradient info, col_deriv=0 > > 3) with gradient info, col_deriv=1 (in the case I modified the user's > > gradient func so that it returns transposed gradient) > > > scipy.test(1) also didn't yield any bugs related to leastsq. > > > > Andy, and others, can you comment on this? > Also, I would hope for Travis to comment > before committing these changes, since he > contributed the code. > > Here is the situation: > http://projects.scipy.org/scipy/scipy/ticket/416 > Andy identified a bug and a possible fix. > (But see details above.) > Neither Dmitrey nor I are familiar with this code. > Dmitrey is willing to apply the patch if there is support > for his doing so, but it will be done "mechancially". > In any case, he will add a unit test exposing the problem. > > Cheers, > Alan Isaac > > > > > Thanks for looking at this bug. The fix looks fine to me. I don't really have an opinion on the jac_multipack_calling_function question. I think it's likely that it has the same issue, but on the chance that it doesn't, you hate to risk breaking something that's working correctly. 
From openopt at ukr.net Mon Jul 30 02:15:27 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 30 Jul 2007 09:15:27 +0300 Subject: [SciPy-dev] wiki page HowToDocument is broken Message-ID: <46AD81FF.6070200@ukr.net> hi all, the page http://projects.scipy.org/scipy/numpy/wiki/HowToDocument that is referred to from http://projects.scipy.org/scipy/numpy/wiki/DocstringStandards needs a bugfix ("A problem occurred in a Python script..."); the full output is attached below:

*SubversionException* Python 2.4.3: /usr/bin/python Mon Jul 30 01:11:01 2007

A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.

  /var/www/cgi-bin/build/bdist.linux-i686/egg/tracrst/macro.py in *render_macro*(self=, req=, name=u'ReST', args=u'trunk/numpy/doc/HOWTO_DOCUMENT.txt')
  /var/www/cgi-bin/build/bdist.linux-i686/egg/tracrst/svn_helper.py in *cat*(self=, path='file:///home/scipy/svn/numpytrunk/numpy/doc/HOWTO_DOCUMENT.txt', rev=)

*SubversionException*: ('Unable to open an ra_local session to URL', 180001)
    apr_err = 180001
    args = ('Unable to open an ra_local session to URL', 180001)

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From fperez.net at gmail.com Mon Jul 30 02:29:14 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 30 Jul 2007 00:29:14 -0600 Subject: [SciPy-dev] wiki page HowToDocument is broken In-Reply-To: <46AD81FF.6070200@ukr.net> References: <46AD81FF.6070200@ukr.net> Message-ID: On 7/30/07, dmitrey wrote: > > hi all, > the page > http://projects.scipy.org/scipy/numpy/wiki/HowToDocument > that is referred to from > http://projects.scipy.org/scipy/numpy/wiki/DocstringStandards > needs a bugfix ("A problem occurred in a Python script..."); the full output is > attached below: > I wonder if this has anything to do with the fact that the Moin site has been very slow this weekend, and throwing frequent errors when trying to edit pages. In the end it worked, but it felt like something wasn't quite happy in there. And ssh-ing directly into the machine is also *very* slow right now. A quick top shows:

top - 01:27:36 up 126 days, 7:59, 2 users, load average: 20.11, 14.59, 12.91
Tasks: 165 total, 4 running, 161 sleeping, 0 stopped, 0 zombie
Cpu(s): 33.1% us, 4.1% sy, 0.8% ni, 0.2% id, 61.0% wa, 0.2% hi, 0.7% si
Mem: 2075100k total, 2020196k used, 54904k free, 15040k buffers
Swap: 4192944k total, 1201472k used, 2991472k free, 232596k cached

That's kind of a high load... And I see several python and trac.cgi processes eating up quite a bit of cpu. Cheers, f -------------- next part -------------- An HTML attachment was scrubbed... URL:

From eric at enthought.com Mon Jul 30 03:47:05 2007 From: eric at enthought.com (eric jones) Date: Mon, 30 Jul 2007 02:47:05 -0500 Subject: [SciPy-dev] wiki page HowToDocument is broken In-Reply-To: References: <46AD81FF.6070200@ukr.net> Message-ID: <46AD9779.80608@enthought.com> It's the middle of the night here right now. Jeff will look into this ASAP in the morning. Sorry for the problems. eric Fernando Perez wrote: > On 7/30/07, *dmitrey* > wrote: > > hi all, > the page > http://projects.scipy.org/scipy/numpy/wiki/HowToDocument > that is referred to from > http://projects.scipy.org/scipy/numpy/wiki/DocstringStandards > needs a bugfix ("A problem occurred in a Python script..."); the full > output is attached below: > > > > I wonder if this has anything to do with the fact that the Moin site > has been very slow this weekend, and throwing frequent errors when > trying to edit pages.
In the end it worked, but it felt like > something wasn't quite happy in there. > > And ssh-ing directly into the machine is also *very* slow right now. > A quick top shows: > > top - 01:27:36 up 126 days, 7:59, 2 users, load average: 20.11, > 14.59, 12.91 > Tasks: 165 total, 4 running, 161 sleeping, 0 stopped, 0 zombie > Cpu(s): 33.1% us, 4.1% sy, 0.8% ni, 0.2% id, 61.0% wa, 0.2% hi, > 0.7% si > Mem: 2075100k total, 2020196k used, 54904k free, 15040k buffers > Swap: 4192944k total, 1201472k used, 2991472k free, 232596k cached > > That's kind of a high load... And I see several python and trac.cgi > processes eating up quite a bit of cpu. > > Cheers, > > f > ------------------------------------------------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From bsouthey at gmail.com Mon Jul 30 09:47:06 2007 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 30 Jul 2007 08:47:06 -0500 Subject: [SciPy-dev] BUG gradient in leastsq (http://projects.scipy.org/scipy/scipy/ticket/416) In-Reply-To: <149ddc5e0707292258w88c1c59k7696e1c7f5aa7f54@mail.gmail.com> References: <46A8891E.7080608@ukr.net> <149ddc5e0707292258w88c1c59k7696e1c7f5aa7f54@mail.gmail.com> Message-ID: Hi, My 0.5 cents on this, as I don't use it, is that I do think the issue lies elsewhere than here. In my 'old' scipy0.5.2 code, MATRIXC2F is ONLY used if multipack_jac_transpose ==1 by the three functions: Lib/optimize/__minpack.h, h jac_multipack_lm_function and jac_multipack_calling_function Lib/integrate/__odepack.h the ode_jacobian_function Also, it may be that the expectation is not correct ie that case 2) with gradient info, col_deriv=0 is switched with 3) with gradient info, col_deriv=1 since, my limited take on the C code, is that col_deriv changes multipack_jac_transpose. Bruce On 7/30/07, Andy Jennings wrote: > On 7/26/07, Alan G Isaac wrote: > > On Thu, 26 Jul 2007, dmitrey apparently wrote: > > > 1. What's the difference between these 2 funcs from __minpack.h: > > > int jac_multipack_calling_function(int *n, double *x, double *fvec, > > > double *fjac, int *ldfjac, int *iflag) > > > int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec, > > > double *fjac, int *ldfjac, int *iflag) > > > > > They have same description. > > > > > /* This is the function called from the Fortran code it should > > > -- use call_python_function to get a multiarrayobject result > > > -- check for errors and return -1 if any > > > -- otherwise place result of calculation in *fvec or *fjac. > > > > > If iflag = 1 this should compute the function. > > > If iflag = 2 this should compute the jacobian (derivative matrix) > > > */ > > > > > 2. So patch assigned to the ticket proposes to rewrite the line 152 > > > MATRIXC2F(fjac, result_array->data, *n, *ldfjac) > > > as > > > MATRIXC2F(fjac, result_array->data, *ldfjac, *n) > > > > > however, line 92 is the same. so maybe it needs same patch. > > > > > 3. The MATRIXC2F is defined in minpack.h w/o any description: > > > #define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, > > > *p3=(double *)(data);\ > > > int i,j;\ > > > for (j=0;j<(m);p3++,j++) \ > > > for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \ > > > *p1 = *p2; } > > > > > I have no idea what does it do. 
> > > So I replaced (jac,data,n,m) by (jac,data,m,n), and user's example works > > > correctly for all cases 1-3: > > > 1) w/o gradient info > > > 2) with gradient info, col_deriv=0 > > > 3) with gradient info, col_deriv=1 (in the case I modified the user's > > > gradient func so that it returns transposed gradient) > > > > > scipy.test(1) also didn't yield any bugs related to leastsq. > > > > > > > > Andy, and others, can you comment on this? > > Also, I would hope for Travis to comment > > before committing these changes, since he > > contributed the code. > > > > Here is the situation: > > http://projects.scipy.org/scipy/scipy/ticket/416 > > Andy identified a bug and a possible fix. > > (But see details above.) > > Neither Dmitrey nor I are familiar with this code. > > Dmitrey is willing to apply the patch if there is support > > for his doing so, but it will be done "mechancially". > > In any case, he will add a unit test exposing the problem. > > > > Cheers, > > Alan Isaac > > > > > > > > > > > > Thanks for looking at this bug. The fix looks fine to me. > > I don't really have an opinion on the jac_multipack_calling_function > question. I think it's likely that it has the same issue, but on the > chance that it doesn't, you hate to risk breaking something that's > working correctly. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From jstrunk at enthought.com Mon Jul 30 10:37:13 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Mon, 30 Jul 2007 09:37:13 -0500 Subject: [SciPy-dev] wiki page HowToDocument is broken In-Reply-To: References: <46AD81FF.6070200@ukr.net> Message-ID: <200707300937.13635.jstrunk@enthought.com> These things are not related. The problem with HowToDocument was a missing / in the ReST macro path. I added it, and the page works now. However, there are a few ReST errors in the file. The load average is much lower now, as well. Thanks, Jeff On Monday 30 July 2007 1:29 am, Fernando Perez wrote: > On 7/30/07, dmitrey wrote: > > hi all, > > the page > > http://projects.scipy.org/scipy/numpy/wiki/HowToDocument > > that is referred from > > http://projects.scipy.org/scipy/numpy/wiki/DocstringStandards > > need bugfix ("A problem occurred in a Python script..."), full output is > > attached below: > > I wonder if this has anything to do with the fact that the Moin site has > been very slow this weekend, and throwing frequent errors when trying to > edit pages. In the end it worked, but it felt like something wasn't quite > happy in there. > > And ssh-ing directly into the machine is also *very* slow right now. A > quick top shows: > > top - 01:27:36 up 126 days, 7:59, 2 users, load average: 20.11, 14.59, > 12.91 > Tasks: 165 total, 4 running, 161 sleeping, 0 stopped, 0 zombie > Cpu(s): 33.1% us, 4.1% sy, 0.8% ni, 0.2% id, 61.0% wa, 0.2% hi, 0.7% > si Mem: 2075100k total, 2020196k used, 54904k free, 15040k buffers > Swap: 4192944k total, 1201472k used, 2991472k free, 232596k cached > > That's kind of a high load... And I see several python and > trac.cgiprocesses eating up quite a bit of cpu. 
> > Cheers, > > f From openopt at ukr.net Mon Jul 30 11:05:32 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 30 Jul 2007 18:05:32 +0300 Subject: [SciPy-dev] BUG gradient in leastsq (http://projects.scipy.org/scipy/scipy/ticket/416) In-Reply-To: References: <46A8891E.7080608@ukr.net> <149ddc5e0707292258w88c1c59k7696e1c7f5aa7f54@mail.gmail.com> Message-ID: <46ADFE3C.9030909@ukr.net> Bruce Southey wrote: > Hi, > My 0.5 cents on this, as I don't use it, is that I do think the issue > lies elsewhere than here. > > In my 'old' scipy0.5.2 code, MATRIXC2F is ONLY used if > multipack_jac_transpose ==1 > > by the three functions: > Lib/optimize/__minpack.h, h jac_multipack_lm_function and > jac_multipack_calling_function > Lib/integrate/__odepack.h the ode_jacobian_function > > Also, it may be that the expectation is not correct ie that case 2) > with gradient info, col_deriv=0 is switched with 3) with gradient > info, col_deriv=1 since, my limited take on the C code, is that > col_deriv changes multipack_jac_transpose. > > Bruce > the func MATRIXC2F is defined twice: in Lib/optimize/minpack.h for Lib/optimize/__minpack.h and in Lib/integrate/multipack.h for Lib/integrate/__odepack.h I didn't know anything about latter, but my changes didn't affect the Lib/integrate package. On the other hand, now scipy has 2 separate definition of MATRIXC2F, one with params (jac, data, m,n) and other with (jac,dataq, n, m). I don't know anything is there are any mistakes in integrate package, but having defined two funcs MATRIXC2F with different args isn't very good idea. Afaik there are no bugs for now, I checked all 3 cases mentioned: 1) w/o gradient info 2) with gradient info, col_deriv=0 3) with gradient info, col_deriv=1 (in the case I modified the user's gradient func so that it returns transposed gradient) Or do you have an example of incorrect leastsq work? If yes, please send the one. Regards, D. > > > On 7/30/07, Andy Jennings wrote: > >> On 7/26/07, Alan G Isaac wrote: >> >>> On Thu, 26 Jul 2007, dmitrey apparently wrote: >>> >>>> 1. What's the difference between these 2 funcs from __minpack.h: >>>> int jac_multipack_calling_function(int *n, double *x, double *fvec, >>>> double *fjac, int *ldfjac, int *iflag) >>>> int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec, >>>> double *fjac, int *ldfjac, int *iflag) >>>> >>>> They have same description. >>>> >>>> /* This is the function called from the Fortran code it should >>>> -- use call_python_function to get a multiarrayobject result >>>> -- check for errors and return -1 if any >>>> -- otherwise place result of calculation in *fvec or *fjac. >>>> >>>> If iflag = 1 this should compute the function. >>>> If iflag = 2 this should compute the jacobian (derivative matrix) >>>> */ >>>> >>>> 2. So patch assigned to the ticket proposes to rewrite the line 152 >>>> MATRIXC2F(fjac, result_array->data, *n, *ldfjac) >>>> as >>>> MATRIXC2F(fjac, result_array->data, *ldfjac, *n) >>>> >>>> however, line 92 is the same. so maybe it needs same patch. >>>> >>>> 3. The MATRIXC2F is defined in minpack.h w/o any description: >>>> >>> > #define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, >>> >>>> *p3=(double *)(data);\ >>>> int i,j;\ >>>> for (j=0;j<(m);p3++,j++) \ >>>> for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \ >>>> *p1 = *p2; } >>>> >>>> I have no idea what does it do. 
>>>> So I replaced (jac,data,n,m) by (jac,data,m,n), and user's example works >>>> correctly for all cases 1-3: >>>> 1) w/o gradient info >>>> 2) with gradient info, col_deriv=0 >>>> 3) with gradient info, col_deriv=1 (in the case I modified the user's >>>> gradient func so that it returns transposed gradient) >>>> >>>> scipy.test(1) also didn't yield any bugs related to leastsq. >>>> >>> >>> Andy, and others, can you comment on this? >>> Also, I would hope for Travis to comment >>> before committing these changes, since he >>> contributed the code. >>> >>> Here is the situation: >>> http://projects.scipy.org/scipy/scipy/ticket/416 >>> Andy identified a bug and a possible fix. >>> (But see details above.) >>> Neither Dmitrey nor I are familiar with this code. >>> Dmitrey is willing to apply the patch if there is support >>> for his doing so, but it will be done "mechancially". >>> In any case, he will add a unit test exposing the problem. >>> >>> Cheers, >>> Alan Isaac >>> >>> >>> >>> >>> >>> >> Thanks for looking at this bug. The fix looks fine to me. >> >> I don't really have an opinion on the jac_multipack_calling_function >> question. I think it's likely that it has the same issue, but on the >> chance that it doesn't, you hate to risk breaking something that's >> working correctly. >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> >> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From bsouthey at gmail.com Mon Jul 30 11:47:58 2007 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 30 Jul 2007 10:47:58 -0500 Subject: [SciPy-dev] BUG gradient in leastsq (http://projects.scipy.org/scipy/scipy/ticket/416) In-Reply-To: <46ADFE3C.9030909@ukr.net> References: <46A8891E.7080608@ukr.net> <149ddc5e0707292258w88c1c59k7696e1c7f5aa7f54@mail.gmail.com> <46ADFE3C.9030909@ukr.net> Message-ID: Hi, I would agree that having multiple different MATRIXC2F is incorrect which is why I made the reply. To me (and which is probably incorrect), MATRIXC2F is just converting a C matrix to a Fortran matrix. I do think that the multipack_jac_transpose flag is being real issue so it would be easier to change the condition that calls MATRIXC2F rather than it's definition. So does changing the condition (multipack_jac_transpose==1) prior to calling MATRIXC2F have the same result as redefining MATRIXC2F? Bruce On 7/30/07, dmitrey wrote: > Bruce Southey wrote: > > Hi, > > My 0.5 cents on this, as I don't use it, is that I do think the issue > > lies elsewhere than here. > > > > In my 'old' scipy0.5.2 code, MATRIXC2F is ONLY used if > > multipack_jac_transpose ==1 > > > > by the three functions: > > Lib/optimize/__minpack.h, h jac_multipack_lm_function and > > jac_multipack_calling_function > > Lib/integrate/__odepack.h the ode_jacobian_function > > > > Also, it may be that the expectation is not correct ie that case 2) > > with gradient info, col_deriv=0 is switched with 3) with gradient > > info, col_deriv=1 since, my limited take on the C code, is that > > col_deriv changes multipack_jac_transpose. > > > > Bruce > > > the func MATRIXC2F is defined twice: > in Lib/optimize/minpack.h for Lib/optimize/__minpack.h > and > in Lib/integrate/multipack.h for Lib/integrate/__odepack.h > I didn't know anything about latter, but my changes didn't affect the > Lib/integrate package. 
On the other hand, now scipy has 2 separate > definition of MATRIXC2F, one with params (jac, data, m,n) and other with > (jac,dataq, n, m). I don't know anything is there are any mistakes in > integrate package, but having defined two funcs MATRIXC2F with different > args isn't very good idea. > Afaik there are no bugs for now, I checked all 3 cases mentioned: > > 1) w/o gradient info > 2) with gradient info, col_deriv=0 > 3) with gradient info, col_deriv=1 (in the case I modified the user's > gradient func so that it returns transposed gradient) > > Or do you have an example of incorrect leastsq work? > If yes, please send the one. > Regards, D. > > > > > > > > On 7/30/07, Andy Jennings wrote: > > > >> On 7/26/07, Alan G Isaac wrote: > >> > >>> On Thu, 26 Jul 2007, dmitrey apparently wrote: > >>> > >>>> 1. What's the difference between these 2 funcs from __minpack.h: > >>>> int jac_multipack_calling_function(int *n, double *x, double *fvec, > >>>> double *fjac, int *ldfjac, int *iflag) > >>>> int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec, > >>>> double *fjac, int *ldfjac, int *iflag) > >>>> > >>>> They have same description. > >>>> > >>>> /* This is the function called from the Fortran code it should > >>>> -- use call_python_function to get a multiarrayobject result > >>>> -- check for errors and return -1 if any > >>>> -- otherwise place result of calculation in *fvec or *fjac. > >>>> > >>>> If iflag = 1 this should compute the function. > >>>> If iflag = 2 this should compute the jacobian (derivative matrix) > >>>> */ > >>>> > >>>> 2. So patch assigned to the ticket proposes to rewrite the line 152 > >>>> MATRIXC2F(fjac, result_array->data, *n, *ldfjac) > >>>> as > >>>> MATRIXC2F(fjac, result_array->data, *ldfjac, *n) > >>>> > >>>> however, line 92 is the same. so maybe it needs same patch. > >>>> > >>>> 3. The MATRIXC2F is defined in minpack.h w/o any description: > >>>> > >>> > #define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, > >>> > >>>> *p3=(double *)(data);\ > >>>> int i,j;\ > >>>> for (j=0;j<(m);p3++,j++) \ > >>>> for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \ > >>>> *p1 = *p2; } > >>>> > >>>> I have no idea what does it do. > >>>> So I replaced (jac,data,n,m) by (jac,data,m,n), and user's example works > >>>> correctly for all cases 1-3: > >>>> 1) w/o gradient info > >>>> 2) with gradient info, col_deriv=0 > >>>> 3) with gradient info, col_deriv=1 (in the case I modified the user's > >>>> gradient func so that it returns transposed gradient) > >>>> > >>>> scipy.test(1) also didn't yield any bugs related to leastsq. > >>>> > >>> > >>> Andy, and others, can you comment on this? > >>> Also, I would hope for Travis to comment > >>> before committing these changes, since he > >>> contributed the code. > >>> > >>> Here is the situation: > >>> http://projects.scipy.org/scipy/scipy/ticket/416 > >>> Andy identified a bug and a possible fix. > >>> (But see details above.) > >>> Neither Dmitrey nor I are familiar with this code. > >>> Dmitrey is willing to apply the patch if there is support > >>> for his doing so, but it will be done "mechancially". > >>> In any case, he will add a unit test exposing the problem. > >>> > >>> Cheers, > >>> Alan Isaac > >>> > >>> > >>> > >>> > >>> > >>> > >> Thanks for looking at this bug. The fix looks fine to me. > >> > >> I don't really have an opinion on the jac_multipack_calling_function > >> question. 
I think it's likely that it has the same issue, but on the > >> chance that it doesn't, you hate to risk breaking something that's > >> working correctly. > >> _______________________________________________ > >> Scipy-dev mailing list > >> Scipy-dev at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-dev > >> > >> > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From openopt at ukr.net Mon Jul 30 15:26:52 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 30 Jul 2007 22:26:52 +0300 Subject: [SciPy-dev] BUG gradient in leastsq (http://projects.scipy.org/scipy/scipy/ticket/416) In-Reply-To: References: <46A8891E.7080608@ukr.net> <149ddc5e0707292258w88c1c59k7696e1c7f5aa7f54@mail.gmail.com> <46ADFE3C.9030909@ukr.net> Message-ID: <46AE3B7C.50409@ukr.net> I guess yes, but didn't try that ones, because redefining multipack_jac_transpose will yield even more misunderstood than having 2 MATRIXC2F funcs from different scipy toolkits. multipack_jac_transpose = 1 if and only if user-defined variable col_deriv is set to 1, and I'm sure this should be remained. from leastsq docstring: col_deriv -- non-zero to specify that the Jacobian function computes derivatives down the columns (faster, because there is no transpose operation). (default col_deriv value is zero) so col_deriv=0 means "no transpose", col_deriv=1 means "do transpose". Regards, D. Bruce Southey wrote: > Hi, > > I would agree that having multiple different MATRIXC2F is incorrect > which is why I made the reply. To me (and which is probably > incorrect), MATRIXC2F is just converting a C matrix to a Fortran > matrix. I do think that the multipack_jac_transpose flag is being > real issue so it would be easier to change the condition that calls > MATRIXC2F rather than it's definition. > > So does changing the condition (multipack_jac_transpose==1) prior to > calling MATRIXC2F have the same result as redefining MATRIXC2F? > > Bruce > > > On 7/30/07, dmitrey wrote: > >> Bruce Southey wrote: >> >>> Hi, >>> My 0.5 cents on this, as I don't use it, is that I do think the issue >>> lies elsewhere than here. >>> >>> In my 'old' scipy0.5.2 code, MATRIXC2F is ONLY used if >>> multipack_jac_transpose ==1 >>> >>> by the three functions: >>> Lib/optimize/__minpack.h, h jac_multipack_lm_function and >>> jac_multipack_calling_function >>> Lib/integrate/__odepack.h the ode_jacobian_function >>> >>> Also, it may be that the expectation is not correct ie that case 2) >>> with gradient info, col_deriv=0 is switched with 3) with gradient >>> info, col_deriv=1 since, my limited take on the C code, is that >>> col_deriv changes multipack_jac_transpose. >>> >>> Bruce >>> >>> >> the func MATRIXC2F is defined twice: >> in Lib/optimize/minpack.h for Lib/optimize/__minpack.h >> and >> in Lib/integrate/multipack.h for Lib/integrate/__odepack.h >> I didn't know anything about latter, but my changes didn't affect the >> Lib/integrate package. On the other hand, now scipy has 2 separate >> definition of MATRIXC2F, one with params (jac, data, m,n) and other with >> (jac,dataq, n, m). I don't know anything is there are any mistakes in >> integrate package, but having defined two funcs MATRIXC2F with different >> args isn't very good idea. 
>> AFAIK there are no bugs for now; I checked all 3 cases mentioned:
>>
>> 1) w/o gradient info
>> 2) with gradient info, col_deriv=0
>> 3) with gradient info, col_deriv=1 (in this case I modified the user's
>> gradient func so that it returns the transposed gradient)
>>
>> Or do you have an example of leastsq working incorrectly?
>> If so, please send it.
>> Regards, D.
>>
>>
>>> On 7/30/07, Andy Jennings wrote:
>>>
>>>
>>>> On 7/26/07, Alan G Isaac wrote:
>>>>
>>>>
>>>>> On Thu, 26 Jul 2007, dmitrey apparently wrote:
>>>>>
>>>>>
>>>>>> 1. What's the difference between these 2 funcs from __minpack.h:
>>>>>> int jac_multipack_calling_function(int *n, double *x, double *fvec,
>>>>>> double *fjac, int *ldfjac, int *iflag)
>>>>>> int jac_multipack_lm_function(int *m, int *n, double *x, double *fvec,
>>>>>> double *fjac, int *ldfjac, int *iflag)
>>>>>>
>>>>>> They have the same description:
>>>>>>
>>>>>> /* This is the function called from the Fortran code; it should
>>>>>> -- use call_python_function to get a multiarrayobject result
>>>>>> -- check for errors and return -1 if any
>>>>>> -- otherwise place result of calculation in *fvec or *fjac.
>>>>>>
>>>>>> If iflag = 1 this should compute the function.
>>>>>> If iflag = 2 this should compute the jacobian (derivative matrix)
>>>>>> */
>>>>>>
>>>>>> 2. So the patch assigned to the ticket proposes to rewrite line 152
>>>>>> MATRIXC2F(fjac, result_array->data, *n, *ldfjac)
>>>>>> as
>>>>>> MATRIXC2F(fjac, result_array->data, *ldfjac, *n)
>>>>>>
>>>>>> however, line 92 is the same, so maybe it needs the same patch.
>>>>>>
>>>>>> 3. MATRIXC2F is defined in minpack.h w/o any description:
>>>>>>
>>>>>> #define MATRIXC2F(jac,data,n,m) {double *p1=(double *)(jac), *p2, \
>>>>>> *p3=(double *)(data);\
>>>>>> int i,j;\
>>>>>> for (j=0;j<(m);p3++,j++) \
>>>>>> for (p2=p3,i=0;i<(n);p2+=(m),i++,p1++) \
>>>>>> *p1 = *p2; }
>>>>>>
>>>>>> I have no idea what it does.
>>>>>> So I replaced (jac,data,n,m) by (jac,data,m,n), and the user's example
>>>>>> works correctly for all cases 1-3:
>>>>>> 1) w/o gradient info
>>>>>> 2) with gradient info, col_deriv=0
>>>>>> 3) with gradient info, col_deriv=1 (in this case I modified the user's
>>>>>> gradient func so that it returns the transposed gradient)
>>>>>>
>>>>>> scipy.test(1) also didn't yield any bugs related to leastsq.
>>>>>>
>>>>>
>>>>> Andy, and others, can you comment on this?
>>>>> Also, I would hope for Travis to comment
>>>>> before committing these changes, since he
>>>>> contributed the code.
>>>>>
>>>>> Here is the situation:
>>>>> http://projects.scipy.org/scipy/scipy/ticket/416
>>>>> Andy identified a bug and a possible fix.
>>>>> (But see details above.)
>>>>> Neither Dmitrey nor I are familiar with this code.
>>>>> Dmitrey is willing to apply the patch if there is support
>>>>> for his doing so, but it will be done "mechanically".
>>>>> In any case, he will add a unit test exposing the problem.
>>>>>
>>>>> Cheers,
>>>>> Alan Isaac
>>>>>
>>>>>
>>>> Thanks for looking at this bug. The fix looks fine to me.
>>>>
>>>> I don't really have an opinion on the jac_multipack_calling_function
>>>> question. I think it's likely that it has the same issue, but on the
>>>> chance that it doesn't, you hate to risk breaking something that's
>>>> working correctly.
>>>> _______________________________________________
>>>> Scipy-dev mailing list
>>>> Scipy-dev at scipy.org
>>>> http://projects.scipy.org/mailman/listinfo/scipy-dev
>>>>
>>> _______________________________________________
>>> Scipy-dev mailing list
>>> Scipy-dev at scipy.org
>>> http://projects.scipy.org/mailman/listinfo/scipy-dev
>>>
>> _______________________________________________
>> Scipy-dev mailing list
>> Scipy-dev at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-dev
>>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From openopt at ukr.net  Mon Jul 30 15:33:46 2007
From: openopt at ukr.net (dmitrey)
Date: Mon, 30 Jul 2007 22:33:46 +0300
Subject: [SciPy-dev] ticket 285 (changes in optimize.brent)
Message-ID: <46AE3D1A.2060202@ukr.net>

hi all,
rev. 3209 contains changes to optimize.brent (I implemented the brent
class in the way proposed by Alan Isaac, plus 4 tests related to brent
were added). So I guess ticket 285 should be closed.
Also, I have changed some docstrings related to brent and some other
funcs:

If bracket is two numbers (a, c), then they are assumed to be a starting
interval for a downhill bracket search (see bracket); *it doesn't always
mean that the obtained solution will satisfy a <= x <= c*.

(As for me, some weeks ago I was surprised to obtain a solution from
outside of the (a, c) interval. MATLAB has only one func, fminbound,
that yields a solution strictly from the given interval, but some
scipy.optimize line-search routines use the interval strictly, while
some others, like brent, use it only as a starting interval for the
bracket search, and this is not properly described in the
documentation.)
Regards, D.

From zxd at bu.edu  Tue Jul 31 22:02:57 2007
From: zxd at bu.edu (Xuedong Zhang)
Date: Tue, 31 Jul 2007 22:02:57 -0400
Subject: [SciPy-dev] Save complex value ndarray
Message-ID: <46AFE9D1.3000402@bu.edu>

Hi,
Thanks to all the people who made this wonderful software.
I am just switching from matlab and still need to save some complex
data into the mat file format.

The version I have (the scipy.io.mio.savemat function) only saves mat
v4 files and doesn't support complex data. I wonder if there is any
recent work on this in SVN, or do I have to work out my own solution?

Thanks
Xuedong

From david at ar.media.kyoto-u.ac.jp  Tue Jul 31 22:03:52 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 01 Aug 2007 11:03:52 +0900
Subject: [SciPy-dev] Save complex value ndarray
In-Reply-To: <46AFE9D1.3000402@bu.edu>
References: <46AFE9D1.3000402@bu.edu>
Message-ID: <46AFEA08.5080503@ar.media.kyoto-u.ac.jp>

Xuedong Zhang wrote:
> Hi,
> Thanks to all the people who made this wonderful software.
> I am just switching from matlab and still need to save some complex
> data into the mat file format.
>
> The version I have (the scipy.io.mio.savemat function) only saves mat
> v4 files and doesn't support complex data. I wonder if there is any
> recent work on this in SVN, or do I have to work out my own solution?
>
I don't have scipy 0.5.2 at hand now, but I can confirm that
scipy.io.savemat from svn scipy can save complex arrays into mat files,
which are loadable in matlab.

David
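
A minimal round-trip check of what David describes above, assuming a
scipy recent enough to ship the updated scipy.io writer; the file name
complex_demo.mat and the variable name z below are arbitrary choices
for illustration, not anything from the thread:

    import numpy as np
    from scipy.io import savemat, loadmat

    # A small complex array to push through the MATLAB file format.
    z = np.array([[1 + 2j, 3 - 4j],
                  [0 + 5j, 6 + 0j]])

    savemat('complex_demo.mat', {'z': z})    # write a dict of name -> array
    back = loadmat('complex_demo.mat')['z']  # read it back

    assert back.dtype.kind == 'c'            # still complex after the round trip
    assert np.allclose(back, z)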
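
For the ticket 416 thread earlier in this digest: a pure-Python sketch
of what the MATRIXC2F macro appears to do, based only on reading the
macro as quoted there (an interpretation, not the scipy source). It
repacks an n-by-m row-major (C-order) buffer into column-major
(Fortran) order, and it shows why passing the dimensions in the wrong
order garbles any non-square Jacobian while leaving square ones intact:

    import numpy as np

    def matrixc2f(data, n, m):
        # Model of MATRIXC2F(jac, data, n, m): walk down each of the m
        # columns of an n-by-m row-major buffer (stride m) and write the
        # elements out sequentially, producing the Fortran-order layout.
        flat = np.asarray(data, dtype=float).ravel(order='C')
        out = np.empty(n * m)
        k = 0
        for j in range(m):            # for each column j...
            for i in range(n):        # ...step down it with stride m
                out[k] = flat[j + i * m]
                k += 1
        return out

    a = np.arange(6.0).reshape(2, 3)             # a 2x3 C-order matrix

    # Correct dimensions: the result is the Fortran-order copy of a.
    assert np.array_equal(matrixc2f(a, 2, 3), a.ravel(order='F'))

    # Swapped dimensions, as in the unpatched call: the elements come out
    # in the wrong order for any non-square matrix.
    assert not np.array_equal(matrixc2f(a, 3, 2), a.ravel(order='F'))

    # For square matrices the swap is harmless, which is presumably why
    # the bug could go unnoticed for a while.
    b = np.arange(4.0).reshape(2, 2)
    assert np.array_equal(matrixc2f(b, 2, 2), b.ravel(order='F'))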
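
And for dmitrey's ticket 285 note above about brent's bracket: a small
demonstration against the scipy.optimize API (the function and numbers
are made up for illustration). brent treats (a, c) only as a starting
interval for the downhill bracket search, so it may return a minimizer
outside it, whereas fminbound enforces the bounds:

    from scipy.optimize import brent, fminbound

    f = lambda x: (x - 5.0) ** 2           # minimum at x = 5, outside (0, 1)

    x_brent = brent(f, brack=(0.0, 1.0))   # bracket search walks downhill out of (0, 1)
    x_bound = fminbound(f, 0.0, 1.0)       # fminbound stays inside [0, 1]

    print(x_brent)   # ~5.0 -- outside the initial (0, 1) interval
    print(x_bound)   # ~1.0 -- stops at the bound, up to tolerance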