From fred.jen at web.de Fri Jun 1 05:41:54 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Fri, 01 Jun 2007 11:41:54 +0200 Subject: [SciPy-dev] Problems with compilling a bjam under ubuntu Message-ID: <1180690914.9108.4.camel@muli> I know, that this could be a lil bit offtopic, but maybe not. I have to compile c++-Library under ubuntu with boost.python. Did anyone worked like this too? There always appears a warning, that no Jamfile can be found. If this is senseless noise in the mailing-list i am really sry, but i didn't find any solution until now. Kind Regards, Fred Jendrzejewski From pearu at cens.ioc.ee Fri Jun 1 05:26:27 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 1 Jun 2007 12:26:27 +0300 (EEST) Subject: [SciPy-dev] Severe installation problems In-Reply-To: <465F57CA.3060803@ee.byu.edu> References: <465E76EE.5020505@iam.uni-stuttgart.de> <465E7A84.6060409@cens.ioc.ee> <465F57CA.3060803@ee.byu.edu> Message-ID: <51282.84.202.199.60.1180689987.squirrel@cens.ioc.ee> On Fri, June 1, 2007 2:18 am, Travis Oliphant wrote: > Pearu Peterson wrote: > >>This was caused by Changeset 3845 in numpy/distutils/misc_util.py. >>Travis, what problems did you have >>with data-files in top-level package directory? >> >> > I'm not sure if you've fixed this or not, but the problem I had was > that I could not add data_files that were in the top-level package > name-space no matter what I tried. > > Look specifically in numpy/setup.py to see the addition of > COMPATIBILITY, scipy_compatibility, and site.cfg.example. Yes, I noticed that and have fixed in changeset 3848. This problem was introduced when someone cleaned up the code.. I don't know if you are using mc or not for viewing the content of sdist generated tar-ball, but I have noticed that sometimes, when there is no changes in tar-ball file name, the mc uses old cache to show the content of tar-ball. Copying the tar-ball to another directory and viewing from there, shows the correct content of the tar-ball. Pearu From erin.sheldon at gmail.com Fri Jun 1 08:40:21 2007 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Fri, 1 Jun 2007 08:40:21 -0400 Subject: [SciPy-dev] Binary i/o package In-Reply-To: <331116dc0705301214k4b38c97ocdb7e266d6c6873a@mail.gmail.com> References: <331116dc0705301214k4b38c97ocdb7e266d6c6873a@mail.gmail.com> Message-ID: <331116dc0706010540n6059ae78xf2ac0d54ad351ecb@mail.gmail.com> The overwhelming silence tells me that either no one here thinks this is relevant or no one bothered reading the email. I feel like the functionality I have written into this package is so basic it belongs in scipy io if not in numpy itself. Please give me some feedback one way or another. If it just seems irrelevant then I may just look into making it a scikits package. Erin On 5/30/07, Erin Sheldon wrote: > Hi all - > > The tofile() and fromfile() methods are excellent for reading and > writing arrays to disk, and there are good tools in scipy for ascii > input/output. > > I often find myself writing huge binary files to disk and then wanting > to extract particular rows and columns from that file. It is natural > to associate the fields of a numpy array with the fields in the file, > which may be inhomogeneous. The ability to extract this information > is straightforward to code in C/C++ and brings the file close to a > database in functionality without all the overhead of working with a > full database or pytables. 
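(For readers joining this thread: the tofile()/fromfile() round trip Erin starts from looks roughly like the sketch below. The record layout and file name are invented for illustration; the point is that fromfile() pulls in every record, which is the all-or-nothing limitation being discussed.)

import numpy

# Hypothetical record layout and file name, for illustration only.
row_dtype = numpy.dtype([('id', 'i4'), ('ra', 'f8'), ('dec', 'f8'), ('flux', 'f4')])

data = numpy.zeros(1000, dtype=row_dtype)
data.tofile('catalog.bin')                     # write the whole array as raw records

back = numpy.fromfile('catalog.bin', dtype=row_dtype)   # reads *every* record back
flux = back['flux'][500:600]                   # row/field selection happens only in memory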
> > I have written a simple C++ numpy extension for reading binary data > into a numpy array, with the ability to select rows and fields > (columns). One enters the dtype that describes each row in > list-of-tuples form and the code creates a numpy array (with perhaps a > subset of the fields), reads in the requested data, and returns the > result. Pretty simple. > > I feel like this is a pretty generic and useful type of operation, and > if people agree I think it could go into the scipy io subpackage. > > The package is called readfields currently; it contains the > readfields.so from the C++ code as well as simple_format.py which > contains modules create files with a simple self-describing header and > data written using tofile()) and a read function which parses the > header and uses readfields to extract subsets of data. > > Anyone interested in trying it out can get the package here: > > http://sdss.physics.nyu.edu/esheldon/python/code/ > > Erin > From openopt at ukr.net Fri Jun 1 16:28:31 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 01 Jun 2007 23:28:31 +0300 Subject: [SciPy-dev] small benchmark of LP solvers [from GSoC project] Message-ID: <4660816F.4090106@ukr.net> An HTML attachment was scrubbed... URL: From amcmorl at gmail.com Fri Jun 1 19:09:37 2007 From: amcmorl at gmail.com (Angus McMorland) Date: Sat, 2 Jun 2007 11:09:37 +1200 Subject: [SciPy-dev] scipy compilation error on RHEL4 In-Reply-To: <465F93F2.1040009@gmail.com> References: <465F93F2.1040009@gmail.com> Message-ID: On 01/06/07, Robert Kern wrote: > Angus McMorland wrote: > > Hi list, > > > > I'm trying to install scipy from svn source on our dual Intel Xeon > > x86_64 server, which due to administrative reasons beyond my control > > runs RHEL4, and for which I don't have root access, so I'm doing a > > home dir install. I'm following the instructions in INSTALL.txt, and > > have, I think successfully compiled LAPACK and ATLAS. In the scipy > > compilation step, I get the following error: > > > > /usr/bin/ld: /home/raid/amcmorl/lib/atlas/libptf77blas.a(dscal.o): > > relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used > > when making a shared object; recompile with -fPIC > > /usr/bin/ld: final link failed: Bad value > > collect2: ld returned 1 exit status > > /usr/bin/ld: /home/raid/amcmorl/lib/atlas/libptf77blas.a(dscal.o): > > relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used > > when making a shared object; recompile with -fPIC > > /usr/bin/ld: final link failed: Bad value > > collect2: ld returned 1 exit status > > error: Command "/usr/bin/g77 -g -Wall -shared > > build/temp.linux-x86_64-2.3/Lib/integrate/_odepackmodule.o > > -L/home/raid/amcmorl/lib/atlas -Lbuild/temp.linux-x86_64-2.3 -lodepack > > -llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lg2c -o > > build/lib.linux-x86_64-2.3/scipy/integrate/_odepack.so" failed with > > exit status 1 > > > > I'm not sure what precisely it is that needs to be recompiled with the > > -fPIC flag. Any advice on how to proceed would be appreciated. > > ATLAS needs to be recompiled with -fPIC. Okay, then I don't know how to do that correctly. I tried, in the ATLAS configuration script, adding -fPIC to the c-compiler flags, which resulted, in the makefile, in: CCFLAG0 = -fomit-frame-pointer -O3 -funroll-all-loops -fPIC and MMFLAGS = -fomit-frame-pointer -O -fPIC and recompiled atlas, but still got the same error when compiling scipy. What's the correct approach? Thanks again, Angus. 
--
AJC McMorland, PhD Student
Physiology, University of Auckland

From peridot.faceted at gmail.com  Sun Jun  3 14:48:57 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sun, 3 Jun 2007 14:48:57 -0400
Subject: [SciPy-dev] Binary i/o package
In-Reply-To: <331116dc0706010540n6059ae78xf2ac0d54ad351ecb@mail.gmail.com>
References: <331116dc0705301214k4b38c97ocdb7e266d6c6873a@mail.gmail.com>
	<331116dc0706010540n6059ae78xf2ac0d54ad351ecb@mail.gmail.com>
Message-ID:

On 01/06/07, Erin Sheldon wrote:
> The overwhelming silence tells me that either no one here thinks this
> is relevant or no one bothered reading the email. I feel like the
> functionality I have written into this package is so basic it belongs
> in scipy io if not in numpy itself. Please give me some feedback one
> way or another.
>
> If it just seems irrelevant then I may just look into making it a
> scikits package.

I'm not trying to knock your work, but it's not clear to me that
there's enough room between readarray/writearray/tofile/fromfile and
pytables to accommodate another package. Maybe I don't see what your
package does, but why wouldn't I just install pytables instead? What
are its advantages and disadvantages compared to pytables?

Anne

From dahl.joachim at gmail.com  Sun Jun  3 16:28:30 2007
From: dahl.joachim at gmail.com (Joachim Dahl)
Date: Sun, 3 Jun 2007 22:28:30 +0200
Subject: [SciPy-dev] small benchmark of LP solvers [from GSoC project]
In-Reply-To: <4660816F.4090106@ukr.net>
References: <4660816F.4090106@ukr.net>
Message-ID: <47347f490706031328i2509207euf71f09c001db774@mail.gmail.com>

If you use the native Python LP solver in CVXOPT for sparse problems, make
sure you specify the matrices as sparse, as this conversion is not done
automatically in the solvers.

I couldn't tell from your post what comparisons you're doing, but just in
case...

Joachim

On 6/1/07, dmitrey wrote:
>
> hi,
> for all who are interested:
> take a look at the puny benchmark of LP solvers available in my GSoC
> openopt module (provided CVXOPT with glpk and mosek is installed, as well
> as lp_solve).
> I shall try to connect the one to scikits during the next week (and switch
> for some weeks to other work, assigned to my GSoC milestones).
> 1st number is time elapsed (seconds), 2nd - cputime (opeopt bindings take
> less than 1-2% of the time)
>
> LP name      nVars  nConstr  density  cvxopt_lp  cvxopt_glpk   cvxopt_mosek  LPSolve
>                                       LGPL       GPL2          non-free      LGPL
> trig          500    500     1        10.1/9.7   1.1/1.1       N/A(2)        0.4/0.4
> trig          500   1000     1        16/15.9    3.5/3.4       N/A(2)        0.76/0.72
> trig         1000   1000     1        99/98      6.8/6.7       N/A(2)        1.67/1.64
> trig         1500   3000     1        479/470    177(1)/83     N/A(2)        7.9/7.7
> trig_sparse   500    500     0.1      0.39/0.38  0.39/0.38     N/A(2)        0.16/0.17
> trig_sparse   500   1000     0.1      4.83/4.64  0.83/0.82     N/A(2)        0.34/0.3
> trig_sparse  1000   1000     0.1      12.7/12.5  1.88/1.86     N/A(2)        0.64/0.62
> trig_sparse  1500   3000     0.1      131/128    17.5(1)/12.3  N/A(2)        3.91/3.81
>
> (1): swap encountered (I have 1 Gb)
> (2): internal mosek error (maybe problems with my AMD Athlon 64 3800+ X2)
>
> you should also take into account lb bounds used, it gives additional
> nVars constraints.
> some benchmarks of glpk are also available at > http://plato.asu.edu/ftp/lpfree.html > mosek is available at http://plato.asu.edu/ftp/lpcom.html > > all the solvers are available via single interface, let me paste data from > the LP() doc (excuse my English): > LP: constructor for Linear Problem assignment > valid calls are: > p = LP(f, ) > p = LP(f=objFunVector, ) > p = LP(f, A=A, Aeq=Aeq, Awhole=Awhole, b=b, beq=beq, bwhole=bwhole, > dwhole=dwhole, lb=lb, ub=ub) > > NB! Constraints can be separated in many ways, > either AX <= b, Aeq X = beq (MATLAB-, CVXOPT- style), > or Awhole X {< | = | >} bwhole (glpk-, lp_solve- and many other > software style), > or any mix of them > > INPUTS: > f: size n x 1 vector > A: size m1 x n matrix, subjected to A * x <= b > Aeq: size m2 x n matrix, subjected to Aeq * x = beq > Awhole: size m3 x n matrix, subjected to Awhole * x { < | = | > } > bwhole > b, beq, bwhole: corresponding vectors with lengthes m1, m2, m3 > dwhole: vector of length m3 from {-1,0,1}, descriptor, sign of what > (Awhole*x_opt - bwhole) should be equal to > (this will simplify translating from other languages to Python and > reduce the amount of mistakes > as well as amount of additional code lines) > > OUTPUT: OpenOpt LP class instance > > Solving of LPs is performed via > r = p.solve(string_name_of_solver) > r.xf - desired solution (NaNs if a problem occured) > r.ff - objFun value () (NaN if a problem occured) > (see also other fields, such as CPUTimeElapsed, TimeElapsed, etc) > Currently string_name_of_solver can be: > LPSolve (LGPL) - requires lpsolve + Python bindings installations (all > mentioned is available in http://sourceforge.net/projects/lpsolve) > cvxopt_lp (GPL) - requires CVXOPT (http://abel.ee.ucla.edu/cvxopt) > cvxopt_glpk(GPL2) - requires CVXOPT(http://abel.ee.ucla.edu/cvxopt) & > glpk (www.gnu.org/software/glpk) > cvxopt_mosek(commercial) - requires CVXOPT( > http://abel.ee.ucla.edu/cvxopt) & mosek (www.mosek.com) > > Example: > Let's concider the problem > 15x1 + 8x2 + 80x3 -> min (1) > subjected to > x1 + 2x2 + 3x3 <= 15 (2) > 8x1 + 15x2 + 80x3 <= 80 (3) > 8x1 + 80x2 + 15x3 <=150 (4) > 100x1 + 10x2 + x3 >= 800 (5) > 80x1 + 8x2 + 15x3 = 750 (6) > x1 + 10x2 + 100x3 = 80 (7) > > Let's pass (2), (3) to A X <= b, (6) to Aeq X = beq > and rest of constraints will be handled via Awhole, bwhole, dwhole > > array, list, matrix and real number are accepted: > f = array([15,8,80]) > A = mat('1 2 3; 8 15 80') > b = [15, 80] > Aeq = mat('80 8 15') > beq = 750 > Awhole = mat('8 80 15; 1 10 100; 100 10 1') > bwhole = array([150, 80, 800]) > dwhole = [-1, 0, 1] > p = LP(f, A=A, Aeq=Aeq, Awhole=Awhole, b=b, beq=beq, bwhole=bwhole, > dwhole=dwhole) > r = p.solve('LPSolve') #lp_solve must be installed > print 'objFunValue:', r.ff # should print 204.350288002 > print 'x_opt:', r.xf > > There are also NLP, NSP classes available (currently unconstrained solvers > ralg and ShorEllipsoid are supplied). > LPSolve and glpk also provide MILP solvers, but cvxopt connection to glpk > (that one is required) can't handle the integer indexes, so in my MILP class > in nearest future will be only one solver (LPSolve) > I know that there is a software for connecting commercial LP/MILP/QP > solver CPLEX to Python, but (at least currently) I have no time for the > one. > > I shall gladly take into account all your suggestions. > Regards, Dmitrey. 
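(A side note on Joachim's point above about sparse matrices and CVXOPT: a rough sketch of the difference is below. The toy LP is invented; the matrix/sparse/solvers.lp calls follow CVXOPT's documented interface, but check the documentation of the installed version for details.)

from cvxopt import matrix, sparse, solvers

# Invented toy LP: minimize 2*x1 + x2  subject to
#   x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x1 >= 0,  x2 >= 0.
c = matrix([2.0, 1.0])
G = matrix([[1.0, 1.0, -1.0, 0.0],     # first column: coefficients of x1
            [1.0, 3.0, 0.0, -1.0]])    # second column: coefficients of x2
h = matrix([4.0, 6.0, 0.0, 0.0])

sol_dense = solvers.lp(c, G, h)        # dense constraint matrix

# CVXOPT does not convert a dense matrix to a sparse one behind your back;
# if the problem really is sparse, pass an spmatrix explicitly.
sol_sparse = solvers.lp(c, sparse(G), h)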
> > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin.sheldon at gmail.com Sun Jun 3 16:42:04 2007 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Sun, 3 Jun 2007 16:42:04 -0400 Subject: [SciPy-dev] Binary i/o package In-Reply-To: References: <331116dc0705301214k4b38c97ocdb7e266d6c6873a@mail.gmail.com> <331116dc0706010540n6059ae78xf2ac0d54ad351ecb@mail.gmail.com> Message-ID: <331116dc0706031342v3a9216ebnc406e71fa286b7fe@mail.gmail.com> On 6/3/07, Anne Archibald wrote: > On 01/06/07, Erin Sheldon wrote: > > The overwhelming silence tells me that either no one here thinks this > > is relevant or no one bothered reading the email. I feel like the > > functionality I have written into this package is so basic it belongs > > in scipy io if not in numpy itself. Please give me some feedback one > > way or another. > > > > If it just seems irrelevant then I may just look into making it a > > scikits package. > > I'm not trying to knock your work, but it's not clear to me that > there's enough room between readarray/writearray/tofile/fromfile and > pytables to accommodate another package. Maybe I don't see what your > package does, but why wouldn't I just install pytables instead? What > are its advantages and disadvantages compared to pytables? Anne - fromfile works on the whole file or nothing (or contiguous chunks of rows). read_array can read certain fields and rows from ascii. It is pure-python which means it is rather slow, but that OK because ascii files are rarely large. PyTables or a database like postgres are at a different level but are build on complex libraries and have complex interfaces. The need to random-access into a binary file with fixed-length records is basic for most data storage and retrieval. For example most standardized file formats are self-describing binary tables which require no previous knowledge of the data other than the format (e.g. FITS in astronomy). But in scripting languages one is usually limited to a read all or nothing approach because all you have is the equivalent of fromfile. I included a working example of such a self-describing format in the simple_format sub-module of readfields. Another example is a simple relational database which is a group of tables, with each table in a flat file or spread across flat files (again no variable length fields). For efficiency one needs to random access the files at a low level. This package fills the niche and is the backbone of such systems. And it is a small chunk of code. You can extract what you want from the file and store it directly into a numpy array in the most efficient manner possible. I can speak for myself that with the larger astronomical data sets that have come online it has become useful to write big files in a standardized format and treat them as a simple database. One does not have to install and administer a database system like postgres or pytables (cdf), and one does not have to learn a new system beyond numpy. But one gets most of the performance benefits of low-level random-access to the data. 
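(A rough pure-numpy illustration of the row/field selection Erin describes, using numpy.memmap so that only the touched records are pulled off disk. The file name and record layout are hypothetical, and this is not Erin's readfields code, just the general idea.)

import numpy

# Assumed: 'catalog.bin' holds raw fixed-length records in exactly this layout,
# e.g. written earlier with tofile().
row_dtype = numpy.dtype([('id', 'i4'), ('ra', 'f8'), ('dec', 'f8'), ('flux', 'f4')])

recs = numpy.memmap('catalog.bin', dtype=row_dtype, mode='r')

rows = [10, 500, 4321]                     # arbitrary, non-contiguous rows
wanted = numpy.asarray(recs[rows])         # copies just those records into memory
flux = numpy.asarray(recs['flux'][rows])   # a single field from those rows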
Erin From robert.kern at gmail.com Sun Jun 3 17:48:00 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 03 Jun 2007 16:48:00 -0500 Subject: [SciPy-dev] Problems with compilling a bjam under ubuntu In-Reply-To: <1180690914.9108.4.camel@muli> References: <1180690914.9108.4.camel@muli> Message-ID: <46633710.5040608@gmail.com> Fred Jendrzejewski wrote: > I know, that this could be a lil bit offtopic, but maybe not. The C++-SIG mailing list is the appropriate place to ask Boost.Python questions. http://www.python.org/community/sigs/current/c++-sig/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Sun Jun 3 22:46:36 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 04 Jun 2007 11:46:36 +0900 Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning) In-Reply-To: References: <465E2265.7010003@ar.media.kyoto-u.ac.jp> Message-ID: <46637D0C.8070707@ar.media.kyoto-u.ac.jp> Peter Skomoroch wrote: > The licensing of datasets is an interesting issue, it sounds like they > will need to be tackled one by one unless explicitly released to the > public domain. > > Check out the wikipedia entry on "Open Data": > > http://en.wikipedia.org/wiki/Open_Data > > "Creators of data often do not consider the need to state the > conditions of ownership, licensing and re-use. For example, many > scientists do not regard the published data arising from their work to > be theirs to control and the act of publication in a journal is an > implicit release of the data into the commons. However the lack of a > license makes it difficult to determine the status of a data set > and may restrict the use of > data offered in an Open spirit. Because of this uncertainty it is also > possible for public or private organisations to aggregate such data, > protect it with copyright and then resell it." > > I remember a while back Leslie Kaelbling bought the enron dataset > http://www.cs.cmu.edu/~enron/ for > use in machine learning. > > Maybe we can start a scipy wikipage with a list/table of datasets > along with license status...and check off the ones which we find are > not compatible so we can find replacements or get permission. Also, > we might want to add a column for which modules use the data in scipy > tests etc., > > Should I go ahead and create the page? I started something here: http://www.scipy.org/DataSets. I tried to put all websites talked about in this thread there, with license information if available, plus the comment of R. Kern on licensing (at least in the US). cheers, David From david at ar.media.kyoto-u.ac.jp Sun Jun 3 22:48:58 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 04 Jun 2007 11:48:58 +0900 Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning) In-Reply-To: <465F0323.9050605@gmail.com> References: <465E2265.7010003@ar.media.kyoto-u.ac.jp> <465E369B.6090300@ar.media.kyoto-u.ac.jp> <465F0323.9050605@gmail.com> Message-ID: <46637D9A.5020202@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > Anne Archibald wrote: > >> Datasets published in academic papers are no less subject to these >> restrictions; generally if you want to use one you must negotiate with >> the author. > > Not necessarily. There is another US-specific exception. 
Data is not > copyrightable in the United States. For something to be copyrightable here, it > must contain some creative content. Thus, while I may not photocopy a phone book > and sell the copy (the arrangement, typography, etc. are deemed creative and > copyrightable), I may write down all of the numbers and typeset my own phone book. > > Now, most other countries don't have this rule. Notably, countries in the EU > tend to recognize "the sweat of the brow" expended in collecting the data as > being worthy of copyright protection. > > IANAL, but my approach would be to get in touch with the original source of the > data if possible, and ask. The biggest problem you'll face is that few of those > sources have ever thought about their datasets in terms of copyright licenses, > particularly *software* copyright licenses that permit modification to their > precious data. If it's an American source and the data appears to be freely > distributed, as in the UCI database, I would probably just take it as public > domain according to US law. Does that mean you would agree including those datasets into scipy ? (I sent an email to one of the author of the UCI database, waiting for his answer on the status of the data). Concerning data such as Iris of old faithful, which are in books of dead authors, is this public domain ? (I checked if by any chance it was available in the gutenberg project, but unfortunately not). David From david at ar.media.kyoto-u.ac.jp Mon Jun 4 07:23:57 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 04 Jun 2007 20:23:57 +0900 Subject: [SciPy-dev] How to handle exceptional cases in algorithms ? Exception vs Warning Message-ID: <4663F64D.7010906@ar.media.kyoto-u.ac.jp> Hi, I have a general question regarding the implementation of algorithm in python. When something unusual, but possible (that is it is a limitation of the algorithm, not a bug), is there a global policy which is better than another one: emitting a warning vs exception. For example, I recently reworked a bit the internal of scipy.cluster, which implements Vector Quantization and kmean algorithm. For those not familiar with those algorithms, the goal of kmeans is to separate a dataset into k clusters according to a criteria generally based on euclidian distance. Sometimes, it may happens during computation that one of the cluster has no data attached to it, which means that the algorithm won't returns k clusters at the end. Emitting a warning means that the computation can continue anyway, but those cases cannot be caught programmatically. On the contrary, raising an exception can be caught, but needs to be handled, and may break running code. Is there really one choice better then the other, or is it a matter of taste ? cheers, David From openopt at ukr.net Mon Jun 4 07:51:20 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 04 Jun 2007 14:51:20 +0300 Subject: [SciPy-dev] small benchmark of LP solvers [from GSoC project] In-Reply-To: <47347f490706031328i2509207euf71f09c001db774@mail.gmail.com> References: <4660816F.4090106@ukr.net> <47347f490706031328i2509207euf71f09c001db774@mail.gmail.com> Message-ID: <4663FCB8.8060403@ukr.net> An HTML attachment was scrubbed... 
URL: From dahl.joachim at gmail.com Mon Jun 4 08:02:21 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Mon, 4 Jun 2007 14:02:21 +0200 Subject: [SciPy-dev] small benchmark of LP solvers [from GSoC project] In-Reply-To: <4663FCB8.8060403@ukr.net> References: <4660816F.4090106@ukr.net> <47347f490706031328i2509207euf71f09c001db774@mail.gmail.com> <4663FCB8.8060403@ukr.net> Message-ID: <47347f490706040502w282435f5q1b155e71307e6351@mail.gmail.com> On 6/4/07, dmitrey wrote: > > Yes, of course, I had noticed the cvxopt feature. > So I decided to transform matrix to cvxopt sparse matrix if > nnz(A)/numel(A)<0.3, as I had seen the recommendation somewhere in matlab > sparse stuff > (and I wonder why cvxopt developers hadn't do something like that by > themselves (like glpk and lp_solve do), it consumes 2 lines of code in my > CVXOPT_LP_Solver.py: > Not all optimization problems are sparse. In particular many engineering problems are dense, in which case you want to use dense BLAS/LAPACK. You can just download a 30 day trial version of MOSEK. It's quite easy, and their solvers are terrific at exploiting sparsity, it exploits multi-processors, and the next version will have a native Python interface. -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Mon Jun 4 08:09:22 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 04 Jun 2007 15:09:22 +0300 Subject: [SciPy-dev] small benchmark of LP solvers [from GSoC project] In-Reply-To: <47347f490706040502w282435f5q1b155e71307e6351@mail.gmail.com> References: <4660816F.4090106@ukr.net> <47347f490706031328i2509207euf71f09c001db774@mail.gmail.com> <4663FCB8.8060403@ukr.net> <47347f490706040502w282435f5q1b155e71307e6351@mail.gmail.com> Message-ID: <466400F2.9060207@ukr.net> An HTML attachment was scrubbed... URL: From ravi.rajagopal at amd.com Mon Jun 4 09:36:17 2007 From: ravi.rajagopal at amd.com (Ravikiran Rajagopal) Date: Mon, 4 Jun 2007 09:36:17 -0400 Subject: [SciPy-dev] small benchmark of LP solvers [from GSoC project] In-Reply-To: <466400F2.9060207@ukr.net> References: <4660816F.4090106@ukr.net> <47347f490706040502w282435f5q1b155e71307e6351@mail.gmail.com> <466400F2.9060207@ukr.net> Message-ID: <200706040936.17578.ravi@ati.com> Hi, Please do not post HTML-only messages to public mailing lists such as this one. Kindly set your mailer to generate text-only messages for this list. Regards, Ravi PS: I do not know whether this is off-topic or whether scipy mailing lists are an exception to this rule. Moderators: please correct me if I am wrong. On Monday 04 June 2007 8:09:22 am dmitrey wrote: > > > From robert.kern at gmail.com Mon Jun 4 12:31:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 04 Jun 2007 11:31:23 -0500 Subject: [SciPy-dev] How to handle exceptional cases in algorithms ? Exception vs Warning In-Reply-To: <4663F64D.7010906@ar.media.kyoto-u.ac.jp> References: <4663F64D.7010906@ar.media.kyoto-u.ac.jp> Message-ID: <46643E5B.3090604@gmail.com> David Cournapeau wrote: > Hi, > > I have a general question regarding the implementation of algorithm > in python. When something unusual, but possible (that is it is a > limitation of the algorithm, not a bug), is there a global policy which > is better than another one: emitting a warning vs exception. > For example, I recently reworked a bit the internal of > scipy.cluster, which implements Vector Quantization and kmean algorithm. 
> For those not familiar with those algorithms, the goal of kmeans is to > separate a dataset into k clusters according to a criteria generally > based on euclidian distance. Sometimes, it may happens during > computation that one of the cluster has no data attached to it, which > means that the algorithm won't returns k clusters at the end. Emitting a > warning means that the computation can continue anyway, but those cases > cannot be caught programmatically. On the contrary, raising an exception > can be caught, but needs to be handled, and may break running code. > Is there really one choice better then the other, or is it a matter > of taste ? A problem with exceptions in expensive calculations is that they stop the calculation outright. Unless if the calculation is carefully coded, it can't be restarted at the point of the exception. This is why we trap hardware floating point exceptions and uses NaNs and infs by default in numpy. Parts of the answer may be nonsensical, but you get the rest of your data which might be critical to debugging the issue. I like to reserve exceptions for things that are fatal to the calculation as a whole. If the calculation *can't* continue because an assumption gets violated in the middle, go ahead and raise an exception. If you can, use a custom exception that stores information that can be used to debug the problem. Exceptions don't have to just contain a string message. However, if you are just running into something unusual, or only technically a violation of assumptions (i.e. the results don't really make sense according to the algorithm, but the code will still work), issue a custom warning instead. Using the warnings module, the user can set things up to raise an exception when the warning is issued if he really wants that. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Mon Jun 4 14:23:34 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 4 Jun 2007 14:23:34 -0400 Subject: [SciPy-dev] How to handle exceptional cases in algorithms ? Exception vs Warning In-Reply-To: <46643E5B.3090604@gmail.com> References: <4663F64D.7010906@ar.media.kyoto-u.ac.jp> <46643E5B.3090604@gmail.com> Message-ID: On 04/06/07, Robert Kern wrote: > A problem with exceptions in expensive calculations is that they stop the > calculation outright. Unless if the calculation is carefully coded, it can't be > restarted at the point of the exception. This is why we trap hardware floating > point exceptions and uses NaNs and infs by default in numpy. Parts of the answer > may be nonsensical, but you get the rest of your data which might be critical to > debugging the issue. > > I like to reserve exceptions for things that are fatal to the calculation as a > whole. If the calculation *can't* continue because an assumption gets violated > in the middle, go ahead and raise an exception. If you can, use a custom > exception that stores information that can be used to debug the problem. > Exceptions don't have to just contain a string message. > > However, if you are just running into something unusual, or only technically a > violation of assumptions (i.e. the results don't really make sense according to > the algorithm, but the code will still work), issue a custom warning instead. 
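(A small sketch of what this advice could look like for David's empty-cluster case: a custom warning class for the recoverable situation and a custom exception that carries data for a fatal one. The names are invented and this is not proposed scipy.cluster API.)

import warnings

class EmptyClusterWarning(UserWarning):
    """One of the k clusters ended up with no data attached to it."""

class KMeansError(Exception):
    """Fatal failure; carries partial state so the caller can inspect it."""
    def __init__(self, message, code_book=None, iteration=None):
        Exception.__init__(self, message)
        self.code_book = code_book
        self.iteration = iteration

def _check_counts(counts):
    # counts: numpy array with the number of observations assigned to each cluster
    if (counts == 0).any():
        warnings.warn("%d empty cluster(s); fewer than k clusters will be returned"
                      % (counts == 0).sum(), EmptyClusterWarning)

# A caller who insists on treating the warning as fatal can opt in with:
#     warnings.simplefilter('error', EmptyClusterWarning)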
> Using the warnings module, the user can set things up to raise an exception when > the warning is issued if he really wants that. Just a, uh, warning: I found that it was very difficult to make the warnings module do what it was documented to do in terms of throwing exceptions and warning the right number of times. In a recent case I was dealing with, if a matrix failed to be positive definite (which I noticed because cholesky threw an exception), I fell back to using the SVD to solve the equation but recorded a "numerical problems encountered" flag in the object that was running the computation. I recorded the singular values and the eigenvalues (to check that some were really negative). I still haven't found the bug, but at least the code keeps running... I think my inclination would be, if the algorithm is implemented as behaviours of an object, use the object to store a description of any problems that arise. Anne From robert.kern at gmail.com Mon Jun 4 15:21:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 04 Jun 2007 14:21:50 -0500 Subject: [SciPy-dev] How to handle exceptional cases in algorithms ? Exception vs Warning In-Reply-To: References: <4663F64D.7010906@ar.media.kyoto-u.ac.jp> <46643E5B.3090604@gmail.com> Message-ID: <4664664E.2060903@gmail.com> Anne Archibald wrote: > Just a, uh, warning: I found that it was very difficult to make the > warnings module do what it was documented to do in terms of throwing > exceptions and warning the right number of times. Can you give an example? I thought it was fairly straightforward. The only niggle is interactive use. After the first time a warning is issued (and not raised as an error), the warning, including its message and the location in the file, is stored in a registry in the module containing the warn() call. Consequently, if you run interactively, see a warnings, set a filter to raise an exception instead of printing the warning, then try the bad function again, you don't get an exception. In [1]: !cat warntest.py IPython system call: cat warntest.py import warnings class MyWarning(UserWarning): pass def does_warn(): warnings.warn("Stuff", MyWarning) In [2]: import warntest, warnings In [3]: warntest.does_warn() warntest.py:7: MyWarning: Stuff warnings.warn("Stuff", MyWarning) In [4]: warntest.does_warn() In [5]: warnings.simplefilter('error', warntest.MyWarning) In [6]: warntest.does_warn() In [7]: warntest.__warningregistry__ Out[7]: {('Stuff', , 7): 1} In [8]: del warntest.__warningregistry__ In [9]: warntest.does_warn() --------------------------------------------------------------------------- Traceback (most recent call last) /Users/rkern/hg/warntest/ in () /Users/rkern/hg/warntest/warntest.py in does_warn() 3 class MyWarning(UserWarning): 4 pass 5 6 def does_warn(): ----> 7 warnings.warn("Stuff", MyWarning) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/warnings.py in warn(message, category, stacklevel) 60 registry = globals.setdefault("__warningregistry__", {}) 61 warn_explicit(message, category, filename, lineno, module, registry, ---> 62 globals) 63 64 def warn_explicit(message, category, filename, lineno, /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/warnings.py in warn_explicit(message, category, filename, lineno, module, registry, module_globals) 100 101 if action == "error": --> 102 raise message 103 # Other actions 104 if action == "once": : Stuff Now, I don't like this behavior, certainly. 
I would like the registry to be checked only if there isn't an 'error' filter. Nonetheless, I think the warnings module should be used. Attaching additional information to an object is a good idea if you have an object to hang stuff on. However, I think this should be in addition to issuing a warning with warnings.warn(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Tue Jun 5 04:01:13 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 05 Jun 2007 17:01:13 +0900 Subject: [SciPy-dev] Starting a datasets package, again Message-ID: <46651849.1040308@ar.media.kyoto-u.ac.jp> Hi, Following the recent discussion about datasets, licensing and inclusion in scipy, I sent several email to people I believe to be copyright holders for some data to get their authorization. As I am receiving answers, I would like to start a package for datasets in scipy or scikits. Robert proposed a convention for such packages a few weeks ago: http://projects.scipy.org/pipermail/scipy-dev/2007-April/006981.html. Basically, there would be a package scipydata with subpackages, one per dataset (ala scikits if I understand correctly). When time allow, some utilities for downloading, caching, etc... datasets could be implemented, but I guess that as long as we agree on the interface, this does not be to be done now. Would it be ok to create such a packages the next few days with the incoming data ? I think that starting the actual package may encourage other people to join the wagon. Concerning the license, if the copyright holder requires to be cited in the sources, is it OK (I am a bit confused because modified BSD does not require to keep the acknowledgments, so I am not sure exactly how to apply it correctly in this case) ? cheers, David From charlesr.harris at gmail.com Tue Jun 5 13:02:29 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 5 Jun 2007 11:02:29 -0600 Subject: [SciPy-dev] How to handle exceptional cases in algorithms ? Exception vs Warning In-Reply-To: <4664664E.2060903@gmail.com> References: <4663F64D.7010906@ar.media.kyoto-u.ac.jp> <46643E5B.3090604@gmail.com> <4664664E.2060903@gmail.com> Message-ID: On 6/4/07, Robert Kern wrote: > > Anne Archibald wrote: > > > Just a, uh, warning: I found that it was very difficult to make the > > warnings module do what it was documented to do in terms of throwing > > exceptions and warning the right number of times. > > Can you give an example? I thought it was fairly straightforward. The only > niggle is interactive use. After the first time a warning is issued (and > not > raised as an error), the warning, including its message and the location > in the > file, is stored in a registry in the module containing the warn() call. > Consequently, if you run interactively, see a warnings, set a filter to > raise an > exception instead of printing the warning, then try the bad function > again, you > don't get an exception. Yeah, I had that problem too. I just removed the registry storage for warnings. I think it is a bug, it should be possible to set the number of times a warning is issued. IIRC, the warnings module claims that this is possible, but it doesn't work correctly. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openopt at ukr.net Tue Jun 5 15:03:38 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 05 Jun 2007 22:03:38 +0300 Subject: [SciPy-dev] scikits svn Message-ID: <4665B38A.2090200@ukr.net> hi all, Now I'm reading the page https://projects.scipy.org/scipy/scikits/ and I have some questions: 1) should I place code in all svn branches (trunk, tags, branches)? (seems like David Cournapeau currently uses only trunk) 2) if I shall use config.add_data_dir('directory15'), will all subdirectories of directory15 added to python path automatically or it requires additional efforts? 3) In cvs I need .cvsignore file in my directories. I have noticed in http://projects.scipy.org/scipy/scikits/browser/trunk/pymat "Property *svn:ignore* set to /|*.pyc|/". So, now I don't need to create any *ignore files? Thx, D. From matthieu.brucher at gmail.com Tue Jun 5 15:09:15 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 5 Jun 2007 21:09:15 +0200 Subject: [SciPy-dev] scikits svn In-Reply-To: <4665B38A.2090200@ukr.net> References: <4665B38A.2090200@ukr.net> Message-ID: Hi You should first read http://subversion.tigris.org/faq.html as well as http://svnbook.red-bean.com/nightly/en/svn-book.html as indicated on the link you gave ;) Matthieu 2007/6/5, dmitrey : > > hi all, > Now I'm reading the page https://projects.scipy.org/scipy/scikits/ and I > have some questions: > 1) should I place code in all svn branches (trunk, tags, branches)? > (seems like David Cournapeau currently uses only trunk) > 2) if I shall use > config.add_data_dir('directory15'), will all subdirectories of > directory15 added to python path automatically or it requires additional > efforts? > 3) In cvs I need .cvsignore file in my directories. I have noticed in > http://projects.scipy.org/scipy/scikits/browser/trunk/pymat "Property > *svn:ignore* set to /|*.pyc|/". So, now I don't need to create any > *ignore files? > > Thx, D. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ravi.rajagopal at amd.com Tue Jun 5 15:45:23 2007 From: ravi.rajagopal at amd.com (Ravikiran Rajagopal) Date: Tue, 5 Jun 2007 15:45:23 -0400 Subject: [SciPy-dev] Binary i/o package In-Reply-To: <331116dc0706031342v3a9216ebnc406e71fa286b7fe@mail.gmail.com> References: <331116dc0705301214k4b38c97ocdb7e266d6c6873a@mail.gmail.com> <331116dc0706031342v3a9216ebnc406e71fa286b7fe@mail.gmail.com> Message-ID: <200706051545.23802.ravi@ati.com> On Sunday 03 June 2007 4:42:04 pm Erin Sheldon wrote: > This package fills the niche and is the backbone of such systems. ?And > it is a small chunk of code. ?You can extract what you want from the > file and store it directly into a numpy array in the most efficient > manner possible. Apologies for the slow reply, but only now did I find time to go through your code. I agree with you that this is a pretty useful piece of code. However, the functionality offered by your code, IMHO, should be split into two parts: - readbinarray / writebinarray /skipfields - readheader / writeheader Possible prototypes would be as follows: readbinarray( fid, fieldtuple, columns, lines, headerskip=0 ) writebinarray( fid, fieldtuple, columns, lines ) skipfields( fid, fieldtuple, lines ) "fieldtuple" describes the structure of each record. 
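(For readers unfamiliar with the "list-of-tuples" record description that both Erin and Ravi are using: it is simply a numpy dtype specification, and it fixes the size of a record, which is what makes seeking directly to row n possible. A hypothetical example:)

import numpy

# One record = a 4-byte int, two 8-byte floats and a 4-byte float.
fieldtuple = [('id', 'i4'), ('ra', 'f8'), ('dec', 'f8'), ('flux', 'f4')]

row_dtype = numpy.dtype(fieldtuple)
print row_dtype.itemsize    # bytes per record; record n starts at byte n * itemsize
print row_dtype.names       # ('id', 'ra', 'dec', 'flux')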
This set of functions would make your code the equivalent of read_array and write_array without involving "self-documentation" of binary files. This allows arbitrary headers and arbitrary parsers of the header data. The second set of functions provides default methods for reading/writing headers. Combining these orthogonal functions gives the current interface. I would be very interested in seeing the first part in scipy.io. If no one else is interested in having a binary equivalent of read_array/write_Array in scipy, something like this is a perfect candidate for a scikit. Regards, Ravi From aisaac at american.edu Tue Jun 5 17:49:17 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 5 Jun 2007 17:49:17 -0400 Subject: [SciPy-dev] scikits svn In-Reply-To: References: <4665B38A.2090200@ukr.net> Message-ID: I think the crucial information is here: https://projects.scipy.org/scipy/scikits/ All scikits share a common svn repository (learn about svn here), and tags and branches are at the toplevel, rather than per project. Consequently each project should prefix the project name to any tag/branch directory (see below for example). See the example provided at that link. hth, Alan Isaac From erin.sheldon at gmail.com Tue Jun 5 18:02:21 2007 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Tue, 5 Jun 2007 18:02:21 -0400 Subject: [SciPy-dev] Binary i/o package In-Reply-To: <200706051545.23802.ravi@ati.com> References: <331116dc0705301214k4b38c97ocdb7e266d6c6873a@mail.gmail.com> <331116dc0706031342v3a9216ebnc406e71fa286b7fe@mail.gmail.com> <200706051545.23802.ravi@ati.com> Message-ID: <331116dc0706051502x4fa389b0i2fc76c61e88a214c@mail.gmail.com> Hi Ravi - I may not have been clear in my description. The code of interest, which is C++, just reads from a binary file into a numpy array: readfields.readfields(file or fileobj, dtype, nrows, rows=, fields=) You give it a file or file object, a dtype (list of tuples) describing each row of the file, and the number of rows. It creates internally the numpy array and reads into it. You can request a subset of rows with the rows= keyword, or a subset of fields by name with the fields= keyword. In that case it just grabs the defs for the subset of fields from tne dtype you entered, creates the correct output length based on the rows keyword and copies only the requested fields. That is it, the most basic reader that can select subsets of rows and fields. I also included, just as a working example, a little python module called readfields.simple_format that has functions for reading/writing to a self-describing file format. The write() function writes a numpy array to a file with a header; it just calls tofile() after writing the header, so nothing new there. Then there are functions read_header() which just reads the header and read() which reads data+header. That was just a working example and doesn't necessarily need to be included in scipy since this is by no means a standard file format. It is just the simplest format I could come up with that is natural for numpy and my readfields() module. Hope this clears things up, Erin On 6/5/07, Ravikiran Rajagopal wrote: > On Sunday 03 June 2007 4:42:04 pm Erin Sheldon wrote: > > This package fills the niche and is the backbone of such systems. And > > it is a small chunk of code. You can extract what you want from the > > file and store it directly into a numpy array in the most efficient > > manner possible. > > Apologies for the slow reply, but only now did I find time to go through your > code. 
I agree with you that this is a pretty useful piece of code. However, > the functionality offered by your code, IMHO, should be split into two parts: > - readbinarray / writebinarray /skipfields > - readheader / writeheader > > Possible prototypes would be as follows: > readbinarray( fid, fieldtuple, columns, lines, headerskip=0 ) > writebinarray( fid, fieldtuple, columns, lines ) > skipfields( fid, fieldtuple, lines ) > > "fieldtuple" describes the structure of each record. This set of functions > would make your code the equivalent of read_array and write_array without > involving "self-documentation" of binary files. This allows arbitrary headers > and arbitrary parsers of the header data. > > The second set of functions provides default methods for reading/writing > headers. Combining these orthogonal functions gives the current interface. > > I would be very interested in seeing the first part in scipy.io. If no one > else is interested in having a binary equivalent of read_array/write_Array in > scipy, something like this is a perfect candidate for a scikit. > > Regards, > Ravi > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Tue Jun 5 18:08:08 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Jun 2007 17:08:08 -0500 Subject: [SciPy-dev] scikits svn In-Reply-To: <4665B38A.2090200@ukr.net> References: <4665B38A.2090200@ukr.net> Message-ID: <4665DEC8.7000707@gmail.com> dmitrey wrote: > hi all, > Now I'm reading the page https://projects.scipy.org/scipy/scikits/ and I > have some questions: > 1) should I place code in all svn branches (trunk, tags, branches)? > (seems like David Cournapeau currently uses only trunk) No, just the trunk for now. Matthieu points out the SVN Book, which is good reading for what each of these is used for. > 2) if I shall use > config.add_data_dir('directory15'), will all subdirectories of > directory15 added to python path automatically or it requires additional > efforts? All subdirectories are added. > 3) In cvs I need .cvsignore file in my directories. I have noticed in > http://projects.scipy.org/scipy/scikits/browser/trunk/pymat "Property > *svn:ignore* set to /|*.pyc|/". So, now I don't need to create any > *ignore files? Correct. However, I personally recommend setting *.pyc and similar (*.o, *.so, etc.) ignores in your personal SVN client configuration rather than having to set them on each directory. Here is the relevant part of my ~/.subversion/config file: [miscellany] ### Set global-ignores to a set of whitespace-delimited globs ### which Subversion will ignore in its 'status' output, and ### while importing or adding files and directories. global-ignores = *.o *.lo *.la .*.swp *.pyc *.so *.orig .*.rej *.rej .*~ *~ .#* .DS_Store .hg .hgignore be bi site.cfg .*.swo .gdb_history Unfortunately, I don't know how to do this on Windows. I reserve svn:ignore properties for specific things to ignore. For example, the root directory of your package (e.g. trunk/foo/) should probably have the following in its svn:ignore build dist in order to ignore these build directories that only show up in the root. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Tue Jun 5 18:13:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 05 Jun 2007 17:13:07 -0500 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <46651849.1040308@ar.media.kyoto-u.ac.jp> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> Message-ID: <4665DFF3.8070006@gmail.com> David Cournapeau wrote: > Hi, > > Following the recent discussion about datasets, licensing and > inclusion in scipy, I sent several email to people I believe to be > copyright holders for some data to get their authorization. As I am > receiving answers, I would like to start a package for datasets in scipy > or scikits. Robert proposed a convention for such packages a few weeks > ago: > http://projects.scipy.org/pipermail/scipy-dev/2007-April/006981.html. > Basically, there would be a package scipydata with subpackages, one per > dataset (ala scikits if I understand correctly). When time allow, some > utilities for downloading, caching, etc... datasets could be > implemented, but I guess that as long as we agree on the interface, this > does not be to be done now. The iris and oldfaithful packages you posted earlier were good. We might want to fiddle with the metadata later, but what you had is probably sufficient. > Would it be ok to create such a packages the next few days with the > incoming data ? I think that starting the actual package may encourage > other people to join the wagon. Concerning the license, if the copyright > holder requires to be cited in the sources, is it OK (I am a bit > confused because modified BSD does not require to keep the > acknowledgments, so I am not sure exactly how to apply it correctly in > this case) ? It would not be okay to put a BSD license on that data. It would be making a false representation as to the actual terms attached to the data. But that's fine since they won't be distributed as part of scipy proper anyways and can have whatever license the authors deem appropriate. Personally, while I mind distributing non-open source *code* in scikits, I don't mind distributing non-open source, but redistributable datasets. We need to figure out a place for these, though. I'm not sure where to put them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Tue Jun 5 21:23:35 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 06 Jun 2007 10:23:35 +0900 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <4665DFF3.8070006@gmail.com> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> Message-ID: <46660C97.9000504@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > The iris and oldfaithful packages you posted earlier were good. We might want to > fiddle with the metadata later, but what you had is probably sufficient. Those data were from r-base, and I thought that following our discussion on licensing, it would have been better not to use them. > >> Would it be ok to create such a packages the next few days with the >> incoming data ? I think that starting the actual package may encourage >> other people to join the wagon. 
Concerning the license, if the copyright >> holder requires to be cited in the sources, is it OK (I am a bit >> confused because modified BSD does not require to keep the >> acknowledgments, so I am not sure exactly how to apply it correctly in >> this case) ? > > It would not be okay to put a BSD license on that data. It would be making a > false representation as to the actual terms attached to the data. But that's > fine since they won't be distributed as part of scipy proper anyways and can > have whatever license the authors deem appropriate. Personally, while I mind > distributing non-open source *code* in scikits, I don't mind distributing > non-open source, but redistributable datasets. The think I really like with the datasets in R is that any package can depend on them for demos/examples/etc... I don't know much about easy_install yet, but does the dependency tracking system work well ? For example, you install foo which uses faithful in some examples, when is the dependency resolved ? Would it be ok to use them in tests ? For the old faithful data, the answer I received from Pr Azzalani (whose article "A look at some data on the old faithful geyser" has original data) is that an acknowledgment would be welcomed, so if we acknowledge it in the sources, is it OK to apply BSD (the problem would be if people using it would be required to acknowledge as well, right ?) David From robert.kern at gmail.com Wed Jun 6 01:33:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Jun 2007 00:33:17 -0500 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <46660C97.9000504@ar.media.kyoto-u.ac.jp> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> <46660C97.9000504@ar.media.kyoto-u.ac.jp> Message-ID: <4666471D.2030705@gmail.com> David Cournapeau wrote: > Robert Kern wrote: >> The iris and oldfaithful packages you posted earlier were good. We might want to >> fiddle with the metadata later, but what you had is probably sufficient. > Those data were from r-base, and I thought that following our discussion > on licensing, it would have been better not to use them. Sorry, I meant the form, not necessarily the content. > The think I really like with the datasets in R is that any package can > depend on them for demos/examples/etc... I don't know much about > easy_install yet, but does the dependency tracking system work well ? It works fine, provided that all of the relevant packages are registered on the PyPI and are installed as eggs (or with egg metadata) on user's machines. > For example, you install foo which uses faithful in some examples, when > is the dependency resolved ? I would recommend that the example's dependencies be listed as an "extras" dependency. The setup() for, say, scikits.pyem would have these arguments: ... install_requires = ['numpy'], extras_require = { 'examples': ['scipydata.iris', 'scipydata.oldfaithful'], }, ... Then, if you want to be able to run the examples for scikits.pyem, you would do this: $ easy_install "scikits.pyem[examples]" However, just running $ easy_install scikits.pyem won't install the data packages (this is a good thing). > Would it be ok to use them in tests ? I would like to avoid that. Just include the data that you need in the code or in a file included with the package. If you need lots of data, though, you're writing the wrong kind of test, IMO. 
> For the old faithful data, the answer I received from Pr Azzalani (whose > article "A look at some data on the old faithful geyser" has original > data) is that an acknowledgment would be welcomed, so if we acknowledge > it in the sources, is it OK to apply BSD (the problem would be if people > using it would be required to acknowledge as well, right ?) If anyone has the right to make this decision, it would be him and his coauthor. I've just taken a look at the "Open Data" Wikipedia article that Peter Skomoroch linked in the last discussion, something I should have done earlier. From it, I found a link to Science Commons, a branch of the Creative Commons project. http://sciencecommons.org Sadly, they do not have a license pre-made that we could simply suggest to authors we approach. As the SC FAQ explains, the database protection is not, in fact, copyright but a similar kind (_sui generis_ in lawyer-speak) of right carved out by the EU Database Directive (and similar laws implemented by member and non-member states). The Creative Commons licenses (with some nationally-specific exceptions) only operate on copyrighted works, not almost-but-not-quite-copyrighted works. . However, and this is the good bit, that right expires in 15 years. http://ec.europa.eu/archives/ISPO/legal/en/ipr/database/text.html#HD_NM_14 Of course, we will give the appropriate citation (no law compels us to, but we shouldn't need laws to compel us to do this little). We should also include the request of the author for acknowledgment as well. I think it would be nice to state that we think the data is in the public domain given the above reasoning since this is one time that I think we can nail down something concrete in a very fuzzy area. We should write our own descriptive text instead of using that from the R package; that *does* fall under the copyright of whoever wrote it. And this raises another fuzzy issue: the copyright/_sui generis_ right of the data is different from the copyright of the surrounding text and code. There's going to necessarily be some confusion, I think. If you want a declaration from me, I would say that the surrounding text and code in scipydata packages should always be under the BSD license. This should be noted using the "License :: OSI Approved :: BSD License" classifier in the setup script and in a *comment* in the code following the copyright notice. However, the copyright notice and license should be accompanied by a note that the data does not fall under this license or copyright and the metadata to look at to find the status of the data. I'm not good at legal boilerplate, but something like the following would be fine, I think: # The code and descriptive text is copyrighted and offered under the terms of # the BSD License from the authors; see below. However, the actual dataset may # have a different origin and intellectual property status. See the SOURCE and # COPYRIGHT variables for this information. # # Copyright (c) 2007 Enthought, Inc. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # ..., etc. David, thank you for pursuing this with the care that you have, and thank you for bearing with my long-winded pontificating while you do all of the actual work. :-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From openopt at ukr.net Wed Jun 6 03:05:19 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 06 Jun 2007 10:05:19 +0300 Subject: [SciPy-dev] howto checkout project from scikits? (I get error) Message-ID: <46665CAF.6080701@ukr.net> I use svn co http://projects.scipy.org/scipy/scikits/browser/trunk pymat and it yields svn: PROPFIND request failed on '/scipy/scikits' svn: PROPFIND of '/scipy/scikits': 200 Ok (http://projects.scipy.org) (and nothing changes in my directory) I tried some other commands like svn co http://projects.scipy.org/scipy/scikits/trunk pymat but it yields the same First of all I'm interested because I want to add my project to correct place in scikits, I guess it should be something like svn add http://projects.scipy.org/scipy/scikits/trunk OpenOpt (and then svn ci) Thx, D. From david at ar.media.kyoto-u.ac.jp Wed Jun 6 03:11:50 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 06 Jun 2007 16:11:50 +0900 Subject: [SciPy-dev] howto checkout project from scikits? (I get error) In-Reply-To: <46665CAF.6080701@ukr.net> References: <46665CAF.6080701@ukr.net> Message-ID: <46665E36.4040102@ar.media.kyoto-u.ac.jp> dmitrey wrote: > I use > > svn co http://projects.scipy.org/scipy/scikits/browser/trunk pymat > > and it yields > > svn: PROPFIND request failed on '/scipy/scikits' > svn: PROPFIND of '/scipy/scikits': 200 Ok (http://projects.scipy.org) > > (and nothing changes in my directory) > > I tried some other commands like > svn co http://projects.scipy.org/scipy/scikits/trunk pymat > > but it yields the same > You should try something like: svn co http://svn.scipy.org/svn/scikits/trunk/pymat . (for pymat only). The error that svn returns to you is non descriptive, to say the least. All subversion repositories under the DNS scipy.org lies in svn.scipy.org, AFAIK. This information should be somewhere in scikits, I think. Actually, there should be a descriptive page for scikits for people who just want to use the code. cheers, David From robert.kern at gmail.com Wed Jun 6 03:24:03 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Jun 2007 02:24:03 -0500 Subject: [SciPy-dev] howto checkout project from scikits? (I get error) In-Reply-To: <46665CAF.6080701@ukr.net> References: <46665CAF.6080701@ukr.net> Message-ID: <46666113.6010704@gmail.com> dmitrey wrote: > I use > > svn co http://projects.scipy.org/scipy/scikits/browser/trunk pymat > > and it yields > > svn: PROPFIND request failed on '/scipy/scikits' > svn: PROPFIND of '/scipy/scikits': 200 Ok (http://projects.scipy.org) > > (and nothing changes in my directory) That's right. That's just the repository browser; it is not the URL of the repository. Unfortunately, the current front page of the Trac is misleading. The actual repository is here: http://svn.scipy.org/svn/scikits/ In order to check out pymat, for example, you would issue this command: $ svn co http://svn.scipy.org/svn/scikits/trunk/pymat ./pymat That second argument is optional; it's just what you want the checkout directory to be named. It defaults to the last part of the URL. 
> First of all I'm interested because I want to add my project to correct > place in scikits, I guess it should be something like > svn add http://projects.scipy.org/scipy/scikits/trunk OpenOpt > (and then svn ci) The usual process is to make the target directory first: $ svn mkdir http://svn.scipy.org/svn/scikits/trunk/openopt Then check out the empty directory: $ svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt Now copy over or create all of the files into that directory: $ cd openopt $ cp -R ~/src/OpenOpt/* . Tell SVN to start tracking all of those files (this will add all of the files and directories recursively): $ svn add * Make your first checkin: $ svn ci -m "Initial checkin of the openopt package." -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Wed Jun 6 06:25:50 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 06 Jun 2007 19:25:50 +0900 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <4666471D.2030705@gmail.com> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> <46660C97.9000504@ar.media.kyoto-u.ac.jp> <4666471D.2030705@gmail.com> Message-ID: <46668BAE.80309@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > It works fine, provided that all of the relevant packages are registered on the > PyPI and are installed as eggs (or with egg metadata) on user's machines. > >> For example, you install foo which uses faithful in some examples, when >> is the dependency resolved ? > > I would recommend that the example's dependencies be listed as an "extras" > dependency. The setup() for, say, scikits.pyem would have these arguments: > > ... > install_requires = ['numpy'], > extras_require = { > 'examples': ['scipydata.iris', 'scipydata.oldfaithful'], > }, > ... > > Then, if you want to be able to run the examples for scikits.pyem, you would do > this: > > $ easy_install "scikits.pyem[examples]" > > However, just running > > $ easy_install scikits.pyem > > won't install the data packages (this is a good thing). Why is this a good idea ? I guess there is a reason, but I don't see it :) The case I am worrying about is: someone not too familiar with the whole thing installs pyem, and wants to go through the examples because that's easier than reading the doc. Then, he realizes it does not work: what is the error message ? Should I handle this case in my code, or is there some kind of mechanism to handle it automatically ? There are already so many emails on the scipy ML (and personally, maybe 2/3 of the emails related to my packages) because of installation problems, I really worry about this point. I think this hurts the whole numpy/scipy community quite a lot (lack of one click button "make it work"), and I am afraid this may be a step away from this goal. > > If you want a declaration from me, I would say that the surrounding text and > code in scipydata packages should always be under the BSD license. This should > be noted using the "License :: OSI Approved :: BSD License" classifier in the > setup script and in a *comment* in the code following the copyright notice. > However, the copyright notice and license should be accompanied by a note that > the data does not fall under this license or copyright and the metadata to look > at to find the status of the data. 
I'm not good at legal boilerplate, but > something like the following would be fine, I think: > > # The code and descriptive text is copyrighted and offered under the terms of > # the BSD License from the authors; see below. However, the actual dataset may > # have a different origin and intellectual property status. See the SOURCE and > # COPYRIGHT variables for this information. > # > # Copyright (c) 2007 Enthought, Inc. > # > # Redistribution and use in source and binary forms, with or without > # modification, are permitted provided that the following conditions are met: > # ..., etc. Ok, I will prepare something in this spirit, then. Including it in scikits is not possible ? David From openopt at ukr.net Wed Jun 6 12:52:29 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 06 Jun 2007 19:52:29 +0300 Subject: [SciPy-dev] scikits svn In-Reply-To: <4665DEC8.7000707@gmail.com> References: <4665B38A.2090200@ukr.net> <4665DEC8.7000707@gmail.com> Message-ID: <4666E64D.40705@ukr.net> Robert Kern wrote: >> 2) if I shall use >> config.add_data_dir('directory15'), will all subdirectories of >> directory15 added to python path automatically or it requires additional >> efforts? >> > > All subdirectories are added. > However, I have some subdirectories commited and when I try to run "python setup.py install" it yields /usr/bin/python setup.py install Traceback (most recent call last): File "setup.py", line 86, in 'Topic :: Scientific/Engineering'] File "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", line 144, in setup config = configuration() File "setup.py", line 33, in configuration from scikits.openopt.info import __version__ as openopt_version File "/home/dmitrey/scikits/openopt/scikits/openopt/__init__.py", line 7, in from openopt import LP, NLP, NSP File "/home/dmitrey/scikits/openopt/scikits/openopt/openopt.py", line 2, in from BaseProblem import * ImportError: No module named BaseProblem (I.e. it can't find BaseProblem.py file from other (sub)directory ) > >> 3) In cvs I need .cvsignore file in my directories. I have noticed in >> http://projects.scipy.org/scipy/scikits/browser/trunk/pymat "Property >> *svn:ignore* set to /|*.pyc|/". So, now I don't need to create any >> *ignore files? >> > > Correct. however, unfortunately some .pyc-files were added, see for example http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits > However, I personally recommend setting *.pyc and similar (*.o, *.so, > etc.) ignores in your personal SVN client configuration rather than having to > set them on each directory. Here is the relevant part of my ~/.subversion/config > file: > > [miscellany] > ### Set global-ignores to a set of whitespace-delimited globs > ### which Subversion will ignore in its 'status' output, and > ### while importing or adding files and directories. > global-ignores = *.o *.lo *.la .*.swp *.pyc *.so *.orig .*.rej *.rej .*~ *~ .#* > .DS_Store .hg .hgignore be bi site.cfg .*.swo .gdb_historyUnfortunately, I don't know how to do this on Windows. > > Ok, I will try to use this one (I use Linux) > I reserve svn:ignore properties for specific things to ignore. For example, the > root directory of your package (e.g. trunk/foo/) should probably have the > following in its svn:ignore > > build > dist > > in order to ignore these build directories that only show up in the root. 
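As a sketch of the layout question raised by the traceback above: only BaseProblem.py, openopt.py and the scikits.openopt names are taken from the traceback, the rest is illustrative. The point is simply that a directory only becomes an importable subpackage once it contains an __init__.py:

    scikits/
        __init__.py             # namespace package marker
        openopt/
            __init__.py         # e.g.: from openopt import LP, NLP, NSP
            openopt.py
            Kernel/
                __init__.py     # empty file; makes Kernel a subpackage
                BaseProblem.py

    # then, inside scikits/openopt/openopt.py:
    from Kernel.BaseProblem import *      # works once Kernel/__init__.py exists
    # or, spelling out the full path:
    # from scikits.openopt.Kernel.BaseProblem import *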
> > From robert.kern at gmail.com Wed Jun 6 14:40:31 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Jun 2007 13:40:31 -0500 Subject: [SciPy-dev] scikits svn In-Reply-To: <4666E64D.40705@ukr.net> References: <4665B38A.2090200@ukr.net> <4665DEC8.7000707@gmail.com> <4666E64D.40705@ukr.net> Message-ID: <4666FF9F.2010807@gmail.com> dmitrey wrote: > Robert Kern wrote: >>> 2) if I shall use >>> config.add_data_dir('directory15'), will all subdirectories of >>> directory15 added to python path automatically or it requires additional >>> efforts? >>> >> All subdirectories are added. >> > However, I have some subdirectories commited and when I try to run > "python setup.py install" it yields > /usr/bin/python setup.py install > Traceback (most recent call last): > File "setup.py", line 86, in > 'Topic :: Scientific/Engineering'] > File "/usr/lib/python2.5/site-packages/numpy/distutils/core.py", line > 144, in setup > config = configuration() > File "setup.py", line 33, in configuration > from scikits.openopt.info import __version__ as openopt_version > File "/home/dmitrey/scikits/openopt/scikits/openopt/__init__.py", line > 7, in > from openopt import LP, NLP, NSP > File "/home/dmitrey/scikits/openopt/scikits/openopt/openopt.py", line > 2, in > from BaseProblem import * > ImportError: No module named BaseProblem > (I.e. it can't find BaseProblem.py file from other (sub)directory ) Yes, that's to be expected. The subdirectory Kernel is there with BaseProblem.py, but you can't import stuff from there like that. Either move the stuff in Kernel out to scikits/openopt/ where they can be imported like that, or add Kernel/__init__.py to make it a subpackage and import the modules in it properly. >>> 3) In cvs I need .cvsignore file in my directories. I have noticed in >>> http://projects.scipy.org/scipy/scikits/browser/trunk/pymat "Property >>> *svn:ignore* set to /|*.pyc|/". So, now I don't need to create any >>> *ignore files? >>> >> Correct. > however, unfortunately some .pyc-files were added, see for example > http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits Okay. Go ahead and delete them. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Jun 6 14:58:15 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 06 Jun 2007 13:58:15 -0500 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <46668BAE.80309@ar.media.kyoto-u.ac.jp> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> <46660C97.9000504@ar.media.kyoto-u.ac.jp> <4666471D.2030705@gmail.com> <46668BAE.80309@ar.media.kyoto-u.ac.jp> Message-ID: <466703C7.3080401@gmail.com> David Cournapeau wrote: > Robert Kern wrote: >> It works fine, provided that all of the relevant packages are registered on the >> PyPI and are installed as eggs (or with egg metadata) on user's machines. >> >>> For example, you install foo which uses faithful in some examples, when >>> is the dependency resolved ? >> I would recommend that the example's dependencies be listed as an "extras" >> dependency. The setup() for, say, scikits.pyem would have these arguments: >> >> ... >> install_requires = ['numpy'], >> extras_require = { >> 'examples': ['scipydata.iris', 'scipydata.oldfaithful'], >> }, >> ... 
>> >> Then, if you want to be able to run the examples for scikits.pyem, you would do >> this: >> >> $ easy_install "scikits.pyem[examples]" >> >> However, just running >> >> $ easy_install scikits.pyem >> >> won't install the data packages (this is a good thing). > Why is this a good idea ? I guess there is a reason, but I don't see it :) Because there are two different things that have requirements, the Python package itself and the package's examples. > The case I am worrying about is: someone not too familiar with the whole > thing installs pyem, and wants to go through the examples because that's > easier than reading the doc. Then, he realizes it does not work: what is > the error message ? If you do nothing special, just the regular ImportError. Of course, you can catch that error and give whatever error message you like. > Should I handle this case in my code, or is there > some kind of mechanism to handle it automatically ? pkg_resources.resolve(['scikits.pyem[examples]']) > There are already so many emails on the scipy ML (and personally, maybe > 2/3 of the emails related to my packages) because of installation > problems, I really worry about this point. I think this hurts the whole > numpy/scipy community quite a lot (lack of one click button "make it > work"), and I am afraid this may be a step away from this goal. There's no substitute for giving your users a binary with everything it needs in one tarball, data included. However, that doesn't scale at all. Everything else is a compromise between these two concerns. If bundling the example data into your examples works for your needs, by all means, do it, and ignore all notions of scipydata packages. There's nothing wrong with copy-and-paste, here. It's still useful to build a repository of scipydata packages with metadata and parsing code already done. If you are only concerned with distributing examples with your packages, you may not use the scipydata packages in them directly, but you can still use the repository as a resource when developing your examples. >> If you want a declaration from me, I would say that the surrounding text and >> code in scipydata packages should always be under the BSD license. This should >> be noted using the "License :: OSI Approved :: BSD License" classifier in the >> setup script and in a *comment* in the code following the copyright notice. >> However, the copyright notice and license should be accompanied by a note that >> the data does not fall under this license or copyright and the metadata to look >> at to find the status of the data. I'm not good at legal boilerplate, but >> something like the following would be fine, I think: >> >> # The code and descriptive text is copyrighted and offered under the terms of >> # the BSD License from the authors; see below. However, the actual dataset may >> # have a different origin and intellectual property status. See the SOURCE and >> # COPYRIGHT variables for this information. >> # >> # Copyright (c) 2007 Enthought, Inc. >> # >> # Redistribution and use in source and binary forms, with or without >> # modification, are permitted provided that the following conditions are met: >> # ..., etc. > Ok, I will prepare something in this spirit, then. Including it in > scikits is not possible ? Including what in scikits? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Wed Jun 6 21:13:38 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 07 Jun 2007 10:13:38 +0900 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <466703C7.3080401@gmail.com> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> <46660C97.9000504@ar.media.kyoto-u.ac.jp> <4666471D.2030705@gmail.com> <46668BAE.80309@ar.media.kyoto-u.ac.jp> <466703C7.3080401@gmail.com> Message-ID: <46675BC2.8010807@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: > > There's no substitute for giving your users a binary with everything it needs in > one tarball, data included. However, that doesn't scale at all. Everything else > is a compromise between these two concerns. If bundling the example data into > your examples works for your needs, by all means, do it, and ignore all notions > of scipydata packages. There's nothing wrong with copy-and-paste, here. > > It's still useful to build a repository of scipydata packages with metadata and > parsing code already done. If you are only concerned with distributing examples > with your packages, you may not use the scipydata packages in them directly, but > you can still use the repository as a resource when developing your examples. Fair enough. > > Including what in scikits? A "meta"-package for datasets ? Eg putting scipydata in scikits ? Or do you prefer another repository to avoid licensing confusion (scikits being only for OSI approved code and data). cheers, David From robert.kern at gmail.com Thu Jun 7 14:21:58 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 07 Jun 2007 13:21:58 -0500 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <46675BC2.8010807@ar.media.kyoto-u.ac.jp> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> <46660C97.9000504@ar.media.kyoto-u.ac.jp> <4666471D.2030705@gmail.com> <46668BAE.80309@ar.media.kyoto-u.ac.jp> <466703C7.3080401@gmail.com> <46675BC2.8010807@ar.media.kyoto-u.ac.jp> Message-ID: <46684CC6.1010105@gmail.com> David Cournapeau wrote: > Robert Kern wrote: >> Including what in scikits? > A "meta"-package for datasets ? Eg putting scipydata in scikits ? Or do > you prefer another repository to avoid licensing confusion (scikits > being only for OSI approved code and data). Oh, you mean putting the scipydata packages into the same SVN repository as scikits. Maybe. Honestly, I don't want to add the burden of administering yet more Trac and SVN instances (with new sets of logins, etc.). I'll think about it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Thu Jun 7 22:58:58 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 08 Jun 2007 11:58:58 +0900 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <4666471D.2030705@gmail.com> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> <46660C97.9000504@ar.media.kyoto-u.ac.jp> <4666471D.2030705@gmail.com> Message-ID: <4668C5F2.6040603@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > If you want a declaration from me, I would say that the surrounding text and > code in scipydata packages should always be under the BSD license. 
This should > be noted using the "License :: OSI Approved :: BSD License" classifier in the > setup script and in a *comment* in the code following the copyright notice. > However, the copyright notice and license should be accompanied by a note that > the data does not fall under this license or copyright and the metadata to look > at to find the status of the data. I'm not good at legal boilerplate, but > something like the following would be fine, I think: > > # The code and descriptive text is copyrighted and offered under the terms of > # the BSD License from the authors; see below. However, the actual dataset may > # have a different origin and intellectual property status. See the SOURCE and > # COPYRIGHT variables for this information. > # > # Copyright (c) 2007 Enthought, Inc. > # > # Redistribution and use in source and binary forms, with or without > # modification, are permitted provided that the following conditions are met: > # ..., etc. > > David, thank you for pursuing this with the care that you have, and thank you > for bearing with my long-winded pontificating while you do all of the actual > work. :-) > Ok, I checked in the new old faithful data (temporary in pyem), as exactly copied from Azzalini's reference. Does that look Ok to you (from a copyright point of view) ? If yes, I will use this as a template for all other datasets I will convert: http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sandbox/pyem/data/oldfaithful/data.py David From robert.kern at gmail.com Thu Jun 7 23:15:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 07 Jun 2007 22:15:18 -0500 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <4668C5F2.6040603@ar.media.kyoto-u.ac.jp> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> <46660C97.9000504@ar.media.kyoto-u.ac.jp> <4666471D.2030705@gmail.com> <4668C5F2.6040603@ar.media.kyoto-u.ac.jp> Message-ID: <4668C9C6.5070705@gmail.com> David Cournapeau wrote: > Ok, I checked in the new old faithful data (temporary in pyem), as > exactly copied from Azzalini's reference. Does that look Ok to you (from > a copyright point of view) ? If yes, I will use this as a template for > all other datasets I will convert: > > http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sandbox/pyem/data/oldfaithful/data.py Just one nit: You have two "Copyright (c) ..." statements. I would remove the first one and leave the one that is in the same comment block as the license text. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From openopt at ukr.net Fri Jun 8 14:54:38 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 08 Jun 2007 21:54:38 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <4669A5EE.1020605@ukr.net> hi all, as Alan G Isaac told me, I shall publish url to my blog weekly as report. 
for all who are interested http://openopt.blogspot.com/ WBR, Dmitrey

From brian.lee.hawthorne at gmail.com Fri Jun 8 18:36:01 2007 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Fri, 8 Jun 2007 18:36:01 -0400 Subject: [SciPy-dev] Starting a datasets package, again In-Reply-To: <466703C7.3080401@gmail.com> References: <46651849.1040308@ar.media.kyoto-u.ac.jp> <4665DFF3.8070006@gmail.com> <46660C97.9000504@ar.media.kyoto-u.ac.jp> <4666471D.2030705@gmail.com> <46668BAE.80309@ar.media.kyoto-u.ac.jp> <466703C7.3080401@gmail.com> Message-ID: <796269930706081536h164cb134ob3089396224edaf8@mail.gmail.com>

On 6/6/07, Robert Kern wrote: > > David Cournapeau wrote: > > There are already so many emails on the scipy ML (and personally, maybe > > 2/3 of the emails related to my packages) because of installation > > problems, I really worry about this point. I think this hurts the whole > > numpy/scipy community quite a lot (lack of one click button "make it > > work"), and I am afraid this may be a step away from this goal. > > There's no substitute for giving your users a binary with everything it > needs in > one tarball, data included. However, that doesn't scale at all. Everything > else > is a compromise between these two concerns. If bundling the example data > into > your examples works for your needs, by all means, do it, and ignore all > notions > of scipydata packages. There's nothing wrong with copy-and-paste, here. > > It's still useful to build a repository of scipydata packages with > metadata and > parsing code already done. If you are only concerned with distributing > examples > with your packages, you may not use the scipydata packages in them > directly, but > you can still use the repository as a resource when developing your > examples.

We have run into this same issue of large example/testing datasets in the nipy (neuroimaging.scipy.org) project. Instead of packaging our data as a separate installable dependency, we keep the data online and developed a bit of boilerplate to transparently access it at runtime, including downloading, caching, and potentially unzipping: http://projects.scipy.org/neuroimaging/ni/browser/ni/trunk/neuroimaging/data_io/datasource.py

Used in scipy, for example, this might look something like:

>>> from scipy.data import Repository
>>> repo = Repository("http://data.scipy.org/")  # this could be set as a default
>>> datablob = repo.open("pyem/example1.mat.bz2").read()

The first time you run this it would download, unzip, and drop the result under some cache directory, then subsequent opens would open the local file. This way only the necessary data gets downloaded. The only non-builtin dependency is the path module, which is standalone (used in place of os.path) and found under neuroimaging.utils.path. Feel free to copy and modify if this is a direction you want to go. And if you do use it, then we could import it from scipy instead of maintaining our own copy :)

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From david at ar.media.kyoto-u.ac.jp Sat Jun 9 06:46:49 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 09 Jun 2007 19:46:49 +0900 Subject: [SciPy-dev] Best ways to set default values consistent within a module Message-ID: <466A8519.5090501@ar.media.kyoto-u.ac.jp>

Hi, I have a question concerning default values for scipy functions.
Basically, I have a class which have a significant number of function, several private, several private, and they provide different levels of the same functionality, with similar arguments with default values. class foo: def foo1(level = def1, dim = def2): pass def foo2(level = def1, dim = def2): pass What is the best way to maintain the default values consistent ? What I do for now is to set the default values from global variables; another way is to use the syntax *args, **kw, but I don't really like it myself, because then you do not know what the values when doing help foo.foo1 (I found this really annoying in matplotlib docstrings, for example). cheers, David From faltet at carabos.com Sat Jun 9 08:00:02 2007 From: faltet at carabos.com (Francesc Altet) Date: Sat, 09 Jun 2007 14:00:02 +0200 Subject: [SciPy-dev] Best ways to set default values consistent within a module In-Reply-To: <466A8519.5090501@ar.media.kyoto-u.ac.jp> References: <466A8519.5090501@ar.media.kyoto-u.ac.jp> Message-ID: <1181390402.2589.13.camel@carabos.com> El ds 09 de 06 del 2007 a les 19:46 +0900, en/na David Cournapeau va escriure: > Hi, > > I have a question concerning default values for scipy functions. > Basically, I have a class which have a significant number of function, > several private, several private, and they provide different levels of > the same functionality, with similar arguments with default values. > > class foo: > def foo1(level = def1, dim = def2): > pass > def foo2(level = def1, dim = def2): > pass > > What is the best way to maintain the default values consistent ? What I > do for now is to set the default values from global variables; another > way is to use the syntax *args, **kw, but I don't really like it myself, > because then you do not know what the values when doing help foo.foo1 (I > found this really annoying in matplotlib docstrings, for example). Well, if you can afford requiring Python 2.4 or higher in your application, one possibility is to use decorators. For example, consider this: ------------------------- prova.py --------------------------------- def add_dflts(): def decorator(func): def wrapper(hello, k=1, level="lvl1", dim=3): return func(hello, k, level, dim) wrapper.__name__ = func.__name__ wrapper.__dict__ = func.__dict__ wrapper.__doc__ = func.__doc__ return wrapper return decorator @add_dflts() def test(hello, k, level, dim): "Hello test" print hello, k, level, dim test("Hello!", k=2) --------------------------------------------------------------------- Importing this on ipython gives: In [1]:import prova Hello! 2 lvl1 3 So, the decorated test() function honors the defaults stated in the decorator. Also, you can see how the doc string is conveniently updated as well: In [2]:prova.test? Type: function Base Class: String Form: Namespace: Interactive File: /tmp/prova.py Definition: prova.test(hello, k=1, level='lvl1', dim=3) Docstring: Hello test As Phillip Eby says in www.ddj.com/184406073 (a strongly recommended read): """ Python decorators are a simple, highly customizable way to wrap functions or methods, annotate them with metadata, or register them with a framework of some kind. But, as a relatively new feature, their full possibilities have not yet been explored, and perhaps the most exciting uses haven't even been invented yet. """ I'm myself not very used to them, but I apparently should ;) HTH, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. 
-- Donald Knuth From david at ar.media.kyoto-u.ac.jp Sun Jun 10 04:05:11 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 10 Jun 2007 17:05:11 +0900 Subject: [SciPy-dev] Best ways to set default values consistent within a module In-Reply-To: <1181390402.2589.13.camel@carabos.com> References: <466A8519.5090501@ar.media.kyoto-u.ac.jp> <1181390402.2589.13.camel@carabos.com> Message-ID: <466BB0B7.4070108@ar.media.kyoto-u.ac.jp> Francesc Altet wrote: > El ds 09 de 06 del 2007 a les 19:46 +0900, en/na David Cournapeau va > escriure: >> Hi, >> >> I have a question concerning default values for scipy functions. >> Basically, I have a class which have a significant number of function, >> several private, several private, and they provide different levels of >> the same functionality, with similar arguments with default values. >> >> class foo: >> def foo1(level = def1, dim = def2): >> pass >> def foo2(level = def1, dim = def2): >> pass >> >> What is the best way to maintain the default values consistent ? What I >> do for now is to set the default values from global variables; another >> way is to use the syntax *args, **kw, but I don't really like it myself, >> because then you do not know what the values when doing help foo.foo1 (I >> found this really annoying in matplotlib docstrings, for example). > > Well, if you can afford requiring Python 2.4 or higher in your > application, one possibility is to use decorators. For example, > consider this: > > ------------------------- prova.py --------------------------------- > def add_dflts(): > def decorator(func): > def wrapper(hello, k=1, level="lvl1", dim=3): > return func(hello, k, level, dim) > wrapper.__name__ = func.__name__ > wrapper.__dict__ = func.__dict__ > wrapper.__doc__ = func.__doc__ > return wrapper > return decorator > > @add_dflts() > def test(hello, k, level, dim): > "Hello test" > print hello, k, level, dim > > test("Hello!", k=2) > --------------------------------------------------------------------- > > Importing this on ipython gives: > > In [1]:import prova > Hello! 2 lvl1 3 > > So, the decorated test() function honors the defaults stated in the > decorator. Also, you can see how the doc string is conveniently updated > as well: > > In [2]:prova.test? > Type: function > Base Class: > String Form: > Namespace: Interactive > File: /tmp/prova.py > Definition: prova.test(hello, k=1, level='lvl1', dim=3) > Docstring: > Hello test Mm. how is this different than "classic" function factory ? Depending on 2.4 features is a no-no anyway, because this code is meant to be used in scipy. Well, I will think about using function generators, but this sounds a bit overkill for what I want to do. > > As Phillip Eby says in www.ddj.com/184406073 (a strongly recommended > read): > Argh, it made me realize that I was using some python2.4 specific features in my code already :) cheers, David From jtravs at gmail.com Sun Jun 10 12:55:01 2007 From: jtravs at gmail.com (John Travers) Date: Sun, 10 Jun 2007 17:55:01 +0100 Subject: [SciPy-dev] Status of sandbox.spline and my other code. Message-ID: <3a1077e70706100955p4a705acy3b26d9cf9cf0298b@mail.gmail.com> Hi all, This email is intended to clarify the status of various bits of code I've committed/not committed so I don't leave too much of a mess after me. I'm about to start writing my PhD thesis, so I'll be too busy to work on scipy until September. 1. sandbox.spline stuff The module here was supposed to be a tidy up of scipy.interpolate. 
All of the Dierckx functionality has been moved to f2py wrappers, which I think makes maintenance easier. However, it does not add any new functionality, and in retrospect appears to have been a waste of my time. So I'll leave it to you to decide if you want to integrate it into scipy.interpolate or not; though it was originally planned to be a separate module to clear up the ambiguity between interpolation and smoothing spline functionality. However, I have added 20 unit tests to the code (most of which simply check the wrapper against the pure fortran output, but still useful I think) which could easily be moved over to the current scipy.interpolate module.

2. Radial basis function module in sandbox (rbf) This code has recently had attention from Robert Hetland and is quite improved. I'm not sure if it will ever go into scipy though. If not, I will put it into a scikit at some point. Related to this, the wiki page has been updated, however, it is at http://www.scipy.org/RadialBasisFunctions but should probably be in the Cookbook. I don't know how to move it over, so some pointers would be helpful. In addition there are four unused attachments I uploaded that could be deleted.

3. Boundary value ODE solver (BVP_SOLVER) I have fairly functional code interfacing to some very good BVP solver code (http://cs.smu.ca/~muir/BVP_SOLVER_Webpage.shtml). The license is good for scipy (as confirmed by the authors), but the code is Fortran 95 - therefore it apparently cannot go into scipy. I'll make a scikit when I get time (probably September).

4. An interface to the pikaia genetic algorithm routine I have a python interface to pikaia fortran code for genetic algorithm optimisation of real valued functions (http://www.hao.ucar.edu/Public/models/pikaia/pikaia.html). The fortran code itself is robust and highly optimised for numerical work, though it is nowhere near as general as the sandbox.ga module. I think it is useful to go in scipy.optimization alongside the annealing module. If people disagree I'll make it into a scikit.

I hope this clears a few things up. Best regards, John Travers

From strawman at astraw.com Sun Jun 10 16:09:58 2007 From: strawman at astraw.com (Andrew Straw) Date: Sun, 10 Jun 2007 13:09:58 -0700 Subject: [SciPy-dev] Status of sandbox.spline and my other code. In-Reply-To: <3a1077e70706100955p4a705acy3b26d9cf9cf0298b@mail.gmail.com> References: <3a1077e70706100955p4a705acy3b26d9cf9cf0298b@mail.gmail.com> Message-ID: <466C5A96.1010509@astraw.com>

John Travers wrote: > Related to this, the wiki page has been updated, however, it is at > http://www.scipy.org/RadialBasisFunctions but should probably be in > the Cookbook. I don't know how to move it over, so some pointers would > be helpful. In addition there are four unused attachments I uploaded > that could be deleted.

I added "JohnTravers" to the EditorsGroup page, so you should be able to do the following: Once you're logged into the wiki, select from the "More Actions:" menu on the left "Rename Page". Also you can then recreate a new page at the original location that has contents just "#redirect Cookbook/MyNewLocation" and both locations will work. To delete attachments, click into the "Attachments" page and go from there. -Andrew

From jtravs at gmail.com Sun Jun 10 16:33:49 2007 From: jtravs at gmail.com (John Travers) Date: Sun, 10 Jun 2007 21:33:49 +0100 Subject: [SciPy-dev] Status of sandbox.spline and my other code.
In-Reply-To: <466C5A96.1010509@astraw.com> References: <3a1077e70706100955p4a705acy3b26d9cf9cf0298b@mail.gmail.com> <466C5A96.1010509@astraw.com> Message-ID: <3a1077e70706101333i4514bdfey53d53c6e548cbc08@mail.gmail.com> On 10/06/07, Andrew Straw wrote: > John Travers wrote: > > > Related to this, the wiki page has been updated, however, it is at > > http://www.scipy.org/RadialBasisFunctions but should probably be in > > the Cookbook. I don't know how to move it over, so some pointers would > > be helpful. In addition there are four unused attachments I uploaded > > that could be deleted. > > I added "JohnTravers" to the EditorsGroup page, so you should be able to > do the following: Thanks very much. I have now made the changes I wanted to. Cheers, John From rex at nosyntax.com Sun Jun 10 16:35:08 2007 From: rex at nosyntax.com (rex) Date: Sun, 10 Jun 2007 13:35:08 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL Message-ID: <20070610203508.GB5999@x2.nosyntax.com> Using recent svn SciPy: /usr/local/src/scipy # python setup.py config --compiler=intel --fcompiler=intel build_clib --compiler=intel --fcompiler=intel build_ext --compiler=intel --fcompiler=intel install non-existing path in 'scipy/cluster': 'tests' non-existing path in 'scipy/cluster': 'src/vq_wrap.cpp' mkl_info: FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] non-existing path in 'scipy/fftpack': 'tests' could not resolve pattern in 'scipy/fftpack': 'dfftpack/*.f' non-existing path in 'scipy/fftpack': 'fftpack.pyf' non-existing path in 'scipy/fftpack': 'src/zfft.c' ... blas_opt_info: blas_mkl_info: FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] could not resolve pattern in 'scipy/integrate': 'linpack_lite/*.f' could not resolve pattern in 'scipy/integrate': 'mach/*.f' could not resolve pattern in 'scipy/integrate': 'quadpack/*.f' could not resolve pattern in 'scipy/integrate': 'odepack/*.f' non-existing path in 'scipy/integrate': '_quadpackmodule.c' ... 
non-existing path in 'scipy/io': 'docs' non-existing path in 'scipy/lib/blas': 'fblas.pyf.src' non-existing path in 'scipy/lib/blas': 'fblaswrap.f.src' could not resolve pattern in 'scipy/lib/blas': 'fblas_l?.pyf.src' non-existing path in 'scipy/lib/blas': 'cblas.pyf.src' could not resolve pattern in 'scipy/lib/blas': 'cblas_l?.pyf.src' non-existing path in 'scipy/lib/blas': 'tests' lapack_opt_info: lapack_mkl_info: FOUND: libraries = ['mkl_lapack', 'mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] FOUND: libraries = ['mkl_lapack', 'mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] non-existing path in 'scipy/lib/lapack': 'flapack.pyf.src' could not resolve pattern in 'scipy/lib/lapack': 'flapack_*.pyf.src' non-existing path in 'scipy/lib/lapack': 'clapack.pyf.src' non-existing path in 'scipy/lib/lapack': 'calc_lwork.f' non-existing path in 'scipy/lib/lapack': 'atlas_version.c' non-existing path in 'scipy/lib/lapack': 'tests' non-existing path in 'scipy/linalg': 'src/fblaswrap.f' ... non-existing path in 'scipy/linalg': 'tests' non-existing path in 'scipy/linsolve': 'tests' could not resolve pattern in 'scipy/linsolve': 'SuperLU/SRC/*.c' non-existing path in 'scipy/linsolve': '_zsuperlumodule.c' ... umfpack_info: libraries umfpack not found in /opt/intel/mkl/9.1/lib/32 /usr/local/lib/python2.5/site-packages/numpy/distutils/system_info.py:403: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE [many more path messages elided] Warning: Subpackage 'Lib' configuration returned as 'scipy' running config running build_clib Could not locate executable ecc customize IntelCCompiler customize IntelCCompiler using build_clib Could not locate executable ifc Could not locate executable efort Could not locate executable efc Could not locate executable efort Could not locate executable efc customize IntelFCompiler Couldn't match compiler version for 'Intel(R) Fortran Compiler for applications running on IA-32, Version 10.0 Build 20070426 Package ID: l_fc_p_10.0.023\nCopyright (C) 1985-2007 Intel Corporation. All rights reserved.\nFOR NON-COMMERCIAL USE ONLY\n\n Intel Fortran 10.0-1023' customize IntelFCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /opt/intel/fc/10.0.023/bin/ifort -72 -w90 -w95 -KPIC -cm -O3 -unroll -xM -arch SSE2 Fortran f90 compiler: /opt/intel/fc/10.0.023/bin/ifort -FR -KPIC -cm -O3 -unroll -xM -arch SSE2 Fortran fix compiler: /opt/intel/fc/10.0.023/bin/ifort -FI -KPIC -cm -O3 -unroll -xM -arch SSE2 error: file 'dfftpack/*.f' does not exist It exited here. Any pointers appreciated, thanks. 
-rex From rex at nosyntax.com Sun Jun 10 18:45:52 2007 From: rex at nosyntax.com (rex) Date: Sun, 10 Jun 2007 15:45:52 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070610203508.GB5999@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> Message-ID: <20070610224552.GC5999@x2.nosyntax.com> rex [2007-06-10 13:37]: > Using recent svn SciPy: > > /usr/local/src/scipy # python setup.py config --compiler=intel --fcompiler=intel build_clib --compiler=intel --fcompiler=intel build_ext --compiler=intel --fcompiler=intel install I get the same errors when using the gcc compiler, which is a strong hint that blas and lapack need to be installed. Duh! Doing that now... -rex From rex at nosyntax.com Sun Jun 10 20:24:32 2007 From: rex at nosyntax.com (rex) Date: Sun, 10 Jun 2007 17:24:32 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070610224552.GC5999@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> <20070610224552.GC5999@x2.nosyntax.com> Message-ID: <20070611002432.GD5999@x2.nosyntax.com> rex [2007-06-10 16:12]: > rex [2007-06-10 13:37]: > > Using recent svn SciPy: > > > > /usr/local/src/scipy # python setup.py config --compiler=intel --fcompiler=intel build_clib --compiler=intel --fcompiler=intel build_ext --compiler=intel --fcompiler=intel install > > I get the same errors when using the gcc compiler, which is a strong > hint that blas and lapack need to be installed. Duh! Doing that now... It still fails with: error: file 'dfftpack/*.f' does not exist Details are below. This is with SUSE 10.2 Intel MKL9.1, & Intel FORTRAN 10 I followed Steve Baum's instructions at: http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 cd scipy #recent svn mkdir -p blas cd blas wget http://www.netlib.org/blas/blas.tgz tar xzf blas.tgz #this results in a blas/BLAS/*.f directory structure. I moved all files #up into the blas directory and eliminated the BLAS subdirectory. Instead of: #g77 -fno-second-underscore -O2 -c *.f ifort -fno-second-underscore -c -xT -fast *.f ifort: command line warning #10006: ignoring unknown option '-fno-second-underscore' #this ignored option may need to be fixed, but I think not. ar r libfblas.a *.o ranlib libfblas.a rm -rf *.o cp libfblas.a /usr/local/lib export BLAS=/usr/local/lib/libfblas.a cd .. wget http://www.netlib.org/lapack/lapack.tgz tar xzf lapack.tgz cd LAPACK cp INSTALL/make.inc.LINUX make.inc Now you must edit make.inc and change (if necessary) the following values: #FORTRAN = g77 FORTRAN = ifort #OPTS = -funroll-all-loops -O3 OPTS = -xT -funroll-all-loops -fast DRVOPTS = $(OPTS) NOOPT = #LOADER = g77 LOADER = ifort Now to finish the compilation: make lapacklib >& make.log make clean cp lapack_LINUX.a libflapack.a cp libflapack.a /usr/local/lib export LAPACK=/usr/local/lib/libflapack.a cd .. 
python setup.py config --compiler=intel --fcompiler=intel build_clib --compiler=intel --fcompiler=intel build_ext --compiler=intel --fcompiler=intel install >& inst.log non-existing path in 'scipy/cluster': 'tests' non-existing path in 'scipy/cluster': 'src/vq_wrap.cpp' mkl_info: FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] non-existing path in 'scipy/fftpack': 'tests' could not resolve pattern in 'scipy/fftpack': 'dfftpack/*.f' non-existing path in 'scipy/fftpack': 'fftpack.pyf' non-existing path in 'scipy/fftpack': 'src/zfft.c' non-existing path in 'scipy/fftpack': 'src/drfft.c' non-existing path in 'scipy/fftpack': 'src/zrfft.c' non-existing path in 'scipy/fftpack': 'src/zfftnd.c' non-existing path in 'scipy/fftpack': 'src/zfft_djbfft.c' non-existing path in 'scipy/fftpack': 'src/zfft_fftpack.c' non-existing path in 'scipy/fftpack': 'src/zfft_fftw.c' non-existing path in 'scipy/fftpack': 'src/zfft_fftw3.c' non-existing path in 'scipy/fftpack': 'src/zfft_mkl.c' non-existing path in 'scipy/fftpack': 'convolve.pyf' non-existing path in 'scipy/fftpack': 'src/convolve.c' blas_opt_info: blas_mkl_info: FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] ... customize IntelFCompiler Couldn't match compiler version for 'Intel(R) Fortran Compiler for applications running on IA-32, Version 10.0 Build 20070426 Package ID: +l_fc_p_10.0.023\nCopyright (C) 1985-2007 Intel Corporation. All rights reserved.\nFOR NON-COMMERCIAL USE ONLY\n\n Intel Fortran 10.0-1023' customize IntelFCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /opt/intel/fc/10.0.023/bin/ifort -72 -w90 -w95 -KPIC -cm -O3 -unroll -xM -arch SSE2 Fortran f90 compiler: /opt/intel/fc/10.0.023/bin/ifort -FR -KPIC -cm -O3 -unroll -xM -arch SSE2 Fortran fix compiler: /opt/intel/fc/10.0.023/bin/ifort -FI -KPIC -cm -O3 -unroll -xM -arch SSE2 error: file 'dfftpack/*.f' does not exist It does exist as you can see: /usr/local/src/scipy/Lib/fftpack/dfftpack # ls dcosqb.f dcost.f dfftb.f dffti1.f doc.double dsinqi.f dsinti.f zfftb.f zffti1.f dcosqf.f dcosti.f dfftf1.f dffti.f dsinqb.f dsint1.f .svn zfftf1.f zffti.f dcosqi.f dfftb1.f dfftf.f doc dsinqf.f dsint.f zfftb1.f zfftf.f Again, any hints appreciated. -rex From openopt at ukr.net Mon Jun 11 07:22:10 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 11 Jun 2007 14:22:10 +0300 Subject: [SciPy-dev] NumPy_for_Matlab_Users page drawbacks Message-ID: <466D3062.9000308@ukr.net> I think the page still has some problems: 1) repmat entry: python equivalent is tile(), not repmat(). 
Also check for correct** **numpy.matrix entry for repmat is needed (I don't know is all ok there) 2) I think the page misses entry for c = [a b] equivalent for flat arrays (btw I need the one for my purposes) 3) everywhere is numerical equivalents, like a(1:3,5:9) -> a[0:3][:,4:9] But in size, I don't know why, symbolic eqivalent is implemented: size(a,n) -> a.shape[n] # btw I still think there should be shape[n-1], or, better, size(a,2) -> a.shape[1] There is a comment in 4th column about 0 vs 1 indexing, but who really reads the column?! D. From openopt at ukr.net Mon Jun 11 07:38:59 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 11 Jun 2007 14:38:59 +0300 Subject: [SciPy-dev] problems with concatenate - isn't it a bug? (numpy 1.0.1) Message-ID: <466D3453.7060601@ukr.net> //numpy 1.0.1 from numpy import * concatenate((array([1,2]), array([3,4]))) ->array([ 1, 2, 3, 4]) but concatenate((array(1), array([2,3]))) Traceback (innermost last): File "", line 1, in ValueError: 0-d arrays can't be concatenated D. From mike at tashcorp.net Mon Jun 11 10:26:51 2007 From: mike at tashcorp.net (Mike Kost) Date: Mon, 11 Jun 2007 09:26:51 -0500 Subject: [SciPy-dev] problems with concatenate - isn't it a bug? (numpy 1.0.1) In-Reply-To: <466D3453.7060601@ukr.net> References: <466D3453.7060601@ukr.net> Message-ID: D, The intended form is: concatenate((array([1]), array([2,3]))) Whether that's right or not is up for debate. Mike On 6/11/07, dmitrey wrote: > > //numpy 1.0.1 > > from numpy import * > concatenate((array([1,2]), array([3,4]))) > > ->array([ 1, 2, 3, 4]) > > but > > concatenate((array(1), array([2,3]))) > Traceback (innermost last): > File "", line 1, in > ValueError: 0-d arrays can't be concatenated > > D. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pearu at cens.ioc.ee Mon Jun 11 07:30:55 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 11 Jun 2007 13:30:55 +0200 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070610203508.GB5999@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> Message-ID: <466D326F.2090806@cens.ioc.ee> rex wrote: > error: file 'dfftpack/*.f' does not exist > > It exited here. Any pointers appreciated, thanks. It is due to a bug in numpy introduced, I think, just before releasing 1.0.3. It has been fixed in numpy svn. So, I suggest using numpy from svn as at the moment it has only bug fixes compared to the last release. And it might be worth considering releasing 1.0.3.1 as with the given bug all codes using 1.0.3 numpy.distutils will fail to build. Pearu From robert.kern at gmail.com Mon Jun 11 17:38:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 Jun 2007 16:38:37 -0500 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <466D326F.2090806@cens.ioc.ee> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> Message-ID: <466DC0DD.4040604@gmail.com> Pearu Peterson wrote: > > rex wrote: > >> error: file 'dfftpack/*.f' does not exist >> >> It exited here. Any pointers appreciated, thanks. > > It is due to a bug in numpy introduced, I think, just before releasing > 1.0.3. It has been fixed in numpy svn. What piece of code, specifically? > So, I suggest using > numpy from svn as at the moment it has only bug fixes compared to the > last release. 
No, SVN has quite a lot of features merged from David Cooke's numpy.distutils branch, too. > And it might be worth considering releasing 1.0.3.1 as with the given > bug all codes using 1.0.3 numpy.distutils will fail to build. Under what circumstances? I can build scipy, for instance, just fine. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rex at nosyntax.com Mon Jun 11 19:11:40 2007 From: rex at nosyntax.com (rex) Date: Mon, 11 Jun 2007 16:11:40 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <466D326F.2090806@cens.ioc.ee> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> Message-ID: <20070611231140.GI4914@x2.nosyntax.com> Pearu Peterson [2007-06-11 15:12]: > > > rex wrote: > > > error: file 'dfftpack/*.f' does not exist > > > > It exited here. Any pointers appreciated, thanks. > > It is due to a bug in numpy introduced, I think, just before releasing > 1.0.3. It has been fixed in numpy svn. So, I suggest using > numpy from svn as at the moment it has only bug fixes compared to the > last release. Thank you! Building numpy from svn did eliminate the "error: file 'dfftpack/*.f' does not exist" message when building scipy, and the scipy compilation proceeded much further. However, it still produces an error which appears to be due to a failure to recognize the ID of ifort 10. I've already had to fix one problem in distutils due to Intel changing 'mkl_lapack32' in MKL8.1 to 'mkl_lapack' in MKL9.1. It appears another change is needed, perhaps in numpy/distutils/fcompiler/intel.py -rex From rex at nosyntax.com Mon Jun 11 19:20:36 2007 From: rex at nosyntax.com (rex) Date: Mon, 11 Jun 2007 16:20:36 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <466DC0DD.4040604@gmail.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <466DC0DD.4040604@gmail.com> Message-ID: <20070611232036.GK4914@x2.nosyntax.com> Robert Kern [2007-06-11 15:12]: > Pearu Peterson wrote: > > And it might be worth considering releasing 1.0.3.1 as with the given > > bug all codes using 1.0.3 numpy.distutils will fail to build. > > Under what circumstances? I can build scipy, for instance, just fine. You can build scipy using Intel's icc v10, ifort v10 & MKL v9.1? Not without changing some files in distutils, I'll bet. -rex From robert.kern at gmail.com Mon Jun 11 19:43:44 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 11 Jun 2007 18:43:44 -0500 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070611232036.GK4914@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <466DC0DD.4040604@gmail.com> <20070611232036.GK4914@x2.nosyntax.com> Message-ID: <466DDE30.6090303@gmail.com> rex wrote: > Robert Kern [2007-06-11 15:12]: >> Pearu Peterson wrote: >>> And it might be worth considering releasing 1.0.3.1 as with the given >>> bug all codes using 1.0.3 numpy.distutils will fail to build. >> Under what circumstances? I can build scipy, for instance, just fine. > > You can build scipy using Intel's icc v10, ifort v10 & MKL v9.1? Not without > changing some files in distutils, I'll bet. That's why I asked "Under what circumstances?". Even in the context of the thread, it's not clear to me what Pearu meant. 
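A note on what "Couldn't match compiler version" means mechanically: numpy.distutils runs the compiler's version command and matches the printed banner against a regular expression; when the pattern does not recognize a new banner, the warning above is printed and no version is returned. The snippet below only illustrates that kind of check against the ifort 10 banner quoted earlier in this thread -- it is not the actual numpy.distutils code:

    import re

    # ifort 10 banner as it appears in the build log earlier in the thread (shortened)
    banner = ("Intel(R) Fortran Compiler for applications running on IA-32, "
              "Version 10.0 Build 20070426 Package ID: l_fc_p_10.0.023")

    # An illustrative pattern in the spirit of the Intel entries in
    # numpy/distutils/fcompiler/intel.py: look for "Version <n.n>" in the banner.
    m = re.search(r'Version\s+(\d+(\.\d+)+)', banner)
    if m:
        print "matched compiler version:", m.group(1)   # -> 10.0
    else:
        print "no match: the version check fails for this banner"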
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rex at nosyntax.com Mon Jun 11 23:31:06 2007 From: rex at nosyntax.com (rex) Date: Mon, 11 Jun 2007 20:31:06 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070611231140.GI4914@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> Message-ID: <20070612033106.GM4914@x2.nosyntax.com> rex [2007-06-11 16:12]: > Pearu Peterson [2007-06-11 15:12]: > > It is due to a bug in numpy introduced, I think, just before releasing > > 1.0.3. It has been fixed in numpy svn. > > Thank you! Building numpy from svn did eliminate the "error: file > 'dfftpack/*.f' does not exist" message when building scipy, and the > scipy compilation proceeded much further. However, it still produces an > error which appears to be due to a failure to recognize the ID of ifort > 10. > > I've already had to fix one problem in distutils due to Intel changing > 'mkl_lapack32' in MKL8.1 to 'mkl_lapack' in MKL9.1. It appears another > change is needed, perhaps in numpy/distutils/fcompiler/intel.py I don't understand the distutils/* code well enough to fix it, but it appears that the problem is triggered by packages that cause: library 'mach' defined more than once, overwriting build_info {'sources': ['Lib/integrate/mach/i1mach.f', 'Lib/integrate/mach/d1mach.f', 'Lib/integrate/mach/r1mach.f', 'Lib/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/integrate/setup.pyc', 1)}, 'source_languages': ['f77']} with {'sources': ['Lib/special/mach/i1mach.f', 'Lib/special/mach/d1mach.f', 'Lib/special/mach/r1mach.f', 'Lib/special/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/special/setup.pyc', 1)}, 'source_languages': ['f77']}. extending extension 'scipy.linsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] This results in an error: [...] customize IntelCCompiler customize IntelCCompiler using build_ext Traceback (most recent call last): File "setup.py", line 55, in setup_package() File "setup.py", line 47, in setup_package configuration=configuration ) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/core.py", line 176, in setup return old_setup(**new_attr) File "/usr/lib/python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/lib/python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/usr/lib/python2.5/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib/python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/usr/local/lib/python2.5/site-packages/numpy/distutils/command/build_ext.py", line 181, in run if fcompiler and fcompiler.get_version(): File "/usr/local/lib/python2.5/site-packages/numpy/distutils/ccompiler.py", line 265, in CCompiler_get_version cmd = ' '.join(version_cmd) TypeError: sequence item 1: expected string, NoneType found I determined this by the brute force method of deleting the current .f file that triggered the problem. The result was another .f file causing the same problem, leading to my conclusion that something in distutils/* is broken for the v10 Intel compilers. 
(I've already verified and fixed a problem in distutils/* re v9.1 MKL). Here's a shortend (by ~5000 lines) session illustrating the error: /usr/local/src/scipy/python setup.py config --compiler=intel --fcompiler=intel build >& build_noansari.log mkl_info: FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] blas_opt_info: blas_mkl_info: FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] lapack_opt_info: lapack_mkl_info: FOUND: libraries = ['mkl_lapack', 'mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] FOUND: libraries = ['mkl_lapack', 'mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/9.1/lib/32'] define_macros = [('SCIPY_MKL_H', None)] include_dirs = ['/opt/intel/mkl/9.1/include'] non-existing path in 'Lib/linsolve': 'tests' umfpack_info: libraries umfpack not found in /opt/intel/mkl/9.1/lib/32 /usr/local/lib/python2.5/site-packages/numpy/distutils/system_info.py:403: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE Warning: Subpackage 'Lib' configuration returned as 'scipy' non-existing path in 'Lib/maxentropy': 'doc' running config running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources creating build creating build/src.linux-i686-2.5 creating build/src.linux-i686-2.5/scipy building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "superlu_src" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.fftpack._fftpack" sources creating build/src.linux-i686-2.5/Lib creating build/src.linux-i686-2.5/Lib/fftpack f2py options: [] f2py: Lib/fftpack/fftpack.pyf Reading fortran codes... Reading file 'Lib/fftpack/fftpack.pyf' (format:free) Post-processing... Block: _fftpack Block: zfft Block: drfft Block: zrfft Block: zfftnd Block: destroy_zfft_cache Block: destroy_zfftnd_cache Block: destroy_drfft_cache Post-processing (stage 2)... Building modules... Building module "_fftpack"... Constructing wrapper function "zfft"... 
getarrdims:warning: assumed shape array, using 0 instead of '*' [snip] Lib/fftpack/dfftpack/dfftb1.f(129): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftb1.f(142): (col. 16) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftb1.f(187): (col. 10) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftb1.f(214): (col. 16) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftb1.f(384): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftb1.f(272): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftb1.f(237): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftb1.f(315): (col. 13) remark: LOOP WAS VECTORIZED. ifort:f77: Lib/fftpack/dfftpack/dfftf1.f ifort: command line remark #10148: option '-K' not supported Lib/fftpack/dfftpack/dfftf1.f(56): (col. 10) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(135): (col. 10) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(76): (col. 10) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(103): (col. 16) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(153): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(166): (col. 16) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(185): (col. 10) remark: PERMUTED LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(180): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(204): (col. 16) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(379): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(277): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(244): (col. 13) remark: LOOP WAS VECTORIZED. Lib/fftpack/dfftpack/dfftf1.f(318): (col. 13) remark: LOOP WAS VECTORIZED. ifort:f77: Lib/fftpack/dfftpack/dffti1.f ifort: command line remark #10148: option '-K' not supported [at least hundreds of yummy "LOOP WAS VECTORIZED" lines (and others) snipped] customize IntelCCompiler customize IntelCCompiler using build_ext library 'mach' defined more than once, overwriting build_info {'sources': ['Lib/integrate/mach/i1mach.f', 'Lib/integrate/mach/d1mach.f', 'Lib/integrate/mach/r1mach.f', 'Lib/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/integrate/setup.pyc', 1)}, 'source_languages': ['f77']} with {'sources': ['Lib/special/mach/i1mach.f', 'Lib/special/mach/d1mach.f', 'Lib/special/mach/r1mach.f', 'Lib/special/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/special/setup.pyc', 1)}, 'source_languages': ['f77']}. 
extending extension 'scipy.linsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize IntelCCompiler customize IntelCCompiler using build_ext Traceback (most recent call last): File "setup.py", line 55, in setup_package() File "setup.py", line 47, in setup_package configuration=configuration ) File "/usr/local/lib/python2.5/site-packages/numpy/distutils/core.py", line 176, in setup return old_setup(**new_attr) File "/usr/lib/python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/lib/python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/usr/lib/python2.5/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib/python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/usr/local/lib/python2.5/site-packages/numpy/distutils/command/build_ext.py", line 181, in run if fcompiler and fcompiler.get_version(): File "/usr/local/lib/python2.5/site-packages/numpy/distutils/ccompiler.py", line 265, in CCompiler_get_version cmd = ' '.join(version_cmd) TypeError: sequence item 1: expected string, NoneType found From david at ar.media.kyoto-u.ac.jp Tue Jun 12 02:27:46 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 12 Jun 2007 15:27:46 +0900 Subject: [SciPy-dev] Reliable way to know memory consumption of functions/scripts/etc.. Message-ID: <466E3CE2.9030007@ar.media.kyoto-u.ac.jp> Hi, I was wondering whether there was a simple and reliable way to know how much memory a python script takes between some code boundaries. I don't need a really precise thing, but more something like how does a given code scale given its input: does it take the same amount, several times the same amount, etc... Is this possible in python ? cheers, David From pearu at cens.ioc.ee Tue Jun 12 06:34:27 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 12 Jun 2007 12:34:27 +0200 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <466DC0DD.4040604@gmail.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <466DC0DD.4040604@gmail.com> Message-ID: <466E76B3.1060906@cens.ioc.ee> Robert Kern wrote: > Pearu Peterson wrote: >> rex wrote: >> >>> error: file 'dfftpack/*.f' does not exist >>> >>> It exited here. Any pointers appreciated, thanks. >> It is due to a bug in numpy introduced, I think, just before releasing >> 1.0.3. It has been fixed in numpy svn. > > What piece of code, specifically? http://projects.scipy.org/scipy/numpy/changeset/3845#file0 has the bug, also numpy 1.0.3-2 tar-ball has it. >> So, I suggest using >> numpy from svn as at the moment it has only bug fixes compared to the >> last release. > > No, SVN has quite a lot of features merged from David Cooke's numpy.distutils > branch, too. ok, I forgot thoses. >> And it might be worth considering releasing 1.0.3.1 as with the given >> bug all codes using 1.0.3 numpy.distutils will fail to build. > > Under what circumstances? I can build scipy, for instance, just fine. Are you using numpy 1.0.3 tar-ball? 
There are at least two error reports of the given kind (building scipy fails with `dfftpack/*.f` not existing error) that was caused by this numpy.distutils bug. Pearu From pearu at cens.ioc.ee Tue Jun 12 06:44:35 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 12 Jun 2007 12:44:35 +0200 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070612033106.GM4914@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> Message-ID: <466E7913.1040300@cens.ioc.ee> rex wrote: > rex [2007-06-11 16:12]: >> Pearu Peterson [2007-06-11 15:12]: >>> It is due to a bug in numpy introduced, I think, just before releasing >>> 1.0.3. It has been fixed in numpy svn. >> Thank you! Building numpy from svn did eliminate the "error: file >> 'dfftpack/*.f' does not exist" message when building scipy, and the >> scipy compilation proceeded much further. However, it still produces an >> error which appears to be due to a failure to recognize the ID of ifort >> 10. >> >> I've already had to fix one problem in distutils due to Intel changing >> 'mkl_lapack32' in MKL8.1 to 'mkl_lapack' in MKL9.1. It appears another >> change is needed, perhaps in numpy/distutils/fcompiler/intel.py > > I don't understand the distutils/* code well enough to fix it, but it > appears that the problem is triggered by packages that cause: > > library 'mach' defined more than once, overwriting build_info {'sources': ['Lib/integrate/mach/i1mach.f', 'Lib/integrate/mach/d1mach.f', 'Lib/integrate/mach/r1mach.f', 'Lib/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/integrate/setup.pyc', 1)}, 'source_languages': ['f77']} with {'sources': ['Lib/special/mach/i1mach.f', 'Lib/special/mach/d1mach.f', 'Lib/special/mach/r1mach.f', 'Lib/special/mach/xerror.f'], 'config_fc': {'noopt': ('Lib/special/setup.pyc', 1)}, 'source_languages': ['f77']}. > extending extension 'scipy.linsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] I think this is irrelevant. > > This results in an error: > > [...] > customize IntelCCompiler > customize IntelCCompiler using build_ext > Traceback (most recent call last): > File "setup.py", line 55, in > setup_package() > File "setup.py", line 47, in setup_package > configuration=configuration ) > File "/usr/local/lib/python2.5/site-packages/numpy/distutils/core.py", line 176, in setup > return old_setup(**new_attr) > File "/usr/lib/python2.5/distutils/core.py", line 151, in setup > dist.run_commands() > File "/usr/lib/python2.5/distutils/dist.py", line 974, in run_commands > self.run_command(cmd) > File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command > cmd_obj.run() > File "/usr/lib/python2.5/distutils/command/build.py", line 112, in run > self.run_command(cmd_name) > File "/usr/lib/python2.5/distutils/cmd.py", line 333, in run_command > self.distribution.run_command(command) > File "/usr/lib/python2.5/distutils/dist.py", line 994, in run_command > cmd_obj.run() > File "/usr/local/lib/python2.5/site-packages/numpy/distutils/command/build_ext.py", line 181, in run > if fcompiler and fcompiler.get_version(): > File "/usr/local/lib/python2.5/site-packages/numpy/distutils/ccompiler.py", line 265, in CCompiler_get_version > cmd = ' '.join(version_cmd) > TypeError: sequence item 1: expected string, NoneType found > > I determined this by the brute force method of deleting the current .f file > that triggered the problem. 
The result was another .f file causing the > same problem, leading to my conclusion that something in distutils/* is > broken for the v10 Intel compilers. (I've already verified and fixed a > problem in distutils/* re v9.1 MKL). The Fortran compiler version checking code (among others related codes) in numpy.distutils has changed after merging David Cooke branch with numpy trunk. It appears that the new code is not well tested. Unfortunately I don't have intel compiler in my system to track down the problems you experince. I may get a chance to look at it in more detail may be on Friday. Pearu From robert.kern at gmail.com Tue Jun 12 12:16:44 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Jun 2007 11:16:44 -0500 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <466E76B3.1060906@cens.ioc.ee> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <466DC0DD.4040604@gmail.com> <466E76B3.1060906@cens.ioc.ee> Message-ID: <466EC6EC.6080002@gmail.com> Pearu Peterson wrote: > Are you using numpy 1.0.3 tar-ball? There are at least two error reports > of the given kind (building scipy fails with `dfftpack/*.f` not existing > error) that was caused by this numpy.distutils bug. Ah, I am using the first 1.0.3 tarball, not the one with r3845 in it. Yes, I think we should release a 1.0.3.1 that fixes that issue and the problem that Travis had that caused him to do that in the first place. We should branch it from the tag rather than the trunk because of David Cooke's merge. Also, I removed misc_util.get_path() in favor of another function thinking that it was only used internally. Travis, can I put it back for 1.0.3.1? Not having it breaks some older forms of setup.py's that were closely translated from scipy_distutils. Notably, scipy 0.5.2 still has one (of my making!). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rex at nosyntax.com Tue Jun 12 13:53:32 2007 From: rex at nosyntax.com (rex) Date: Tue, 12 Jun 2007 10:53:32 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <466E7913.1040300@cens.ioc.ee> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> <466E7913.1040300@cens.ioc.ee> Message-ID: <20070612175332.GB4818@x2.nosyntax.com> Pearu Peterson [2007-06-12 07:45]: > > > rex wrote: > >> I've already had to fix one problem in distutils due to Intel changing > >> 'mkl_lapack32' in MKL8.1 to 'mkl_lapack' in MKL9.1. It appears another > >> change is needed, perhaps in numpy/distutils/fcompiler/intel.py > > The Fortran compiler version checking code (among others related codes) > in numpy.distutils has changed after merging David Cooke branch with > numpy trunk. It appears that the new code is not well tested. In a post on the SciPy-users list George Nurser found the problem that I surmised exists in numpy/distutils/fcompiler/intel.py Following his lead, I changed all the instances of 'version_cmd' : ['', None] to 'version_cmd' : ['', '-V'] in numpy/distutils/fcompiler/intel.py, and rebuilt numpy. 
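[Editorial aside: the `['', None]` quoted in the mail almost certainly lost an angle-bracketed placeholder to the archive's HTML scrubbing. The snippet below is therefore a hedged reconstruction of the kind of change being described, in the style of the `executables` table used by numpy/distutils/fcompiler/intel.py at the time; only the None -> '-V' part is taken from the message itself, and the '<F77>' placeholder is an assumption.]

```python
# before: the None entry later blows up in CCompiler_get_version(),
# which does ' '.join(version_cmd)
executables_before = {
    'version_cmd': ['<F77>', None],
}

# after: ask ifort for its version with an explicit flag
executables_after = {
    'version_cmd': ['<F77>', '-V'],
}
```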
Scipy now builds & installs w/o error: python setup.py config --compiler=intel --fcompiler=intel build >& build_scipy-svn.log python setup.py config --compiler=intel --fcompiler=intel install >& inst_scipy-svn.log However, examination of the scipy build log shows that it's apparently using gcc instead of icc (which was used in the numpy build) and since gcc & icc are not binary compatible, this is going to cause errors, right? So now the question is why is scipy not recognizing the numpy setting of cc_exe = 'icc ...' in /usr/local/src/numpy/numpy/distutils/intelccompiler.py ? Numpy picks it up, but scipy apparently does not (unless I'm reading the scipy log messages wrong, e.g., building 'scipy.fftpack.convolve' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mtune=i686 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC BTW, scipy.test() mostly works, but generates a few errors. Thanks much for the help. -rex From matthieu.brucher at gmail.com Tue Jun 12 14:52:35 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 12 Jun 2007 20:52:35 +0200 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070612175332.GB4818@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> <466E7913.1040300@cens.ioc.ee> <20070612175332.GB4818@x2.nosyntax.com> Message-ID: > > However, examination of the scipy build log shows that it's apparently > using gcc instead of icc (which was used in the numpy build) and since > gcc & icc are not binary compatible, this is going to cause errors, > right? > As far as I know, they are C++ compatible, so they should be C compatible as well. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From rex at nosyntax.com Tue Jun 12 15:02:07 2007 From: rex at nosyntax.com (rex) Date: Tue, 12 Jun 2007 12:02:07 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <466EC6EC.6080002@gmail.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <466DC0DD.4040604@gmail.com> <466E76B3.1060906@cens.ioc.ee> <466EC6EC.6080002@gmail.com> Message-ID: <20070612190207.GE4818@x2.nosyntax.com> Robert Kern [2007-06-12 11:24]: > Pearu Peterson wrote: > > > Are you using numpy 1.0.3 tar-ball? There are at least two error reports > > of the given kind (building scipy fails with `dfftpack/*.f` not existing > > error) that was caused by this numpy.distutils bug. > > Ah, I am using the first 1.0.3 tarball, not the one with r3845 in it. Yes, I > think we should release a 1.0.3.1 that fixes that issue and the problem that > Travis had that caused him to do that in the first place. We should branch it > from the tag rather than the trunk because of David Cooke's merge. In the current numpy svn there are at least two distutil problems: In system_info.py, the line: lapack_libs = self.get_libs('lapack_libs',['mkl_lapack32','mkl_lapack64']) will not work with MKL 9.1 because the '32' and '64' have been removed. The line below works for me (but of course it won't work for the old MKL 8.1, so there probably should be a version check). lapack_libs = self.get_libs('lapack_libs',['mkl_lapack']) In fcompiler/intel.py, the line(s) 'version_cmd' : ['', None], causes the scipy build to fail when ifort 10.0 (and, ifort 9.1, I think) is used. 
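[Editorial aside on the first of those two problems, before the version_cmd fix that follows below: the MKL 8.1 -> 9.1 library rename is the sort of thing that wants a check against what is actually installed rather than a hard-coded name. A rough standalone sketch of that idea is shown here; the helper and the .so naming are invented for illustration and are not part of numpy.distutils.]

```python
import os

def mkl_lapack_libs(mkl_lib_dir):
    """Pick the MKL LAPACK library names present on disk: the single
    'mkl_lapack' of MKL 9.1, or the split 32/64-bit names of MKL 8.1.
    Illustration only -- numpy.distutils has no such helper."""
    for names in (['mkl_lapack'], ['mkl_lapack32', 'mkl_lapack64']):
        if all(os.path.exists(os.path.join(mkl_lib_dir, 'lib%s.so' % n))
               for n in names):
            return names
    return ['mkl_lapack']   # nothing found: default to the newer naming

# the directory below is the one reported in the build log, used purely
# as an example
print mkl_lapack_libs('/opt/intel/mkl/9.1/lib/32')
```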
Following George Nurser's lead, I changed all instances to: 'version_cmd' : ['', '-V'], and now scipy svn builds. ('-V' may not be correct for the visual compilers) The problem of scipy ignoring cc_exe = 'icc ... in intelccompiler.py and using gcc remains. -rex From rex at nosyntax.com Tue Jun 12 16:55:31 2007 From: rex at nosyntax.com (rex) Date: Tue, 12 Jun 2007 13:55:31 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> <466E7913.1040300@cens.ioc.ee> <20070612175332.GB4818@x2.nosyntax.com> Message-ID: <20070612205530.GG4818@x2.nosyntax.com> Matthieu Brucher [2007-06-12 12:02]: > However, examination of the scipy build log shows that it's apparently > using gcc instead of icc (which was used in the numpy build) and since > gcc & icc are not binary compatible, this is going to cause errors, > right? > > > As far as I know, they are C++ compatible, so they should be C compatible as > well. Ah, I was misremembering this: "Note that code compiled by the Intel Fortran Compiler (IFC) [ifort] is not binary compatible with code compiled by g77. Therefore, when using IFC, all Fortran codes used in SciPy must be compiled with IFC. This also includes the LAPACK, BLAS, and ATLAS libraries. Using GCC for compiling C code is OK." Still, I'd like to use icc in both numpy and scipy -- mixing them just doesn't seem like a good idea, and gcc will be slower, especially in the Core 2 Duo system I'm building on. Looking at the scipy build log, it appears that the problem may be in distutils/unixccompiler.py The first g++/gcc stuff appears here: customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler Found executable /usr/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found customize GnuFCompiler using build_ext building 'scipy.cluster._vq' extension compiling C++ sources C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mtune=i686 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC >From my limited (and quite possibly wrong) understanding it appears that unixccompiler.py/ccompiler.py is not finding icc & mkl. None of the strings in ccompiler.py for intel match what icc -V returns: Intel(R) C Compiler for applications running on IA-32, Version 10.0 Build 20070426 Package ID: l_cc_p_10.0.023 Thanks, -rex Intel(R) C Compiler for applications running on IA-32 From matthieu.brucher at gmail.com Tue Jun 12 17:09:23 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 12 Jun 2007 23:09:23 +0200 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070612205530.GG4818@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> <466E7913.1040300@cens.ioc.ee> <20070612175332.GB4818@x2.nosyntax.com> <20070612205530.GG4818@x2.nosyntax.com> Message-ID: Ah, I was misremembering this: "Note that code compiled by the Intel > Fortran Compiler (IFC) [ifort] is not binary compatible with code compiled > by > g77. Therefore, when using IFC, all Fortran codes used in SciPy must be > compiled with IFC. This also includes the LAPACK, BLAS, and ATLAS > libraries. Using GCC for compiling C code is OK." Oups sorry... 
Perhaps the next GCC Fortran compiler will solve this... Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From rex at nosyntax.com Tue Jun 12 18:22:10 2007 From: rex at nosyntax.com (rex) Date: Tue, 12 Jun 2007 15:22:10 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> <466E7913.1040300@cens.ioc.ee> <20070612175332.GB4818@x2.nosyntax.com> <20070612205530.GG4818@x2.nosyntax.com> Message-ID: <20070612222210.GI4818@x2.nosyntax.com> Matthieu Brucher [2007-06-12 14:26]: > > > Ah, I was misremembering this: "Note that code compiled by the Intel > Fortran Compiler (IFC) [ifort] is not binary compatible with code compiled > by > g77. Therefore, when using IFC, all Fortran codes used in SciPy must be > compiled with IFC. This also includes the LAPACK, BLAS, and ATLAS > libraries. Using GCC for compiling C code is OK." > > Oups sorry... Perhaps the next GCC Fortran compiler will solve this... No need to be sorry; I misremembered, not you. :( And, the quote above is from the SciPy wiki and the recently released ifort 10.0 may (or may not) be binary compatible with g77. Thanks, -rex From rex at nosyntax.com Tue Jun 12 18:41:48 2007 From: rex at nosyntax.com (rex) Date: Tue, 12 Jun 2007 15:41:48 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070612205530.GG4818@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> <466E7913.1040300@cens.ioc.ee> <20070612175332.GB4818@x2.nosyntax.com> <20070612205530.GG4818@x2.nosyntax.com> Message-ID: <20070612224148.GJ4818@x2.nosyntax.com> rex [2007-06-12 13:59]: > Still, I'd like to use icc in both numpy and scipy -- mixing them just > doesn't seem like a good idea, and gcc will be slower, especially in the > Core 2 Duo system I'm building on. > > Looking at the scipy build log, it appears that the problem may be in distutils/unixccompiler.py > > From my limited (and quite possibly wrong) understanding it appears that > unixccompiler.py/ccompiler.py is not finding icc & mkl. None of the > strings in ccompiler.py for intel match what icc -V returns: > > Intel(R) C Compiler for applications running on IA-32, Version 10.0 Build 20070426 Package ID: l_cc_p_10.0.023 Continuing the quest, I changed ccompiler.py from: compiler_class['intel'] = ('intelccompiler','IntelCCompiler', "Intel C Compiler for 32-bit applications") to compiler_class['intel'] = ('intelccompiler','IntelCCompiler', "Intel(R) C Compiler for applications running on IA-32") and rebuilt numpy and then tried to build scipy. This time it found icc all the way though, but it lost the ifort it found earlier in the same build. :( icc -g -fomit-frame-pointer -xT -fast -shared build/temp.linux-i686-2.5/build/src.linux-i686-2.5/Lib/integrate/vodemodule.o +build/temp.linux-i686-2.5/build/src.linux-i686-2.5/fortranobject.o -L/opt/intel/mkl/9.1/lib/32 -L/usr/lib/python2.5/config +-Lbuild/temp.linux-i686-2.5 -lodepack -llinpack_lite -lmach -lmkl -lvml -lguide -lpthread -lpython2.5 -o +build/lib.linux-i686-2.5/scipy/integrate/vode.so ipo: remark #11000: performing multi-file optimizations ipo: remark #11005: generating object file /tmp/ipo_icc23RIxK.o build/src.linux-i686-2.5/fortranobject.c(534): (col. 9) remark: LOOP WAS VECTORIZED. 
build/src.linux-i686-2.5/fortranobject.c(751): (col. 5) remark: LOOP WAS VECTORIZED. build/src.linux-i686-2.5/fortranobject.c(782): (col. 5) remark: LOOP WAS VECTORIZED. building 'scipy.interpolate._fitpack' extension warning: build_ext: extension 'scipy.interpolate._fitpack' has Fortran libraries but no Fortran linker found, using default linker compiling C sources C compiler: icc -g -fomit-frame-pointer -xT -fast compile options: '-I/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c' icc: Lib/interpolate/_fitpackmodule.c icc -g -fomit-frame-pointer -xT -fast -shared build/temp.linux-i686-2.5/Lib/interpolate/_fitpackmodule.o -L/usr/lib/python2.5/config +-Lbuild/temp.linux-i686-2.5 -lfitpack -lpython2.5 -o build/lib.linux-i686-2.5/scipy/interpolate/_fitpack.so ipo: remark #11001: performing single-file optimizations ipo: remark #11005: generating object file /tmp/ipo_iccNl0Bbp.o Lib/interpolate/__fitpack.h(1053): (col. 13) remark: LOOP WAS VECTORIZED. building 'scipy.interpolate.dfitpack' extension error: extension 'scipy.interpolate.dfitpack' has Fortran sources but no Fortran compiler found It found the fortran compiler earlier in the same build: building 'statlib' library compiling Fortran sources Fortran f77 compiler: /opt/intel/fc/10.0.023/bin/ifort -72 -w90 -w95 -KPIC -cm -O3 -unroll -arch SSE2 Fortran f90 compiler: /opt/intel/fc/10.0.023/bin/ifort -FR -KPIC -cm -O3 -unroll -arch SSE2 Fortran fix compiler: /opt/intel/fc/10.0.023/bin/ifort -FI -KPIC -cm -O3 -unroll -arch SSE2 The first sign of a problem was later when this appeared: building 'scipy.fftpack._fftpack' extension warning: build_ext: extension 'scipy.fftpack._fftpack' has Fortran libraries but no Fortran linker found, using default linker Why did fixing the failure of ccompiler.py to find icc break finding the fortran complier late (after ~5000 lines) in the scipy build? -rex From rex at nosyntax.com Tue Jun 12 19:50:09 2007 From: rex at nosyntax.com (rex) Date: Tue, 12 Jun 2007 16:50:09 -0700 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <20070612224148.GJ4818@x2.nosyntax.com> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> <466E7913.1040300@cens.ioc.ee> <20070612175332.GB4818@x2.nosyntax.com> <20070612205530.GG4818@x2.nosyntax.com> <20070612224148.GJ4818@x2.nosyntax.com> Message-ID: <20070612235009.GK4818@x2.nosyntax.com> rex [2007-06-12 15:50]: > rex [2007-06-12 13:59]: > > Looking at the scipy build log, it appears that the problem may be in distutils/unixccompiler.py > > > > From my limited (and quite possibly wrong) understanding it appears that > > unixccompiler.py/ccompiler.py is not finding icc & mkl. None of the > > strings in ccompiler.py for intel match what icc -V returns: > > > > Intel(R) C Compiler for applications running on IA-32, Version 10.0 Build 20070426 Package ID: l_cc_p_10.0.023 > > Continuing the quest, I changed ccompiler.py from: > > compiler_class['intel'] = ('intelccompiler','IntelCCompiler', > "Intel C Compiler for 32-bit applications") > > to > > compiler_class['intel'] = ('intelccompiler','IntelCCompiler', > "Intel(R) C Compiler for applications running on IA-32") > > and rebuilt numpy and then tried to build scipy. This time it found icc > all the way though, but it lost the ifort it found earlier in the same > build. 
:( > > Why did fixing the failure of ccompiler.py to find icc break finding the > fortran complier late (after ~5000 lines) in the scipy build? The new problem is in fcompiler/intel.py and is the same id string problem that ccompiler.py had. ifort -V returns: Intel(R) Fortran Compiler for applications running on IA-32, Version 10.0 Build 20070426 Package ID: l_fc_p_10.0.023 I changed version_match = intel_version_match('32-bit') to version_match = intel_version_match('IA-32') in fcompiler/intel.py, rebuilt/installed numpy and scipy. This time the scipy build log shows that icc and ifort were used all the way through. No gcc, no g77. That's the good news. The bad news is that there are even more scipy.test() errors. Here are the results of testing with the build that used ifort and both icc & gcc. Below that is the test using ifort & icc only. Python 2.5 (r25:51908, Nov 27 2006, 19:14:46) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> import scipy >>> scipy.test() Found 7 tests for scipy.cluster.vq Found 4 tests for scipy.io.array_import Found 28 tests for scipy.io.mio Found 12 tests for scipy.io.mmio Found 5 tests for scipy.io.npfile Found 4 tests for scipy.io.recaster Found 16 tests for scipy.lib.blas Found 128 tests for scipy.lib.blas.fblas **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** Found 42 tests for scipy.lib.lapack Found 41 tests for scipy.linalg.basic Found 14 tests for scipy.linalg.blas Found 56 tests for scipy.linalg.decomp Found 128 tests for scipy.linalg.fblas Found 6 tests for scipy.linalg.iterative Found 4 tests for scipy.linalg.lapack Found 7 tests for scipy.linalg.matfuncs Warning: FAILURE importing tests for /usr/local/lib/python2.5/site-packages/scipy/linsolve/_superlu.py:1: ImportError: /usr/local/lib/python2.5/site-packages/scipy/linsolve/_zsuperlu.so: undefined symbol: Destroy_CompCol_Permuted (in ) Warning: FAILURE importing tests for /usr/local/lib/python2.5/site-packages/scipy/linsolve/_superlu.py:1: ImportError: /usr/local/lib/python2.5/site-packages/scipy/linsolve/_zsuperlu.so: undefined symbol: Destroy_CompCol_Permuted (in ) Found 399 tests for scipy.ndimage Warning: FAILURE importing tests for /usr/local/lib/python2.5/site-packages/scipy/optimize/minpack.py:1: ImportError: /usr/local/lib/python2.5/site-packages/scipy/optimize/_minpack.so: undefined symbol: __libm_sse2_log10 (in ) Warning: FAILURE importing tests for /usr/local/lib/python2.5/site-packages/scipy/linsolve/_superlu.py:1: ImportError: /usr/local/lib/python2.5/site-pack ages/scipy/linsolve/_zsuperlu.so: undefined symbol: Destroy_CompCol_Permuted (in ) Warning: FAILURE importing tests for /usr/local/lib/python2.5/site-packages/scipy/linsolve/_superlu.py:1: ImportError: /usr/local/lib/python2.5/site-packages/scipy/linsolve/_zsuperlu.so: undefined symbol: Destroy_CompCol_Permuted (in ) Found 0 tests for __main__ ....... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. 
...E....................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ............................FF....................................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** .........................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..||A.x - b|| = 0.463278811259 ||A.x - b|| = 0.0904112510459 ||A.x - b|| = 0.00904717783342 ||A.x - b|| = 0.00142227284819 ||A.x - b|| = 0.000200415793499 .||A.x - b|| = 0.0701989794825 ||A.x - b|| = 0.00132382823161 .||A.x - b|| = 0.552990362175 ||A.x - b|| = 0.0853864153742 ||A.x - b|| = 0.00956204753397 ||A.x - b|| = 0.0015868128577 ||A.x - b|| = 0.000330271905952 .||A.x - b|| = 0.191333515726 ||A.x - b|| = 0.00850533932205 ||A.x - b|| = 0.000281144795047 ..||A.x - b|| = 0.463915921284 ||A.x - b|| = 0.049085267646 ||A.x - b|| = 0.00809201153244 ||A.x - b|| = 0.00101049839284 ||A.x - b|| = 5.2625953376e-05 ... **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ...Result may be inaccurate, approximate err = 1.3204526861e-08 ............................................................................................................./usr/local/lib/python2.5/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' ........................................................................................................................................................................................................................................................................................................ 
====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/usr/local/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/usr/local/lib/python2.5/site-packages/scipy/stats/stats.py", line 191, in import scipy.special as special File "/usr/local/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in from basic import * File "/usr/local/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in from _cephes import * ImportError: /usr/local/lib/python2.5/site-packages/scipy/special/_cephes.so: undefined symbol: __libm_sse2_exp ====================================================================== FAIL: check_syevr (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769468, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769468, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ---------------------------------------------------------------------- Ran 901 tests in 1.148s FAILED (failures=2, errors=1) This test is from the build using ifort & icc only: Python 2.5 (r25:51908, Nov 27 2006, 19:14:46) [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import numpy >>> import scipy >>> scipy.test() == Error while importing _vq, not testing C imp of vq == Found 7 tests for scipy.cluster.vq Found 4 tests for scipy.io.array_import Warning: FAILURE importing tests for /usr/local/lib/python2.5/site-packages/scipy/sparse/sparsetools.py:7: ImportError: /usr/local/lib/python2.5/site-packages/scipy/sparse/_sparsetools.so: undefined symbol: _Znwj (in ) Found 12 tests for scipy.io.mmio Found 5 tests for scipy.io.npfile Found 4 tests for scipy.io.recaster Found 16 tests for scipy.lib.blas Found 128 tests for scipy.lib.blas.fblas **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** Found 42 tests for scipy.lib.lapack Found 41 tests for scipy.linalg.basic Found 14 tests for scipy.linalg.blas Found 56 tests for scipy.linalg.decomp Found 128 tests for scipy.linalg.fblas Found 6 tests for scipy.linalg.iterative Found 4 tests for scipy.linalg.lapack Found 7 tests for scipy.linalg.matfuncs Found 399 tests for scipy.ndimage Found 5 tests for scipy.odr Warning: FAILURE importing tests for /usr/local/lib/python2.5/site-packages/scipy/optimize/zeros.py:3: ImportError: /usr/local/lib/python2.5/site-packages/scipy/optimize/_zeros.so: undefined symbol: bisect (in ) Found 0 tests for __main__ ......== not testing C imp of vq == . Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ...E...........E............................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ............................FF....................................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. 
**************************************************************** .........................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..||A.x - b|| = 0.437138305526 ||A.x - b|| = 0.0940049148864 ||A.x - b|| = 0.0127517922701 ||A.x - b|| = 0.00137250351767 ||A.x - b|| = 8.29603014309e-05 .||A.x - b|| = 0.241701590498 ||A.x - b|| = 0.00268371042198 ||A.x - b|| = 2.2608852184e-05 .||A.x - b|| = 0.607549390567 ||A.x - b|| = 0.0919587464263 ||A.x - b|| = 0.00695201688476 ||A.x - b|| = 0.00078087385892 ||A.x - b|| = 8.82341930263e-05 .||A.x - b|| = 0.241040719387 ||A.x - b|| = 0.00981032340491 ||A.x - b|| = 0.000285396218674 ..||A.x - b|| = 0.438468906597 ||A.x - b|| = 0.048365422497 ||A.x - b|| = 0.00538985122199 ||A.x - b|| = 0.000653170894647 ||A.x - b|| = 9.11645602545e-05 ... **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ...Result may be inaccurate, approximate err = 1.3204526861e-08 ............................................................................................................./usr/local/lib/python2.5/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. 
warnings.warn('Mode "reflect" may yield incorrect results on ' ..........................................................................................................................................................................................................................................................................................................FFF ====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/usr/local/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/usr/local/lib/python2.5/site-packages/scipy/stats/stats.py", line 191, in import scipy.special as special File "/usr/local/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in from basic import * File "/usr/local/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in from _cephes import * ImportError: /usr/local/lib/python2.5/site-packages/scipy/special/_cephes.so: undefined symbol: NAN ====================================================================== ERROR: check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/io/tests/test_mmio.py", line 151, in check_simple_todense b = mmread(fn).todense() AttributeError: 'numpy.ndarray' object has no attribute 'todense' ====================================================================== FAIL: check_syevr (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769468, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769468, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: test_lorentz (scipy.tests.test_odr.test_odr) 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 295, in test_lorentz 3.7798193600109009e+00]), File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.00000000e+03, 1.00000000e-01, 3.80000000e+00]) y: array([ 1.43067808e+03, 1.33905090e-01, 3.77981936e+00]) ====================================================================== FAIL: test_multi (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 191, in test_multi 0.5101147161764654, 0.5173902330489161]), File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4. , 2. , 7. , 0.4, 0.5]) y: array([ 4.37998803, 2.43330576, 8.00288459, 0.51011472, 0.51739023]) ====================================================================== FAIL: test_pearson (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 238, in test_pearson np.array([ 5.4767400299231674, -0.4796082367610305]), File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1., 1.]) y: array([ 5.47674003, -0.47960824]) ---------------------------------------------------------------------- Ran 878 tests in 1.500s FAILED (failures=5, errors=2) I think I've gone as far as I can. I'll leave it the the respective (and respected) experts to clean up my quick hack-fixes to 4 distutils bugs properly, and to sort out the errors in scipy.test(). Of course I'll be happy to provide more detailed scipy.test() results. Now that I have a (sort-of) working scipy using icc 10.0, ifort 10.0 and mkl 9.1, I'd like to benchmark it to see how much speed payoff all this work has. Any suggestions? Results from a Core 2 Duo using a gcc build would save me building blas, lapack, numpy & scipy with gcc to do a comparison. Thanks, -rex From cookedm at physics.mcmaster.ca Wed Jun 13 02:02:04 2007 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Wed, 13 Jun 2007 02:02:04 -0400 Subject: [SciPy-dev] Compiling scipy with Intel ifort & MKL In-Reply-To: <466E7913.1040300@cens.ioc.ee> References: <20070610203508.GB5999@x2.nosyntax.com> <466D326F.2090806@cens.ioc.ee> <20070611231140.GI4914@x2.nosyntax.com> <20070612033106.GM4914@x2.nosyntax.com> <466E7913.1040300@cens.ioc.ee> Message-ID: <02FA28B2-AB7D-4CF5-B71B-4C4A23D8DC22@physics.mcmaster.ca> On Jun 12, 2007, at 06:44 , Pearu Peterson wrote: > > The Fortran compiler version checking code (among others related > codes) > in numpy.distutils has changed after merging David Cooke branch with > numpy trunk. It appears that the new code is not well tested. > Unfortunately I don't have intel compiler in my system to track down > the problems you experince. I may get a chance to look at it in more > detail may be on Friday. Ugh, not well-tested is right: it's kind of borked. I'm looking into it now. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From faltet at carabos.com Wed Jun 13 05:15:11 2007 From: faltet at carabos.com (Francesc Altet) Date: Wed, 13 Jun 2007 11:15:11 +0200 Subject: [SciPy-dev] Reliable way to know memory consumption of functions/scripts/etc.. In-Reply-To: <466E3CE2.9030007@ar.media.kyoto-u.ac.jp> References: <466E3CE2.9030007@ar.media.kyoto-u.ac.jp> Message-ID: <1181726112.2580.11.camel@carabos.com> El dt 12 de 06 del 2007 a les 15:27 +0900, en/na David Cournapeau va escriure: > Hi, > > I was wondering whether there was a simple and reliable way to know how > much memory a python script takes between some code boundaries. I don't > need a really precise thing, but more something like how does a given > code scale given its input: does it take the same amount, several times > the same amount, etc... Is this possible in python ? I don't think this is going to be possible in plain python (in a non-debugging version of python at least). What I normally do is 'spying' in real time the process through the 'top' command and infer the increment of memory usage doing some experiments sequentially. There should be better tools around, though. Incidentally, I've some code that gives you the amount of memory that is currently being used by the process in some point of the code, but this is different from knowing the amount of memory taken between two points. If you are interested on this, tell me (only works on linux, but it should be feasible to port it to win). Cheers, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From david at ar.media.kyoto-u.ac.jp Wed Jun 13 05:25:12 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 13 Jun 2007 18:25:12 +0900 Subject: [SciPy-dev] Reliable way to know memory consumption of functions/scripts/etc.. In-Reply-To: <1181726112.2580.11.camel@carabos.com> References: <466E3CE2.9030007@ar.media.kyoto-u.ac.jp> <1181726112.2580.11.camel@carabos.com> Message-ID: <466FB7F8.3090907@ar.media.kyoto-u.ac.jp> Francesc Altet wrote: > El dt 12 de 06 del 2007 a les 15:27 +0900, en/na David Cournapeau va > escriure: >> Hi, >> >> I was wondering whether there was a simple and reliable way to know how >> much memory a python script takes between some code boundaries. 
I don't >> need a really precise thing, but more something like how does a given >> code scale given its input: does it take the same amount, several times >> the same amount, etc... Is this possible in python ? > > I don't think this is going to be possible in plain python (in a > non-debugging version of python at least). What I normally do is > 'spying' in real time the process through the 'top' command and infer > the increment of memory usage doing some experiments sequentially. > There should be better tools around, though. In found in between the option COUNT_ALLOCS, which looks exactly like what I want, but unfortunately, it crashes when importing numpy, and this seems to be non trivial to fix (I stoped digging after half an hour). """ --------------------------------------------------------------------------- COUNT_ALLOCS introduced in 0.9.9 partly broken in 2.2 and 2.2.1 Each type object grows three new members: /* Number of times an object of this type was allocated. */ int tp_allocs; /* Number of times an object of this type was deallocated. */ int tp_frees; /* Highwater mark: the maximum value of tp_allocs - tp_frees so * far; or, IOW, the largest number of objects of this type alive at * the same time. */ int tp_maxalloc; Allocation and deallocation code keeps these counts up to date. Py_Finalize() displays a summary of the info returned by sys.getcounts() (see below), along with assorted other special allocation counts (like the number of tuple allocations satisfied by a tuple free-list, the number of 1-character strings allocated, etc). Before Python 2.2, type objects were immortal, and the COUNT_ALLOCS implementation relies on that. As of Python 2.2, heap-allocated type/ class objects can go away. COUNT_ALLOCS can blow up in 2.2 and 2.2.1 because of this; this was fixed in 2.2.2. Use of COUNT_ALLOCS makes all heap-allocated type objects immortal, except for those for which no object of that type is ever allocated. Starting with Python 2.3, If Py_TRACE_REFS is also defined, COUNT_ALLOCS arranges to ensure that the type object for each allocated object appears in the doubly-linked list of all objects maintained by Py_TRACE_REFS. Special gimmicks: sys.getcounts() Return a list of 4-tuples, one entry for each type object for which at least one object of that type was allocated. Each tuple is of the form: (tp_name, tp_allocs, tp_frees, tp_maxalloc) Each distinct type object gets a distinct entry in this list, even if two or more type objects have the same tp_name (in which case there's no way to distinguish them by looking at this list). The list is ordered by time of first object allocation: the type object for which the first allocation of an object of that type occurred most recently is at the front of the list. -------------------------------------------------------------------------- """ If someone else more knwoledgeable than me is willing to help, I think it would be a great addition for numpy. David > > Incidentally, I've some code that gives you the amount of memory that is > currently being used by the process in some point of the code, but this > is different from knowing the amount of memory taken between two points. > If you are interested on this, tell me (only works on linux, but it > should be feasible to port it to win). 
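[Editorial aside: the Linux-only code being offered here is typically a small wrapper that parses the Vm* fields of /proc/self/status. Since the actual procstats.py attachment was scrubbed from the archive further down, the sketch below is only a guess at what such a helper looks like; the function name and selected fields are assumptions, matched to the example output quoted later in the thread.]

```python
import time

def show(label, tref=None):
    """Print the Vm* lines from /proc/self/status (Linux only) and the
    wall-clock time elapsed since `tref`; return the current time so calls
    can be chained.  A guess at a procstats-style helper, not the scrubbed
    procstats.py attachment itself."""
    wanted = ('VmSize', 'VmRSS', 'VmData', 'VmStk', 'VmExe', 'VmLib')
    print "Memory usage: ******* %s *******" % label
    for line in open('/proc/self/status'):
        if line.split(':')[0] in wanted:
            print line.rstrip()
    now = time.time()
    if tref is not None:
        print "WallClock time:  %.3f" % (now - tref)
    return now

# usage mirrors the example quoted below:
# tref = show("Starting program")
# ... allocate some arrays ...
# show("After allocating array a", tref)
```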
Well, I don't use windows, so I could use your code :) David From faltet at carabos.com Wed Jun 13 06:36:55 2007 From: faltet at carabos.com (Francesc Altet) Date: Wed, 13 Jun 2007 12:36:55 +0200 Subject: [SciPy-dev] Reliable way to know memory consumption of functions/scripts/etc.. In-Reply-To: <466FB7F8.3090907@ar.media.kyoto-u.ac.jp> References: <466E3CE2.9030007@ar.media.kyoto-u.ac.jp> <1181726112.2580.11.camel@carabos.com> <466FB7F8.3090907@ar.media.kyoto-u.ac.jp> Message-ID: <1181731015.2580.19.camel@carabos.com> El dc 13 de 06 del 2007 a les 18:25 +0900, en/na David Cournapeau va escriure: > In found in between the option COUNT_ALLOCS, which looks exactly like > what I want, but unfortunately, it crashes when importing numpy, and > this seems to be non trivial to fix (I stoped digging after half an hour). Too bad :-( > > Incidentally, I've some code that gives you the amount of memory that is > > currently being used by the process in some point of the code, but this > > is different from knowing the amount of memory taken between two points. > > If you are interested on this, tell me (only works on linux, but it > > should be feasible to port it to win). > Well, I don't use windows, so I could use your code :) Ok. I'm attaching a small module, called procstats.py, with the corresponding code. The usage is quite easy: import procstats import numpy tref = procstats.show("Starting program") a = numpy.arange(1e6) t1 = procstats.show("After allocating array a", tref) b = a*a t2 = procstats.show("After computing array b", t1) del a procstats.show("After removing array a") del b procstats.show("After removing array b") gives as output: Memory usage: ******* Starting program ******* VmSize: 18624 kB VmRSS: 5604 kB VmData: 3868 kB VmStk: 124 kB VmExe: 860 kB VmLib: 11760 kB WallClock time: 0.013 Memory usage: ******* After allocating array a ******* VmSize: 26444 kB VmRSS: 13432 kB VmData: 11688 kB VmStk: 124 kB VmExe: 860 kB VmLib: 11760 kB WallClock time: 0.098 Memory usage: ******* After computing array b ******* VmSize: 34260 kB VmRSS: 21248 kB VmData: 19504 kB VmStk: 124 kB VmExe: 860 kB VmLib: 11760 kB WallClock time: 0.122 Memory usage: ******* After removing array a ******* VmSize: 26444 kB VmRSS: 13432 kB VmData: 11688 kB VmStk: 124 kB VmExe: 860 kB VmLib: 11760 kB WallClock time: 0.018 Memory usage: ******* After removing array b ******* VmSize: 18628 kB VmRSS: 5616 kB VmData: 3872 kB VmStk: 124 kB VmExe: 860 kB VmLib: 11760 kB WallClock time: 0.049 HTH, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth -------------- next part -------------- A non-text attachment was scrubbed... Name: procstats.py Type: text/x-python Size: 1310 bytes Desc: not available URL: From matthieu.brucher at gmail.com Wed Jun 13 08:29:45 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 13 Jun 2007 14:29:45 +0200 Subject: [SciPy-dev] Status of sandbox.spline and my other code. In-Reply-To: <3a1077e70706100955p4a705acy3b26d9cf9cf0298b@mail.gmail.com> References: <3a1077e70706100955p4a705acy3b26d9cf9cf0298b@mail.gmail.com> Message-ID: Hi, 2. Radial basis function module in sandbox (rbf) > This code has recently had attention from Robert Hetland and is quite > improved. I'm not sure if I will ever go into scipy though?? If not, I > will put it into a scikit at some point. 
> Related to this, the wiki page has been updated, however, it is at > http://www.scipy.org/RadialBasisFunctions but should probably be in > the Cookbook. I don't know how to move it over, so some pointers would > be helpful. In addition there are four unused attachments I uploaded > that could be deleted. > What are the kernels that can be used with this RBF ? (This is related to the current work on SVM and/or KPCA for machine learning) Is it possible to give the samples in a single array ? Is there an optimization in the computation of the value of the RBF (i.e. if the point is "far" from the RBF, the kernel value is low or zero, so computing the value is not useful) ? I'd like to use this for probabilities (RBF fields), if possible ;) Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Jun 13 09:18:20 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 13 Jun 2007 22:18:20 +0900 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? Message-ID: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> Hi, Subject says it all. Is is ok to depends on ctypes for scipy code, or should the implementation be optional with a python fallback ? cheers, David From openopt at ukr.net Wed Jun 13 10:12:49 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 13 Jun 2007 17:12:49 +0300 Subject: [SciPy-dev] can't import matrixmultiply (NumPy_for_Matlab_Users page) Message-ID: <466FFB61.6050702@ukr.net> >>> from numpy import dot >>> from numpy import matrixmultiply Traceback (innermost last): File "", line 1, in ImportError: cannot import name matrixmultiply From matthieu.brucher at gmail.com Wed Jun 13 10:17:34 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 13 Jun 2007 16:17:34 +0200 Subject: [SciPy-dev] can't import matrixmultiply (NumPy_for_Matlab_Users page) In-Reply-To: <466FFB61.6050702@ukr.net> References: <466FFB61.6050702@ukr.net> Message-ID: matrixmultiply is no more available, it is replaced by dot. Matthieu 2007/6/13, dmitrey : > > >>> from numpy import dot > >>> from numpy import matrixmultiply > Traceback (innermost last): > File "", line 1, in > ImportError: cannot import name matrixmultiply > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Wed Jun 13 12:13:41 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 13 Jun 2007 19:13:41 +0300 Subject: [SciPy-dev] Problems with scikits svn server (I can't commit there) Message-ID: <467017B5.3030705@ukr.net> $ svn ci svn: Commit failed (details follow): svn: OPTIONS request failed on '/svn/scikits/trunk/openopt/scikits/openopt' svn: OPTIONS of '/svn/scikits/trunk/openopt/scikits/openopt': could not connect to server (http://svn.scipy.org) From robert.kern at gmail.com Wed Jun 13 12:48:47 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 11:48:47 -0500 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> Message-ID: <46701FEF.5020608@gmail.com> David Cournapeau wrote: > Hi, > > Subject says it all. Is is ok to depends on ctypes for scipy code, > or should the implementation be optional with a python fallback ? I'd prefer not to depend on ctypes. 
More specifically, I'd prefer not to worry about building or finding non-extension shared libraries. What were you thinking about using ctypes for? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Jun 13 12:54:47 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 11:54:47 -0500 Subject: [SciPy-dev] Problems with scikits svn server (I can't commit there) In-Reply-To: <467017B5.3030705@ukr.net> References: <467017B5.3030705@ukr.net> Message-ID: <46702157.1060004@gmail.com> dmitrey wrote: > $ svn ci > svn: Commit failed (details follow): > svn: OPTIONS request failed on '/svn/scikits/trunk/openopt/scikits/openopt' > svn: OPTIONS of '/svn/scikits/trunk/openopt/scikits/openopt': could not > connect to server (http://svn.scipy.org) Are you still having the problem? I just made a minor commit, and it worked. Of course, I'm 20m or so from the server. Are you behind a proxy? Sometimes that causes problems. http://subversion.tigris.org/faq.html#proxy -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From openopt at ukr.net Wed Jun 13 13:11:12 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 13 Jun 2007 20:11:12 +0300 Subject: [SciPy-dev] Problems with scikits svn server (I can't commit there) In-Reply-To: <46702157.1060004@gmail.com> References: <467017B5.3030705@ukr.net> <46702157.1060004@gmail.com> Message-ID: <46702530.9050805@ukr.net> Robert Kern wrote: > dmitrey wrote: > >> $ svn ci >> svn: Commit failed (details follow): >> svn: OPTIONS request failed on '/svn/scikits/trunk/openopt/scikits/openopt' >> svn: OPTIONS of '/svn/scikits/trunk/openopt/scikits/openopt': could not >> connect to server (http://svn.scipy.org) >> > > Are you still having the problem? I just made a minor commit, and it worked. Of > course, I'm 20m or so from the server. > > Are you behind a proxy? Sometimes that causes problems. > > http://subversion.tigris.org/faq.html#proxy > > I had already some successful commit to scikits svn, but today something wrong, I got again: $ svn ci Sending scikits/openopt/Kernel/BaseProblem.py svn: Commit failed (details follow): svn: CHECKOUT of '/svn/scikits/!svn/ver/158/trunk/openopt/scikits/openopt/Kernel/BaseProblem.py': could not connect to server (http://svn.scipy.org) From openopt at ukr.net Wed Jun 13 13:42:37 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 13 Jun 2007 20:42:37 +0300 Subject: [SciPy-dev] Problems with scikits svn server (I can't commit there) In-Reply-To: <46702157.1060004@gmail.com> References: <467017B5.3030705@ukr.net> <46702157.1060004@gmail.com> Message-ID: <46702C8D.8050100@ukr.net> now svn works ok. Robert Kern wrote: > dmitrey wrote: > >> $ svn ci >> svn: Commit failed (details follow): >> svn: OPTIONS request failed on '/svn/scikits/trunk/openopt/scikits/openopt' >> svn: OPTIONS of '/svn/scikits/trunk/openopt/scikits/openopt': could not >> connect to server (http://svn.scipy.org) >> > > Are you still having the problem? I just made a minor commit, and it worked. Of > course, I'm 20m or so from the server. > > Are you behind a proxy? Sometimes that causes problems. 
> > http://subversion.tigris.org/faq.html#proxy > > From ellisonbg.net at gmail.com Wed Jun 13 14:27:26 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Wed, 13 Jun 2007 12:27:26 -0600 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <46701FEF.5020608@gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> Message-ID: <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> > > Subject says it all. Is is ok to depends on ctypes for scipy code, > > or should the implementation be optional with a python fallback ? > > I'd prefer not to depend on ctypes. More specifically, I'd prefer not to worry > about building or finding non-extension shared libraries. Given the facts that i) ctypes now comes with python2.5 and ii) that ctypes is proving itself to be extremely useful for wrapping C-code, it seems like a shame to not be able to utilize ctypes for scipy code. I do understand the issue with having dependencies on external shared libraries that need to be built. But, is there a reasonable way of addressing this problem that would open the door for using ctypes in scipy? Brian > What were you thinking about using ctypes for? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Wed Jun 13 14:32:38 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 13:32:38 -0500 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> Message-ID: <46703846.1010800@gmail.com> Brian Granger wrote: >>> Subject says it all. Is is ok to depends on ctypes for scipy code, >>> or should the implementation be optional with a python fallback ? >> I'd prefer not to depend on ctypes. More specifically, I'd prefer not to worry >> about building or finding non-extension shared libraries. > > Given the facts that i) ctypes now comes with python2.5 and ii) that > ctypes is proving itself to be extremely useful for wrapping C-code, > it seems like a shame to not be able to utilize ctypes for scipy code. > I do understand the issue with having dependencies on external shared > libraries that need to be built. But, is there a reasonable way of > addressing this problem that would open the door for using ctypes in > scipy? I haven't seen one, yet; otherwise, I wouldn't have made the objection. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Wed Jun 13 14:40:13 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 13 Jun 2007 12:40:13 -0600 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? 
In-Reply-To: <46703846.1010800@gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> Message-ID: <46703A0D.5010309@ee.byu.edu> Robert Kern wrote: >Brian Granger wrote: > > >>>> Subject says it all. Is is ok to depends on ctypes for scipy code, >>>>or should the implementation be optional with a python fallback ? >>>> >>>> >>>I'd prefer not to depend on ctypes. More specifically, I'd prefer not to worry >>>about building or finding non-extension shared libraries. >>> >>> >>Given the facts that i) ctypes now comes with python2.5 and ii) that >>ctypes is proving itself to be extremely useful for wrapping C-code, >>it seems like a shame to not be able to utilize ctypes for scipy code. >> I do understand the issue with having dependencies on external shared >>libraries that need to be built. But, is there a reasonable way of >>addressing this problem that would open the door for using ctypes in >>scipy? >> >> > >I haven't seen one, yet; otherwise, I wouldn't have made the objection. > > > Robert is right. The big problem is fixing distutils to build a shared library. If that is fixed, then it would not be a hard thing to rely on ctypes for SciPy. Until that is fixed, we can't do it. There are several people who have started with this solution (I'm pretty sure it is solvable), but none of these solutions have ended up in distutils or numpy.distutils. -Travis From strawman at astraw.com Wed Jun 13 15:04:17 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 13 Jun 2007 12:04:17 -0700 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <46703A0D.5010309@ee.byu.edu> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> Message-ID: <46703FB1.6000704@astraw.com> > Robert is right. The big problem is fixing distutils to build a shared > library. If that is fixed, then it would not be a hard thing to rely on > ctypes for SciPy. Until that is fixed, we can't do it. > > There are several people who have started with this solution (I'm pretty > sure it is solvable), but none of these solutions have ended up in > distutils or numpy.distutils. Can someone refresh my memory - I thought Stefan van der Walt (IIRC) was working on this a while ago. What happened with that? From stefan at sun.ac.za Wed Jun 13 17:30:45 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 13 Jun 2007 23:30:45 +0200 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <46703FB1.6000704@astraw.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> Message-ID: <20070613213045.GF9984@mentat.za.net> On Wed, Jun 13, 2007 at 12:04:17PM -0700, Andrew Straw wrote: > > There are several people who have started with this solution (I'm pretty > > sure it is solvable), but none of these solutions have ended up in > > distutils or numpy.distutils. > > Can someone refresh my memory - I thought Stefan van der Walt (IIRC) was > working on this a while ago. What happened with that? I remember asking what *exactly* the problem is with building shared libraries using distutils (but I don't recall the answer). 
I happily use distutils to build libraries for use with ctypes under Linux. Cheers St?fan From ellisonbg.net at gmail.com Wed Jun 13 17:50:44 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Wed, 13 Jun 2007 15:50:44 -0600 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <46703A0D.5010309@ee.byu.edu> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> Message-ID: <6ce0ac130706131450y35ed428r6930c6696580dc67@mail.gmail.com> One of the big uses for ctypes though is for calling shared libraries that "already exist." For instance things like BLAS, LAPACK, MPI can all be called this way. It seems like the main objection is for cases where the shared library is not yet built. My feeling is that if foo.so can be installed using one of the package managers or through ./configure/make/make install, it is not as big of an issue. Granted, such a situation could possibly create a new external dependency which might not be wanted, but the actual issue of building the .so sort of goes away. Brian On 6/13/07, Travis Oliphant wrote: > Robert Kern wrote: > > >Brian Granger wrote: > > > > > >>>> Subject says it all. Is is ok to depends on ctypes for scipy code, > >>>>or should the implementation be optional with a python fallback ? > >>>> > >>>> > >>>I'd prefer not to depend on ctypes. More specifically, I'd prefer not to worry > >>>about building or finding non-extension shared libraries. > >>> > >>> > >>Given the facts that i) ctypes now comes with python2.5 and ii) that > >>ctypes is proving itself to be extremely useful for wrapping C-code, > >>it seems like a shame to not be able to utilize ctypes for scipy code. > >> I do understand the issue with having dependencies on external shared > >>libraries that need to be built. But, is there a reasonable way of > >>addressing this problem that would open the door for using ctypes in > >>scipy? > >> > >> > > > >I haven't seen one, yet; otherwise, I wouldn't have made the objection. > > > > > > > Robert is right. The big problem is fixing distutils to build a shared > library. If that is fixed, then it would not be a hard thing to rely on > ctypes for SciPy. Until that is fixed, we can't do it. > > There are several people who have started with this solution (I'm pretty > sure it is solvable), but none of these solutions have ended up in > distutils or numpy.distutils. > > -Travis > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From ellisonbg.net at gmail.com Wed Jun 13 17:51:23 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Wed, 13 Jun 2007 15:51:23 -0600 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <20070613213045.GF9984@mentat.za.net> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> <20070613213045.GF9984@mentat.za.net> Message-ID: <6ce0ac130706131451i1ea0af8u8fa165d6ed22b62b@mail.gmail.com> Is there anything "special" that you have to do. Could you post an example? 
Thanks On 6/13/07, Stefan van der Walt wrote: > On Wed, Jun 13, 2007 at 12:04:17PM -0700, Andrew Straw wrote: > > > There are several people who have started with this solution (I'm pretty > > > sure it is solvable), but none of these solutions have ended up in > > > distutils or numpy.distutils. > > > > Can someone refresh my memory - I thought Stefan van der Walt (IIRC) was > > working on this a while ago. What happened with that? > > I remember asking what *exactly* the problem is with building shared > libraries using distutils (but I don't recall the answer). I happily > use distutils to build libraries for use with ctypes under Linux. > > Cheers > St?fan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Wed Jun 13 17:57:15 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 16:57:15 -0500 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <20070613213045.GF9984@mentat.za.net> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> <20070613213045.GF9984@mentat.za.net> Message-ID: <4670683B.3060302@gmail.com> Stefan van der Walt wrote: > On Wed, Jun 13, 2007 at 12:04:17PM -0700, Andrew Straw wrote: >>> There are several people who have started with this solution (I'm pretty >>> sure it is solvable), but none of these solutions have ended up in >>> distutils or numpy.distutils. >> Can someone refresh my memory - I thought Stefan van der Walt (IIRC) was >> working on this a while ago. What happened with that? > > I remember asking what *exactly* the problem is with building shared > libraries using distutils (but I don't recall the answer). I happily > use distutils to build libraries for use with ctypes under Linux. The linking process on Windows requires an extension module to actually have the initfoo function. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Jun 13 18:05:20 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 17:05:20 -0500 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <6ce0ac130706131450y35ed428r6930c6696580dc67@mail.gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <6ce0ac130706131450y35ed428r6930c6696580dc67@mail.gmail.com> Message-ID: <46706A20.3030404@gmail.com> Brian Granger wrote: > One of the big uses for ctypes though is for calling shared libraries > that "already exist." For instance things like BLAS, LAPACK, MPI can > all be called this way. It seems like the main objection is for cases > where the shared library is not yet built. My feeling is that if > foo.so can be installed using one of the package managers or through > ./configure/make/make install, it is not as big of an issue. Granted, > such a situation could possibly create a new external dependency which > might not be wanted, but the actual issue of building the .so sort of > goes away. 
Right, and that works great for installations where you can actually control all of that. For generally distributable Python packages, this is rarely the case. We can't even rely on the BLAS and LAPACK libraries having predictable names (not to mention the FORTRAN call and symbol name conventions). I stand by my "not in scipy." -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Wed Jun 13 22:28:08 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 14 Jun 2007 11:28:08 +0900 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <46703846.1010800@gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> Message-ID: <4670A7B8.9060407@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > I haven't seen one, yet; otherwise, I wouldn't have made the objection. > Ok, I am confused. I asked the question because I thought the ctypes dependency itself may be problematic (who uses python 2.4 or 2.3 ? Is there a list of versions we have to support ?). Is the problem locating an external library ? Because otherwise, I do not see the different between ctypes or any other ways to wrap c code (swig, C api, etc... which is used a lot already in scipy). David From david at ar.media.kyoto-u.ac.jp Wed Jun 13 22:43:23 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 14 Jun 2007 11:43:23 +0900 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <4670683B.3060302@gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> <20070613213045.GF9984@mentat.za.net> <4670683B.3060302@gmail.com> Message-ID: <4670AB4B.1000007@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > The linking process on Windows requires an extension module to actually have the > initfoo function. Don't tell me this is again windows doing things totally differently than everybody else for the sake of being incompatible :) On windows, the major problem I have with ctypes for pyaudiolab is detecting the library, because numpy.distutils does not find the dll if the .lib is not there (the library I depend on, sndfile, is compiled by mingw by the main author). Of course, I do not pretend there is no problem, I just don't understand it. David From robert.kern at gmail.com Wed Jun 13 23:02:55 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 22:02:55 -0500 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <4670AB4B.1000007@ar.media.kyoto-u.ac.jp> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> <20070613213045.GF9984@mentat.za.net> <4670683B.3060302@gmail.com> <4670AB4B.1000007@ar.media.kyoto-u.ac.jp> Message-ID: <4670AFDF.9090306@gmail.com> David Cournapeau wrote: > Robert Kern wrote: >> The linking process on Windows requires an extension module to actually have the >> initfoo function. 
> Don't tell me this is again windows doing things totally differently > than everybody else for the sake of being incompatible :) Dynamic linking is inherently platform specific. Everyone does it differently. It just happens that Windows' choices gets in the way in this particular instance. > On windows, > the major problem I have with ctypes for pyaudiolab is detecting the > library, because numpy.distutils does not find the dll if the .lib is > not there (the library I depend on, sndfile, is compiled by mingw by the > main author). Of course, that's not a problem with ctypes, nor even numpy.distutils. It's just an unsupported use of numpy.distutils. > Of course, I do not pretend there is no problem, I just don't understand it. I think you may be reading my statement out of context. Andrew asked about building shared libraries for ctypes' use by abusing distutils to build a fake extension module from it. My response was providing the reason why that doesn't work in general. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Jun 13 23:19:01 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 22:19:01 -0500 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <4670A7B8.9060407@ar.media.kyoto-u.ac.jp> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <4670A7B8.9060407@ar.media.kyoto-u.ac.jp> Message-ID: <4670B3A5.4030503@gmail.com> David Cournapeau wrote: > Robert Kern wrote: >> I haven't seen one, yet; otherwise, I wouldn't have made the objection. >> > Ok, I am confused. I asked the question because I thought the ctypes > dependency itself may be problematic Okay, first, please answer my question as to what you were considering using ctypes for. That will help clarify the discussion; different uses impose different burdens. Again, what I was talking about *here* was building the shared library in the package itself. Another use case is to rely on having a shared library already installed; that case carries a different set of problems. > (who uses python 2.4 or 2.3 ? Is > there a list of versions we have to support ?). We are still maintaining 2.3 compatibility. > Is the problem locating an external library ? Because otherwise, I do > not see the different between ctypes or any other ways to wrap c code > (swig, C api, etc... which is used a lot already in scipy). The difference is that for building, we can use configuration files; they only have to present and correct once. If you get a binary from your distro, for example, you don't even have to deal with configuration files at all. Requiring configuration files at runtime for library code is a bad idea. Also, we *do* try to avoid requiring external libraries for scipy, even at build time. All of the external libraries are optional (I haven't tested it in a while, but if BLAS and LAPACK libraries aren't configured, they will be downloaded and built for you). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Wed Jun 13 23:17:42 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 14 Jun 2007 12:17:42 +0900 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <4670AFDF.9090306@gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> <20070613213045.GF9984@mentat.za.net> <4670683B.3060302@gmail.com> <4670AB4B.1000007@ar.media.kyoto-u.ac.jp> <4670AFDF.9090306@gmail.com> Message-ID: <4670B356.2090607@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > Dynamic linking is inherently platform specific. Everyone does it > differently. It just happens that Windows' choices gets in the way in this > particular instance. Well, details are platform specific of course, but at least solaris and linux are similar enough (I think there is no difference in System V and BSD anymore on recent unices), and OS X using gcc alleviates the dylib specificity. But complaining about it won't make it go away anyway, so just forget that I said that. > > I think you may be reading my statement out of context. Andrew asked about > building shared libraries for ctypes' use by abusing distutils to build a fake > extension module from it. Mmh, do you mean building a library with config.add_extension('foo', sources=[join('src', 'vq.c') ]) is not enought to load it through ctypes on windows ? I certainly do not want to add any more burden for scipy distribution, as this is already a big problem. I will try something else, but it is a bit problematic for scipy.sandbox.svm, because I will have to redo all the wrapping, which I didn't plan, cheers, David From robert.kern at gmail.com Wed Jun 13 23:33:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 22:33:24 -0500 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <4670B356.2090607@ar.media.kyoto-u.ac.jp> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> <20070613213045.GF9984@mentat.za.net> <4670683B.3060302@gmail.com> <4670AB4B.1000007@ar.media.kyoto-u.ac.jp> <4670AFDF.9090306@gmail.com> <4670B356.2090607@ar.media.kyoto-u.ac.jp> Message-ID: <4670B704.2000303@gmail.com> David Cournapeau wrote: > Robert Kern wrote: >> I think you may be reading my statement out of context. Andrew asked about >> building shared libraries for ctypes' use by abusing distutils to build a fake >> extension module from it. > Mmh, do you mean building a library with > > config.add_extension('foo', > > sources=[join('src', 'vq.c') ]) > > is not enought to load it through ctypes on windows ? Correct. It will not even build. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Wed Jun 13 23:44:18 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 14 Jun 2007 12:44:18 +0900 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? 
In-Reply-To: <4670B704.2000303@gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> <20070613213045.GF9984@mentat.za.net> <4670683B.3060302@gmail.com> <4670AB4B.1000007@ar.media.kyoto-u.ac.jp> <4670AFDF.9090306@gmail.com> <4670B356.2090607@ar.media.kyoto-u.ac.jp> <4670B704.2000303@gmail.com> Message-ID: <4670B992.9000109@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: >> Mmh, do you mean building a library with >> >> config.add_extension('foo', >> >> sources=[join('src', 'vq.c') ]) >> >> is not enought to load it through ctypes on windows ? > > Correct. It will not even build. > But if this does not build, this means either a C compiler is not available, or the C api for python is not there, right ? So this means scipy has to be 100 % python code ? I start to feel like an idiot, but I really don't get it: scipy.stats depends on add_extension, as does fftpack, etc... And this is not optional. If the above foo does not work, how can fftpack work ? To answer your other email: I have a small, self contained (eg does not depend on anything else than the C runtime) C library, wrapped by eg swig. Eg: #include int foo() { printf("foo\n"); return 0; } What makes it different to wrap it with ctypes than with swig, or pure C extension ? David From robert.kern at gmail.com Thu Jun 14 00:15:02 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 13 Jun 2007 23:15:02 -0500 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <4670B992.9000109@ar.media.kyoto-u.ac.jp> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <46703A0D.5010309@ee.byu.edu> <46703FB1.6000704@astraw.com> <20070613213045.GF9984@mentat.za.net> <4670683B.3060302@gmail.com> <4670AB4B.1000007@ar.media.kyoto-u.ac.jp> <4670AFDF.9090306@gmail.com> <4670B356.2090607@ar.media.kyoto-u.ac.jp> <4670B704.2000303@gmail.com> <4670B992.9000109@ar.media.kyoto-u.ac.jp> Message-ID: <4670C0C6.6080709@gmail.com> David Cournapeau wrote: > Robert Kern wrote: >> David Cournapeau wrote: >>> Mmh, do you mean building a library with >>> >>> config.add_extension('foo', >>> >>> sources=[join('src', 'vq.c') ]) >>> >>> is not enought to load it through ctypes on windows ? >> Correct. It will not even build. >> > But if this does not build, this means either a C compiler is not > available, or the C api for python is not there, right ? So this means > scipy has to be 100 % python code ? This is what I wrote: """ The linking process on Windows requires an extension module to actually have the initfoo function. """ That is, the code that is compiled and linked by config.add_extension() must actually be a Python extension module, not some shared library with arbitrary contents. Is that clear enough? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Thu Jun 14 16:29:04 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 14 Jun 2007 14:29:04 -0600 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? 
In-Reply-To: <4670B3A5.4030503@gmail.com> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <4670A7B8.9060407@ar.media.kyoto-u.ac.jp> <4670B3A5.4030503@gmail.com> Message-ID: <4671A510.4070103@ee.byu.edu> >We are still maintaining 2.3 compatibility. > > > Yes, but I don't think we would mind having some code that requires another library to be installed to be used (as long as it builds and installs without it). For example, some code in SciPy requires the PIL in order to be useful. So, is it just a matter of inserting the dummy function initfoo() into a shared library and pretending it's a Python extension that stands in the way of using ctypes in a cross-platform manner? -Travis From matthieu.brucher at gmail.com Fri Jun 15 03:52:54 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 15 Jun 2007 09:52:54 +0200 Subject: [SciPy-dev] [Python SVN][scipy SVN] Error in stats/stats.py Message-ID: Hi, "as" is now a keyword and cannot be used as a variable name in stats.py at line 2097. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Sat Jun 16 04:03:59 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 16 Jun 2007 17:03:59 +0900 Subject: [SciPy-dev] Is it ok to depend on ctypes for scipy code ? In-Reply-To: <4671A510.4070103@ee.byu.edu> References: <466FEE9C.7070204@ar.media.kyoto-u.ac.jp> <46701FEF.5020608@gmail.com> <6ce0ac130706131127w71c09a54g1d93718250a82af5@mail.gmail.com> <46703846.1010800@gmail.com> <4670A7B8.9060407@ar.media.kyoto-u.ac.jp> <4670B3A5.4030503@gmail.com> <4671A510.4070103@ee.byu.edu> Message-ID: <4673996F.70204@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: >> We are still maintaining 2.3 compatibility. >> >> >> > > Yes, but I don't think we would mind having some code that requires > another library to be installed to be used (as long as it builds and > installs without it). For example, some code in SciPy requires the PIL > in order to be useful. > > So, is it just a matter of inserting the dummy function > > initfoo() > I finally got it :) Your remark made me understand the thing that Robert was talking about. As I still couldn't understand the whole issue yet, I installed the whole visual studio thing and tried a fake module to get the problem. This all boils down to the fact that whereas on linux (and mac os X at least), a python module and a shared library really are the same thing, on windows, they are quite a different beast. For the record, here are the different problems as far as I understand: - first, distutils, at least on windows, is looking for a function initmodule (this is what you and Robert were talking about), and is needed for the link to success. The reason why I didn't understand Robert's explanation at all is that distutils *explicitely* tells VS to export the initmodule function, hence has to exist. - then, the obvious quick fix being as Travis suggested to put a fake function, but this does not seem to help much (eg the library builds but is not usable from ctypes). I knew there were some differences between "Linux" and windows linking models (symbol visibility), but they are actually much deeper than I thought. 
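(To make the symbol-visibility contrast concrete: on Linux the whole pattern really is just a plain shared object plus a dlopen through ctypes. The sketch below reuses the small foo() example from earlier in the thread; the library name and the gcc line are invented for illustration, not taken from any scipy code.)

# build the toy library by hand, outside distutils:
#   gcc -shared -fPIC -o libfoo.so foo.c
import ctypes

libfoo = ctypes.CDLL('./libfoo.so')   # dlopen() under the hood
libfoo.foo.restype = ctypes.c_int     # int foo(void)
print libfoo.foo()                    # prints "foo" from C, then 0

On Windows the equivalent CDLL call only resolves foo if the symbol is explicitly exported from the DLL (e.g. via __declspec(dllexport) or a .def file), which is exactly the visibility difference described above: a fake init function lets the link succeed, but the functions you actually want to call are still not exported.
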
I will try to see if I can easily extend numpy.distutils to build "real" shared libraries (using oof as a reference, as mentionned by Rober on a different ML I found by googling for the problem), which should solve the whole issue, right ? Or am I missing something else ? David From openopt at ukr.net Sat Jun 16 13:41:25 2007 From: openopt at ukr.net (dmitrey) Date: Sat, 16 Jun 2007 20:41:25 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <467420C5.8090308@ukr.net> hi all, for those who are interested about my GSoC scikits optimization project (BSD lic.) - see my blog for details http://openopt.blogspot.com/ WBR, Dmitrey From mtroemel81 at web.de Mon Jun 18 08:48:17 2007 From: mtroemel81 at web.de (=?iso-8859-15?Q?Maik_Tr=F6mel?=) Date: Mon, 18 Jun 2007 14:48:17 +0200 Subject: [SciPy-dev] St9bad_alloc Message-ID: <40162582@web.de> Hello List, I've got some problems with Scipy/sandbox/delaunay. Everytime I run my script the following error occures: terminate called after throwing an instance of 'std::bad_alloc' what(): St9bad_alloc Does anybody know what this means? And what I can do to avoid this error? I have postet in several forums, but nowbody could help me. Thanks for your help! Greetings Maik _____________________________________________________________________ Der WEB.DE SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! http://smartsurfer.web.de/?mc=100071&distributionid=000000000066 From david at ar.media.kyoto-u.ac.jp Mon Jun 18 08:47:30 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 18 Jun 2007 21:47:30 +0900 Subject: [SciPy-dev] St9bad_alloc In-Reply-To: <40162582@web.de> References: <40162582@web.de> Message-ID: <46767EE2.2000609@ar.media.kyoto-u.ac.jp> Maik Tr?mel wrote: > Hello List, > > I've got some problems with Scipy/sandbox/delaunay. > Everytime I run my script the following error occures: > terminate called after throwing an instance of 'std::bad_alloc' > what(): St9bad_alloc > > Does anybody know what this means? And what I can do to avoid this error? > This means that dynamic allocation failed in some of the C++ code for delaunay. > I have postet in several forums, but nowbody could help me. Well, we would need more details: can you reproduce the problem on a small, self contained example ? David From mtroemel81 at web.de Mon Jun 18 09:24:51 2007 From: mtroemel81 at web.de (=?iso-8859-15?Q?Maik_Tr=F6mel?=) Date: Mon, 18 Jun 2007 15:24:51 +0200 Subject: [SciPy-dev] St9bad_alloc Message-ID: <40193227@web.de> Here is a small example: ################################ from numpy import * from scipy import * from numpy.random import * from scipy.sandbox.delaunay import * index_y= [] index_x = [] value = [] for i in range(200): index_y.append(randint(10,2500)) index_x.append(randint(10,2500)) value.append(randint(-10,10)) nwyx = indices((2600, 2548)) tri = Triangulation(index_y, index_x) interp = tri.nn_interpolator(value) nwwert = interp(nwyx[0], nwyx[1]) print nwwert ################################ If i split the funktion "interp()" into four sub-matrizes like ################################ nwwert = zeros((2600, 2548)) nwwert[0:1300, 0:1300] = interp(nwyx[0][0:1300, 0:1300], nwyx[1][0:1300, 0:1300])) nwwert[0:1300, 0:2548] = interp(nwyx[0][0:1300, 0:2548], nwyx[1][0:1300, 0:2548])) ... ################################ the script crashes always at the same sub-matrix. Regardless in which order I process the sub-matrixes. Maik > > Maik Tr?mel wrote: > > Hello List, > > > > I've got some problems with Scipy/sandbox/delaunay. 
> > Everytime I run my script the following error occures: > > terminate called after throwing an instance of 'std::bad_alloc' > > what(): St9bad_alloc > > > > Does anybody know what this means? And what I can do to avoid this error? > > > This means that dynamic allocation failed in some of the C++ code for > delaunay. > > I have postet in several forums, but nowbody could help me. > Well, we would need more details: can you reproduce the problem on a > small, self contained example ? > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > _____________________________________________________________________ Der WEB.DE SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! http://smartsurfer.web.de/?mc=100071&distributionid=000000000066 From nwagner at iam.uni-stuttgart.de Mon Jun 18 09:43:02 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 18 Jun 2007 15:43:02 +0200 Subject: [SciPy-dev] St9bad_alloc In-Reply-To: <40193227@web.de> References: <40193227@web.de> Message-ID: <46768BE6.90201@iam.uni-stuttgart.de> Maik Tr?mel wrote: > Here is a small example: > > ################################ > from numpy import * > from scipy import * > from numpy.random import * > from scipy.sandbox.delaunay import * > > index_y= [] > index_x = [] > value = [] > for i in range(200): > index_y.append(randint(10,2500)) > index_x.append(randint(10,2500)) > value.append(randint(-10,10)) > > nwyx = indices((2600, 2548)) > tri = Triangulation(index_y, index_x) > interp = tri.nn_interpolator(value) > nwwert = interp(nwyx[0], nwyx[1]) > > print nwwert > ################################ > > > If i split the funktion "interp()" into four sub-matrizes like > ################################ > nwwert = zeros((2600, 2548)) > nwwert[0:1300, 0:1300] = interp(nwyx[0][0:1300, 0:1300], nwyx[1][0:1300, 0:1300])) > nwwert[0:1300, 0:2548] = interp(nwyx[0][0:1300, 0:2548], nwyx[1][0:1300, 0:2548])) > ... > ################################ > the script crashes always at the same sub-matrix. Regardless in which order I process the sub-matrixes. > > Maik > > >> Maik Tr?mel wrote: >> >>> Hello List, >>> >>> I've got some problems with Scipy/sandbox/delaunay. >>> Everytime I run my script the following error occures: >>> terminate called after throwing an instance of 'std::bad_alloc' >>> what(): St9bad_alloc >>> >>> Does anybody know what this means? And what I can do to avoid this error? >>> >>> >> This means that dynamic allocation failed in some of the C++ code for >> delaunay. >> >>> I have postet in several forums, but nowbody could help me. >>> >> Well, we would need more details: can you reproduce the problem on a >> small, self contained example ? >> >> David >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> >> > > > _____________________________________________________________________ > Der WEB.DE SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! 
> http://smartsurfer.web.de/?mc=100071&distributionid=000000000066 > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > I cannot reproduce the crash but python -i delaunay.py [[ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] ..., [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan] [ nan nan nan ..., nan nan nan]] Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: delaunay.py Type: text/x-python Size: 436 bytes Desc: not available URL: From mtroemel81 at web.de Mon Jun 18 09:52:59 2007 From: mtroemel81 at web.de (=?iso-8859-15?Q?Maik_Tr=F6mel?=) Date: Mon, 18 Jun 2007 15:52:59 +0200 Subject: [SciPy-dev] St9bad_alloc Message-ID: <40229538@web.de> Every point out of a convex hull arround the input data points gets the value nan. Try nwwert.max() . Do you get a result? Maik > > Maik Tr?mel wrote: > > Here is a small example: > > > > ################################ > > from numpy import * > > from scipy import * > > from numpy.random import * > > from scipy.sandbox.delaunay import * > > > > index_y= [] > > index_x = [] > > value = [] > > for i in range(200): > > index_y.append(randint(10,2500)) > > index_x.append(randint(10,2500)) > > value.append(randint(-10,10)) > > > > nwyx = indices((2600, 2548)) > > tri = Triangulation(index_y, index_x) > > interp = tri.nn_interpolator(value) > > nwwert = interp(nwyx[0], nwyx[1]) > > > > print nwwert > > ################################ > > > > > > If i split the funktion "interp()" into four sub-matrizes like > > ################################ > > nwwert = zeros((2600, 2548)) > > nwwert[0:1300, 0:1300] = interp(nwyx[0][0:1300, 0:1300], nwyx[1][0:1300, 0:1300])) > > nwwert[0:1300, 0:2548] = interp(nwyx[0][0:1300, 0:2548], nwyx[1][0:1300, 0:2548])) > > ... > > ################################ > > the script crashes always at the same sub-matrix. Regardless in which order I process the sub-matrixes. > > > > Maik > > > > > >> Maik Tr?mel wrote: > >> > >>> Hello List, > >>> > >>> I've got some problems with Scipy/sandbox/delaunay. > >>> Everytime I run my script the following error occures: > >>> terminate called after throwing an instance of 'std::bad_alloc' > >>> what(): St9bad_alloc > >>> > >>> Does anybody know what this means? And what I can do to avoid this error? > >>> > >>> > >> This means that dynamic allocation failed in some of the C++ code for > >> delaunay. > >> > >>> I have postet in several forums, but nowbody could help me. > >>> > >> Well, we would need more details: can you reproduce the problem on a > >> small, self contained example ? > >> > >> David > >> _______________________________________________ > >> Scipy-dev mailing list > >> Scipy-dev at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-dev > >> > >> > > > > > > _____________________________________________________________________ > > Der WEB.DE SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! 
> > http://smartsurfer.web.de/?mc=100071&distributionid=000000000066 > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > I cannot reproduce the crash but > python -i delaunay.py > [[ nan nan > nan ..., nan > nan nan] > [ nan nan > nan ..., nan > nan nan] > [ nan nan > nan ..., nan > nan nan] > ..., > [ nan nan > nan ..., nan > nan nan] > [ nan nan > nan ..., nan > nan nan] > [ nan nan > nan ..., nan > nan nan]] > > Nils > > > >
> _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > __________________________________________________________________________ Erweitern Sie FreeMail zu einem noch leistungsst?rkeren E-Mail-Postfach! Mehr Infos unter http://produkte.web.de/club/?mc=021131 From openopt at ukr.net Mon Jun 18 15:00:44 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 18 Jun 2007 22:00:44 +0300 Subject: [SciPy-dev] why scipy license page doesn't exist? Message-ID: <4676D65C.50404@ukr.net> I thought it should be had written long time ago http://www.scipy.org/License page says: License *This page does not exist yet. You can create a new empty page, or use one of the page templates. Before creating the page, please check if a similar page already exists. * Dmitrey. From nwagner at iam.uni-stuttgart.de Mon Jun 18 15:04:19 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 18 Jun 2007 21:04:19 +0200 Subject: [SciPy-dev] why scipy license page doesn't exist? In-Reply-To: <4676D65C.50404@ukr.net> References: <4676D65C.50404@ukr.net> Message-ID: On Mon, 18 Jun 2007 22:00:44 +0300 dmitrey wrote: > I thought it should be had written long time ago > http://www.scipy.org/License page says: > > License *This page does not exist yet. You can create a >new empty page, > or use one of the page templates. Before creating the >page, please check > if a similar page already exists. > * > Dmitrey. See http://www.scipy.org/License_Compatibility From openopt at ukr.net Mon Jun 18 15:06:09 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 18 Jun 2007 22:06:09 +0300 Subject: [SciPy-dev] (once again about scipy license page) Message-ID: <4676D7A1.1050602@ukr.net> I have been redirected to the http://www.scipy.org/License from here: http://www.scipy.org/FAQ#head-22f0cc18e232f57520678cd55ef7e904113fa304 D. From nwagner at iam.uni-stuttgart.de Mon Jun 18 15:26:39 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 18 Jun 2007 21:26:39 +0200 Subject: [SciPy-dev] (once again about scipy license page) In-Reply-To: <4676D7A1.1050602@ukr.net> References: <4676D7A1.1050602@ukr.net> Message-ID: On Mon, 18 Jun 2007 22:06:09 +0300 dmitrey wrote: > I have been redirected to the >http://www.scipy.org/License from here: > > http://www.scipy.org/FAQ#head-22f0cc18e232f57520678cd55ef7e904113fa304 > > D. > Fixed. Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From nwagner at iam.uni-stuttgart.de Wed Jun 20 02:42:00 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 20 Jun 2007 08:42:00 +0200 Subject: [SciPy-dev] scipy.cluster Message-ID: <4678CC38.2040107@iam.uni-stuttgart.de> Hi all, The recent changes in scipy.cluster have introduced some MemoryErrors ====================================================================== ERROR: Testing that kmeans2 init methods work. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", line 137, in check_kmeans2_init kmeans2(data, 3, minit = 'random') File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 545, in kmeans2 return _kmeans2(data, clusters, iter, nc) File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 558, in _kmeans2 label = vq(data, code)[0] File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 144, in vq results = _vq.vq(c_obs, c_code_book) MemoryError ====================================================================== ERROR: Testing simple call to kmeans2 with rank 1 data. ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", line 129, in check_kmeans2_rank1 code1 = kmeans2(data1, code, iter = 1)[0] File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 545, in kmeans2 return _kmeans2(data, clusters, iter, nc) File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 558, in _kmeans2 label = vq(data, code)[0] File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 144, in vq results = _vq.vq(c_obs, c_code_book) MemoryError ====================================================================== ERROR: Testing simple call to kmeans2 and its results. ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", line 114, in check_kmeans2_simple code1 = kmeans2(X, code, iter = 1)[0] File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 545, in kmeans2 return _kmeans2(data, clusters, iter, nc) File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 558, in _kmeans2 label = vq(data, code)[0] File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 144, in vq results = _vq.vq(c_obs, c_code_book) MemoryError ====================================================================== ERROR: This will cause kmean to have a cluster with no points. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", line 108, in check_kmeans_lost_cluster res = kmeans(data, initk) File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 400, in kmeans result = _kmeans(obs, guess, thresh = thresh) File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 320, in _kmeans obs_code, distort = vq(obs, code_book) File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 144, in vq results = _vq.vq(c_obs, c_code_book) MemoryError ====================================================================== ERROR: check_kmeans_simple (scipy.cluster.tests.test_vq.test_kmean) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", line 96, in check_kmeans_simple code1 = kmeans(X, code, iter = 1)[0] File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 400, in kmeans result = _kmeans(obs, guess, thresh = thresh) File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 320, in _kmeans obs_code, distort = vq(obs, code_book) File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line 144, in vq results = _vq.vq(c_obs, c_code_book) MemoryError ====================================================================== ERROR: check_vq (scipy.cluster.tests.test_vq.test_vq) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", line 63, in check_vq label1, dist = _vq.vq(X, initc) MemoryError ====================================================================== ERROR: Test special rank 1 vq algo, python implementation. ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", line 85, in check_vq_1d a, b = _vq.vq(data, initc) MemoryError From david at ar.media.kyoto-u.ac.jp Wed Jun 20 03:09:59 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 20 Jun 2007 16:09:59 +0900 Subject: [SciPy-dev] scipy.cluster In-Reply-To: <4678CC38.2040107@iam.uni-stuttgart.de> References: <4678CC38.2040107@iam.uni-stuttgart.de> Message-ID: <4678D2C7.6000605@ar.media.kyoto-u.ac.jp> Nils Wagner wrote: > Hi all, > > The recent changes in scipy.cluster have introduced some MemoryErrors > > ====================================================================== > ERROR: Testing that kmeans2 init methods work. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", > line 137, in check_kmeans2_init > kmeans2(data, 3, minit = 'random') > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 545, in kmeans2 > return _kmeans2(data, clusters, iter, nc) > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 558, in _kmeans2 > label = vq(data, code)[0] > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 144, in vq > results = _vq.vq(c_obs, c_code_book) > MemoryError > > ====================================================================== > ERROR: Testing simple call to kmeans2 with rank 1 data. 
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", > line 129, in check_kmeans2_rank1 > code1 = kmeans2(data1, code, iter = 1)[0] > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 545, in kmeans2 > return _kmeans2(data, clusters, iter, nc) > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 558, in _kmeans2 > label = vq(data, code)[0] > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 144, in vq > results = _vq.vq(c_obs, c_code_book) > MemoryError > > ====================================================================== > ERROR: Testing simple call to kmeans2 and its results. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", > line 114, in check_kmeans2_simple > code1 = kmeans2(X, code, iter = 1)[0] > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 545, in kmeans2 > return _kmeans2(data, clusters, iter, nc) > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 558, in _kmeans2 > label = vq(data, code)[0] > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 144, in vq > results = _vq.vq(c_obs, c_code_book) > MemoryError > > ====================================================================== > ERROR: This will cause kmean to have a cluster with no points. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", > line 108, in check_kmeans_lost_cluster > res = kmeans(data, initk) > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 400, in kmeans > result = _kmeans(obs, guess, thresh = thresh) > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 320, in _kmeans > obs_code, distort = vq(obs, code_book) > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 144, in vq > results = _vq.vq(c_obs, c_code_book) > MemoryError > > ====================================================================== > ERROR: check_kmeans_simple (scipy.cluster.tests.test_vq.test_kmean) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", > line 96, in check_kmeans_simple > code1 = kmeans(X, code, iter = 1)[0] > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 400, in kmeans > result = _kmeans(obs, guess, thresh = thresh) > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 320, in _kmeans > obs_code, distort = vq(obs, code_book) > File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line > 144, in vq > results = _vq.vq(c_obs, c_code_book) > MemoryError > > ====================================================================== > ERROR: check_vq (scipy.cluster.tests.test_vq.test_vq) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", > line 63, in check_vq > label1, dist = _vq.vq(X, initc) > MemoryError > > ====================================================================== > ERROR: Test special rank 1 vq algo, python 
implementation. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", > line 85, in check_vq_1d > a, b = _vq.vq(data, initc) > MemoryError That's regression is my fault :) I see that your machine is a 64 bits, which may have exposed some bugs I didn't see. Unfortunately, I don't have a 64 bits machine available right now... There is one obvious error I could spot, though: could you try this simple patch ? Index: Lib/cluster/src/vq_module.c =================================================================== --- Lib/cluster/src/vq_module.c (revision 3110) +++ Lib/cluster/src/vq_module.c (working copy) @@ -1,5 +1,5 @@ /* - * Last Change: Tue Jun 19 11:00 PM 2007 J + * Last Change: Wed Jun 20 04:00 PM 2007 J * */ #include @@ -97,24 +97,24 @@ if (dist_a == NULL) { goto clean_code_a; } - index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, NPY_INT, 0); + index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, PyArray_INTP, 0); if (index_a == NULL) { goto clean_dist_a; } float_tvq((float*)obs_a->data, (float*)code_a->data, n, nc, d, - (int*)index_a->data, (float*)dist_a->data); + (npy_intp*)index_a->data, (float*)dist_a->data); break; case NPY_DOUBLE: dist_a = (PyArrayObject*)PyArray_EMPTY(1, &n, typenum1, 0); if (dist_a == NULL) { goto clean_code_a; } - index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, NPY_INT, 0); + index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, PyArray_INTP, 0); if (index_a == NULL) { goto clean_dist_a; } double_tvq((double*)obs_a->data, (double*)code_a->data, n, nc, d, - (int*)index_a->data, (double*)dist_a->data); + (npy_intp*)index_a->data, (double*)dist_a->data); break; default: PyErr_Format(PyExc_ValueError, @@ -151,4 +151,3 @@ Py_DECREF(obs_a); return NULL; } - David From nwagner at iam.uni-stuttgart.de Wed Jun 20 04:03:55 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 20 Jun 2007 10:03:55 +0200 Subject: [SciPy-dev] scipy.cluster In-Reply-To: <4678D2C7.6000605@ar.media.kyoto-u.ac.jp> References: <4678CC38.2040107@iam.uni-stuttgart.de> <4678D2C7.6000605@ar.media.kyoto-u.ac.jp> Message-ID: <4678DF6B.6030800@iam.uni-stuttgart.de> David Cournapeau wrote: > Nils Wagner wrote: > >> Hi all, >> >> The recent changes in scipy.cluster have introduced some MemoryErrors >> >> ====================================================================== >> ERROR: Testing that kmeans2 init methods work. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", >> line 137, in check_kmeans2_init >> kmeans2(data, 3, minit = 'random') >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 545, in kmeans2 >> return _kmeans2(data, clusters, iter, nc) >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 558, in _kmeans2 >> label = vq(data, code)[0] >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 144, in vq >> results = _vq.vq(c_obs, c_code_book) >> MemoryError >> >> ====================================================================== >> ERROR: Testing simple call to kmeans2 with rank 1 data. 
>> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", >> line 129, in check_kmeans2_rank1 >> code1 = kmeans2(data1, code, iter = 1)[0] >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 545, in kmeans2 >> return _kmeans2(data, clusters, iter, nc) >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 558, in _kmeans2 >> label = vq(data, code)[0] >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 144, in vq >> results = _vq.vq(c_obs, c_code_book) >> MemoryError >> >> ====================================================================== >> ERROR: Testing simple call to kmeans2 and its results. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", >> line 114, in check_kmeans2_simple >> code1 = kmeans2(X, code, iter = 1)[0] >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 545, in kmeans2 >> return _kmeans2(data, clusters, iter, nc) >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 558, in _kmeans2 >> label = vq(data, code)[0] >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 144, in vq >> results = _vq.vq(c_obs, c_code_book) >> MemoryError >> >> ====================================================================== >> ERROR: This will cause kmean to have a cluster with no points. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", >> line 108, in check_kmeans_lost_cluster >> res = kmeans(data, initk) >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 400, in kmeans >> result = _kmeans(obs, guess, thresh = thresh) >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 320, in _kmeans >> obs_code, distort = vq(obs, code_book) >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 144, in vq >> results = _vq.vq(c_obs, c_code_book) >> MemoryError >> >> ====================================================================== >> ERROR: check_kmeans_simple (scipy.cluster.tests.test_vq.test_kmean) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", >> line 96, in check_kmeans_simple >> code1 = kmeans(X, code, iter = 1)[0] >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 400, in kmeans >> result = _kmeans(obs, guess, thresh = thresh) >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 320, in _kmeans >> obs_code, distort = vq(obs, code_book) >> File "/usr/lib64/python2.4/site-packages/scipy/cluster/vq.py", line >> 144, in vq >> results = _vq.vq(c_obs, c_code_book) >> MemoryError >> >> ====================================================================== >> ERROR: check_vq (scipy.cluster.tests.test_vq.test_vq) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", >> line 63, in check_vq >> label1, dist = _vq.vq(X, initc) >> MemoryError >> >> 
====================================================================== >> ERROR: Test special rank 1 vq algo, python implementation. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib64/python2.4/site-packages/scipy/cluster/tests/test_vq.py", >> line 85, in check_vq_1d >> a, b = _vq.vq(data, initc) >> MemoryError >> > That's regression is my fault :) I see that your machine is a 64 bits, > which may have exposed some bugs I didn't see. Unfortunately, I don't > have a 64 bits machine available right now... > > There is one obvious error I could spot, though: could you try this > simple patch ? > > Index: Lib/cluster/src/vq_module.c > =================================================================== > --- Lib/cluster/src/vq_module.c (revision 3110) > +++ Lib/cluster/src/vq_module.c (working copy) > @@ -1,5 +1,5 @@ > /* > - * Last Change: Tue Jun 19 11:00 PM 2007 J > + * Last Change: Wed Jun 20 04:00 PM 2007 J > * > */ > #include > @@ -97,24 +97,24 @@ > if (dist_a == NULL) { > goto clean_code_a; > } > - index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, NPY_INT, 0); > + index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, > PyArray_INTP, 0); > if (index_a == NULL) { > goto clean_dist_a; > } > float_tvq((float*)obs_a->data, (float*)code_a->data, n, nc, d, > - (int*)index_a->data, (float*)dist_a->data); > + (npy_intp*)index_a->data, (float*)dist_a->data); > break; > case NPY_DOUBLE: > dist_a = (PyArrayObject*)PyArray_EMPTY(1, &n, typenum1, 0); > if (dist_a == NULL) { > goto clean_code_a; > } > - index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, NPY_INT, 0); > + index_a = (PyArrayObject*)PyArray_EMPTY(1, &n, > PyArray_INTP, 0); > if (index_a == NULL) { > goto clean_dist_a; > } > double_tvq((double*)obs_a->data, (double*)code_a->data, n, > nc, d, > - (int*)index_a->data, (double*)dist_a->data); > + (npy_intp*)index_a->data, (double*)dist_a->data); > break; > default: > PyErr_Format(PyExc_ValueError, > @@ -151,4 +151,3 @@ > Py_DECREF(obs_a); > return NULL; > } > - > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Hi David, I have applied your patch and installed scipy from scratch. The errors persist. Nils From david at ar.media.kyoto-u.ac.jp Wed Jun 20 06:08:39 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 20 Jun 2007 19:08:39 +0900 Subject: [SciPy-dev] scipy.cluster In-Reply-To: <4678DF6B.6030800@iam.uni-stuttgart.de> References: <4678CC38.2040107@iam.uni-stuttgart.de> <4678D2C7.6000605@ar.media.kyoto-u.ac.jp> <4678DF6B.6030800@iam.uni-stuttgart.de> Message-ID: <4678FCA7.7020601@ar.media.kyoto-u.ac.jp> Nils Wagner wrote: > > Hi David, > > I have applied your patch and installed scipy from scratch. > The errors persist. Mmmh, I will test it tonight on my macbook, then. I have a virtual machine on ubuntu 64 for testing purpose. David From mtroemel81 at web.de Thu Jun 21 02:55:47 2007 From: mtroemel81 at web.de (=?iso-8859-15?Q?Maik_Tr=F6mel?=) Date: Thu, 21 Jun 2007 08:55:47 +0200 Subject: [SciPy-dev] St9bad_alloc Message-ID: <43748975@web.de> I tried it out at another system and I get the same error. Probably there is memory leak in the delaunay-package. Or does anybody else have an idea what the problem could be? 
Here is an example script (with sub-matrixes) which crashes: from numpy import * from scipy import * from numpy.random import * from scipy.sandbox.delaunay import * index_y= [] index_x = [] value = [] for i in range(200): index_y.append(randint(10,2500)) index_x.append(randint(10,2500)) value.append(randint(1,10)) nwyx = indices((2600, 2548)) tri = Triangulation(index_y, index_x) interp = tri.nn_interpolator(value) nwwert = zeros((2600, 2548)) nwwert[0:1300, 0:1300] = interp(nwyx[0][0:1300, 0:1300], nwyx[1][0:1300, 0:1300]) nwwert[0:1300, 1300:2548] = interp(nwyx[0][0:1300, 1300:2548], nwyx[1][0:1300, 1300:2548]) nwwert[1300:2600, 0:1300] = interp(nwyx[0][1300:2600, 0:1300], nwyx[1][1300:2600, 0:1300]) nwwert[1300:2600, 1300:2548] = interp(nwyx[0][1300:2600, 1300:2548], nwyx[1][1300:2600, 1300:2548]) nwwert = where(nwwert == nwwert, nwwert, 0) print nwwert.max() Greetings Maik > -----Urspr?ngliche Nachricht----- > Von: SciPy Developers List > Gesendet: 18.06.07 15:53:16 > An: SciPy Developers List > Betreff: Re: [SciPy-dev] St9bad_alloc > > Every point out of a convex hull arround the input data points gets the value nan. > Try nwwert.max() . Do you get a result? > > Maik > > > > > > Maik Tr?mel wrote: > > > Here is a small example: > > > > > > ################################ > > > from numpy import * > > > from scipy import * > > > from numpy.random import * > > > from scipy.sandbox.delaunay import * > > > > > > index_y= [] > > > index_x = [] > > > value = [] > > > for i in range(200): > > > index_y.append(randint(10,2500)) > > > index_x.append(randint(10,2500)) > > > value.append(randint(-10,10)) > > > > > > nwyx = indices((2600, 2548)) > > > tri = Triangulation(index_y, index_x) > > > interp = tri.nn_interpolator(value) > > > nwwert = interp(nwyx[0], nwyx[1]) > > > > > > print nwwert > > > ################################ > > > > > > > > > If i split the funktion "interp()" into four sub-matrizes like > > > ################################ > > > nwwert = zeros((2600, 2548)) > > > nwwert[0:1300, 0:1300] = interp(nwyx[0][0:1300, 0:1300], nwyx[1][0:1300, 0:1300])) > > > nwwert[0:1300, 0:2548] = interp(nwyx[0][0:1300, 0:2548], nwyx[1][0:1300, 0:2548])) > > > ... > > > ################################ > > > the script crashes always at the same sub-matrix. Regardless in which order I process the sub-matrixes. > > > > > > Maik > > > > > > > > >> Maik Tr?mel wrote: > > >> > > >>> Hello List, > > >>> > > >>> I've got some problems with Scipy/sandbox/delaunay. > > >>> Everytime I run my script the following error occures: > > >>> terminate called after throwing an instance of 'std::bad_alloc' > > >>> what(): St9bad_alloc > > >>> > > >>> Does anybody know what this means? And what I can do to avoid this error? > > >>> > > >>> > > >> This means that dynamic allocation failed in some of the C++ code for > > >> delaunay. > > >> > > >>> I have postet in several forums, but nowbody could help me. > > >>> > > >> Well, we would need more details: can you reproduce the problem on a > > >> small, self contained example ? > > >> > > >> David > > >> _______________________________________________ > > >> Scipy-dev mailing list > > >> Scipy-dev at scipy.org > > >> http://projects.scipy.org/mailman/listinfo/scipy-dev > > >> > > >> > > > > > > > > > _____________________________________________________________________ > > > Der WEB.DE SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! 
> > > http://smartsurfer.web.de/?mc=100071&distributionid=000000000066 > > > > > > _______________________________________________ > > > Scipy-dev mailing list > > > Scipy-dev at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > > > I cannot reproduce the crash but > > python -i delaunay.py > > [[ nan nan > > nan ..., nan > > nan nan] > > [ nan nan > > nan ..., nan > > nan nan] > > [ nan nan > > nan ..., nan > > nan nan] > > ..., > > [ nan nan > > nan ..., nan > > nan nan] > > [ nan nan > > nan ..., nan > > nan nan] > > [ nan nan > > nan ..., nan > > nan nan]] > > > > Nils > > > > > > > >
> > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > > > __________________________________________________________________________ > Erweitern Sie FreeMail zu einem noch leistungsst?rkeren E-Mail-Postfach! > Mehr Infos unter http://produkte.web.de/club/?mc=021131 > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > _____________________________________________________________________ Der WEB.DE SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! http://smartsurfer.web.de/?mc=100071&distributionid=000000000066 From nwagner at iam.uni-stuttgart.de Thu Jun 21 03:00:51 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 21 Jun 2007 09:00:51 +0200 Subject: [SciPy-dev] St9bad_alloc In-Reply-To: <43748975@web.de> References: <43748975@web.de> Message-ID: <467A2223.50506@iam.uni-stuttgart.de> Maik Tr?mel wrote: > I tried it out at another system and I get the same error. > Probably there is memory leak in the delaunay-package. Or does anybody else have an idea what the problem could be? > > Here is an example script (with sub-matrixes) which crashes: > > from numpy import * > from scipy import * > from numpy.random import * > from scipy.sandbox.delaunay import * > > index_y= [] > index_x = [] > value = [] > for i in range(200): > index_y.append(randint(10,2500)) > index_x.append(randint(10,2500)) > value.append(randint(1,10)) > nwyx = indices((2600, 2548)) > tri = Triangulation(index_y, index_x) > interp = tri.nn_interpolator(value) > nwwert = zeros((2600, 2548)) > nwwert[0:1300, 0:1300] = interp(nwyx[0][0:1300, 0:1300], nwyx[1][0:1300, 0:1300]) > nwwert[0:1300, 1300:2548] = interp(nwyx[0][0:1300, 1300:2548], nwyx[1][0:1300, 1300:2548]) > nwwert[1300:2600, 0:1300] = interp(nwyx[0][1300:2600, 0:1300], nwyx[1][1300:2600, 0:1300]) > nwwert[1300:2600, 1300:2548] = interp(nwyx[0][1300:2600, 1300:2548], nwyx[1][1300:2600, 1300:2548]) > > nwwert = where(nwwert == nwwert, nwwert, 0) > print nwwert.max() > > Greetings Maik > > Here is the output of your script python -i maik.py 9.0 >>> >>> import scipy >>> scipy.__version__ '0.5.3.dev3112' >>> import numpy >>> numpy.__version__ '1.0.4.dev3875' Nils From mtroemel81 at web.de Thu Jun 21 04:52:02 2007 From: mtroemel81 at web.de (=?iso-8859-15?Q?Maik_Tr=F6mel?=) Date: Thu, 21 Jun 2007 10:52:02 +0200 Subject: [SciPy-dev] St9bad_alloc Message-ID: <43903540@web.de> My system has: Python 2.4.1 Scipy '0.5.2' Numpy '1.0.1' The Second System was Python 2.5.1, with the same scipy and numpy versions. Both system are running with Debian Testing. At the moment I can't update, but I'll try out. Greetings Maik > -----Urspr?ngliche Nachricht----- > Von: SciPy Developers List > Gesendet: 21.06.07 09:01:28 > An: SciPy Developers List > Betreff: Re: [SciPy-dev] St9bad_alloc > > Maik Tr?mel wrote: > > I tried it out at another system and I get the same error. > > Probably there is memory leak in the delaunay-package. Or does anybody else have an idea what the problem could be? 
> > > > Here is an example script (with sub-matrixes) which crashes: > > > > from numpy import * > > from scipy import * > > from numpy.random import * > > from scipy.sandbox.delaunay import * > > > > index_y= [] > > index_x = [] > > value = [] > > for i in range(200): > > index_y.append(randint(10,2500)) > > index_x.append(randint(10,2500)) > > value.append(randint(1,10)) > > nwyx = indices((2600, 2548)) > > tri = Triangulation(index_y, index_x) > > interp = tri.nn_interpolator(value) > > nwwert = zeros((2600, 2548)) > > nwwert[0:1300, 0:1300] = interp(nwyx[0][0:1300, 0:1300], nwyx[1][0:1300, 0:1300]) > > nwwert[0:1300, 1300:2548] = interp(nwyx[0][0:1300, 1300:2548], nwyx[1][0:1300, 1300:2548]) > > nwwert[1300:2600, 0:1300] = interp(nwyx[0][1300:2600, 0:1300], nwyx[1][1300:2600, 0:1300]) > > nwwert[1300:2600, 1300:2548] = interp(nwyx[0][1300:2600, 1300:2548], nwyx[1][1300:2600, 1300:2548]) > > > > nwwert = where(nwwert == nwwert, nwwert, 0) > > print nwwert.max() > > > > Greetings Maik > > > > > > Here is the output of your script > python -i maik.py > 9.0 > >>> > >>> import scipy > >>> scipy.__version__ > '0.5.3.dev3112' > >>> import numpy > >>> numpy.__version__ > '1.0.4.dev3875' > > > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > _____________________________________________________________________ Der WEB.DE SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen! http://smartsurfer.web.de/?mc=100071&distributionid=000000000066 From david at ar.media.kyoto-u.ac.jp Thu Jun 21 04:51:45 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 21 Jun 2007 17:51:45 +0900 Subject: [SciPy-dev] St9bad_alloc In-Reply-To: <43903540@web.de> References: <43903540@web.de> Message-ID: <467A3C21.8090706@ar.media.kyoto-u.ac.jp> Maik Tr?mel wrote: > My system has: > Python 2.4.1 > Scipy '0.5.2' > Numpy '1.0.1' > > The Second System was Python 2.5.1, with the same scipy and numpy versions. > Both system are running with Debian Testing. > Something to keep in mind is that your script is using a lot of memory (several hundred MB seems likely). This would depend on the algorithms used, but with arrays of 2000x2000 double, it could simply be that one new call in the C++ code fails because no memory is available to the system anymore (new raises bad_alloc if not enough memory can be allocated, if I remember correctly). How much memory do you have on your computer ? Can you check the memory behaviour of the application with eg top, etc... ? David From nwagner at iam.uni-stuttgart.de Thu Jun 21 05:04:54 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 21 Jun 2007 11:04:54 +0200 Subject: [SciPy-dev] St9bad_alloc In-Reply-To: <467A3C21.8090706@ar.media.kyoto-u.ac.jp> References: <43903540@web.de> <467A3C21.8090706@ar.media.kyoto-u.ac.jp> Message-ID: <467A3F36.6040404@iam.uni-stuttgart.de> David Cournapeau wrote: > Maik Tr?mel wrote: > >> My system has: >> Python 2.4.1 >> Scipy '0.5.2' >> Numpy '1.0.1' >> >> The Second System was Python 2.5.1, with the same scipy and numpy versions. >> Both system are running with Debian Testing. >> >> > Something to keep in mind is that your script is using a lot of memory > (several hundred MB seems likely). 
This would depend on the algorithms > used, but with arrays of 2000x2000 double, it could simply be that one > new call in the C++ code fails because no memory is available to the > system anymore (new raises bad_alloc if not enough memory can be > allocated, if I remember correctly). > > How much memory do you have on your computer ? Can you check the memory > behaviour of the application with eg top, etc... ? > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Just for comparison with Maik cat /proc/meminfo MemTotal: 1024720 kB MemFree: 17524 kB Buffers: 48128 kB Cached: 488172 kB SwapCached: 4416 kB Active: 674772 kB Inactive: 227788 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 1024720 kB LowFree: 17524 kB SwapTotal: 4200956 kB SwapFree: 4151064 kB Dirty: 20 kB Writeback: 0 kB Mapped: 499428 kB Slab: 83668 kB CommitLimit: 4713316 kB Committed_AS: 645804 kB PageTables: 7212 kB VmallocTotal: 34359738367 kB VmallocUsed: 14860 kB VmallocChunk: 34359720875 kB HugePages_Total: 0 HugePages_Free: 0 Hugepagesize: 2048 kB Nils From mtroemel81 at web.de Thu Jun 21 05:55:26 2007 From: mtroemel81 at web.de (=?iso-8859-15?Q?Maik_Tr=F6mel?=) Date: Thu, 21 Jun 2007 11:55:26 +0200 Subject: [SciPy-dev] St9bad_alloc Message-ID: <44016353@web.de> I've watched the memory behaviour with 'top'. The example script runs for example until the third sub-matirx constantly with 33% memory. Then the memory usage raises very fast to 86% bevor the script crashes. For comparison: cat /proc/meminfo MemTotal: 2075184 kB MemFree: 324188 kB Buffers: 139112 kB Cached: 998348 kB SwapCached: 70988 kB Active: 990008 kB Inactive: 531008 kB HighTotal: 1177788 kB HighFree: 137732 kB LowTotal: 897396 kB LowFree: 186456 kB SwapTotal: 1951888 kB SwapFree: 1819768 kB Dirty: 256 kB Writeback: 0 kB AnonPages: 379712 kB Mapped: 98196 kB Slab: 208668 kB PageTables: 2948 kB NFS_Unstable: 0 kB Bounce: 0 kB CommitLimit: 2989480 kB Committed_AS: 735168 kB VmallocTotal: 114680 kB VmallocUsed: 14140 kB VmallocChunk: 100312 kB > -----Urspr?ngliche Nachricht----- > Von: SciPy Developers List > Gesendet: 21.06.07 11:05:12 > An: SciPy Developers List > Betreff: Re: [SciPy-dev] St9bad_alloc > > David Cournapeau wrote: > > Maik Tr?mel wrote: > > > >> My system has: > >> Python 2.4.1 > >> Scipy '0.5.2' > >> Numpy '1.0.1' > >> > >> The Second System was Python 2.5.1, with the same scipy and numpy versions. > >> Both system are running with Debian Testing. > >> > >> > > Something to keep in mind is that your script is using a lot of memory > > (several hundred MB seems likely). This would depend on the algorithms > > used, but with arrays of 2000x2000 double, it could simply be that one > > new call in the C++ code fails because no memory is available to the > > system anymore (new raises bad_alloc if not enough memory can be > > allocated, if I remember correctly). > > > > How much memory do you have on your computer ? Can you check the memory > > behaviour of the application with eg top, etc... ? 
> > > > David > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > Just for comparison with Maik > > cat /proc/meminfo > > MemTotal: 1024720 kB > MemFree: 17524 kB > Buffers: 48128 kB > Cached: 488172 kB > SwapCached: 4416 kB > Active: 674772 kB > Inactive: 227788 kB > HighTotal: 0 kB > HighFree: 0 kB > LowTotal: 1024720 kB > LowFree: 17524 kB > SwapTotal: 4200956 kB > SwapFree: 4151064 kB > Dirty: 20 kB > Writeback: 0 kB > Mapped: 499428 kB > Slab: 83668 kB > CommitLimit: 4713316 kB > Committed_AS: 645804 kB > PageTables: 7212 kB > VmallocTotal: 34359738367 kB > VmallocUsed: 14860 kB > VmallocChunk: 34359720875 kB > HugePages_Total: 0 > HugePages_Free: 0 > Hugepagesize: 2048 kB > > Nils > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > ______________________________________________________________________ XXL-Speicher, PC-Virenschutz, Spartarife & mehr: Nur im WEB.DE Club! Jetzt gratis testen! http://produkte.web.de/club/?mc=021130 From david at ar.media.kyoto-u.ac.jp Thu Jun 21 05:58:01 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 21 Jun 2007 18:58:01 +0900 Subject: [SciPy-dev] St9bad_alloc In-Reply-To: <44016353@web.de> References: <44016353@web.de> Message-ID: <467A4BA9.3000705@ar.media.kyoto-u.ac.jp> Maik Tr?mel wrote: > I've watched the memory behaviour with 'top'. The example script runs for example until the third sub-matirx constantly with 33% memory. Then the memory usage raises very fast to 86% bevor the script crashes. > > I tried it on my workstation (with last numpy/scipy svn). It indeed looks like something fishy is going on. I will take a look at it to see if the problem can be spotted easily. David From david at ar.media.kyoto-u.ac.jp Thu Jun 21 06:15:12 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 21 Jun 2007 19:15:12 +0900 Subject: [SciPy-dev] St9bad_alloc In-Reply-To: <467A4BA9.3000705@ar.media.kyoto-u.ac.jp> References: <44016353@web.de> <467A4BA9.3000705@ar.media.kyoto-u.ac.jp> Message-ID: <467A4FB0.7040704@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Maik Tr?mel wrote: > >> I've watched the memory behaviour with 'top'. The example script runs for example until the third sub-matirx constantly with 33% memory. Then the memory usage raises very fast to 86% bevor the script crashes. >> >> >> > I tried it on my workstation (with last numpy/scipy svn). It indeed > looks like something fishy is going on. I will take a look at it to see > if the problem can be spotted easily. > Actually, it is not constantly, that is sometimes it works, but most of the times it does not... This is not a good news with respect to rapid bug squashing :) David From mtroemel81 at web.de Thu Jun 21 07:38:28 2007 From: mtroemel81 at web.de (=?iso-8859-15?Q?Maik_Tr=F6mel?=) Date: Thu, 21 Jun 2007 13:38:28 +0200 Subject: [SciPy-dev] St9bad_alloc Message-ID: <44187506@web.de> Ok, I was afraid of something like this. Thank you for your effords! Maik > -----Urspr?ngliche Nachricht----- > Von: SciPy Developers List > Gesendet: 21.06.07 12:24:23 > An: SciPy Developers List > Betreff: Re: [SciPy-dev] St9bad_alloc > > David Cournapeau wrote: > > Maik Tr?mel wrote: > > > >> I've watched the memory behaviour with 'top'. The example script runs for example until the third sub-matirx constantly with 33% memory. 
Then the memory usage raises very fast to 86% bevor the script crashes. > >> > >> > >> > > I tried it on my workstation (with last numpy/scipy svn). It indeed > > looks like something fishy is going on. I will take a look at it to see > > if the problem can be spotted easily. > > > Actually, it is not constantly, that is sometimes it works, but most of > the times it does not... This is not a good news with respect to rapid > bug squashing :) > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > _________________________________________________________________________ In 5 Schritten zur eigenen Homepage. Jetzt Domain sichern und gestalten! Nur 3,99 EUR/Monat! http://www.maildomain.web.de/?mc=021114 From robert.kern at gmail.com Thu Jun 21 12:52:21 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 21 Jun 2007 11:52:21 -0500 Subject: [SciPy-dev] St9bad_alloc In-Reply-To: <43748975@web.de> References: <43748975@web.de> Message-ID: <467AACC5.20706@gmail.com> Maik Tr?mel wrote: > I tried it out at another system and I get the same error. > Probably there is memory leak in the delaunay-package. Or does anybody else have an idea what the problem could be? It is quite likely the delaunay module's fault. There are 3 or 4 memory allocation systems operating in that module: malloc (I think), new/delete, some custom memory pools in the actual Delaunay triangulation library, and Python's. I wasn't terribly familiar with C++ when I wrote the wrappers, so it seems likely that I screwed something up there particularly given the error message that you see. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pav at iki.fi Fri Jun 22 08:00:32 2007 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 22 Jun 2007 15:00:32 +0300 Subject: [SciPy-dev] St9bad_alloc Message-ID: <1182513632.7316.10.camel@localhost.localdomain> Maik Tr?mel wrote: > I've got some problems with Scipy/sandbox/delaunay. > Everytime I run my script the following error occures: > terminate called after throwing an instance of 'std::bad_alloc' > what(): St9bad_alloc > > Does anybody know what this means? And what I can do to avoid this > error? > > I have postet in several forums, but nowbody could help me. The delaunay module as in SVN does have a couple of bugs. I've filed bug tickets for these, but probably people aren't aware of them: http://projects.scipy.org/scipy/scipy/ticket/382 Memory leak, patch included http://projects.scipy.org/scipy/scipy/ticket/376 Algorithm failure + crash on special types of data. Partial workaround + test cases included The latter problem is more difficult, since the errors are apparently related to the robustness of the algorithm itself. 
The patches I included for #376 plug one crasher bug, but the error manifests instead like this (which is better than segfaulting): Traceback (most recent call last): File "tests/test_triangulate.py", line 105, in test_ticket_376_2 tri = dlny.Triangulation(data[:,0], data[:,1]) File "/home/pauli/tmp/scipy/Lib/sandbox/delaunay/build/lib.linux-x86_64-2.4/delaunay/triangulate.py", line 83, in __init__ self.hull = self._compute_convex_hull() File "/home/pauli/tmp/scipy/Lib/sandbox/delaunay/build/lib.linux-x86_64-2.4/delaunay/triangulate.py", line 118, in _compute_convex_hull hull.append(edges.pop(hull[-1])) KeyError: 0 Most of the time these errors can be fixed by rounding off a few least significant digits of the input data; like this xscale, yscale = x.ptp(), y.ptp() Triangulation(around(x/xscale, 13)*xscale, around(y/yscale, 13)*yscale) A "bad" dataset inducing this type of bug is included in the testcases. Really fixing this problem probably would need good understanding of the algorithm, though. Nonetheless, with these fixes, I've been able to hobble along using delaunay, without further problems. -- Pauli Virtanen From openopt at ukr.net Fri Jun 22 15:41:54 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 22 Jun 2007 22:41:54 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <467C2602.3010501@ukr.net> hi all, here's the one: http://openopt.blogspot.com/ weekly report briefly: some info about my linearisation algorithm that I implement is available in Pshenichniy's book available (in russian) at http://www.box.net/shared/3yv54mg9xt suggestions about COIN-OR IPOPT and other solvers suggestions about ALGENCAN nlp solver suggestions about QP class (I have started to implement the one) Regards, Dmitrey From david at ar.media.kyoto-u.ac.jp Sat Jun 23 06:35:44 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 23 Jun 2007 19:35:44 +0900 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? Message-ID: <467CF780.4060205@ar.media.kyoto-u.ac.jp> Hi, Following a discussion with my mentor (Jarrod Millman) and Fernando Perez, we decided to move most of the code useful for pymachine from the scipy.sandbox to scikits. Before doing so, we would like to gather other's opinion. Why ? As part of my SoC project, I was planning to move some of the existing scipy.sandbox code out of the sandbox (e.g., scipy.sandbox.pyem -> scikits.machinelearning.em). Now we are proposing to move that code into a scikits package instead because: 1. this would give all the machine learning related code a unified home under one common namespace 2. some of the code relies on ctypes, which is not allowed in scipy 3. some of the code uses matplotlib, which is not allowed in scipy 4. the code hasn't been as widely used as other parts of scipy 5. this would allow the code to be developed and released on its own timeschedule independent of scipy What ? The concerned packages are: - scipy.sandbox.pyem - scipy.sandbox.svm - and possibly others (e.g., scipy.sandbox.ga, scipy.sandbox.ann) And (just to be clear) scipy.cluster would remain where it is. Potential problems? The obvious consequence is that you will now need to install the new scikits' package to use the code currently available in the scipy.sandbox. As you need to install sandboxed code from the source anyway, this shouldn't be too big of a problem. Also the namespace changes, but that would be true if the code was moved out of the sandbox as originally planned. Better name for pymachine ? 
We have been calling the project pymachine. But we would rather use a more descriptive name for the scikit package (and one that doesn't contain 'py'). Here are some ideas: - scikits.learning - scikits.learn - scikits.machinelearning - scikits.mlearn What do you think of these names? Does anyone have better name in mind? If anyone has concerns with the above plan, please speak up! Cheers, D. Cournapeau, J. Millman, B. Hawthorne, and F. Perez From openopt at ukr.net Sat Jun 23 07:17:11 2007 From: openopt at ukr.net (dmitrey) Date: Sat, 23 Jun 2007 14:17:11 +0300 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: <467CF780.4060205@ar.media.kyoto-u.ac.jp> References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> Message-ID: <467D0137.9080208@ukr.net> David Cournapeau wrote: > Hi, > > Following a discussion with my mentor (Jarrod Millman) and Fernando > Perez, we decided to move most of the code useful for pymachine from > the scipy.sandbox to scikits. Before doing so, we would like to > gather other's opinion. > > Why ? > > As part of my SoC project, I was planning to move some of the existing > scipy.sandbox code out of the sandbox (e.g., scipy.sandbox.pyem -> > scikits.machinelearning.em). Now we are proposing to move that code into a > scikits package instead because: > 1. this would give all the machine learning related code a unified > home under one common namespace > 2. some of the code relies on ctypes, which is not allowed in scipy > 3. some of the code uses matplotlib, which is not allowed in scipy > 4. the code hasn't been as widely used as other parts of scipy > 5. this would allow the code to be developed and released on its own > timeschedule independent of scipy > > What ? > > The concerned packages are: > - scipy.sandbox.pyem > - scipy.sandbox.svm > - and possibly others (e.g., scipy.sandbox.ga, scipy.sandbox.ann) > And (just to be clear) scipy.cluster would remain where it is. > > Potential problems? > > The obvious consequence is that you will now need to install the new > scikits' package to use the code currently available in the > scipy.sandbox. As you need to install sandboxed code from the source > anyway, this shouldn't be too big of a problem. > > Also the namespace changes, but that would be true if the code was > moved out of the sandbox as originally planned. > > Better name for pymachine ? > > We have been calling the project pymachine. But we would rather use a > more descriptive name for the scikit package (and one that doesn't > contain 'py'). Here are some ideas: > - scikits.learning > - scikits.learn > - scikits.machinelearning > - scikits.mlearn > What do you think of these names? Does anyone have better name in mind? > it's too long to type each time "from scikits.machinelearning import (...)" maybe, machlearn or maclearn or mach_learn or ml? As for me, I would implement 2 ways of calling each scikits module - one long and unique, other small, but also unique. for example, 1st: from scikits.openopt import (...) 2nd: from scikits.oo import (...) (for those who types it dozens times per day, for example module developers itself) However, I don't know how to do the trick via setuptools, as well as supply my py-files installed with their compiled versions (setup.py compiles my py-files but (I don't know why) they are absent in destination directory) and howto pass "-j 2" option to 'make' routine when I call 'python setup.py install' (or maybe parallel compiling options should be passed somehow else?) 
Thank you in advance for your suggestions, D. > If anyone has concerns with the above plan, please speak up! > > Cheers, > > D. Cournapeau, J. Millman, B. Hawthorne, and F. Perez > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From matthieu.brucher at gmail.com Sat Jun 23 07:21:52 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 23 Jun 2007 13:21:52 +0200 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: <467CF780.4060205@ar.media.kyoto-u.ac.jp> References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> Message-ID: Hi, We have been calling the project pymachine. But we would rather use a > more descriptive name for the scikit package (and one that doesn't > contain 'py'). Here are some ideas: > - scikits.learning > - scikits.learn > - scikits.machinelearning > - scikits.mlearn > What do you think of these names? Does anyone have better name in mind? > machinelearning is my favourite, but I would think of a more global hierarchy inside this namespace. svm and em do not have the same final goal, so perhaps adding classification or estimation as sub-sub-namespace would be worth considering. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Sat Jun 23 07:25:13 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 23 Jun 2007 13:25:13 +0200 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: <467D0137.9080208@ukr.net> References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> <467D0137.9080208@ukr.net> Message-ID: > > it's too long to type each time "from scikits.machinelearning import > (...)" > maybe, machlearn or maclearn or mach_learn or ml? Too long perhaps, but muwh more understandable. One would just have to make before "import scikits.machinelearning as ml" and it would be much more structured. As for me, I would implement 2 ways of calling each scikits module - one > long and unique, other small, but also unique. > for example, > 1st: from scikits.openopt import (...) > 2nd: from scikits.oo import (...) (for those who types it dozens times > per day, for example module developers itself) Like as said before, it is possible ;) However, I don't know how to do the trick via setuptools, as well as > supply my py-files installed with their compiled versions (setup.py > compiles my py-files but (I don't know why) they are absent in > destination directory) and howto pass "-j 2" option to 'make' routine > when I call 'python setup.py install' (or maybe parallel compiling > options should be passed somehow else?) "python setup.py install" calls make ? I don't think so. This should be proposed to the setuptools ML ;) If the .py files aren't in the egg, add them explicitely. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Mon Jun 25 05:07:05 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 25 Jun 2007 02:07:05 -0700 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> Message-ID: On 6/23/07, Matthieu Brucher wrote: > > We have been calling the project pymachine. But we would rather use a > > more descriptive name for the scikit package (and one that doesn't > > contain 'py'). 
Here are some ideas: > > - scikits.learning > > - scikits.learn > > - scikits.machinelearning > > - scikits.mlearn > > What do you think of these names? Does anyone have better name in mind? > > machinelearning is my favourite, but I would think of a more global > hierarchy inside this namespace. svm and em do not have the same final goal, > so perhaps adding classification or estimation as sub-sub-namespace would be > worth considering. > > Matthieu Hey Matthieu, I agree with you, scikits.machinelearning is my favorite as well. I understand Dmitrey's concern about it being such a long name, but I think that it is much more important for the package name to be obvious as to what it does. Hopefully, having a well-named package will make it more obvious what the very terse names like svm or em mean given that they are found inside a machinelearning package. I also want to make sure that a good precedent is started regarding the naming of scikits packages. I also like your suggestion to use something like "import scikits.machinelearning as ml". It might be good to even have a recommendation like this in the package docstring. That way we could encourage the adoption of ml (for scikits.machinelearning) as a consistent convention. I also agree that we may need to create a nested hierarchy. But I would prefer to keep a flat namespace at least for the next few weeks. That way we can make the hierarchy after seeing what code ends up in the package. In addition to the code David is working on, there are a few other developers who have tentatively offered to contribute some working code that they have written. But we should definitely return to this point before making an official release. Unless there are additional responses or concerns, I will ask David to go ahead with the plan starting Tuesday. Specifically, he will create a new scikits package called machinelearning and start moving code out of the scipy sandbox. First, he will move the support vector machine and expectation-maximization code: scipy.sandbox.pyem --> scikits.machinelearning.em scipy.sandbox.svm --> scikits.machinelearning.svm And then later he may move the genetic algorithm and neural network code: scipy.sandbox.ga --> scikits.machinelearning.ga scipy.sandbox.ann --> scikits.machinelearning.ann Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From openopt at ukr.net Mon Jun 25 05:25:12 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 25 Jun 2007 12:25:12 +0300 Subject: [SciPy-dev] equivalent to online Octave calculator? Message-ID: <467F89F8.4070202@ukr.net> Hi all, has numpy/scipy project something equivalent to online Octave calculator? http://www.online-utility.org/math/math_calculator.jsp If no, I guess it would be very useful for checking numpy/scipy bugs - are those related to user's old scipy/numpy version or they are due to build options and/or lack of some libraries (atlas, blas, lapack etc; or due to their obsolete versions installed). Also, I think it would be very useful and convenient if there will be radiobutton provided which version to use: either latest release or nightly build. (or maybe some scipy releases + some numpy releases, including nightly builds). Also, scikits might be connected in future to the frame; and some limits could be implemented (for example, no more than 60 sec cputime per day from single IP) So, what are your suggestions? Regards, Dmitrey. 
From david at ar.media.kyoto-u.ac.jp Mon Jun 25 05:19:48 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 25 Jun 2007 18:19:48 +0900 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> Message-ID: <467F88B4.4070102@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > On 6/23/07, Matthieu Brucher wrote: >>> We have been calling the project pymachine. But we would rather use a >>> more descriptive name for the scikit package (and one that doesn't >>> contain 'py'). Here are some ideas: >>> - scikits.learning >>> - scikits.learn >>> - scikits.machinelearning >>> - scikits.mlearn >>> What do you think of these names? Does anyone have better name in mind? >> machinelearning is my favourite, but I would think of a more global >> hierarchy inside this namespace. svm and em do not have the same final goal, >> so perhaps adding classification or estimation as sub-sub-namespace would be >> worth considering. >> >> Matthieu > > Hey Matthieu, > > I agree with you, scikits.machinelearning is my favorite as well. I > understand Dmitrey's concern about it being such a long name, but I > think that it is much more important for the package name to be > obvious as to what it does. Hopefully, having a well-named package > will make it more obvious what the very terse names like svm or em > mean given that they are found inside a machinelearning package. I > also want to make sure that a good precedent is started regarding the > naming of scikits packages. > > I also like your suggestion to use something like "import > scikits.machinelearning as ml". It might be good to even have a > recommendation like this in the package docstring. That way we could > encourage the adoption of ml (for scikits.machinelearning) as a > consistent convention. > > I also agree that we may need to create a nested hierarchy. But I > would prefer to keep a flat namespace at least for the next few weeks. > That way we can make the hierarchy after seeing what code ends up in > the package. In addition to the code David is working on, there are a > few other developers who have tentatively offered to contribute some > working code that they have written. But we should definitely return > to this point before making an official release. I agree on avoiding a flat namespace, but I disagree on doing it as Matthieu suggested: where does classification starts, where does clustering ends, where does pdf estimation goes in between ? You can use EM or SVM to do similar things (discriminative classification, clustering). For example, I have almost ready examples to do clustering, pdf estimation and discrimative learning: the actual implementation is the same, EM. The usage is different. I prefer to keep the "implementation concept" and the "usage concept" separate at the namespace level. That is I agree that having a classification or clustering namespace is useulf, but not to separate svm or em. I may miss your argument, though ? David From peter.skomoroch at gmail.com Mon Jun 25 08:08:44 2007 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Mon, 25 Jun 2007 08:08:44 -0400 Subject: [SciPy-dev] equivalent to online Octave calculator? In-Reply-To: <467F89F8.4070202@ukr.net> References: <467F89F8.4070202@ukr.net> Message-ID: Dmitrey, Would something like SAGE be what you are looking for? 
http://www.sagemath.org/ -Pete On 6/25/07, dmitrey wrote: > > Hi all, > has numpy/scipy project something equivalent to online Octave calculator? > http://www.online-utility.org/math/math_calculator.jsp > > If no, I guess it would be very useful for checking numpy/scipy bugs - > are those related to user's old scipy/numpy version or they are due to > build options and/or lack of some libraries (atlas, blas, lapack etc; or > due to their obsolete versions installed). > > Also, I think it would be very useful and convenient if there will be > radiobutton provided which version to use: either latest release or > nightly build. > (or maybe some scipy releases + some numpy releases, including nightly > builds). > > Also, scikits might be connected in future to the frame; and some limits > could be implemented (for example, no more than 60 sec cputime per day > from single IP) > > So, what are your suggestions? > > Regards, Dmitrey. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Mon Jun 25 08:41:07 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 25 Jun 2007 15:41:07 +0300 Subject: [SciPy-dev] equivalent to online Octave calculator? In-Reply-To: References: <467F89F8.4070202@ukr.net> Message-ID: <467FB7E3.1030107@ukr.net> No. The urls you provide (in top of web page) requires registering. Versions of numpy & scipy are unknown and of course nightly builds are absent. As for the SAGE project itself, I think there too many info about rational numbers, rings, polynomials in documentation and too small about most common funcs. Maybe it is what *William Stein* is keen on, but I guess ordinary users first of all need ordinary float-point calculations. I had spent several days trying to learn SAGE but anyway now I think it would better to rely on scipy.org own online tool than other project one. - D. Peter Skomoroch wrote: > Dmitrey, > > Would something like SAGE be what you are looking for? > > http://www.sagemath.org/ > > -Pete > > On 6/25/07, * dmitrey* > wrote: > > Hi all, > has numpy/scipy project something equivalent to online Octave > calculator? > http://www.online-utility.org/math/math_calculator.jsp > > If no, I guess it would be very useful for checking numpy/scipy > bugs - > are those related to user's old scipy/numpy version or they are due to > build options and/or lack of some libraries (atlas, blas, lapack > etc; or > due to their obsolete versions installed). > > Also, I think it would be very useful and convenient if there will be > radiobutton provided which version to use: either latest release or > nightly build. > (or maybe some scipy releases + some numpy releases, including nightly > builds). > > Also, scikits might be connected in future to the frame; and some > limits > could be implemented (for example, no more than 60 sec cputime per day > from single IP) > > So, what are your suggestions? > > Regards, Dmitrey. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > -- > Peter N. 
Skomoroch > peter.skomoroch at gmail.com > http://www.datawrangling.com From Andreas.Floeter at web.de Tue Jun 26 01:38:10 2007 From: Andreas.Floeter at web.de (Andreas =?iso-8859-1?q?Fl=F6ter?=) Date: Tue, 26 Jun 2007 07:38:10 +0200 Subject: [SciPy-dev] Question regarding the configuration of MoinMoin Message-ID: <200706260738.10651.Andreas.Floeter@web.de> Hello, Since I have asked to people directly maybe somebody on the mailing list might help me. I have seen the http://www.scipy.org/Cookbook moinmoin based wiki and I find it very appealing. Since I tried to set up a moinmoin I found some topics not so easy to resolve. Yet setting up moinmoin is a straight forward task but getting some "extra" things to work seems to me not so easy, e.g. - the style applied to the Cookbook looks like the modern style yet it seems to be different; I like the Cookbook style better. Is there information available about the structure of the pages, macros, page templates, CSS, etc. which are used for it? - the navigation menu is very attractive. I found that one can get lost very easily by a default wiki. Are templates used for Cookbook? How is the menu generated? Is there an automatism to generate menu information from categories and/or title information of pages? - what kind of "extras" is Cookbook using, e.g. macros, actions, templates, etc. and for which purpose? - what kind of authorisation model is applied? Anonymous users are not allowed to edit the pages. I am looking for a similar scheme where only certain user who are logged in can modify the pages. You see that I have a couple of questions. If you could provide me with some iinformation how Cookbook is working and setup I would very much appreciate your help. Regards, Andreas From kamrik at gmail.com Tue Jun 26 04:52:25 2007 From: kamrik at gmail.com (Mark Koudritsky) Date: Tue, 26 Jun 2007 11:52:25 +0300 Subject: [SciPy-dev] Question regarding the configuration of MoinMoin In-Reply-To: <200706260738.10651.Andreas.Floeter@web.de> References: <200706260738.10651.Andreas.Floeter@web.de> Message-ID: The Cookbook is not a separate wiki, it's part of the SciPy wiki http://www.scipy.org/SciPy The external appearance is due to the Sinorca4Moin theme http://moinmoin.wikiwikiweb.de/ThemeMarket#head-7b3ef0dfc3a812f857ed59d1efd9b988792cd589 The access control is governed by acl_rights_before and acl_rights_default settings in wikiconfig.py See http://master.moinmo.in/HelpOnAccessControlLists You are welcome to write me directly if you feel the topic is not appropriate for the entire list. Regards. On 6/26/07, Andreas Fl?ter wrote: > Hello, > > Since I have asked to people directly maybe somebody on the mailing list might > help me. > > I have seen the http://www.scipy.org/Cookbook moinmoin based wiki and I find > it very appealing. Since I tried to set up a moinmoin I found some topics not > so easy to resolve. Yet setting up moinmoin is a straight forward task but > getting some "extra" things to work seems to me not so easy, e.g. > > - the style applied to the Cookbook looks like the modern style yet it seems > to be different; I like the Cookbook style better. Is there information > available about the structure of the pages, macros, page templates, > CSS, etc. which are used for it? > > - the navigation menu is very attractive. I found that one can get lost very > easily by a default wiki. Are templates used for Cookbook? How is the > menu generated? Is there an automatism to generate menu information > from categories and/or title information of pages? 
> > - what kind of "extras" is Cookbook using, e.g. macros, actions, templates, > etc. and for which purpose? > > - what kind of authorisation model is applied? Anonymous users are not > allowed to edit the pages. I am looking for a similar scheme where only > certain users who are logged in can modify the pages. > > You see that I have a couple of questions. If you could provide me with some > information on how Cookbook is working and set up, I would very much appreciate > your help. > > Regards, > Andreas > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From strawman at astraw.com Tue Jun 26 18:24:20 2007 From: strawman at astraw.com (Andrew Straw) Date: Tue, 26 Jun 2007 15:24:20 -0700 Subject: [SciPy-dev] getting the Delaunay package into scipy (out of the sandbox) Message-ID: <46819214.3030109@astraw.com> (An ongoing thread, "How to Enable Delaunay Package" in scipy-user prompts this email.) I believe there was a recent call to push the Delaunay package into main scipy (although I'm having trouble finding the thread). I totally agree with this request, and the package has performed great for my modest needs on i386 and amd64 architectures for some time now. There are apparently a couple of bugs open, but both have patches http://projects.scipy.org/scipy/scipy/ticket/382 http://projects.scipy.org/scipy/scipy/ticket/376 Are there other considerations that need to be taken into account? I can attempt to find time to provide the necessary elbow grease to move it over if there are no fatal objections. -Andrew From edschofield at gmail.com Wed Jun 27 15:55:43 2007 From: edschofield at gmail.com (Ed Schofield) Date: Wed, 27 Jun 2007 20:55:43 +0100 Subject: [SciPy-dev] Timetable for 0.5.3 release? In-Reply-To: <465B3799.6070507@ieee.org> References: <20070527233849.GA5182@arbutus.physics.mcmaster.ca> <465A1A1A.2050605@ar.media.kyoto-u.ac.jp> <465B3799.6070507@ieee.org> Message-ID: <1b5a37350706271255v220c9386x2a651c53dfe9518c@mail.gmail.com> On 5/28/07, Travis Oliphant wrote: > David Cournapeau wrote: > >> Perhaps we should move to a timetable-based release schedule for scipy? > >> Every three months, release the current svn version. Also making a > >> release after a numpy release is a good idea, so that the current > >> versions work with each other. > >> > > As the scipy community seems to be growing, and as Travis wanted to have > > a release manager for scipy, what about adopting a scheme similar to > > bzr, which seems to work fine for them: having a different release > > manager for each release ? Not that this is against having a timetable, > > > > Hear, hear. The releases are slow in coming only because it seems to be > entirely relying on my finding time for them. > > I would like to see more code move from the sandbox into the scipy > namespace. I am currently working on the interpolation module to > enhance the number of ways in which you can do interpolation using > B-splines (the basic functionality is in fitpack, but sometimes I can't > make sense of what it is doing with the knot points --- there is a lot > less flexibility than there could be). > > I would also like to see the netcdf library in scipy.io be able to write > netcdf files. Currently it can only read them. I was hoping to be > able to do this before the release, but I've run out of time. I've been hiding under a rock for a while, but I'd be happy to volunteer to do a 0.5.3 release whenever everybody's ready.
The current SciPy release doesn't build against the current NumPy, which I suppose we should fix ;) *But* NumPy 1.0.3 still has the setup.py bug (fixed by Pearu in r3848) that prevents SciPy from compiling. So we really need a new NumPy release first. Is this on the horizon? -- Ed From brian.lee.hawthorne at gmail.com Thu Jun 28 02:54:40 2007 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Wed, 27 Jun 2007 23:54:40 -0700 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> Message-ID: <796269930706272354q6d3ee74fuc24605914eed249b@mail.gmail.com> Hi, I'm going to throw in my contrary 2 cents and say that I prefer learning or learn. The reasons are: - they are shorter, and therefore easier to type - they are shorter, and thus produce a smaller desire to abbreviate with aliases (import x as y). I think that's good because it reduces the number of names/conventions that must be defined/remembered as referring to the same thing. - they are real words (unlike machlearn), and therefore easier to remember and type - adding the word machine seems redundant (I do realize that "machine learning" is the proper name for the field), since it is software we are talking about... I guess the only confusion that could arise is that a newcomer might think it was a package for educating humans about scikits. Anyway, that's all I have to say on the subject. Cheers, Brian On 6/25/07, Jarrod Millman wrote: > > On 6/23/07, Matthieu Brucher wrote: > > > We have been calling the project pymachine. But we would rather use a > > > more descriptive name for the scikit package (and one that doesn't > > > contain 'py'). Here are some ideas: > > > - scikits.learning > > > - scikits.learn > > > - scikits.machinelearning > > > - scikits.mlearn > > > What do you think of these names? Does anyone have better name in > mind? > > > > machinelearning is my favourite, but I would think of a more global > > hierarchy inside this namespace. svm and em do not have the same final > goal, > > so perhaps adding classification or estimation as sub-sub-namespace > would be > > worth considering. > > > > Matthieu > > Hey Matthieu, > > I agree with you, scikits.machinelearning is my favorite as well. I > understand Dmitrey's concern about it being such a long name, but I > think that it is much more important for the package name to be > obvious as to what it does. Hopefully, having a well-named package > will make it more obvious what the very terse names like svm or em > mean given that they are found inside a machinelearning package. I > also want to make sure that a good precedent is started regarding the > naming of scikits packages. > > I also like your suggestion to use something like "import > scikits.machinelearning as ml". It might be good to even have a > recommendation like this in the package docstring. That way we could > encourage the adoption of ml (for scikits.machinelearning) as a > consistent convention. > > I also agree that we may need to create a nested hierarchy. But I > would prefer to keep a flat namespace at least for the next few weeks. > That way we can make the hierarchy after seeing what code ends up in > the package. In addition to the code David is working on, there are a > few other developers who have tentatively offered to contribute some > working code that they have written. But we should definitely return > to this point before making an official release. 
> > Unless there are additional responses or concerns, I will ask David to > go ahead with the plan starting Tuesday. Specifically, he will create > a new scikits package called machinelearning and start moving code out > of the scipy sandbox. First, he will move the support vector machine > and expectation-maximization code: > scipy.sandbox.pyem --> scikits.machinelearning.em > scipy.sandbox.svm --> scikits.machinelearning.svm > And then later he may move the genetic algorithm and neural network code: > scipy.sandbox.ga --> scikits.machinelearning.ga > scipy.sandbox.ann --> scikits.machinelearning.ann > > Thanks, > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Jun 28 03:01:14 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 28 Jun 2007 01:01:14 -0600 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: <796269930706272354q6d3ee74fuc24605914eed249b@mail.gmail.com> References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> <796269930706272354q6d3ee74fuc24605914eed249b@mail.gmail.com> Message-ID: On 6/28/07, Brian Hawthorne wrote: > Hi, I'm going to throw in my contrary 2 cents and say that I prefer learning > or learn. The reasons are: I suck at names, but FWIW, I'm +1 on 'learn'. This is all being typed into a computer, so perhaps we can all agree that the 'machine' part is implied :) Cheers, f From openopt at ukr.net Thu Jun 28 04:06:00 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 28 Jun 2007 11:06:00 +0300 Subject: [SciPy-dev] question about scipy.optimize.line_search Message-ID: <46836BE8.6080501@ukr.net> help(line_search) yields -------------------------------------------------------------------- line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=0.0001, c2=0.90000000000000002, amax=50) Find alpha that satisfies strong Wolfe conditions. Uses the line search algorithm to enforce strong Wolfe conditions Wright and Nocedal, 'Numerical Optimization', 1999, pg. 59-60 For the zoom phase it uses an algorithm by Outputs: (alpha0, gc, fc) -------------------------------------------------------------------- So I need to know what the other args are, especially gfk (is it the gradient at the point xk?), old_fval, old_old_fval (I guess I know what c1 & c2 mean) Thank you in advance, D. From openopt at ukr.net Thu Jun 28 10:59:52 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 28 Jun 2007 17:59:52 +0300 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page Message-ID: <4683CCE8.1040309@ukr.net> How do I find exp(x)? I can't find it in the NumPy_for_Matlab_Users page http://www.scipy.org/NumPy_for_Matlab_Users Thx, D. From matthieu.brucher at gmail.com Thu Jun 28 11:02:47 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 28 Jun 2007 17:02:47 +0200 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: <4683CCE8.1040309@ukr.net> References: <4683CCE8.1040309@ukr.net> Message-ID: Hi, numpy.exp for an element-wise exponentiation, and a**x for a^x
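For instance (a quick interactive check; the values in the comments are rounded):

>>> import numpy
>>> x = numpy.array([0., 1., 2.])
>>> numpy.exp(x)   # element-wise e**x, roughly [1., 2.718, 7.389]
>>> 2.0 ** x       # element-wise a**x, gives [1., 2., 4.]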
Matthieu 2007/6/28, dmitrey : > > How do I find exp(x)? > I can't find it in the NumPy_for_Matlab_Users page > http://www.scipy.org/NumPy_for_Matlab_Users > > Thx, D. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Thu Jun 28 11:13:14 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 28 Jun 2007 11:13:14 -0400 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: References: <4683CCE8.1040309@ukr.net> Message-ID: On 28/06/07, Matthieu Brucher wrote: > numpy.exp for an element-wise exponentiation, and a**x for a^x To expand on this: if M is a matrix (not array), then M**3 will compute the cube of the matrix (fairly efficiently). But this will not handle fractional exponents (I'm not totally sure those are well-defined anyway) and I don't know if there is a matrix exponential in numpy. Anne From robert.kern at gmail.com Thu Jun 28 11:24:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 28 Jun 2007 10:24:23 -0500 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: References: <4683CCE8.1040309@ukr.net> Message-ID: <4683D2A7.3080102@gmail.com> Anne Archibald wrote: > On 28/06/07, Matthieu Brucher wrote: > >> numpy.exp for an element-wise exponentiation, and a**x for a^x > > To expand on this: if M is a matrix (not array), then M**3 will > compute the cube of the matrix (fairly efficiently). But this will not > handle fractional exponents (I'm not totally sure those are > well-defined anyway) and I don't know if there is a matrix exponential > in numpy. scipy.linalg has expm, expm2, and expm3 which do the matrix exponential by Padé approximation, eigenvalues, and Taylor series, respectively. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From openopt at ukr.net Thu Jun 28 11:28:57 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 28 Jun 2007 18:28:57 +0300 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: <4683D2A7.3080102@gmail.com> References: <4683CCE8.1040309@ukr.net> <4683D2A7.3080102@gmail.com> Message-ID: <4683D3B9.8050202@ukr.net> I think it's better to remove all mentions about scipy.linalg from website and documentation and denote to numpy.linalg also, I think that things like norm, expm, etc should be in numpy, not (or not only) in numpy.linalg //just my 2 cents D Robert Kern wrote: > scipy.linalg has expm, expm2, and expm3 which do the matrix exponential by Padé > approximation, eigenvalues, and Taylor series, respectively. > > From steve at shrogers.com Thu Jun 28 11:29:51 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Thu, 28 Jun 2007 09:29:51 -0600 (MDT) Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> <796269930706272354q6d3ee74fuc24605914eed249b@mail.gmail.com> Message-ID: <3335.192.55.12.36.1183044591.squirrel@mail2.webfaction.com> On Thu, June 28, 2007 01:01, Fernando Perez wrote: > On 6/28/07, Brian Hawthorne wrote: >> Hi, I'm going to throw in my contrary 2 cents and say that I prefer >> learning >> or learn.
The reasons are: > > I suck at names, but FWIW, I'm +1 on 'learn'. This is all being typed > into a computer, so perhaps we can all agree that the 'machine' part > is implied :) > +1 for "learn", it's concise and clear. From peridot.faceted at gmail.com Thu Jun 28 11:33:51 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 28 Jun 2007 11:33:51 -0400 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: <4683D2A7.3080102@gmail.com> References: <4683CCE8.1040309@ukr.net> <4683D2A7.3080102@gmail.com> Message-ID: On 28/06/07, Robert Kern wrote: > Anne Archibald wrote: > > On 28/06/07, Matthieu Brucher wrote: > > > >> numpy.exp for an element-wise exponentiation, and a**x for a^x > > > > To expand on this: if M is a matrix (not array), then M**3 will > > compute the cube of the matrix (fairly efficiently). But this will not > > handle fractional exponents (I'm not totally sure those are > > well-defined anyway) and I don't know if there is a matrix exponential > > in numpy. > > scipy.linalg has expm, expm2, and expm3 which do the matrix exponential by Padé > approximation, eigenvalues, and Taylor series, respectively. It might be worth mentioning in the docstrings that they do not give the same answer: In [18]: scipy.linalg.expm(matrix([[1,1],[0,1]])) Out[18]: array([[ 2.71828183, 2.71828183], [ 0. , 2.71828183]]) In [19]: scipy.linalg.expm2(matrix([[1,1],[0,1]])) Out[19]: array([[ 2.71828183, 0. ], [ 0. , 2.71828183]]) In [20]: scipy.linalg.expm3(matrix([[1,1],[0,1]])) Out[20]: array([[ 2.71828183, 2.71828183], [ 0. , 2.71828183]]) In particular, expm2 silently gives the wrong answer if the matrix is not diagonalizable. In [22]: scipy.__version__ Out[22]: '0.5.2' Anne From robert.kern at gmail.com Thu Jun 28 11:58:04 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 28 Jun 2007 10:58:04 -0500 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: <4683D3B9.8050202@ukr.net> References: <4683CCE8.1040309@ukr.net> <4683D2A7.3080102@gmail.com> <4683D3B9.8050202@ukr.net> Message-ID: <4683DA8C.8000101@gmail.com> dmitrey wrote: > I think it's better to remove all mentions about scipy.linalg from > website and documentation and denote to numpy.linalg ??? I don't even understand what you're proposing here. But I suspect my response would be "no." > also, I think that things like norm, expm, etc should be in numpy, not > (or not only) in numpy.linalg > > No.
As much as possible, we want to keep numpy to the core business of providing > an array datatype. We wouldn't even have numpy.linalg or numpy.fft if we didn't > have to provide backwards compatibility for Numeric and numarray, which included > such capabilities. Can we mark numpy.fft and numpy.linalg as deprecated, then? There are regularly posts from people confused about whether to use numpy.linalg or scipy.linalg and unclear on the difference. I know I was confused about it on occasion. Or, put another way, how long are we going to have this awkward backwards-compatibility cruft? Thanks, Anne M. Archibald From ggellner at uoguelph.ca Thu Jun 28 13:25:59 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Thu, 28 Jun 2007 13:25:59 -0400 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: References: <4683CCE8.1040309@ukr.net> <4683D2A7.3080102@gmail.com> <4683D3B9.8050202@ukr.net> <4683DA8C.8000101@gmail.com> Message-ID: <20070628172559.GA25965@giton> This would be good to know . . . I have used numpy.linalg in all of my new code instead of scipy.linalg . . . I thought this was the preferred way . . . a deprecation warning would be appreciated. Gabriel On Thu, Jun 28, 2007 at 12:07:43PM -0400, Anne Archibald wrote: > On 28/06/07, Robert Kern wrote: > > > > also, I think that things like norm, expm, etc should be in numpy, not > > > (or not only) in numpy.linalg > > > > No. As much as possible, we want to keep numpy to the core business of providing > > an array datatype. We wouldn't even have numpy.linalg or numpy.fft if we didn't > > have to provide backwards compatibility for Numeric and numarray, which included > > such capabilities. > > Can we mark numpy.fft and numpy.linalg as deprecated, then? There are > regularly posts from people confused about whether to use numpy.linalg > or scipy.linalg and unclear on the difference. I know I was confused > about it on occasion. > > Or, put another way, how long are we going to have this awkward > backwards-compatibility cruft? > > Thanks, > Anne M. Archibald > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From aisaac at american.edu Thu Jun 28 13:37:21 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 28 Jun 2007 13:37:21 -0400 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: References: <4683CCE8.1040309@ukr.net><4683D2A7.3080102@gmail.com> <4683D3B9.8050202@ukr.net><4683DA8C.8000101@gmail.com> Message-ID: On Thu, 28 Jun 2007, Anne Archibald apparently wrote: > Can we mark numpy.fft and numpy.linalg as deprecated, > then? I hope not. I can and do ask my students to install numpy. I cannot yet ask most of them to install SciPy. Cheers, Alan Isaac From millman at berkeley.edu Thu Jun 28 13:34:38 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 28 Jun 2007 10:34:38 -0700 Subject: [SciPy-dev] [pymachine] moving code outside the sandbox into scikits ? In-Reply-To: <3335.192.55.12.36.1183044591.squirrel@mail2.webfaction.com> References: <467CF780.4060205@ar.media.kyoto-u.ac.jp> <796269930706272354q6d3ee74fuc24605914eed249b@mail.gmail.com> <3335.192.55.12.36.1183044591.squirrel@mail2.webfaction.com> Message-ID: +1, I am happy with 'learn' as well. 
Jarrod From robert.kern at gmail.com Thu Jun 28 13:38:03 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 28 Jun 2007 12:38:03 -0500 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: <20070628172559.GA25965@giton> References: <4683CCE8.1040309@ukr.net> <4683D2A7.3080102@gmail.com> <4683D3B9.8050202@ukr.net> <4683DA8C.8000101@gmail.com> <20070628172559.GA25965@giton> Message-ID: <4683F1FB.4050205@gmail.com> Gabriel Gellner wrote: > This would be good to know . . . I have used numpy.linalg in all of my > new code instead of scipy.linalg . . . I thought this was the preferred > way . . . a deprecation warning would be appreciated. numpy.linalg is not deprecated, yet. It won't be expanded (at least until the developer team gets entirely replaced), but it won't be going anywhere, yet. It *might* be removed in 1.1, but no planning on that has happened, yet. You have to make the decision yourself based on your needs which module you use. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From openopt at ukr.net Thu Jun 28 14:22:29 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 28 Jun 2007 21:22:29 +0300 Subject: [SciPy-dev] I can't find howto get exp(x) or a^x in NumPy_for_Matlab_Users page In-Reply-To: References: <4683CCE8.1040309@ukr.net><4683D2A7.3080102@gmail.com> <4683D3B9.8050202@ukr.net><4683DA8C.8000101@gmail.com> Message-ID: <4683FC65.5060800@ukr.net> So do I, I don't want to make my package dependent on scipy only due to scipy.linalg.norm (of course, if numpy.linalg will be removed, I will use sqrt(x**2)) D Alan G Isaac wrote: > On Thu, 28 Jun 2007, Anne Archibald apparently wrote: > >> Can we mark numpy.fft and numpy.linalg as deprecated, >> then? >> > > I hope not. I can and do ask my students to install numpy. > I cannot yet ask most of them to install SciPy. > > Cheers, > Alan Isaac > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From nwagner at iam.uni-stuttgart.de Fri Jun 29 06:56:45 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 29 Jun 2007 12:56:45 +0200 Subject: [SciPy-dev] Status of sandbox.pysparse Message-ID: <4684E56D.6020909@iam.uni-stuttgart.de> Hi all, Is pysparse still maintained ? I am mainly interested in the Jacobi-Davidson (JDSYM) eigensolver . I have enabled the sandbox package. However it fails if I try to import the package. Python 2.4.1 (#1, May 25 2007, 18:41:31) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy.sandbox import pysparse Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.4/site-packages/scipy/sandbox/pysparse/__init__.py", line 4, in ? from spmatrix import * ImportError: No module named spmatrix Nils From guyer at nist.gov Fri Jun 29 08:45:01 2007 From: guyer at nist.gov (Jonathan Guyer) Date: Fri, 29 Jun 2007 08:45:01 -0400 Subject: [SciPy-dev] Status of sandbox.pysparse In-Reply-To: <4684E56D.6020909@iam.uni-stuttgart.de> References: <4684E56D.6020909@iam.uni-stuttgart.de> Message-ID: <717A6307-8FEE-4926-9EC5-C20ACF4F1D8C@nist.gov> On Jun 29, 2007, at 6:56 AM, Nils Wagner wrote: > Is pysparse still maintained ? 
> I am mainly interested in the Jacobi-Davidson (JDSYM) eigensolver . > I have enabled the sandbox package. I don't know anything about what's in the sandbox, but pysparse is still maintained: http://pysparse.sourceforge.net From aisaac at american.edu Fri Jun 29 09:06:50 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 29 Jun 2007 09:06:50 -0400 Subject: [SciPy-dev] [SciPy-user] [scikits] Updated generic optimizer (and egg download link) In-Reply-To: References: Message-ID: On Fri, 29 Jun 2007, Matthieu Brucher apparently wrote: > You're right, it's not an official scikit. I don't think > that someone in charge said something about it when > I asked if it should be a scikit or not, only Michael > McNeil Forbes. You were exposing much less of the project back then, if I recall. Anyway, I certainly did not realize how much stuff you had already coded. At this point, it should definitely be a SciKit, IMO. If there are no objections, you should add it to the repository. Make sure to use the project structure discussed here: https://projects.scipy.org/scipy/scikits/ If you do not yet have SVN commit rights, you will have to ask for them. I think these requests often go to this list, but it would be nice to clarify if there is an established procedure for requesting this. Cheers, Alan Isaac From cgalvan at enthought.com Fri Jun 29 11:32:36 2007 From: cgalvan at enthought.com (Christopher Galvan) Date: Fri, 29 Jun 2007 10:32:36 -0500 Subject: [SciPy-dev] Scipy weave error with mingw-3.4.5 Message-ID: <46852614.4010802@enthought.com> Hello, I was trying to run an example I found of compiling C code into a python module. I have attached the .py file I ran. The resulting error was given when I was using MinGW-3.4.5, but the process ran fine on MinGW-3.2.3. Is there some kind of compatibility issue that I am missing? Any help would be greatly appreciated. 
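For reference, the attached script boils down to something like the following (the array setup here is just a placeholder to keep the sketch self-contained; the weave.blitz call is the one from the traceback below):

import numpy
from scipy import weave

# placeholder arrays; the real exercise8.py sets up its own data
a = numpy.ones((10, 10))
b = numpy.ones((10, 10))
c = numpy.ones((10, 10))
d = numpy.ones((10, 10))
result = numpy.empty_like(a)

# this call generates and compiles a small C++ extension with g++,
# and the link step is where the build fails
weave.blitz("result = a+b*(c-d)")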
C:\eric_class\exercises\exercise8>python exercise8.py c:\docume~1\chrisg~1\locals~1\temp\Chris Galvan\python25_intermediate\compiler_8 94ad5ed761bb51736c6d2b7872dc212\Release\python25\lib\site-packages\scipy-0.5.3.d ev2400-py2.5-win32.egg\scipy\weave\scxx\weave_imp.o:weave_imp.cpp:(.text+0xac5): undefined reference to `std::string::_Rep::_S_empty_rep_storage' c:\docume~1\chrisg~1\locals~1\temp\Chris Galvan\python25_intermediate\compiler_8 94ad5ed761bb51736c6d2b7872dc212\Release\python25\lib\site-packages\scipy-0.5.3.d ev2400-py2.5-win32.egg\scipy\weave\scxx\weave_imp.o:weave_imp.cpp:(.text+0xb94): undefined reference to `std::string::_Rep::_S_empty_rep_storage' c:\docume~1\chrisg~1\locals~1\temp\Chris Galvan\python25_intermediate\compiler_8 94ad5ed761bb51736c6d2b7872dc212\Release\python25\lib\site-packages\scipy-0.5.3.d ev2400-py2.5-win32.egg\scipy\weave\scxx\weave_imp.o:weave_imp.cpp:(.text+0xbdc): undefined reference to `__gnu_cxx::__exchange_and_add(int volatile*, int)' c:\docume~1\chrisg~1\locals~1\temp\Chris Galvan\python25_intermediate\compiler_8 94ad5ed761bb51736c6d2b7872dc212\Release\python25\lib\site-packages\scipy-0.5.3.d ev2400-py2.5-win32.egg\scipy\weave\scxx\weave_imp.o:weave_imp.cpp:(.text+0xc17): undefined reference to `__gnu_cxx::__exchange_and_add(int volatile*, int)' collect2: ld returned 1 exit status Traceback (most recent call last): File "exercise8.py", line 21, in weave.blitz("result = a+b*(c-d)") File "c:\python25\lib\site-packages\scipy-0.5.3.dev2400-py2.5-win32.egg\scipy\ weave\blitz_tools.py", line 63, in blitz **kw) File "c:\python25\lib\site-packages\scipy-0.5.3.dev2400-py2.5-win32.egg\scipy\ weave\inline_tools.py", line 447, in compile_function verbose=verbose, **kw) File "c:\python25\lib\site-packages\scipy-0.5.3.dev2400-py2.5-win32.egg\scipy\ weave\ext_tools.py", line 365, in compile verbose = verbose, **kw) File "c:\python25\lib\site-packages\scipy-0.5.3.dev2400-py2.5-win32.egg\scipy\ weave\build_tools.py", line 269, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "c:\python25\lib\site-packages\numpy-1.0.2.dev3484-py2.5-win32.egg\numpy\ distutils\core.py", line 174, in setup return old_setup(**new_attr) File "C:\Python25\lib\distutils\core.py", line 168, in setup raise SystemExit, "error: " + str(msg) distutils.errors.CompileError: error: Command "g++ -mno-cygwin -shared "c:\docum e~1\chrisg~1\locals~1\temp\Chris Galvan\python25_intermediate\compiler_894ad5ed7 61bb51736c6d2b7872dc212\Release\docume~1\chrisg~1\locals~1\temp\chris galvan\pyt hon25_compiled\sc_421d46af877a66479b224b3cb3ed5b051.o" "c:\docume~1\chrisg~1\loc als~1\temp\Chris Galvan\python25_intermediate\compiler_894ad5ed761bb51736c6d2b78 72dc212\Release\python25\lib\site-packages\scipy-0.5.3.dev2400-py2.5-win32.egg\s cipy\weave\scxx\weave_imp.o" -Lc:\python25\libs -Lc:\python25\PCBuild -lpython25 -lmsvcr71 -o "c:\docume~1\chrisg~1\locals~1\temp\Chris Galvan\python25_compiled \sc_421d46af877a66479b224b3cb3ed5b051.pyd"" failed with exit status 1 -- Chris Galvan -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: exercise8.py URL: From edschofield at gmail.com Fri Jun 29 12:05:37 2007 From: edschofield at gmail.com (Ed Schofield) Date: Fri, 29 Jun 2007 17:05:37 +0100 Subject: [SciPy-dev] Status of sandbox.pysparse In-Reply-To: <4684E56D.6020909@iam.uni-stuttgart.de> References: <4684E56D.6020909@iam.uni-stuttgart.de> Message-ID: <1b5a37350706290905g2c9ec00fi260d2d7fc2b52f1a@mail.gmail.com> On 6/29/07, Nils Wagner wrote: > Hi all, > > Is pysparse still maintained ? > I am mainly interested in the Jacobi-Davidson (JDSYM) eigensolver . > I have enabled the sandbox package. > > However it fails if I try to import the package. No, the sandbox package is long dead. It was a snapshot of pysparse that I took long ago and hacked to use 'scipy core' arrays when we were considering how to extend the functionality of scipy.sparse. I think we should remove it. I doubt anyone is using it, but I'll ask on scipy-user anyway and, if nobody objects, I'll delete it from the tree next week. -- Ed From openopt at ukr.net Sat Jun 30 08:31:42 2007 From: openopt at ukr.net (dmitrey) Date: Sat, 30 Jun 2007 15:31:42 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <46864D2E.7010504@ukr.net> Hi all, see http://openopt.blogspot.com/ for my reports. Briefly: 1. QP class created (the only connected QP solver is cvxopt_qp; I wrote a cvxopt_mosek QP binding but I didn't test it because of problems with the mosek binary libs) see from scikits.openopt import QP help(QP) for more details. 2. General constrained NLP solver "lincher" (LINearisation solver from CHERkassy town) has been written. However, it requires a QP solver, and the cvxopt QP solver is GPL-licensed. In the future I guess I can add a native OO QP solver (+ bonus: handling of QC - Quadratic Constraints, but positive-definite only, otherwise the problem could be NP-hard); it will be able to handle problems with nVars up to 1000, including ill-conditioned ones and other difficulties. However, it requires consulting with some people from my department, and they are very busy for now with their own work. Also, lincher requires a line-search solver; currently I use fminbound from scipy, but I intend to remove the dependence later. see from scikits.openopt import NLP help(NLP) for more details, or see the notes at http://openopt.blogspot.com/ Regards, D.
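P.S. A rough usage sketch for the QP class (the problem data below is made up, and the exact keyword names and solve() call may differ a bit from what I show; check help(QP) for the real argument list):

from numpy import diag, array
from scikits.openopt import QP

# toy problem: minimize 0.5*x'Hx + f'x subject to Ax <= b (made-up numbers)
H = diag([1.0, 2.0, 3.0])
f = array([-1.0, -2.0, -3.0])
A = array([[1.0, 1.0, 1.0]])
b = array([2.5])

p = QP(H, f, A=A, b=b)      # keyword names here are illustrative, see help(QP)
r = p.solve('cvxopt_qp')    # cvxopt_qp is the only connected QP solver so far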