From peterson at math.utwente.nl  Sun Feb  3 14:43:44 2002
From: peterson at math.utwente.nl (P.Peterson)
Date: Sun, 3 Feb 2002 20:43:44 +0100 (CET)
Subject: [SciPy-dev] ode - generic interface to ODE solvers
Message-ID: 

Hi!

I have committed a new module, ode, to the integrate package. It provides a generic interface to various numerical integrators of ODE systems. Currently it supports the vode routine.

Here is a stiff sample problem solved using ode:

from scipy.integrate import ode

def f(t,y):
    ydot0 = -0.04*y[0] + 1e4*y[1]*y[2]
    ydot2 = 3e7*y[1]*y[1]
    ydot1 = -ydot0-ydot2
    return [ydot0,ydot1,ydot2]

def jac(t,y):
    jc = [[-0.04, 1e4*y[2]          , 1e4*y[1]],
          [0.04 , -1e4*y[2]-6e7*y[1], -1e4*y[1]],
          [0.0  , 6e7*y[1]          , 0.0]]
    return jc

r = ode(f,jac).set_integrator('vode',
                              rtol=1e-4,
                              atol=[1e-8,1e-14,1e-6],
                              method='bdf',
                              )
r.set_initial_value([1,0,0])
print 'At t=%s y=%s'%(r.t,r.y)
tout = 0.4
for i in range(12):
    r.integrate(tout)
    print 'At t=%s y=%s'%(r.t,r.y)
    tout *= 10

The output is:

At t=0.0 y=[ 1. 0. 0.]
At t=0.4 y=[ 9.85172114e-01 3.38639538e-05 1.47940224e-02]
At t=4.0 y=[ 9.05518679e-01 2.24047569e-05 9.44589164e-02]
At t=40.0 y=[ 7.15827066e-01 9.18553466e-06 2.84163748e-01]
At t=400.0 y=[ 4.50518664e-01 3.22290138e-06 5.49478113e-01]
At t=4000.0 y=[ 1.83202258e-01 8.94237125e-07 8.16796848e-01]
At t=40000.0 y=[ 3.89833604e-02 1.62176759e-07 9.61016477e-01]
At t=400000.0 y=[ 4.93828907e-03 1.98499999e-08 9.95061691e-01]
At t=4000000.0 y=[ 5.16818452e-04 2.06832993e-09 9.99483179e-01]
At t=40000000.0 y=[ 5.20295745e-05 2.08128996e-10 9.99947970e-01]
At t=400000000.0 y=[ 5.19687803e-06 2.07876187e-11 9.99994803e-01]
At t=4000000000.0 y=[ 5.20558016e-07 2.08223292e-12 9.99999479e-01]
At t=40000000000.0 y=[ 4.22779340e-08 1.69111761e-13 9.99999958e-01]

Another, more realistic and useful example is in ode.py (see the function test1()).

You may ask why I wrote this module when the integrate package already provides the odeint function for the same task. The main reason is that numerical integration of ODE systems is a complex task that can be very sensitive to the problem at hand. For example, completely different methods are efficient for stiff and non-stiff problems, respectively. There are many algorithms available that are very efficient for certain problems while performing very poorly when applied to the wrong ones. Therefore, it is crucial that one can easily switch algorithms between problems and test which ones are most efficient. One may even need to change the algorithm during the integration (see the sketch at the end of this message). The ode module tackles all these issues efficiently while remaining rather user-friendly.

Currently the only supported routine, vode, is a Fortran program. There is also a C version of vode available (called cvode) that includes many improvements to vode, including a Krylov solver (the former vodpk). So, it is the next routine on my task list to support.

Comments are welcome as always.
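A minimal sketch of switching algorithms mid-run (assuming the problem has turned non-stiff; 'adams' is vode's non-stiff method, and the names follow the example above):

r.set_integrator('vode', rtol=1e-4, method='adams')
r.integrate(tout)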
Regards,
	Pearu

From a.schmolck at gmx.net  Sun Feb  3 18:55:59 2002
From: a.schmolck at gmx.net (A.Schmolck)
Date: 03 Feb 2002 23:55:59 +0000
Subject: [SciPy-dev] improvement suggestions for scipy.io (with patch)
Message-ID: 

Hi,

I often find it useful to be able to read in raw data where one of the dimensions is implicitly inferred, so I made a modification to fread that allows this in the same way as the reshape function does -- by allowing one to specify -1 for the unknown dimension, which is then deduced from the remaining bytes in the file (or from the overall size of the array, in reshape's case). Here is an example:

>>> import scipy.io
>>> from Numeric import *
>>> a = reshape(arange(6.), (2,3))
>>> f = scipy.io.fopen('test.raw', 'wb')
>>> f.write(a)
>>> f.close()
>>>
>>> f = scipy.io.fopen('test.raw')
>>> print f.read((2,3), 'd')
[[ 0. 1. 2.]
 [ 3. 4. 5.]]
>>> f.close()
>>>
>>> f = scipy.io.fopen('test.raw')
>>> print f.read((-1,3), 'd')
[[ 0. 1. 2.]
 [ 3. 4. 5.]]
>>> f.close()

This is handy, e.g., when one has files that contain, say, raw image data of a fixed height and width but with a varying number of images per file.

While having a quick look around, I also noticed a few other points:

- unicode filenames are not accepted (changed that)

- data isn't read/written in binary by default. I made changes to enforce binary, since this would otherwise lead to potentially nasty results under windoze -- I realized a bit too late that the newest cvs version already adds checks for windoze (there is one omission), but it would seem to me that there is no harm in _always_ opening files in binary mode; after all, they are supposed to be binary.

- the method argument keywords differ from their respective file counterparts (e.g. 'permissions' instead of 'mode', 'file_name' instead of 'name', 'count' instead of 'size') -- if there is no particular reason for that, it might be worth changing.

- maybe this has been fixed, but fread segfaults my python if supplied with illegal arguments (e.g. negative values).

alex

-------------- next part --------------
A non-text attachment was scrubbed...
Name: mio.py.PATCH
Type: text/x-patch
Size: 3925 bytes
Desc: patch for mio.py
URL: 
-------------- next part --------------
-- 
Alexander Schmolck
Postgraduate Research Student
Department of Computer Science
University of Exeter
A.Schmolck at gmx.net
http://www.dcs.ex.ac.uk/people/aschmolc/

From pearu at cens.ioc.ee  Mon Feb  4 18:37:58 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 5 Feb 2002 01:37:58 +0200 (EET)
Subject: Problem solved. Re: [SciPy-dev] fblas1
In-Reply-To: 
Message-ID: 

Hi!

On Tue, 22 Jan 2002, Pearu Peterson wrote:

> I also have troubles with cblas1 from atlas 3.3.13. Just too many functions from atlas crash when I am trying to wrap them.

This is a solved problem now. (Wrapping C functions required proper prototypes: e.g. float and double arguments differ in memory layout, which caused these segfaults when lazy prototypes were used. When wrapping Fortran functions this was not an issue because float* and double* are the same.)

To have this fix in effect, you'll need f2py version >= 2.11.174-1157, however.

Regards,
	Pearu

From loredo at astrosun.astro.cornell.edu  Wed Feb  6 16:00:04 2002
From: loredo at astrosun.astro.cornell.edu (Tom Loredo)
Date: Wed, 6 Feb 2002 16:00:04 -0500 (EST)
Subject: [SciPy-dev] Problem building under Solaris
Message-ID: <200202062100.g16L04M07862@laplace.astro.cornell.edu>

Hi folks-

I recently installed Python 2.2 and am trying to install SciPy with it.
The machine is a Sun Ultra10 running Solaris (SunOS 5.7). The build is crashing when building shared libraries. The problem is that "gcc" is called with a "-lf90" option. We have a Fortran 95 compiler (Sun WorkShop 6 update 1), but it doesn't come with a libf90 (it has a few separate libraries). In any case, if I just copy the "gcc -shared..." line that setup.py spills out before it quits, and delete -lf90 and the (null) variables, it appears to link fine. I cannot figure out where "-lf90" gets added to the link command, and I don't want to keep building each one by hand as the setup script crashes (I've done three so far---I don't know how many there would be). I suspect this is a distutils problem, but I thought I'd ask here first for a fix. I've built a lot of Python extensions and never had this problem before. But I've never built anything requiring F90/95 compatibility.

Thanks,
Tom Loredo

From pearu at cens.ioc.ee  Thu Feb  7 05:08:43 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Thu, 7 Feb 2002 12:08:43 +0200 (EET)
Subject: [SciPy-dev] Problem building under Solaris
In-Reply-To: <200202062100.g16L04M07862@laplace.astro.cornell.edu>
Message-ID: 

On Wed, 6 Feb 2002, Tom Loredo wrote:

> I recently installed Python 2.2 and am trying to install SciPy with it. The machine is a Sun Ultra10 running Solaris (SunOS 5.7). The build is crashing when building shared libraries. The problem is that "gcc" is called with a "-lf90" option. We have a Fortran 95 compiler (Sun WorkShop 6 update 1), but it doesn't come with a libf90 (it has a few separate libraries). In any case, if I just copy the "gcc -shared..." line that setup.py spills out before it quits, and delete -lf90 and the (null) variables, it appears to link fine. I cannot figure out where "-lf90" gets added to the link command, and I don't want to keep building each one by hand as the setup script crashes (I've done three so far---I don't know how many there would be). I suspect this is a distutils problem, but I thought I'd ask here first for a fix. I've built a lot of Python extensions and never had this problem before. But I've never built anything requiring F90/95 compatibility.

The "-lf90" gets added in scipy_distutils/command/build_flib.py:

    class sun_fortran_compiler

The corresponding

    ver_match = r'f77: (?P<version>[^\s*,]*)'

seems to catch almost any compiler as a sun_fortran_compiler, and I think that this can too often happen wrongly. Tom, you may want to fix sun_fortran_compiler for your compiler and send a patch. Otherwise, you always have the choice of using the gcc fortran compiler as follows:

    ./setup.py build_flib --fcompiler=g77 build

Currently, scipy contains no F90/F95 programs at all.

Regards,
	Pearu

From travis at scipy.org  Fri Feb  8 09:26:33 2002
From: travis at scipy.org (Travis N. Vaught)
Date: Fri, 8 Feb 2002 08:26:33 -0600
Subject: Problem solved. Re: [SciPy-dev] fblas1
In-Reply-To: 
Message-ID: 

Pearu,

Excellent! Thanks for fixing this.

Travis (for Eric)

> -----Original Message-----
> From: scipy-dev-admin at scipy.org [mailto:scipy-dev-admin at scipy.org]On
> Behalf Of Pearu Peterson
> Sent: Monday, February 04, 2002 5:38 PM
> To: scipy-dev at scipy.org
> Subject: Problem solved. Re: [SciPy-dev] fblas1
>
> Hi!
>
> On Tue, 22 Jan 2002, Pearu Peterson wrote:
>
> > I also have troubles with cblas1 from atlas 3.3.13. Just too many functions from atlas crash when I am trying to wrap them.
>
> This is a solved problem now. (Wrapping C functions required proper prototypes:
> e.g. float and double arguments differ in memory layout, which caused these segfaults when lazy prototypes were used. When wrapping Fortran functions this was not an issue because float* and double* are the same.)
>
> To have this fix in effect, you'll need f2py version >= 2.11.174-1157, however.
>
> Regards,
> 	Pearu
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From pnmiller at pacbell.net  Fri Feb  8 14:38:00 2002
From: pnmiller at pacbell.net (Pat Miller)
Date: Fri, 08 Feb 2002 11:38:00 -0800
Subject: [SciPy-dev] Thoughts on weave improvements
Message-ID: <3C642918.1010907@pacbell.net>

I was talking with Eric a lot this week at Python 10 and we had some thoughts on further improvements to the weave module.

One thing that I want to add is to blend in some of the bytecode compiler work I started under PyCOD (compile on demand) that builds C++ accelerated extension functions from Python:

def foo(a,b):
    return a+b+10

handle, fastFoo = pycod.compile(foo, [IntType,IntType])

This would write and wrap a C++ function of the form

long foo(long a, long b) {
    return a + b + 10;
}

and an interface function

PyObject* fooWrap(...) {
    ....
}

The compiler in PyCOD was set up sort of like weave.inline to cache precompiled results (though Eric's scheme is much more robust). As you can see, this had a somewhat different flavor than inline. Whereas weave.inline inlined expressions, pyCOD accelerated functions (given input types). We did it because we had Python callbacks in C++ code we wanted accelerated.

I think that these techniques are important to bring to scipy or else many parts of it are only toys.

def f(x):
    <some hairy, expensive python code>

print integrate.quad(f, 0, 1)

It might be better if the integration routine, using its knowledge that the argument x must be a PyFloat (C++ double), could use a C++ accelerated function instead of slow callbacks to Python. Not important for a quick numeric integration, but crucial for using Python as a true C/FORTRAN replacement.

Pat

From pnmiller at pacbell.net  Fri Feb  8 14:38:02 2002
From: pnmiller at pacbell.net (Pat Miller)
Date: Fri, 08 Feb 2002 11:38:02 -0800
Subject: [SciPy-dev] How to get extensions to affect local state
Message-ID: <3C64291A.1010707@pacbell.net>

Another topic that came up at Python 10: Eric wants weave.inline to affect the local state, such that

sum = 0
n = 10
print 'before', sum
weave.inline("sum = 0; for(i=0;i<n;i++) sum += i;")

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: test.py
URL: 

From pnmiller at pacbell.net  Sun Feb 10 04:51:46 2002
From: pnmiller at pacbell.net (Pat Miller)
Date: Sun, 10 Feb 2002 01:51:46 -0800
Subject: [SciPy-dev] Thoughts on weave improvements
References: <3C642918.1010907@pacbell.net> <15461.63698.247536.257478@monster.linux.in>
Message-ID: <3C6642B2.3080601@pacbell.net>

Prabhu Ramachandran wrote:

>>>>>> "PM" == Pat Miller writes:
> PM> bytecode compiler work I started under PyCOD (compile on
> PM> demand) that builds C++ accelerated extension functions from
> PM> Python
> All this sounds very promising. I just hope it becomes part of weave

Guess what....
It works (some restrictions and caveats of course :-)

Here's how I have it set up:

import weaver            # This is my prototype
import weave.ext_tools   # These are Eric's
import weave.build_tools
from types import *      # Will need this in 2.1, but in 2.2 things change a bit

# Get one of those nice weave module builders
E = weave.ext_tools.ext_module('foobar')

# Define some handy python function
def f(x):
    "doc strings get propagated"
    return x*x

# Here's where we define the accelerated function.
# Note that you need to give the input signature
# of the function.
E.add_function( weaver.Python2CXX( f, [FloatType] ) )

# You can do it for multiple signatures, but you
# need to specify a different name to disambiguate.
E.add_function( weaver.Python2CXX( f, [IntType], name = 'f_int' ) )

# OK... So accelerating that wasn't very impressive...
# You can do a lot of calculating stuff though....
from math import sin, cos, pi, sqrt

def trigThing(x):
    "doc"
    # Local temporaries work
    xx = x*x
    # Some math functions work
    sinx = sin(x)      # Math functions of one arg are available
    # global variables are "read-only-frozen"
    PI2 = 2.0*pi
    cosx = cos(x+PI2)  # Note the use of the global pi above... Its
                       # value gets frozen at compile time.
                       # Not a problem for pi, but worrisome
                       # for other globals.
    # 'if' works most of the time (integer test or boolean expr only!)
    if cosx < 0.0:
        cosx = abs(cosx)   # In my prototype, only abs( float ) is on
    else:
        # Print works using the real Python sys.stdout, so weird
        # windows things will work as output goes where it is supposed to
        print "It was non-negative!", cosx
    return sinx*sinx + cosx*cosx

E.add_function( weaver.Python2CXX( trigThing, [FloatType] ) )

# Of course multiple input arguments work...
def dist(x0,y0,z0, x1,y1,z1):
    deltax = x1-x0
    deltay = y1-y0
    deltaz = z1-z0
    return sqrt(deltax*deltax + deltay*deltay + deltaz*deltaz)

E.add_function( weaver.Python2CXX( dist,
    [FloatType,FloatType,FloatType,FloatType,FloatType,FloatType] ) )

# Now the magic...
E.generate_file()
weave.build_tools.build_extension('foobar.cpp')

import foobar

# This prints 100.0
print foobar.f(10.0)

OK... It isn't everything, but it can make a lot of simple stuff run about 2 to 5 times faster (real speed mostly eaten up with function call overhead for these really small functions).

Here are things that don't work...

* you can't use import
* you can't use loops of any kind
* no support for exceptions at this time (though doable!)
* no mixed mode arithmetic (x + 1 where x is a float dies!)
* only the integer, float, and string types are involved
* no string operations except print and return
* specifically, no python formatting ( ">>> %s <<<"%x is bad )
* not all operators are there (mostly laziness in the prototype)
  ( +, -, *, /, <, <=, ==, !=, >, >= only for now )
* only single-valued functions for the moment
* I don't use Eric's cool weave code caching scheme
* no index access
* etc...

Still, not too shabby for a day and a half of development time. I'll be patching up holes and writing some more extensive tests for this. There are several main enhancements that will get this kicking butt...

0) Add global update (push back to python environment)

1) Add Numeric (I have an extensible interface for users adding new types)

2) Add __getitem__ and __setitem__ so that you could do something like...

   def f(a,b):
       n = len(a)
       m = len(b)
       for i in range(n):
           for j in range(m):
               a[i] = b[j]
       return

   (i.e. write normal F77 style scalarized code using numpy arrays)

3) Allow the functions you write to be 'inlined' automagically, e.g.

   def f(x):
       some code

   def g(x):
       .... + f(x) + ....
   so that when you define f with [FloatType] as its signature, you can use it immediately as part of g when you compile g.

4) Build an interface to the struct module so that you can build and use real C structs.

My goal here, in case it isn't obvious, is to allow users to write modules completely in Python, debug and develop in a great language, and then recover the speed through compilation. This has a side benefit in that Jython and PalmPilot-thon and Embedded-processor-thon can all use the Python variant of your code, and if you run on a "real" machine you get the speed.

Hope this is useful stuff. If you want the code, drop me a line and I'll pass it along. I'm hoping that Eric will let me integrate this directly with weave. I think it is good enough to use out of the box (since it doesn't barf at every opportunity), but it will take some shaking out to get it to really work and this can only happen with users! Let me know who you are...

Pat

From prabhu at aero.iitm.ernet.in  Sat Feb  9 23:36:34 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Sun, 10 Feb 2002 10:06:34 +0530
Subject: [SciPy-dev] Thoughts on weave improvements
In-Reply-To: <3C642918.1010907@pacbell.net>
References: <3C642918.1010907@pacbell.net>
Message-ID: <15461.63698.247536.257478@monster.linux.in>

>>>>> "PM" == Pat Miller writes:

PM> I was talking with Eric a lot this week at Python 10 and we had some thoughts on further improvements to the weave module.

PM> One thing that I want to add is to blend in some of the bytecode compiler work I started under PyCOD (compile on demand) that builds C++ accelerated extension functions from Python

[snip]

PM> It might be better if the integration routine, using its knowledge that the argument x must be a PyFloat (C++ double), could use a C++ accelerated function instead of slow callbacks to Python. Not important for a quick numeric integration, but crucial for using Python as a true C/FORTRAN replacement.

All this sounds very promising. I just hope it becomes part of weave sometime.

prabhu

From eric at scipy.org  Mon Feb 11 11:34:04 2002
From: eric at scipy.org (eric)
Date: Mon, 11 Feb 2002 11:34:04 -0500
Subject: [SciPy-dev] Thoughts on weave improvements
Message-ID: <01d001c1b319$eb826470$777ba8c0@ericlaptop>

Hey Pat,

This is a very rough outline of what I thought of on the airplane rides home. The "sketch" of a solution below has holes all over, but it is a start at merging pycod and weave.

The basic concept is to create a new class or type that wraps a function. When a wrapper object is called from Python, it will try to compile a new version of the function for the given types, using pycod and weave. Instead of passing in the types (IntType, etc.), the types are determined from the calling argument types. weave's catalog will be used to keep up with all the available compiled functions. The C pointer will also be kept around. f2py and friends could be made to recognize this type of wrapper object and ask it for the C function pointer instead of having to make python callbacks.

We should work this all out in Python first and then move it to a C type later for speed. And, here are my notes from 30000 feet...

A code fragment ready for compilation:

foo_code = """
def foo(a,b):
    return a + b
"""

Generating a compiled function, foo:

foo = weave.cod(foo_code)

foo is now a callable function_object that handles all the specialization issues. I think this is how Psyco does its thing also -- perhaps in a fancier way.
For now, it'll be a Python class with dictionaries to handle argument type issues. This is way slow, but it'll work as a proof of concept.

When foo is called with unknown types, it attempts to specialize and compile this function for the call types. The function is compiled to both a pure C function and a Python function that wraps it. The pure C function is never called directly by Python code, but it is useful to keep around to pass into other C/Fortran functions that need to call the function (like map or many Fortran optimization methods). For a and b as integers, the C code in the generated extension module would look something like:

int c_foo(int a, int b)
{
    return a + b;
}

PyObject* foo(PyObject* self, PyObject* args)
{
    int result;
    int exception_occurred = 0;
    PyObject* result_val = NULL;
    try {
        Py::Tuple t_args(args);
        int a = py_to_int(t_args[0]);
        int b = py_to_int(t_args[1]);
        // here is the function call to the real C function.
        result = c_foo(a,b);
        result_val = Py::new_reference_to(Py::Int(result));
    }
    catch(...) {
        result_val = Py::Null();
        exception_occurred = 1;
    }
    if (result_val == NULL && !exception_occurred) {
        result_val = Py_None;
        Py_XINCREF(result_val);
    }
    return result_val;
}

The function_object will just add the extension function to a list of available Python functions. Whenever the function is called as a Python function, it'll try each function in turn with the given arguments. If the function call fails with a ConversionError, the next function in the list is called. If we get to the end of the list without it working, then we compile a new function for the specified types and stick it on the front of the list. We also put the ptr for the underlying C function in the C function dictionary.

func_obj = weave.cod(code)

class function_object:
    def __init__(self, code):
        # not sure we need this, but I'll keep it for now.
        self.name = get_name(code)
        # store the code so we can compile it for various types.
        # we might store the bytecode also, but I doubt that is really useful.
        self.code = code
        # cataloging of C functions
        # key = type signature, value = actual function pointer
        self.c_funcs = {}
        # cataloging of Python functions
        # need to compile the function and add it as last option to call.
        self.py_funcs = []
        self.py_cached = None

    def compile(self, *args):
        """ Compile a specialized version of the code for the given
            argument types. This should be persisted also. Need to look
            at weave to see how we can make its functionality into a
            class -- maybe catalog is already well suited for this.
        """
        # This is Pat Miller's function -- it needs to return:
        #
        # name: the function name (maybe not completely necessary)
        #
        # type_info: some variable that tells the types of all the
        #     arguments for the function. This could be as simple as a
        #     tuple of strings, or, later, an array of integers that
        #     specify types. This would be better for fast handling
        #     in C code.
        #
        # wrapper_code: The code of the wrapper function, i.e. the
        #     extension function that returns a PyObject*. It is really
        #     only in charge of type conversions and calls the c_code
        #     function to do all the work.
        #
        # c_code: This is the heart of the function. It holds the C
        #     version of the byte code for the given types.
        name, type_info, wrapper_code, c_code = translate_to_c(self.code, *args)

        # This is slightly different from weave.inline because it returns
        # a Python extension function, and c_func, a pointer.
        wrapper_func, c_func = weave.compile_wrap(name, wrapper_code, c_code)

        # now catalog both the C and the Python functions for later use.
        self.py_catalog(wrapper_func)
        self.c_catalog(type_info, c_func)

    def py_catalog(self, func):
        """ Add a function to the python function list.
            This doesn't associate functions with type information. This
            is for speed right now. Discovering types and then looking
            them up in a dictionary would be slow. In C, it might be a
            lot faster, so storing type information might be much more
            useful there.
        """
        self.py_funcs.insert(0, func)
        # cache for fast calling.
        self.py_cached = func

    def c_catalog(self, type_info, func):
        """ Add a C function pointer to the function list.
            This is gonna be looked up in C if it is used (by map or
            something else) and then called many times. In C, the
            type_info can be fast to discover, and it absolutely has to
            be, so that a C func with the correct signature is called.
        """
        self.c_funcs[type_info] = func

    def get_c_ptr(self, type_info):
        """ Grab the c_ptr for the given type signature.
            I guess if we ask for this and it doesn't exist, we should
            build it by calling compile(). Do this later. This will be
            grabbed inside C wrapper functions and passed to a function
            like map or something like an optimization function.
        """
        return self.c_funcs.get(type_info, None)

    def call_from_list(self, args):
        """ Call each function in the list one after another until one
            with the correct signature is found. If all the functions
            fail, then throw a conversion error.
        """
        success = 0
        for func in self.py_funcs:
            try:
                result = apply(func, args)
                success = 1
                break
            except ConversionError:
                pass
        if not success:
            raise ConversionError
        return result

    def __call__(self, *args):
        """ Call the function.
            Try the cached extension function first. If it fails because
            it has the incorrect types, try calling functions from the
            list of available extension functions. If all of these fail,
            compile a new version of the function based on the current
            types, cache the resulting functions, and then call it. If
            that fails, we're out of options and we call the Python code
            directly. If this throws an exception, the user will get it.
        """
        try:
            # Try calling the cached (last used) function.
            result = apply(self.py_cached, args)
        except ConversionError:
            # The cached function failed because it didn't have the
            # correct argument types. Now walk through all the functions.
            try:
                result = self.call_from_list(args)
            except ConversionError:
                try:
                    # We walked through all the compiled functions, and
                    # they all failed. Now try to compile a new one for
                    # the current types.
                    self.compile(*args)
                    result = apply(self.py_cached, args)
                except:
                    # If the compilation failed or the function call
                    # failed for any reason, punt. Try executing the
                    # actual Python code as a final resort.
                    result = apply(self.python_version, args)
        return result

The above methodology is pretty slow for cache misses, but I think it could be sped up quite a bit by moving it to C and keeping track of types using some fast hash with type_info (in some byte format, not strings) as the key, and functions as the values.

How would weave have to change to support this? Well, I think we need to encapsulate some of the work that is in inline in a class so we get a little reuse. For the time being, this'll cost us an extra Python function call, but it is worth it for the design process. Later we'll move to C and get rid of that expensive call. Jeremy commented that C function calls were noticeably expensive also, but I think we'll have to live with this. (Hmmm. Maybe the code is already pretty well structured. catalog and ext_tools might fit together quite well to handle both code generation and cataloging. Need to look.)
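To make the intended behavior concrete, here is a hypothetical session with the sketch above (weave.cod and the ConversionError-driven dispatch are proposed names from these notes, not an existing API):

foo = weave.cod("""
def foo(a,b):
    return a + b
""")
print foo(1, 2)    # cache miss: compiles and catalogs an int specialization
print foo(1., 2.)  # another miss: compiles a float specialization
print foo(3, 4)    # hit: the cached int extension function is called directly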
The biggest change is that we need to put the C code that actually does all the work within a separate C function and then call it from the wrapper function instead of inserting the code directly within the wrapper. This isn't really that hard, but it does require another "code template" to be added to each of the type conversion classes. It'll also require some change to the "function template". We should work to make this stuff used as similarly as possible across all the code, but I think that C functions for inline are gonna require pass by reference so that variables changed in the function are also changed in the wrapper, and standard extension functions are gonna require pass by value. This is probably pretty easy to deal with.

Tasks:

1. Add machinery for returning changed variables from C to Python through the frameobject. (From the list, it looks like Pat has already looked at this. :)

2. Convert the generation code so that it puts C code in a separate function. (Perhaps this should be optional so that people can save the call if they want to?)

see ya,
eric

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

From prabhu at aero.iitm.ernet.in  Mon Feb 11 12:46:40 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Mon, 11 Feb 2002 23:16:40 +0530
Subject: [SciPy-dev] Thoughts on weave improvements
In-Reply-To: <3C6642B2.3080601@pacbell.net>
References: <3C642918.1010907@pacbell.net> <15461.63698.247536.257478@monster.linux.in> <3C6642B2.3080601@pacbell.net>
Message-ID: <15464.896.840249.232832@monster.linux.in>

Hi,

>>>>> "PM" == Pat Miller writes:

PM> Prabhu Ramachandran wrote:
>> All this sounds very promising. I just hope it becomes part of weave

PM> Guess what.... It works (some restrictions and caveats of course :-)

Wow!

PM> # Get one of those nice weave module builders
PM> E = weave.ext_tools.ext_module('foobar')

PM> # Define some handy python function
PM> def f(x):
PM>     "doc strings get propagated"
PM>     return x*x

PM> # Here's where we define the accelerated function. Note that you need to give the input signature of the function.
PM> E.add_function( weaver.Python2CXX( f, [FloatType] ) )

I get the idea.

[snip]

PM> OK... It isn't everything, but it can make a lot of simple stuff run about 2 to 5 times faster (real speed mostly eaten up with function call overhead for these really small functions).

Yes, this is very cool!

PM> Here are things that don't work...
[snip]
PM> Still, not too shabby for a day and a half of development time.

That's really impressive indeed!

PM> I'll be patching up holes and writing some more extensive tests for this. There are several main enhancements that will get this kicking butt...
[snip]
PM> 4) Build an interface to the struct module so that you can build and use real C structs.

Does this mean that you can actually accelerate a full-fledged Python class?

PM> My goal here, in case it isn't obvious, is to allow users to write modules completely in Python, debug and develop in a great language, and then recover the speed through compilation. This has a side benefit in that Jython and PalmPilot-thon and Embedded-processor-thon can all use the Python variant of your code, and if you run on a "real" machine you get the speed.

Yes, it sure would be a *very* nice goal. One thing I can't help noticing is that it looks like this is all heading towards something like Dylan.
It's like Python + type info + a magical parser that parses functions and produces c/c++ extensions. I ask a naive question: what is the feasibility of actually doing something like Dylan? OK, forget truly generic functions. I'm just talking of optional typing to speed things up. I guess psyco, weave, pyCOD etc. all put together and taken to their logical end will give us Python + optional typing + heaven? :D Of course, I could also be *completely* mistaken.

Looking at Eric's subsequent post, it looks like even optional typing can be magically handled. This is looking amazing. How far can we go? What would be possible and what is simply out of the question? If only a small subset of Python features is used by high-performance Python code, what subset are we talking about? Wrapping a function is fine. What about classes? Would that be possible? I know maybe I'm wishing for too much, but I think it's important to at least understand what is possible.

PM> Hope this is useful stuff. If you want the code, drop me a line and I'll pass it along. I'm hoping that Eric will let me integrate this directly with weave. I think it is good enough

I'm sure Eric will be very happy to let you integrate this with weave, as will a lot of other users!

PM> to use out of the box (since it doesn't barf at every opportunity), but it will take some shaking out to get it to really work and this can only happen with users! Let me know who you are...

Well, count me in. I am busy with too many things these days, but I will try to experiment with it when I have the time.

prabhu

From pearu at cens.ioc.ee  Mon Feb 11 13:53:07 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 11 Feb 2002 20:53:07 +0200 (EET)
Subject: [SciPy-dev] How to get extensions to affect local state
In-Reply-To: <3C64291A.1010707@pacbell.net>
Message-ID: 

On Fri, 8 Feb 2002, Pat Miller wrote:

> It is a bit tricky as you cannot just use python tricks to update the local dictionary for effect:
>
> That is, if inside weave.inline it makes a call to locals() (or rather gets it from the caller's frame), you can't make changes.

See http://www.python.org/doc/current/lib/built-in-funcs.html:

  locals()
    Return a dictionary representing the current local symbol table.
    Warning: The contents of this dictionary should not be modified;
    changes may not affect the values of local variables used by the
    interpreter.

And I think you should not modify the locals() dictionary from the Python C API either.

Actually, there are no tricks needed to get what you want in Python. For example,

import sys

def fun(varname):
    frame = sys._getframe(1)
    exec '%s = 7' % (varname) in frame.f_locals

a = 5
print a
fun('a')
print a

will output

5
7

Regards,
	Pearu

From pearu at cens.ioc.ee  Mon Feb 11 14:22:59 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 11 Feb 2002 21:22:59 +0200 (EET)
Subject: [SciPy-dev] Thoughts on weave improvements
In-Reply-To: <3C642918.1010907@pacbell.net>
Message-ID: 

Hi,

First, let me say that I very much appreciate the idea of PyCOD and what it tries to accomplish. Nevertheless, I find that a bucket of cold water is in order. I hope it will be constructive ;-)

Ok, I can see a few issues about accelerating Python with PyCOD or by any other (already available) means:

1) About the C++ extensions that PyCOD would generate: why not C? Compiling C++ can be a very time- and memory-consuming task compared to C or Fortran compilation.
For example, the gain from weave (as tested with the weave tests) was not so remarkable on my PII, 96MB laptop until I upgraded the memory to 160MB and C++ could run without swapping. I find this issue a quite serious drawback when using weave as well, especially when developing programs.

2) About PyCOD compiling Python functions to extension functions: if these Python functions are not simple (they may use 3rd party modules, complex Python features like classes, etc.), then in order for PyCOD to be applicable in these (the most practical) cases, it should be able to transform _any_ Python code to C. (There was a project that translated python code to C code; does anyone know the state of this project?) Anyway, I find this task rather difficult, if not impossible, to accomplish in a reasonable amount of time. Otherwise, if PyCOD cannot do that, then I am afraid that it will be a toy program in scipy ;)

3) About callbacks: indeed, when using 'some hairy, expensive python code' from C/C++/Fortran, most of the time is spent in Python (as Prabhu's tests show, pure Python is approx 1000 times slower than any compiled language mentioned above). However, this 'some hairy, expensive python code' need not be pure Python code; it may contain calls to extension modules that will speed up these callback functions. So, I don't think that calling callbacks from C/C++/Fortran would be remarkably expensive unless these callbacks are really simple (true, sometimes they just are).

On Fri, 8 Feb 2002, Pat Miller wrote:

> It might be better if the integration routine, using its knowledge that the argument x must be a PyFloat (C++ double), could use a C++ accelerated function instead of slow callbacks to Python. Not important for a quick numeric integration, but crucial for using Python as a true C/FORTRAN replacement.

The last statement I find hard to believe. It is almost too trivial to write down a few reasons (don't let your customers know about these ;-):

1) Python (or any other scripting language) can never compete with C/FORTRAN in performance. (I truly hope that you can prove me wrong here.)

2) There are too many high-performance and well-tested C/FORTRAN codes available (atlas, lapack, fftw to name just a few), and to repeat their implementation in Python is unthinkable, in addition to point 1).

Simple, transparent, and straightforward mixing of Python/C/C++/Fortran languages seems to be the way to go in scientific computing within Python.

Regards,
	Pearu

From eric at scipy.org  Mon Feb 11 15:22:45 2002
From: eric at scipy.org (eric)
Date: Mon, 11 Feb 2002 15:22:45 -0500
Subject: [SciPy-dev] How to get extensions to affect local state
References: 
Message-ID: <021a01c1b339$ddc2bf40$777ba8c0@ericlaptop>

Hey Pearu,

> On Fri, 8 Feb 2002, Pat Miller wrote:
>
> > It is a bit tricky as you cannot just use python tricks to update the local dictionary for effect:
> >
> > That is, if inside weave.inline it makes a call to locals() (or rather gets it from the caller's frame), you can't make changes.
>
> See http://www.python.org/doc/current/lib/built-in-funcs.html:
>   locals()
>     Return a dictionary representing the current local symbol table.
>     Warning: The contents of this dictionary should not be modified;
>     changes may not affect the values of local variables used by the
>     interpreter.

The quote from the manual pertains to Python code, but not to C. In C, you can get hold of the frameobject and access the local variables directly.
After all, this is what the Python interpreter does to set and get local variables. I won't argue with the theory that this is not exactly "sanctioned" behavior, but weave's behavior is already somewhat "outside the box" of standard Python extensions, so I guess that doesn't bug me much.

> And I think you should not modify the locals() dictionary from the Python C API either.

Well, should and shouldn't are always debatable. :) The purpose of inline is to seamlessly transfer variables into C/C++ code and then back out so that variable names in Python and C share the same information -- just as if the C code were actually Python code using different syntax. The only way to do this is to be able to write back into the local frame, so we need this capability. If it can be made to do this safely and reliably, then I don't see an issue.

Fernando has expressed wanting to explicitly return values from weave.inline instead of having variables flow back out of C into Python. We could put in a flag to turn this off if people really want that.

> Actually, there are no tricks needed to get what you want in Python.
> For example,
>
> import sys
>
> def fun(varname):
>     frame = sys._getframe(1)
>     exec '%s = 7' % (varname) in frame.f_locals
>
> a = 5
> print a
> fun('a')
> print a
>
> will output
>
> 5
> 7

This will only work in certain situations and is not guaranteed to work in the future. Here is an example where it doesn't work:

# module test.py
import sys

a = 1

def bob(var):
    frame = sys._getframe(1)
    exec '%s = 7' % (var,) in frame.f_locals

def bob2():
    a = 3
    print a
    bob('a')
    print a

print a
bob('a')
print a
bob2()
# end module

C:\home\ej\wrk\junk\scipy\weave\pm>python test.py
1
7
3
3

So it works when called from the module level, but not when called from within a function. Also, this approach is extremely slow compared to doing it within the C function, so you'd lose much of the benefit of the things weave and PyCOD are trying to do.

see ya,
eric

From fperez at pizero.colorado.edu  Mon Feb 11 16:35:41 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Mon, 11 Feb 2002 14:35:41 -0700 (MST)
Subject: [SciPy-dev] How to get extensions to affect local state
In-Reply-To: <021a01c1b339$ddc2bf40$777ba8c0@ericlaptop>
Message-ID: 

> Fernando has expressed wanting to explicitly return values from weave.inline instead of having variables flow back out of C into Python. We could put in a flag to turn this off if people really want that.

I think I made that comment in the mindset that currently inline() is still a 'special' thing, and I tend to think of the C code as a very particular little space all to itself. But if inline becomes natural enough that all data flows in (all variables automatically, easy callbacks, etc.) as you guys seem to be shooting for, then I guess I'd accept that it would be far more natural to also return values in the same way. Because at that point it would truly satisfy Eric's idea of "same algorithm, different syntax/speed", which is what we all seem interested in.

The trick is, this has to be 100% reliable, otherwise I prefer to stick to the current (more manual) approach. But if getting hold of the frame in C:

1- doesn't clash with anything else (as trying to modify locals() does)
2- is approved enough by the powers that be that we don't expect it to randomly break with the next version of Python

then by all means do it!

By the way, I have to say I was *very* excited reading today's various messages from Eric et gang.
It really seems that Python is taking off for serious scientific computing. In that sense, I strongly second the suggestion of getting serious review of the algorithms into the process, so we don't end up simply with "Numerical Recipes in a scripting language".

Cheers,

f.

From pearu at cens.ioc.ee  Mon Feb 11 17:45:39 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 12 Feb 2002 00:45:39 +0200 (EET)
Subject: [SciPy-dev] How to get extensions to affect local state
In-Reply-To: <021a01c1b339$ddc2bf40$777ba8c0@ericlaptop>
Message-ID: 

On Mon, 11 Feb 2002, eric wrote:

> This will only work in certain situations and is not guaranteed to work in the future. Here is an example where it doesn't work:
>
> def bob(var):
>     frame = sys._getframe(1)
>     exec '%s = 7' % (var,) in frame.f_locals
>
> So it works when called from the module level, but not when called from within a function. Also, this approach is extremely slow compared to doing it within the C function, so you'd lose much of the benefit of the things weave and PyCOD are trying to do.

You are right. Doing

    exec '%s = 7' % (var,) in frame.f_locals

is actually equivalent to

    exec '%s = 7' % (var,) in locals()

just one level up, and your example demonstrates that the warning about locals() in the Python docs is real.

PS: I also second Paul's and Fernando's concerns about the flow of algorithms into scipy. Scipy should provide only the best algorithms and throw out all the second-best ones (and only if they are proved to be so in all aspects) in order to reduce the code base and increase scipy's maintainability. Paul's suggestion about an OOP approach would also ease this selection of algorithms.

Regards,
	Pearu

From eric at scipy.org  Mon Feb 11 18:03:18 2002
From: eric at scipy.org (eric)
Date: Mon, 11 Feb 2002 18:03:18 -0500
Subject: [SciPy-dev] Thoughts on weave improvements
References: 
Message-ID: <026501c1b350$5015dc10$777ba8c0@ericlaptop>

Hey Pearu,

> First, let me say that I very much appreciate the idea of PyCOD and what it tries to accomplish. Nevertheless, I find that a bucket of cold water is in order. I hope it will be constructive ;-)

Cold water welcomed. Just remember Faraday's comment when someone asked him, "Of what use, then, is knowledge?" concerning his experiments and theories of electricity. He responded, "Of what use is a child?" Weave/pycod are way down the list of significance, but at a comparable stage in development. Just because the implementation isn't complete or portions are slow doesn't mean the ideas shouldn't be pursued. weave is actually fairly complete, and PyCOD's current capabilities are jaw-dropping (at least to me). I imagine making PyCOD general purpose is quite a bit of work, but its current code base of 1000 lines does a whale of a lot. I'm not familiar enough with it or the issues it raises yet to know if general applicability is even feasible. Even if it isn't, I am sure that there is a large enough sub-set of important cases that it can/will cover to make it quite useful.

> Ok, I can see a few issues about accelerating Python with PyCOD or by any other (already available) means:
>
> 1) About the C++ extensions that PyCOD would generate: why not C? Compiling C++ can be a very time- and memory-consuming task compared to C or Fortran compilation. For example, the gain from weave (as tested with the weave tests) was not so remarkable on my PII, 96MB laptop until I upgraded the memory to 160MB and C++ could run without swapping. I find this issue a quite serious drawback when using weave as well, especially when developing programs.
Multiple points:

1. Why not C?

The main reason is that C++ allows users to write much simpler code, without arduous error handling, with very little reference count handling, and with limited knowledge of the Python API. The combination of C++ exceptions (the biggest win IMO) and class libraries such as CXX, SCXX, Boost, etc. with simple syntax to access Python objects makes this possible. Someone on comp.lang.python mentioned that weave looked like extension programming "for the rest of us." This is indeed its goal.

2. C++ is slow to compile.

Well, it depends on what you're compiling. Templates are the problem, not C++ in general (and not all templates are expensive). My bet is the swapping was caused by blitz++, not the generic weave code. Standard weave.inline calls on my W2K, PIII 850 MHz laptop with 300+ MB and the Microsoft compiler take about 1.5 seconds. Functions that use blitz take 20-30 seconds. The 1.5 second compile times could be reduced to less than a second if we didn't use CXX (which uses templates). This is likely to happen with SCXX (or some variant), its most likely replacement. Still, these times are not likely to be the ones people complain about. Converting the blitz++ code generated by weave.blitz to C is certainly doable, but not high on the priority list.

Also, machines are getting faster all the time. That is no excuse for writing stupid code, but things that swap today will fit into the BIOS in a year or two. :| I think a weave/pycod solution will be production quality about then, so we should be in good shape. I know not all machines will handle the compiles easily, but a vast majority will.

The biggest strike against C++ in my mind is the "broken compiler" problem. If we run into many more goofy things like the exception bug in Mandrake Linux that showed up a few weeks ago, then you start to wonder...

Final note: it is easy to create separate backends to weave so that it generates pure C code. I'm happy with C++, but if someone really wants this, they can add the capability, and I will include it in the distribution.

3. Development time

One of the major themes of PyCOD and weave.blitz is that you can develop entirely in Python and then "flip a magic switch" that provides a large performance improvement.

4. Performance gains

On some algorithms that have to call back into Python, the improvement is small. I consider a factor of 2 the threshold of useful improvement and a factor of 10 the threshold of exciting improvement. On the laplace problem studied by Prabhu, the improvement was about a factor of 10 over Numeric. The weave solution was actually faster than wrapped Fortran and within 25% of a pure C++ approach. On vector quantization algorithms, the speed-up is more on the order of 100. These are both real-world, useful algorithms, and weave is relatively young (less than a year). There are multiple things that can improve its performance (mainly reducing calling overhead), so I think things will only get better. For Prabhu's notes, see here:

http://www.scipy.org/site_content/weave/python_performance.html

On what problems did you see the small improvement? This would help us determine what needs to be fixed.

> 2) About PyCOD compiling Python functions to extension functions: if these Python functions are not simple (they may use 3rd party modules, complex Python features like classes, etc.), then in order for PyCOD to be applicable in these (the most practical) cases, it should be able to transform _any_ Python code to C.

For completeness, yes. For usefulness, no.
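To be concrete about the useful subset: functions built from plain scalar math, like the (made-up) one below, are exactly the kind of thing one hands to a quadrature or minimization routine, and exactly what PyCOD can already specialize.

def cost(x):
    # pure scalar math -- no classes, no 3rd party modules
    return (x - 2.0)*(x - 2.0) + 0.5*x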
> (There was a project that translated python code to C code; does anyone know the state of this project?) Anyway, I find this task rather difficult, if not impossible, to accomplish in a reasonable amount of time. Otherwise, if PyCOD cannot do that, then I am afraid that it will be a toy program in scipy ;)

I guess I disagree. I can think of many times that I've handed a function that just includes simple math expressions into a Fortran minimization library. These PyCOD will handle, and they are a useful subset. As I remember, the concept for PyCOD came out of the need to calculate various things like energy in parallel particle physics codes. As soon as physicists wrote their own functions in Python instead of using the canned C++ functions, the code slowed down a *huge* amount. PyCOD solved this problem.

> 3) About callbacks: indeed, when using 'some hairy, expensive python code' from C/C++/Fortran, most of the time is spent in Python (as Prabhu's tests show, pure Python is approx 1000 times slower than any compiled language mentioned above). However, this 'some hairy, expensive python code' need not be pure Python code; it may contain calls to extension modules that will speed up these callback functions. So, I don't think that calling callbacks from C/C++/Fortran would be remarkably expensive unless these callbacks are really simple (true, sometimes they just are).
>
> On Fri, 8 Feb 2002, Pat Miller wrote:
>
> > It might be better if the integration routine, using its knowledge that the argument x must be a PyFloat (C++ double), could use a C++ accelerated function instead of slow callbacks to Python. Not important for a quick numeric integration, but crucial for using Python as a true C/FORTRAN replacement.
>
> The last statement I find hard to believe. It is almost too trivial to write down a few reasons (don't let your customers know about these ;-):
>
> 1) Python (or any other scripting language) can never compete with C/FORTRAN in performance. (I truly hope that you can prove me wrong here.)

I'm confident we will come close in many (useful) cases.

> 2) There are too many high-performance and well-tested C/FORTRAN codes available (atlas, lapack, fftw to name just a few), and to repeat their implementation in Python is unthinkable, in addition to point 1).

The idea isn't to re-write these. The SciPy approach of leveraging netlib.org to the hilt is still in full effect. But when people need to customize behavior by writing their own scripts, you'd like to make it possible for these to run as quickly as possible. Right? If they happen to write a function that weave/PyCOD won't accelerate, the worst thing that will happen is that it executes at the speed of Python.

> Simple, transparent, and straightforward mixing of Python/C/C++/Fortran languages seems to be the way to go in scientific computing within Python.

The truth is there are relatively few people who want to go through the learning curve of mixing languages. Even when it is made as simple as f2py makes it, there is a lot to know about wrapping and debugging. Those of us who enjoy developing libraries don't have a problem with it. 95+% of the potential user base for Python in science will never write an extension module. They'll use extensions you wrote, but they'd rather spend their time thinking about xxx (fill in your favorite topic here).
weave on its own provides a means for old C/C++ hacks to add their own C with at least somewhat less effort (and weave.blitz with a lot less effort). PyCOD makes it that much easier.

Like I said earlier, I don't know where PyCOD's limits are. We may hit a brick wall on it at some point. I'm fairly confident, though, that Pat can squeeze this lemon about as hard as anyone out there. Further, PyCOD's current capabilities are only barely short of useful in my mind, and its possibilities are certainly exciting enough for me to spend a week or so making weave play nice with it. I guess we'll have to poll you at the end of the summer to see if we've (or weave... :) changed your mind. One of its major benefits (C callbacks) will require some cooperation with f2py so that Fortran wrappers check whether the passed-in object has a C representation to call instead of automatically calling back into Python. We'll try and make this as easy as possible, but, of course, you'll have to sign on as a weave believer for the integration to work well.

weave/PyCOD aren't some silver bullet that solves every problem. However, they will solve many performance problems, and I believe they are worth pursuing.

thanks for your thoughtful comments,
eric

From prabhu at aero.iitm.ernet.in  Mon Feb 11 22:37:55 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Tue, 12 Feb 2002 09:07:55 +0530
Subject: [SciPy-dev] Thoughts on weave improvements
In-Reply-To: 
References: <3C642918.1010907@pacbell.net>
Message-ID: <15464.36371.769927.311656@monster.linux.in>

hi,

>>>>> "PP" == Pearu Peterson writes:

PP> 3) About callbacks: indeed, when using 'some hairy, expensive python code' from C/C++/Fortran, most of the time is spent in Python (as Prabhu's tests show, pure Python is approx 1000 times slower than any compiled language mentioned above). However, this 'some hairy, expensive python code' need not be pure Python code; it may contain calls to extension modules that will speed up these callback functions. So, I don't think that calling callbacks from C/C++/Fortran would be remarkably expensive unless these callbacks are really simple (true, sometimes they just are).

I think I understand what you are getting at. However, the following points should be noted. My laplace example is truly a toy example, and its intention was to create a simple benchmark and get the new weave user started quickly.

(0) It was a useful benchmark with realistic estimates for a simple problem.

(1) I did not do anything fancy.

(2) There was just one inner loop that was expensive. And here too, it was the for loop in Python that was 1000 times slower. Function call overhead in Python was not anywhere near as bad. So if any conclusion about speed is to be drawn, it is this -- for loops in Python are horribly slow. Function call overhead, while larger than in C/C++, is not too bad.

(3) A more sophisticated problem would involve far more complexity than my silly example. Here are a few things that would certainly be important.

(a) Wrapping simple classes. This means speeding up access to members. It's obvious that one can use an OO design to deal with true complexity. If one does not do this and simply restricts oneself to dealing with optimized functions, then one might as well develop in pure C/Fortran. The advantage of a high-level language is to be able to do more than what you can do with C/Fortran easily.
(b) A common way of dealing with complex problems is to construct an array of similar objects and then invoke some method on each of these. Take, for instance, an unstructured grid problem. You'd construct elements and ask each element to take a time step. It's a very natural way to program, and it's important to speed these things up. If we don't, then you have to keep redesigning code just to get performance, which is a pain. I admit that this is a long-term goal, but it's important to keep in mind.

(c) It's fine if fancy features of Python are not supported, but the basics must be. What those basic features are should be explored in greater detail. I need more time to think up a more comprehensive and sensible list of things.

PP> On Fri, 8 Feb 2002, Pat Miller wrote:

>> It might be better if the integration routine, using its knowledge that the argument x must be a PyFloat (C++ double), could use a C++ accelerated function instead of slow callbacks to Python. Not important for a quick numeric integration, but crucial for using Python as a true C/FORTRAN replacement.

PP> The last statement I find hard to believe. It is almost too trivial to write down a few reasons (don't let your customers know about these ;-): 1) Python (or any other scripting language) can never compete with C/FORTRAN in performance. (I truly hope that you can prove me wrong here.) 2) There are too many high-performance and well-tested C/FORTRAN codes available (atlas, lapack, fftw to name just a few), and to repeat their implementation in Python is unthinkable, in addition to point 1).

I beg to differ here. atlas, lapack, fftw etc. only solve some very fundamental and focused problems. If you want complex, here is an example -- solve the flow inside a supersonic jet engine. It's horribly hard, and afaik there aren't full-fledged solvers that handle this sort of complexity (completely). People do use CFD for this case but also use a lot of empirical knowledge. I am not aware of a purely CFD package that can handle the complete jet engine in software with no empirical input. I'm sure Pat/Eric will know of (or at least have heard of) much harder problems.

Solving such problems without losing your hair is a challenge, and it's pretty clear to me that using an OO design is the way to go. You can't achieve that with a hybrid C/Fortran/Python option because you'll have to implement most of the objects in C, in which case you do not get the advantage of developing in Python at all -- so why not just forget Python and write the whole darned thing in C? Honestly, this is what most folks do. The reason moving this to Python is an important goal is that it is definitely much easier and nicer coding in Python. The development cycle is much faster and easier. It's easier to map/explore the problem out in Python than in C/C++. Rather than struggle with the problem *and* struggle with C/C++/Fortran, one just has to struggle with the problem if one is using Python. But this costs you some speed. The question is how much, and how much more complex a problem you can solve by moving to Python.

Speed of code is not everything. It's known that structured grid solvers are significantly faster than unstructured grid solvers. But if you have a complex geometry, it might take 6 months for someone to just generate a structured grid for the geometry and then solve it. OTOH, you can generate a full unstructured grid in a few days (I think that is conservative).
PP> Simple, transparent, and straightforward mixing of
PP> Python/C/C++/Fortran languages seems to be the way to go in
PP> scientific computing within Python.

There are issues with this also. It's easy to wrap something that can be encapsulated in a few functions. Anything more complex than that and you have to wonder if Python is a good choice at all. So PyCOD and weave are great steps in this direction. Also, as Eric pointed out, most folks aren't comfortable creating wrapper functions all the time. At the very least PyCOD and weave simplify this. regards, prabhu From pnmiller at pacbell.net Mon Feb 11 23:03:30 2002 From: pnmiller at pacbell.net (Pat Miller) Date: Mon, 11 Feb 2002 20:03:30 -0800 Subject: [SciPy-dev] Thoughts on weave improvements References: <01d001c1b319$eb826470$777ba8c0@ericlaptop> Message-ID: <3C689412.3090401@pacbell.net> eric wrote:

> Hey Pat,
>
> This is a very rough outline of what I thought of in the airplane rides
> home. All "sketch" of a solution below has holes, but is a start at merging
> pycod and weave. The basic concept is to create a new class or type that
> wraps a function. When a class wrapper object is called from Python, it

---------------------------------------------------------- Comments on the interface.... I prefer a weave method called accelerate rather than the mystifying COD. This way, we could add in bytecode optimization and function inlining and clever bits like that through the same interface. ---------------------------------------------------------- For individual functions, the interface might look like:

import weave

def f(a,b):
    << some big hairy thing >>

f = weave.accelerate(f)

---------------------------------------------------------- I think we should allow users to specify signatures if they want to... ----------------------------------------------------------

f = weave.accelerate(f,signatures=[[FloatType],[IntType]])

where signatures is a suggested list of types to compile for. If you wanted to allow other types it could be:

f = weave.accelerate(f,signatures=[[FloatType],[IntType],[None]])

where None indicates "Any type is OK". The default for signatures would be equivalent to [[None]], so is equivalent to Eric's scheme. Having a None signature means that as a last resort, the system will call the original code object which we can carefully squirrel away. This means never having to say you're sorry on a call! The default implementation for weave.accelerate is

def accelerate(f, signatures=None):
    return f

on systems where we don't have a compiler.
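As a rough pure-Python model of the proposed dispatch (everything here is illustrative: the compiled-version table is faked with an empty dict, so only the [None] fallback path actually runs):

def accelerate_model(f, signatures=None):
    if signatures is None:
        signatures = [[None]]        # the default: Eric's scheme
    compiled = {}                    # would map type tuples to compiled code
    def dispatcher(*args):
        key = tuple(map(type, args))
        fast = compiled.get(key)
        if fast is not None:
            return fast(*args)       # a compiled version exists for these types
        if [None] in signatures:
            return f(*args)          # squirreled-away original: never say sorry
        raise TypeError('no accelerated version for %s' % (key,))
    return dispatcher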
From pnmiller at pacbell.net Mon Feb 11 23:28:07 2002 From: pnmiller at pacbell.net (Pat Miller) Date: Mon, 11 Feb 2002 20:28:07 -0800 Subject: [SciPy-dev] Accelerated modules Message-ID: <3C6899D7.6090306@pacbell.net> I think the model we want to shoot for with weave accelerated modules and functions is something like in

# This is foobar.py

def f(x):
    < some thing >

def g(x,y):
    < some other thing >

class foo:
    def __init__(self):
        ...
    def solver(self,x):
        ....

try:
    # Pull in accelerated ones if they exist
    from foobar.accelerated import *
except:
    import weave
    f = weave.accelerate(f)
    g = weave.accelerate(g)
    foo.solver = weave.accelerate(foo.solver)

From eric at scipy.org Mon Feb 11 23:02:20 2002 From: eric at scipy.org (eric) Date: Mon, 11 Feb 2002 23:02:20 -0500 Subject: [SciPy-dev] Accelerated modules References: <3C6899D7.6090306@pacbell.net> Message-ID: <02fb01c1b37a$14aa1f40$777ba8c0@ericlaptop> Hey Pat, I like the name accelerate.

> I think the model we want to shoot for with weave accelerated
> modules and functions is something like in
>
> # This is foobar.py
>
> def f(x):
>     < some thing >
>
> def g(x,y):
>     < some other thing >
>
> class foo:
>     def __init__(self):
>         ...
>     def solver(self,x):
>         ....
>
> try:
>     # Pull in accelerated ones if they exist
>     from foobar.accelerated import *
> except:
>     import weave
>     f = weave.accelerate(f)
>     g = weave.accelerate(g)
>     foo.solver = weave.accelerate(foo.solver)

I don't think we need the try/except though. weave.accelerate(f) should check the on disk "catalog" for previously compiled versions of f. If they exist, it should load them. Actually, the "accelerated object" returned by weave.accelerate should handle all this. Things could be loaded from the on disk catalog during __init__, or the first time __call__ is accessed. So, the above would become:

# This is foobar.py

def f(x):
    < some thing >

def g(x,y):
    < some other thing >

class foo:
    def __init__(self):
        ...
    def solver(self,x):
        ....

# acceleration methods
import weave
f = weave.accelerate(f)
g = weave.accelerate(g)
foo.solver = weave.accelerate(foo.solver)

eric From pnmiller at pacbell.net Tue Feb 12 00:37:28 2002 From: pnmiller at pacbell.net (Pat Miller) Date: Mon, 11 Feb 2002 21:37:28 -0800 Subject: [SciPy-dev] Re: Scipy-dev digest, Vol 1 #86 - 7 msgs References: <200202120335.g1C3Z1j19055@scipy.org> Message-ID: <3C68AA18.8070707@pacbell.net> Pearu writes:

>Pat wrote:
>>It is a bit tricky as you cannot just use python tricks to update
>>the local dictionary for effect:

> See http://www.python.org/doc/current/lib/built-in-funcs.html:
> locals()

I know all about this... the value that locals() returns is actually computed on the fly and stashed into an attribute of the code object. It mirrors the elements of a local stack that is built on executing the code frame (that is why changing it doesn't affect the state; the dictionary is a one-way reference to the frame).

> And I think you should not modify the locals() dictionary from Python
> C/API either.

The point is to get things like

a = 3
weave.inline("a += 1")

to update the value of 'a' as if the code were written

a = 3
a += 1

And it really should be done inside the C API because of speed concerns. In some sense, weave.inline is _really_ a specialized replacement for the __builtin__ exec function. This means that the point is to muck with the local and global state. Pat
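A minimal demonstration of the snapshot behaviour described above (CPython): the dictionary returned by locals() is detached from the frame, so writing into it has no visible effect.

def demo():
    a = 3
    locals()['a'] = 99   # writes into a temporary snapshot dict
    print a              # still prints 3 -- the frame is untouched

demo()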
From pnmiller at pacbell.net Tue Feb 12 00:58:38 2002 From: pnmiller at pacbell.net (Pat Miller) Date: Mon, 11 Feb 2002 21:58:38 -0800 Subject: [SciPy-dev] Some clarification.... Message-ID: <3C68AF0E.7070109@pacbell.net> Pearu writes:

> 1) About C++ extensions that PyCOD would generate. Why not C? Compiling
> C++ can be a very time and memory consuming task when compared to C or

C++ is easier to generate and lets you do things like have versions of

static long f(long x) { ... }
static double f(double x) { ... }

Besides, I think the goal is (1) develop and prototype in Python and then (2) use the accelerator to beat down speed concerns after having paid the compilation price once. Besides, I don't use anything fancy in C++ (the biggest feature is not having to predeclare all my variables :-) ), so it goes pretty fast.

> 2) ...
> Python features like classes, etc) then in order for PyCOD to be applicable in
> these (and the most practical) cases, it should be able to transform _any_
> Python code to C. (There was a project that translated python code to C

One cannot translate the full dynamic range of Python features to a static language like C. I think we can get more and more under the compiler until it goes fast enough. As I push this problem a bit farther, I can likely get to the point where I can get speed up from a compile of anything in which a local's type never changes (i.e. one doesn't say i = 0 one place and i = "foobar" somewhere else). But, the generality comes from simply calling the Python C API. That is, there isn't a huge savings from calling

PyObject* t3 = PyNumber_Add(a,b)

vs the original Python. When you want speed, you want to be doing

long t3 = a + b;

I'll probably put in a more general Python model (in my copious spare time!) because loops are so slow that

for i in range(n):
    < anything>

will run MUCH faster as a C loop. You can get an idea of the overhead by trying the difference between

for i in xrange(n): f(i)

vs

map(f,xrange(n))

and see the speed difference from simply doing the loop inside C vs inside Python. And my comment about Python was for the casual user. C++/C/FORTRAN programming is my bread and butter and one wouldn't write huge solver libraries and big scientific packages in a Python to C way. BUT, I think users who want to write small important functions that are input to packages like integration routines WILL do better if they can accelerate them. I think that prototyping is vastly improved and that is where the real speed lies. A story will illustrate.... Back in the old days, when Crays were the workhorses at my Lab, Cray provided a hand tuned FFT that carefully tried to minimize bank conflicts and enhance vectorization and parallelism. My boss wrote one that was 10% better in about six weeks using a special purpose high level language (Sisal if you're interested) that did wonders. It wasn't the language, but the fact that John was able to quickly try 20 different prototypes in those six weeks that led him to the new techniques. They could then be handcoded BACK into the Cray assembly language version for improvements. The moral is to allow quick prototyping and get all the bad ideas out fast. The goal is to

Write and debug in Python
Throw a magic switch
Enjoy some speed.

Happy to see this stirs up some controversy... Pat From pnmiller at pacbell.net Tue Feb 12 02:32:16 2002 From: pnmiller at pacbell.net (Pat Miller) Date: Mon, 11 Feb 2002 23:32:16 -0800 Subject: [SciPy-dev] Is Python being Dylanized? Message-ID: <3C68C500.5090101@pacbell.net> Prabhu writes:

> I ask a naive question, what is the feasibility of actually doing
> something like Dylan? OK forget truly generic functions. I'm just
> talking of optional typing to speed things up. I guess psyco, weave,
> pyCOD etc. all put together and taken to their logical end will give
> us Python + optional typing + heaven? :D Of course I could also be
> *completely* mistaken. Looking at Eric's subsequent post it looks
> like even optional typing can be magically handled. This is looking
> amazing.

It is amazing.
I had a friend who was really big into Dylan. I liked a lot of the ideas there and it in fact helped frame some of the direction that pyCOD and now weave.accelerate (a better name than weave.cod!) have taken. Type inference is a powerful tool and when it works, it can lead to some great improvements using the extra typing knowledge. I for one would love to see Python allow an optional type: e.g.

def foo(string s):
    print s + "hi"

which is much nicer than the

def foo(s):
    assert type(s) == StringType
    print s + "hi"

we have to use now. If I get real ambitious, I can actually check for the asserts of argument type and we can skip all the wacky guess and check that Eric has proposed and the wackier signatures that I propose. It will take more work to get the accelerator to work on class methods. This is in part due to issues with Python's super dynamic attribute system that makes it hard to infer types of attributes. Perhaps the assert system isn't too bad for it, though I think that writing

class foo:
    def solver(self,x):
        assert type(self.nx) == IntType
        assert type(self.name) == StringType
        assert type(x) == FloatType
        ....

would get kind of tedious. One idea is to simply assume that the types you are given are fixed for all time, another is to assume that the values for a given instance are fixed for all time (that is, nx and name will either never change type or perhaps never change value). Then I can compile even more efficient code (but require more information). Pat From pearu at cens.ioc.ee Tue Feb 12 05:57:09 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 12 Feb 2002 12:57:09 +0200 (EET) Subject: [SciPy-dev] Thoughts on weave improvements In-Reply-To: <026501c1b350$5015dc10$777ba8c0@ericlaptop> Message-ID: Hi, Eric and Pat, thanks for explaining the philosophy of weave and pycod, it cleared many things up for me. And I feel better now after knowing that there are ways to get weave/pycod efficient and really useful. On Mon, 11 Feb 2002, eric wrote:

> I guess we'll have to poll you at the end of the summer to see if we've ( or
> weave... :) changed your mind. One of its major benefits (C callbacks) will
> require some cooperation with f2py so that Fortran wrappers check whether
> the passed in object has a C representation to call instead of automatically
> calling back into Python. We'll try and make this as easy as possible, but,
> of course, you'll have to sign on as a weave believer for the integration to
> work well.

In fact, I was thinking the same thing for f2py, that is, if an f2py'd extension function gets a call-back argument that is itself an f2py'd function (and therefore has a pointer to a real C/Fortran function), then it is called directly and not by the f2py call-back mechanism. The implementation of this feature is quite easy in f2py. What kept me from implementing all that are the results of the timing measurements of the current f2py implementation. Namely, it turns out that the f2py call-back mechanism is already quite efficient (the time spent in doing this magic was only about 0.001 milli-seconds per call, error factor is 10) and most of the time (say 99%) is spent in Python (the test Python function contained only a few scalar multiplications, usage of math.sin, and no array operations and loops). So, it is really crucial to get Python functions efficient and (maybe) later optimize the interfaces (with a possibility of introducing new bugs;). Seems like weave/pycod will provide a solution here.
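A rough way to reproduce the flavour of that measurement in plain Python (this times only the Python side of a call-back -- a toy function with a few scalar operations and a math.sin -- not f2py's argument marshaling, and the numbers will vary by machine):

import time, math

def cb(a, b):
    return math.sin(a) * b + a * a   # a few scalar operations

n = 100000
t0 = time.time()
for i in xrange(n):
    cb(0.1, 2.0)
print 'per-call cost: %g seconds' % ((time.time() - t0) / n)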
You may want to look at fortranobject.c in f2py. Basically, 1) it implements a new type, FortranObject, and the objects of this type hold C pointers to real C/Fortran functions. These functions are called from f2py generated extension functions by using the corresponding pointers. 2) Also, the FortranObject type implements a few methods: __call__ - so that in Python objects of type FortranObject behave like normal (well, almost) Python functions, __get/setattr__ - to return __doc__ strings and also to access Fortran common blocks or module data, the items of which are also returned as FortranObject's. The idea is simple but the implementation may look complex; that is because of Fortran specific stuff (just ignore it). I guess that the bottom line is that f2py and weave/pycod have much in common regarding the calls to real C/Fortran functions in run-time and I would hope that these common parts could be shared in some way or another. Do we need to introduce some independent (from f2py, weave/pycod, etc.) specification for objects holding C/Fortran objects and that implement some common tasks that are useful for both f2py and weave? Or, maybe weave/pycod implement their own, say, CPointerObject, and I can easily implement hooks for such objects for f2py (as all that f2py needs to use are the C pointers to C/Fortran functions or data). Regards, Pearu From eric at scipy.org Tue Feb 12 12:06:30 2002 From: eric at scipy.org (eric) Date: Tue, 12 Feb 2002 12:06:30 -0500 Subject: [SciPy-dev] Thoughts on weave improvements References: Message-ID: <037601c1b3e7$a1ddf5e0$777ba8c0@ericlaptop> Just to make sure we're talking about the same thing, consider that we have the function

def foo(a,b):
    return a+b

foo = weave.accelerate(foo)

foo is now some strange object.
It is not a function call, but it is callable so that casual users will not know the difference. Inside this foo object will be multiple representations of the foo function. In the simplest case, there will be 3. We'll call them "pure python," "extension wrapper," and "pure C." The pure python version will basically be a pointer to the Python function above. The other two will be in C code, and will depend upon the type signatures of a and b. We'll say they are both int values for simplicity -- this can be determined dynamically or specified by the user. Here is what the C functions will look like -- the first generated by pycod's machinery and the second by the current weave's machinery:

int pure_c_foo(int a, int b)
{
    return a+b;
}

PyObject* foo_wrap(PyObject* self, PyObject* args)
{
    // this is pseudo code
    Py::Tuple targs(args);
    int a = py_to_int(targs[0]);
    int b = py_to_int(targs[1]);
    int c_result = pure_c_foo(a,b);
    PyObject* py_result = int_to_py(c_result);
    Py_INCREF(py_result);
    return py_result;
}

Generally, f2py calls back into Python into foo. It could be converted to call foo_wrap directly with some savings, but the big win is having Fortran functions call pure_c_foo directly so that Python is taken out of the loop completely. f2py will need to ask accelerated objects if they have a pure C implementation. If they do, it'll use this pointer, and everything happens as fast as computer-ly possible. If they do not, they either ask the object to compile a new version with the appropriate types, or call the pure python version using the current callback scheme.

> You may want to look at fortranobject.c in f2py. Basically,
> 1) it implements a new type, FortranObject, and the objects of this type
> hold C pointers to real C/Fortran functions. These functions are called
> from f2py generated extension functions by using the corresponding
> pointers.
> 2) Also, the FortranObject type implements a few methods:
> __call__ - so that in Python objects of type FortranObject behave like
> normal (well, almost) Python functions,
> __get/setattr__ - to return __doc__ strings and also to access Fortran
> common blocks or module data, the items of which are also returned as
> FortranObject's.
> The idea is simple but the implementation may look complex; that is
> because of Fortran specific stuff (just ignore it).

Will do. thanks for the pointer.

> I guess that the bottom line is that f2py and weave/pycod have much in
> common regarding the calls to real C/Fortran functions in run-time and I
> would hope that these common parts could be shared in some way or another.
>
> Do we need to introduce some independent (from f2py, weave/pycod,
> etc.) specification for objects holding C/Fortran objects and that
> implement some common tasks that are useful for both f2py and weave?
> Or, maybe weave/pycod implement their own, say, CPointerObject, and I
> can easily implement hooks for such objects for f2py (as all that f2py
> needs to use are the C pointers to C/Fortran functions or data).

Not sure yet. If we find overlap, we should definitely get rid of it -- much like we did in scipy_distutils. Unified code bases mean more brains working on the same thing and smaller overall code bases. Both good things. Let us get a bit farther on merging weave and pycod before we address this though. I'm still trying to get a handle on how the caching mechanism should work for this, and Pat is trying to get Python loop conversion to C working.
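For concreteness, a toy Python model of such an accelerated object -- every name here is invented for illustration, and the "pure C" table merely stands in for real function pointers:

class AcceleratedFunction:
    def __init__(self, pyfunc):
        self.pure_python = pyfunc
        self.wrappers = {}    # type signature -> extension wrapper
        self.pure_c = {}      # type signature -> "pointer" to pure C version
    def __call__(self, *args):
        key = tuple(map(type, args))
        wrapper = self.wrappers.get(key)
        if wrapper is not None:
            return wrapper(*args)
        return self.pure_python(*args)    # always-correct fallback
    def c_pointer(self, key):
        # what f2py would ask for: a pure C entry point for this
        # signature, or None meaning "use the normal callback scheme"
        return self.pure_c.get(key)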
One other thought. weave already handles SWIG pointers (at least wxPython versions of the things). The representation is pretty simple (a Python string) and includes type information. Our first cut at representing pointers will probably use SWIG's representation since it has served Dave B. so well, and he's already written the code to handle them. :-) I think they will prove to be sufficient. I'm glad to see you've got ideas about all this also. weave and f2py working in concert will be a big win (I hope...). see ya, eric From pearu at cens.ioc.ee Tue Feb 12 14:58:52 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 12 Feb 2002 21:58:52 +0200 (EET) Subject: [SciPy-dev] Thoughts on weave improvements In-Reply-To: <037601c1b3e7$a1ddf5e0$777ba8c0@ericlaptop> Message-ID: Eric and Pat, On Tue, 12 Feb 2002, eric wrote:

> Just to make sure we're talking about the same thing, consider that we have
> the function
> the user. Here is what the C functions will look like -- the first
> generated by pycod's machinery and the second by the current weave's
> machinery:
>
> int pure_c_foo(int a, int b)
> {
>     return a+b;
> }
>
> Generally, f2py calls back into Python into foo. It could be converted to
> call foo_wrap directly with some savings, but the big win is having Fortran
> functions call pure_c_foo directly so that Python is taken out of the loop
> completely. f2py will need to ask accelerated objects if they have a pure C
> implementation. If they do, it'll use this pointer, and everything happens
> as fast as computer-ly possible. If they do not, they either ask the object
> to compile a new version with the appropriate types, or call the pure python
> version using the current callback scheme.

Ok, very good. So, let me now see how it would work from the f2py point of view. Here follow four points. The points 2) and 3) are important to make f2py and weave work together. 1) Suppose f2py generates an extension function `gun.run' that wraps the following Fortran function:

      integer function run(fun)
      external fun
      run = fun(3,4)
      end

In C, this function can be called by name `run_'. f2py generates the following C/API (pseudo) code:

PyObject *py_fun;

int c_fun(int *a,int *b) {
    /* Note: Fortran expects pointer arguments */
    PyObject *value;
    value = PyObject_CallObject(py_fun,Py_BuildValue("ii",*a,*b));
    return int_from_pyobj(value);
}

PyObject* f2py_gun_run(PyObject* self, PyObject* args, PyObject* kws) {
    int ret;
    static char *kwlist[] = {"fun",NULL};
    PyArg_ParseTupleAndKeywords(args,kws,"O!",kwlist,
                                &PyFunction_Type,&py_fun);
    ret = run_(&c_fun);
    return Py_BuildValue("i",ret);
}

2) There is one issue. Fortran always expects that arguments are pointers. Is it possible that weave/pycod could generate the following

int pure_c_foo(int *a, int *b)
{
    return *a + *b;
}

? Otherwise I don't see how I could pass your pure_c_foo to Fortran functions without additional wrapping. Array arguments must be pointers anyway, right. Actually, below you see a possible way out of this dilemma (look at GetAccelerated(py_fun,signature)). Or is the SWIG stuff applicable here? 3) Assume that issue 2) is solved. Then f2py should generate the following f2py_gun_run function:

PyObject* f2py_gun_run(PyObject* self, PyObject* args, PyObject* kws) {
    int ret;
    static char *kwlist[] = {"fun",NULL};
    PyArg_ParseTupleAndKeywords(args,kws,"O",kwlist,&py_fun);
    if (CheckAccelerated(py_fun)) {
        static char *signature[] = {"int","*int","*int"};
        /* something like
             static int signature[] = {INT_ARG,INT_ARR_ARG,INT_ARR_ARG};
           would be more efficient.
        */
        PyObject *accel_fun = GetAccelerated(py_fun,signature);
        ret = run_(&GetAccelerated_Ptr(accel_fun));
    } else if (PyCheck_Function(py_fun)) {
        ret = run_(&c_fun);
    } else {
        /* raise TypeError */
    }
    return Py_BuildValue("i",ret);
}

where weave/pycod should provide the following macros

CheckAccelerated
GetAccelerated
GetAccelerated_Ptr

It doesn't matter for f2py what you call them, but it would be preferable if f2py extension modules would need _not_ to include any weave/pycod specific header files. For example, the Python equivalent for CheckAccelerated could be:

def CheckAccelerated(obj):
    return hasattr(obj,'__weave_accelerated__')

4) To finalize, the Python session would look like the following:

def foo(a,b):
    return a+b

import weave
foo = weave.accelerate(foo)

import gun
print gun.run(foo) # -> 7

> Not sure yet. If we find overlap, we should definitely get rid of it ...
> Let us get a bit farther on merging weave and pycod before we address
> this though.
> I'm still trying to get a handle on how the caching mechanism should work
> for this, and Pat is trying to get Python loop conversion to C working.

Sure. While doing all that it would be great if you could take into account issue 2) and provide macros/functions to serve the point in 3). Then the cooperation between f2py and weave would be almost ideal (the ideal would be to merge them ;-). Regards, Pearu From heiko at hhenkelmann.de Tue Feb 12 15:48:09 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Tue, 12 Feb 2002 21:48:09 +0100 Subject: [SciPy-dev] Build problems on various Windows versions Message-ID: <003f01c1b406$9f6feb00$2c18e33e@arrow> Dear All, I'm having some problems to build scipy on windows. There are a lot of missing symbols during the link process of special and integrate. (e.g. dcopy_ is missing). Any idea what's going on there? I tried it on 98, ME and XP. Always with the same result. Furthermore I found a bug in mingw32_support. In build_import_library() "compiler" is appended to the path (instead of "weave"). Heiko Henkelmann From patmiller at llnl.gov Tue Feb 12 18:27:17 2002 From: patmiller at llnl.gov (Patrick Miller) Date: Tue, 12 Feb 2002 15:27:17 -0800 Subject: [SciPy-dev] Thoughts on weave improvements References: Message-ID: <3C69A4D5.6A74A1B1@llnl.gov> Pearu Peterson wrote:

> 2) There is one issue. Fortran always expects that arguments are
> pointers. Is it possible that weave/pycod could generate the following
>
> int pure_c_foo(int *a, int *b)
> {
>     return *a + *b;
> }

If our target is really to allow extensions to call underlying C/FORTRAN (and it should!), then I would favor having weave.accelerate build both.

long foo_as_C(long a, long b) { ... }

FORTRAN_INT foo_as_FORTRAN(FORTRAN_INT* A, FORTRAN_INT* B) {
    return foo_as_C(*A,*B);
}

PyObject* foo_as_Python(PyObject* self, PyObject* args) { ... }

There is an issue of mutability of arguments, but Python (like C) doesn't allow changes to arguments, so it is OK to make the call by reference there (and avoids aliasing issues). It makes the function written in Python do the expected Python thing.

> 3) Assume that issue 2) is solved. Then f2py should generate the following
> preferable if f2py extension modules would need _not_ to include any
> weave/pycod specific header files. For example, the Python equivalent for
> CheckAccelerated could be:

Here, one could use the Python interface directly.
I think you would want to do something like:

PyObject* t = PyObject_CallMethod(py_fun,"as_FORTRAN","OO",PyInt_Type,PyInt_Type);
if ( !PyErr_Occurred() ) {
    PyObject* addr = PyTuple_GetItem(t,0);
    int (*fun)(int*,int*) = PyInt_AsLong(addr); /* needs a cast */
}

No references at all to any weave header files. Plus, CallMethod fails quickly if there is no attribute. This means that so long as we agree on a method API, ANYBODY who wants to write accelerated functions can do so (even other extension modules). It even means that weave.accelerate can directly query objects for their C source linkage if that is part of the API, i.e. if the Python core did this, it might look like:

print math.sin.as_C_Source(FloatType)
("#include <math.h>", "sin(%s)", FloatType) # header code, call, return type

print math.sin.as_FORTRAN(FloatType)
( 2312823400, FloatType ) # Address, return type

print math.sin.as_C(FloatType)
( 2321823200, FloatType )

-- Patrick Miller | (925) 423-0309 | patmiller at llnl.gov

Big jobs usually go to the men who prove their ability to outgrow small ones. -- Ralph Waldo Emerson American writer and philosopher (1803-1882)

From eric at scipy.org Tue Feb 12 17:53:33 2002 From: eric at scipy.org (eric) Date: Tue, 12 Feb 2002 17:53:33 -0500 Subject: [SciPy-dev] Build problems on various Windows versions References: <003f01c1b406$9f6feb00$2c18e33e@arrow> Message-ID: <045901c1b418$19242ae0$777ba8c0@ericlaptop>

> Dear All,
>
> I'm having some problems to build scipy on windows. There are a lot of
> missing symbols during the link process of special and integrate. (e.g.
> dcopy_ is missing). Any idea what's going on there? I tried it on 98, ME and
> XP. Always with the same result.

What Fortran compiler are you using? Can you send the output of your build to me?

> Furthermore I found a bug in mingw32_support. In build_import_library()
> "compiler" is appended to the path (instead of "weave").

Thanks. The name error is fixed now, but I think this still needs to be reworked. I hadn't intended scipy_distutils to rely on weave. Perhaps we need to move the lib2def module into scipy_distutils. eric From eric at scipy.org Tue Feb 12 22:50:40 2002 From: eric at scipy.org (eric) Date: Tue, 12 Feb 2002 22:50:40 -0500 Subject: [SciPy-dev] Some clarification.... References: <3C68AF0E.7070109@pacbell.net> Message-ID: <04f401c1b441$9b4ad090$777ba8c0@ericlaptop> Hey Pat,

> for speed. I'll probably put in a more general Python model
> (in my copious spare time!) because loops are so slow that
>
> for i in range(n):
>     < anything>
>
> will run MUCH faster as a C loop. You can get an idea of the
> overhead by trying the difference between
>
> for i in xrange(n): f(i)
>
> vs
>
> map(f,xrange(n))
>
> and see the speed difference from simply doing the loop inside
> C vs inside Python.

I think this used to be a lot worse than it is now. The following simple test only shows modest improvement:

>>> def bob(a): return float(a)**2+a/a*10
...
>>> def q():
...     result = []
...     for i in range(1,100000):
...         result.append(bob(i))
...
>>> t1 = time.time();q(); print time.time() - t1
0.81200003624
>>> def r():
...     map(bob,range(1,100000))
...
>>> t1 = time.time();r(); print time.time() - t1
0.600999951363
>>> def r():
...     map(bob,xrange(1,100000))
...
>>> t1 = time.time();r(); print time.time() - t1
0.600999951363

33% improvement is decent, but there are definitely bigger fish to fry.

> A story will illustrate....
> Back in the old days, when Crays were the workhorses at my
> Lab, Cray provided a hand tuned FFT that carefully tried to
> minimize bank conflicts and enhance vectorization and parallelism.
> My boss wrote one that was 10% better in about six weeks using
> a special purpose high level language (Sisal if you're interested)
> that did wonders.

Man, I can hardly imagine spending 6 weeks for 10%. He should get out more... :-) see ya, eric From heiko at hhenkelmann.de Wed Feb 13 03:50:07 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Wed, 13 Feb 2002 09:50:07 +0100 Subject: [SciPy-dev] Build problems on various Windows versions References: <003f01c1b406$9f6feb00$2c18e33e@arrow> <045901c1b418$19242ae0$777ba8c0@ericlaptop> Message-ID: <000a01c1b46b$701047a0$241be33e@arrow>

> What Fortran compiler are you using? Can you send the output of your build
> to me?

I'm using MinGW. Attached you can find the output. Thanx for your help Heiko -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: scipy_output.txt URL: From pearu at cens.ioc.ee Wed Feb 13 04:09:21 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 13 Feb 2002 11:09:21 +0200 (EET) Subject: [SciPy-dev] Build problems on various Windows versions In-Reply-To: <000a01c1b46b$701047a0$241be33e@arrow> Message-ID: Tere! [1] On Wed, 13 Feb 2002, Heiko Henkelmann wrote:

> > What Fortran compiler are you using? Can you send the output of your
> > build to me?
>
> I'm using MinGW. Attached you can find the output.

Seems like the scipy setup did not find ATLAS. Did you specify it in scipy_distutils/atlas_info.py? There you must specify the path to the ATLAS directory in the variable library_path = '...' In general, could we have a top level configuration file, say, scipy_local.cfg, where users can specify external paths and other wishes without changing continuously evolving scipy? Pearu Footnotes: [1] 'Tere' is the Estonian word for 'Hi' ;-) From eric at scipy.org Wed Feb 13 03:20:40 2002 From: eric at scipy.org (eric) Date: Wed, 13 Feb 2002 03:20:40 -0500 Subject: [SciPy-dev] Build problems on various Windows versions References: <003f01c1b406$9f6feb00$2c18e33e@arrow> <045901c1b418$19242ae0$777ba8c0@ericlaptop> <000a01c1b46b$701047a0$241be33e@arrow> Message-ID: <054101c1b467$53625930$777ba8c0@ericlaptop> Hey Heiko, All the missing functions are BLAS (basic linear algebra) functions that are needed by odepack, linalg stuff, and maybe other places. What's strange is they don't seem to be needed by quadpack because my build doesn't give me link errors here, but this is where you are getting the link errors. Maybe it is complaining because odepack is included in the link step for quadpack even though it is not used? There is no way to exclude libraries from the link step with distutils -- but also, this shouldn't be necessary. Something to try: Do you have the ATLAS version of the blas/lapack libraries present on your system? If not, you need to download and build it (it takes a while). Then you need to make sure scipy_distutils/atlas_info.py returns the correct library paths for your machine. Then edit integrate\setup_integrate.py so that the quadpack extension links with the atlas blas libraries just like the odepack extension. This should solve the link issues, but again, I have no idea why you get them on this module and I don't. Let me know if this works. Also, you're using Python 2.2, and we haven't looked at using it much yet. There may be other build/use issues you run into. see ya, eric
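To illustrate the scipy_local.cfg idea Pearu floats above, here is one hypothetical way such a file could be consumed -- nothing in scipy reads a file like this today, and the section and option names are invented:

import ConfigParser

# a user-local file, e.g. containing:
#   [atlas]
#   library_path = /usr/local/lib/atlas
cfg = ConfigParser.ConfigParser()
cfg.read('scipy_local.cfg')
atlas_library_path = cfg.get('atlas', 'library_path')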
From eric at scipy.org Wed Feb 13 03:23:29 2002 From: eric at scipy.org (eric) Date: Wed, 13 Feb 2002 03:23:29 -0500 Subject: [SciPy-dev] Build problems on various Windows versions References: Message-ID: <054901c1b467$b82c0b90$777ba8c0@ericlaptop> Tere! :-) Prabhu suggested something like this earlier. It is a good idea. I'd like to get atlas_info and friends intelligent enough that this isn't necessary, but I don't have time to do that right now. Even if I did, the .cfg is a good idea to allow users to override things. see ya, eric From pearu at cens.ioc.ee Wed Feb 13 07:14:06 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 13 Feb 2002 14:14:06 +0200 (EET) Subject: [SciPy-dev] Build problems on various Windows versions In-Reply-To: <054101c1b467$53625930$777ba8c0@ericlaptop> Message-ID: Tervitus, [1] On Wed, 13 Feb 2002, eric wrote:

> Also, you're using Python 2.2, and we haven't looked at using it much yet. There
> may be other build/use issues you run into.

Indeed, it was Python 2.2 that caused these troubles (maybe the new distutils in 2.2, I didn't track down the exact cause). I fixed it by adding blas to the list of libraries for quadpack (which uses linpack_lite, which in turn uses blas). The latest scipy in CVS should now build with Python 2.2. On the other hand, I find the issue mentioned in the setup_integrate.py comment:

# Note that all extension modules will be linked against all c and
# fortran libraries. But it is a good idea to at least comment
# the dependencies in the section for each subpackage.

a bug in scipy_distutils. I would prefer that when defining an Extension, it should _explicitly_ define all the libraries that this extension module depends on (this will avoid these strange dependency problems). Basically, scipy_distutils.setup should not use the 'fortran_libraries' keyword as an extension of the 'libraries' keyword. In fact, both these keywords should not be used when a package defines more than one extension module with different sets of libraries. If nobody minds, I'll try to fix this in scipy_distutils. What do you think? Regards, Pearu Footnotes: [1] 'Tervitus' is the Estonian word for 'Greeting'
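A sketch of the explicit style Pearu is proposing, written against the plain distutils Extension class -- the module and library names here are illustrative, not scipy's actual build configuration:

from distutils.core import Extension

# every library this module links against is spelled out, including
# blas, which linpack_lite pulls in indirectly:
quadpack_ext = Extension('integrate._quadpack',
                         sources = ['_quadpackmodule.c'],
                         libraries = ['quadpack', 'linpack_lite',
                                      'mach', 'blas'])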
From rob at pythonemproject.com Wed Feb 13 13:00:04 2002 From: rob at pythonemproject.com (Rob) Date: Wed, 13 Feb 2002 10:00:04 -0800 Subject: [SciPy-dev] some interesting routines in Python SOMNEC program Message-ID: <3C6AA9A4.1761AAAF@pythonemproject.com> I've just finished a Python port of the NEC2 SOMNEC routine. I had a real headache resolving overlapping GO TO statements, but as far as I can tell it works. Please feel free to plagiarize any of the routines, including the Variable Width Romberg Integration. It's chock full of neat stuff, and begs to be rewritten in C. (no not f2c, that becomes unintelligible :) Rob. -- The Numeric Python EM Project www.pythonemproject.com From eric at scipy.org Wed Feb 13 12:05:26 2002 From: eric at scipy.org (eric) Date: Wed, 13 Feb 2002 12:05:26 -0500 Subject: [SciPy-dev] Build problems on various Windows versions References: Message-ID: <057101c1b4b0$a2310590$777ba8c0@ericlaptop>

> The latest scipy in CVS should now build with Python 2.2.

Very good news! A lot of people have asked about this, so it's nice to have it fixed.

> On the other hand, I find the issue mentioned in the setup_integrate.py
> comment:
>
> # Note that all extension modules will be linked against all c and
> # fortran libraries. But it is a good idea to at least comment
> # the dependencies in the section for each subpackage.
>
> a bug in scipy_distutils.

Agreed.

> I would prefer that when defining an Extension,
> it should _explicitly_ define all the libraries that this extension
> module depends on (this will avoid these strange dependency
> problems).

I can't remember why we did this in the first place, but I'm almost sure it was a workaround. Explicitly specifying the needed libraries is a good idea as far as I can tell.

> Basically, scipy_distutils.setup should not use
> the 'fortran_libraries' keyword as an extension of the 'libraries' keyword.
> In fact, both these keywords should not be used when a package defines
> more than one extension module with different sets of libraries.
> If nobody minds, I'll try to fix this in scipy_distutils.
> What do you think?

So your plan is to specify the static fortran libraries that need to be built in fortran_libraries. Then the extension modules have to list these libraries again if they want to link them. Extension libraries *never* link against the fortran_libraries setting. It is only done against the libraries settings. Is this right? I say it is a reasonable plan. see ya, eric From eric at scipy.org Wed Feb 13 12:44:38 2002 From: eric at scipy.org (eric) Date: Wed, 13 Feb 2002 12:44:38 -0500 Subject: [SciPy-dev] some interesting routines in Python SOMNEC program References: <3C6AA9A4.1761AAAF@pythonemproject.com> Message-ID: <059101c1b4b6$1c02aef0$777ba8c0@ericlaptop>

> I've just finished a Python port of the NEC2 SOMNEC routine. I had a
> real headache resolving overlapping GO TO statements, but as far as I
> can tell it works. Please feel free to plagiarize any of the routines,
> including the Variable Width Romberg Integration. It's chock full of
> neat stuff, and begs to be rewritten in C. (no not f2c, that becomes
> unintelligible :)

It seems like you should just leverage the scipy.special library, but... I just checked and there are a few things missing. The first thing is bessel and hankel functions for complex arguments. The Fortran functions are there in amos (zbesh,cbesh,etc.), but they aren't exposed from the cephes interface.
Is there a reason for this? 2nd, there isn't a function to compute the derivative of the bessel function. Rob's somnec.BESSEL(z) returns both the bessel function and its derivative as results. Do you need a special algorithm to compute the derivative, or is it possible to compute it analytically by some relationship with the bessel function (seems like I remember something like this)? Also, would the functions in integrate work for calculating the integrals? This is a "real world" problem that SciPy should be able to handle in a few lines, correct? It is just a matter of making sure we have the correct algorithms underneath. see ya, eric From oliphant at ee.byu.edu Wed Feb 13 13:54:59 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 13 Feb 2002 13:54:59 -0500 (EST) Subject: [SciPy-dev] some interesting routines in Python SOMNEC program In-Reply-To: <059101c1b4b6$1c02aef0$777ba8c0@ericlaptop> Message-ID:

> > neat stuff, and begs to be rewritten in C. (no not f2c, that becomes
> > unintelligible :)
>
> It seems like you should just leverage the scipy.special library, but... I just
> checked and there are a few things missing. The first thing is bessel and
> hankel functions for complex arguments. The Fortran functions are there in amos
> (zbesh,cbesh,etc.), but they aren't exposed from the cephes interface. Is there
> a reason for this?

Actually, they are exposed. Are they not working on your platform?

>>> special.jv(4.0,3+1j)
(0.098910157742492094+0.14280019502387303j)
>>> special.hankel1(4.0,3+1j)
(-0.32517852499177413-0.44754176013594482j)

> 2nd, there isn't a function to compute the derivative of the bessel function.
> Rob's somnec.BESSEL(z) returns both the bessel function and its derivative as
> results. Do you need a special algorithm to compute the derivative, or is it
> possible to compute it analytically by some relationship with the bessel
> function (seems like I remember something like this)?
>
> Also, would the functions in integrate work for calculating the integrals? This
> is a "real world" problem that SciPy should be able to handle in a few lines,
> correct? It is just a matter of making sure we have the correct algorithms
> underneath.

Absolutely, you could use integrate.quad to compute the integrals nicely. The derivatives of any bessel function satisfy the recurrence relation:

d C_v(z)
-------- = ( C_(v-1)(z) - C_(v+1)(z) ) / 2
   dz
         = C_(v-1)(z) - v/z * C_v(z)
         = -C_(v+1)(z) + v/z * C_v(z)

where C is any of the bessel functions or modified bessel functions or any linear combination of them. (From Abramowitz and Stegun pg. 361) We could easily implement these derivatives if it is desired. -Travis
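The first form of that recurrence transcribes directly into a few lines on top of the existing special.jv, shown here for the Bessel function of the first kind -- the helper name jvp is ours, not scipy's:

from scipy import special

def jvp(v, z):
    # d/dz J_v(z) = ( J_(v-1)(z) - J_(v+1)(z) ) / 2
    return (special.jv(v-1, z) - special.jv(v+1, z)) / 2.0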
From eric at scipy.org Wed Feb 13 15:26:24 2002 From: eric at scipy.org (eric) Date: Wed, 13 Feb 2002 15:26:24 -0500 Subject: [SciPy-dev] some interesting routines in Python SOMNEC program References: Message-ID: <066b01c1b4cc$b5548db0$777ba8c0@ericlaptop>

> Actually, they are exposed. Are they not working on your platform?

Hmmm. Yes. Now I see that they are. I had done a grep for zbesh, and it didn't show up in the methods table, so I assumed none of them were there. Now I see the cbesh entries. Should we add the double complex versions?

> The derivatives of any bessel function satisfy the recurrence relation:
>
> d C_v(z)
> -------- = ( C_(v-1)(z) - C_(v+1)(z) ) / 2
>    dz
>          = C_(v-1)(z) - v/z * C_v(z)
>          = -C_(v+1)(z) + v/z * C_v(z)
>
> where C is any of the bessel functions or modified bessel functions or any
> linear combination of them.
>
> (From Abramowitz and Stegun pg. 361)

You had to look it up in a book?

> We could easily implement these derivatives if it is desired.

It might be. We can discuss this when we have a discussion about renaming the functions in special... see ya, eric From eric at scipy.org Wed Feb 13 15:41:20 2002 From: eric at scipy.org (eric) Date: Wed, 13 Feb 2002 15:41:20 -0500 Subject: [SciPy-dev] some interesting routines in Python SOMNEC program References: <3C6AA9A4.1761AAAF@pythonemproject.com> Message-ID: <067101c1b4ce$cb601960$777ba8c0@ericlaptop> Hey Rob, So based on Travis O's comments and some simplification, the following are drop-in replacements for somnec.BESSEL and somnec.HANKEL. Please note they aren't vectorized though, so don't pass them an array (you don't in your code, I don't believe).

from scipy import *
_sign = array((1,-1),typecode=Complex)

def bessel(z):
    return scipy.special.jv([0,1],z)*_sign

def hankel(z):
    return scipy.special.hankel1([0,1],z)*_sign

I know your purpose for the conversions is as much educational (for you and others) as it is for development, so I'm sure you want to keep the Python versions around -- there is just no need to convert them back to C unless you just want to, because scipy already has them wrapped. My cursory survey didn't reveal where I should try and plug in integrate.quad, so I didn't try. The comments say you're using "Shank's" algorithm to speed up convergence. SciPy may not have that currently. Do you have a feel if it is one we should add, or does it handle the same sorts of problems as quad? eric From pearu at cens.ioc.ee Wed Feb 13 17:29:20 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 14 Feb 2002 00:29:20 +0200 (EET) Subject: [SciPy-dev] Getting rid of parasite libraries when linking In-Reply-To: <057101c1b4b0$a2310590$777ba8c0@ericlaptop> Message-ID: Eric, On Wed, 13 Feb 2002, eric wrote:

> So your plan is to specify the static fortran libraries that need to be built in
> fortran_libraries. Then the extension modules have to list these libraries again
> if they want to link them.

Yes. Extension modules had to do it anyway when using Python 2.2 (see my previous fix). However, I made it a bit easier: while defining fortran_libraries, one can also use the 'libraries' keyword, which is interpreted as "this fortran_library needs to be linked also with libraries from the 'libraries' list". See integrate/setup_integrate.py for examples.

> Extension libraries *never* link against the
> fortran_libraries setting. It is only done against the libraries settings. Is this
> right? I say it is a reasonable plan.

Ok, this cunning plan is now implemented and committed to CVS. Works for Python 2.1 and 2.2. However, note that C libraries are still linked against all extension modules.
For example, -lc_misc -lcephes -lgist (and only these) appear in all linking execution arguments. Actually this is a problem only when using the top level setup.py and we should also consider fixing this for C libraries. It makes little sense to use a global libraries list anyway, I think. Or can you think of any possible application for using global libraries and library_dirs keywords in the toplevel setup.py? Or should we invent c_libraries? Or do you have any better idea how to get rid of these parasite libraries? Pearu From rob at pythonemproject.com Wed Feb 13 18:35:05 2002 From: rob at pythonemproject.com (Rob) Date: Wed, 13 Feb 2002 15:35:05 -0800 Subject: [SciPy-dev] some interesting routines in Python SOMNEC program References: <3C6AA9A4.1761AAAF@pythonemproject.com> <067101c1b4ce$cb601960$777ba8c0@ericlaptop> Message-ID: <3C6AF829.D182063@pythonemproject.com> Hi Eric, I may try out the Bessel and Hankel functions. The SOMNEC program runs for 190sec on my 1.8GHz P4 at work, 800sec on my 1GHz P3 laptop :) I haven't profiled it yet, but there is little to vectorize in the program, other than some array copying loops. Of course the FORTRAN version executes instantly. The SOMNEC port was my first step at trying to get the Sommerfeld-Norton type real ground in my ASAP-Python wire antenna simulator. Now I also have some NEC-2 routines ported to Python which are involved in the S-N ground calculations, but it's going to take me a long time to figure all of this out. It may be that what I am trying to do is impossible. For now I am going to stick with straight Python/Numpy. Later once I figure out how everything works, I can try to integrate SciPy routines and other stuff.. Rob. eric wrote:

> Hey Rob,
>
> So based on Travis O's comments and some simplification, the following are drop-in
> replacements for somnec.BESSEL and somnec.HANKEL. Please note they aren't
> vectorized though, so don't pass them an array (you don't in your code, I don't
> believe).
>
> from scipy import *
> _sign = array((1,-1),typecode=Complex)
>
> def bessel(z):
>     return scipy.special.jv([0,1],z)*_sign
>
> def hankel(z):
>     return scipy.special.hankel1([0,1],z)*_sign
>
> I know your purpose for the conversions is as much educational (for you and
> others) as it is for development, so I'm sure you want to keep the Python
> versions around -- there is just no need to convert them back to C unless you
> just want to, because scipy already has them wrapped.
>
> My cursory survey didn't reveal where I should try and plug in integrate.quad,
> so I didn't try. The comments say you're using "Shank's" algorithm to speed up
> convergence. SciPy may not have that currently. Do you have a feel if it is
> one we should add, or does it handle the same sorts of problems as quad?
>
> eric
>
> ----- Original Message -----
> From: "Rob"
> To:
> Sent: Wednesday, February 13, 2002 1:00 PM
> Subject: [SciPy-dev] some interesting routines in Python SOMNEC program
>
> > I've just finished a Python port of the NEC2 SOMNEC routine. I had a
> > real headache resolving overlapping GO TO statements, but as far as I
> > can tell it works. Please feel free to plagiarize any of the routines,
> > including the Variable Width Romberg Integration. It's chock full of
> > neat stuff, and begs to be rewritten in C. (no not f2c, that becomes
> > unintelligible :)
> >
> > Rob.
> > -- > > The Numeric Python EM Project > > > > www.pythonemproject.com > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-dev > > -- The Numeric Python EM Project www.pythonemproject.com From heiko at hhenkelmann.de Thu Feb 14 02:26:52 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Thu, 14 Feb 2002 08:26:52 +0100 Subject: [SciPy-dev] CVS problem Message-ID: <001301c1b528$f9768d20$201be33e@arrow> Hello there, thank you for all of your support on fixing my build problems. I've been experimenting a bit and found out that also the order of the libraries has a big impact. Anyway, I tried to update my sandbox and got the following message: cvs -z7 update -P -d (in directory C:\home\henkelma\projects\python\scipy\) ? ChangeLog cvs server: Updating . cvs server: Updating blas cvs server: Updating blas/SRC cvs server: Updating cluster cvs server: Updating cluster/doc cvs server: Updating cluster/docs cvs server: Updating cluster/src cvs server: Updating cluster/tests cvs server: Updating clustering cvs server: Updating clustering/doc cvs server: Updating clustering/tests cvs server: Updating common cvs server: Updating common/doc cvs server: Updating common/tests cvs server: Updating compiler cvs server: Updating compiler/CXX cvs server: Updating compiler/blitz-20001213 cvs server: Updating compiler/blitz-20001213/blitz cvs server: Updating compiler/blitz-20001213/blitz/array cvs server: Updating compiler/blitz-20001213/blitz/meta cvs server: Updating compiler/doc cvs server: Updating compiler/examples cvs server: Updating compiler/scxx cvs server: Updating compiler/swig cvs server: Updating compiler/tests cvs server: Updating cow cvs server: Updating cow/doc cvs server: Updating cow/tests cvs server: Updating doc cvs server: Updating fastumath cvs server: Updating fft cvs server: Updating fftw cvs server: Updating ga cvs server: Updating gplt cvs server: Updating gui_thread cvs server: Updating gui_thread/tests cvs server: Updating gui_thread/thread_tests cvs server: Updating integrate cvs server: Updating integrate/linpack_lite cvs server: Updating integrate/mach cvs server: Updating integrate/odepack cvs server: Updating integrate/quadpack cvs server: Updating interpolate cvs server: Updating interpolate/fitpack cvs server: Updating io cvs server: Updating io/docs cvs server: Updating io/tests cvs server: Updating linalg cvs server: Updating linalg/docs cvs server: Updating linalg/tests cvs server: Updating linalg2 cvs server: Updating optimize cvs server: Updating optimize/minpack cvs server: Updating plt cvs server: Updating pyunit-1.1.0 cvs server: Updating pyunit-1.1.0/doc cvs server: Updating pyunit-1.1.0/examples cvs server: Updating pyunit-1.3.1 cvs server: Updating pyunit-1.3.1/doc cvs server: Updating pyunit-1.3.1/examples cvs server: Updating scipy_distutils cvs server: Updating scipy_distutils/command cvs server: Updating scipy_test cvs server: Updating signal cvs server: Updating signal/docs cvs server: Updating signal/tests cvs server: Updating signaltools cvs server: Updating sparse cvs server: Updating sparse/SuperLU cvs server: Updating sparse/SuperLU/CBLAS cvs server: Updating sparse/SuperLU/INSTALL cvs server: Updating sparse/SuperLU/SRC cvs server: Updating sparse/UMFPACK2.2 cvs server: Updating sparse/sparsekit cvs server: Updating special cvs server: Updating special/amos cvs server: Updating special/c_misc cvs server: Updating special/cephes cvs server: 
Updating special/cephes/not_used cvs server: Updating special/docs cvs server: Updating special/mach cvs server: failed to create lock directory in repository `/home/cvsroot/world/scipy/special/mach': Permission denied cvs server: failed to obtain dir lock in repository `/home/cvsroot/world/scipy/special/mach' cvs [server aborted]: read lock failed - giving up *****CVS exited normally with code 1***** Any idea what's going on there? Thanx Heiko From eric at scipy.org Thu Feb 14 02:10:42 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 02:10:42 -0500 Subject: [SciPy-dev] CVS problem References: Message-ID: <072001c1b526$b788dff0$777ba8c0@ericlaptop> I fixed the files in question. I thought we fixed this earlier globally, but apparently not. Do any Unix gurus know what magic incantation is needed on /home/cvsroot/world/scipy to force all created files in scipy or one of its sub-directories to have users as the group? thanks, eric ----- Original Message ----- From: "Pearu Peterson" To: Cc: Sent: Thursday, February 14, 2002 3:03 AM Subject: Re: [SciPy-dev] CVS problem

> On Thu, 14 Feb 2002, Heiko Henkelmann wrote:
>
> > Hello there,
> >
> > thank you for all of your support on fixing my build problems. I've been
> > experimenting a bit and found out that also the order of the libraries has a
> > big impact.
> >
> > Anyway, I tried to update my sandbox and got the following message:
> >
> > cvs -z7 update -P -d (in directory C:\home\henkelma\projects\python\scipy\)
> > cvs server: Updating special/mach
> > cvs server: failed to create lock directory in repository
> > `/home/cvsroot/world/scipy/special/mach': Permission denied
> > cvs server: failed to obtain dir lock in repository
> > `/home/cvsroot/world/scipy/special/mach'
> > cvs [server aborted]: read lock failed - giving up
> >
> > *****CVS exited normally with code 1*****
> >
> > Any idea what's going on there?
>
> Maybe I have. It appears that the group id of special/mach is 'pearu' but
> it should be 'users'. The cause of this problem is that the parent
> directory 'special' has wrong gid permissions:
>
> drwxrwxr-x 9 travo users 4096 Feb 14 01:36 special
>        ^
>        \_ this should be 's'
>
> which results in wrong GIDs for subdirectories. The same problem seems to be
> also in other places.
>
> _All_ files in CVS must have GID of 'users' and _all_ directories in CVS
> must have 'set group ID on execution' set on.
> Can you fix it?
>
> Regards,
> Pearu

From peterson at math.utwente.nl Thu Feb 14 03:28:46 2002 From: peterson at math.utwente.nl (Pearu Peterson) Date: Thu, 14 Feb 2002 09:28:46 +0100 (CET) Subject: [SciPy-dev] CVS problem In-Reply-To: <072001c1b526$b788dff0$777ba8c0@ericlaptop> Message-ID: Eric, On Thu, 14 Feb 2002, eric wrote:

> I thought we fixed this earlier globally, but apparently not. Do any Unix gurus
> know what magic incantation is needed on /home/cvsroot/world/scipy to
> force all created files in scipy or one of its sub-directories to have users as
> the group?

chmod g+s

does the trick.

cd /home/cvsroot/world/scipy && ls -l | grep drwxrwxr-x

shows directories that need that trick. Do that also for subdirectories. For example,

cd /home/cvsroot/world/scipy && ls -lR | grep drwxrwxr-x | wc

shows that there are 48 directories that need this fix.
Sorry, but I don't know how to apply this patch recursively :-( Pearu From prabhu at aero.iitm.ernet.in Thu Feb 14 03:32:19 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 14 Feb 2002 14:02:19 +0530 Subject: [SciPy-dev] CVS problem In-Reply-To: <072001c1b526$b788dff0$777ba8c0@ericlaptop> References: <072001c1b526$b788dff0$777ba8c0@ericlaptop> Message-ID: <15467.30227.31759.365314@monster.linux.in> >>>>> "eric" == eric writes: eric> I fixed the files in question. I thought we fixed this eric> earlier globally, but apparently not. Any Unix gurus that eric> know what the magic incantation needed on eric> /home/cvsroot/world/scipy to force all created files in eric> scipy or one of its sub-directories to have users as the eric> group?

# cd /home/cvsroot/world
# find scipy/ -type d -exec chmod 2775 "{}" ';'

Should work I guess. prabhu From eric at scipy.org Thu Feb 14 02:33:14 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 02:33:14 -0500 Subject: [SciPy-dev] CVS problem References: Message-ID: <073801c1b529$de81f530$777ba8c0@ericlaptop> Ok. I think... I ran this, which applies the command recursively.

chmod -R g+s scipy

Now everything looks something like this.

drwxrwsr-x 2 ej users 4096 Feb 14 02:09 doc

The potential mistake is that it did this to all the files in the directories too. oops. Any harm done by this? eric ----- Original Message ----- From: "Pearu Peterson" To: "eric" Sent: Thursday, February 14, 2002 3:28 AM Subject: Re: [SciPy-dev] CVS problem > > Eric, > > On Thu, 14 Feb 2002, eric wrote: > > > I thought we fixed this earlier globally, but apparently not. Any Unix gurus > > that know what the magic incantation needed on /home/cvsroot/world/scipy to > > force all created files in scipy or one of its sub-directories to have users as > > the group? > > chmod g+s > > does the trick. > > cd /home/cvsroot/world/scipy && ls -l | grep drwxrwxr-x > > shows directories that need that trick. Do that also for subdirectories. > For example, > > cd /home/cvsroot/world/scipy && ls -lR | grep drwxrwxr-x | wc > > shows that there are 48 directories that need this fix. > Sorry, but I don't know how to apply this patch recursively :-( > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From prabhu at aero.iitm.ernet.in Thu Feb 14 03:42:51 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 14 Feb 2002 14:12:51 +0530 Subject: [SciPy-dev] CVS problem In-Reply-To: <073801c1b529$de81f530$777ba8c0@ericlaptop> References: <073801c1b529$de81f530$777ba8c0@ericlaptop> Message-ID: <15467.30859.466238.763994@monster.linux.in> >>>>> "eric" == eric writes: eric> Ok. I think... I ran this, which applies the command eric> recursively. chmod -R g+s scipy eric> Now everything looks something like this. eric> drwxrwsr-x 2 ej users 4096 Feb 14 02:09 doc eric> The potential mistake is that it did this to all the files eric> in the directories too. oops. Any harm done by this? Nothing serious afaik, but you have setgid'd *all* files with the group as the owner. I'm not sure what consequences this has but maybe just do this to get things fixed.

# chmod -R g-s scipy
# find scipy/ -type d -exec chmod g+s '{}' ';'

Find is your friend, but be careful before you use it and generally avoid doing system administration late at night. Sometimes all is lost before you realize it's too late.
:) Biggest problem is that sometimes mistakes can be unrecoverable. prabhu From pearu at cens.ioc.ee Thu Feb 14 03:43:54 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 14 Feb 2002 10:43:54 +0200 (EET) Subject: [SciPy-dev] CVS problem In-Reply-To: <073801c1b529$de81f530$777ba8c0@ericlaptop> Message-ID: On Thu, 14 Feb 2002, eric wrote: > Ok. I think... I ran this, which applies the command recursively. > > chmod -R g+s scipy > > Now everything looks something like this. > > drwxrwsr-x 2 ej users 4096 Feb 14 02:09 doc > > The potential mistake is that it did this to all the files in the directories > too. oops. Any harm done by this?

cd /home/cvsroot/world
find scipy/ -type f -exec chmod 0444 "{}" ';'

should fix this oops. Right, Prabhu? Pearu From eric at scipy.org Thu Feb 14 02:48:14 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 02:48:14 -0500 Subject: [SciPy-dev] CVS problem References: <073801c1b529$de81f530$777ba8c0@ericlaptop> <15467.30859.466238.763994@monster.linux.in> Message-ID: <075201c1b52b$f5880d80$777ba8c0@ericlaptop> Hey Prabhu and Pearu, > # chmod -R g-s scipy > > # find scipy/ -type d -exec chmod g+s '{}' ';' Done. Thanks. Things look better now. Let me know if any other hangups occur. see ya, eric ----- Original Message ----- From: "Prabhu Ramachandran" To: Sent: Thursday, February 14, 2002 3:42 AM Subject: Re: [SciPy-dev] CVS problem > >>>>> "eric" == eric writes: > > eric> Ok. I think... I ran this, which applies the command > eric> recursively. chmod -R g+s scipy > > eric> Now everything looks something like this. > > eric> drwxrwsr-x 2 ej users 4096 Feb 14 02:09 doc > > eric> The potential mistake is that it did this to all the files > eric> in the directories too. oops. Any harm done by this? > > Nothing serious afaik, but you have setgid'd *all* files with the group > as the owner. I'm not sure what consequences this has but maybe just > do this to get things fixed. > > # chmod -R g-s scipy > > # find scipy/ -type d -exec chmod g+s '{}' ';' > > Find is your friend, but be careful before you use it and generally > avoid doing system administration late at night. Sometimes all is > lost before you realize it's too late. :) Biggest problem is that > sometimes mistakes can be unrecoverable. > > prabhu > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at cens.ioc.ee Thu Feb 14 03:51:24 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 14 Feb 2002 10:51:24 +0200 (EET) Subject: [SciPy-dev] CVS problem In-Reply-To: Message-ID: On Thu, 14 Feb 2002, Pearu Peterson wrote: > > too. oops. Any harm done by this? > > cd /home/cvsroot/world > find scipy/ -type f -exec chmod 0444 "{}" ';' > > should fix this oops. Right, Prabhu? Don't do this! This will clear all executable bits that some files may have. I guess I am still sleeping, it's almost 10am here though ;-) Pearu From eric at scipy.org Thu Feb 14 04:34:49 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 04:34:49 -0500 Subject: [SciPy-dev] Getting rid of parasite libraries when linking References: Message-ID: <078d01c1b53a$d972e200$777ba8c0@ericlaptop> > However, note that C libraries are still linked against all extension > modules. For example, > -lc_misc -lcephes -lgist > (and only these) appear in all linking execution arguments. Actually this > is a problem only when using the top level setup.py and we should also > consider fixing this for C libraries.
It makes little sense to use a global > libraries list anyway, I think. Or can you think of any possible application > for using global libraries and library_dirs keywords in the toplevel > setup.py? Or should we invent c_libraries? Or do you have any better idea > how to get rid of these parasite libraries? Seems like a c_libraries idea would parallel our needs a little more, but it isn't "standard". Not that that has stopped us from re-inventing half of distutils, but it does make you pause. I'm a 0 on this one (+1 being for it, 0 neutral, -1 against, as on the python-dev list). eric From eric at scipy.org Thu Feb 14 04:27:19 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 04:27:19 -0500 Subject: [SciPy-dev] rough cut at accelerated function class Message-ID: <077301c1b53a$0fef0da0$777ba8c0@ericlaptop> Hey Pat, proto.py is a rough cut of the accelerated function class. Look at test2() to see how it can work with weave.inline to build a C extension that calls a Python function which has been compiled to C (Who's on first...?). The extension module it builds for a compiled function provides both by-reference and by-value functions for the underlying C function so that both C and Fortran can use it for callbacks. Currently the Fortran version (by reference) calls the by-value version. I think we should inject the same code into each so that Fortran callbacks are not slower than C callbacks. We'll do this once we've refactored the code a little to make it easier. All callback functions are accessible through the Python object, so Pearu doesn't need to do anything besides call a Python method on the object to get a pointer to the accelerated (pure C) function. This means no C header or source file dependencies, etc. The speedup for an array reduction using a simple function like

def my_add(a,b):
    return a + b

is a factor of 17-20 on my machine when using the C callback routine compared to a Python callback. I've made some changes to bytecodecompiler.py. I'm sure you have too, so we'll need to merge them. The latest CVS of weave is needed for this code to work. There is some code overlap between weave's type handling system and bytecodecompiler's. They need to be merged pretty soon here. Quite a bit of work remains, but it is definitely taking shape. Things not implemented:

  caching to disk
  most of the calling machinery for calls in Python doesn't exist
  tons of re-factoring and cleanup needed

see ya, eric -- Eric Jones Enthought, Inc. [www.enthought.com and www.scipy.org] (512) 536-1057 -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: bytecodecompiler.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: proto.py URL: From pearu at cens.ioc.ee Thu Feb 14 07:16:45 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 14 Feb 2002 14:16:45 +0200 (EET) Subject: [SciPy-dev] Getting rid of parasite libraries when linking In-Reply-To: <078d01c1b53a$d972e200$777ba8c0@ericlaptop> Message-ID: On Thu, 14 Feb 2002, eric wrote: > > However, note that C libraries are still linked against all extension > > modules. For example, > > -lc_misc -lcephes -lgist > > (and only these) appear in all linking execution arguments. Actually this > > is a problem only when using the top level setup.py and we should also > > consider fixing this for C libraries.
Or can you think of any possible application > > for using global libraries and library_dirs keywords in the toplevel > > setup.py? Or should we invent c_libraries? Or do you have any better idea > > how to get rid of these parasite libraries? > > Seems like a c_libraries idea would parallel our needs a little more, but it > isn't "standard". Not that that has stopped us from re-inventing half of > distutils, but it does make you pause. I'm a 0 on this one (+1 being for it, 0 > neutral, -1 against, as on the python-dev list). When scanning the python docs about this, I found no hints about how to proceed from our situation. This is also expected because the toplevel scipy setup.py is like "a non-standard super" setup.py. Namely, the standard distutils does not support packages of extension packages (certainly it supports packages of pure packages). So, scipy_distutils is not quite re-inventing distutils but more like extending it for packaging of extension packages. Inventing c_libraries certainly does not break the "standard" but I have no clue how the standard distutils will evolve on supporting extension packages (if at all). And it is certainly better to use standardized hooks. On the other hand, scipy_distutils has passed by distutils on this issue and maybe distutils should copy our experience from scipy_distutils. Though, I find this rather unlikely based on my past experience when pinging distutils about supporting Fortran stuff. One option would also be to soft-break the "standard" by using the existing 'libraries' keyword in a similar way as 'fortran_libraries'. This hack would be easiest to implement. In fact, the current scipy building process already uses the 'libraries' keyword as if it were a 'c_libraries' keyword, it just has this nasty effect of spreading the libraries to all linking calls. Pearu From heiko at hhenkelmann.de Thu Feb 14 15:14:42 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Thu, 14 Feb 2002 21:14:42 +0100 Subject: [SciPy-dev] Bug in ext_tools.py Message-ID: <000d01c1b594$3d668240$4e11e33e@arrow> Hello there, I found a bug in line 20 of ext_tools.py. The ] should be a , Heiko From heiko at hhenkelmann.de Thu Feb 14 15:20:54 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Thu, 14 Feb 2002 21:20:54 +0100 Subject: [SciPy-dev] Problem with flapack Message-ID: <001501c1b595$286ef740$4e11e33e@arrow> Sorry guys for being such a troublemaker. After tweaking a couple of linker command lines I've been able to build scipy. Now I'm facing the following problem:

>>> from scipy import signal as s
>>> s.roots([1,2,3,4])
array_from_pyobj:intent(inout) array must be contiguous and with a proper type and size.
Traceback (most recent call last):
  File "", line 1, in ?
  File "C:\USR\PYTHON21\scipy\basic1a.py", line 48, in roots
    root[:N-1] = eig(A)[0]
  File "C:\USR\PYTHON21\scipy\linalg\linear_algebra.py", line 439, in eig
    results = ev(a, jobvl='N', jobvr=vchar, lwork=-1) # query
flapack.error: failed in converting 1st argument `a' of flapack.zgeev to C/Fortran array
>>>

Is this something you have seen before? What am I doing wrong? Heiko From eric at scipy.org Thu Feb 14 14:28:23 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 14:28:23 -0500 Subject: [SciPy-dev] Bug in ext_tools.py References: <000d01c1b594$3d668240$4e11e33e@arrow> Message-ID: <086a01c1b58d$c49156c0$777ba8c0@ericlaptop> Yep, that snuck in yesterday sometime. It was fixed an hour or so ago.
Thanks, eric ----- Original Message ----- From: "Heiko Henkelmann" To: Sent: Thursday, February 14, 2002 3:14 PM Subject: [SciPy-dev] Bug in ext_tools.py > Hello there, > > I found a bug in line 20 of ext_tools.py. The ] should be a , > > > Heiko > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From eric at scipy.org Thu Feb 14 14:30:27 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 14:30:27 -0500 Subject: [SciPy-dev] Problem with flapack References: <001501c1b595$286ef740$4e11e33e@arrow> Message-ID: <087001c1b58e$10bd00d0$777ba8c0@ericlaptop> > Sorry guys for being such a troublemaker. No need to apologize. This is *exactly* what SciPy needs to shake out all the issues. > After tweaking a couple of linker > command lines I've been able to build scipy. Now I'm facing the following > problem: > > > >>> from scipy import signal as s > >>> s.roots([1,2,3,4]) > array_from_pyobj:intent(inout) array must be contiguous and with a proper > type and size. > Traceback (most recent call last): > File "", line 1, in ? > File "C:\USR\PYTHON21\scipy\basic1a.py", line 48, in roots > root[:N-1] = eig(A)[0] > File "C:\USR\PYTHON21\scipy\linalg\linear_algebra.py", line 439, in eig > results = ev(a, jobvl='N', jobvr=vchar, lwork=-1) # query > flapack.error: failed in converting 1st argument `a' of flapack.zgeev to > C/Fortran array > >>> > > Is this something you have seen before? What am I doing wrong? Does it work if you try: > > Heiko > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From eric at scipy.org Thu Feb 14 14:33:31 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 14:33:31 -0500 Subject: [SciPy-dev] Problem with flapack Message-ID: <087801c1b58e$7f50a6a0$777ba8c0@ericlaptop> Oops. The last message was somehow sent before it was ready to go... > > Sorry guys for being such a troublemaker. No need to apologize. This is *exactly* what SciPy needs to shake out all the issues. > > After tweaking a couple of linker > > command lines I've been able to build scipy. Now I'm facing the following > > problem: > > > > > > >>> from scipy import signal as s > > >>> s.roots([1,2,3,4]) > > array_from_pyobj:intent(inout) array must be contiguous and with a proper > > type and size. > > Traceback (most recent call last): > > File "", line 1, in ? > > File "C:\USR\PYTHON21\scipy\basic1a.py", line 48, in roots > > root[:N-1] = eig(A)[0] > > File "C:\USR\PYTHON21\scipy\linalg\linear_algebra.py", line 439, in eig > > results = ev(a, jobvl='N', jobvr=vchar, lwork=-1) # query > > flapack.error: failed in converting 1st argument `a' of flapack.zgeev to > > C/Fortran array > > >>> > > > > Is this something you have seen before? What am I doing wrong? > It might be that you are passing in a list (although that *should* be a valid thing to do). Does it work if you try:

>>> from scipy import *
>>> from scipy import signal as s
>>> s.roots(array([1.,2.,3.,4.]))

eric From heiko at hhenkelmann.de Thu Feb 14 15:43:24 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Thu, 14 Feb 2002 21:43:24 +0100 Subject: [SciPy-dev] Problem with flapack References: <087801c1b58e$7f50a6a0$777ba8c0@ericlaptop> Message-ID: <002d01c1b598$3fe7d920$4e11e33e@arrow> > > It might be that you are passing in a list (although that *should* be a valid > thing to do).
> Does it work if you try: > >>> from scipy import * > >>> from scipy import signal as s > >>> s.roots(array([1.,2.,3.,4.]) > Nope: >>> from scipy import * >>> from scipy import signal as s >>> s.roots(array([1.,2.,3.,4.])) array_from_pyobj:intent(inout) array must be contiguous and with a proper type and size. Traceback (most recent call last): File "", line 1, in ? File "C:\USR\PYTHON21\scipy\basic1a.py", line 48, in roots root[:N-1] = eig(A)[0] File "C:\USR\PYTHON21\scipy\linalg\linear_algebra.py", line 439, in eig results = ev(a, jobvl='N', jobvr=vchar, lwork=-1) # query flapack.error: failed in converting 1st argument `a' of flapack.zgeev to C/Fortran array >>> Heiko From pearu at cens.ioc.ee Thu Feb 14 15:47:01 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 14 Feb 2002 22:47:01 +0200 (EET) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <002d01c1b598$3fe7d920$4e11e33e@arrow> Message-ID: On Thu, 14 Feb 2002, Heiko Henkelmann wrote: > > > > It might be that you are passing in a list (although that *should* be a > valid > > thing to do). > > Does it work if you try: > > >>> from scipy import * > > >>> from scipy import signal as s > > >>> s.roots(array([1.,2.,3.,4.]) > > > > Nope: > > >>> from scipy import * > >>> from scipy import signal as s > >>> s.roots(array([1.,2.,3.,4.])) linear_algebra.py is broken. If you stay tuned, I'll make a new f2py release right now and then I'll get back to you (unless somebody else will fix linear_algebra.py first, the fix should be trivial). Pearu From eric at scipy.org Thu Feb 14 14:49:42 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 14:49:42 -0500 Subject: [SciPy-dev] Problem with flapack References: Message-ID: <08a801c1b590$c21a8300$777ba8c0@ericlaptop> > > linear_algebra.py is broken. If you stay tuned, I'll make a new f2py > release right now and then I'll get back to you (unless somebody else will > fix linear_algebra.py first, the fix should be trivial). Will your f2py release solve the problem, or do we need to fix linalg also? eric From loredo at astrosun.astro.cornell.edu Thu Feb 14 16:15:16 2002 From: loredo at astrosun.astro.cornell.edu (Tom Loredo) Date: Thu, 14 Feb 2002 16:15:16 -0500 (EST) Subject: [SciPy-dev] Python 2.2 compatibility? Message-ID: <200202142115.g1ELFGe14116@laplace.astro.cornell.edu> Hi folks- In my continuing attempt to build scipy with Python 2.2 on Solaris.... Okay, now it all builds (after changing the library list to eliminate "f90") and installs. It just won't load! On "import scipy" I get: Traceback (most recent call last): File "", line 1, in ? File "/home/laplace/lib/python2.2/site-packages/scipy/__init__.py", line 42, in ? from misc import * File "/home/laplace/lib/python2.2/site-packages/scipy/misc.py", line 21, in ? import scipy.stats File "/home/laplace/lib/python2.2/site-packages/scipy/stats/__init__.py", line 4, in ? from stats import * File "/home/laplace/lib/python2.2/site-packages/scipy/stats/stats.py", line 204, in ? import math, string, sys, pstat, copy File "/home/laplace/lib/python2.2/site-packages/scipy/stats/pstat.py", line 176 exec execstring SyntaxError: unqualified exec is not allowed in function 'colex' it contains a nested function with free variables Looks like the pstat.py module is unhappy with how 2.2 enforces scoping. Has this been fixed? The scipy install page says it works with "version 2.1 or higher" but if 2.2 requires fixes that are only in CVS, perhaps it's time to release a more up-to-date snapshot. 
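For reference, a minimal sketch of the scoping clash behind Tom's traceback (the function and variable names here are hypothetical, not pstat's actual code). Under the nested scopes that Python 2.2 enables by default, a function containing an unqualified exec may not also contain a nested function with free variables, because the compiler can no longer tell where names are bound; the first definition below refuses to compile with exactly the quoted SyntaxError, while giving exec an explicit namespace sidesteps it:

def colex_broken(listoflists, cnums):
    def pick(row):                 # nested function; cnums is a free variable
        return row[cnums]
    execstring = 'result = map(pick, listoflists)'
    exec execstring                # unqualified exec -> SyntaxError at compile time

def colex_fixed(listoflists, cnums):
    def pick(row):
        return row[cnums]
    ns = {'pick': pick, 'listoflists': listoflists}
    exec 'result = map(pick, listoflists)' in ns   # qualified exec is allowed
    return ns['result']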
Thanks, Tom Loredo PS: One other minor obstacle I encountered: Scipy requires both the single and double precision libraries for FFTW. No problem, the FFTW install instructions tell you explicitly how to install both versions. However, the recommended installation names the double version "dfftw" and the single one "sfftw." Scipy wants the double one named "fftw". It's no problem to do that, but perhaps this requirement should be mentioned on the install page, since it is not how FFTW suggests a dual-precision installation be done. Or the setup script should be changed to reflect the FFTW recommendations. From pearu at cens.ioc.ee Thu Feb 14 15:59:08 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 14 Feb 2002 22:59:08 +0200 (EET) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <08a801c1b590$c21a8300$777ba8c0@ericlaptop> Message-ID: On Thu, 14 Feb 2002, eric wrote: > > > > linear_algebra.py is broken. If you stay tuned, I'll make a new f2py > > release right now and then I'll get back to you (unless somebody else will > > fix linear_algebra.py first, the fix should be trivial). > > Will your f2py release solve the problem, or do we need to fix linalg also? Nope. The bug is in linalg if used with the latest f2py, as f2py is now more strict about intent(inout) arguments (which are in fact deprecated). These problems will go away when linalg2 is finished (it is missing only a few lapack wrappers). Pearu From pearu at cens.ioc.ee Thu Feb 14 16:29:01 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 14 Feb 2002 23:29:01 +0200 (EET) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <08a801c1b590$c21a8300$777ba8c0@ericlaptop> Message-ID: On Thu, 14 Feb 2002, eric wrote: > > > > linear_algebra.py is broken. If you stay tuned, I'll make a new f2py > > release right now and then I'll get back to you (unless somebody else will > > fix linear_algebra.py first, the fix should be trivial). > > Will your f2py release solve the problem, or do we need to fix linalg also? Ok, turns out that the fix is not trivial after all, because in January f2py changed how it deals with multi-dimensional arrays and the current linalg/flapack uses the old convention. It means that linalg2 must be finished ASAP. If one really needs linalg right now then try downgrading f2py, say to F2PY-2.8.172, or 2.10, but I doubt that it will work as then you'll have trouble with scipy_distutils, I guess. Pearu From eric at scipy.org Thu Feb 14 15:32:58 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 15:32:58 -0500 Subject: [SciPy-dev] Problem with flapack References: Message-ID: <08c201c1b596$ca1043f0$777ba8c0@ericlaptop> > > > > > > linear_algebra.py is broken. If you stay tuned, I'll make a new f2py > > > release right now and then I'll get back to you (unless somebody else will > > > fix linear_algebra.py first, the fix should be trivial). > > > > Will your f2py release solve the problem, or do we need to fix linalg also? > > Ok, turns out that the fix is not trivial after all, because in January > f2py changed how it deals with multi-dimensional arrays and the current > linalg/flapack uses the old convention. It means that linalg2 must be > finished ASAP. Ok. I agree. I'm working on weave today, but can commit to working some on linalg tomorrow. I would like to work on the interface exposed by linear_algebra quite a bit anyway. Can you give me a few pointers as to what needs work on the wrappers, and I'll hit those first.
thanks, eric From pearu at cens.ioc.ee Thu Feb 14 16:42:36 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 14 Feb 2002 23:42:36 +0200 (EET) Subject: [SciPy-dev] Problem with flapack In-Reply-To: Message-ID: On Thu, 14 Feb 2002, Pearu Peterson wrote: > Ok, turns out that the fix is not trivial after all because in January I was able to figure out a simple fix but I am not sure that the results will be correct. The fix is now in scipy CVS. See http://scipy.net/cgi-bin/viewcvs.cgi/scipy/linalg/linear_algebra.py.diff?r1=1.13&r2=1.14 for details. Here is the output of Heiko's test: >>> from scipy import signal as s >>> s.roots([1,2,3,4]) array([-0.1746854 -1.54686889e+00j, -0.1746854 +1.54686889e+00j, -1.65062919 -3.31070451e-16j]) Is this correct result? Regards, Pearu From eric at scipy.org Thu Feb 14 15:46:50 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 15:46:50 -0500 Subject: [SciPy-dev] Python 2.2 compatibility? References: <200202142115.g1ELFGe14116@laplace.astro.cornell.edu> Message-ID: <08c901c1b598$ba66cbc0$777ba8c0@ericlaptop> > > Hi folks- > > In my continuing attempt to build scipy with Python 2.2 on > Solaris.... Okay, now it all builds (after changing the > library list to eliminate "f90") and installs. It just > won't load! On "import scipy" I get: > > Traceback (most recent call last): > File "", line 1, in ? > File "/home/laplace/lib/python2.2/site-packages/scipy/__init__.py", line 42, in ? > from misc import * > File "/home/laplace/lib/python2.2/site-packages/scipy/misc.py", line 21, in ? > import scipy.stats > File "/home/laplace/lib/python2.2/site-packages/scipy/stats/__init__.py", line 4, in ? > from stats import * > File "/home/laplace/lib/python2.2/site-packages/scipy/stats/stats.py", line 204, in ? > import math, string, sys, pstat, copy > File "/home/laplace/lib/python2.2/site-packages/scipy/stats/pstat.py", line 176 > exec execstring > SyntaxError: unqualified exec is not allowed in function 'colex' it contains a nested function with free variables > > Looks like the pstat.py module is unhappy with how 2.2 enforces > scoping. Has this been fixed? Don't think so. Man do we need to get stats cleaned up. > The scipy install page says > it works with "version 2.1 or higher" I changed this to say version 2.1.x. Probably not the fix you were looking for... > but if 2.2 requires fixes > that are only in CVS, perhaps it's time to release a more > up-to-date snapshot. The CVS is broken for all versions of Python right now. Once linalg is cleaned up, perhaps we should release a snapshot. Also, fixing the pstat problem with 2.2 would be good. > PS: One other minor obstacle I encountered: Scipy requires > both the single and double precision libraries for FFTW. No > problem, the FFTW install instructions tell you explicitly > how to install both versions. However, the recommended > installation names the double version "dfftw" and the > single one "sfftw." The chosen approach was used because this is how most of the fftw packages were built (rpm, etc.) and we wanted to be compatible with what people were installing. > Scipy wants the double one named > "fftw". It's no problem to do that, but perhaps this > requirement should be mentioned on the install page, since > it is not how FFTW suggests a dual-precision installation > be done. Or the setup script should be changed to reflect > the FFTW recommendations. We mention how to build them on the build page. I've edited the binary install page to point at this now for fftw. Thanks for all your comments. 
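A quick way to double-check output like the roots Pearu printed for [1,2,3,4] above, without reaching for Matlab: evaluate the polynomial at each reported root with Horner's rule in plain Python and confirm the residuals are near zero. The literals below are just Pearu's printed roots, truncated to the digits he showed:

p = [1, 2, 3, 4]                       # x**3 + 2*x**2 + 3*x + 4
candidates = [-0.1746854 - 1.54686889j,
              -0.1746854 + 1.54686889j,
              -1.65062919 - 3.31070451e-16j]
for r in candidates:
    val = 0j
    for c in p:                        # Horner: ((1*r + 2)*r + 3)*r + 4
        val = val*r + c
    print r, '->', abs(val)            # small residuals, limited by the printed digits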
I think we're close to a 2.2 compatible version (sounds like it builds now). Now we need to get everything operating correctly. eric > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From eric at scipy.org Thu Feb 14 15:51:46 2002 From: eric at scipy.org (eric) Date: Thu, 14 Feb 2002 15:51:46 -0500 Subject: [SciPy-dev] Problem with flapack References: Message-ID: <08eb01c1b599$6ac7fe80$777ba8c0@ericlaptop> > > On Thu, 14 Feb 2002, Pearu Peterson wrote: > > > Ok, turns out that the fix is not trivial after all, because in January > > I was able to figure out a simple fix but I am not sure that the results > will be correct. The fix is now in scipy CVS. See > > http://scipy.net/cgi-bin/viewcvs.cgi/scipy/linalg/linear_algebra.py.diff?r1=1.13&r2=1.14 > > for details. Here is the output of Heiko's test: > > >>> from scipy import signal as s > >>> s.roots([1,2,3,4]) > array([-0.1746854 -1.54686889e+00j, -0.1746854 +1.54686889e+00j, > -1.65062919 -3.31070451e-16j]) > > Is this correct result? Let's see... divide by... carry the two... Yeah, that looks right. :-) (Matlab also agrees) eric From pearu at cens.ioc.ee Thu Feb 14 18:32:13 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 15 Feb 2002 01:32:13 +0200 (EET) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <08c201c1b596$ca1043f0$777ba8c0@ericlaptop> Message-ID: Eric, On Thu, 14 Feb 2002, eric wrote: > Ok. I agree. I'm working on weave today, but can commit to working some on > linalg tomorrow. I would like to work on the interface exposed by > linear_algebra quite a bit anyway. Can you give me a few pointers as to what > needs work on the wrappers, and I'll hit those first. linalg2 needs:

1) signatures for the following flapack routines: ?laswp,?geqrf,?gees,??ev,?geev,??egv,?ggev,?gesdd,?gelss (these are used by linear_algebra.py)
2) linear_algebra.py file, can be copied from linalg but it must be reviewed as many signatures have been changed.
3) setup_linalg.py file. It should be copy/paste from linalg/setup_linalg.py (define_macros,f2py_options should not be needed anymore).
4) __init__.py file. Copy from linalg.

And later we should gradually add wrappers for cblas routines; a few are missing from fblas, but they are not used anywhere in scipy. Note that the current linalg seems to work again (I have not run the tests, though). Maybe it would be reasonable to make a snapshot and to plug linalg2 in for the subsequent snapshot. Now I'll hit the bed but tomorrow I'll try to find some time for working with 1). Pearu From jwp at cns.nyu.edu Fri Feb 15 09:30:20 2002 From: jwp at cns.nyu.edu (Jon Peirce) Date: Fri, 15 Feb 2002 09:30:20 -0500 Subject: [SciPy-dev] mac os X port? plt development? Message-ID: <5.1.0.14.0.20020215090443.01d55be0@imap.nyu.edu> Hi there, I just came across scipy and really like it - now seriously considering moving to python from matlab. You've done a great job. I'm interested though in a couple of things regarding development:

a) is anyone out there working on a port for Mac Os X? That would be the best! Matlab no longer supports macs so lots of users are looking for alternative scripting languages. And because of the freeBSD kernel of osX many programmers are starting to use macs (you wouldn't believe me if I told you 2 years ago!)

b) how is the development of plt coming on? What sort of time do you expect it to take to give that the functionality of gnuplot?
The thing I use lots that doesn't exist is a proper surface plot (not mesh). Of course it's very easy to write a wish list when I'm not doing the development... ;) Thanks again for your work so far! -------------------------- Jon Peirce 212 998 7865 (tel) 212 995 4011 (fax) http://www.cns.nyu.edu/~jwp From prabhu at aero.iitm.ernet.in Fri Feb 15 12:24:53 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Fri, 15 Feb 2002 22:54:53 +0530 Subject: [SciPy-dev] mac os X port? plt development? In-Reply-To: <5.1.0.14.0.20020215090443.01d55be0@imap.nyu.edu> References: <5.1.0.14.0.20020215090443.01d55be0@imap.nyu.edu> Message-ID: <15469.17509.593365.200594@monster.linux.in> >>>>> "JP" == Jon Peirce writes: JP> a) is anyone out there working on a port for Mac Os X? That JP> would be the best! Matlab no longer supports macs so lots of JP> users are looking for alternative scripting languages. And JP> because of the freeBSD kernel of osX many programmers are JP> starting to use macs (you wouldn't believe me if I told you 2 JP> years ago!) Well, I don't know of many folks who use Mac OS X on this list. AFAIK this seems to be the first Mac OS X post we have had on the list. It would be nice if you could take a shot at it and let us know how it goes. I believe fink (http://fink.sourceforge.net) should prove useful. But I am pretty clueless as far as Macs go. JP> b) how is the development of plt coming on? What sort of time JP> do you expect it to take to give that the functionality of JP> gnuplot? The thing I use lots that doesn't exist is a proper JP> surface plot (not mesh). Well, I'm not sure a 3d mesh was planned for plt? However I was thinking of writing a wrapper to mayavi (http://mayavi.sourceforge.net) to see if it could do the job but I haven't had much time of late. Maybe you should check out mayavi and see if it is good enough. Unfortunately, I'm not sure if Tkinter is ported to Mac OS X and not sure of the status of vtk on OS X. So YMMV. prabhu From a.schmolck at gmx.net Fri Feb 15 12:58:23 2002 From: a.schmolck at gmx.net (A.Schmolck) Date: 15 Feb 2002 17:58:23 +0000 Subject: [SciPy-dev] Python 2.2 compatibility? In-Reply-To: <200202142115.g1ELFGe14116@laplace.astro.cornell.edu> References: <200202142115.g1ELFGe14116@laplace.astro.cornell.edu> Message-ID: Tom Loredo writes: > Hi folks- > > In my continuing attempt to build scipy with Python 2.2 on > Solaris.... Okay, now it all builds (after changing the > library list to eliminate "f90") and installs. It just > won't load! On "import scipy" I get: [snipped] Hi, I'm also using python2.2 and I had to make the following (makeshift) changes to get things working (at least it doesn't produce error messages anymore -- I haven't tested it, because I just fixed it so that I could use other packages). A proper fix would most likely involve changing the interface -- pstats really shouldn't eval or exec anything at all -- not to convert strings to slices at least! A much better way, IMHO, would be to provide users with a convenience object to create slices, like so:

class SliceMaker:
    """Convenience class to make slices."""
    def __getitem__(self, a): return a
    def __len__(self): return 0

sliceMaker = SliceMaker()

Then one could simply call e.g.:

colex(x, sliceMaker[3:5])

or even:

colex(x, sliceMaker[3:5,10:-4,NewAxis,3])

alex -------------- next part -------------- A non-text attachment was scrubbed...
Name: pstat.py.PATCH Type: text/x-patch Size: 3257 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: quadpack.py.PATCH Type: text/x-patch Size: 237 bytes Desc: not available URL: From oliphant.travis at ieee.org Fri Feb 15 13:46:24 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 15 Feb 2002 11:46:24 -0700 Subject: [SciPy-dev] Re: Scipy-dev digest, Vol 1 #93 - 17 msgs In-Reply-To: <200202151429.g1FET2j08465@scipy.org> References: <200202151429.g1FET2j08465@scipy.org> Message-ID: I just changed my mode to non-digested, so I can better keep up with the pace of the conversation. I've been working on pstat.py. I can't finish the work right now, but I will as soon as I get a 4-page paper written and submitted. There is not much work to do to fix the exec stuff, though; I can finish that right now. To fix linear algebra will require a detailed inspection of the new interfaces. I don't have time to do that until next week. It sounds like we are getting close to another release... -Travis O. From eric at scipy.org Fri Feb 15 14:14:50 2002 From: eric at scipy.org (eric) Date: Fri, 15 Feb 2002 14:14:50 -0500 Subject: [SciPy-dev] mac os X port? plt development? References: <5.1.0.14.0.20020215090443.01d55be0@imap.nyu.edu> Message-ID: <008d01c1b655$0da58030$6b01a8c0@ericlaptop> Hey Jon, > a) is anyone out there working on a port for Mac Os X? That would be the > best! Matlab no longer supports macs so lots of users are looking for > alternative scripting languages. And because of the freeBSD kernel of osX > many programmers are starting to use macs (you wouldn't believe me if I told > you 2 years ago!) Early on in the development of SciPy, Tim Lahey was trying to get Mac OSX stuff working. http://www.scipy.net/pipermail/scipy-dev/2001-July/000006.html I have no idea how far it went. > > b) how is the development of plt coming on? What sort of time do you expect > it to take to give that the functionality of gnuplot? The thing I use lots > that doesn't exist is a proper surface plot (not mesh). Development in this area is about to ramp up. I'll be discussing this shortly on the group. > Of course it's very easy to write a wish list when I'm not doing the > development... ;) Well, you'll just have to rectify this then won't you. ;-) OSX patches would be very much welcomed. The algorithmic code should move over fine. The plotting (as always) may be more of an effort. eric From heiko at hhenkelmann.de Fri Feb 15 15:49:49 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Fri, 15 Feb 2002 21:49:49 +0100 Subject: [SciPy-dev] Problem with flapack References: Message-ID: <003001c1b662$4f642d20$1918e33e@arrow> Pearu, thank you for fixing the problem. Here the result from a short test:

>>> from scipy import *
>>> from scipy import signal as s
>>> s.poly(s.roots([1.,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]))
array([ 1.+0.00000000e+000j, 2.-1.89030962e-015j, 3.+3.97294109e-015j,
        4.+2.16188134e-015j, 5.-5.21090017e-014j, 6.-1.44872571e-013j,
        7.-2.18532601e-013j, 8.-2.36476441e-013j, 9.-2.38065611e-013j,
        10.-4.36395620e-013j, 11.-9.58807728e-013j, 12.-1.45392938e-012j,
        13.-1.52765658e-012j, 14.-1.21333755e-012j, 15.-6.78134691e-013j,
        16.-2.67591593e-013j, 17.-7.35574790e-014j, 18.-8.47677514e-014j,
        19.-1.43588852e-013j, 20.-9.54262406e-014j])
>>>

It works fine. It should be possible to polish the values, in order to get rid of the small imaginary error, in the function poly.
Heiko From eric at scipy.org Fri Feb 15 14:53:28 2002 From: eric at scipy.org (eric) Date: Fri, 15 Feb 2002 14:53:28 -0500 Subject: [SciPy-dev] Problem with flapack References: <003001c1b662$4f642d20$1918e33e@arrow> Message-ID: <001501c1b65a$706120d0$6b01a8c0@ericlaptop> > thank you for fixing the problem. Here the result from a short test: > > >>> from scipy import * > >>> from scipy import signal as s > >>> s.poly(s.roots([1.,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20])) > array([ 1.+0.00000000e+000j, 2.-1.89030962e-015j, 3.+3.97294109e-015j, > 4.+2.16188134e-015j, 5.-5.21090017e-014j, > 6.-1.44872571e-013j, 7.-2.18532601e-013j, > 8.-2.36476441e-013j, 9.-2.38065611e-013j, > 10.-4.36395620e-013j, 11.-9.58807728e-013j, > 12.-1.45392938e-012j, 13.-1.52765658e-012j, > 14.-1.21333755e-012j, 15.-6.78134691e-013j, > 16.-2.67591593e-013j, 17.-7.35574790e-014j, > 18.-8.47677514e-014j, 19.-1.43588852e-013j, > 20.-9.54262406e-014j]) > >>> > > > It works fine. It should be possible to polish the values, in order to get > rid of the small imaginary error, in the function poly. Yeah, I noticed that too. I thought we already had done this??? Didn't Travis O. put this in at one point. Maybe it was on a different set of functions. Anyway, this should be added... eric From pearu at cens.ioc.ee Fri Feb 15 15:56:53 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 15 Feb 2002 22:56:53 +0200 (EET) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <003001c1b662$4f642d20$1918e33e@arrow> Message-ID: On Fri, 15 Feb 2002, Heiko Henkelmann wrote: > >>> s.poly(s.roots([1.,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20])) > array([ 1.+0.00000000e+000j, 2.-1.89030962e-015j, 3.+3.97294109e-015j, > 4.+2.16188134e-015j, 5.-5.21090017e-014j, > 6.-1.44872571e-013j, 7.-2.18532601e-013j, > 8.-2.36476441e-013j, 9.-2.38065611e-013j, > 10.-4.36395620e-013j, 11.-9.58807728e-013j, > 12.-1.45392938e-012j, 13.-1.52765658e-012j, > 14.-1.21333755e-012j, 15.-6.78134691e-013j, > 16.-2.67591593e-013j, 17.-7.35574790e-014j, > 18.-8.47677514e-014j, 19.-1.43588852e-013j, > 20.-9.54262406e-014j]) > >>> > > It works fine. It should be possible to polish the values, in order to get > rid of the small imaginary error, in the function poly. How about >>> s.poly(s.roots([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20])).real array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20.]) >>> Pearu From loredo at astrosun.astro.cornell.edu Fri Feb 15 16:14:48 2002 From: loredo at astrosun.astro.cornell.edu (Tom Loredo) Date: Fri, 15 Feb 2002 16:14:48 -0500 (EST) Subject: [SciPy-dev] Further Solaris woes Message-ID: <200202152114.g1FLEmD15023@laplace.astro.cornell.edu> Hi folks- Thanks for the continuing Solaris help. Sorry I didn't catch the FFTW install detail---I had already installed FFTW the recommended way for use with Travis's old wrappers for Python 2.1, and it worked fine that way. The devil's in the details I guess. Alex's patch fixes the pstat.py problem, but now I get an error when _quadpack.so is loaded: import _quadpack ImportError: ld.so.1: python: fatal: relocation error: file /home/laplace/lib/python2.2/site-packages/scipy/integrate/_quadpack.so: symbol __vlog_: referenced symbol not found I'm guessing I need to add a library somewhere, but I'm not sure where to find __vlog_. Any advice would be appreciated. 
The setup script ends up generating the following build command for _quadpack.so: gcc -shared build/temp.solaris-2.7-sun4u-2.2/_quadpackmodule.o -L(null) -LSun/lib -L(null) -LSun/lib -Lbuild/temp.solaris-2.7-sun4u-2.2 -Lbuild/temp.solaris-2.7-sun4u-2.2 -Wl,-R(null) -Wl,-RSun/lib -lamos -ltoms -lfitpack -lminpack -lquadpack -lodepack -llinpack_lite -lblas -lmach -lF77 -lM77 -lsunmath -lm -lgist -lc_misc -lcephes -o build/lib.solaris-2.7-sun4u-2.2/scipy/integrate/_quadpack.so -mimpure-text Thanks, Tom From eric at scipy.org Fri Feb 15 15:04:50 2002 From: eric at scipy.org (eric) Date: Fri, 15 Feb 2002 15:04:50 -0500 Subject: [SciPy-dev] Problem with flapack References: Message-ID: <002f01c1b65c$06e08b30$6b01a8c0@ericlaptop> > >>> s.poly(s.roots([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20])).real > array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., > 12., 13., 14., 15., 16., 17., 18., 19., 20.]) Sure, but I agree with Heiko that tiny zeros should be detected internally and cleaned up automatically. eric From fperez at pizero.colorado.edu Fri Feb 15 16:15:30 2002 From: fperez at pizero.colorado.edu (Fernando Pérez) Date: Fri, 15 Feb 2002 14:15:30 -0700 (MST) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <002f01c1b65c$06e08b30$6b01a8c0@ericlaptop> Message-ID: > > Sure, but I agree with Heiko that tiny zeros should be detected internally and > cleaned up automatically. Careful. I can't think off the top of my head of a concrete example, but I can easily imagine cases where you really have an unusual mix of roots with significant real parts and tiny imaginary components. In fact, in some of my work something similar happens as certain parameters change and eigenvalues for a certain operator start migrating off the real axis. So I would rather have the numerical algorithms spit out whatever answer they get and let *me* do any cosmetic cleanup I want to after the fact. If we want convenience, a keyword parameter, *off* by default, could control this cleanup to be done internally. Call me old-fashioned, but I prefer numerics to spit out whatever they get, without trying to get too smart. Of course, in cases where one has a bulletproof argument for cutting off spurious values, fine. But those cases are rare enough that I'd rather see the garbage and clean it myself (or at least have that option until I'm convinced it's really garbage, and then I can turn the 'auto_cleanup' on). Just a thought, f. From eric at scipy.org Fri Feb 15 15:18:47 2002 From: eric at scipy.org (eric) Date: Fri, 15 Feb 2002 15:18:47 -0500 Subject: [SciPy-dev] Further Solaris woes References: <200202152114.g1FLEmD15023@laplace.astro.cornell.edu> Message-ID: <003801c1b65d$f9c22ab0$6b01a8c0@ericlaptop> ----- Original Message ----- From: "Tom Loredo" To: Sent: Friday, February 15, 2002 4:14 PM Subject: [SciPy-dev] Further Solaris woes > > Hi folks- > > Thanks for the continuing Solaris help. Sorry I didn't catch > the FFTW install detail---I had already installed FFTW the > recommended way for use with Travis's old wrappers for Python 2.1, > and it worked fine that way. The devil's in the details I guess. > > Alex's patch fixes the pstat.py problem, but now I get an > error when _quadpack.so is loaded: > > import _quadpack > ImportError: ld.so.1: python: fatal: relocation error: file /home/laplace/lib/python2.2/site-packages/scipy/integrate/_quadpack.so: symbol __vlog_: referenced symbol not found > > I'm guessing I need to add a library somewhere, but I'm not sure > where to find __vlog_.
Any advice would be appreciated. My standard approach is to go into the library directory for the compiler and do an "nm", which prints out the symbols in the libraries, and then grep for the one I want.

[15] eaj2 at teer4% cd /usr/pkg/spro/sun4m_55/SC4.0/lib
[14] eaj2 at teer4% nm *.a | grep vlog
libmvec.a[vlog_.o]:
[7]  | 0|    0|NOTY |GLOB |0 |UNDEF |__vlog
[6]  | 0|   32|FUNC |GLOB |0 |2     |__vlog_
[5]  | 0|   32|FUNC |WEAK |0 |2     |vlog_
[1]  | 0|    0|FILE |LOCL |0 |ABS   |vlog_.c
[7]  | 0|    0|NOTY |GLOB |0 |UNDEF |__vlog
libmvec.a[__vlog.o]:
[48] | 0| 3844|FUNC |GLOB |0 |2     |__vlog
[1]  | 0|    0|FILE |LOCL |0 |ABS   |__vlog.s
libmvec_mt.a[vlog_.o]:
[13] | 0|    0|NOTY |GLOB |0 |UNDEF |__vlog
[11] | 120| 200|FUNC |GLOB |0 |2    |__vlog_
[12] | 0|  104|FUNC |GLOB |0 |2     |__vlog_mfunc
[10] | 120| 200|FUNC |WEAK |0 |2    |vlog_
[1]  | 0|    0|FILE |LOCL |0 |ABS   |vlog_.c
[7]  | 0|    0|NOTY |GLOB |0 |UNDEF |__vlog
libmvec_mt.a[__vlog.o]:
[48] | 0| 3844|FUNC |GLOB |0 |2     |__vlog
[1]  | 0|    0|FILE |LOCL |0 |ABS   |__vlog.s

I'm not familiar with Sun's nm output, but it looks like some variation of your missing function is defined in one of the above. I don't see a signature that matches __vlog__ exactly. Sometimes there is an underscore mismatch that the linker handles internally (at least on windows with gcc that appears to be the case). You might try adding -lmvec to the command and see if that solves the problem, but I'll bet there is an underscore issue here. see ya, eric > The setup script ends up generating the following build command for > _quadpack.so: > > gcc -shared build/temp.solaris-2.7-sun4u-2.2/_quadpackmodule.o -L(null) -LSun/lib -L(null) -LSun/lib -Lbuild/temp.solaris-2.7-sun4u-2.2 -Lbuild/temp.solaris-2.7-sun4u-2.2 -Wl,-R(null) -Wl,-RSun/lib -lamos -ltoms -lfitpack -lminpack -lquadpack -lodepack -llinpack_lite -lblas -lmach -lF77 -lM77 -lsunmath -lm -lgist -lc_misc -lcephes -o build/lib.solaris-2.7-sun4u-2.2/scipy/integrate/_quadpack.so -mimpure-text > > Thanks, > Tom > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pearu at cens.ioc.ee Fri Feb 15 16:27:57 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 15 Feb 2002 23:27:57 +0200 (EET) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <002f01c1b65c$06e08b30$6b01a8c0@ericlaptop> Message-ID: On Fri, 15 Feb 2002, eric wrote: > > >>> s.poly(s.roots([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20])).real > > array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., > > 12., 13., 14., 15., 16., 17., 18., 19., 20.]) > > Sure, but I agree with Heiko that tiny zeros should be detected internally and > cleaned up automatically. I hope that you don't mean that in general!!! I have applications where very tiny numbers have all 15 digits exact and it is crucial that these tiny numbers will not be zeroed. Here is an example of calculating the amplitude of a cnoidal wave:

>>> import math
>>> from soliton.cnoidal import theta
>>> print theta(math.pi,.01,der=2)/theta(math.pi,.01,der=0)
[ 98596.04401089]
^^^^^^^^^^^^^^^^^ - this is exact (well, almost, all shown digits are exact).

But note that this is found from a ratio of _very_ tiny numbers:

>>> print theta(math.pi,.01,der=0)
[ 2.42316748e-213]
>>> print theta(math.pi,.01,der=2)
[ 2.38914727e-208]

So, please please please, let us not make scipy too clever based on purely cosmetic reasons.
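Pearu's point condensed into a toy session (the cleanup threshold here is hypothetical, just for illustration): the "negligible" magnitudes carry the whole answer, and clipping them to zero destroys a perfectly well-conditioned ratio.

>>> num = 2.38914727e-208      # theta(pi,.01,der=2) from the example above
>>> den = 2.42316748e-213      # theta(pi,.01,der=0)
>>> print '%.3f' % (num/den)   # every digit here is meaningful
98596.044
>>> den = 0.0                  # what a clip-tiny-values-to-zero pass would leave
>>> num/den
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ZeroDivisionError: float division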
Pearu From heiko at hhenkelmann.de Fri Feb 15 16:33:49 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Fri, 15 Feb 2002 22:33:49 +0100 Subject: [SciPy-dev] Problem with flapack References: Message-ID: <008501c1b668$7578f580$1918e33e@arrow> If a polynomial has only real and conjugate complex roots the coefficients are real. This is a very easy way to determine if the coefficients of a polynomial can be polished to real. Heiko ----- Original Message ----- From: "Fernando Pérez" To: Sent: Friday, February 15, 2002 10:15 PM Subject: Re: [SciPy-dev] Problem with flapack > > > > Sure, but I agree with Heiko that tiny zeros should be detected internally and > > cleaned up automatically. > > Careful. I can't think off the top of my head of a concrete example, but I can > easily imagine cases where you really have an unusual mix of roots with > significant real parts and tiny imaginary components. In fact, in some of my > work something similar happens as certain parameters change and eigenvalues > for a certain operator start migrating off the real axis. > > So I would rather have the numerical algorithms spit out whatever answer they > get and let *me* do any cosmetic cleanup I want to after the fact. If we want > convenience, a keyword parameter, *off* by default, could control this cleanup > to be done internally. Call me old-fashioned, but I prefer numerics to spit > out whatever they get, without trying to get too smart. > > Of course, in cases where one has a bulletproof argument for cutting off > spurious values, fine. But those cases are rare enough that I'd rather see the > garbage and clean it myself (or at least have that option until I'm convinced > it's really garbage, and then I can turn the 'auto_cleanup' on). > > Just a thought, > > f. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From fperez at pizero.colorado.edu Fri Feb 15 16:46:19 2002 From: fperez at pizero.colorado.edu (Fernando Pérez) Date: Fri, 15 Feb 2002 14:46:19 -0700 (MST) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <008501c1b668$7578f580$1918e33e@arrow> Message-ID: > If a polynomial has only real and conjugate complex roots the coefficients > are real. This is a very easy way to determine if the coefficients of a > polynomial can be polished to real. As I said, cases (like this) where there's an analytical argument underneath backing up the cleanup may be fine. But as a general procedure, no (see Pearu's post for the same feeling with an example of his). In general I just prefer to be conservative rather than inadvertently truncate potentially important information. Cheers, f. From pearu at cens.ioc.ee Fri Feb 15 16:46:28 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 15 Feb 2002 23:46:28 +0200 (EET) Subject: [SciPy-dev] Problem with flapack In-Reply-To: <008501c1b668$7578f580$1918e33e@arrow> Message-ID: On Fri, 15 Feb 2002, Heiko Henkelmann wrote: > If a polynomial has only real and conjugate complex roots the coefficients > are real. This is a very easy way to determine if the coefficients of a > polynomial can be polished to real. Sure, in theory. But if these numbers are computed then scipy internals can never know what the assumptions behind the computations are. Only users can know or should know the assumptions and do the polishing if they find it correct.
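In that spirit, the polishing can live in user code instead of the library. A minimal sketch of an opt-in helper along the lines Heiko and Fernando describe (hypothetical helper, written against Numeric; assumes a complex-typed rank-1 array, so upcast first if needed):

from Numeric import absolute, alltrue, maximum

def polish_to_real(a, rtol=1e-13):
    """Opt-in cleanup: return a.real when every imaginary part is
       negligible relative to the largest coefficient magnitude;
       otherwise return a unchanged."""
    scale = maximum.reduce(absolute(a))
    if scale == 0 or alltrue(absolute(a.imag) <= rtol*scale):
        return a.real
    return a

# used explicitly at the call site, e.g.:
#   c = polish_to_real(s.poly(s.roots([1.,2,3,4,5])))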
Pearu From oliphant.travis at ieee.org Fri Feb 15 22:08:05 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 15 Feb 2002 20:08:05 -0700 Subject: [SciPy-dev] Problem with flapack In-Reply-To: <001501c1b65a$706120d0$6b01a8c0@ericlaptop> References: <003001c1b662$4f642d20$1918e33e@arrow> <001501c1b65a$706120d0$6b01a8c0@ericlaptop> Message-ID: > > thank you for fixing the problem. Here the result from a short test: > > >>> from scipy import * > > >>> from scipy import signal as s > > >>> s.poly(s.roots([1.,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20])) > > > > array([ 1.+0.00000000e+000j, 2.-1.89030962e-015j, > > 3.+3.97294109e-015j, 4.+2.16188134e-015j, 5.-5.21090017e-014j, > > 6.-1.44872571e-013j, 7.-2.18532601e-013j, > > 8.-2.36476441e-013j, 9.-2.38065611e-013j, > > 10.-4.36395620e-013j, 11.-9.58807728e-013j, > > 12.-1.45392938e-012j, 13.-1.52765658e-012j, > > 14.-1.21333755e-012j, 15.-6.78134691e-013j, > > 16.-2.67591593e-013j, 17.-7.35574790e-014j, > > 18.-8.47677514e-014j, 19.-1.43588852e-013j, > > 20.-9.54262406e-014j]) > > > > It works fine. It should be possible to polish the values, in order to > > get rid of the small imaginary error, in the function poly. > > Yeah, I noticed that too. I thought we already had done this??? Didn't > Travis O. put this in at one point. Maybe it was on a different set of > functions. Anyway, this should be added... > It is added; right now the tolerance is set at 1e-13, which is violated by this example. You could set the tolerance lower, but that doesn't seem advisable. Try an example a little smaller (say to 10 instead of 20) and you will see it work. -Travis From eric at scipy.org Sat Feb 16 03:39:25 2002 From: eric at scipy.org (eric) Date: Sat, 16 Feb 2002 03:39:25 -0500 Subject: [SciPy-dev] roots, poly, comparison operators, etc. Message-ID: <00cc01c1b6c5$70c3feb0$6b01a8c0@ericlaptop> Hey crew, I spent some quality time with roots() and poly() tonight trying to find a solution to the rounding issue acceptable to all. In the process, I've rewritten both of them. As Heiko suggested, it is easy to test if the roots are complex conjugates and reals in poly(), and, if so, ignore the imaginary part. Things are never as easy as they seem... Here is what I have learned: roots() calculates eigenvalues under the covers. This is done using eig which in turn calls some LAPACK (Atlas) routine xgeev, where x is the appropriate numeric type. This is where the trouble starts. The values returned by eig() that should be complex conjugates are actually off by as much as 2-3 digits of precision. I've compared the output of SciPy to Matlab and Octave below.

# MATLAB
# restricts display to 14 digits of precision, 15 digits if you print in long e format
>> format long
>> roots([-1,1,2,1,2,3,4,5,6,7,8,9])
ans =
   2.37248275302093
   0.98438469660693 + 0.73802739091801i
   0.98438469660693 - 0.73802739091801i
   0.45252913029012 + 1.04491513664358i
   0.45252913029012 - 1.04491513664358i
  -1.14001283346666 + 0.26706945671060i
  -1.14001283346666 - 0.26706945671060i
  -0.77808817947927 + 0.77104779864671i
  -0.77808817947927 - 0.77104779864671i
  -0.20505419046158 + 1.06435796274941i
  -0.20505419046158 - 1.06435796274941i

# OCTAVE has slightly different answers than MatLab, but complex conjugates are always equal.
octave:15> format long
octave:16> roots([-1,1,2,1,2,3,4,5,6,7,8,9])
ans =
   2.372482753020929 + 0.000000000000000i
   0.984384696606924 + 0.738027390918009i
   0.984384696606924 - 0.738027390918009i
   0.452529130290118 + 1.044915136643580i
   0.452529130290118 - 1.044915136643580i
  -1.140012833466655 + 0.267069456710605i
  -1.140012833466655 - 0.267069456710605i
  -0.778088179479270 + 0.771047798646705i
  -0.778088179479270 - 0.771047798646705i
  -0.205054190461583 + 1.064357962749411i
  -0.205054190461583 - 1.064357962749411i

# SciPy -- neither the real nor the imag part of the shown complex pair is equal.
>>> r=roots.roots([-1,1,2,1,2,3,4,5,6,7,8,9])
>>> r[0]
(-0.2050541904615848+1.0643579627494104j)
>>> r[1]
(-0.20505419046158302-1.0643579627494113j)

So, Matlab and Octave generally give the same results out to 15 digits, and their complex conjugate pairs are always equal out to 15 digits. The Python results all agree with them up to 14 digits, but complex conjugates are off in the 15th digit. This causes them to fail as equal during comparisons. Both Matlab and Octave equivalency tests for conjugate pairs pass. I'm not sure how these tools treat comparisons under the covers -- whether they only compare the first 14 or 15 digits or not. Python comparisons will obviously fail unless we do some kind of rounding here. The fact that Octave has 15 digits of precision and SciPy only has 14 on LAPACK output bugs me -- and makes things more difficult. I wonder if Atlas is less precise than the standard LAPACK? This needs some more investigation and really should be fixed. It would save a lot of headaches. Other things:

* Array printing is far superior in Matlab and Octave -- they generally always look nice. We should clean up how arrays are output. Also, the "format long" and "format short", etc options for specifying how arrays are printed are pretty nice.

* Numeric not allowing the comparison of complex bit me again. When trying to compare complex conjugates, I had to write about 5 lines of code plus an extra sort function. The same thing is possible in a single line of Matlab/Octave code. In Numeric, sort() will not work for complex arrays, so you have to write your own sort() routines. I think we need to override sort() to work with complex. I feel the same way about comparison operators -- otherwise you have to special case *every* comparison operator in your code in order for it to work with generic arrays.

* On Matlab/Octave, sort() sorts by magnitude of complex and then by angle. On the other hand ==, >, <, etc. seem to only compare the real part of complex numbers. These seem fine to me. I know they aren't mathematically correct, but they seem to be pragmatically correct. I'd like comments on these conventions and what others think.

* Comparison of IEEE floats is hairy. It looks to me like Matlab and Octave have chosen to limit precision to 15 digits. This seems like a reasonable thing to do for SciPy also, but we'd currently have to limit to 14 digits to deal with problems of imprecise LAPACK routines. Pearu and Fernando are arguing against this, but maybe they wouldn't mind a 15 digit limit for double and 6 for float. We'd have to modify Numeric to do this internally on comparisons. There could be a flag that enables exact comparisons.

* After playing with Matlab/Octave code a little today, I remember how easy it is to do some things in a line or two that take many lines of Python code (the conjugate comparison comes to mind). I hope we can tune SciPy such that this difference disappears.
Here is the code I'm playing with:

from scipy import diag, eig, r1array, limits
from Numeric import *
import MLab

def find_non_zero(a):
    """ Mimics Matlab's find behavior. """
    flattened = a.flat
    return compress(flattened != 0, arange(len(flattened)))

def roots(p):
    """ Return the roots of the polynomial coefficients in p.

        The values in the rank-1 array p are coefficients of a polynomial.
        If the length of p is n+1 then the polynomial is
        p[0] * x**n + p[1] * x**(n-1) + ... + p[n-1]*x + p[n]
    """
    # If input is scalar, this makes it an array
    p = r1array(p)
    if len(p.shape) != 1:
        raise ValueError, "Input must be a rank-1 array."

    # there will be N-1 roots -- start with all of them as zeros
    roots = zeros(len(p)-1, Complex)
    # find non-zero array entries
    non_zero = find_non_zero(p)
    print non_zero
    # find the number of trailing zeros -- this is the number of roots at 0.
    # strip leading and trailing zeros
    p = p[non_zero[0]:non_zero[-1]+1]
    print 'p:', p
    N = len(p)
    if N > 1:
        A = diag(ones((N-2,),'D'),-1)
        A[0,:] = -p[1:] / (p[0]+0.0)
        # fill in the end of the roots array with the new values
        print A
        print 'eig:', eig(A)
        roots[-(N-1):] = eig(A)[0]
    # sort roots from smallest magnitude to largest.
    ind = argsort(abs(roots))
    roots = take(roots, ind)
    return roots

def sort_complex(a):
    """ Doesn't currently work for integer arrays -- only float or complex. """
    a = asarray(a, typecode=a.typecode().upper())
    def complex_cmp(x, y):
        res = cmp(x.real, y.real)
        if res == 0:
            res = cmp(x.imag, y.imag)
        return res
    l = a.tolist()
    l.sort(complex_cmp)
    return array(l)

def poly(seq_of_zeros):
    """ Return a sequence representing a polynomial given a sequence of roots.

        If the input is a matrix, return the characteristic polynomial.

        Example:

        >>> b = roots([1,3,1,5,6])
        >>> poly(b)
        array([1., 3., 1., 5., 6.])
    """
    seq_of_zeros = r1array(seq_of_zeros)
    if len(seq_of_zeros) == 0:
        return 1.0
    if len(seq_of_zeros.shape) == 2:
        seq_of_zeros, vecs = MLab.eig(seq_of_zeros)

    a = [1]
    for k in range(len(seq_of_zeros)):
        a = convolve(a, [1, -seq_of_zeros[k]], mode=2)

    try:
        output_type = seq_of_zeros.typecode().upper()
        # tolerance used below -- rather arbitrarily chosen.
        eps = limits.epsilon(output_type.lower()) * 50
        roots = asarray(seq_of_zeros, output_type)
        pos_roots = sort_complex(compress(roots.imag > 0, roots))
        neg_roots = conjugate(sort_complex(compress(roots.imag < 0, roots)))
        print neg_roots.real - pos_roots.real
        print neg_roots.imag - pos_roots.imag
        if (alltrue(abs(neg_roots.real - pos_roots.real)
                    < eps * abs(neg_roots.real + pos_roots.real)) and
            alltrue(abs(neg_roots.imag - pos_roots.imag)
                    < eps * abs(neg_roots.imag + pos_roots.imag))):
            a = a.real
    except ValueError:
        # neg_roots and pos_roots had different number of entries.
        pass
    return a

eric

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pearu at cens.ioc.ee Sat Feb 16 05:22:12 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sat, 16 Feb 2002 12:22:12 +0200 (EET)
Subject: [SciPy-dev] roots, poly, comparison operators, etc.
In-Reply-To: <00cc01c1b6c5$70c3feb0$6b01a8c0@ericlaptop>
Message-ID: 

On Sat, 16 Feb 2002, eric wrote:

> Hey crew,
>
> I spent some quality time with roots() and poly() tonight trying to find
> a solution to the rounding issue acceptable to all. In the process, I've
> rewritten both of them. As Heiko suggested, it is easy to test whether
> the roots are complex conjugates and reals in poly() and, if so, ignore
> the imaginary part. Things are never as easy as they seem...
> Here is what I have learned:
>
> roots() calculates eigenvalues under the covers. This is done using eig,
> which in turn calls some LAPACK (Atlas) routine xgeev, where x is the
> appropriate numeric type. This is where the trouble starts. The values
> returned by eig() that should be complex conjugates are actually off by
> as much as 2-3 digits of precision. I've compared the output of SciPy
> to Matlab and Octave below.

scipy.eig uses Fortran LAPACK (Octave also uses LAPACK, I think).

Actually, the trouble starts from roots(), which forces scipy.eig to use
flapack.zgeev though flapack.dgeev would be more appropriate (real input,
more efficient). Namely, the function roots() uses the complex array

  A = diag(Numeric.ones((N-2,),'D'),-1)

for a real input array p. If I make the change

  A = diag(Numeric.ones((N-2,),'d'),-1)

then complex conjugates will match exactly:

>>> r=scipy.roots([-1,1,2,1,2,3,4,5,6,7,8,9])
>>> r[0],r[1]
((-0.20505419046158363+1.0643579627494124j),
 (-0.20505419046158363-1.0643579627494124j))

I believe that Matlab and Octave also use the most appropriate routines
in their roots calculations. So, scipy.roots should be fixed so that for
real and complex inputs scipy.eig will use flapack.dgeev and
flapack.zgeev, respectively.

Regards,
Pearu

From pearu at cens.ioc.ee Sat Feb 16 08:20:58 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sat, 16 Feb 2002 15:20:58 +0200 (EET)
Subject: [SciPy-dev] roots, poly, comparison operators, etc.
In-Reply-To: <00cc01c1b6c5$70c3feb0$6b01a8c0@ericlaptop>
Message-ID: 

Hi!

On Sat, 16 Feb 2002, eric wrote:

> Other things:
>
> * Array printing is far superior in Matlab and Octave -- they
> generally always look nice. We should clean up how arrays are output.
> Also, the "format long" and "format short", etc. options for specifying
> how arrays are printed are pretty nice.

I agree. Maybe ipython could be exploited here?

In fact, rounding of tiny numbers to zeros could be done only when
printing (though, personally, I wouldn't prefer that either, but I am
just looking for a compromise), not inside calculation routines. In this
way, no valuable information is lost when using these routines from
other calculation routines, and computation will be even more efficient.

> * On Matlab/Octave, sort() sorts by magnitude of complex and then by
> angle. On the other hand ==, >, <, etc. seem to only compare the real
> part of complex numbers.
> These seem fine to me. I know they aren't mathematically correct, but
> they seem to be pragmatically correct. I'd like comments on these
> conventions and what others think.

There is no mathematically correct way to compare complex numbers; they
just cannot be ordered in a unique and sensible way.

However, in different applications different conventions may be useful or
reasonable for ordering complex numbers. Whatever the convention, its
mathematical correctness is irrelevant and cannot be used as an argument
for preferring one convention to another.

I would propose providing a number of efficient comparison methods for
complex (or any) numbers that users may use in sort functions as an
optional argument. For example,

scipy.sort([2,1+2j],cmpmth='abs') -> [1+2j,2]  # sorts by abs value
scipy.sort([2,1+2j],cmpmth='real') -> [2,1+2j] # sorts by real part
scipy.sort([2,1+2j],cmpmth='realimag')  # sorts by real then by imag
scipy.sort([2,1+2j],cmpmth='imagreal')  # sorts by imag then by real
scipy.sort([2,1+2j],cmpmth='absangle')  # sorts by abs then by angle
etc.
scipy.sort([2,1+2j],cmpfunc=)

Note that

scipy.sort([-1,1],cmpmth='absangle') -> [1,-1]

which also demonstrates the arbitrariness of sorting complex numbers.

Btw, why do you want to sort the output of roots()? As far as I know,
there is no order defined for roots of polynomials. Maybe an optional
flag could be provided here?

> * Comparison of IEEE floats is hairy. It looks to me like Matlab and
> Octave have chosen to limit precision to 15 digits. This seems like a
> reasonable thing to do for SciPy also, but we'd currently have to
> limit to 14 digits to deal with problems of imprecise LAPACK routines.
> Pearu and Fernando are arguing against this, but maybe they wouldn't
> mind a 15 digit limit for double and 6 for float. We'd have to modify
> Numeric to do this internally on comparisons. There could be a flag
> that enables exact comparisons.

I hope this issue is solved by fixing roots(), and the accuracy of the
LAPACK routines can also be rehabilitated now. (I don't claim that their
output is always accurate; floating point numbers simply cannot be
represented accurately in computer memory, in principle. It has nothing
to do with the programs that manipulate these numbers; one can only
improve the algorithms to minimize the computational errors.)

The deep lesson to learn here is:
  Fix the computation algorithm.
  Do not fix the output of the computation.

Matlab and Octave are programs that are oriented to certain applications,
namely, to engineering applications where linear algebra is very
important.

SciPy need not choose the same orientation as Matlab; however, it can
certainly cover Matlab's orientations.

Python is a general purpose language, and SciPy could also be a general
purpose package for scientific computing (whatever the applications).

Regards,
Pearu

From prabhu at aero.iitm.ernet.in Sat Feb 16 12:56:48 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Sat, 16 Feb 2002 23:26:48 +0530
Subject: [SciPy-dev] Is Python being Dylanized?
In-Reply-To: <3C68C500.5090101@pacbell.net>
References: <3C68C500.5090101@pacbell.net>
Message-ID: <15470.40288.823875.155815@monster.linux.in>

>>>>> "PM" == Pat Miller writes:

[Pat on difficulty with class methods and type inference]

PM> class foo:
PM>     def solver(self,x):
PM>         assert type(self.nx) == IntType
PM>         assert type(self.name) == StringType
PM>         assert type(x) == FloatType
PM>         ....
PM> would get kind of tedious.

PM> One idea is to simply assume that the types you are given are
PM> fixed for all time, another is to assume that the values for a
PM> given instance are fixed for all time (that is nx and name
PM> will either never change type or perhaps never change value).
PM> Then I can compile even more efficient code (but require more
PM> information)

Yes, I understand the difficulty. But I've been thinking of something and
thought I'd share it with you folks. I don't know if the idea is
hare-brained, so apologies in advance if it is. The ideas are wish lists,
and it's highly likely that you have already thought of them. However, I
thought it might be worth sharing just in case.

I really don't know the details of how you are going to implement
weave.accelerate for now, but a brief look at some of the code that Eric
mailed the list indicated that you have a generic "programmer" like class
that allows one to implement things in another language when specific
tokens or language features are encountered (I'm referring to the
ByteCodeMeaning class). That seems like a very good idea. What I've been
thinking about is maybe related and maybe not.
Here it is:

(1) Define a language that is a subset of Python with a well defined
    set of supported features. Let's call this High Performance Python
    or HPP (hpp) for short.

    (a) A few special details like type information should be
        supported and allowed for in hpp.

    (b) These details can also be passed in via some special data
        attribute or method. Something like so:

    class Foo:
        def __init__(self):
            self.x = 1.0
            self.y = 2.0
            self.name = "Foo"
            ...

        def func(self):
            return self.x*self.y

    classdata = {'x': 'FloatType', 'y': 'FloatType', 'name': 'StringType'}
    classmember = {'__init__': {'args': None},
                   'func': {'args': None, 'return': 'FloatType'}}

    f = Foo()
    fa = weave.accelerate(Foo, classdata, classmember)

    I think you get what I mean. The idea is that if someone defines a
    class like this, it's easy for weave to deal with it. Expecting the
    user to do this is not so bad because after all they are going to
    get pretty solid acceleration. Also, they would do this after
    experimenting with their code and design. So the type information
    does not get in their way when they are experimenting/prototyping.
    So we get out of their hair and they out of ours. Effectively they
    can rapidly prototype it and then type it. :) This would make life
    easy for everyone. As you will note, this is very similar to
    Dylan's optional typing. The above approach also does not rely on
    changing Python in any way.

(2) By defining HPP clearly it becomes easy for a user to know
    exactly what language features are accelerated and what are not.
    Therefore with a little training a person can become an adept at
    writing HPP code and thereby maximize their productivity. I think
    this is an important step.

(3) As weave.accelerate improves and more core Python language
    features are supported, the HPP language set can be changed.

(4) Additionally, if a tool like PyChecker were adapted to HPP, then a
    person could easily figure out if a class is HPP'able or not and
    remedy the situation.

(5) If it's not too hard, then maybe those functions that the user
    does not provide type information for will remain unaccelerated.

Clearly the above should work for builtin types. So how do we support
classes, inheritance etc.? Maybe the type information can be used to
abstract an interface. After all, by defining a class with type
information, all necessary information is available. Once that is done,
if a person defines a member function that is passed a base class, then
any derived class can also be supported, because the basic interface is
supported. This should work by simply creating a class hierarchy in the
wrapped C++ code that is generated. I wonder if I'm merely re-inventing
C++ here? Maybe all that is needed is to somehow parse Python and
generate equivalent C++ code underneath. Essentially, Pythonic-C++. In
such a case, if we get a reasonable amount of C++ supported, HPP should
be highly useable and technically it could be as fast as C++.

As regards types changing over time, it should be possible to deal with
them if the user is capable of anticipating them and specifying them in
advance. Like:

    def sqrt(x):
        # do stuff

    weave.accelerate(sqrt, ['Float', 'Int', 'Matrix', ...])

where Matrix is some already accelerated Python class.

Some of you might think that it's too much to expect the user to provide
type information for everything in advance. But usually, the user has to
do it anyway when they use C++/C/Fortran. And IMHO, it's not very hard to
add type information after the code is reasonably mature.

As I said before, maybe these are all completely crazy and impossible
ideas.
My apologies if they are. If they are not, then it looks like HPP should
be pretty feasible and very exciting.

prabhu

From pearu at cens.ioc.ee Sat Feb 16 14:16:02 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sat, 16 Feb 2002 21:16:02 +0200 (EET)
Subject: [SciPy-dev] Is Python being Dylanized?
In-Reply-To: <15470.40288.823875.155815@monster.linux.in>
Message-ID: 

On Sat, 16 Feb 2002, Prabhu Ramachandran wrote:

> (1) Define a language that is a subset of Python with a well defined
>     set of supported features. Let's call this High Performance Python
>     or HPP (hpp) for short.
>
>     (a) A few special details like type information should be
>         supported and allowed for in hpp.
>
>     (b) These details can also be passed in via some special data
>         attribute or method. Something like so:
>
>     class Foo:
>         def __init__(self):
>             self.x = 1.0
>             self.y = 2.0
>             self.name = "Foo"
>             ...
>
>         def func(self):
>             return self.x*self.y
>
>     classdata = {'x': 'FloatType', 'y': 'FloatType', 'name': 'StringType'}
>     classmember = {'__init__': {'args': None},
>                    'func': {'args': None, 'return': 'FloatType'}}
>
>     f = Foo()
>     fa = weave.accelerate(Foo, classdata, classmember)

This HPP interface looks very similar to what I thought about a while
ago, but from a different perspective. Namely, I was thinking of
representing the signatures of C/Fortran functions using a similar class
definition setup in order to get rid of pyf-files (they may look scary to
a few non-Fortran persons). Then let f2py scan Fortran sources and
generate these class definitions instead of pyf-files. In the second
step, use the instances of these classes to construct C/API extension
modules for the wrapper functions.

Well, it was just a thought and I never got a chance to implement it.

So, if someone picks up this idea of High Performance Python and
implements it in the similar form as above (or maybe even in some better
form), then s/he will certainly get patches from me ;-)

Thanks,
Pearu

From pnmiller at pacbell.net Sat Feb 16 23:59:44 2002
From: pnmiller at pacbell.net (Patrick Miller)
Date: Sat, 16 Feb 2002 20:59:44 -0800
Subject: [SciPy-dev] Is Python being Dylanized?
References: 
Message-ID: <3C6F38C0.3040604@pacbell.net>

> This HPP interface looks very similar to what I thought about a while
> ago, but from a different perspective. Namely, I was thinking of
> representing the signatures of C/Fortran functions using a similar class
> definition setup in order to get rid of pyf-files (they may look scary to
> a few non-Fortran persons). Then let f2py scan Fortran sources and
> generate these class ...

One could also imagine that you could use this to specify base classes
for C++ in Python (and get accelerated performance in Python!).

Our project did something similar to what Pearu was talking about with
Fortran source (ours was augmented C++ header files) that we scanned. It
boils down to this: somebody, somewhere has to bite the bullet on types.

Now Dylan and FL both used the idea that between some user specification
of types and a type inference engine one could get great benefits without
giving up the wonderful dynamism of runtime typing.

The direction I was planning for the weave accelerate backplane was
exactly as Prabhu described... A Python subset, reasonably described,
some types supported, and extensible.

I think that I can detect "conforming" classes in the bytecode compiler
and then make 'em go really fast. My true goal is to let most of Python
be written in Python (accelerated into C++).
Guido wants to get the core of the language smaller, and this may be a
way to help. The Py-Python could use the HPP subset so that Jython could
execute it directly (or put a Java interface to weave.accelerate), and
the normal Python is simply a weave accelerated version of Py-Python.

For a lot of this to work, all one needs is a good interface between C
structs and the Python struct class.

Pat

From eric at scipy.org Sat Feb 16 23:00:01 2002
From: eric at scipy.org (eric)
Date: Sat, 16 Feb 2002 23:00:01 -0500
Subject: [SciPy-dev] Is Python being Dylanized?
References: <3C68C500.5090101@pacbell.net> <15470.40288.823875.155815@monster.linux.in>
Message-ID: <014b01c1b767$92d8a2b0$6b01a8c0@ericlaptop>

Hey Prabhu,

My sense is that user-supplied type information mainly benefits classes.
I'm not convinced we need strong typing for functions yet -- the only
place I can think of where we absolutely need it is for recursion. In
all other cases, it appears that typing can be inferred at run time. The
two benefits that strong typing does provide are a smaller footprint (we
can pre-compile multiple extension functions within the same module) and
possibly some safety. I am not a language weenie, so I could be missing
something. This is just based off of my experience with weave.

From Pat's comments and yours, the place where typing becomes more
helpful is with classes. I haven't looked at this much, so I'll go along
with it for now.

> (1) Define a language that is a subset of Python with a well defined
>     set of supported features. Let's call this High Performance Python
>     or HPP (hpp) for short.

Right now, the subset is the empty set. :-)

>     (a) A few special details like type information should be
>         supported and allowed for in hpp.
>
>     (b) These details can also be passed in via some special data
>         attribute or method. Something like so:
>
>     class Foo:
>         def __init__(self):
>             self.x = 1.0
>             self.y = 2.0
>             self.name = "Foo"
>             ...
>
>         def func(self):
>             return self.x*self.y
>
>     classdata = {'x': 'FloatType', 'y': 'FloatType', 'name': 'StringType'}
>     classmember = {'__init__': {'args': None},
>                    'func': {'args': None, 'return': 'FloatType'}}
>
>     f = Foo()
>     fa = weave.accelerate(Foo, classdata, classmember)

looks reasonable.

> I think you get what I mean. The idea is that if someone defines a
> class like this, it's easy for weave to deal with it. Expecting the
> user to do this is not so bad because after all they are going to get
> pretty solid acceleration. Also, they would do this after
> experimenting with their code and design. So the type information
> does not get in their way when they are experimenting/prototyping.
> So we get out of their hair and they out of ours. Effectively they
> can rapidly prototype it and then type it. :) This would make life
> easy for everyone. As you will note, this is very similar to
> Dylan's optional typing. The above approach also does not rely on
> changing Python in any way.
>
> (2) By defining HPP clearly it becomes easy for a user to know
>     exactly what language features are accelerated and what are not.

Good idea, but right now we're a looong way from specifying it. The
current framework is still being developed. Once we've gotten it up and
running and understand what is gonna be hard, we can start thinking about
defining the limits.

> Therefore with a little training a person can become an adept at
> writing HPP code and thereby maximize their productivity. I think
> this is an important step.
> (3) As weave.accelerate improves and more core Python language
>     features are supported, the HPP language set can be changed.

Sure

> (4) Additionally, if a tool like PyChecker were adapted to HPP, then a
>     person could easily figure out if a class is HPP'able or not and
>     remedy the situation.

+1. Very good idea.

> (5) If it's not too hard, then maybe those functions that the user
>     does not provide type information for will remain unaccelerated.
>
> Clearly the above should work for builtin types. So how do we support
> classes, inheritance etc.? Maybe the type information can be used to
> abstract an interface. After all, by defining a class with type
> information, all necessary information is available. Once that is done,
> if a person defines a member function that is passed a base class, then
> any derived class can also be supported, because the basic interface is
> supported. This should work by simply creating a class hierarchy in
> the wrapped C++ code that is generated. I wonder if I'm merely
> re-inventing C++ here? Maybe all that is needed is to somehow parse
> Python and generate equivalent C++ code underneath. Essentially,
> Pythonic-C++. In such a case, if we get a reasonable amount of C++
> supported, HPP should be highly useable and technically it could be as
> fast as C++.
>
> As regards types changing over time, it should be possible to deal
> with them if the user is capable of anticipating them and specifying
> them in advance. Like:
>
>     def sqrt(x):
>         # do stuff
>
>     weave.accelerate(sqrt, ['Float', 'Int', 'Matrix', ...])
>
> where Matrix is some already accelerated Python class.
>
> Some of you might think that it's too much to expect the user to
> provide type information for everything in advance. But usually, the
> user has to do it anyway when they use C++/C/Fortran. And IMHO, it's
> not very hard to add type information after the code is reasonably
> mature.
>
> As I said before, maybe these are all completely crazy and impossible
> ideas. My apologies if they are. If they are not, then it looks like
> HPP should be pretty feasible and very exciting.

I don't think they are impossible, but I also do not have a handle on
what the possibilities with classes are yet. Currently, I'm most
interested in getting Pat's framework more mature so that it is easy for
people to modify and experiment with. This will mostly be done in the
context of functions because weave (the old weave) doesn't support
classes yet.

I'm glad to see others are thinking about all this stuff!

see ya,
eric

From prabhu at aero.iitm.ernet.in Sun Feb 17 02:13:10 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Sun, 17 Feb 2002 12:43:10 +0530
Subject: [SciPy-dev] Is Python being Dylanized?
In-Reply-To: <3C6F38C0.3040604@pacbell.net>
References: <3C6F38C0.3040604@pacbell.net>
Message-ID: <15471.22534.701579.529020@monster.linux.in>

>>>>> "PM" == Patrick Miller writes:

PM> Now Dylan and FL both used the idea that between some user
PM> specification of types and a type inference engine one could
PM> get great benefits without giving up the wonderful dynamism
PM> of runtime typing.

Absolutely. I have heard that Guido thinks of Dylan as a very nice
language. I wonder if he would be pleased at the direction weave is
thinking of taking. :)

PM> The direction I was planning for the weave accelerate
PM> backplane was exactly as Prabhu described... A Python subset,

So at least all of us here are almost on the same wavelength. :)

PM> reasonably described, some types supported, and extensible.
PM> I think that I can detect "conforming" classes in the bytecode
PM> compiler and then make 'em go really fast. My true goal is to
PM> let most of Python be written in Python (accelerated into
PM> C++). Guido wants to get the core of the language smaller,
PM> and this may be a way to help. The Py-Python could use the

Indeed, you'd need to have a small core in C that completely describes
the API, and then everything is a translation to this API, either
dynamically interpreted or statically compiled. Right?

PM> HPP subset so that Jython could execute it directly (or put a
PM> Java interface to weave.accelerate) and the normal Python is
PM> simply a weave accelerated version of Py-Python. For a lot of

Cool.

PM> this to work, all one needs is a good interface between C
PM> structs and the Python struct class.

Really? Then how hard is it to do this? And why is that sufficient? Any
pointers on more information? If this is all in your head and you'd
rather code it than explain it, that's fine.

Overall this is getting to be pretty amazing and the possibilities are
truly fantastic.

Cheers,
prabhu

From prabhu at aero.iitm.ernet.in Sun Feb 17 02:18:00 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Sun, 17 Feb 2002 12:48:00 +0530
Subject: [SciPy-dev] Is Python being Dylanized?
In-Reply-To: <014b01c1b767$92d8a2b0$6b01a8c0@ericlaptop>
References: <3C68C500.5090101@pacbell.net> <15470.40288.823875.155815@monster.linux.in> <014b01c1b767$92d8a2b0$6b01a8c0@ericlaptop>
Message-ID: <15471.22824.646359.967439@monster.linux.in>

>>>>> "eric" == eric writes:

>> As I said before, maybe these are all completely crazy and
>> impossible ideas. My apologies if they are. If they are not,
>> then it looks like HPP should be pretty feasible and very
>> exciting.

eric> I don't think they are impossible, but I also do not have a
eric> handle on what the possibilities with classes are yet.

Well, Pat seems convinced that the most important thing would be getting
a good wrapper for the struct class. Again, there is most probably a lot
of work involved even in that. But what I'm excited about is that this
is not some pipe dream but something that is very doable and just a
matter of time.

eric> Currently, I'm most interested in getting Pat's framework
eric> more mature so that it is easy for people to modify and
eric> experiment with. This will mostly be done in the context of
eric> functions because weave (the old weave) doesn't support
eric> classes yet.

eric> I'm glad to see others are thinking about all this stuff!

Well, if we don't have the time to cough up code, the least we can do is
generate ideas, stimulate discussions, and get everyone excited. :)

prabhu

From eric at scipy.org Sun Feb 17 01:36:46 2002
From: eric at scipy.org (eric)
Date: Sun, 17 Feb 2002 01:36:46 -0500
Subject: [SciPy-dev] side effects of changes to fortran_libraries?
Message-ID: <016901c1b77d$792aaf00$6b01a8c0@ericlaptop>

Hey Pearu,

I just did an update and rebuild and ran into a few issues with the
Fortran stuff. Now that scipy_distutils requires explicit linking (which
is a good thing), we don't get the libraries from the Fortran compiler
added to the list of libraries needed for linking. I had to add 'g2c' to
the extension in special to get it to compile (and there are probably
others like this).

ext = Extension(parent_package+'special.cephes', sources,
                libraries = ['amos','toms','mach','g2c'])

This is bad because it is compiler specific.
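To sketch the kind of hook I have in mind (all of these names are
hypothetical -- nothing like this exists in scipy_distutils today):

class Gnu77:
    # what g77-compiled objects need at link time
    runtime_libraries = ['g2c']

class SomeOtherFortran:
    # made-up placeholder for another vendor's runtime
    runtime_libraries = ['ftn']

def libraries_for_link(ext_libraries, fcompiler):
    # the libraries the extension asked for, plus the compiler's runtime
    return ext_libraries + fcompiler.runtime_libraries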
So, I guess the question is: should we add a 'fortran_libraries' keyword
to the extension so that distutils knows to ask the fortran compiler
class what libraries are needed for linking?

Any better ideas?

eric

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

From prabhu at aero.iitm.ernet.in Sun Feb 17 03:01:59 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Sun, 17 Feb 2002 13:31:59 +0530
Subject: [SciPy-dev] sdist problems.
Message-ID: <15471.25463.531640.940135@monster.linux.in>

hi,

Sorry for being a pain, but the sdist feature does not work. I figured
that if you are going to do some work with scipy_distutils, maybe you
could take a peek at the sdist problem too. If sdist works it would make
life easier when I need to install scipy on machines that are behind a
firewall. Here is the error message that I got.

$ python setup.py sdist -d /tmp
file: build/generated_pyfs/flapack.pyf
file: build/generated_pyfs/clapack.pyf
file: build/generated_pyfs/fblas.pyf
file: build/generated_pyfs/cblas.pyf
running sdist
reading manifest file 'MANIFEST'
Traceback (most recent call last):
  File "setup.py", line 126, in ?
    install_package()
  File "setup.py", line 116, in install_package
    url = "http://www.scipy.org",
  File "scipy_distutils/core.py", line 43, in setup
    return old_setup(**new_attr)
  File "/usr/local/lib/python2.1/distutils/core.py", line 138, in setup
    dist.run_commands()
  File "/usr/local/lib/python2.1/distutils/dist.py", line 899, in run_commands
    self.run_command(cmd)
  File "/usr/local/lib/python2.1/distutils/dist.py", line 919, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python2.1/distutils/command/sdist.py", line 150, in run
    self.make_distribution()
  File "scipy_distutils/command/sdist.py", line 93, in make_distribution
    self.make_release_tree(base_dir, self.filelist.files)
  File "scipy_distutils/command/sdist.py", line 32, in make_release_tree
    dest_files = remove_common_base(files)
  File "scipy_distutils/command/sdist.py", line 126, in remove_common_base
    results = [string.replace(file,base,'') for file in files]
  File "/usr/local/lib/python2.1/string.py", line 369, in replace
    return s.replace(old, new, maxsplit)
ValueError: empty pattern string
$

I tried to modify scipy_distutils/command/sdist.py but was only fixing
symptoms, since I don't understand much of the code.

thanks,
prabhu

From eric at scipy.org Sun Feb 17 02:08:59 2002
From: eric at scipy.org (eric)
Date: Sun, 17 Feb 2002 02:08:59 -0500
Subject: [SciPy-dev] roots, poly, comparison operators, etc.
References: 
Message-ID: <019201c1b781$f9719800$6b01a8c0@ericlaptop>

----- Original Message -----
From: "Pearu Peterson" 
To: 
Sent: Saturday, February 16, 2002 5:22 AM
Subject: Re: [SciPy-dev] roots, poly, comparison operators, etc.

> On Sat, 16 Feb 2002, eric wrote:
>
> > Hey crew,
> >
> > I spent some quality time with roots() and poly() tonight trying to find
> > a solution to the rounding issue acceptable to all. In the process, I've
> > rewritten both of them. As Heiko suggested, it is easy to test whether
> > the roots are complex conjugates and reals in poly() and, if so, ignore
> > the imaginary part. Things are never as easy as they seem...
>
> > Here is what I have learned:
> >
> > roots() calculates eigenvalues under the covers. This is done using eig,
> > which in turn calls some LAPACK (Atlas) routine xgeev, where x is the
> > appropriate numeric type. This is where the trouble starts.
> > The values returned by eig() that should be complex conjugates are
> > actually off by as much as 2-3 digits of precision. I've compared the
> > output of SciPy to Matlab and Octave below.
>
> scipy.eig uses Fortran LAPACK (Octave also uses LAPACK, I think).
>
> Actually, the trouble starts from roots(), which forces scipy.eig to use
> flapack.zgeev though flapack.dgeev would be more appropriate (real
> input, more efficient). Namely, the function roots() uses the complex
> array
>
>   A = diag(Numeric.ones((N-2,),'D'),-1)
>
> for a real input array p. If I make the change
>
>   A = diag(Numeric.ones((N-2,),'d'),-1)
>
> then complex conjugates will match exactly:
>
> >>> r=scipy.roots([-1,1,2,1,2,3,4,5,6,7,8,9])
> >>> r[0],r[1]
> ((-0.20505419046158363+1.0643579627494124j),
>  (-0.20505419046158363-1.0643579627494124j))
>
> I believe that Matlab and Octave also use the most appropriate routines
> in their roots calculations. So, scipy.roots should be fixed so that for
> real and complex inputs scipy.eig will use flapack.dgeev and
> flapack.zgeev, respectively.

Thanks. I've fixed this and checked it into the CVS. So roots and poly
are now "rounding free" and seem to work.

eric

From pearu at cens.ioc.ee Sun Feb 17 03:34:25 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sun, 17 Feb 2002 10:34:25 +0200 (EET)
Subject: [SciPy-dev] side effects of changes to fortran_libraries?
In-Reply-To: <016901c1b77d$792aaf00$6b01a8c0@ericlaptop>
Message-ID: 

Hi Eric,

On Sun, 17 Feb 2002, eric wrote:

> I just did an update and rebuild and ran into a few issues with the
> Fortran stuff. Now that scipy_distutils requires explicit linking (which
> is a good thing), we don't get the libraries from the Fortran compiler
> added to the list of libraries needed for linking. I had to add 'g2c' to
> the extension in special to get it to compile (and there are probably
> others like this).
>
> ext = Extension(parent_package+'special.cephes', sources,
>                 libraries = ['amos','toms','mach','g2c'])
>
> This is bad because it is compiler specific. So, I guess the question
> is: should we add a 'fortran_libraries' keyword to the extension so that
> distutils knows to ask the fortran compiler class what libraries are
> needed for linking?
>
> Any better ideas?

Fixed in CVS. I just forgot to add compiler-specific libraries when the
extension needed fortran_libraries. Try now without 'g2c'.

This fix only works if libraries in Extension are really built from
Fortran sources. However, the problem remains for linalg, as it uses
fblas but linalg does not compile any Fortran files. So, scipy_distutils
thinks that there are no fortran_libraries needed, and therefore the
Fortran compiler specific libraries are not needed either. Currently,
atlas_info includes g2c in the libraries list, but it shouldn't.

As a fix, maybe fortran_libraries should also accept items with an empty
sources list, so that system-wide Fortran libraries can be specified
there and the Fortran compiler libraries get included when building. I
think when adding fortran_libraries to Extension, there will be the same
issues.

The unanswered question is how to determine whether a library, say
f77blas in get_atlas_info, needs to be linked against Fortran compiler
libraries, and against which Fortran compiler libraries. In
scipy_local.cfg maybe?

Pearu

From eric at scipy.org Sun Feb 17 03:36:39 2002
From: eric at scipy.org (eric)
Date: Sun, 17 Feb 2002 03:36:39 -0500
Subject: [SciPy-dev] roots, poly, comparison operators, etc.
References: 
Message-ID: <01ba01c1b78e$384c5450$6b01a8c0@ericlaptop>

> > * Array printing is far superior in Matlab and Octave -- they
> > generally always look nice. We should clean up how arrays are output.
> > Also, the "format long" and "format short", etc. options for specifying
> > how arrays are printed are pretty nice.
>
> I agree. Maybe ipython could be exploited here?

Does it do nice Numeric array printing and have something akin to the
format command? If so, yes. That would be good. Fernando, I think you're
the person to ask about this, correct? :-)

> In fact, rounding of tiny numbers to zeros could be done only when
> printing (though, personally, I wouldn't prefer that either, but I am
> just looking for a compromise), not inside calculation routines. In this
> way, no valuable information is lost when using these routines from
> other calculation routines, and computation will be even more efficient.
>
> > * On Matlab/Octave, sort() sorts by magnitude of complex and then by
> > angle. On the other hand ==, >, <, etc. seem to only compare the real
> > part of complex numbers.
> > These seem fine to me. I know they aren't mathematically correct, but
> > they seem to be pragmatically correct. I'd like comments on these
> > conventions and what others think.
>
> There is no mathematically correct way to compare complex numbers; they
> just cannot be ordered in a unique and sensible way.
>
> However, in different applications different conventions may be useful or
> reasonable for ordering complex numbers. Whatever the convention, its
> mathematical correctness is irrelevant and cannot be used as an argument
> for preferring one convention to another.

I'm not sure of your position on whether cmp() for complex values should
work. Are you against it or for it? My main gripe with no comparison is
that, for code to work generically, anywhere in SciPy that has a
comparison operator we have to change:

    if a < b:
        ...do whatever

to

    if a.typecode() in ['F','D'] or b.typecode() in ['F','D']:
        if a.real < b.real:
            ...do whatever
    elif a < b:
        ...do whatever

or something like that. I've run into this multiple times within my own
projects and have wished that there was some default behavior.

> I would propose providing a number of efficient comparison methods for
> complex (or any) numbers that users may use in sort functions as an
> optional argument. For example,
>
> scipy.sort([2,1+2j],cmpmth='abs') -> [1+2j,2]  # sorts by abs value
> scipy.sort([2,1+2j],cmpmth='real') -> [2,1+2j] # sorts by real part
> scipy.sort([2,1+2j],cmpmth='realimag')  # sorts by real then by imag
> scipy.sort([2,1+2j],cmpmth='imagreal')  # sorts by imag then by real
> scipy.sort([2,1+2j],cmpmth='absangle')  # sorts by abs then by angle
> etc.
> scipy.sort([2,1+2j],cmpfunc=)
>
> Note that
>
> scipy.sort([-1,1],cmpmth='absangle') -> [1,-1]
>
> which also demonstrates the arbitrariness of sorting complex numbers.

If we did this, instead of passing a string, we should pass in a cmp
function just as Python's list sort method takes. I think this can be
made fast and efficient for standard cases.

> Btw, why do you want to sort the output of roots()? As far as I know,
> there is no order defined for roots of polynomials. Maybe an optional
> flag could be provided here?

This was in the original code I started with, and I just left it in
there. The newest CVS does not have the sorting in it.

> > * Comparison of IEEE floats is hairy. It looks to me like Matlab and
> > Octave have chosen to limit precision to 15 digits. This seems like a
> > reasonable thing to do for SciPy also, but we'd currently have to
> > limit to 14 digits to deal with problems of imprecise LAPACK routines.
> > Pearu and Fernando are arguing against this, but maybe they wouldn't
> > mind a 15 digit limit for double and 6 for float. We'd have to modify
> > Numeric to do this internally on comparisons. There could be a flag
> > that enables exact comparisons.
> I hope this issue is solved by fixing roots(), and the accuracy of the
> LAPACK routines can also be rehabilitated now. (I don't claim that their
> output is always accurate; floating point numbers simply cannot be
> represented accurately in computer memory, in principle. It has nothing
> to do with the programs that manipulate these numbers; one can only
> improve the algorithms to minimize the computational errors.)
>
> The deep lesson to learn here is:
>   Fix the computation algorithm.
>   Do not fix the output of the computation.

Agreed -- when possible. Your solution to the roots() problem was indeed
the right one.

I spent the day digging a stump out of my yard and thinking about IEEE
floating point in SciPy -- what a way to spend a Saturday. Comparison of
numbers can lead to very subtle bugs that many scientists/engineers spend
days to find. I'd like SciPy to mitigate as many of these difficult
programming issues as possible so that programming doesn't get in the way
of solving problems for these people. Still, the choice to round has some
downsides also, and this may be one of those situations where the
solution is as bad as the problem.

Also, I played a little more with Matlab and Octave and found that they
don't do anything special to help out unsuspecting saps either. They
maintain standard IEEE floating point behavior with all the warts and
benefits that come with it.

>> a = 1
a = 1
>> b = 1.0 + 2.22e-16
b = 1.0000
>> a == b
ans = 0

For now, we'll table any thoughts of handling rounding internally and
focus on other things.

> Matlab and Octave are programs that are oriented to certain applications,
> namely, to engineering applications where linear algebra is very
> important.
>
> SciPy need not choose the same orientation as Matlab; however, it can
> certainly cover Matlab's orientations.
>
> Python is a general purpose language, and SciPy could also be a general
> purpose package for scientific computing (whatever the applications).

It is general purpose -- much of this is thanks to Python's huge library.
I don't think any of the things we are discussing break SciPy for other
applications. The fact that floating point precision is 2.22e-16 doesn't
break it currently. Moving this precision to 1e-15 wouldn't break it
either. Same thing for complex numbers and comparisons. Both of these
and many others are just issues that SciPy needs to examine. They both
impact programming in a significant way, and if we can make changes that
simplify programming for the average scientist/engineer, we should. It's
all a bunch of trade-offs.

Also, I don't really view Matlab and Octave as that narrowly applicable
-- they seem to cover a very wide breadth of science to me. They're both
excellent tools that we can learn from. The things they are missing are
what the Python language and its standard library bring to the table. If
SciPy can manage to be as complete and usable a scientific library as
these two programs, it will have succeeded.
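Just so we're talking about the same thing, here is the kind of
limited-precision comparison I keep imagining -- a pure-Python sketch
with an arbitrary digit count, not anything that exists in Numeric or
CVS:

import Numeric

def feq(a, b, digits=15):
    # true if a and b agree to roughly `digits` significant digits
    a = Numeric.asarray(a)
    b = Numeric.asarray(b)
    tol = 10.0**(-digits)
    scale = Numeric.maximum(Numeric.absolute(a), Numeric.absolute(b))
    diff = Numeric.absolute(a - b)
    return Numeric.alltrue(Numeric.ravel(Numeric.less_equal(diff, tol*scale)))

With digits=15, feq(1.0, 1.0 + 2.22e-16) comes out true even though ==
fails, which is the Matlab-ish behavior described above.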
eric

From oliphant.travis at ieee.org Sat Feb 16 11:58:34 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 16 Feb 2002 09:58:34 -0700
Subject: [SciPy-dev] Using IPython
Message-ID: 

What does this group think about packaging ipython with SciPy to get
access to an improved shell?

Is that something worth pursuing?

-Travis O.

From oliphant.travis at ieee.org Sun Feb 17 04:47:46 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sun, 17 Feb 2002 02:47:46 -0700
Subject: [SciPy-dev] roots, poly, comparison operators, etc.
In-Reply-To: <019201c1b781$f9719800$6b01a8c0@ericlaptop>
References: <019201c1b781$f9719800$6b01a8c0@ericlaptop>
Message-ID: 

On Sunday 17 February 2002 12:08 am, you wrote:
> ----- Original Message -----
> From: "Pearu Peterson" 
> To: 
> Sent: Saturday, February 16, 2002 5:22 AM
> Subject: Re: [SciPy-dev] roots, poly, comparison operators, etc.
>
> Thanks. I've fixed this and checked it into the CVS. So roots and poly
> are now "rounding free" and seem to work.
>
> eric

Good job, Eric.

I still don't understand why you compare equality of real and imaginary
parts separately. Numeric supports equality comparison of complex arrays.

-Travis O.

From eric at scipy.org Sun Feb 17 03:44:07 2002
From: eric at scipy.org (eric)
Date: Sun, 17 Feb 2002 03:44:07 -0500
Subject: [SciPy-dev] roots, poly, comparison operators, etc.
References: <019201c1b781$f9719800$6b01a8c0@ericlaptop>
Message-ID: <01c801c1b78f$437768f0$6b01a8c0@ericlaptop>

> Good job, Eric.
>
> I still don't understand why you compare equality of real and imaginary
> parts separately. Numeric supports equality comparison of complex arrays.

Right you are. I'll fix this. It is left over from testing an inequality
during all the rounding mumbo jumbo.

eric

From pearu at cens.ioc.ee Sun Feb 17 04:48:47 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sun, 17 Feb 2002 11:48:47 +0200 (EET)
Subject: [SciPy-dev] sdist problems.
In-Reply-To: <15471.25463.531640.940135@monster.linux.in>
Message-ID: 

On Sun, 17 Feb 2002, Prabhu Ramachandran wrote:

> Sorry for being a pain, but the sdist feature does not work. I figured
> that if you are going to do some work with scipy_distutils, maybe you
> could take a peek at the sdist problem too.

Fixed.

Pearu

From prabhu at aero.iitm.ernet.in Sun Feb 17 05:06:56 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Sun, 17 Feb 2002 15:36:56 +0530
Subject: [SciPy-dev] sdist problems.
In-Reply-To: 
References: <15471.25463.531640.940135@monster.linux.in>
Message-ID: <15471.32960.699499.307167@monster.linux.in>

>>>>> "PP" == Pearu Peterson writes:

PP> On Sun, 17 Feb 2002, Prabhu Ramachandran wrote:

>> Sorry for being a pain, but the sdist feature does not work. I
>> figured that if you are going to do some work with
>> scipy_distutils, maybe you could take a peek at the sdist
>> problem too.

PP> Fixed.

Thanks. Yes, the sdist command works, but the sdist itself does not seem
complete. For instance, there is no weave directory. The compiler
directory exists and contains nothing. The tests are also missing. For
instance, the gui_thread tests directory does not exist. The tutorial,
sparse, scipy_distutils, linalg2, weave directories are missing. Am I
doing something wrong?
prabhu

From heiko at hhenkelmann.de Sun Feb 17 07:27:39 2002
From: heiko at hhenkelmann.de (Heiko Henkelmann)
Date: Sun, 17 Feb 2002 13:27:39 +0100
Subject: [SciPy-dev] Problem in stats.py
Message-ID: <002701c1b7ae$7dd6b4a0$641ee33e@arrow>

Hello Folks,

I just discovered a problem in the latest version of stats.py. In line
1637 it tries to import special directly. Changing this to scipy.special
fixes it, but then it complains about not finding neginomcdfinv. For the
time being I reverted back to the old version of the stats module.

Heiko

From pearu at cens.ioc.ee Sun Feb 17 07:28:08 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sun, 17 Feb 2002 14:28:08 +0200 (EET)
Subject: [SciPy-dev] sdist problems.
In-Reply-To: <15471.32960.699499.307167@monster.linux.in>
Message-ID: 

On Sun, 17 Feb 2002, Prabhu Ramachandran wrote:

> >>>>> "PP" == Pearu Peterson writes:
>
> PP> On Sun, 17 Feb 2002, Prabhu Ramachandran wrote:
>
> >> Sorry for being a pain, but the sdist feature does not work. I
> >> figured that if you are going to do some work with
> >> scipy_distutils, maybe you could take a peek at the sdist
> >> problem too.
>
> PP> Fixed.
>
> Thanks. Yes, the sdist command works, but the sdist itself does not
> seem complete. For instance, there is no weave directory. The
> compiler directory exists and contains nothing. The tests are also
> missing. For instance, the gui_thread tests directory does not exist.
> The tutorial, sparse, scipy_distutils, linalg2, weave directories are
> missing. Am I doing something wrong?

Did you remove the MANIFEST file?

Pearu

From pearu at cens.ioc.ee Sun Feb 17 07:48:49 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sun, 17 Feb 2002 14:48:49 +0200 (EET)
Subject: [SciPy-dev] sdist problems.
In-Reply-To: 
Message-ID: 

On Sun, 17 Feb 2002, Pearu Peterson wrote:

> Did you remove the MANIFEST file?

Sorry, I was not clear here. I mean you _should_ remove the MANIFEST
file. It is a bug that MANIFEST is in the CVS repository. Instead we
should have MANIFEST.in or no MANIFEST at all in CVS.

Pearu

From prabhu at aero.iitm.ernet.in Sun Feb 17 13:15:52 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Sun, 17 Feb 2002 23:45:52 +0530
Subject: [SciPy-dev] sdist problems.
In-Reply-To: 
References: 
Message-ID: <15471.62296.189997.256061@monster.linux.in>

>>>>> "PP" == Pearu Peterson writes:

PP> On Sun, 17 Feb 2002, Pearu Peterson wrote:
>> Did you remove the MANIFEST file?

PP> Sorry, I was not clear here. I mean you _should_ remove the
PP> MANIFEST file. It is a bug that MANIFEST is in the CVS
PP> repository. Instead we should have MANIFEST.in or no MANIFEST
PP> at all in CVS.

Thanks. Shouldn't we do a cvs remove on the MANIFEST file in that case?

BTW, I just updated my copy of the cvs tree and removed the manifest
file in my local copy, and now get this error:

$ python setup.py sdist -d /tmp
Traceback (most recent call last):
  File "setup.py", line 126, in ?
    install_package()
  File "setup.py", line 85, in install_package
    config.append(setup_scipy.configuration())
  File "/skratch/prabhu/scipy/cvs/scipy/setup_scipy.py", line 8, in configuration
    config = default_config_dict()
  File "scipy_distutils/misc_util.py", line 247, in default_config_dict
    if full_name:
UnboundLocalError: local variable 'full_name' referenced before assignment

So I added a full_name = "" in the function and it seems to work okay.
I've committed this trivial change to CVS (a boiled-down illustration of
the failure mode follows below).

The sdist seems to build fine. I haven't checked whether this really
works properly or not, but I guess it should.
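To illustrate the failure mode, here is a boiled-down sketch (the real
default_config_dict is of course more involved):

def broken(name=None):
    if name:
        full_name = name   # 'full_name' is only bound on this branch...
    if full_name:          # ...so this raises UnboundLocalError when name is None
        pass

def fixed(name=None):
    full_name = ""         # bind it unconditionally first
    if name:
        full_name = name
    if full_name:
        pass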
For some reason gui_thread.tests is commented out in setup_gui_thread.py:

config['packages'].append(parent_package+'gui_thread')
#config['packages'].append(parent_package+'gui_thread.tests')

Can this be uncommented or is it there for a reason?

Also, the tutorial directory is not packaged as part of the sdist. Is
this by design?

prabhu

From pearu at cens.ioc.ee Sun Feb 17 13:29:15 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sun, 17 Feb 2002 20:29:15 +0200 (EET)
Subject: [SciPy-dev] sdist problems.
In-Reply-To: <15471.62296.189997.256061@monster.linux.in>
Message-ID: 

Hi Prabhu,

I have struggled with the sdist problem and then with building from the
tar-ball for almost 5 hours. There were _lots_ of bugs, but the good news
is that just a few minutes ago I got

  setup.py sdist
  setup.py build   # from tar-ball
  >>> import scipy

finally working. Huh. There are still many things to clean up, though --
some of the things you mentioned below among them.

I'll commit my changes and get back to you. Ok.

Pearu

On Sun, 17 Feb 2002, Prabhu Ramachandran wrote:

> >>>>> "PP" == Pearu Peterson writes:
>
> PP> On Sun, 17 Feb 2002, Pearu Peterson wrote:
>
> >> Did you remove the MANIFEST file?
>
> PP> Sorry, I was not clear here. I mean you _should_ remove the
> PP> MANIFEST file. It is a bug that MANIFEST is in the CVS
> PP> repository. Instead we should have MANIFEST.in or no MANIFEST
> PP> at all in CVS.
>
> Thanks. Shouldn't we do a cvs remove on the MANIFEST file in that case?
>
> BTW, I just updated my copy of the cvs tree and removed the manifest
> file in my local copy, and now get this error:
>
> $ python setup.py sdist -d /tmp
> Traceback (most recent call last):
>   File "setup.py", line 126, in ?
>     install_package()
>   File "setup.py", line 85, in install_package
>     config.append(setup_scipy.configuration())
>   File "/skratch/prabhu/scipy/cvs/scipy/setup_scipy.py", line 8, in configuration
>     config = default_config_dict()
>   File "scipy_distutils/misc_util.py", line 247, in default_config_dict
>     if full_name:
> UnboundLocalError: local variable 'full_name' referenced before assignment
>
> So I added a full_name = "" in the function and it seems to work okay.
> I've committed this trivial change to CVS.
>
> The sdist seems to build fine. I haven't checked whether this really
> works properly or not, but I guess it should.
>
> For some reason gui_thread.tests is commented out in
> setup_gui_thread.py
>
> config['packages'].append(parent_package+'gui_thread')
> #config['packages'].append(parent_package+'gui_thread.tests')
>
> Can this be uncommented or is it there for a reason?
>
> Also, the tutorial directory is not packaged as part of the sdist. Is
> this by design?

From prabhu at aero.iitm.ernet.in Sun Feb 17 14:03:46 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Mon, 18 Feb 2002 00:33:46 +0530
Subject: [SciPy-dev] Long running find in setup_xplt.py
Message-ID: <15471.65170.222055.873059@monster.linux.in>

hi,

Each time I run setup.py inside scipy, setup_xplt.py runs a find command
in /usr, which in my case has about 3-4 Gigs of stuff. My disk churns
for quite a long while when the command runs, and it's pretty irritating
when I update my scipy install.

Also, the code in

  def check_and_save(file='saved_values.py'):

appends to saved_values.py all the time. So right now I have about 38
entries in that file with the same entry 'X11 = 1'.

The configuration function tries to execute ../saved_values.py. I'm not
sure this will work when setup.py is run from the base directory.
So I tried changing this to execfile('saved_values.py'). That also fails,
because execfile does not seem to work properly when called inside a
function. I discovered this because I tried adding a print X11 after the
execfile and it fails!

    try:
        execfile('saved_values.py')
        print X11

And it fails here!

Traceback (most recent call last):
  File "setup.py", line 126, in ?
    install_package()
  File "setup.py", line 103, in install_package
    config.extend([get_package_config(x,parent_package) for x in unix_packages])
  File "setup.py", line 46, in get_package_config
    config = mod.configuration(parent)
  File "xplt/setup_xplt.py", line 18, in configuration
    print X11

So I tried this:

In [14]: def do_func():
   ....:     execfile('saved_values.py')
   ....:     print X11
   ....:

In [17]: execfile('saved_values.py')

In [18]: print X11
1

In [22]: del X11

In [23]: do_func()
In saved_values.py
---------------------------------------------------------------------------
NameError                        Traceback (most recent call last)
?
? in do_func()
NameError: global name 'X11' is not defined

OTOH, if I do an exec(file.read()) it works okay.

In [31]: def do_func():
   ....:     exec(open('saved_values.py').read())
   ....:     print X11
   ....:

In [33]: do_func()
In saved_values.py
1

Is this an execfile bug? Now there are no annoying finds. So, I'll go
ahead and change this in CVS.

prabhu

From pearu at cens.ioc.ee Sun Feb 17 14:22:42 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sun, 17 Feb 2002 21:22:42 +0200 (EET)
Subject: [SciPy-dev] scipy in CVS is sdist-build-import'able
Message-ID: 

Hi!

Good news! Read the subject.

Among many changes, now the version string of scipy is computed with
update_version. Currently it looks like this:

  SciPy-0.2.1294-alpha-2715

It is long and ugly but informative. Feel free to change it to a better
one in the scipy_version.py file. The meaning of each item is documented
in update_version.__doc__.

And there is one issue with calculating version numbers automatically
(see the comments in update_version for details). Basically, in order to
keep this version string updated in the scipy CVS repository, one needs
to use the following procedure when committing changes to the repository:

1) Commit your changes:
   cvs commit ...
2) Run
   python scipy_version.py
   which will update __version__.py
3) Commit __version__.py:
   cvs commit -m "Updating version" __version__.py

Steps 2) and 3) could be run automatically by the CVS server. But let's
see how it works first.

Regards,
Pearu

From pearu at cens.ioc.ee Sun Feb 17 14:41:27 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Sun, 17 Feb 2002 21:41:27 +0200 (EET)
Subject: [SciPy-dev] Long running find in setup_xplt.py
In-Reply-To: <15471.65170.222055.873059@monster.linux.in>
Message-ID: 

On Mon, 18 Feb 2002, Prabhu Ramachandran wrote:

> Each time I run setup.py inside scipy, setup_xplt.py runs a find
> command in /usr, which in my case has about 3-4 Gigs of stuff. My disk
> churns for quite a long while when the command runs, and it's pretty
> irritating when I update my scipy install.

Yes, I noticed that too. Good that you spotted the place; I didn't have
any idea where to look for this.

I think there must be a better way to check if X is installed. In fact,
can you imagine a Unix machine that has no X installed and someone wants
to run scipy there? It is difficult for me to imagine (though not
impossible). Any ideas how configure tools check for X?

So, I would propose that the xplt configuration would return None only if
os.name=='nt'. What do you think?
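For illustration, one cheap check could be something like this (just a
sketch; the candidate paths are guesses and certainly incomplete):

import os

def x11_library_dirs(candidates=['/usr/X11R6/lib', '/usr/lib', '/usr/local/lib']):
    # return the candidate directories that actually contain libX11
    dirs = []
    for d in candidates:
        if os.path.exists(os.path.join(d, 'libX11.so')) or \
           os.path.exists(os.path.join(d, 'libX11.a')):
            dirs.append(d)
    return dirs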
Pearu

From jhauser at ifm.uni-kiel.de Sun Feb 17 14:47:08 2002
From: jhauser at ifm.uni-kiel.de (Janko Hauser)
Date: Sun, 17 Feb 2002 20:47:08 +0100
Subject: [SciPy-dev] Using IPython
In-Reply-To: 
References: 
Message-ID: <15472.2236.206462.908298@ifm.uni-kiel.de>

Travis Oliphant writes:
>
> What does this group think about packaging ipython with SciPy to get
> access to an improved shell?
>
> Is that something worth pursuing?
>

I would find this a good addition. Perhaps in such a way that a
specialized setup is used, so that everything from scipy is already
imported.

I have worked on the documentation over the weekend and have seen that
there is a little namespace pollution if one does `from scipy import *',
which is probably the first step in an interactive session. Mainly, the
module names handy, basic, basic1a, misc are present together with the
functions from these modules. Then there are some things defined again
with a shortcut. Do you think this should be cleaned up? At the moment
there are 300 definitions in scipy.

Then a last one: to make the who command really useful, it needs to
suppress names from later imported modules. There is also a who command
in ipython.

Oh, and the really last one :-) Which help system do you want to use in
the future?

I think I will have a first set of documents ready at the beginning of
the next week. (HTML, PDF, PS; a good addition would be a special html
for the help widgets of wxpython)

__Janko

From fperez at pizero.colorado.edu Sun Feb 17 15:07:36 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Sun, 17 Feb 2002 13:07:36 -0700 (MST)
Subject: [SciPy-dev] Using IPython
In-Reply-To: 
Message-ID: 

On Sat, 16 Feb 2002, Travis Oliphant wrote:

> What does this group think about packaging ipython with SciPy to get
> access to an improved shell?
>
> Is that something worth pursuing?

If I may add a completely unbiased opinion to the discussion :) I wrote
IPython specifically with something like scipy in mind, using ideas from
the design of Mathematica and IDL along the way.

Obviously the code is free so anyone can use it, but I can commit to
helping with whatever you folks feel is needed to better enhance scipy.
So if there's enough interest in using it, you can count on not having to
pick up all the slack, as I'm actively interested in working further on
it.

Let me know what the opinions are, and I can give a bit of a report on
the current status of the program (for users), the code (for developers)
and future directions.

In case anyone doesn't have the url handy, it's
http://www-hep.colorado.edu/~fperez/ipython/

Thanks for the interest,

Fernando.

From prabhu at aero.iitm.ernet.in Sun Feb 17 14:10:39 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Mon, 18 Feb 2002 00:40:39 +0530
Subject: [SciPy-dev] sdist problems.
In-Reply-To: 
References: <15471.62296.189997.256061@monster.linux.in>
Message-ID: <15472.47.725000.189929@monster.linux.in>

Hi Pearu,

>>>>> "PP" == Pearu Peterson writes:

PP> I have struggled with the sdist problem and then with building
PP> from the tar-ball for almost 5 hours. There were _lots_ of bugs,
PP> but the good news is that just a few minutes ago I got
PP> setup.py sdist
PP> setup.py build # from tar-ball
PP> >>> import scipy
PP> finally working. Huh. There are still many things to clean
PP> up, though -- some of the things you mentioned below among them.

Wow! That's great! Thanks a lot!!

PP> I'll commit my changes and get back to you. Ok.
I just hope my changes to setup_xplt.py and misc_util.py don't trouble you with conflicts.

prabhu

From fperez at pizero.colorado.edu Sun Feb 17 15:20:19 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Sun, 17 Feb 2002 13:20:19 -0700 (MST)
Subject: [SciPy-dev] roots, poly, comparison operators, etc.
In-Reply-To: <01ba01c1b78e$384c5450$6b01a8c0@ericlaptop>
Message-ID: 

> > > * Array printing is far superior in Matlab and Octave -- they
> > > generally always look nice. We should clean up how arrays are output.
> > > Also, the "format long" and "format short", etc options for specifying
> > > how arrays are printed are pretty nice.
> >
> > I agree. May be ipython could be exploited here?
>
> Does it do nice Numeric array printing and have something akin to the format
> command? If so, yes. That would be good. Fernando, I think you're the person to
> ask about this, correct? :-)

Well, it does use pprint instead of print by default, but that only helps for normal python lists and dicts, not Numeric arrays. However, changing it is easy, and I'll be happy to do it. IPython's architecture was designed to make this kind of modularity easy.

I admit that right now making these changes isn't as clean as it could be, but that's why later this year I'm planning a fairly major rewrite, and I'm drafting a 'design notes' document for that. The current code is the product of a coding frenzy with little planning, so internally it's kind of messy. But it works quite well, and those changes are easy to make even now (just not too elegant).

Why don't you give me a few examples of how matlab prints things (I don't have access to it) and I'll try to code the changes in?

Cheers,

f.

From fperez at pizero.colorado.edu Sun Feb 17 15:31:59 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Sun, 17 Feb 2002 13:31:59 -0700 (MST)
Subject: [SciPy-dev] Using IPython
In-Reply-To: <15472.2236.206462.908298@ifm.uni-kiel.de>
Message-ID: 

> I would find this a good addition. Perhaps in such a way that a
> specialized setup is used, so that everything from scipy is already
> imported.

That's what 'profiles' in ipython are for. We'll just create an ipythonrc-scipy file, and doing

    $ ipython -p scipy

will load IPython with all the necessary customizations in place. Furthermore, the interactive namespace is protected there, so a @who will return a clean namespace at the beginning even if all of scipy has been loaded. I think this is the most reasonable behavior.

> I have worked on the documentation over the weekend and have seen
> that there is a little namespace pollution if one does `from scipy
> import *', which is probably the first step in an interactive
> session. Mainly the module names handy, basic, basic1a, misc are
> present together with the functions from these modules. Then there are
> some things defined again with a shortcut. Do you think this should be
> cleaned up? At the moment there are 300 definitions in scipy.

See my comment above.

> Then a last one, to make the who command really useful, it needs to
> suppress names from later imported modules. There is also a who command
> in ipython.

ipython has @who and @whos (more detail). If you need changes, I'm open to anything.

> Oh and the really last one :-) which help system you want to use in
> the future?

Sorry, is this question for ipython or for scipy? In IPython I think things are now clean: help accesses Python's help system (the pydoc-based one), and object?/??
gives information about a specific object. I yanked out all of your html-doc based help system from IPP, since I felt that it wasn't orthogonal enough to the pydoc-based help and I'd rather use a system which:

- everyone gets (it's part of the standard distribution)
- someone else maintains :)

Cheers,

f

From jhauser at ifm.uni-kiel.de Sun Feb 17 16:27:13 2002
From: jhauser at ifm.uni-kiel.de (Janko Hauser)
Date: Sun, 17 Feb 2002 22:27:13 +0100
Subject: [SciPy-dev] Using IPython
In-Reply-To: 
References: <15472.2236.206462.908298@ifm.uni-kiel.de>
Message-ID: <15472.8241.139999.112519@ifm.uni-kiel.de>

Fernando P?rez writes:
>
> > Then a last one, to make the who command really useful, it needs to
> > suppress names from later imported modules. There is also a who command
> > in ipython.
>
> ipython has @who and @whos (more detail). If you need changes, I'm open to
> anything.
>
> > Oh and the really last one :-) which help system you want to use in
> > the future?
>
> Sorry, is this question for ipython or for scipy?

It was more a heads-up that the scipy folks need to make a decision about this. I would like to use pydoc as you did in ipython. And you were right, IMO, to rip out the html-based thing. I write the html docs more as a kind of handbook, which would be a good place to give more examples, links between functions and short overview chapters. I also think about a kind of appendix with a presentation of different groupings. But that's not the first goal.

As I said, I think ipython would be the best interactive shell for scipy, also quite unbiased :-).

__Janko

From pearu at cens.ioc.ee Sun Feb 17 18:48:25 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 18 Feb 2002 01:48:25 +0200 (EET)
Subject: [SciPy-dev] scipy_distutils/*_info.py files
In-Reply-To: 
Message-ID: 

Hi,

I have introduced scipy_distutils/x11_info.py that, similarly to atlas_info, determines the locations of X11 libraries and header files. This should fix the long running setup_xplt.py issue. Let me know if I missed something.

Note also that x11_info.get_info returns a dictionary that contains items like libraries:list, library_dirs:list, etc. (this is different from the current atlas_info.get_atlas_info).

It seems that there will be a collection of *_info.py files. I wonder what would be the right place to keep these files. Currently they are in scipy_distutils, though scipy_distutils itself does not use them at all.

It is not a very important matter, but maybe somebody (Eric?) had a vision on this (unified specification, etc.) that we should try to follow.

Pearu

From fperez at pizero.colorado.edu Sun Feb 17 18:58:47 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Sun, 17 Feb 2002 16:58:47 -0700 (MST)
Subject: [SciPy-dev] scipy_distutils/*_info.py files
In-Reply-To: 
Message-ID: 

> It is not a very important matter, but maybe somebody (Eric?) had a
> vision on this (unified specification, etc.) that we should try to follow.

Why not have a generic SystemInfo class which all instantiate? Something like (bare pseudocode):

    class SystemInfo:
        def __init__(self,what_I_need_to_find):
            ...

        def get_info(self):
            ....
            return dict_with_info

Then you could do:

    x11_info = SystemInfo(lib = 'X11',...)
    atlas_info = SystemInfo(lib = 'atlas',...)

and so on. I know this is the barest, roughest of sketches. But hopefully my point is clear. The advantage of this would be to have a unified interface both for specifying what one is looking for (files, directories, libraries, versions, etc.)
and for the format of the output.

I think it would be worth doing it that way, as knowing what's installed in a system is a fairly common and generic problem.

Cheers,

f.

From fedor.baart at hccnet.nl Sun Feb 17 19:28:36 2002
From: fedor.baart at hccnet.nl (Fedor Baart)
Date: Mon, 18 Feb 2002 01:28:36 +0100
Subject: [SciPy-dev] (no subject)
Message-ID: <000001c1b813$34b9dd50$0200a8c0@amd>

I found a small inconsistency in the scipy/plt/plot_utility module in the autoticks function.

419         if is_base2(rng) and is_base2(upper) and rng > 4:
420             if rng == 2:
421                 interval = 1
422             elif rng == 4:
423                 interval = 4
424             else:
425                 interval = rng / 4 # maybe we want it 8
426         else:
427             interval = auto_interval((lower,upper))

The condition "and rng > 4" is inconsistent with rng == 2 and rng == 4. Also

From fedor.baart at hccnet.nl Sun Feb 17 19:37:43 2002
From: fedor.baart at hccnet.nl (Fedor Baart)
Date: Mon, 18 Feb 2002 01:37:43 +0100
Subject: [SciPy-dev] Plot_utlity
Message-ID: <000101c1b814$7aad77d0$0200a8c0@amd>

In module plot_utility in the calc_bound function the axis_bound variable is not used.

496         c1 = axis_bound = (quotient + 1) * interval
497         c2 = axis_bound = (quotient) * interval

I think

370     def auto_ticks(data_bounds, bounds_info = None):
371         if bounds_info==None:
372             bounds_info = ['auto','auto','auto']

is a bit more elegant than:

369     default_bounds = ['auto','auto','auto']
370     def auto_ticks(data_bounds, bounds_info = default_bounds):

From fperez at pizero.colorado.edu Sun Feb 17 21:02:51 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Sun, 17 Feb 2002 19:02:51 -0700 (MST)
Subject: [SciPy-dev] Using IPython
In-Reply-To: <15472.8241.139999.112519@ifm.uni-kiel.de>
Message-ID: 

Hi all,

I've uploaded an updated IPython (0.2.6pre2) which includes some recent bugfixes and breaks down the code into a few more files, making it easier to understand for those not familiar with it.

Since there seems to be some interest in using ipython for scipy, I'd encourage you to test using this version. Let me know of any bugs you find, and also, when Eric gets a chance, post the array printing examples so I can implement that functionality.

Cheers,

f.

From prabhu at aero.iitm.ernet.in Sun Feb 17 22:18:41 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Mon, 18 Feb 2002 08:48:41 +0530
Subject: [SciPy-dev] scipy in CVS is sdist-build-import'able
In-Reply-To: 
References: 
Message-ID: <15472.29329.404490.897330@monster.linux.in>

>>>>> "PP" == Pearu Peterson writes:

PP> Good news! Read the subject.

PP> Among many changes, now the version string of scipy is
PP> computed with update_version. Currently it looks like this
PP>     SciPy-0.2.1294-alpha-2715
PP> It is long and ugly but

This is very nice. Thanks a lot. I'll try it out and let you know how it goes.

prabhu

From fperez at pizero.colorado.edu Sun Feb 17 22:20:09 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Sun, 17 Feb 2002 20:20:09 -0700 (MST)
Subject: [SciPy-dev] Using IPython
In-Reply-To: <15472.8241.139999.112519@ifm.uni-kiel.de>
Message-ID: 

On Sun, 17 Feb 2002, Janko Hauser wrote:

> It was more a heads-up that the scipy folks need to make a decision
> about this. I would like to use pydoc as you did in ipython. And you
> were right IMO to rip out the html-based thing. I write the html docs
> more as a kind of handbook, which would be a good place to give
> more examples, links between functions and short overview chapters.
> I also think about a kind of appendix with a presentation of different
> groupings. But that's not the first goal.

Well, I'd say let's go with the pydoc-based help system. As I said before, it's one less thing we have to worry about maintaining/distributing. And it's quite good; it will document everything that can be found in sys.path. So as long as each individual module is well documented, the information will be available to the users.

Remember that pydoc has a fantastic web-server mode:

    $ pydoc -p 7464 &

will run pydoc as a webserver on localhost:7464, with nice html docs for everything in your sys.path. I use that constantly when developing, it's beautiful. We could stress that system's existence in the docs and make sure that each module is itself well documented.

In the long term, the issue of maintaining separate manuals and in-module docstrings can be addressed. My preference is having only lyx files with all the docs as 'masters', and generating html, PostScript, PDF, etc. from those (that's what I do for IPython). One could make sure that at least the module top-level __doc__'s are set from the masters and would thus get:

- single-source manuals to be maintained, in LyX (or raw LaTeX)
- multiple output formats from that source (html, PS, PDF)
- module __doc__'s, accessible to pydoc for nice web browsing and the builtin help system.

If you're interested, I have a simple wrapper around pdflatex and latex2html called lyxport at http://www-hep.colorado.edu/~fperez/lyxport. I did it last summer right before learning python, so it's in Perl :(. But hey, you don't need to look inside, just use it :) It's convenient and works around some of the annoyances of both latex2html and pdflatex.

> As I said, I think ipython would be the best interactive shell for
> scipy, also quite unbiased :-).

Hey, if it weren't for your IPP, IPython wouldn't exist at all. My initial idea was a simple prompt system for python, nothing else. Only when I read about your IPP did I realize much more could be done. And my original system only used python's hooks (sys.displayhook, sys.ps1/2, etc.) which are nice but ultimately limited. What got me going was seeing IPP's design based off the code.py module. After that, I wrote the tangled mess we have today. So you have every right to be 'unbiased' about IPython :) I'm really glad someone else is using it, since it was quite a bit of work.

Cheers,

f.

PS. Do you use lyx by any chance? I've been meaning to convert IPython's docs to the standard Python howto layout, but I don't feel like making lyx .layout files from the python latex styles. Since I know you've been working on documentation issues, I was wondering. There's been talk in the lyx mailing lists of writing those layout files, but nobody seems to get around to actually doing it (it's not hard, just a matter of overcoming the startup inertia and having some time for it).

From eric at scipy.org Mon Feb 18 01:25:41 2002
From: eric at scipy.org (eric)
Date: Mon, 18 Feb 2002 01:25:41 -0500
Subject: [SciPy-dev] Problem in stats.py
References: <002701c1b7ae$7dd6b4a0$641ee33e@arrow>
Message-ID: <038801c1b845$16be77c0$6b01a8c0@ericlaptop>

> Hello Folks,
>
> I just discovered a problem in the latest version of stats.py. In line 1637
> it tries to import special directly. Changing this in scipy.special fixes
> this, but then it is complaining about not finding neginomcdfinv. For the
> time being I reverted back to the old version of the stats module.
>
> Heiko
Travis O. cleaned this up, but I'm getting the same problem - except with possioncdf. If I try:

C:\WINDOWS\SYSTEM32>python
Python 2.1.1 (#20, Jul 20 2001, 01:19:29) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> from scipy import *
Traceback (most recent call last):
  File "", line 1, in ?
  File "C:\Python21\scipy\__init__.py", line 74, in ?
    names2all(__all__, _level0, globals())
  File "C:\Python21\scipy\__init__.py", line 42, in names2all
    exec("import %s" % name, gldict)
  File "", line 1, in ?
  File "C:\Python21\scipy\misc.py", line 22, in ?
    import scipy.stats
  File "C:\Python21\scipy\stats\__init__.py", line 4, in ?
    from stats import *
  File "C:\Python21\scipy\stats\stats.py", line 1637, in ?
    from scipy.special import binomcdf, binomcdfc, binomcdfinv, betacdf, betaq, fcdf, \
ImportError: cannot import name possioncdf

But, if I comment out this:

from scipy.special import binomcdf, binomcdfc, binomcdfinv, betacdf, betaq, fcdf, \
     fcdfc, fp, gammacdf, gammacdfc, gammaq, negbinomcdf, negbinomcdfinv #, \
#    possioncdf, poissioncdfc, possioncdfinv, studentcdf, studentq, \
#    chi2cdf, chi2cdfc, chi2p, normalcdf, normalq, smirnovcdfc, smirnovp, \
#    kolmogorovcdfc, kolmogorovp

>>> from scipy import *
>>>

What gets me is that a dir(scipy.special) shows that possioncdf and friends are there as they should be. I don't understand, then, why the import statement fails. It seems like either they all would fail, or none would fail. Any ideas why the first 14 functions work and the others fail?

eric

From eric at scipy.org Mon Feb 18 01:48:19 2002
From: eric at scipy.org (eric)
Date: Mon, 18 Feb 2002 01:48:19 -0500
Subject: [SciPy-dev] scipy in CVS is sdist-build-import'able
References: 
Message-ID: <039601c1b848$407b40e0$6b01a8c0@ericlaptop>

Hey Pearu,

> Good news! Read the subject.

Excellent. Sounds like this was a major pain. Thanks.

> Among many changes, now the version string of scipy is computed with
> update_version. Currently it looks like this
>     SciPy-0.2.1294-alpha-2715

I hadn't thought much about version numbers -- except for the fact that we needed one. :-) This is verbose, but fine with me. Thanks.

> It is long and ugly but informative. Feel free to change it to a better
> one in the scipy_version.py file.
> The meaning of each item is documented in
> update_version.__doc__.
>
> And there is one issue with calculating version numbers automatically
> (see the comments in update_version for details). Basically, in order to
> keep this version string updated in the scipy CVS repository, one needs to
> use the following procedure when committing changes to the repository:
>
> 1) Commit your changes
>        cvs commit ...
>
> 2) Run
>        python scipy_version.py
>    this will update __version__.py
>
> 3) Commit __version__.py
>        cvs commit -m "Updating version" __version__.py

Ok. I'll start trying this with my checkins.

> Steps 2) and 3) could be run automatically by the CVS server. But let's see
> how it works first.

Assuming all goes well, is it hard to set up?

eric

From eric at scipy.org Mon Feb 18 02:31:43 2002
From: eric at scipy.org (eric)
Date: Mon, 18 Feb 2002 02:31:43 -0500
Subject: [SciPy-dev] Octave array formatting examples
References: 
Message-ID: <03ae01c1b84e$50af2520$6b01a8c0@ericlaptop>

Hey Fernando,

Here are some sample octave outputs. At the end is the output of the format command options and their meanings for Octave.
octave:1> a = roots([1,2,3,4,5,6,7,8,9])
a =
  -1.28876 + 0.44768i
  -1.28876 - 0.44768i
  -0.72436 + 1.13698i
  -0.72436 - 1.13698i
   0.13639 + 1.30495i
   0.13639 - 1.30495i
   0.87673 + 0.88137i
   0.87673 - 0.88137i

octave:2> format long
octave:3> a
a =
  -1.288756587957740 + 0.447682305901415i
  -1.288756587957740 - 0.447682305901415i
  -0.724360527656508 + 1.136975343061895i
  -0.724360527656508 - 1.136975343061895i
   0.136385310220365 + 1.304952920520545i
   0.136385310220365 - 1.304952920520545i
   0.876731805393882 + 0.881372126823502i
   0.876731805393882 - 0.881372126823502i

octave:10> a = rand(5,5)
a =
 Columns 1 through 4:
  0.2383920848369598  0.4348705410957336  0.8961446881294250  0.6055997014045715
  0.6609045863151550  0.5333006978034973  0.0284115206450224  0.2143878042697906
  0.5080860257148743  0.1541089564561844  0.8641215562820435  0.5571301579475403
  0.1096438020467758  0.9567501544952393  0.6171138286590576  0.9123255014419556
  0.6439611911773682  0.1768558770418167  0.1341447830200195  0.2482996433973312
 Column 5:
  0.7593351602554321
  0.6312018036842346
  0.5956992506980896
  0.1406104713678360
  0.3457353711128235

octave:16> a = rand(5,5)+rand(5,5)*j
a =
 Column 1:
  0.6902413368225098 + 0.9270709753036499i
  0.0714196115732193 + 0.9306957125663757i
  0.4022956192493439 + 0.1311691850423813i
  0.8854562044143677 + 0.0498457774519920i
  0.8829681277275085 + 0.3563515543937683i
 Column 2:
  0.4604839682579041 + 0.3887093961238861i
  0.6259743571281433 + 0.3982550501823425i
  0.4543380439281464 + 0.5150275826454163i
  0.6959248781204224 + 0.1767881363630295i
  0.0839983001351357 + 0.6666156053543091i
 Column 3:
  0.7222939729690552 + 0.7316601276397705i
  0.7881859540939331 + 0.1195990145206451i
  0.0002050292678177 + 0.8647435903549194i
  0.0405133590102196 + 0.4060045480728149i
  0.8830721378326416 + 0.2345113456249237i
 Column 4:
  0.9146161675453186 + 0.3588070273399353i
  0.1572253555059433 + 0.8799443244934082i
  0.6621940135955811 + 0.0548975020647049i
  0.3020973503589630 + 0.7739759683609009i
  0.1915321201086044 + 0.8278477191925049i
 Column 5:
  0.2164765596389771 + 0.9701768159866333i
  0.4962071478366852 + 0.0528524965047836i
  0.9886205792427063 + 0.4764747023582458i
  0.3292922079563141 + 0.4794612824916840i
  0.9707457423210144 + 0.2543968856334686i

octave:17> format short
octave:18> a = rand(5,5)+rand(5,5)*j
a =
 Columns 1 through 4:
  0.6579 + 0.3181i  0.6785 + 0.1699i  0.8595 + 0.5765i  0.9841 + 0.9808i
  0.1261 + 0.3136i  0.0035 + 0.0643i  0.4204 + 0.4313i  0.6348 + 0.4895i
  0.4532 + 0.2377i  0.0467 + 0.0851i  0.1086 + 0.5233i  0.7480 + 0.5024i
  0.3988 + 0.3001i  0.8262 + 0.2479i  0.6966 + 0.9397i  0.6697 + 0.5029i
  0.1975 + 0.3135i  0.8525 + 0.3550i  0.8608 + 0.1164i  0.3418 + 0.3752i
 Column 5:
  0.4255 + 0.1573i
  0.1951 + 0.3906i
  0.4658 + 0.4730i
  0.7997 + 0.6897i
  0.5093 + 0.7083i

octave:4> help format
format is a built-in text function

 - Command: format options
     Control the format of the output produced by `disp' and Octave's
     normal echoing mechanism. Valid options are listed in the
     following table.

    `short'
          Octave will try to print numbers with at least 3 significant
          figures within a field that is a maximum of 8 characters wide.

          If Octave is unable to format a matrix so that columns line
          up on the decimal point and all the numbers fit within the
          maximum field width, it switches to an `e' format.

    `long'
          Octave will try to print numbers with at least 15 significant
          figures within a field that is a maximum of 24 characters wide.
          As with the `short' format, Octave will switch to an `e'
          format if it is unable to format a matrix so that columns
          line up on the decimal point and all the numbers fit within
          the maximum field width.

    `long e'
    `short e'
          The same as `format long' or `format short' but always
          display output with an `e' format. For example, with the
          `short e' format, pi is displayed as `3.14e+00'.

    `long E'
    `short E'
          The same as `format long e' or `format short e' but always
          display output with an uppercase `E' format. For example,
          with the `long E' format, pi is displayed as
          `3.14159265358979E+00'.

    `free'
    `none'
          Print output in free format, without trying to line up
          columns of matrices on the decimal point. This also causes
          complex numbers to be formatted like this `(0.604194,
          0.607088)' instead of like this `0.60419 + 0.60709i'.

    `bank'
          Print in a fixed format with two places to the right of the
          decimal point.

    `+'
          Print a `+' symbol for nonzero matrix elements and a space
          for zero matrix elements. This format can be very useful for
          examining the structure of a large matrix.

    `hex'
          Print the hexadecimal representation of numbers as they are
          stored in memory. For example, on a workstation which stores
          8 byte real values in IEEE format with the least significant
          byte first, the value of `pi' when printed in `hex' format
          is `400921fb54442d18'. This format only works for numeric
          values.

    `bit'
          Print the bit representation of numbers as stored in memory.
          For example, the value of `pi' is

          01000000000010010010000111111011
          01010100010001000010110100011000

          (shown here in two 32 bit sections for typesetting purposes)
          when printed in bit format on a workstation which stores 8
          byte real values in IEEE format with the least significant
          byte first. This format only works for numeric types.

     By default, Octave will try to print numbers with at least 5
     significant figures within a field that is a maximum of 10
     characters wide.

     If Octave is unable to format a matrix so that columns line up on
     the decimal point and all the numbers fit within the maximum field
     width, it switches to an `e' format.

     If `format' is invoked without any options, the default format
     state is restored.

From eric at scipy.org Mon Feb 18 02:46:35 2002
From: eric at scipy.org (eric)
Date: Mon, 18 Feb 2002 02:46:35 -0500
Subject: [SciPy-dev] Using IPython
References: <15472.2236.206462.908298@ifm.uni-kiel.de>
Message-ID: <03bc01c1b850$6446a390$6b01a8c0@ericlaptop>

> I have worked on the documentation over the weekend and have seen
> that there is a little namespace pollution if one does `from scipy
> import *', which is probably the first step in an interactive
> session. Mainly the module names handy, basic, basic1a, misc are
> present together with the functions from these modules. Then there are
> some things defined again with a shortcut. Do you think this should be
> cleaned up?

I'd like to say yes -- and it should be possible. However, there are implications for the testing framework. In the past when we deleted misc in __init__.py after all its stuff was imported, the testing framework failed to find the test suites for misc -- and died a premature death. This probably can be addressed, but I'm not sure how at the moment.

> At the moment there are 300 definitions in scipy.

There does need to be some more thought put into what is put in the top level. I think it necessarily holds a large number of functions, but we could certainly trim some fat if any is found.
I've often wished that there was a "magic variable" in modules that one could define to omit names from the name-completion tools in GUIs. If you "from Numeric import *" in a stats module, it'd be nice to say, yeah, I want them in the namespace, but don't try to auto-complete any names from Numeric when someone types stats.xxx.

> Then a last one, to make the who command really useful, it needs to
> suppress names from later imported modules. There is also a who command
> in ipython.
>
> Oh and the really last one :-) which help system you want to use in
> the future?

pydoc is fine I think, but it does have a few problems. It doesn't work well with PythonWin. Also, it is too verbose for me when printing out help for a module. I don't want to see a gazillion lines of text describing every function and class. Instead, I'd like to see the module doc-string and perhaps a list of available functions. Again, Octave and Matlab strike a decent balance here.

I started on something like this, but it is far from ready to go. Maybe pydoc with a few modifications to turn down verbosity would be better than rolling another completely different solution.

One other thing. The color choices for pydoc generated HTML pages could use some, uh, modifications... : )

> I think I will have a first set of documents ready at the beginning of
> the next week. (HTML, PDF, PS, a good addition would be a special html
> for the help widgets of wxpython)

Good news! Thanks Janko.

eric

From jhauser at ifm.uni-kiel.de Mon Feb 18 04:08:23 2002
From: jhauser at ifm.uni-kiel.de (Janko Hauser)
Date: Mon, 18 Feb 2002 10:08:23 +0100
Subject: [SciPy-dev] Using IPython
In-Reply-To: <03bc01c1b850$6446a390$6b01a8c0@ericlaptop>
References: <15472.2236.206462.908298@ifm.uni-kiel.de> <03bc01c1b850$6446a390$6b01a8c0@ericlaptop>
Message-ID: <15472.50311.306468.401908@ifm.uni-kiel.de>

eric writes:
>
> I'd like to say yes -- and it should be possible. However, there are
> implications for the testing framework. In the past when we deleted misc in
> __init__.py after all its stuff was imported, the testing framework failed to
> find the test suites for misc -- and died a premature death. This probably can
> be addressed, but I'm not sure how at the moment.

Ok, will look into this.

> > At the moment there are 300 definitions in scipy.
>
> There does need to be some more thought put into what is put in the top level.
> I think it necessarily holds a large number of functions, but we could certainly
> trim some fat if any is found.
>
> I've often wished that there was a "magic variable" in modules that one could
> define to omit names from the name-completion tools in GUIs.

There is the __all__ directive in recent pythons, which allows you to specify which names should be exported from a module. Haven't tested this at the command prompt, will also do this and report back.

__Janko

From prabhu at aero.iitm.ernet.in Mon Feb 18 06:08:25 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Mon, 18 Feb 2002 16:38:25 +0530
Subject: [SciPy-dev] Long running find in setup_xplt.py
In-Reply-To: 
References: <15471.65170.222055.873059@monster.linux.in>
Message-ID: <15472.57513.397807.114680@monster.linux.in>

>>>>> "PP" == Pearu Peterson writes:

PP> I think there must be a better way to check if X is
PP> installed. In fact, can you imagine a Unix machine that has no
PP> X installed where someone wants to run scipy? It is
PP> hard for me to imagine (though not impossible). Any ideas how
PP> configure tools check for X?
I think they do pretty much what you have done in x11_info.py. So, I think it's fine for now.

prabhu

From eric at scipy.org Mon Feb 18 11:53:39 2002
From: eric at scipy.org (eric)
Date: Mon, 18 Feb 2002 11:53:39 -0500
Subject: [SciPy-dev] scipy_distutils/*_info.py files
References: 
Message-ID: <043c01c1b89c$d076a550$6b01a8c0@ericlaptop>

----- Original Message -----
From: "Pearu Peterson"
Sent: Sunday, February 17, 2002 6:48 PM
Subject: [SciPy-dev] scipy_distutils/*_info.py files

> Hi,
>
> I have introduced scipy_distutils/x11_info.py that, similarly to atlas_info,
> determines the locations of X11 libraries and header files.
> This should fix the long running setup_xplt.py issue. Let me know if I
> missed something.
>
> Note also that x11_info.get_info returns a dictionary that contains items
> like libraries:list, library_dirs:list, etc. (this is different from
> the current atlas_info.get_atlas_info).

I like your approach better. It also mirrors how the configuration() functions in setup_xxx.py work. Let's plan on returning dictionaries from now on, and also on fixing the atlas_info module.

> It seems that there will be a collection of *_info.py files. I wonder
> what would be the right place to keep these files. Currently they are in
> scipy_distutils, though scipy_distutils itself does not use them at
> all.

Yeah, they went there originally because they were part of the setup process. Also, they seem useful to other people trying to build modules. For example, other developers building modules that use LAPACK can use the atlas_info file. This makes me think they should all live in a central place so that people know where to look for them. scipy_distutils seems as good a place as any. I guess we could make a scipy_distutils.config sub-package directory. Maybe that would be better?

> It is not a very important matter, but maybe somebody (Eric?) had a
> vision on this (unified specification, etc.) that we should try to follow.

Not very important to me. I just want them somewhere easily accessible to the setup_xxx scripts. I don't think they should be in a sub-package of SciPy because this presents problems during the install phase of SciPy. If anyone else has a grand plan (that doesn't include re-writing autoconf in Python...) let's talk about it.

eric

From oliphant.travis at ieee.org Mon Feb 18 13:09:27 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Mon, 18 Feb 2002 11:09:27 -0700
Subject: [SciPy-dev] Using IPython
In-Reply-To: <15472.8241.139999.112519@ifm.uni-kiel.de>
References: <15472.2236.206462.908298@ifm.uni-kiel.de> <15472.8241.139999.112519@ifm.uni-kiel.de>
Message-ID: 

On Sunday 17 February 2002 02:27 pm, you wrote:
> Fernando P?rez writes:
> > > Then a last one, to make the who command really useful, it needs to
> > > suppress names from later imported modules. There is also a who command
> > > in ipython.
> >
> > ipython has @who and @whos (more detail). If you need changes, I'm open
> > to anything.
> >
> > > Oh and the really last one :-) which help system you want to use in
> > > the future?
> >
> > Sorry, is this question for ipython or for scipy?
>
> It was more a heads-up that the scipy folks need to make a decision
> about this. I would like to use pydoc as you did in ipython. And you
> were right IMO to rip out the html-based thing. I write the html docs
> more as a kind of handbook, which would be a good place to give
> more examples, links between functions and short overview chapters.
> I also think about a kind of appendix with a presentation of different
> groupings. But that's not the first goal.

The problem is pydoc uses a pager which does not work inside of an emacs session, so that help system is broken in some terminals. Also, the scipy help system breaks up lines better for long argument lists in functions.

The scipy help command now allows you to enter help('somestring') and it will search for an object down the scipy tree with that name and display help on it.

-Travis O.

From eric at scipy.org Mon Feb 18 12:04:51 2002
From: eric at scipy.org (eric)
Date: Mon, 18 Feb 2002 12:04:51 -0500
Subject: [SciPy-dev] scipy_distutils/*_info.py files
References: 
Message-ID: <044501c1b89e$616dbd90$6b01a8c0@ericlaptop>

> > It is not a very important matter, but maybe somebody (Eric?) had a
> > vision on this (unified specification, etc.) that we should try to follow.
>
> Why not have a generic SystemInfo class which all instantiate?
> Something like (bare pseudocode):
>
>     class SystemInfo:
>         def __init__(self,what_I_need_to_find):
>             ...
>
>         def get_info(self):
>             ....
>             return dict_with_info
>
> Then you could do:
>
>     x11_info = SystemInfo(lib = 'X11',...)
>     atlas_info = SystemInfo(lib = 'atlas',...)

The idea of a class is good. I think the specific cases are special enough that they should be sub-classes instead of variables handed in, so that:

    class system_info:
        ...
        def get_info(self):
            ...
        def search_local(self):
            """ Search system for directories. """
        def download_from_remote(self):
            """ Some provision for grabbing libraries from remote
                locations like scipy.org if the person wants this
                to happen. """

    class atlas_info(system_info):
        """ Special cased to handle atlas specific stuff """

> and so on. I know this is the barest, roughest of sketches. But hopefully my
> point is clear. The advantage of this would be to have a unified interface
> both for specifying what one is looking for (files, directories, libraries,
> versions, etc.) and for the format of the output.
>
> I think it would be worth doing it that way as knowing what's installed in a
> system is a fairly common and generic problem.

Agreed. As long as we don't get in the business of trying to write autoconf. :-)

eric

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oliphant.travis at ieee.org Mon Feb 18 13:16:11 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Mon, 18 Feb 2002 11:16:11 -0700
Subject: [SciPy-dev] Problem in stats.py
In-Reply-To: <038801c1b845$16be77c0$6b01a8c0@ericlaptop>
References: <002701c1b7ae$7dd6b4a0$641ee33e@arrow> <038801c1b845$16be77c0$6b01a8c0@ericlaptop>
Message-ID: 

On Sunday 17 February 2002 11:25 pm, you wrote:
> > Hello Folks,
> But, if I comment out this:
>
> from scipy.special import binomcdf, binomcdfc, binomcdfinv, betacdf, betaq,
> fcdf, \
>      fcdfc, fp, gammacdf, gammacdfc, gammaq, negbinomcdf, negbinomcdfinv #, \
> #    possioncdf, poissioncdfc, possioncdfinv, studentcdf, studentq, \
> #    chi2cdf, chi2cdfc, chi2p, normalcdf, normalq, smirnovcdfc, smirnovp, \
> #    kolmogorovcdfc, kolmogorovp
>
> >>> from scipy import *
> >>>
>
> What gets me is that a dir(scipy.special) shows that possioncdf and friends
> are there as they should be. I don't understand, then, why the import
> statement fails. It seems like either they all would fail, or none would
> fail. Any ideas why the first 14 functions work and the others fail?

Sorry, subtle spelling error here. It's poissoncdf -- I fixed it.
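That also explains why the first 14 names imported fine: a from-import binds each name in turn, left to right, and only raises when it hits the first missing one. A quick demo -- the module and names here are made up, but this is what the CPython I have handy does:

    # somemod.py contains just:  real_name = 1

    >>> try:
    ...     from somemod import real_name, mis_spelled
    ... except ImportError, msg:
    ...     print msg
    ...
    cannot import name mis_spelled
    >>> real_name    # names listed before the bad one were still bound
    1

So dir() showing the correctly spelled names is perfectly consistent with the import dying on the misspelled one.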
-Travis

From eric at scipy.org Mon Feb 18 12:38:01 2002
From: eric at scipy.org (eric)
Date: Mon, 18 Feb 2002 12:38:01 -0500
Subject: [SciPy-dev] Plot_utlity
References: <000101c1b814$7aad77d0$0200a8c0@amd>
Message-ID: <049501c1b8a3$035d7100$6b01a8c0@ericlaptop>

----- Original Message -----
From: "Fedor Baart"
Sent: Sunday, February 17, 2002 7:37 PM
Subject: [SciPy-dev] Plot_utlity

> In module plot_utility in the calc_bound function the axis_bound
> variable is not used.
>
> 496         c1 = axis_bound = (quotient + 1) * interval
> 497         c2 = axis_bound = (quotient) * interval
>
> I think
>
> 370     def auto_ticks(data_bounds, bounds_info = None):
> 371         if bounds_info==None:
> 372             bounds_info = ['auto','auto','auto']
>
> is a bit more elegant than:
>
> 369     default_bounds = ['auto','auto','auto']
> 370     def auto_ticks(data_bounds, bounds_info = default_bounds):

default_bounds may have been intended to be visible outside the module. Whether that was ever used, I'm not sure. You're right though. The test for None is better and much safer. In this case, it isn't causing any problems (besides aesthetics). The plt module has rarely been accused of "elegance." Hopefully the 2nd cut will make Audrey Hepburn envious.

I haven't seen anything in any of these corrections that would account for the problems you were having. Are they still occurring, or have you found the root of the problem?

eric

> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From fperez at pizero.colorado.edu Mon Feb 18 14:50:24 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Mon, 18 Feb 2002 12:50:24 -0700 (MST)
Subject: [SciPy-dev] Re: Octave array formatting examples
In-Reply-To: <03ae01c1b84e$50af2520$6b01a8c0@ericlaptop>
Message-ID: 

On Mon, 18 Feb 2002, eric wrote:

> Hey Fernando,
>
> Here are some sample octave outputs. At the end is the output of the format
> command options and their meanings for Octave.

Thanks. Some comments:

1. It can all be done in IPython, no problem. The format thing can be
   implemented as a magic command which sets a global (internal) flag, and
   the printing subsystem queries that flag at print time.

2. It's big. So it's not the kind of thing I'll do in a day or two (with all
   the other things going on right now).

But let's see where the discussion goes, and if there is indeed some consensus on using IPython for scipy, I'll try to whip out some proof-of-concept code. But the IPython design is exactly adapted to this, so the only limiting factor right now is my time, nothing else. Anyone who wishes to jump in is welcome to do so, I can guide them a bit through the code.

Cheers,

f.

From fperez at pizero.colorado.edu Mon Feb 18 14:55:50 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Mon, 18 Feb 2002 12:55:50 -0700 (MST)
Subject: [SciPy-dev] Using IPython
In-Reply-To: <03bc01c1b850$6446a390$6b01a8c0@ericlaptop>
Message-ID: 

> > Oh and the really last one :-) which help system you want to use in
> > the future?
>
> pydoc is fine I think, but it does have a few problems. It doesn't work well
> with PythonWin. Also, it is too verbose for me when printing out help for a
> module. I don't want to see a gazillion lines of text describing every function
> and class. Instead, I'd like to see the module doc-string and perhaps a list of
> available functions. Again, Octave and Matlab strike a decent balance here.
> I started on something like this, but it is far from ready to go. Maybe pydoc
> with a few modifications to turn down verbosity would be better than rolling
> another completely different solution.

You guys are going to get tired of me :) IPython could be of help here. It has a '@doc object' command which is meant precisely to show only docstrings, and a '?object' one which shows some more info. These systems could easily be adapted to strike the balance you're looking for, and they already exist.

This is IPython's startup:

IPython 0.2.6pre3 -- An enhanced Interactive Python.
?       -> Introduction to IPython's features.
?object -> Details about 'object'; object? also works, ?? prints more.
help    -> Python's own help system.
@magic  -> Information about IPython's 'magic' @ functions.

So you see the different info systems available (@doc is one of the 'magics'). The ?/?? is much like Mathematica's, but adapted to Python (shows constructors and function prototypes, for example).

Please tell me to shut up on ipython when you get sick of me :)

Cheers,

f.

From fperez at pizero.colorado.edu Mon Feb 18 15:10:23 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Mon, 18 Feb 2002 13:10:23 -0700 (MST)
Subject: [SciPy-dev] scipy_distutils/*_info.py files
In-Reply-To: <044501c1b89e$616dbd90$6b01a8c0@ericlaptop>
Message-ID: 

> Agreed. As long as we don't get in the business of trying to write autoconf. :-)

Have you ever looked at scons (http://www.scons.org)? I know it looks like more of a make than an autoconf replacement, but it might be worth a look. It would be great if here we could concentrate on the Sci part of SciPy and leverage others' work for the 'software engineering' part. I realize you guys have already been forced to do a ton of work around distutils limitations (frankly it's a pretty primitive system, and I'd argue one of Python's Achilles' heels).

I recently also saw QMTest for testing at http://www.codesourcery.com/qm/qmtest, also Python-based.

I may be completely off-mark here, I'm just thinking of how we can best use others' work for some areas. The task ahead for scipy is big enough as it is.

But you guys may have already gone over this, so feel free to shoot me down if I'm just blabbering nonsense.

Cheers,

f.

From pearu at cens.ioc.ee Mon Feb 18 15:46:58 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 18 Feb 2002 22:46:58 +0200 (EET)
Subject: [SciPy-dev] scipy_distutils/*_info.py files
In-Reply-To: 
Message-ID: 

Hi,

On Mon, 18 Feb 2002, Fernando P?rez wrote:

> > Agreed. As long as we don't get in the business of trying to
> > write autoconf. :-)
>
> Have you ever looked at scons (http://www.scons.org)? I know it looks like
> more of a make than an autoconf replacement, but it might be worth a look. It
> would be great if here we could concentrate on the Sci part of SciPy and
> leverage others' work for the 'software engineering' part. I realize you
> guys have already been forced to do a ton of work around distutils
> limitations (frankly it's a pretty primitive system, and I'd argue one of
> Python's Achilles' heels).
>
> I recently also saw QMTest for testing at
> http://www.codesourcery.com/qm/qmtest, also Python-based.
>
> I may be completely off-mark here, I'm just thinking of how we can best use
> others' work for some areas. The task ahead for scipy is big enough as it is.
>
> But you guys may have already gone over this, so feel free to shoot me down
> if I'm just blabbering nonsense.

No, we have not discussed SCons.
To me it looks like a great replacement for distutils. Though it currently supports only C compilation, one could easily add Fortran support. However, I don't think that we should move to using SCons right now, for the following reasons:

1) Current scipy_distutils serves scipy's needs regarding building extension
   modules quite well, including Fortran support. scipy_distutils may need
   some polishing when using it with platforms other than Linux and Win32,
   and Fortran compilers other than g77. Using the current SCons would not
   ease this task, as it lacks the same things as distutils. And
   scipy_distutils already has most of the required hooks implemented.

2) SCons does not solve our autoconf problem. As you said, it's more like a
   make replacement. And I think your proposal, with Eric's modifications,
   will be quite adequate for scipy's needs.

Maybe Eric has more to say about QMTest. I have not really looked at Eric's testing framework yet, and therefore I have very little to say about testing except that it is a must-have.

Regards,
Pearu

From pearu at cens.ioc.ee Mon Feb 18 16:04:52 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 18 Feb 2002 23:04:52 +0200 (EET)
Subject: [SciPy-dev] Re: Octave array formatting examples
In-Reply-To: 
Message-ID: 

On Mon, 18 Feb 2002, Fernando P?rez wrote:

> But let's see where the discussion goes, and if there is indeed some
> consensus on using IPython for scipy, I'll try to whip out some
> proof-of-concept code. But the IPython design is exactly adapted to this, so
> the only limiting factor right now is my time, nothing else. Anyone who
> wishes to jump in is welcome to do so, I can guide them a bit through the
> code.

It seems that nobody is really against using ipython as an interactive interface to scipy. I have not used ipython myself (as I hardly use python from its prompt), but it seems to be really suitable for interactive scipy sessions.

I would say let's go for it, and if anybody has anything to say against using ipython as an interactive interface to scipy, then say it now or be silent forever ... ;)

Pearu

From pearu at cens.ioc.ee Mon Feb 18 16:26:32 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Mon, 18 Feb 2002 23:26:32 +0200 (EET)
Subject: [SciPy-dev] scipy_distutils/*_info.py files
In-Reply-To: <043c01c1b89c$d076a550$6b01a8c0@ericlaptop>
Message-ID: 

Hi,

On Mon, 18 Feb 2002, eric wrote:

> This makes me think they should all live in a central place so that people know
> where to look for them. scipy_distutils seems as good a place as any. I
> guess we could make a scipy_distutils.config sub-package directory. Maybe that
> would be better?

If we go for the 'class system_info' method, then I think we could keep everything in one file (similar to build_flib, which holds all the Fortran compiler descriptions). I don't think that this file would get very large. Currently we need info only about atlas, x11, fftw (maybe also Numeric if NumArray becomes usable, etc).

So, unless someone implements it first (as I'll probably be busy in the coming days), I would suggest scipy_distutils/info.py for holding the base class 'system_info' and its derivatives atlas, x11, fftw, etc.
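Roughly, I imagine something like the following. This is only a sketch to fix the interface -- the search logic and the directory lists are placeholders, not working code:

    import os

    class system_info:
        # derived classes override these
        lib_name = None
        search_dirs = ['/usr/lib', '/usr/local/lib']

        def get_info(self):
            """Return a dictionary with keys such as 'libraries' and
            'library_dirs'. An empty dictionary means 'not found'."""
            info = {}
            dirs = self.find_library_dirs()
            if dirs:
                info['libraries'] = [self.lib_name]
                info['library_dirs'] = dirs
            return info

        def find_library_dirs(self):
            found = []
            for d in self.search_dirs:
                for ext in ('a', 'so'):
                    lib = os.path.join(d, 'lib%s.%s' % (self.lib_name, ext))
                    if os.path.isfile(lib):
                        found.append(d)
                        break
            return found

    class x11_info(system_info):
        lib_name = 'X11'
        search_dirs = ['/usr/X11R6/lib'] + system_info.search_dirs

    class atlas_info(system_info):
        lib_name = 'atlas'

Each special case (atlas, fftw, ...) would then only override the class attributes or the methods it really needs.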
Pearu

From jhauser at ifm.uni-kiel.de Mon Feb 18 17:05:25 2002
From: jhauser at ifm.uni-kiel.de (janko hauser)
Date: Mon, 18 Feb 2002 23:05:25 +0100 (MET)
Subject: [SciPy-dev] Re: Octave array formatting examples
In-Reply-To: 
References: <03ae01c1b84e$50af2520$6b01a8c0@ericlaptop>
Message-ID: <15473.31397.447450.127486@caesar.ifm.uni-kiel.de>

Fernando P?rez writes:
> On Mon, 18 Feb 2002, eric wrote:
>
> > Hey Fernando,
> >
> > Here are some sample octave outputs. At the end is the output of the format
> > command options and their meanings for Octave.
>
> Thanks. Some comments:
>
> 1. It can all be done in IPython, no problem. The format thing can be
> implemented as a magic command which sets a global (internal) flag, and the
> printing subsystem queries that flag at print time.

I think there is an easier way. The str() representation of arrays is done by a module, which can be overloaded or replaced. There was once somebody who did this and also implemented the nice feature of only showing arrays up to some size. New users of NumPy were often surprised that building the output for big arrays takes such a long time that it looks like the system is hanging. (The output is not streamed.)

__Janko

From fperez at pizero.colorado.edu Mon Feb 18 19:39:30 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Mon, 18 Feb 2002 17:39:30 -0700 (MST)
Subject: [SciPy-dev] Re: Octave array formatting examples
In-Reply-To: <15473.31397.447450.127486@caesar.ifm.uni-kiel.de>
Message-ID: 

> I think there is an easier way. The str() representation of arrays is
> done by a module, which can be overloaded or replaced. There was once
> somebody who did this and also implemented the nice feature of only
> showing arrays up to some size. New users of NumPy were often surprised
> that building the output for big arrays takes such a long time that it
> looks like the system is hanging. (The output is not streamed.)

Well, I actually think the amount of real work is the same whether implemented via __str__ or via a display hook. Since having a user-modifiable display hook is in general a good idea for a flexible system, I went ahead and put in the necessary things for that to be possible.

If you grab the latest IPython from http://www-hep.colorado.edu/~fperez/ipython/ and install it, you'll get a system which knows about Numeric arrays for printing, and has a @format command a la Octave. You need to start it with the 'math' profile, via 'ipython -p math', for this to be enabled.

Now, the functionality is currently empty, but what's left to be done is 100% IPython-independent. Here's what you get now:

In [1]: i=identity(3)

In [2]: i
Out[2]= NumPy array, format: long
[[1 0 0]
 [0 1 0]
 [0 0 1]]

In [3]: x=[0,1,2]

In [4]: x
Out[4]= [0, 1, 2]

In [5]: format short

In [6]: format long e
Invalid format: long e
Valid formats: ['long', 'short']

As you see, it currently only prints a little message telling you it knows you gave it a numpy array, and the format command doesn't really do anything. But the hooks are in place. The function that needs to be filled in is (defined in the IPython/Extensions/numeric_formats.py file):

def num_display(self,arg):
    """Display method for printing which treats Numeric arrays specially.
    """

    # Non-numpy variables are printed using the system default
    if type(arg) != ArrayType:
        self._display(arg)
        return
    # Otherwise, we do work.
    format = __IP.runtime_rc.numarray_print_format
    print 'NumPy array, format:',format
    # ***Here is where all the printing logic needs to be implemented***
    print arg  # nothing yet :)

Right now it just does a 'print arg' and nothing else. But maybe someone else wants to tackle it :) As I said, all the IPython-specific work is finished. I had a quick look at the ArrayPrinter file in Numeric, and there seems to be a lot in there. Now I have to go back to working on other things, but maybe that could be a good starting point of code for this.

Well, this is my 'proof-of-concept' code. I think at least the architecture is flexible enough to support what Eric wants. But writing all the fancy octave-like output is actually a fair bit of work, so that will wait. Plus, it's curses-based (it computes window width for splitting things), so we'd need to decide whether making curses a mandatory dependency (which kills Windows platforms) is a good idea.

Cheers,

f.

From jnl at allegro.mit.edu Mon Feb 18 19:39:16 2002
From: jnl at allegro.mit.edu (J Nicholas Laneman)
Date: Mon, 18 Feb 2002 19:39:16 -0500
Subject: [SciPy-dev] SciPy on MacOS X (almost)
Message-ID: <15473.40628.955336.273670@localhost.starpower.net>

Hi Everyone,

First, let me say THANKS for SciPy! A colleague and I have begun using Python, Numeric, and SciPy exclusively for our research (instead of MATLAB). While it's been a little rough here in the beginning, already we can see a lot of advantages to making the switch.

My main reason for posting is to try to help push along support for MacOS X. I noticed some recent postings on this subject, but was not a member of the list so could not reply to them directly. To summarize, I seem to have successfully built SciPy using the Fink tools, but have some errors when trying to import it (which have also shown up in other contexts on this list).

Any help in getting this stuff working would be greatly appreciated! If there is interest, I can look into getting SciPy added as a Fink package. Details of my attempt are below.

Cheers,
Nick

IMPORT ERRORS
-------------

[localhost:~] jnl% python
Python 2.2 (#1, Feb 17 2002, 21:17:18)
[GCC 2.95.2 19991024 (release)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
Traceback (most recent call last):
  File "", line 1, in ?
  File "/loc/fink/lib/python2.2/site-packages/scipy/__init__.py", line 41, in ?
    from handy import *
  File "/loc/fink/lib/python2.2/site-packages/scipy/handy.py", line 1, in ?
    import Numeric
  File "/loc/fink/lib/python2.2/site-packages/Numeric/Numeric.py", line 119, in ?
    arrayrange = multiarray.arange
AttributeError: 'module' object has no attribute 'arange'
>>> from scipy.integrate import *
Traceback (most recent call last):
  File "", line 1, in ?
  File "/loc/fink/lib/python2.2/site-packages/scipy/integrate/__init__.py", line 23, in ?
    scipy.somenames2all(__all__, _moddict, globals())
  File "/loc/fink/lib/python2.2/site-packages/scipy/__init__.py", line 9, in somenames2all
    exec("from %s import %s" % (key, string.join(namedict[key],',')), gldict)
  File "", line 1, in ?
  File "/loc/fink/lib/python2.2/site-packages/scipy/integrate/quadrature.py", line 1, in ?
    from orthogonal import P_roots
  File "/loc/fink/lib/python2.2/site-packages/scipy/integrate/orthogonal.py", line 53, in ?
    from MLab import eig
  File "/loc/fink/lib/python2.2/site-packages/Numeric/MLab.py", line 17, in ?
    import RandomArray
  File "/loc/fink/lib/python2.2/site-packages/Numeric/RandomArray.py", line 3, in ?
    import LinearAlgebra
  File "/loc/fink/lib/python2.2/site-packages/Numeric/LinearAlgebra.py", line 8, in ?
    import lapack_lite
ImportError: Failure linking new module
>>>

BUILD
-----

Using the latest fink distribution (http://fink.sourceforge.net/), with Python 2.2 and g77 2.95.2 packages, I was able to build SciPy under MacOS X 10.1 with only a few differences to the SciPy distribution. These include:

1) In setup.py, setting X11 = 1 because the line

       find /usr -name "libX11*" -print

   does not find the X11 libraries through a symbolic link. My X11 files
   are on another partition, with /usr/X11R6 a symbolic link pointing to
   them.

2) In build_flib.py, using things like '"' + object + '"' to encapsulate
   object filenames with things like "Power Macintosh" in them. The space
   in the object name seems to give the Apple linker fits. SciPy makes a
   temporary directory called "build", with a directory called
   temp.darwin-5.2-Power Macintosh-2.2 into which most of the object files
   are placed. I don't know exactly what function call generates this file
   name, and in particular the "Power Macintosh" part, so I tried to work
   around it.

3) In build_flib.py, running "ranlib" on libraries seems necessary after
   running "ar" to make them.

4) In a few of the C files, malloc.h doesn't seem to exist in OS X, but
   including <stdlib.h> instead of <malloc.h> seems to work (as someone
   pointed out on this list earlier).

Here is a diff file of the changes I made to get things working. I apologize that they are probably very kludgy, but I don't know enough about Python or SciPy to do it "the right way".

diff -r SciPy-0.1.new/build_flib.py SciPy-0.1/build_flib.py
200c200
< object_files = map(lambda x,td=temp_dir: '"' + os.path.join(td,x) + '"',object_list)
---
> object_files = map(lambda x,td=temp_dir: os.path.join(td,x),object_list)
216c216
< module_switch + ' -c "' + source + '" -o ' + object
---
> module_switch + ' -c ' + source + ' -o ' + object
242c242
< lib_file = '"' + os.path.join(output_dir,'lib'+library_name+'.a') + '"'
---
> lib_file = os.path.join(output_dir,'lib'+library_name+'.a')
250,252d249
< print cmd
< os.system(cmd)
< cmd = 'ranlib -s %s' % lib_file
diff -r SciPy-0.1.new/setup.py SciPy-0.1/setup.py
200d199
< X11 = 1
diff -r SciPy-0.1.new/special/cephes/polmisc.c SciPy-0.1/special/cephes/polmisc.c
7c7
< #include <stdlib.h>
---
> #include <malloc.h>
diff -r SciPy-0.1.new/special/cephes/polyn.c SciPy-0.1/special/cephes/polyn.c
68c68
< #include <stdlib.h>
---
> #include <malloc.h>

From fperez at pizero.colorado.edu Mon Feb 18 20:07:12 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Mon, 18 Feb 2002 18:07:12 -0700 (MST)
Subject: [SciPy-dev] Re: Octave array formatting examples
In-Reply-To: 
Message-ID: 

On Mon, 18 Feb 2002, Pearu Peterson wrote:

> I would say let's go for it, and if anybody has anything to say
> against using ipython as an interactive interface to scipy, then say it now
> or be silent forever ... ;)

Well, great! As I said a moment ago to Janko, I added the hooks for the printing functionality that Eric wanted (though the meat of the work is yet to be done).

All comments/problems/suggestions are welcome. For those without the url willing to take a look at it, it's http://www-hep.colorado.edu/~fperez/ipython/

Grab the latest version (if there's a 'pre_N', use that).

Cheers,

f.
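PS. In case someone wants a head start on the missing formatting logic, here's a rough, completely untested sketch of the core idea -- plain Python only, not tied to Numeric's or IPython's internals, with the significant-figure and field-width values simply taken from Octave's documented defaults (5/10 for short, 15/24 for long):

    def format_number(x, mode='short'):
        # Pick significant figures and a maximum field width, then
        # fall back to 'e' notation when the field overflows.
        if mode == 'short':
            sig, width = 5, 10
        else:
            sig, width = 15, 24
        s = '%.*g' % (sig, x)
        if len(s) > width:
            s = '%.*e' % (sig - 1, x)
        return s

    def format_row(row, mode='short'):
        return '  '.join([format_number(x, mode) for x in row])

    >>> print format_row([0.2383920848, 1e12, -3.5e-07])
    0.23839  1e+12  -3.5e-07

The real thing would also have to line up columns on the decimal point and split wide matrices into column blocks, which is where most of the actual work is.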
From prabhu at aero.iitm.ernet.in Mon Feb 18 23:12:18 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Tue, 19 Feb 2002 09:42:18 +0530
Subject: [SciPy-dev] SciPy on MacOS X (almost)
In-Reply-To: <15473.40628.955336.273670@localhost.starpower.net>
References: <15473.40628.955336.273670@localhost.starpower.net>
Message-ID: <15473.53410.712198.299175@monster.linux.in>

>>>>> "JNL" == J Nicholas Laneman writes:

JNL> [localhost:~] jnl% python
JNL> Python 2.2 (#1, Feb 17 2002, 21:17:18)
JNL> [GCC 2.95.2 19991024 (release)] on darwin
JNL> Type "help", "copyright", "credits" or "license" for more information.
JNL> >>> import scipy
JNL> Traceback (most recent call last):
JNL>   File "", line 1, in ?
JNL>   File "/loc/fink/lib/python2.2/site-packages/scipy/__init__.py", line 41, in ?
JNL>     from handy import *
JNL>   File "/loc/fink/lib/python2.2/site-packages/scipy/handy.py", line 1, in ?
JNL>     import Numeric
JNL>   File "/loc/fink/lib/python2.2/site-packages/Numeric/Numeric.py", line 119, in ?
JNL>     arrayrange = multiarray.arange
JNL> AttributeError: 'module' object has no attribute 'arange'

Looks like a bug in handy.py. This problem does not exist in the current scipy cvs tree. I think if you upgrade (i.e. switch to using the cvs tree) this problem should disappear.

JNL> >>> from scipy.integrate import *
JNL>   File "/loc/fink/lib/python2.2/site-packages/Numeric/RandomArray.py", line 3, in ?
JNL>     import LinearAlgebra
JNL>   File "/loc/fink/lib/python2.2/site-packages/Numeric/LinearAlgebra.py", line 8, in ?
JNL>     import lapack_lite
JNL> ImportError: Failure linking new module

Can you look at the lapack_lite.so (or the equivalent library) -- it should be in prefix/site-packages/Numeric/lapack_lite.so -- and see which libraries are linked in? This looks like an atlas/lapack linking related issue.

prabhu

From oliphant.travis at ieee.org Tue Feb 19 00:43:30 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Mon, 18 Feb 2002 22:43:30 -0700
Subject: [SciPy-dev] Re: Octave array formatting examples
In-Reply-To: 
References: 
Message-ID: 

On Monday 18 February 2002 02:04 pm, you wrote:
> It seems that nobody is really against using ipython as an interactive
> interface to scipy. I have not used ipython myself (as I hardly use
> python from its prompt), but it seems to be really suitable for interactive
> scipy sessions.
>
> I would say let's go for it, and if anybody has anything to say
> against using ipython as an interactive interface to scipy, then say it now
> or be silent forever ... ;)

I think I've voiced my only concerns already. Using a pager for the help system makes it incompatible for use with emacs. The default settings make it not work with emacs either.

-Travis O.

From fperez at pizero.colorado.edu Tue Feb 19 00:48:22 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Mon, 18 Feb 2002 22:48:22 -0700 (MST)
Subject: [SciPy-dev] Re: Octave array formatting examples
In-Reply-To: 
Message-ID: 

> I think I've voiced my only concerns already. Using a pager for the help
> system makes it incompatible for use with emacs.

Ah, that's a problem with the pydoc help, so even if people use the normal python prompt and import pydoc.help (as many do), they'll run into the same thing. I have written a page() function for IPython which works in both linux and windows, but unfortunately it crashes the python process (even a normal, non-IPython one) inside emacs.
The strange thing is that inside page() everything is done with try/except blocks, yet python manages to crash completely and dump me at the shell.

> The default settings make it not work with emacs either.

I know, using IPython in emacs just freezes. I've tried debugging this a bit but got nowhere. I realize it's annoying, but unfortunately emacs' process buffers are fairly strange terminals and I just can't figure out why they freeze and crash so badly. On the other hand, I used to be a fan of those buffers in emacs until I wrote IPython :) Basically, in IPython I get all the things I needed from emacs (history across sessions, name completion, completion in history only, etc.). But I know that's a cheap copout. Ideally IPython should work under emacs, albeit with whatever limitations the terminal has. Unfortunately I just haven't been able to even understand where the crashes are (since they bypass try/except so badly for page(), and freeze hard for IPython). If someone more familiar with emacs' internals can give a hand, I'll be happy to work on it and hopefully fix it.

So as it stands, yes, IPython and X/emacs just don't get along. Sorry.

Cheers,

f.

From eric at scipy.org Tue Feb 19 00:12:59 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 00:12:59 -0500
Subject: [SciPy-dev] version numbers.
Message-ID: <064701c1b904$194a1de0$6b01a8c0@ericlaptop>

Hey crew,

On the version numbers, I now have an opinion. :-) Nothing like an implementation to get one's attention. Since we're working on the 0.2 release of SciPy, I think the version number should look something like this:

0.2.xxxxxx

Where xxx can be replaced by anything we want from the CVS. I was looking at the version number of SciPy and noticed it was already up to 0.5.xxx. After a moment of euphoria (what will 0.5 look like anyway?), I'm back to reality now... I like the fact that Pearu's scheme is grabbing numbers based on CVS changes, but I'd like to move them to the end of the version number. What would you think of the first two (or maybe even three as Python uses like 2.1.1) entries in the version number tied to something human specified instead of the CVS? This will lead to very long version numbers, but most people can ignore anything after the first three. So, here are my proposed modifications to Pearu's versions:

<major>.<minor>.<micro>-<release_level>-<minor>.<micro>.<serial>

<major>, <minor>, and <micro> are all human specified and won't change that often (where should we store these anyway?). <minor> and <serial> have the same meaning as Pearu's original <minor> and <serial> numbers. I've included the doc-string from his code at the bottom that has more complete explanations of each.

One other question -- do we need all three of these -- <minor>.<micro>.<serial> -- or could they be combined into one CVS based number? If the 3 separate numbers are valuable in their own rights, I'm happy to keep them around. But if we're just looking for uniqueness, then we could lump them all together.

Thoughts?

thanks,

eric

Here is the doc-string from the current version number calculator:

Return version string calculated from CVS/Entries file(s) starting at <path>. If the version information is different from the one found in the <path>/__version__.py file, update_version updates the file automatically. The version information will always be increasing in time. If the CVS tree does not exist (e.g. as in distribution packages), return the version string found from <path>/__version__.py. If no version information is available, return None.
Default version string is in the form

<major>.<minor>.<micro>-<release_level>-<serial>

The items have the following meanings:

serial - shows cumulative changes in all files in the CVS repository
micro - a number that is equivalent to the number of files
minor - indicates the changes in micro value (files are added or removed)
release_level - is alpha, beta, candidate, or final
major - indicates changes in release_level.

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fedor.baart at hccnet.nl Tue Feb 19 02:29:37 2002
From: fedor.baart at hccnet.nl (Fedor Baart)
Date: Tue, 19 Feb 2002 08:29:37 +0100
Subject: [SciPy-dev] Plot_utlity
Message-ID: <000001c1b917$2f768f50$0200a8c0@amd>

I didn't have any problems using the plt module, it works fine. I found the module when I was looking for an "autoticker". I have just finished an "SVG wrapper" in python and I am trying to test it by creating some plots. I used the autoticker for defining the axis. SVG is the new w3c standard for vector graphics. It's a very easy way of creating plots and graphics.

Fedor

>default_bounds may have been intended to be visible outside the module. Whether that was ever used, I'm not sure. You're right though. The test for None is better and much safer. In this case, it isn't causing any problems (besides aesthetics). The plt module has rarely been accused of "elegance." Hopefully the 2nd cut will make Audrey Hepburn envious.

>I haven't seen anything in any of these corrections that would account for the problems you were having. Are they still occurring, or have you found the root of the problem?

eric

>> 370 def auto_ticks(data_bounds, bounds_info = None):
>> 371     if bounds_info==None:
>> 372         bounds_info = ['auto','auto','auto']
>>
>> is a bit more elegant than:
>>
>> 369 default_bounds = ['auto','auto','auto']
>> 370 def auto_ticks(data_bounds, bounds_info =
>> default_bounds):

From eric at scipy.org Tue Feb 19 02:08:25 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 02:08:25 -0500
Subject: [SciPy-dev] Plot_utlity
References: <000001c1b917$2f768f50$0200a8c0@amd>
Message-ID: <066a01c1b914$3970e8f0$6b01a8c0@ericlaptop>

> I didn't have any problems using the plt module, it works fine.

Sorry. Confused this with a previous post by a different person about plot troubles.

> I found
> the module when I was looking for an "autoticker". I have just finished an
> "SVG wrapper" in python and I am trying to test it by creating some
> plots. I used the autoticker for defining the axis. SVG is the new w3c
> standard for vector graphics. It's a very easy way of creating plots and
> graphics.

This sounds very interesting. Any chance your code is open? If so, I'd like to take a look. Also, what resources did you use to learn about SVG?

thanks,

eric

>
> Fedor
>
>
> >default_bounds may have been intended to be visible outside the module.
> Whether that was ever used, I'm not sure. You're right though. The test
> for None is better and much safer. In this case, it isn't causing any
> problems (besides aesthetics). The plt module has rarely been accused
> of "elegance." Hopefully the 2nd cut will make Audrey Hepburn envious.
>
> >I haven't seen anything in any of these corrections that would account
> for the problems you were having. Are they still occurring, or have you
> found the root of the problem?
>
> eric
>
>
> >> 370 def auto_ticks(data_bounds, bounds_info = None):
> >> 371     if bounds_info==None:
> >> 372         bounds_info = ['auto','auto','auto']
> >>
> >> is a bit more elegant than:
> >>
> >> 369 default_bounds = ['auto','auto','auto']
> >> 370 def auto_ticks(data_bounds, bounds_info =
> >> default_bounds):
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From pearu at cens.ioc.ee Tue Feb 19 03:30:21 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 10:30:21 +0200 (EET)
Subject: [SciPy-dev] version numbers.
In-Reply-To: <064701c1b904$194a1de0$6b01a8c0@ericlaptop>
Message-ID:

Hi,

On Tue, 19 Feb 2002, eric wrote:

> So, here are my proposed modifications to Pearu's versions:
>
> <major>.<minor>.<micro>-<release_level>-<minor>.<micro>.<serial>
>
> <major>, <minor>, and <micro> are all human specified and won't change
> that often (where should we store these anyway?).

You can hardcode it into version_template for now until we get a better idea.

> One other question -- do we need all three of these --
> <minor>.<micro>.<serial> -- or could they be combined into one
> CVS based number? If the 3 separate numbers are valuable in their own
> rights, I'm happy to keep them around. But if we're just looking for
> uniqueness, then we could lump them all together.

I think you can leave out <micro> that is the approx. total number of files. <minor> should be adequate to reflect the changes in adding or removing files; nobody will really care how many files there are anyway. So, use <minor>.<serial>, but I think you should keep them apart as in theory one could have the situation:

2.34 -> 234
23.4 -> 234

It is expected that the <serial> part will change most rapidly and therefore will have large values. In order to reduce the length of the version string, we could represent <serial> as a hexadecimal in it.

I have studied a possibility to let the CVS server calculate these cvs-numbers. Indeed, it is possible but the corresponding script needs some adjustments. So, I hope that the frustration with updating version numbers in CVS should be temporary.

Regards,

Pearu

From fperez at pizero.colorado.edu Tue Feb 19 03:43:01 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 01:43:01 -0700 (MST)
Subject: [SciPy-dev] IPython updated (Emacs works now)
In-Reply-To: <066a01c1b914$3970e8f0$6b01a8c0@ericlaptop>
Message-ID:

Hi everyone,

there's a new release of IPython at the usual http://www-hep.colorado.edu/~fperez/ipython/ for your testing pleasure.

Of particular interest (for Travis, at least) is that things now work inside an Emacs (at least my Xemacs) buffer. The problem turned out to be a series of ugly interactions with curses and readline. The moral is, *never* use either of those modules in a python program which will try to run inside an Emacs buffer. It was kind of a PITA to track down.

But now there's workarounds everywhere which check if the terminal is 'emacs' and try their best to make things work. Please let me know how this goes.

I've been going back and forth with Eric and there seems to be some interest in using IPython, so I'm open to comments from all.

Best regards,

Fernando.

From eric at scipy.org Tue Feb 19 03:10:48 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 03:10:48 -0500
Subject: [SciPy-dev] IPython updated (Emacs works now)
References:
Message-ID: <06e601c1b91c$f105da40$6b01a8c0@ericlaptop>

Hey Fernando,

Just downloaded IPython to take a look. Looks like the emacs tests break on windows since TERM is not normally set there.
C:\Program Files\IPython>IPython_shell.py
Traceback (most recent call last):
  File "C:\Program Files\IPython\IPython_shell.py", line 6, in ?
    import IPython.Shell, sys
  File "C:\Python21\IPython\Shell.py", line 20, in ?
    from ipmaker import make_IPython
  File "C:\Python21\IPython\ipmaker.py", line 49, in ?
    from iplib import *
  File "C:\Python21\IPython\iplib.py", line 187, in ?
    if os.environ['TERM'] == 'emacs':
  File "C:\Python21\lib\os.py", line 372, in __getitem__
    return self.data[key.upper()]
KeyError: TERM

thanks,

eric

----- Original Message -----
From: "Fernando Pérez"
To:
Sent: Tuesday, February 19, 2002 3:43 AM
Subject: [SciPy-dev] IPython updated (Emacs works now)

> Hi everyone,
>
> there's a new release of IPython at the usual
> http://www-hep.colorado.edu/~fperez/ipython/ for your testing pleasure.
>
> Of particular interest (for Travis, at least) is that things now work inside
> an Emacs (at least my Xemacs) buffer. The problem turned out to be a series
> of ugly interactions with curses and readline. The moral is, *never* use
> either of those modules in a python program which will try to run inside an
> Emacs buffer. It was kind of a PITA to track down.
>
> But now there's workarounds everywhere which check if the terminal is 'emacs'
> and try their best to make things work. Please let me know how this goes.
>
> I've been going back and forth with Eric and there seems to be some interest
> in using IPython, so I'm open to comments from all.
>
> Best regards,
>
> Fernando.
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

From prabhu at aero.iitm.ernet.in Tue Feb 19 04:21:03 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Tue, 19 Feb 2002 14:51:03 +0530
Subject: [SciPy-dev] Plot_utlity
In-Reply-To: <066a01c1b914$3970e8f0$6b01a8c0@ericlaptop>
References: <000001c1b917$2f768f50$0200a8c0@amd> <066a01c1b914$3970e8f0$6b01a8c0@ericlaptop>
Message-ID: <15474.6399.895529.409589@monster.linux.in>

>>>>> "eric" == eric writes:

    eric> This sounds very interesting. Any chance your code is open?
    eric> If so, I'd like to take a look. Also, what resources did
    eric> you use to learn about SVG?

Indeed, this is interesting.

http://www.onlamp.com/pub/a/onlamp/2002/02/07/svghist.html
http://www.w3.org/Graphics/SVG/Overview.htm8

prabhu

From eric at scipy.org Tue Feb 19 03:20:46 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 03:20:46 -0500
Subject: [SciPy-dev] Re: Octave array formatting examples
References: <03ae01c1b84e$50af2520$6b01a8c0@ericlaptop> <15473.31397.447450.127486@caesar.ifm.uni-kiel.de>
Message-ID: <06f201c1b91e$551b3b50$6b01a8c0@ericlaptop>

----- Original Message -----
From: "janko hauser"
To:
Sent: Monday, February 18, 2002 5:05 PM
Subject: [SciPy-dev] Re: Octave array formatting examples

> > Fernando Pérez writes:
> > On Mon, 18 Feb 2002, eric wrote:
> >
> > > Hey Fernando,
> > >
> > > Here are some sample octave outputs. At the end is the output of the format
> > > command options and their meanings for Octave.
> >
> > Thanks. Some comments:
> >
> > 1. It can all be done in IPython, no problem. The format thing can be
> > implemented as a magic command which sets a global (internal) flag, and the
> > printing subsystem queries that flag at print time.
> >
>
> I think there is an easier way. The str() representation of arrays is
> done by a module, which can be overloaded or replaced.
> There was once somebody who has done this and also implemented the nice
> feature of only showing arrays up to some size. New users of NumPy were
> very often surprised that building the output for big arrays takes such
> a long time that it looks like the system is hanging. (The output is
> not streaming.)

Do you know where this code is?

eric

From eric at scipy.org Tue Feb 19 05:02:27 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 05:02:27 -0500
Subject: [SciPy-dev] test suite working again -- and upgraded
Message-ID: <075001c1b92c$89cd7e90$6b01a8c0@ericlaptop>

Hey,

The test suite has been broken for a while (yes, I do see the irony in this...), but is now working again. Running:

>>> import scipy
>>> scipy.test()

should work. Also, if you run this, it should only take a few seconds. I've added an optional argument to test() called level. It is an integer between 1 and 10, with 1 being the least thorough (fastest) testing and 10 running the entire test suite -- even long running tests. Most (all?) of the test suites have been upgraded and segmented into groups based on their speed. I've used levels 1, 5, and 10 for most tests, but you can add more fine-grained testing if you'd like. The main reason for adding levels is that the weave tests take a while to run: scipy.test(5) will take a couple of minutes, and scipy.test(10) will take 5-10 (maybe more) minutes. Almost all tests people add to the suite should be level 1. Only use higher level numbers for a test if you know that it will take multiple seconds. I'd like to keep the basic test suite running in the "a few seconds" range if possible. Hopefully this will encourage people to run the tests and also add tests -- boy do we need them.

Also, this was a big check-in. Let me know if I broke anything.

thanks,

eric

--
Eric Jones
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057

From barnard at stat.harvard.edu Tue Feb 19 10:33:50 2002
From: barnard at stat.harvard.edu (John Barnard)
Date: Tue, 19 Feb 2002 10:33:50 -0500 (EST)
Subject: [SciPy-dev] Bug in build_ext.py
Message-ID:

There appears to be a small bug in scipy_distutils.command.build_ext on line 74: it should be library_dirs and not libraries_dirs

if lib_dir not in self.compiler.libraries_dirs:

should be

if lib_dir not in self.compiler.library_dirs:

With the latest CVS sources, I've finally managed to get scipy to compile on Windows 2000 using mingw. Now it's time to play.

********************************
* John Barnard, Ph.D.
* Senior Research Statistician
* deCODE genetics
* 1000 Winter Str., Suite 3100
* Waltham, MA 02451
* Phone (Direct) : (781) 290-5771 Ext. 27
* Phone (General) : (781) 466-8833
* Fax : (781) 466-8686
* Email: j.barnard at decode.com
********************************

From fperez at pizero.colorado.edu Tue Feb 19 11:34:33 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 09:34:33 -0700 (MST)
Subject: [SciPy-dev] IPython updated (Emacs works now)
In-Reply-To: <06e601c1b91c$f105da40$6b01a8c0@ericlaptop>
Message-ID:

Hey Eric,

> Looks like the emacs tests break on windows since TERM is not normally set
> there.

Ah, sorry! I thought I'd put that check in. Late night submits... The same problem was also present in one more place. I think now all is fine, I did a clean install on my Windows box and tested it, it looks ok.
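For reference, the fix being described presumably boils down to a guard of this shape (a sketch under that assumption; the actual IPython change is not shown in the thread):

    import os

    # os.environ.get() avoids the KeyError raised in the traceback above on
    # Windows, where TERM is normally not set; a missing TERM is simply
    # treated as "not an emacs terminal".
    if os.environ.get('TERM', '') == 'emacs':
        pass  # emacs-buffer workarounds (skip curses/readline) would go here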
There's an updated version up at the usual http://www-hep.colorado.edu/~fperez/ipython/ An obvious demonstration of why IPython needs unit testing so badly :) Again, sorry about that. Please grab the updated copy and let me know how it goes. In particular, I'd like to know how things work under other terminals in Windows (if there is such a thing). I only have the normal 'cmd' prompt, but since I know you use Cygwin, maybe you have access to something better. Regards, F. From fperez at pizero.colorado.edu Tue Feb 19 13:53:07 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Tue, 19 Feb 2002 11:53:07 -0700 (MST) Subject: [SciPy-dev] weave test failure In-Reply-To: <01ba01c1b78e$384c5450$6b01a8c0@ericlaptop> Message-ID: Hi all, I hadn't updated my local cvs in the last couple of weeks, and I just did a moment ago. Now weave.test() fails with a syntax error: In [2]: weave.test ------> weave.test() ------------------------------------------------------------ SyntaxError: unqualified exec is not allowed in function 'colex' it contains a nested function with free variables (pstat.py, line 176) Any ideas? Cheers, f. PS. I'm running python 2.2, in case it matters. From fperez at pizero.colorado.edu Tue Feb 19 13:56:43 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Tue, 19 Feb 2002 11:56:43 -0700 (MST) Subject: [SciPy-dev] Please disregard previous! In-Reply-To: Message-ID: On Tue, 19 Feb 2002, Fernando P?rez wrote: > Hi all, > > I hadn't updated my local cvs in the last couple of weeks, and I just did a > moment ago. Now weave.test() fails with a syntax error: Sorry all, I had an old (non-cvs) copy of scipy in my sys.path, hence the conflict. Just ignore me. Need more coffee... f. From eric at scipy.org Tue Feb 19 13:19:00 2002 From: eric at scipy.org (eric) Date: Tue, 19 Feb 2002 13:19:00 -0500 Subject: [SciPy-dev] Bug in build_ext.py References: Message-ID: <009201c1b971$e796d770$de5bfea9@ericlaptop> Thanks. fixed. ----- Original Message ----- From: "John Barnard" To: Sent: Tuesday, February 19, 2002 10:33 AM Subject: [SciPy-dev] Bug in build_ext.py > There appears to be a small bug in scipy_distutils.command.build_ext on > line 74: it should be library_dirs and not libraries_dirs > > if lib_dir not in self.compiler.libraries_dirs: > > should be > > if lib_dir not in self.compiler.library_dirs: > > With the latest CVS sources, I've finally managed to get scipy to compile > on Windows 2000 using mingw. Now it's time to play. > > > ******************************** > * John Barnard, Ph.D. > * Senior Research Statistician > * deCODE genetics > * 1000 Winter Str., Suite 3100 > * Waltham, MA 02451 > * Phone (Direct) : (781) 290-5771 Ext. 27 > * Phone (General) : (781) 466-8833 > * Fax : (781) 466-8686 > * Email: j.barnard at decode.com > ******************************** > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From fperez at pizero.colorado.edu Tue Feb 19 14:42:56 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Tue, 19 Feb 2002 12:42:56 -0700 (MST) Subject: [SciPy-dev] Stupid question. Message-ID: Hi all, I'm trying to do a python setup.py build on a freshly updated cvs of scipy, so I can test the current state of things. 
But the linking is failing with:

gcc -shared build/temp.linux-i686-2.2/fftw_wrap.o -L/usr/local/home/fperez/lib -Lbuild/temp.linux-i686-2.2 -lfftw_threads -lrfftw_threads -lfftw -lrfftw -lpthread -lc_misc -lcephes -lgist -o build/lib.linux-i686-2.2/scipy/fftw/fftw.so
/usr//bin/ld: cannot find -lfftw_threads
collect2: ld returned 1 exit status

The strange thing is that I *do* have all the fft libs:

root[lib]# ldconfig -p | grep fftw
        libsrfftw_threads.so.2 (libc6) => /usr/local/lib/libsrfftw_threads.so.2
        libsrfftw.so.2 (libc6) => /usr/local/lib/libsrfftw.so.2
        libsfftw_threads.so.2 (libc6) => /usr/local/lib/libsfftw_threads.so.2
        libsfftw.so.2 (libc6) => /usr/local/lib/libsfftw.so.2
        librfftw_threads.so.2 (libc6) => /usr/local/lib/librfftw_threads.so.2
        librfftw.so.2 (libc6) => /usr/local/lib/librfftw.so.2
        libfftw_threads.so.2 (libc6) => /usr/local/lib/libfftw_threads.so.2
        libfftw.so.2 (libc6) => /usr/local/lib/libfftw.so.2

I'm sure I'm missing something painfully obvious/stupid here. Could anyone please enlighten me?

Cheers,

f.

From pearu at cens.ioc.ee Tue Feb 19 14:48:07 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 21:48:07 +0200 (EET)
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
Message-ID:

See fftw libraries section in

http://www.scipy.org/site_content/tutorials/build_instructions

Maybe that helps.

Pearu

From fperez at pizero.colorado.edu Tue Feb 19 14:50:58 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 12:50:58 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Pearu Peterson wrote:

> > See fftw libraries section in
> >
> > http://www.scipy.org/site_content/tutorials/build_instructions
>

Already done that. I'm using exactly the rpms from that very link.

Should I build them from scratch? I don't see why that should make any difference.

Thanks,

f

From pearu at cens.ioc.ee Tue Feb 19 14:55:54 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 21:55:54 +0200 (EET)
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> Already done that. I'm using exactly the rpms from that very link.
>
> Should I build them from scratch? I don't see why that should make any
> difference.

No, I don't think it helps.

Try adding -L/usr/local/lib to the linker command. What happens?

Pearu

From fperez at pizero.colorado.edu Tue Feb 19 14:58:48 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 12:58:48 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
Message-ID:

> No, I don't think it helps.
> Try adding -L/usr/local/lib to the linker command. What happens?

Same error message. ??

Thanks for the help,

f.

From pearu at cens.ioc.ee Tue Feb 19 15:04:56 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 22:04:56 +0200 (EET)
Subject: [SciPy-dev] Stupid question.
References:
Message-ID: <00f701c1b978$68c7c060$de5bfea9@ericlaptop>

> On Tue, 19 Feb 2002, Pearu Peterson wrote:
>
> >
> > See fftw libraries section in
> >
> > http://www.scipy.org/site_content/tutorials/build_instructions
> >
>
> Already done that. I'm using exactly the rpms from that very link.
>
> Should I build them from scratch? I don't see why that should make any
> difference.

More than that, it was working before right? From the command line link statement you sent, it looks like it *should* link. If you type it at the command line does it work? Maybe add -L/usr/local/lib and see if that helps?

The test checkin last night shouldn't have affected this. I did make some other changes to build_ext.py because scipy_distutils wasn't picking up 'g2c' etc. when linking in Fortran libraries. Perhaps this did something bad on Linux, but I wouldn't think so. Here is the change I made:

        if linker_so is not None:
            if linker_so is not save_linker_so:
                print 'replacing linker_so %s with %s' %(save_linker_so,linker_so)
                self.compiler.linker_so = linker_so
            l = build_flib.get_fcompiler_library_names()
            #l = self.compiler.libraries + l
            self.compiler.libraries = l
            l = build_flib.get_fcompiler_library_dirs()
            #l = self.compiler.library_dirs + l
            self.compiler.library_dirs = l
        # I added this else statement and its contents
        else:
            libs = build_flib.get_fcompiler_library_names()
            for lib in libs:
                if lib not in self.compiler.libraries:
                    self.compiler.libraries.append(lib)
            #self.compiler.libraries = self.compiler.libraries + l
            lib_dirs = build_flib.get_fcompiler_library_dirs()
            for lib_dir in lib_dirs:
                if lib_dir not in self.compiler.library_dirs:
                    self.compiler.library_dirs.append(lib_dir)
            #self.compiler.library_dirs = self.compiler.library_dirs + l

eric

From fperez at pizero.colorado.edu Tue Feb 19 15:08:10 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 13:08:10 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Pearu Peterson wrote:

> Strange. Just to make sure, did you add -L/usr/local/lib before
> -lfftw_threads option? Sorry if this is a stupid question from me.

Yes, I did it in the right order. I'm quite lost here.

f.

From fperez at pizero.colorado.edu Tue Feb 19 15:15:26 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 13:15:26 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To: <00f701c1b978$68c7c060$de5bfea9@ericlaptop>
Message-ID:

> More than that, it was working before right? From the command line link
> statement you sent, it looks like it *should* link. If you type it at the
> command line does it work? Maybe add -L/usr/local/lib and see if that helps?

Already tried that, and it makes no difference. And yes, I tried the gcc... command on its own at the cmd line and I get the same.
I've rerun ldconfig (as root), /usr/local/lib is in /etc/ld.so.conf, and as I said, an ldconfig -p reports the fft libs fine:

root[/etc]# ldconfig -p | grep fftw
        libsrfftw_threads.so.2 (libc6) => /usr/local/lib/libsrfftw_threads.so.2
        libsrfftw.so.2 (libc6) => /usr/local/lib/libsrfftw.so.2
        libsfftw_threads.so.2 (libc6) => /usr/local/lib/libsfftw_threads.so.2
        libsfftw.so.2 (libc6) => /usr/local/lib/libsfftw.so.2
        librfftw_threads.so.2 (libc6) => /usr/local/lib/librfftw_threads.so.2
        librfftw.so.2 (libc6) => /usr/local/lib/librfftw.so.2
        libfftw_threads.so.2 (libc6) => /usr/local/lib/libfftw_threads.so.2
        libfftw.so.2 (libc6) => /usr/local/lib/libfftw.so.2

Is there a version number issue at play here? Do I need to make 'naked' symlinks to all these without the .2 numbers?

I'll try that and see what happens.

All help appreciated,

f

From eric at scipy.org Tue Feb 19 14:18:54 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 14:18:54 -0500
Subject: [SciPy-dev] Stupid question.
References:
Message-ID: <012901c1b97a$45c2d670$de5bfea9@ericlaptop>

Hey Fernando,

Come to think of it, I don't think I've ever linked SciPy against shared versions of these libraries. I've always linked against libxxx.a versions. Do you have these lying around so that you can test against them?

eric

----- Original Message -----
From: "Fernando Pérez"
To:
Sent: Tuesday, February 19, 2002 3:15 PM
Subject: Re: [SciPy-dev] Stupid question.

> > More than that, it was working before right? From the command line link
> > statement you sent, it looks like it *should* link. If you type it at the
> > command line does it work? Maybe add -L/usr/local/lib and see if that helps?
>
> Already tried that, and it makes no difference. And yes, I tried the
> gcc... command on its own at the cmd line and I get the same. I've rerun
> ldconfig (as root), /usr/local/lib is in /etc/ld.so.conf, and as I said, an
> ldconfig -p reports the fft libs fine:
>
> root[/etc]# ldconfig -p | grep fftw
> libsrfftw_threads.so.2 (libc6) => /usr/local/lib/libsrfftw_threads.so.2
> libsrfftw.so.2 (libc6) => /usr/local/lib/libsrfftw.so.2
> libsfftw_threads.so.2 (libc6) => /usr/local/lib/libsfftw_threads.so.2
> libsfftw.so.2 (libc6) => /usr/local/lib/libsfftw.so.2
> librfftw_threads.so.2 (libc6) => /usr/local/lib/librfftw_threads.so.2
> librfftw.so.2 (libc6) => /usr/local/lib/librfftw.so.2
> libfftw_threads.so.2 (libc6) => /usr/local/lib/libfftw_threads.so.2
> libfftw.so.2 (libc6) => /usr/local/lib/libfftw.so.2
>
> Is there a version number issue at play here? Do I need to make 'naked'
> symlinks to all these without the .2 numbers?
>
> I'll try that and see what happens.
>
> All help appreciated,
>
> f
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From fperez at pizero.colorado.edu Tue Feb 19 15:24:23 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 13:24:23 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> Is there a version number issue at play here? Do I need to make 'naked'
> symlinks to all these without the .2 numbers?
>
> I'll try that and see what happens.

It worked!!! At least now the fft complaints are gone. I had to make a symlink for *every* fft.so.2 library without the .2.
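A minimal sketch of the symlink workaround being described (paths assume the fftw rpm layout under /usr/local/lib, and it needs root privileges; this is just one way to script the manual fix):

    import glob, os

    # create "naked" libfoo.so names pointing at the versioned libfoo.so.2
    # files so that "ld -lfftw" and friends can resolve them at link time
    for lib in glob.glob('/usr/local/lib/lib*fftw*.so.2'):
        naked = lib[:-2]          # strip the trailing ".2"
        if not os.path.exists(naked):
            os.symlink(lib, naked)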
So Eric, this also answers your last email: this is the trick to build against the shared versions of the libs: make the extra symlinks without the version numbers :)

Now all builds and installs. I'll report test results in a minute :)

Thanks for all the help!

f.

From fperez at pizero.colorado.edu Tue Feb 19 15:26:39 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 13:26:39 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To: <012901c1b97a$45c2d670$de5bfea9@ericlaptop>
Message-ID:

On Tue, 19 Feb 2002, eric wrote:

> Hey Fernando,
>
> Come to think of it, I don't think I've ever linked SciPy against shared
> versions of these libraries. I've always linked against libxxx.a versions. Do
> you have these lying around so that you can test against them?

Final followup on this topic: after the trick I just found to make it work with the shared libs, it might be worth mentioning it in the build instructions page. Especially because those who download the fft rpm will get exactly the shared libs, so it's probably useful info (and will save others some grief :)

Test report coming...

f.

From oliphant.travis at ieee.org Tue Feb 19 15:40:22 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 19 Feb 2002 13:40:22 -0700
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
References:
Message-ID:

On Tuesday 19 February 2002 01:24 pm, you wrote:
> >
> > I'll try that and see what happens.
>
> It worked!!! At least now the fft complaints are gone. I had to make a
> symlink for *every* fft.so.2 library without the .2.
>

I think I remember having to do this too. Actually, I always link statically now anyway, so any binary I make does not have the dependency.

From eric at scipy.org Tue Feb 19 14:30:12 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 14:30:12 -0500
Subject: [SciPy-dev] Stupid question.
References:
Message-ID: <014001c1b97b$d99437d0$de5bfea9@ericlaptop>

Pearu,

Do you have the same trouble with the shared libraries -- or are you linking against the .a versions?

Everyone,

This fix isn't exactly what I'd call general... and only works if you have root access. Shouldn't there be a solution that doesn't require modifying the library names?

eric

----- Original Message -----
From: "Fernando Pérez"
To:
Sent: Tuesday, February 19, 2002 3:26 PM
Subject: Re: [SciPy-dev] Stupid question.

> On Tue, 19 Feb 2002, eric wrote:
>
> > Hey Fernando,
> >
> > Come to think of it, I don't think I've ever linked SciPy against shared
> > versions of these libraries. I've always linked against libxxx.a versions. Do
> > you have these lying around so that you can test against them?
>
> Final followup on this topic: after the trick I just found to make it work
> with the shared libs, it might be worth mentioning it in the build
> instructions page. Especially because those who download the fft rpm will get
> exactly the shared libs, so it's probably useful info (and will save others
> some grief :)
>
> Test report coming...
>
> f.
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From fperez at pizero.colorado.edu Tue Feb 19 15:38:19 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 13:38:19 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To: <014001c1b97b$d99437d0$de5bfea9@ericlaptop>
Message-ID:

On Tue, 19 Feb 2002, eric wrote:

> Everyone,
>
> This fix isn't exactly what I'd call general... and only works if you have root
> access. Shouldn't there be a solution that doesn't require modifying the
> library names?

Agreed. It might be worth then pointing people to an rpm of the static libs, making one of the shared libs with the extra symlinks, or simply telling people to build the libs themselves as static from the fftw site.

I guess I just don't really understand at all how ld does name resolution, what it does with suffixes and numbers, etc. And the man pages aren't very informative either.

Cheers,

f

From fperez at pizero.colorado.edu Tue Feb 19 15:42:43 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 13:42:43 -0700 (MST)
Subject: [SciPy-dev] scipy testing results
Message-ID:

Ok. I did a clean 'setup/build/install' on my system from cvs (just updated). After the shared lib issues we just covered, the process now works. I installed scipy system-wide, to test it as a 'normal user'.

The very first global import fails, so I can't really run the tests:

In [1]: import scipy
/usr/lib/python2.2/site-packages/scipy/linalg/flapack.so: undefined symbol: slaswp_
Warning: FFT package not found. Some names will not be available
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)

[snipped traceback]

/usr/lib/python2.2/site-packages/scipy/basic1a.py
      8 from scipy import diag
      9 from scipy import special
     10 from scipy.linalg import eig
        global scipy = undefined, global linalg = undefined, eig = undefined
     11
     12 from scipy import r1array,hstack

ImportError: cannot import name eig

I don't know if the two top warnings matter or not, but the one about FFT is strange, since I *do* have the FFT package:

In [2]: import FFT

In [3]: FFT?
Type: module
Base Class:
String Form:
Namespace: Interactive
File: /usr/lib/python2.2/site-packages/FFT/__init__.py

So, my questions are:

1- are the two top warnings important?
2- why the eig. failure? I checked linalg and eig is indeed in there.

I'm all ears...

f.

From pearu at cens.ioc.ee Tue Feb 19 15:43:48 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 22:43:48 +0200 (EET)
Subject: [SciPy-dev] Stupid question.
In-Reply-To: <014001c1b97b$d99437d0$de5bfea9@ericlaptop>
Message-ID:

On Tue, 19 Feb 2002, eric wrote:

> Do you have the same trouble with the shared libraries -- or are you linking
> against the .a versions?

I am linking against the .a versions from the debian dist.

But there should be a rather dirty fix:

gcc -shared ... /usr/local/lib/fft.so.2

that is, adding all shared libraries to the linker command instead of using -lfftw etc. Fernando, can you confirm that? Then this fix could be added to fftw_info for cases where static libs are not available.

pearu

From pearu at cens.ioc.ee Tue Feb 19 15:47:45 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 22:47:45 +0200 (EET)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> In [1]: import scipy
> /usr/lib/python2.2/site-packages/scipy/linalg/flapack.so: undefined
> symbol: slaswp_
> Warning: FFT package not found.
> Some names will not be available
> ---------------------------------------------------------------------------
> ImportError Traceback (most recent call last)
>
>
> I don't know if the two top warnings matter or not, but the one about FFT is
> strange, since I *do* have the FFT package:

This failure has nothing to do with FFT.

> So, my questions are:
>
> 1- are the two top warnings important?

Yes. Check your lapack installation. I suggest trying to build linalg inside the linalg directory:

./setup_linalg.py build

> 2- why the eig. failure? I checked linalg and eig is indeed in there.

Because it is the first one that tries to import linalg, I guess.

Pearu

From fperez at pizero.colorado.edu Tue Feb 19 15:50:34 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 13:50:34 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
Message-ID:

> But there should be a rather dirty fix:
> gcc -shared ... /usr/local/lib/fft.so.2
> that is, adding all shared libraries to the linker command instead of using -lfftw etc.

Sorry Pearu, I'm not sure I follow you. I'm not too familiar with gcc's library-building syntax. Could you clarify a bit what I'm supposed to tell gcc? I'll then test it and let you know.

thanks.

f

From pearu at cens.ioc.ee Tue Feb 19 15:54:34 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 22:54:34 +0200 (EET)
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> > But there should be a rather dirty fix:
> > gcc -shared ... /usr/local/lib/fft.so.2
> > that is, adding all shared libraries to the linker command instead of using -lfftw etc.
>
> Sorry Pearu, I'm not sure I follow you. I'm not too familiar with gcc's
> library-building syntax. Could you clarify a bit what I'm supposed to tell
> gcc? I'll then test it and let you know.

This should not be gcc specific. Anyway, try

gcc -shared build/temp.linux-i686-2.2/fftw_wrap.o \
    -L/usr/local/home/fperez/lib -Lbuild/temp.linux-i686-2.2 \
    /usr/local/lib/libfftw_threads.so.2 \
    /usr/local/lib/librfftw_threads.so.2 \
    /usr/local/lib/libfftw.so.2 \
    /usr/local/lib/librfftw.so.2 \
    -lpthread \
    -o build/lib.linux-i686-2.2/scipy/fftw/fftw.so

(I hope that I made no typos in the above)

Pearu

From oliphant.travis at ieee.org Tue Feb 19 16:08:15 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 19 Feb 2002 14:08:15 -0700
Subject: [SciPy-dev] IPython updated (Emacs works now)
In-Reply-To:
References:
Message-ID:

On Tuesday 19 February 2002 01:43 am, you wrote:
> Hi everyone,
>
> there's a new release of IPython at the usual
> http://www-hep.colorado.edu/~fperez/ipython/ for your testing pleasure.
>
> Of particular interest (for Travis, at least) is that things now work
> inside an Emacs (at least my Xemacs) buffer. The problem turned out to be a
> series of ugly interactions with curses and readline. The moral is,
> *never* use either of those modules in a python program which will try to
> run inside an Emacs buffer. It was kind of a PITA to track down.

I tried it and couldn't get it to work. What do you give to the py-python-command and py-python-command-args emacs variables?

Thanks,

-Travis

From eric at scipy.org Tue Feb 19 15:01:16 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 15:01:16 -0500
Subject: [SciPy-dev] fftw docs updated
References:
Message-ID: <017301c1b980$30a09fb0$de5bfea9@ericlaptop>

I've updated the build instructions to warn people about the issue.
http://www.scipy.org/site_content/tutorials/build_instructions I view this as a temporary fix though, until we learn the root of the problem. thanks to everyone bird-dogging this. eric From fperez at pizero.colorado.edu Tue Feb 19 16:05:33 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Tue, 19 Feb 2002 14:05:33 -0700 (MST) Subject: [SciPy-dev] IPython updated (Emacs works now) In-Reply-To: Message-ID: On Tue, 19 Feb 2002, Travis Oliphant wrote: > I tried it and couldn't get it to work. What do you give to the > py-python-command and py-python-command-args emacs variables? Huh? Sorry Travis, but I have no idea what you're talking about. When I said I got it to work under emacs, I meant the following: - open a shell buffer in emacs (M-x shell) - at the prompt you get there, type 'ipython --colors nocolor' (emacs buffers don't like color escapes). Now, IPython is a normal program running in an emacs terminal. I don't have the faintest idea how to make it work otherwise. I use emacs a lot, but I stay out of lisp's way as much as possible :) Sorry if this isn't what you had in mind. If you point me a little and there's more that needs fixing I'll do it, but the above procedure works for me. Cheers, f From fperez at pizero.colorado.edu Tue Feb 19 16:08:36 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Tue, 19 Feb 2002 14:08:36 -0700 (MST) Subject: [SciPy-dev] Stupid question. In-Reply-To: Message-ID: > This should not be gcc specific. Anyway, try > > gcc -shared build/temp.linux-i686-2.2/fftw_wrap.o \ > -L/usr/local/home/fperez/lib -Lbuild/temp.linux-i686-2.2 \ > /usr/local/lib/libfftw_threads.so.2 \ > /usr/local/lib/librfftw_threads.so.2 \ > /usr/local/lib/libfftw.so.2 \ > /usr/local/lib/librfftw.so.2 \ > -lpthread \ > -o build/lib.linux-i686-2.2/scipy/fftw/fftw.so Yes Pearu, it works. To summarize: I removed my hand-made symlinks to restore the system to the condition it's in when you install the rpms linked to at scipy.org. Then, running python setup build fails at: [scipy2]> gcc -shared build/temp.linux-i686-2.2/fftw_wrap.o -L/usr/local/home/fperez/lib -Lbuild/temp.linux-i686-2.2 -lfftw_threads -lrfftw_threads -lfftw -lrfftw -lpthread -lc_misc -lcephes -lgist -o build/lib.linux-i686-2.2/scipy/fftw/fftw.so /usr//bin/ld: cannot find -lfftw_threads collect2: ld returned 1 exit status Now, changing the above command by hand to: [scipy2]> gcc -shared build/temp.linux-i686-2.2/fftw_wrap.o -L/usr/local/home/fperez/lib -Lbuild/temp.linux-i686-2.2 /usr/local/lib/libfftw_threads.so.2 /usr/local/lib/librfftw_threads.so.2 /usr/local/lib/libfftw.so.2 /usr/local/lib/librfftw.so.2 -lpthread -o build/lib.linux-i686-2.2/scipy/fftw/fftw.so works fine. So I guess it's either for the users to hand-fix the links, or for setup.py to work around the issue by incorporating Pearu's idea. Even if kludgy, that sounds much better. Cheers, f. From oliphant.travis at ieee.org Tue Feb 19 16:19:50 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 19 Feb 2002 14:19:50 -0700 Subject: [SciPy-dev] IPython updated (Emacs works now) In-Reply-To: References: Message-ID: On Tuesday 19 February 2002 02:05 pm, you wrote: > On Tue, 19 Feb 2002, Travis Oliphant wrote: > > I tried it and couldn't get it to work. What do you give to the > > py-python-command and py-python-command-args emacs variables? > > Huh? Sorry Travis, but I have no idea what you're talking about. 
When I
> said I got it to work under emacs, I meant the following:
>

Thanks, I'm no LISP man myself. I'm just using the python environment under emacs which gives a lot of flexibility --- like executing snippets of code automatically. I'm not sure how all of this will work with ipython, but it would be nice if it could.

This python environment by default uses the command 'python' with the arguments '-i' but I have to change that to get it to use ipython instead. I'm not sure how to get the arguments to work right.

Thanks for pointing out more specifically what you did.

-Travis

From fperez at pizero.colorado.edu Tue Feb 19 16:17:55 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 14:17:55 -0700 (MST)
Subject: [SciPy-dev] IPython updated (Emacs works now)
In-Reply-To:
Message-ID:

> Thanks, I'm no LISP man myself. I'm just using the python environment
> under emacs which gives a lot of flexibility --- like executing snippets
> of code automatically. I'm not sure how all of this will work with
> ipython, but it would be nice if it could.

Ah, I see. IPython is more meant to 'live in it', since many of its features are really designed for all-out interactive use.

So I think a reasonable setup is to use the normal python for code snippets (since all you need is to evaluate the code, not really much else), and IPython (even inside emacs) when you want a full interpreter. Now, if you want the verbose tracebacks from IPython in your 'snippet evaluator', set your sys.excepthook to verboseTB from the ultraTB.py file (details in the file). That would give you IPython's only valuable feature for non-interactive use without having to load all of IPython for every code snippet you want to test.

I think that's really the best setup. You get detailed tracebacks for short code, and when you want a complete session you can still open it inside an emacs buffer (if you want). I should probably add a comment to this effect in the docs.

> Thanks for pointing out more specifically what you did.

No prob. Did it work?

Cheers,

f

From pearu at cens.ioc.ee Tue Feb 19 16:26:57 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 23:26:57 +0200 (EET)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> On Tue, 19 Feb 2002, Pearu Peterson wrote:
> > Yes. Check your lapack installation. I suggest trying to build linalg
> > inside the linalg directory:
> > ./setup_linalg.py build
>
>
> This worked fine. I did a full, clean rebuild and the whole process went
> through fine. I have the atlas libs installed, and there were no
> complaints. Then I reinstalled system-wide (python setup.py install as root at
> the top-level scipy dir).
> Still getting the same error with not being able to
> import eig. Is this my fault?

Don't know ;-)

But try the following:

./setup_linalg.py build
(Strange, cblas.so and friends are built into the current directory, anyway..)
python
>>> import flapack

How does that go? If it fails, can you send the output of ./setup_linalg.py build.

Pearu

From rossini at u.washington.edu Tue Feb 19 16:29:38 2002
From: rossini at u.washington.edu (Anthony Rossini)
Date: Tue, 19 Feb 2002 13:29:38 -0800 (PST)
Subject: [SciPy-dev] IPython updated (Emacs works now)
In-Reply-To:
Message-ID:

Note that one thing I'm working on doing is extending ESS (an Emacs mode for data analysis, usually R or SAS, but also for other "interactive" data analysis stuff) to accommodate iPython, in the hopes of (very slowly) moving towards using R and SciPy for most of my work. ESS differs from the standard python model in that it works on extending the interactive environment (and redirecting things like "help" requests/responses to different buffers).

Just a comment; I'd love to do it today, but it's not happening this month.

best,

-tony

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> > Thanks, I'm no LISP man myself. I'm just using the python environment
> > under emacs which gives a lot of flexibility --- like executing snippets
> > of code automatically. I'm not sure how all of this will work with
> > ipython, but it would be nice if it could.
>
> Ah, I see. IPython is more meant to 'live in it', since many of its features
> are really designed for all-out interactive use.
>
> So I think a reasonable setup is to use the normal python for code snippets
> (since all you need is to evaluate the code, not really much else), and
> IPython (even inside emacs) when you want a full interpreter. Now, if you want
> the verbose tracebacks from IPython in your 'snippet evaluator', set your
> sys.excepthook to verboseTB from the ultraTB.py file (details in the
> file). That would give you IPython's only valuable feature for non-interactive
> use without having to load all of IPython for every code snippet you want to
> test.
>
> I think that's really the best setup. You get detailed tracebacks for short
> code, and when you want a complete session you can still open it inside an
> emacs buffer (if you want). I should probably add a comment to this effect in
> the docs.
>
> > Thanks for pointing out more specifically what you did.
>
> No prob. Did it work?
>
> Cheers,
>
> f
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From fperez at pizero.colorado.edu Tue Feb 19 16:32:57 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 14:32:57 -0700 (MST)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

> > import eig. Is this my fault?
>
> Don't know ;-)

I seem to be the one running into all the strange bugs (The infamous Mandrake gcc, the shared libs, now this...). Ah, my lucky stars :)

> But try the following:
>
> ./setup_linalg.py build
> (Strange, cblas.so and friends are built into the current directory,
> anyway..)
> python
> >>> import flapack
>
> How does that go? If it fails, can you send the output of ./setup_linalg.py
> build.

In [1]: import flapack
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
?
ImportError: ./flapack.so: undefined symbol: slaswp_

So that didn't work.
My build comes out fine:

running build_ext
linalg.flapack flapack needs fortran libraries 0
skipping 'linalg.flapack' extension (up-to-date)
linalg.clapack clapack needs fortran libraries 0
skipping 'linalg.clapack' extension (up-to-date)
linalg.fblas fblas needs fortran libraries 0
skipping 'linalg.fblas' extension (up-to-date)
linalg.cblas cblas needs fortran libraries 0
skipping 'linalg.cblas' extension (up-to-date)

This is just the end. If you want it all, I'll redo the build and zip you the whole thing, so it doesn't get annoying for the list.

Cheers,

f

From fperez at pizero.colorado.edu Tue Feb 19 16:35:23 2002
From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=)
Date: Tue, 19 Feb 2002 14:35:23 -0700 (MST)
Subject: [SciPy-dev] IPython updated (Emacs works now)
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Anthony Rossini wrote:

> Note that one thing I'm working on doing is extending ESS (an Emacs mode
> for data analysis, usually R or SAS, but also for other "interactive" data
> analysis stuff) to accommodate iPython, in the hopes of (very slowly)
> moving towards using R and SciPy for most of my work. ESS differs from
> the standard python model in that it works on extending the interactive
> environment (and redirecting things like "help" requests/responses to
> different buffers).
>
> Just a comment; I'd love to do it today, but it's not happening this month.

Great! If there's anything I can do in IPython to make your life easier, by all means let me know. My only constraint is that as I said, I stay out of lisp's way as much as I can :) But I'll definitely try to incorporate any changes needed (as long as they don't break ipython's basic functionality at a standalone terminal).

Cheers,

f.

From pearu at cens.ioc.ee Tue Feb 19 16:37:33 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 23:37:33 +0200 (EET)
Subject: [SciPy-dev] version numbers.
In-Reply-To:
Message-ID:

Hi Eric,

On Tue, 19 Feb 2002, Pearu Peterson wrote:

> I have studied a possibility to let the CVS server calculate these
> cvs-numbers. Indeed, it is possible but the corresponding script needs
> some adjustments. So, I hope that the frustration with updating version
> numbers in CVS should be temporary.

I have implemented the required hooks and also tested them in my CVS tree. It seems to work very nicely. So, in the attachment you find a python script that calculates cvs version numbers. Installation instructions for scipy are the following:

1) Copy the file calc_cvs_version.py to /home/cvsroot/CVSROOT:

cp calc_cvs_version.py /home/cvsroot/CVSROOT/calc_cvs_version.py

2) Add the following (single) line to the /home/cvsroot/CVSROOT/loginfo file:

world/scipy (/usr/bin/python $CVSROOT/CVSROOT/calc_cvs_version.py $CVSROOT/world/scipy)

3) Try to commit something and then do update. This should create the file __cvs_version__.py in the scipy tree.

Few notes: If someone tries to change __cvs_version__.py and commit it, it will have no effect. So it is safe. If files are committed to CVS then __cvs_version__.py will be updated on each commit automatically.

You may also want to check the algorithm in calc_cvs_version.py in case I missed something. I have tested it to work also with Python 1.5.2.

Let me know if you have any doubts about it.
Regards,

Pearu

-------------- next part --------------
#!/usr/bin/env python

# Direct usage (only for testing):
#   calc_cvs_version.py <path>
# Usage from CVS tree:
#   1) Copy this file to $CVSROOT/CVSROOT
#   2) Add the following line to $CVSROOT/CVSROOT/loginfo file:
#        dir/modulename (/full/path/to/python $CVSROOT/CVSROOT/calc_cvs_version.py $CVSROOT/dir/modulename)
#      Make sure that `python' is a full path to the system python executable.
# Result:
#   Creates CVS formatted python file
#     <path>/__cvs_version__.py,v
#   that contains the assignment of the version tuple:
#     cvs_version = (...)

# Author:
#   Pearu Peterson
# Permission to use, modify, and distribute this software is given under the
# terms of the LGPL. See http://www.fsf.org
# NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.

import sys
import os
import string

if len(sys.argv)!=2:
    sys.exit()

cvs_path = os.path.abspath(sys.argv[1])

def visit_cvs_tree(cvs_version,dirname,names):
    try:
        names.remove('Attic')
    except ValueError:
        pass
    for name in filter(lambda n:n[-2:]==',v',names):
        if name=='__cvs_version__.py,v':
            continue
        f = open(os.path.join(dirname,name),'r')
        rev_numbers_str = string.split(f.readline())[1][:-1]
        f.close()
        rev_numbers = map(eval,string.split(rev_numbers_str,'.'))
        if len(rev_numbers)>1:
            cvs_version[-1] = cvs_version[-1] + rev_numbers[-1]
            cvs_version[-2] = cvs_version[-2] + rev_numbers[-2]

cvs_version_py_tmpl = """\
head #revision#;
access;
symbols;
locks; strict;
comment @# @;

#revision#
date 2002.02.19.18.41.34; author pearu; state Exp;
branches;
next ;

desc
@@

#revision#
log
@__cvs_version__.py
@
text
@### DO NOT EDIT THIS FILE!!!
### DO NOT TRY TO COMMIT THIS FILE TO CVS REPOSITORY!!!
cvs_version = (#version#)
@
"""

cvs_version_fname = os.path.join(cvs_path,'__cvs_version__.py,v')

###### Get previous version ######
if os.path.isfile(cvs_version_fname):
    f = open(cvs_version_fname,'r')
    rev_numbers_str = string.split(f.readline())[1][:-1]
    f.close()
    prev_rev_numbers = map(eval,string.split(rev_numbers_str,'.'))
    while len(prev_rev_numbers)<4:
        prev_rev_numbers = [1] + prev_rev_numbers
else:
    prev_rev_numbers = [1,1,0,0]

##### Calculate new version ######
rev_numbers = prev_rev_numbers[:2]+[0,0]
os.path.walk(cvs_path,visit_cvs_tree,rev_numbers)

##### Update version ######
if rev_numbers[-2] != prev_rev_numbers[-2]:
    # E.g. a file has been added or removed.
    rev_numbers[-3] = rev_numbers[-3] + 1

revision = string.join(map(str,rev_numbers),'.')
version = string.join(map(str,rev_numbers),',')

##### Create new file __cvs_version__.py,v ####
cvs_version_py = string.replace(cvs_version_py_tmpl,'#version#',version)
cvs_version_py = string.replace(cvs_version_py,'#revision#',revision)
if os.path.exists(cvs_version_fname):
    os.remove(cvs_version_fname)
f = open(cvs_version_fname,'w')
f.write(cvs_version_py)
f.close()
os.chmod(cvs_version_fname, 0444)
#### EOF #####

From pearu at cens.ioc.ee Tue Feb 19 16:45:33 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 23:45:33 +0200 (EET)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> ImportError: ./flapack.so: undefined symbol: slaswp_
>
> So that didn't work.
From pearu at cens.ioc.ee  Tue Feb 19 16:45:33 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 19 Feb 2002 23:45:33 +0200 (EET)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> ImportError: ./flapack.so: undefined symbol: slaswp_
>
> So that didn't work. My build comes out fine:
>
> running build_ext
> linalg.flapack flapack needs fortran libraries 0
> skipping 'linalg.flapack' extension (up-to-date)
> linalg.clapack clapack needs fortran libraries 0
> skipping 'linalg.clapack' extension (up-to-date)
> linalg.fblas fblas needs fortran libraries 0
> skipping 'linalg.fblas' extension (up-to-date)
> linalg.cblas cblas needs fortran libraries 0
> skipping 'linalg.cblas' extension (up-to-date)
>
> This is just the end. If you want it all, I'll redo the build and zip you the whole thing, so it doesn't get annoying for the list.

Yes, send me the output, but before hitting

  ./setup_linalg.py build

remove the build/ directory first.

Pearu

From fperez at pizero.colorado.edu  Tue Feb 19 16:52:56 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Tue, 19 Feb 2002 14:52:56 -0700 (MST)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Pearu Peterson wrote:

> Yes, send me the output, but before hitting
>
> ./setup_linalg.py build
>
> remove the build/ directory first.

Done. Output is attached.

Thanks,
f

-------------- next part --------------
A non-text attachment was scrubbed...
Name: build.log.gz
Type: application/octet-stream
Size: 3327 bytes
Desc:
URL:

From pearu at cens.ioc.ee  Tue Feb 19 17:08:41 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Wed, 20 Feb 2002 00:08:41 +0200 (EET)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> On Tue, 19 Feb 2002, Pearu Peterson wrote:
> Done. Output is attached.

Looks good to me. How did you install the lapack library? Is it shared as well? Does your lapack library define the symbol slaswp_? Try

  nm /usr/local/lib/atlas/liblapack.a | grep slaswp_

I get the following output:

         U slaswp_
         U slaswp_
00000000 T slaswp_
         U slaswp_

Pearu

From fperez at pizero.colorado.edu  Tue Feb 19 17:13:43 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Tue, 19 Feb 2002 15:13:43 -0700 (MST)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

On Wed, 20 Feb 2002, Pearu Peterson wrote:

> Looks good to me. How did you install the lapack library? Is it shared as well? Does your lapack library define the symbol slaswp_? Try
>
> nm /usr/local/lib/atlas/liblapack.a | grep slaswp_
>
> I get the following output
>
> U slaswp_
> U slaswp_
> 00000000 T slaswp_
> U slaswp_

Blank. But the file is there:

[~]> d /usr/local/lib/atlas/*
/usr/local/home/fperez
-rw-r--r--   1 root   5525216 Jan 24  2001 /usr/local/lib/atlas/libatlas.a
-rw-r--r--   1 root    268612 Jan 24  2001 /usr/local/lib/atlas/libcblas.a
-rw-r--r--   1 root    348090 Jan 24  2001 /usr/local/lib/atlas/libf77blas.a
-rw-r--r--   1 root    156424 Jan 24  2001 /usr/local/lib/atlas/liblapack.a

I grabbed a build of atlas from the site pointed to by scipy which was made for my machine (PIII/256Kb cache). Should I rebuild atlas?

thanks,
f
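[Editor's note: Pearu's nm test can be automated from Python; a sketch, assuming nm is on the PATH -- the helper name is invented, only the library path and symbol come from this thread:]

import os, string

def defines_symbol(library, symbol):
    # look for a text ('T') definition of the symbol in nm's output
    pipe = os.popen('nm %s 2>/dev/null' % library)
    found = 0
    for line in pipe.readlines():
        words = string.split(line)
        if len(words) == 3 and words[1] == 'T' and words[2] == symbol:
            found = 1
    pipe.close()
    return found

print defines_symbol('/usr/local/lib/atlas/liblapack.a', 'slaswp_')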
From pearu at cens.ioc.ee  Tue Feb 19 17:24:23 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Wed, 20 Feb 2002 00:24:23 +0200 (EET)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

On Tue, 19 Feb 2002, Fernando Pérez wrote:

> [~]> d /usr/local/lib/atlas/*
> /usr/local/home/fperez
> -rw-r--r--   1 root   5525216 Jan 24  2001 /usr/local/lib/atlas/libatlas.a
> -rw-r--r--   1 root    268612 Jan 24  2001 /usr/local/lib/atlas/libcblas.a
> -rw-r--r--   1 root    348090 Jan 24  2001 /usr/local/lib/atlas/libf77blas.a
> -rw-r--r--   1 root    156424 Jan 24  2001 /usr/local/lib/atlas/liblapack.a

This is what I have (compare the size of liblapack.a):

ls -l /usr/local/lib/atlas/Linux_PII/
total 12372
-rw-r--r--   1 pearu   staff  5307992 Jan 11 00:59 libatlas.a
-rw-r--r--   1 pearu   staff   280860 Jan 11 00:44 libcblas.a
-rw-r--r--   1 pearu   staff   351084 Jan 11 01:01 libf77blas.a
-rw-r--r--   1 pearu   staff  6397448 Jan 11 07:37 liblapack.a
-rw-r--r--   1 pearu   staff   327636 Jan 11 00:17 libtstatlas.a

So, you should rebuild lapack (atlas). You should also apply the notes in

  http://math-atlas.sourceforge.net/errata.html#completelp

Pearu

From fperez at pizero.colorado.edu  Tue Feb 19 18:49:48 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Tue, 19 Feb 2002 16:49:48 -0700 (MST)
Subject: [SciPy-dev] scipy testing results
In-Reply-To:
Message-ID:

> So, you should rebuild lapack (atlas). You should also apply the notes in
>
> http://math-atlas.sourceforge.net/errata.html#completelp

Boy, wasn't that fun and simple :)

The atlas compilation took a solid 1/2 hour, but it's done. I didn't build lapack from scratch, I grabbed rpms for that. But I followed the 'completeness' instructions for atlas/lapack.

The build_instructions web page should definitely mention that your best bet for getting things to work is just to bite the bullet and build atlas yourself. The existing prebuilt binaries are just too old/incomplete to be useful, and you end up just wasting time trying to use them.

Now things are working, sort of. I can import scipy, but scipy.test() reports 5 failures. I pasted the report at the end in case it's useful for you guys.

I fully realize that right now we're in a state of flux, but for 'end users' we need to consider some centralized packaging for scipy. At least having local builds of lapack, atlas and a static fft available would be a plus. I think 'outsiders' would be quite put off by the current level of difficulty of the installation. This is *not* criticism of anyone, I know right now the priority is the code. Just a minor comment for when the time comes to release the next 'public' scipy. You guys have done a fantastic job so far.

Anyway, *thanks a lot* to all those who helped with today's multi-problem hunt. I really appreciate your kindness.

Cheers,
f.

PS: test report follows. I ran the full (level=10) suite.

In [7]: scipy.test(level=10)
...
[snipped early non-error stuff]

======================================================================
ERROR: check_qnan (test_misc.test_isnan)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.2/site-packages/scipy/tests/test_misc.py", line 112, in check_qnan
    assert(isnan(log(-1.)) == 1)
ValueError: math domain error

======================================================================
ERROR: check_qnan (test_misc.test_isfinite)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.2/site-packages/scipy/tests/test_misc.py", line 132, in check_qnan
    assert(isfinite(log(-1.)) == 0)
ValueError: math domain error

======================================================================
ERROR: check_qnan (test_misc.test_isinf)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.2/site-packages/scipy/tests/test_misc.py", line 156, in check_qnan
    assert(isinf(log(-1.)) == 0)
ValueError: math domain error

======================================================================
ERROR: check_basic (test_basic1a.test_roots)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.2/site-packages/scipy/tests/test_basic1a.py", line 19, in check_basic
    assert_array_almost_equal(roots(a1),[2,2],11)
  File "/usr/lib/python2.2/site-packages/scipy/basic1a.py", line 51, in roots
    roots,dummy = eig(A)
  File "/usr/lib/python2.2/site-packages/scipy/linalg/linear_algebra.py", line 440, in eig
    results = ev(a, jobvl='N', jobvr=vchar, lwork=results[-2][0])
error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork

======================================================================
ERROR: check_inverse (test_basic1a.test_roots)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.2/site-packages/scipy/tests/test_basic1a.py", line 25, in check_inverse
    assert_array_almost_equal(sort(roots(poly(a))),sort(a),5)
  File "/usr/lib/python2.2/site-packages/scipy/basic1a.py", line 51, in roots
    roots,dummy = eig(A)
  File "/usr/lib/python2.2/site-packages/scipy/linalg/linear_algebra.py", line 440, in eig
    results = ev(a, jobvl='N', jobvr=vchar, lwork=results[-2][0])
error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork

----------------------------------------------------------------------
Ran 319 tests in 256.606s

FAILED (errors=5)

From fedor at mailandnews.com  Tue Feb 19 18:52:56 2002
From: fedor at mailandnews.com (Fedor Baart)
Date: Wed, 20 Feb 2002 00:52:56 +0100
Subject: [SciPy-dev] Plot_utlity
In-Reply-To: <200202191800.g1JI0Gj05755@scipy.org>
Message-ID: <001301c1b9a0$8df4c860$0200a8c0@amd>

SVG is a very interesting format. I have been working on it for a few weeks now, and I'm getting some nice results. It is especially useful for graphs with a lot of information, because of the 'vector' approach instead of the bitmap. I will probably publish my SVG program somewhere within a few days; I'm just finishing up the last elements and the documentation. I'm also trying to get it to work on zope -- it is working as an external method on my local zope server.

I used the SVG specification from www.w3.org as the main source of information. It is a very clear specification.
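[Editor's note: to make the 'vector approach' concrete, here is a minimal, hypothetical sketch of hand-writing an SVG polyline graph -- an illustration only, not Fedor's actual program:]

def svg_polyline(points, width=300, height=200):
    # y grows downward in SVG, so flip the y coordinates
    coords = ' '.join(['%g,%g' % (x, height - y) for (x, y) in points])
    return ('<?xml version="1.0"?>\n'
            '<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">\n'
            '  <polyline points="%s" fill="none" stroke="black"/>\n'
            '</svg>\n') % (width, height, coords)

f = open('plot.svg', 'w')
f.write(svg_polyline([(0, 10), (100, 150), (200, 60), (300, 180)]))
f.close()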
I looked at some examples at http://www.adobe.com/svg

I ordered an SVG book from O'Reilly but haven't got it yet. I also ordered Python and XML, but I didn't receive it until I was almost done.

There is another xml-based vector format, called 'vml' or vector markup language. It is used in MS Office 2000 and XP and is supported by MSIE for creating and viewing vector graphics. But since SVG is the W3 recommendation, I don't think we're going to see much of vml anymore.

Other useful links in this area are:

http://www.xml.com/pub/a/2001/03/21/svg.html        # intro to SVG
http://www.xml.com/pub/a/2002/01/23/svg/index.html  # a nice example about
http://search.cpan.org/search?dist=SVG              # Perl module which resembles the one I am working on
http://sis.cmis.csiro.au/svg/                       # old
http://xml.apache.org/batik/                        # Java implementation. Very interesting. Also a nice viewer
http://www.jasc.com                                 # look for the webdraw program. It creates SVG documents which can be very useful for testing or for creating symbols you want to use

I'll send you an email when I publish it somewhere.

Fedor

Subject: Re: [SciPy-dev] Plot_utlity
Reply-To: scipy-dev at scipy.net

>>>>> "eric" == eric writes:

    eric> This sounds very interesting. Any chance your code is open?
    eric> If so, I'd like to take a look. Also, what resources did
    eric> you use to learn about SVG?

Indeed, this is interesting.

http://www.onlamp.com/pub/a/onlamp/2002/02/07/svghist.html
http://www.w3.org/Graphics/SVG/Overview.htm8

From pearu at cens.ioc.ee  Tue Feb 19 19:02:23 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Wed, 20 Feb 2002 02:02:23 +0200 (EET)
Subject: [SciPy-dev] Plot_utlity
In-Reply-To: <001301c1b9a0$8df4c860$0200a8c0@amd>
Message-ID:

On Wed, 20 Feb 2002, Fedor Baart wrote:

> main source of information. It is a very clear specification. I looked
> at some examples at http://www.adobe.com/svg

Only Windows and Mac are supported :(

Pearu

From oliphant.travis at ieee.org  Tue Feb 19 20:53:07 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 19 Feb 2002 18:53:07 -0700
Subject: [SciPy-dev] FFTPACK and RandomArray
In-Reply-To: <055101c1b8ba$05c6b160$6b01a8c0@ericlaptop>
References: <055101c1b8ba$05c6b160$6b01a8c0@ericlaptop>
Message-ID:

On Monday 18 February 2002 01:22 pm, you wrote:

> Hey guys,
>
> We also need to start the task of removing fftw from the core and fitting FFTPACK in (shouldn't be hard). Also, what other packages do we need to address?

I like it.

How should we handle fftpack? Should we use f2py and modified Numeric wrappers, or use the f2c version Numeric currently uses?

Also, I've been working on the stats package. How should we handle RandomArray? Should we take all that's good from RandomArray and place it in our stats package?

My inclination is to subsume RandomArray, FFTPACK, and LinearAlgebra into scipy so that all we rely on is the core array facilities of Numeric.

-Travis O.

From greglandrum at earthlink.net  Tue Feb 19 22:52:37 2002
From: greglandrum at earthlink.net (greg landrum)
Date: 19 Feb 2002 19:52:37 -0800
Subject: [SciPy-dev] Plot_utlity
In-Reply-To: <001301c1b9a0$8df4c860$0200a8c0@amd>
References: <001301c1b9a0$8df4c860$0200a8c0@amd>
Message-ID: <1014177178.1430.22.camel@badger>

On Tue, 2002-02-19 at 15:52, Fedor Baart wrote:

> SVG is a very interesting format. I have been working on it for a few weeks now, and I'm getting some nice results. It is especially useful for graphs with a lot of information, because of the 'vector' approach instead of the bitmap.

SVG is quite interesting and useful.
It's also pretty easy to pick up. Some python and SVG pointers:

- Piddle/sping (http://sourceforge.net/projects/piddle) has an SVG canvas (self-promotion warning: I wrote this).
- Sketch (http://sketch.sourceforge.net/) handles SVG, but it's unix only.
- Graphite (http://sourceforge.net/projects/graphite) is a plotting program which uses Sping as its back-end. It's definitely beta and doesn't appear to be active.

On the commercial side: Adobe Illustrator now has native support for SVG (reading and writing).

-greg

--
greg Landrum (greglandrum at earthlink.net)
Software Carpenter/Computational Chemist

From eric at scipy.org  Tue Feb 19 22:23:18 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 22:23:18 -0500
Subject: [SciPy-dev] SciPy on MacOS X (almost)
References: <15473.40628.955336.273670@localhost.starpower.net>
Message-ID: <022a01c1b9bd$f17838f0$de5bfea9@ericlaptop>

> Hi Everyone,
>
> First, let me say THANKS for SciPy! A colleague and I have begun using Python, Numeric, and SciPy exclusively for our research (instead of MATLAB).

I'm glad to hear it. What field are you working in?

> While it's been a little rough here in the beginning, already we can see a lot of advantages to making the switch.

It is definitely rough around the edges. Hopefully it is a diamond in the rough.

> My main reason for posting is to try to help push along support for MacOS X.

I'm all for it and will help where possible. We don't have any machines though.

> I noticed some recent postings on this subject, but was not a member of the list so could not reply to them directly. To summarize, I seem to have successfully built SciPy using the Fink tools, but have some errors when trying to import it (which have also shown up in other contexts on this list). Any help in getting this stuff working would be greatly appreciated! If there is interest, I can look into getting SciPy added as a Fink package.

I'm all for getting SciPy into as many corners as possible, but I worry that it is still a little green. What are the requirements in this area for Fink?

> Details of my attempt are below.

How does the latest CVS do on Mac?

eric

From eric at scipy.org  Tue Feb 19 22:44:20 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 22:44:20 -0500
Subject: [SciPy-dev] scipy_distutils/*_info.py files
References:
Message-ID: <023601c1b9c0$e18a4840$de5bfea9@ericlaptop>

Hey Fernando,

> Agreed. As long as we don't get in the business of trying to write autoconf. :-)
>
> Have you ever looked at scons (http://www.scons.org)? I know it looks like more of a make than autoconf replacement, but it might be worth a look. It would be great if here we could concentrate on the Sci part of SciPy and leverage others' work for the 'software engineering' part. I realize you guys have already been forced to do a ton of work around distutils limitations (frankly it's a pretty primitive system, and I'd argue one of Python's Achilles' heels).

Currently, scons is general purpose like make. It doesn't serve the same purpose as distutils -- i.e. building Python extensions as automatically as possible. There is a lot of overlap though, and SCons would make a much more powerful underpinning for the second generation of distutils. I'm very hopeful this will happen. For now I think distutils is the best option. It ain't always pretty, but you can usually get the job done.

> I recently also saw QMTest for testing at http://www.codesourcery.com/qm/qmtest, also Python-based.
qmtest is general purpose also. It uses web-based entry and reporting for all of its tests currently. The only other option for creating tests is to write them in XML. I think this will eventually change. For now, unittest.py plus maybe 500 or 1000 other lines of code provide a reasonable test framework that can be defined in Python and executed from the Python interpreter.
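[Editor's note: a bare-bones illustration of the unittest.py approach eric describes, using the check_* method convention seen elsewhere in this thread; the suite and test names are invented:]

import unittest

class test_roundtrip(unittest.TestCase):
    def check_roundtrip(self):
        # a real suite would compare numerical output here
        self.assertEqual(2 + 2, 4)

def test_suite():
    suite = unittest.TestSuite()
    suite.addTest(test_roundtrip('check_roundtrip'))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner().run(test_suite())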
> I may be completely off-mark here, I'm just thinking of how we can best use others' work for some areas. The task ahead for scipy is big enough as it is.
>
> But you guys may have already gone over this, so feel free to shoot me down if I'm just blabbering nonsense. All suggestions welcome!

We've looked at quite a few options, but not all of them by any stretch of the imagination.

eric

From eric at scipy.org  Tue Feb 19 23:14:05 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 23:14:05 -0500
Subject: [SciPy-dev] scipy testing results
References:
Message-ID: <028a01c1b9c5$094bc120$de5bfea9@ericlaptop>

Hey Fernando,

> Boy, wasn't that fun and simple :)

Yeah, the CVS is not for the faint of heart. Things will simplify some, but, until 0.2 is released, I doubt the build process will be very well documented.

> The atlas compilation took a solid 1/2 hour, but it's done. I didn't build lapack from scratch, I grabbed rpms for that. But I followed the 'completeness' instructions for atlas/lapack.
>
> The build_instructions web page should definitely mention that your best bet for getting things to work is just to bite the bullet and build atlas yourself. The existing prebuilt binaries are just too old/incomplete to be useful, and you end up just wasting time trying to use them.
>
> Now things are working, sort of. I can import scipy, but scipy.test() reports 5 failures. I pasted the report at the end in case it's useful for you guys.

The errors look like a combination of the ones I'm seeing and Pearu is seeing. We'll work out the IEEE ones later. The linear algebra ones will hopefully get worked out soon -- or fixed with linalg2, which I still haven't spent any time on...

> I fully realize that right now we're in a state of flux, but for 'end users' we need to consider some centralized packaging for scipy. At least having local builds of lapack, atlas and a static fft available would be a plus. I think 'outsiders' would be quite put off by the current level of difficulty of the installation.

If you'll write up the process you followed, I'll post it to the website as hints on building from the CVS. I'm glad you got it all at least close to working.

see ya,
eric

From fperez at pizero.colorado.edu  Wed Feb 20 00:19:39 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Tue, 19 Feb 2002 22:19:39 -0700 (MST)
Subject: [SciPy-dev] scipy_distutils/*_info.py files
In-Reply-To: <023601c1b9c0$e18a4840$de5bfea9@ericlaptop>
Message-ID:

Hi Eric,

> Currently, scons is general purpose like make. It doesn't serve the same purpose as distutils -- i.e. building Python extensions as automatically as possible. There is a lot of overlap though, and SCons would make a much more powerful underpinning for the second generation of distutils. I'm very hopeful this will happen. For now I think distutils is the best option. It ain't always pretty, but you can usually get the job done.

Well, it seems that the 'software engineering' environment around python is still very much taking shape. I hope it solidifies soon, so that individual projects can spend more time solving their specific problems and less on the base framework.

Scons and Qmtest are both from the original Software Carpentry competition, which sounded like a great idea. But things seem to have fizzled out a bit on that front. I really liked the rationale behind that contest: write 4 major applications useful for *any* software project, all using a powerful but easy language (python) both for their implementation and for end-user control and extension. That way developers waste much less time and effort context-switching between shell/make/autoconf/... Let's hope those 4 projects can pick up some momentum...

But obviously you guys have done your homework, and at the moment it seems that, 'duct-tapy' as the solutions may be, they work. So the priority is now SciPy itself, I see that. It's refreshing to see though that you (collectively) remain open to new ideas and alternatives. I really see a very nice future for the SciPy project, as many things seem to be falling in the right place.

Cheers,
f.

From eric at scipy.org  Tue Feb 19 23:34:31 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 23:34:31 -0500
Subject: Strengths of R (was Re: [SciPy-dev] IPython updated (Emacs works now))
References:
Message-ID: <029801c1b9c7$e72f1ee0$de5bfea9@ericlaptop>

Hey Tony,

> Note that one thing I'm working on doing is extending ESS (an Emacs mode for data analysis, usually R or SAS, but also for other "interactive" data analysis stuff) to accommodate iPython, in the hopes of (very slowly) moving towards using R and SciPy for most of my work.

What are the benefits of R over Python/SciPy? Is there a philosophical difference that makes it better suited for statistics (and other things for that matter), or is it simply that it has more stats functionality and is much more mature? If there is a different philosophy behind it, can you summarize it? Maybe we can incorporate some of its strong points into SciPy's stats module. Travis Oliphant is working on it as we speak. We could definitely use some input from you stats guys!

thanks,
eric

From eric at scipy.org  Tue Feb 19 23:47:54 2002
From: eric at scipy.org (eric)
Date: Tue, 19 Feb 2002 23:47:54 -0500
Subject: [SciPy-dev] FFTPACK and RandomArray
References: <055101c1b8ba$05c6b160$6b01a8c0@ericlaptop>
Message-ID: <02b801c1b9c9$c2c63820$de5bfea9@ericlaptop>

Just to fill everyone in, we're gonna be making fftw an "optional" package in SciPy. This is because its license doesn't fit with the rest of the package. FFTPACK provides pretty much a drop-in replacement in functionality, so no capabilities are lost. The only drawback is that fftw is faster. On the upside, it will simplify the build process, and the part of the pain that Fernando suffered today will be alleviated.
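[Editor's note: for concreteness, Numeric already ships an FFTPACK-based FFT module with an interface along these lines, which is why the swap can be a near drop-in replacement -- a sketch assuming a standard Numeric install:]

import Numeric, FFT        # Numeric's FFT module wraps an f2c'd FFTPACK

x = Numeric.arange(8.0)
X = FFT.fft(x)             # forward transform, complex result
y = FFT.inverse_fft(X)     # round trip; y should approximate x

print Numeric.maximum.reduce(abs(y.real - x))   # close to 0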
> I like it.
>
> How should we handle fftpack? Should we use f2py and modified Numeric wrappers, or use the f2c version Numeric currently uses?

Seems the fastest (easiest to get working) solution is just to use the one that comes with Numeric. You probably lose some speed though in the f2c conversion process. I don't know how many functions there are to wrap, but making f2py wrappers shouldn't be that difficult. This should be the "long term goal" I think -- it can even be the short term goal. Whoever does it gets to decide. :-)

> Also, I've been working on the stats package. How should we handle RandomArray? Should we take all that's good from RandomArray and place it in our stats package?

+1

By the way, how is this coming? Are you going the multivariate or singlevariate route -- or has that been decided yet?

> My inclination is to subsume RandomArray, FFTPACK, and LinearAlgebra into scipy so that all we rely on is the core array facilities of Numeric.

+1

thanks,
eric

From fperez at pizero.colorado.edu  Wed Feb 20 01:49:34 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Tue, 19 Feb 2002 23:49:34 -0700 (MST)
Subject: [SciPy-dev] scipy testing results
In-Reply-To: <028a01c1b9c5$094bc120$de5bfea9@ericlaptop>
Message-ID:

On Tue, 19 Feb 2002, eric wrote:

> Hey Fernando,
>
> > Boy, wasn't that fun and simple :)
>
> Yeah, the CVS is not for the faint of heart. Things will simplify some, but, until 0.2 is released, I doubt the build process will be very well documented.
>
> If you'll write up the process you followed, I'll post it to the website as hints on building from the CVS.

Serves me right for opening my loud mouth :)

Ok, I guess I don't have an option now. Here's a sketch. I've attached it in html so hopefully you can just drop it where you want it without much effort. The links are ugly (the http//s didn't get hidden) but that's what latex2html gives me and I'm not going to fight it now.

Cheers,
f.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: scipy_cvs.html.gz
Type: application/octet-stream
Size: 4010 bytes
Desc:
URL:

From fperez at pizero.colorado.edu  Wed Feb 20 01:51:25 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Tue, 19 Feb 2002 23:51:25 -0700 (MST)
Subject: [SciPy-dev] FFTPACK and RandomArray
In-Reply-To: <02b801c1b9c9$c2c63820$de5bfea9@ericlaptop>
Message-ID:

On Tue, 19 Feb 2002, eric wrote:

> Just to fill everyone in, we're gonna be making fftw an "optional" package in SciPy. This is because its license doesn't fit with the rest of the package. FFTPACK provides pretty much a drop-in replacement in functionality, so no capabilities are lost. The only drawback is that fftw is faster. On the upside, it will simplify the build process, and the part of the pain that Fernando suffered today will be alleviated.

It would be nice to keep things so that users can easily use one or the other though. Since fftw is GPL, for many that's ok and they may prefer to use fftw. Don't know how much extra work that would mean though.

Cheers,
f

From oliphant.travis at ieee.org  Wed Feb 20 02:25:28 2002
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 20 Feb 2002 00:25:28 -0700
Subject: [SciPy-dev] FFTPACK and RandomArray
In-Reply-To: <02b801c1b9c9$c2c63820$de5bfea9@ericlaptop>
References: <055101c1b8ba$05c6b160$6b01a8c0@ericlaptop> <02b801c1b9c9$c2c63820$de5bfea9@ericlaptop>
Message-ID:

> By the way, how is this coming? Are you going the multivariate or singlevariate route -- or has that been decided yet?

Right now, I'm going the "get-the-code-that's-there working-and-consistent" route. Yes, it's mostly single-variate. I'm happy to receive input from other people, though. There is quite a bit of overlapping code that I'm sorting through. Any thoughts people have on the stats stuff, it would be nice to hear them over the next week.

Right now, I'm looking to flesh out the distributions support to include pdf's and cdf's and quantiles (and other inverses) of the distributions that are currently in the code (including RandomArray and the already wrapped ranlib).

-Travis O.
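[Editor's note: a minimal sketch of the pdf/cdf/quantile interface Travis describes -- the names are illustrative, not the eventual scipy.stats API. The cdf uses the Abramowitz & Stegun 26.2.17 approximation so the sketch runs on bare Python:]

import math

def norm_pdf(x):
    return math.exp(-x*x/2.0) / math.sqrt(2.0*math.pi)

def norm_cdf(x):
    # Abramowitz & Stegun 26.2.17, absolute error below about 7.5e-8
    t = 1.0 / (1.0 + 0.2316419*abs(x))
    poly = t*(0.319381530 + t*(-0.356563782 + t*(1.781477937 +
           t*(-1.821255978 + t*1.330274429))))
    p = 1.0 - norm_pdf(abs(x))*poly
    if x >= 0:
        return p
    return 1.0 - p

def norm_ppf(q, lo=-10.0, hi=10.0):
    # quantile (inverse cdf) by simple bisection on the cdf
    for i in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# e.g. norm_cdf(0.0) is 0.5 and norm_ppf(0.975) is about 1.96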
From eric at scipy.org  Wed Feb 20 01:14:46 2002
From: eric at scipy.org (eric)
Date: Wed, 20 Feb 2002 01:14:46 -0500
Subject: [SciPy-dev] FFTPACK and RandomArray
References:
Message-ID: <02e501c1b9d5$e5574940$de5bfea9@ericlaptop>

----- Original Message -----
From: "Fernando Pérez"
To:
Sent: Wednesday, February 20, 2002 1:51 AM
Subject: Re: [SciPy-dev] FFTPACK and RandomArray

> On Tue, 19 Feb 2002, eric wrote:
>
> > Just to fill everyone in, we're gonna be making fftw an "optional" package in SciPy. This is because its license doesn't fit with the rest of the package. FFTPACK provides pretty much a drop-in replacement in functionality, so no capabilities are lost. The only drawback is that fftw is faster. On the upside, it will simplify the build process, and the part of the pain that Fernando suffered today will be alleviated.
>
> It would be nice to keep things so that users can easily use one or the other though. Since fftw is GPL, for many that's ok and they may prefer to use fftw. Don't know how much extra work that would mean though.

Well, the wrappers will still be around, and they work quite well in the current state. I don't know if they will still be in the CVS, but they will certainly still be available with instructions on how to build them on the SciPy site. For those interested, it'll hopefully be a drop-in replacement. After all, they work fine now in SciPy, and I don't see the fft interfaces changing much.

From pearu at cens.ioc.ee  Wed Feb 20 03:30:35 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Wed, 20 Feb 2002 10:30:35 +0200 (EET)
Subject: [SciPy-dev] FFTPACK and RandomArray
In-Reply-To: <02e501c1b9d5$e5574940$de5bfea9@ericlaptop>
Message-ID:

On Wed, 20 Feb 2002, eric wrote:

> > On Tue, 19 Feb 2002, eric wrote:
> >
> > > Just to fill everyone in, we're gonna be making fftw an "optional" package in SciPy. This is because its license doesn't fit with the rest of the package. FFTPACK provides pretty much a drop-in replacement in functionality, so no capabilities are lost. The only drawback is that fftw is faster. On the upside, it will simplify the build process, and the part of the pain that Fernando suffered today will be alleviated.
> >
> > It would be nice to keep things so that users can easily use one or the other though. Since fftw is GPL, for many that's ok and they may prefer to use fftw. Don't know how much extra work that would mean though.
>
> Well, the wrappers will still be around, and they work quite well in the current state. I don't know if they will still be in the CVS, but they will certainly still be available with instructions on how to build them on the SciPy site. For those interested, it'll hopefully be a drop-in replacement. After all, they work fine now in SciPy, and I don't see the fft interfaces changing much.

I am definitely going to be one user of fftw (because of its speed), and the pain that Fernando suffered with fftw can certainly be eased now that we know the workarounds.

But why do you want to throw the fftw wrappers out of the scipy CVS? Does it violate any license term to distribute wrappers (to GPL'ed code) without distributing the corresponding libraries? And that with an additional check for whether the GPL library is present: if yes, trigger the wrapper extension build and make scipy use the fastest routines available, though they may be GPL'ed?

Thanks,
	Pearu
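[Editor's note: the kind of check Pearu proposes might look like the following sketch; the probed paths, library names, and package names are hypothetical, not actual scipy_distutils code:]

import os

def have_fftw(dirs=('/usr/lib', '/usr/local/lib')):
    # hypothetical probe: build the GPL'ed wrappers only if the
    # library is actually installed on this machine
    for d in dirs:
        for lib in ('libfftw.a', 'libfftw.so'):
            if os.path.isfile(os.path.join(d, lib)):
                return 1
    return 0

if have_fftw():
    fft_packages = ['fftw']      # fast, GPL'ed wrappers from nondist
else:
    fft_packages = ['fftpack']   # default FFTPACK wrappers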
From eric at scipy.org  Wed Feb 20 10:20:45 2002
From: eric at scipy.org (eric)
Date: Wed, 20 Feb 2002 10:20:45 -0500
Subject: [SciPy-dev] FFTPACK and RandomArray
References:
Message-ID: <033601c1ba22$2b29d220$de5bfea9@ericlaptop>

> On Wed, 20 Feb 2002, eric wrote:
>
> > > On Tue, 19 Feb 2002, eric wrote:
> > >
> > > > Just to fill everyone in, we're gonna be making fftw an "optional" package in SciPy. This is because its license doesn't fit with the rest of the package. FFTPACK provides pretty much a drop-in replacement in functionality, so no capabilities are lost. The only drawback is that fftw is faster. On the upside, it will simplify the build process, and the part of the pain that Fernando suffered today will be alleviated.
> > >
> > > It would be nice to keep things so that users can easily use one or the other though. Since fftw is GPL, for many that's ok and they may prefer to use fftw. Don't know how much extra work that would mean though.
> >
> > Well, the wrappers will still be around, and they work quite well in the current state. I don't know if they will still be in the CVS, but they will certainly still be available with instructions on how to build them on the SciPy site. For those interested, it'll hopefully be a drop-in replacement. After all, they work fine now in SciPy, and I don't see the fft interfaces changing much.
>
> I am definitely going to be one user of fftw (because of its speed), and the pain that Fernando suffered with fftw can certainly be eased now that we know the workarounds.
>
> But why do you want to throw the fftw wrappers out of the scipy CVS? Does it violate any license term to distribute wrappers (to GPL'ed code) without distributing the corresponding libraries?

No, just the binary distributions.
The headache about keeping them in CVS is the maintenance issue. Trying to keep one library tested and working is hard enough; having functional duplicates doubles the problem.

I see your point. Though with fft we now have the opposite of the situation one would like: the fastest implementation goes out of scipy and is replaced by an intermediate one :(

I don't like lawyers. :-(

> Still, I'm not opposed to keeping them in the CVS in something like a nondist directory. This is sorta how Python handles things. That would keep the wrappers around and let them evolve with SciPy, but out of the way of the main development tree in an "unsupported" section.
>
> > And that with an additional check for whether the GPL library is present: if yes, trigger the wrapper extension build and make scipy use the fastest routines available, though they may be GPL'ed?
>
> Sure, this is fine with me.
>
> So how does that plan sound -- move fftw into a nondist directory?

Ok, not too bad. It is not completely clear to me how to get them out of nondist for building and how to use them inside scipy in case they are available, but I think it can be done.

Maybe it is better to get FFTPACK working first, before moving fftw into nondist.

Pearu

From oliphant at ee.byu.edu  Wed Feb 20 10:07:12 2002
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Wed, 20 Feb 2002 10:07:12 -0500 (EST)
Subject: [SciPy-dev] FFTPACK and RandomArray
In-Reply-To: <033601c1ba22$2b29d220$de5bfea9@ericlaptop>
Message-ID:

> Sure, this is fine with me.
>
> So how does that plan sound -- move fftw into a nondist directory?

+1

-Travis

From prabhu at aero.iitm.ernet.in  Wed Feb 20 12:26:13 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Wed, 20 Feb 2002 22:56:13 +0530
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
References:
Message-ID: <15475.56373.451104.288218@monster.linux.in>

>>>>> "FP" == Fernando Pérez writes:

    FP> Hi all, I'm trying to do a
    FP> python setup.py build
    FP> on a freshly updated cvs of scipy, so I can test the current
    FP> state of things. But the linking is failing with:
    FP> gcc -shared build/temp.linux-i686-2.2/fftw_wrap.o
    FP> -L/usr/local/home/fperez/lib -Lbuild/temp.linux-i686-2.2
    FP> -lfftw_threads -lrfftw_threads -lfftw -lrfftw -lpthread
    FP> -lc_misc -lcephes -lgist -o
    FP> build/lib.linux-i686-2.2/scipy/fftw/fftw.so
    FP> /usr//bin/ld: cannot find -lfftw_threads
    FP> collect2: ld returned 1 exit status

I know that this problem has been solved, but FWIW everything seems to work fine under Woody Debian GNU/Linux.

However, at the moment my scipy install seems to have gone crazy. The stats module or the ga module seems badly broken. I can't seem to import scipy anymore. I blindly changed all "import scipy.stats.rv as rv" lines to "import scipy.stats.rv" in the ga directory (in my installed copy) and now scipy seems to load ok.

scipy.test() however does not work anymore.

In [1]: import scipy

In [2]: scipy.test()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

?

? in test(level=1)

? in test_suite(level=1)

TypeError: harvest_test_suites() got an unexpected keyword argument 'level'

I guess it serves me right to keep updating from CVS and trying.
:(

prabhu

From eric at scipy.org  Wed Feb 20 11:25:47 2002
From: eric at scipy.org (eric)
Date: Wed, 20 Feb 2002 11:25:47 -0500
Subject: [SciPy-dev] FFTPACK and RandomArray
References:
Message-ID: <039301c1ba2b$40b8d6a0$de5bfea9@ericlaptop>

> On Wed, 20 Feb 2002, eric wrote:
>
> > > But why do you want to throw the fftw wrappers out of the scipy CVS? Does it violate any license term to distribute wrappers (to GPL'ed code) without distributing the corresponding libraries?
> >
> > No, just the binary distributions. The headache about keeping them in CVS is the maintenance issue. Trying to keep one library tested and working is hard enough; having functional duplicates doubles the problem.
>
> I see your point. Though with fft we now have the opposite of the situation one would like: the fastest implementation goes out of scipy and is replaced by an intermediate one :(

It is a shame, but there are always compromises.

> I don't like lawyers. :-(

That is a rather blanket statement... This isn't a lawyer issue anyway. It's an MIT intellectual property issue. From reading the FAQ, the fftw authors would rather open the source under a different style of license, but this would affect MIT's ability to earn revenue off of it. I wish it were different, but I also completely understand MIT's point of view here -- they have the right to license it however they wish and also to make money with it. Unfortunately, the chosen license is different from SciPy's. As a result, if someone wanted to use SciPy in a commercial product, they would have to pay MIT a license fee for the use of fftw. With a functionally equivalent (though slightly slower) alternative, this is an unnecessary restriction.

> > Still, I'm not opposed to keeping them in the CVS in something like a nondist directory. This is sorta how Python handles things. That would keep the wrappers around and let them evolve with SciPy, but out of the way of the main development tree in an "unsupported" section.
> >
> > > And that with an additional check for whether the GPL library is present: if yes, trigger the wrapper extension build and make scipy use the fastest routines available, though they may be GPL'ed?
> >
> > Sure, this is fine with me.
> >
> > So how does that plan sound -- move fftw into a nondist directory?
>
> Ok, not too bad. It is not completely clear to me how to get them out of nondist for building and how to use them inside scipy in case they are available, but I think it can be done.

I'm sure this can be figured out.

> Maybe it is better to get FFTPACK working first, before moving fftw into nondist.

Nobody has started actively working on this (at least as far as I know), so it is still a ways out. We'll definitely try to make the transition smooth.

eric

From rossini at u.washington.edu  Wed Feb 20 12:39:42 2002
From: rossini at u.washington.edu (Anthony Rossini)
Date: Wed, 20 Feb 2002 09:39:42 -0800 (PST)
Subject: Strengths of R (was Re: [SciPy-dev] IPython updated (Emacs works now))
In-Reply-To: <029801c1b9c7$e72f1ee0$de5bfea9@ericlaptop>
Message-ID:

On Tue, 19 Feb 2002, eric wrote:

> Hey Tony,
>
> > Note that one thing I'm working on doing is extending ESS (an Emacs mode for data analysis, usually R or SAS, but also for other "interactive" data analysis stuff) to accommodate iPython, in the hopes of (very slowly) moving towards using R and SciPy for most of my work.
>
> What are the benefits of R over Python/SciPy?
> Is there a philosophical difference that makes it better suited for statistics (and other things for that matter), or is it simply that it has more stats functionality and is much more mature? If there is a different philosophy behind it, can you summarize it? Maybe we can incorporate some of its strong points into SciPy's stats module. Travis Oliphant is working on it as we speak. We could definitely use some input from you stats guys!

John Barnard, who is on this list, should speak up, being one of the few other people that I know of (Doug Bates and some of his students at U Wisc being another exception) who actually use Python for "work" (database or computation).

R (and the language it implements, S) is a language intended primarily for interactive data analysis. So, a good bit of thought has gone into data structures (lists/dataframes in R parlance, which are a bit like arrays with row/column labels that can be used interchangeably with row/column numbers) and data types such as factors (and factor coding -- some analytic approaches are not robust to the choice of coding style for nominal (categorical, non-ordered) data). It has a means for handling missing data (similar to the MA extension for Numeric), and it also has a strong modeling structure, i.e. fitting linear models (using least squares or weighted least squares) is done in a language which "looks right", i.e.

  lm(y ~ x)

fits a model which looks like y = b x + e, with e following the usual linear models assumptions -- as well as smoothing methods (splines, kernels) done in similar fashion. Models are data objects as well, which means that you can act on them appropriately, comparing 2 fitted models, etc, etc.

R, as opposed to the commercial version S or S-PLUS, also has a flexible packaging scheme for add-on packages (for things like Expression Array analysis, spatial-temporal data analysis, graphics, generalized linear models, marginal models, and it seems like hundreds more). It can also call out to C, Fortran, Java, Python, and Perl (and C++, but that's recent, in the last year or so). Database work is simple as well, though not up to Perl (or Python) interfaces. It also has lexical scoping, and is based originally on scheme (though the syntax is like python). However, it's not a true OO language like python, and some things seem to be hacks. This is mostly an aesthetic problem, not a functional problem. It's worth a look if you do data analysis.

In many ways, the strength is in the ease of programming good graphics, analyses, etc, with output which is easily read and intelligible. It has problems in terms of scope of data and speed. It's not as clean to read as python (i.e. I _LIKE_ meaningful indentation, which makes me weird :-), and isn't as generally flexible (it took me twice as long to write a reader for Flow Cytometry standard data file formats in R than in Python), but annotation of the resulting data is much easier in R than in Python (and default summary statistics, both numerical and graphical, are easier to work with). So, I don't think I'll be giving up R, but I am looking forward to SciPy (esp things like the sparse array work, which is much more difficult to handle in R in a nice format).

One thing that I did write was RPVM, for using PVM (LAN cluster library) with R; and I patched up PyPVM so that the previous author's work actually worked :-); that is how I'm thinking of doing the interfacing.
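[Editor's note: for contrast with lm(y ~ x), the closest Numeric idiom is an explicit design matrix plus a least-squares call -- a sketch added by the editor, not part of Tony's message:]

import Numeric, LinearAlgebra

x = Numeric.array([0., 1., 2., 3.])
y = Numeric.array([1., 3., 5., 7.])
# design matrix with an intercept column, for y = a + b*x + e
A = Numeric.transpose(Numeric.array([0.*x + 1., x]))
coef, resid, rank, sv = LinearAlgebra.linear_least_squares(A, y)
print coef   # approximately [1., 2.], i.e. y = 1 + 2*x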
In general, R is great for pre- and post-processing small and medium sized datasets, as well as for descriptive and inferential statistics, but for custom analyses, one would still go to C, Fortran, or C++ after prototyping (much like Python).

I can try to say more, but it's hard to describe a full language quickly. See http://www.r-project.org/ for more details (and for ESS, http://software.biostat.washington.edu/statsoft/ess/, if anyone is interested).

best,
-tony

From fperez at pizero.colorado.edu  Wed Feb 20 12:41:57 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Wed, 20 Feb 2002 10:41:57 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To: <15475.56373.451104.288218@monster.linux.in>
Message-ID:

On Wed, 20 Feb 2002, Prabhu Ramachandran wrote:

> I know that this problem has been solved, but FWIW everything seems to work fine under Woody Debian GNU/Linux.

If you have extra pointers, I've posted some instructions on this topic at

  http://www.scipy.org/Members/fperez/PerezCVSBuild.htm

Feel free to comment/fix or add information about what may be different for other distros. I think the easier we can make life for those willing to use cvs, the better off we'll be in the long run.

> However, at the moment my scipy install seems to have gone crazy. The stats module or the ga module seems badly broken. I can't seem to import scipy anymore. I blindly changed all "import scipy.stats.rv as rv" lines to "import scipy.stats.rv" in the ga directory (in my installed copy) and now scipy seems to load ok.
>
> scipy.test() however does not work anymore.
>
> In [1]: import scipy
>
> In [2]: scipy.test()
> ---------------------------------------------------------------------------
> TypeError                                 Traceback (most recent call last)
>
> ? in test(level=1)
>
> ? in test_suite(level=1)
>
> TypeError: harvest_test_suites() got an unexpected keyword argument 'level'
>
> I guess it serves me right to keep updating from CVS and trying. :(

Why don't you try a clean rebuild/install?

  cd your_scipy_cvs/
  rm -rf build
  ./setup.py build
  pushd /usr/lib/python/site-packages/
  rm -rf scipy*
  popd
  ./setup.py install

and see what happens? That worked for me as of yesterday's code.

Good luck,
f

From eric at scipy.org  Wed Feb 20 11:40:17 2002
From: eric at scipy.org (eric)
Date: Wed, 20 Feb 2002 11:40:17 -0500
Subject: [SciPy-dev] FFTPACK and RandomArray
References: <055101c1b8ba$05c6b160$6b01a8c0@ericlaptop> <02b801c1b9c9$c2c63820$de5bfea9@ericlaptop>
Message-ID: <03d301c1ba2d$47a2a2a0$de5bfea9@ericlaptop>

> > By the way, how is this coming? Are you going the multivariate or singlevariate route -- or has that been decided yet?
>
> Right now, I'm going the "get-the-code-that's-there working-and-consistent" route. Yes, it's mostly single-variate. I'm happy to receive input from other people, though. There is quite a bit of overlapping code that I'm sorting through. Any thoughts people have on the stats stuff, it would be nice to hear them over the next week.

Sounds good. I wonder if we should post to the R news group or something like that and see if anyone there would bite as far as giving us a statistician's perspective.

> Right now, I'm looking to flesh out the distributions support to include pdf's and cdf's and quantiles (and other inverses) of the distributions that are currently in the code (including RandomArray and the already wrapped ranlib).

cool.

thanks,
eric

From eric at scipy.org  Wed Feb 20 11:56:16 2002
From: eric at scipy.org (eric)
Date: Wed, 20 Feb 2002 11:56:16 -0500
Subject: [SciPy-dev] Stupid question.
References: <15475.56373.451104.288218@monster.linux.in>
Message-ID: <03f501c1ba2f$8339b130$de5bfea9@ericlaptop>

> scipy.test() however does not work anymore.
>
> In [1]: import scipy
>
> In [2]: scipy.test()
> ---------------------------------------------------------------------------
> TypeError                                 Traceback (most recent call last)
>
> ? in test(level=1)
>
> ? in test_suite(level=1)
>
> TypeError: harvest_test_suites() got an unexpected keyword argument 'level'
>
> I guess it serves me right to keep updating from CVS and trying. :(

This was added when the test suites were upgraded. The CVS version has the correct files there. I'm not sure why update didn't synchronize your sandbox correctly.

eric

From prabhu at aero.iitm.ernet.in  Wed Feb 20 13:36:42 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Thu, 21 Feb 2002 00:06:42 +0530
Subject: [SciPy-dev] Stupid question.
In-Reply-To:
References: <15475.56373.451104.288218@monster.linux.in>
Message-ID: <15475.60602.883259.226610@monster.linux.in>

>>>>> "FP" == Fernando Pérez writes:

    FP> this topic at
    FP> http://www.scipy.org/Members/fperez/PerezCVSBuild.htm
    FP> Feel free to comment/fix or add information about what may be
    FP> different for other distros. I think the easier we can make
    FP> life for those willing to use cvs, the better off we'll be in
    FP> the long run.

Will do if and when I get some time. At the least I'll mail the list.

    >> I guess it serves me right to keep updating from CVS and
    >> trying. :(

    FP> Why don't you try a clean rebuild/install?

I'm doing that as I write. I found a problem and will try to commit a fix and will write about it later.

    FP> cd your_scipy_cvs/ rm -rf build ./setup.py build pushd
    FP> /usr/lib/python/site-packages/ rm -rf scipy* popd ./setup.py
    FP> install

Hmm. Actually, isn't it better to do:

  cd
  ./setup.py clean
  ./setup.py build

instead of rm -rf on build? I'm not sure about this.

prabhu

From fperez at pizero.colorado.edu  Wed Feb 20 13:40:33 2002
From: fperez at pizero.colorado.edu (Fernando Pérez)
Date: Wed, 20 Feb 2002 11:40:33 -0700 (MST)
Subject: [SciPy-dev] Stupid question.
In-Reply-To: <15475.60602.883259.226610@monster.linux.in>
Message-ID:

> Hmm. Actually, isn't it better to do:
>
> cd
> ./setup.py clean
> ./setup.py build
>
> instead of rm -rf on build? I'm not sure about this.

Ah! I didn't know about the clean command. My knowledge of distutils is very spotty. Thanks for the tip, I'm sure your way is saner.

Cheers,
f

From prabhu at aero.iitm.ernet.in  Wed Feb 20 13:45:15 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Thu, 21 Feb 2002 00:15:15 +0530
Subject: cleaning a build (was Re: [SciPy-dev] Stupid question.)
In-Reply-To:
References: <15475.60602.883259.226610@monster.linux.in>
Message-ID: <15475.61115.627129.811246@monster.linux.in>

>>>>> "FP" == Fernando Pérez writes:

    >> Hmm. Actually, isn't it better to do:
    >>
    >> cd ./setup.py clean ./setup.py build
    >>
    >> instead of rm -rf on build? I'm not sure about this.

    FP> Ah! I didn't know about the clean command. My knowledge of
    FP> distutils is very spotty. Thanks for the tip, I'm sure your
    FP> way is saner.

Well, actually clean currently does not remove everything inside build/. Some of the pyf files won't go away. Only the temp.* dir is cleaned. The lib.* and generated_pyfs remain. I don't know if this is a bug or a feature.
cheers,
prabhu

From pearu at cens.ioc.ee  Wed Feb 20 13:48:09 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Wed, 20 Feb 2002 20:48:09 +0200 (EET)
Subject: [SciPy-dev] Stupid question.
In-Reply-To: <15475.60602.883259.226610@monster.linux.in>
Message-ID:

> Hmm. Actually, isn't it better to do:
>
> cd
> ./setup.py clean
> ./setup.py build
>
> instead of rm -rf on build? I'm not sure about this.

You may need to do:

  ./setup.py clean -a

which approximately corresponds to a distclean.

Pearu

From prabhu at aero.iitm.ernet.in  Wed Feb 20 13:58:24 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Thu, 21 Feb 2002 00:28:24 +0530
Subject: build problems (Re: [SciPy-dev] Stupid question.)
In-Reply-To: <03f501c1ba2f$8339b130$de5bfea9@ericlaptop>
References: <15475.56373.451104.288218@monster.linux.in> <03f501c1ba2f$8339b130$de5bfea9@ericlaptop>
Message-ID: <15475.61904.80983.506713@monster.linux.in>

>>>>> "eric" == eric writes:

    >> I guess it serves me right to keep updating from CVS and
    >> trying. :(

    eric> This was added when the test suites were upgraded. The CVS
    eric> version has the correct files there. I'm not sure why update
    eric> didn't synchronize your sandbox correctly.

I haven't done a clean build in a while. So I cleaned out the build dir and my older install. I ran into this:

[...]
gcc -shared build/temp.linux-i686-2.1/fortranobject.o
build/temp.linux-i686-2.1/flapackmodule.o
-L/home/peterson/opt/lib/atlas -Lbuild/temp.linux-i686-2.1
-llapack -lcblas -lf77blas -latlas -lg2c -lc_misc -lcephes -lgist
-o build/lib.linux-i686-2.1/scipy/linalg/flapack.so
/usr/bin/ld: cannot find -llapack

Ummm. scipy_distutils/atlas_info.py has this:

  library_path = ['/home/peterson/opt/lib/atlas']
  [...]
  if sys.platform == 'win32':
      if not library_path:
          atlas_library_dirs=['C:\\atlas\\WinNT_PIIISSE1']
      else:
          atlas_library_dirs = library_path
      blas_libraries = ['f77blas', 'cblas', 'atlas', 'g2c']
      lapack_libraries = ['lapack'] + blas_libraries
  else:
      if not library_path:
          atlas_library_dirs = unix_atlas_directory(sys.platform)
      else:
          atlas_library_dirs = library_path

etc. This will clearly fail by default unless it's Pearu's machine. I propose to change this so that it works. I'll commit the changes to CVS.

There is another trivial bug in stats/pstats.py which I fixed.

Even after that, the entire ga module fails to import the stats module properly and gives me lots of problems. I simply hacked those lines out in my install to get things to build.

Rats! Someone modified everything just before I committed. I'll just delete my versions, re-update and reinstall. Sigh.

I think it's a good idea if we keep each other informed of who is currently working on what areas and whether we are to keep off certain directories.

prabhu

From oliphant at ee.byu.edu  Wed Feb 20 12:15:09 2002
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Wed, 20 Feb 2002 12:15:09 -0500 (EST)
Subject: build problems (Re: [SciPy-dev] Stupid question.)
In-Reply-To: <15475.61904.80983.506713@monster.linux.in>
Message-ID:

> This will clearly fail by default unless it's Pearu's machine. I propose to change this so that it works. I'll commit the changes to CVS.
>
> There is another trivial bug in stats/pstats.py which I fixed.
>
> Even after that, the entire ga module fails to import the stats module properly and gives me lots of problems. I simply hacked those lines out in my install to get things to build.
>
> Rats! Someone modified everything just before I committed. I'll just delete my versions, re-update and reinstall. Sigh.
> I think it's a good idea if we keep each other informed of who is currently working on what areas and whether we are to keep off certain directories.

I'm working on the stats module. I did not realize the ga module used rv so heavily. I made the changes as soon as I realized the problem.

I also ran into the Pearu library_path problem, and so I changed it so the thing would build for me.

Sorry about the confusion.

-Travis O.

From pearu at cens.ioc.ee  Wed Feb 20 14:14:06 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Wed, 20 Feb 2002 21:14:06 +0200 (EET)
Subject: build problems (Re: [SciPy-dev] Stupid question.)
In-Reply-To: <15475.61904.80983.506713@monster.linux.in>
Message-ID:

On Thu, 21 Feb 2002, Prabhu Ramachandran wrote:

> Ummm. scipy_distutils/atlas_info.py has this:
>
> library_path = ['/home/peterson/opt/lib/atlas']

Really sorry about that.

> I think it's a good idea if we keep each other informed of who is currently working on what areas and whether we are to keep off certain directories.

Yes, I am currently working with the system_info.py stuff. I have already implemented x11_info, and now I'll do atlas_info, etc. So, don't be very surprised if something runs differently (a soft way to say 'will fail' ;). I will be careful when committing in order not to break the CVS state, though at some point windows users may need to add things into system_info; I'll let you know. After the system_info hooks are finished, I believe that we'll have more stable builds than we do now. (Then I can start working on real things like linalg2, etc.)

Regards,
	Pearu

From prabhu at aero.iitm.ernet.in  Wed Feb 20 14:17:45 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Thu, 21 Feb 2002 00:47:45 +0530
Subject: scipy.test troubles (was Re: [SciPy-dev] Stupid question.)
In-Reply-To: <03f501c1ba2f$8339b130$de5bfea9@ericlaptop>
References: <15475.56373.451104.288218@monster.linux.in> <03f501c1ba2f$8339b130$de5bfea9@ericlaptop>
Message-ID: <15475.63065.45267.437533@monster.linux.in>

>>>>> "eric" == eric writes:

    >> I guess it serves me right to keep updating from CVS and
    >> trying. :(

    eric> This was added when the test suites were upgraded. The CVS
    eric> version has the correct files there. I'm not sure why update
    eric> didn't synchronize your sandbox correctly.

Yes, a clean install works fine. At least it imports fine and some of my code runs fine. However, the tests fail after a point: I get a segmentation fault. Here is the relevant segment:

(gdb) r
Starting program: /usr/local/bin/python
[New Thread 1024 (LWP 8670)]
Python 2.1.1 (#1, Nov 12 2001, 19:01:44)
[GCC 2.95.4 20011006 (Debian prerelease)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import scipy
>>> scipy.test()
[...]
No test suite found for scipy.integrate
......................E.......E......E....................................................................................

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1024 (LWP 8670)] 0x40e03ed6 in array_from_pyobj (type_num=7, dims=0xbfffe93c, rank=2, intent=2, obj=0x84992c0) at /usr/local/lib/python2.1/site-packages/f2py2e/src/fortranobject.c:492 492 if (PyArray_Check(obj)) { /* here we have always intent(in) or (gdb) back #0 0x40e03ed6 in array_from_pyobj (type_num=7, dims=0xbfffe93c, rank=2, intent=2, obj=0x84992c0) at /usr/local/lib/python2.1/site-packages/f2py2e/src/fortranobject.c:492 #1 0x40e0c8f2 in f2py_rout_flapack_dgeev (capi_self=0x8293290, capi_args=0x85c89d4, capi_keywds=0x85da5c4, f2py_func=0x405bf260 ) at build/temp.linux-i686-2.1/flapackmodule.c:5631 #2 0x40e038cd in fortran_call (fp=0x8293290, arg=0x85c89d4, kw=0x85da5c4) at /usr/local/lib/python2.1/site-packages/f2py2e/src/fortranobject.c:243 #3 0x08059f9d in call_object (func=0x8293290, arg=0x85c89d4, kw=0x85da5c4) at Python/ceval.c:2813 [...] Well, maybe I have to update my f2py install as well. I might have a broken version (I updated the cvs copy a few days back). Is this a good time to update my f2py also? thanks, prabhu From prabhu at aero.iitm.ernet.in Wed Feb 20 14:21:59 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 21 Feb 2002 00:51:59 +0530 Subject: build problems (Re: [SciPy-dev] Stupid question.) In-Reply-To: References: <15475.61904.80983.506713@monster.linux.in> Message-ID: <15475.63319.266577.49100@monster.linux.in> >>>>> "TO" == Travis Oliphant writes: >> Rats! Someone modified everything just before I commited. I'll >> just delete my versions re-update and reinstall. Sigh. >> >> I think its a good idea if we keep each other informed of who >> is currently working on what areas and if we are to keep of >> certain directories. TO> I'm working on the stats module. I did not realize the ga TO> module used rv so heavily. I made the changes as soon as I TO> realized the problem. TO> I also ran into the Pearu library_path problem and so I TO> changed it to so the thing would build for me. TO> Sorry about the confusion. Travis and Pearu: No worry. I'm extremely sorry if I was rude. I was trying to get scipy to build cleanly and ran into lots of trouble. Maybe I should have staved off and waited till things stabilized. Anyway, things work well for now, f2py is the only problematic component right now. prabhu From rossini at u.washington.edu Wed Feb 20 14:25:24 2002 From: rossini at u.washington.edu (Anthony Rossini) Date: Wed, 20 Feb 2002 11:25:24 -0800 (PST) Subject: [SciPy-dev] anonymous CVS barfing on CVS server? Message-ID: (yep, I'm behind a firewall today). Can anyone manage to update via anonCVS? I'm having difficulties... Log follows. (I was trying to start fresh, to fix a few things) best, -tony gyrfalcon 245 > socksify cvs update -Pd ? LOG cvs server: Updating . cvs server: Updating blas cvs server: cannot open .cvsignore: Too many open files cvs server: cannot open .cvswrappers: Too many open files cvs server: Updating blas/SRC cvs [server aborted]: cannot open CVS/Repository: Too many open files gyrfalcon 246 > From pearu at cens.ioc.ee Wed Feb 20 14:27:18 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 20 Feb 2002 21:27:18 +0200 (EET) Subject: scipy.test troubles (was Re: [SciPy-dev] Stupid question.) In-Reply-To: <15475.63065.45267.437533@monster.linux.in> Message-ID: On Thu, 21 Feb 2002, Prabhu Ramachandran wrote: > ......................E.......E......E.................................................................................... > Program received signal SIGSEGV, Segmentation fault. 
> > Well, maybe I have to update my f2py install as well. I might have a > broken version (I updated the cvs copy a few days back). > > Is this a good time to update my f2py also? If you have only few days old f2py, it should be good. I suspect that this segfault is because of current linalg but I am not sure. I'll try this test after I rebuild scipy right now. Pearu From eric at scipy.org Wed Feb 20 13:40:12 2002 From: eric at scipy.org (eric) Date: Wed, 20 Feb 2002 13:40:12 -0500 Subject: [SciPy-dev] anonymous CVS barfing on CVS server? References: Message-ID: <042001c1ba3e$0bc1bf30$de5bfea9@ericlaptop> I saw this earlier too. I'll look into this. If anyone has hints, I'm all ears. eric ----- Original Message ----- From: "Anthony Rossini" To: Sent: Wednesday, February 20, 2002 2:25 PM Subject: [SciPy-dev] anonymous CVS barfing on CVS server? > (yep, I'm behind a firewall today). Can anyone manage to update via anonCVS? I'm having difficulties... Log follows. > > (I was trying to start fresh, to fix a few things) > > best, > -tony > > > gyrfalcon 245 > socksify cvs update -Pd > ? LOG > cvs server: Updating . > cvs server: Updating blas > cvs server: cannot open .cvsignore: Too many open files > cvs server: cannot open .cvswrappers: Too many open files > cvs server: Updating blas/SRC > cvs [server aborted]: cannot open CVS/Repository: Too many open files > gyrfalcon 246 > > > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at cens.ioc.ee Wed Feb 20 14:47:05 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 20 Feb 2002 21:47:05 +0200 (EET) Subject: [SciPy-dev] anonymous CVS barfing on CVS server? In-Reply-To: <042001c1ba3e$0bc1bf30$de5bfea9@ericlaptop> Message-ID: On Wed, 20 Feb 2002, eric wrote: > I saw this earlier too. I'll look into this. > > If anyone has hints, I'm all ears. You may need to increase file-max number (the current one 4096 seems to be too small): echo 8192 > /proc/sys/fs/inode-max Or even to 16384 if the same problem occurs again. Pearu From pearu at cens.ioc.ee Wed Feb 20 15:22:04 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 20 Feb 2002 22:22:04 +0200 (EET) Subject: [SciPy-dev] anonymous CVS barfing on CVS server? In-Reply-To: Message-ID: On Wed, 20 Feb 2002, Pearu Peterson wrote: > You may need to increase file-max number (the current one 4096 seems to > be too small): > > echo 8192 > /proc/sys/fs/inode-max > > Or even to 16384 if the same problem occurs again. For more information, see eg http://www.linuxdoc.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/chap6sec72.html Pearu From eric at scipy.org Wed Feb 20 14:20:14 2002 From: eric at scipy.org (eric) Date: Wed, 20 Feb 2002 14:20:14 -0500 Subject: [SciPy-dev] anonymous CVS barfing on CVS server? References: Message-ID: <046001c1ba43$9f82d010$de5bfea9@ericlaptop> Pearu, > You may need to increase file-max number (the current one 4096 seems to > be too small): This doesn't seem to be the problem. Searching the net, I saw this: http://mail.gnu.org/pipermail/info-cvs/2001-November/022199.html which makes me think that it is a problem within the CVS itself. I'm a bit lost here, but am working on it. eric ----- Original Message ----- From: "Pearu Peterson" To: Sent: Wednesday, February 20, 2002 2:47 PM Subject: Re: [SciPy-dev] anonymous CVS barfing on CVS server? > > On Wed, 20 Feb 2002, eric wrote: > > > I saw this earlier too. I'll look into this. 
> > > > If anyone has hints, I'm all ears. > > You may need to increase file-max number (the current one 4096 seems to > be too small): > > echo 8192 > /proc/sys/fs/inode-max > > Or even to 16384 if the same problem occurs again. > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at cens.ioc.ee Wed Feb 20 15:48:32 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 20 Feb 2002 22:48:32 +0200 (EET) Subject: scipy.test troubles (was Re: [SciPy-dev] Stupid question.) In-Reply-To: <15475.63065.45267.437533@monster.linux.in> Message-ID: Prabhu, On Thu, 21 Feb 2002, Prabhu Ramachandran wrote: > Yes, a clean install works fine. Atleast it imports fine and some of > my code runs fine. However the tests fail after a point. I get a > segmentation fault. Here is the relavant segment: > > [New Thread 1024 (LWP 8670)] > Python 2.1.1 (#1, Nov 12 2001, 19:01:44) ^^^^^ It might be that Python 2.1.1 is causing these troubles. There was a nasty bug in Python that shows up when one extension module imports another one. See http://www.python.org/2.1.2/ for details. I am running Python 2.1.2 (#1, Jan 18 2002, 18:05:45) [GCC 2.95.4 (Debian prerelease)] on linux2 and it does not crash (though there are still these 5 failures). Actually I would recommend Python 2.2 that is nice for f2py tests while 2.1 gives sometimes still trouble (the reasons I don't know yet). Pearu From rossini at u.washington.edu Wed Feb 20 15:51:17 2002 From: rossini at u.washington.edu (Anthony Rossini) Date: Wed, 20 Feb 2002 12:51:17 -0800 (PST) Subject: [SciPy-dev] anonymous CVS barfing on CVS server? In-Reply-To: <046001c1ba43$9f82d010$de5bfea9@ericlaptop> Message-ID: Another quick and semi-ugly solution would be to set up quickly anon-rsync for a "checked out/updated" version of the code (and sync over at something like 30 or 60 minute intervals?) best, -tony On Wed, 20 Feb 2002, eric wrote: > Pearu, > > > You may need to increase file-max number (the current one 4096 seems to > > be too small): > > This doesn't seem to be the problem. Searching the net, I saw this: > > http://mail.gnu.org/pipermail/info-cvs/2001-November/022199.html > > which makes me think that it is a problem within the CVS itself. > I'm a bit lost here, but am working on it. > > eric > > ----- Original Message ----- > From: "Pearu Peterson" > To: > Sent: Wednesday, February 20, 2002 2:47 PM > Subject: Re: [SciPy-dev] anonymous CVS barfing on CVS server? > > > > > > On Wed, 20 Feb 2002, eric wrote: > > > > > I saw this earlier too. I'll look into this. > > > > > > If anyone has hints, I'm all ears. > > > > You may need to increase file-max number (the current one 4096 seems to > > be too small): > > > > echo 8192 > /proc/sys/fs/inode-max > > > > Or even to 16384 if the same problem occurs again. > > > > Pearu > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at cens.ioc.ee Wed Feb 20 15:53:22 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 20 Feb 2002 22:53:22 +0200 (EET) Subject: [SciPy-dev] anonymous CVS barfing on CVS server? 
In-Reply-To: <046001c1ba43$9f82d010$de5bfea9@ericlaptop> Message-ID: On Wed, 20 Feb 2002, eric wrote: > > You may need to increase file-max number (the current one 4096 seems to > > be too small): > > This doesn't seem to be the problem. Searching the net, I saw this: Did you try that? It does not hurt if it is larger than the default one, especially for various file servers. > which makes me think that it is a problem within the CVS itself. > I'm a bit lost here, but am working on it. Did you try restarting the cvs server? Or even the computer? Pearu From pearu at cens.ioc.ee Wed Feb 20 15:56:01 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 20 Feb 2002 22:56:01 +0200 (EET) Subject: [SciPy-dev] anonymous CVS barfing on CVS server? In-Reply-To: <046001c1ba43$9f82d010$de5bfea9@ericlaptop> Message-ID: On Wed, 20 Feb 2002, eric wrote: > > echo 8192 > /proc/sys/fs/inode-max ^^^^^ Note that this is a typo. I meant echo 8192 > /proc/sys/fs/file-max (oh boy, I keep sending critical commands with typos..) Pearu From prabhu at aero.iitm.ernet.in Wed Feb 20 15:55:08 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 21 Feb 2002 02:25:08 +0530 Subject: scipy.test troubles (was Re: [SciPy-dev] Stupid question.) In-Reply-To: References: <15475.63065.45267.437533@monster.linux.in> Message-ID: <15476.3372.147144.984920@monster.linux.in> >>>>> "PP" == Pearu Peterson writes: PP> It might be that Python 2.1.1 is causing these troubles. There PP> was a nasty bug in Python that shows up when one extension PP> module imports another one. See PP> http://www.python.org/2.1.2/ PP> for details. I am running PP> Python 2.1.2 (#1, Jan 18 2002, 18:05:45) [GCC 2.95.4 (Debian PP> prerelease)] on linux2 PP> and it does not crash (though there are still these 5 PP> failures). Ok, I actually have lots of cruft on my machine. I just removed Python 1.5.2, 2.0.x. I also have the debian version of 2.1.2 in /usr/bin plus a version of 2.1.1 in /usr/local. However this setup has been working for quite a while. I just tried my example code with 2.1.2 and it still fails. Anyway, I'll try again tomorrow and get back to you. BTW, do you link anything that's used by f2py to the libpython*? regards, prabhu From pearu at cens.ioc.ee Wed Feb 20 15:58:54 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Wed, 20 Feb 2002 22:58:54 +0200 (EET) Subject: scipy.test troubles (was Re: [SciPy-dev] Stupid question.) In-Reply-To: <15476.3372.147144.984920@monster.linux.in> Message-ID: On Thu, 21 Feb 2002, Prabhu Ramachandran wrote: > Ok, I actually have lots of cruft on my machine. I just removed > Python 1.5.2, 2.0.x. I also have the debian version of 2.1.2 in > /usr/bin plus a version of 2.1.1 in /usr/local. > > However this setup has been working for quite a while. I just tried > my example code with 2.1.2 and it still fails. Anyway, I'll try again > tomorrow and get back to you. BTW, do you link anything that's used by > f2py to the libpython*? No. Pearu From eric at scipy.org Wed Feb 20 15:18:51 2002 From: eric at scipy.org (eric) Date: Wed, 20 Feb 2002 15:18:51 -0500 Subject: [SciPy-dev] CVS problems solved, but... References: Message-ID: <049001c1ba4b$d0175c70$de5bfea9@ericlaptop> I made the changes to the file-max setting Pearu recommended and then rebooted. This seems to have solved the problem, but I don't understand why there was a problem in the first place. This machine is not heavily used (only SciPy). I've read a mail or two that said some versions of CVS leak 4 file descriptors per commit. Maybe we have one of these versions?? Anyone know where I should look to see if file descriptors are leaking?
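In the meantime I can at least watch the count by hand -- something like this crude loop, counting the entries under the server process's fd directory on Linux (the pid is hypothetical, and this is only a sketch):

import os, time
pid = 12345    # hypothetical: pid of the cvs server process
for i in range(10):
    # each entry in /proc/<pid>/fd is one open file descriptor
    print time.ctime(), len(os.listdir('/proc/%d/fd' % pid))
    time.sleep(60)

If the count climbs steadily across commits, that would point at the leak.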
Well we're back up and running. Hopefully we can fix the root of the problem though instead of having to reboot periodically... > > Note that this is a typo. I meant > > echo 8192 > /proc/sys/fs/file-max > > (oh boy, I keep sending critical commands with typos..) oh. ok. then maybe it was the problem. Anyway, I followed the directions on the page you indicated, so after the restart, we have 16384. Let me know if you have problems. thanks for everyone's help, eric From rossini at u.washington.edu Wed Feb 20 16:23:44 2002 From: rossini at u.washington.edu (Anthony Rossini) Date: Wed, 20 Feb 2002 13:23:44 -0800 (PST) Subject: [SciPy-dev] CVS problems solved, but... In-Reply-To: <049001c1ba4b$d0175c70$de5bfea9@ericlaptop> Message-ID: verified a successful fresh checkout. Thanks... best, -tony On Wed, 20 Feb 2002, eric wrote: > I made the changes to the file-max setting Pearu recommended and then rebooted. This > seems to have solved the problem, but I don't understand why there was a problem > in the first place. This machine is not heavily used (only SciPy). > > I've read a mail or two that said some versions of CVS leak 4 file descriptors > per commit. Maybe we have one of these versions?? Anyone know where I should > look to see if file descriptors are leaking? > > Well we're back up and running. Hopefully we can fix the root of the problem > though instead of having to reboot periodically... > > > > > Note that this is a typo. I meant > > > > echo 8192 > /proc/sys/fs/file-max > > > > (oh boy, I keep sending critical commands with typos..) > > oh. ok. then maybe it was the problem. Anyway, I followed the directions on > the page you > indicated, so after the restart, we have 16384. > > Let me know if you have problems. > > > thanks for everyone's help, > > eric > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant at ee.byu.edu Wed Feb 20 14:42:12 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 20 Feb 2002 14:42:12 -0500 (EST) Subject: [SciPy-dev] Tests in Scipy Message-ID: The isnan tests failed because fastumath was not being used by default in SciPy. I fixed this now (the problem is that some files whose namespaces were subsumed by Numeric issue a from Numeric import * command). Thus, the original umath functions, which don't support nan and inf processing but raise exceptions instead, were being used. Adding from fastumath import * after the from Numeric import * fixes the problem.
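In other words, the pattern is simply (a sketch of the idea, not the literal diff):

# modules that do 'from Numeric import *' now follow it with fastumath,
# so the nan/inf-aware ufuncs shadow Numeric's exception-raising ones
from Numeric import *
from fastumath import *   # must come second to win the name clash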
The tests work better for me now. -Travis From travis at scipy.org Wed Feb 20 23:01:44 2002 From: travis at scipy.org (Travis N. Vaught) Date: Wed, 20 Feb 2002 22:01:44 -0600 Subject: [SciPy-dev] RE: CVS Build Instructions In-Reply-To: Message-ID: F- I completely agree. I think the 'edit' feature was a casualty of some unrelated addition to the SciPy site dealing with menuing or something. The behaviour should definitely be for an 'Owner' of a document (its creator) to be able to edit--even after publication. This should be fixed in the next generation of the site along with other improvements and fixes. We will hopefully migrate to it later next week. BTW if you have any other suggestions about the site, they are welcome. Here are some thoughts that are being bandied about:
o Roundup bug-tracker instead of the currently ignored sourceforge tracker
o Better facility for community comments on docs and the site in general--perhaps just the mailing lists
o Get rid of duplicate functionality--Wikis, comments (now disabled), and archived mailing lists all serve similar purposes; we should probably use one for discussion (the mailing lists) and one for co-generated docs (probably not wikis, but I'm not sure--could be some CMF document in users' folders)
I've posted this to the scipy-dev list to invite (gingerly worded) comments about how the scipy.org site would ideally work. Thanks, TV > -----Original Message----- > From: Fernando Perez [mailto:fperez at pizero.colorado.edu] > Sent: Wednesday, February 20, 2002 12:46 PM > To: Travis N. Vaught > Subject: Re: CVS Build Instructions > > > Hi Travis, > > a quick question/comment: > > > http://www.scipy.org/site_content/tutorials/user_folder_instructions > > seems to give no way of *editing* an existing document. For > example if I now > want to fix some typos in the CVS instructions doc, if I > understand correctly > the only option is to upload a full new one, delete the old and > go through the > editorial 'quarantine'. It seems a bit cumbersome both for you > guys and for > users, especially when one only wants to do minor editing. > > I wonder if it would be possible to add an 'edit' button that > would open the > document in a simple html text field, nothing fancy. I understand > this creates > a run around your editorial oversight, but I'd be willing to say > that scipy is > a small and focused enough group that abuse shouldn't be an > issue, at least > for quite a while. > > Just some thoughts, please correct me if any of my above > assumptions are wrong > or I missed something in the instructions. > > Best regards, > > Fernando. From fperez at pizero.colorado.edu Thu Feb 21 00:06:38 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Wed, 20 Feb 2002 22:06:38 -0700 (MST) Subject: [SciPy-dev] RE: CVS Build Instructions In-Reply-To: Message-ID: On Wed, 20 Feb 2002, Travis N. Vaught wrote: > This should be fixed in the next generation of the site along with other > improvements and fixes. We will hopefully migrate to it later next week. > BTW if you have any other suggestions about the site, they are welcome. > Here are some thoughts that are being bandied about: Great. I'll then wait for the updated site and make updates there. As far as ideas, you guys seem to be on a good track. I'd simply say, don't try to get too fancy. As long as the site has the basic functionality and is *easy* for you guys to administer, that's fine. One requirement is then that users can do as much as possible without needing administrator intervention, so you guys don't get unnecessarily loaded. The whole business of wikis has never really convinced me, maybe I'm old fashioned. A spam-free mailing list with a well-threaded, searchable archive is one of the most efficient means of asynchronous communication I can think of. One small idea: as people start contributing packages of their own in their personal areas, it would be good to have a central list where all contributed packages were summarized. It would be a matter of having the site update the central list every time a file was uploaded, perhaps having a convention on directories to use for this (so that not *every* file showed up there).
For example, you could set things so that if a user has a directory called 'pub/' in his home area, anything uploaded into it gets automatically scanned and referenced in a global page. Each upload should support a comment field. This would allow, with no manual intervention on your part, the site to become a library of contributed scientific routines. Just an idea. The one area where scipy seems to need help is documentation, though I know Janko is kicking in. That's always an unglamorous part of things, but critical for a successful, useful project of this kind. We might want to kick off a separate thread on that issue, to shake out some ideas and figure out what kind of help/resources Janko needs. Ok, I'm rambling off again... Cheers, f. From fperez at pizero.colorado.edu Thu Feb 21 00:21:18 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Wed, 20 Feb 2002 22:21:18 -0700 (MST) Subject: [SciPy-dev] ANN: IPython 0.2.6 Message-ID: Please forgive me if you get this twice. I posted earlier and somehow the message seems to have gone into a black hole. Hi all, since there seems to be some consensus that using IPython along with SciPy may be a good idea, I've made an 'official' release at the usual place with all recent changes and bugfixes: http://www-hep.colorado.edu/~fperez/ipython/ After discussing things with Eric, we've agreed that right now it's just not sane to merge fully IPython with SciPy, since both code bases are large, complex and still in need of work. IPython is in need of a deep internal rewrite, which I plan to do later this year, hopefully at the end of the fall. The code is however fairly stable, documented and perfectly useable (I think) for end users. I added a 'scipy' profile which does a 'from scipy import *' and has the beginnings of what we discussed earlier about array output formats. To invoke it simply type 'ipython -p scipy' (or define a convenient 'scipy' alias to do it). Doing this makes scipy look like an environment of its own, with all its functions ready at the prompt. Here's a screenshot :) [~]> ipython -p scipy Python 2.2 (#1, Jan 17 2002, 21:03:58) Type "copyright", "credits" or "license" for more information. IPython 0.2.6 -- An enhanced Interactive Python. ? -> Introduction to IPython's features. ?object -> Details about 'object'; object? also works, ?? prints more. help -> Python's own help system. @magic -> Information about IPython's 'magic' @ functions. IPython profile: scipy Welcome to the SciPy Scientific Computing Environment. In [1]: The fancy array printing system Eric wanted is still to be implemented, but all the IPython-specific work is finished. What remains is regular handling of Numeric arrays into strings, if anyone is up to the task I'll include their code in future releases :) All recent bugfixes for Emacs work are in (Travis, please let me know of any problems) plus other minor changes. So IPython will probably live a life of its own for a while, but it would be great if some of you tested it a bit. We might want to make a link in the SciPy pages to it as a 'companion' package in order to get a wider base of testers, but I'll leave that decision up to you folks. Even though the current codebase doesn't really allow for major changes, I've started a document outlining what needs to be done for the rewrite so all comments/ideas/suggestions are welcome. 
This document will be the starting point for the rewrite, so feel free to add anything you think would make your 'ideal' scientific computing environment (best of Mathematica/IDL/Matlab/whatever). We'll then see how far we get :) *Small* features and of course bugfixes will still go in for this series. One final thing some of you may not know about the system but may find quite useful: it was designed to be trivial to embed in another python program. There's an example of how to do it in the docs; basically, you create an instance and then anywhere in your code say IPythonShell() # this is your instance and IPython pops up in your local namespace. This is very useful for debugging sessions when print isn't enough, or for data analysis where you want to stop, look at data, start up again a batch part, stop again, etc. If you're familiar with IDL's stop/.continue pair of commands, that's what this gives you.
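Schematically, the embedding pattern looks like this (a rough sketch -- the import location below is from memory, so check the example in the docs for the exact form; 'crunch' is just an illustrative function):

from IPython import IPythonShell   # import path may differ -- see docs

def crunch(data):
    stage1 = [x*x for x in data]
    IPythonShell()   # an interactive prompt opens right here, with
                     # 'data' and 'stage1' in scope; exit it to resume
    return stage1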
Ok, this is long enough. Enjoy, and thanks for the interest. Best regards, Fernando ... who crosses fingers waiting for the last minute brown paper bag bug to blow up :) From oliphant.travis at ieee.org Thu Feb 21 01:32:58 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 20 Feb 2002 23:32:58 -0700 Subject: [SciPy-dev] xplt setup.py Message-ID: The xplt_setup.py script now fails to install the configuration files [*.gp, *.gs, *.help, *.ps] in the proper place (I'm not sure if they are installed at all). I don't understand the setup.py scripts since they've been changed, so could someone else look at this. It looks like an attempt to get them installed was made, but for some reason it is not working. Thanks, -Travis From pearu at cens.ioc.ee Thu Feb 21 02:04:08 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 21 Feb 2002 09:04:08 +0200 (EET) Subject: [SciPy-dev] xplt setup.py In-Reply-To: Message-ID: On Wed, 20 Feb 2002, Travis Oliphant wrote: > The xplt_setup.py script now fails to install the configuration files [*.gp, > *.gs, *.help, *.ps] in the proper place (I'm not sure if they are installed > at all). Were these files installed before? When I look at my old installation of scipy in the site-packages tree, I see that the configuration files were not installed there either. > I don't understand the setup.py scripts since they've been changed, so could > someone else look at this. It looks like an attempt to get them installed > was made, but for some reason it is not working. I think it is not xplt_setup.py. Something seems to be unfinished in scipy_distutils, I am not sure ... I'll look for it. Pearu From pearu at cens.ioc.ee Thu Feb 21 02:18:13 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 21 Feb 2002 09:18:13 +0200 (EET) Subject: [SciPy-dev] xplt setup.py In-Reply-To: Message-ID: On Wed, 20 Feb 2002, Travis Oliphant wrote: > The xplt_setup.py script now fails to install the configuration files [*.gp, > *.gs, *.help, *.ps] in the proper place (I'm not sure if they are installed > at all). Fixed in CVS. (They were installed into the current directory because of an incorrect path specification.) Pearu From prabhu at aero.iitm.ernet.in Thu Feb 21 05:43:32 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 21 Feb 2002 16:13:32 +0530 Subject: [SciPy-dev] Building scipy on Debian Woody.
Message-ID: <15476.53076.438069.603434@monster.linux.in> hi, Earlier scipy used to modify the Numeric install and required one to link scipy/Numerical to the Numeric sources. However, with Numeric 20.3 this no longer seems necessary? Or is it? Fernando's document also makes no mention of this. What's the deal? I also had difficulty when I made a clean rebuild with the fftw_thread libraries. The shared libs were there but somehow they were not detected. The problem vanished when I installed the fftw-dev package in woody, which installs the static libs and also creates the libfftw_threads.so links correctly. However, please note that site-packages/scipy/fftw/fftw.so is only dynamically linked. So it seems like a packaging error (NOT in scipy but in the debian package) that there are no libfftw.so links made in /usr/lib. As regards the installation procedure mentioned in Fernando's page, I'd recommend doing this: python setup.py install --prefix=/usr/local instead of plain python setup.py install; without the prefix the packages are installed in /usr, which can be painful. FWIW, I'd like to mention that somehow Python 2.1.2 has some trouble with f2py. I still haven't figured this out. Pearu was saying that upgrading to Python 2.2 fixes this. thanks, prabhu From loredo at astrosun.astro.cornell.edu Thu Feb 21 15:10:52 2002 From: loredo at astrosun.astro.cornell.edu (Tom Loredo) Date: Thu, 21 Feb 2002 15:10:52 -0500 (EST) Subject: [SciPy-dev] SciPy 0.1 and Numeric 21 Message-ID: <200202212010.g1LKAq824232@laplace.astro.cornell.edu> Hi folks- Thanks for all the help with the Solaris install; it finally worked! Not content with leaving well enough alone (i.e. with Numeric 20.1.0), I tried installing with Numeric-21.0b1. The build and install goes fine, but "import scipy" fails with: >>> import scipy Traceback (most recent call last): File "", line 1, in ? File "/home/laplace/lib/python2.2/site-packages/scipy/__init__.py", line 41, in ? from handy import * File "/home/laplace/lib/python2.2/site-packages/scipy/handy.py", line 1, in ? import Numeric File "/home/laplace/lib/python2.2/site-packages/Numeric/Numeric.py", line 124, in ? arrayrange = multiarray.arange AttributeError: 'module' object has no attribute 'arange' Does this reflect an incompatibility with more recent versions of Numeric? If so, what is the latest version that is compatible with SciPy 0.1? Thanks, Tom Loredo From fedor at mailandnews.com Thu Feb 21 16:09:39 2002 From: fedor at mailandnews.com (Fedor Baart) Date: Thu, 21 Feb 2002 22:09:39 +0100 Subject: [SciPy-dev] Plot_utility In-Reply-To: <200202200442.g1K4g1j13388@scipy.org> Message-ID: <000901c1bb1c$1334cb70$0200a8c0@amd> I published my SVG module on http://www2.sfk.nl/svg I used some of the scipy plot module to create some examples. The examples are an SVG-scatterplot function and an SVG-histogram. These examples are just examples, I know the code looks terrible. Please let me know what you think of the SVGdraw module, if you like it, see ways for improving it, or if you find any bugs.
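For the curious: stripped of any module conveniences, an SVG scatterplot is just circles inside an svg element. A hand-rolled standalone illustration of the markup involved -- plain SVG, deliberately not using SVGdraw's own API:

# standalone sketch: emit a minimal SVG scatterplot by hand
points = [(10, 20), (35, 60), (80, 40)]   # made-up data
f = open('scatter.svg', 'w')
f.write('<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">\n')
for x, y in points:
    # one circle per data point
    f.write('<circle cx="%d" cy="%d" r="2" fill="black"/>\n' % (x, y))
f.write('</svg>\n')
f.close()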
Thanks, Fedor From pearu at cens.ioc.ee Thu Feb 21 17:49:11 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 22 Feb 2002 00:49:11 +0200 (EET) Subject: [SciPy-dev] Request to test system info hooks Message-ID: Hi, I have finished implementing system info hooks. Before I apply them to the scipy setup_*.py scripts, I would like to ask you to run a simple test, in order to avoid possible scipy build instabilities after the changes. To run this test, please update scipy from CVS, or only the file scipy_distutils/system_info.py. This should have no effect on the local builds, if you worry about that. Then run python scipy_distutils/system_info.py This will print out what system_info.py found or did not find from your system. It will look only for ATLAS, FFTW, and X11 (ignore messages about blas and lapack). Here follows the result that I see: ------------------------------------------------------ atlas_info: Looking in /usr ... Looking in /usr/local ... FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] blas_info: Looking in /usr ... Looking in /usr/local ... Looking in /opt ... NOT AVAILABLE fftw_info: Looking in /usr ... FOUND: include_dirs = ['/usr/include'] define_macros = ('SCIPY_FFTW_H', 1) library_dirs = ['/usr/lib'] libraries = ['fftw', 'rfftw', 'fftw_threads', 'rfftw_threads'] lapack_info: Looking in /usr ... Looking in /usr/local ... Looking in /opt ... NOT AVAILABLE x11_info: Looking in /usr ... FOUND: include_dirs = ['/usr/X11R6/include/X11'] library_dirs = ['/usr/X11R6/lib'] libraries = ['X11'] ----------------------------------------------------- Please check that everything is correct in your output and let me know if system_info did not discover some libraries that it should have, or if the results are unexpected in any other way. Those who prefer shared libraries should edit the system_info.py file and set system_info.static_first = 0 before running this test. Here follows the output if I disable static_first: ----------------------------------------------------- atlas_info: Looking in /usr ... Looking in /usr/local ... FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] fftw_info: Looking in /usr ... FOUND: include_dirs = ['/usr/include'] extra_objects = ['/usr/lib/libfftw.so', '/usr/lib/librfftw.so', '/usr/lib/libfftw_threads.so', '/usr/lib/librfftw_threads.so'] define_macros = ('SCIPY_FFTW_H', 1) x11_info: Looking in /usr ... FOUND: include_dirs = ['/usr/X11R6/include/X11'] extra_objects = ['/usr/X11R6/lib/libX11.so'] --------------------------------------------------------- Win32 users should note that I cannot test system_info.py on their platform; in fact, I am not familiar with the system resource issues on win32. Therefore, I would appreciate it if you could read the various XXX comments about Win32 in system_info.py and fix the code if necessary. Please let me know about any problems that you'll have and we'll try to resolve them before moving the SciPy setup files to use the system_info hooks. Thanks, Pearu From rossini at u.washington.edu Thu Feb 21 17:56:38 2002 From: rossini at u.washington.edu (Anthony Rossini) Date: Thu, 21 Feb 2002 14:56:38 -0800 (PST) Subject: [SciPy-dev] Request to test system info hooks In-Reply-To: Message-ID: Seems to work on my system, Debian-Sid (Intel). This will be rather useful! best, -tony On Fri, 22 Feb 2002, Pearu Peterson wrote: > > Hi, > > I have finished implementing system info hooks. Before I apply > them to the scipy setup_*.py scripts, I would like to ask you to run a > simple test, in order to avoid possible scipy build instabilities after > the changes. > > To run this test, please update scipy from CVS, or only the file > scipy_distutils/system_info.py. This should have no effect on the local > builds, if you worry about that. > Then run > > python scipy_distutils/system_info.py > > This will print out what system_info.py found or did not find from your > system.
It will look only for ATLAS, FFTW, and X11 (ignore messages about > blas and lapack). Here follows the result that I see: > > ------------------------------------------------------ > atlas_info: > Looking in /usr ... > Looking in /usr/local ... > FOUND: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/lib/atlas'] > > blas_info: > Looking in /usr ... > Looking in /usr/local ... > Looking in /opt ... > NOT AVAILABLE > > fftw_info: > Looking in /usr ... > FOUND: > include_dirs = ['/usr/include'] > define_macros = ('SCIPY_FFTW_H', 1) > library_dirs = ['/usr/lib'] > libraries = ['fftw', 'rfftw', 'fftw_threads', 'rfftw_threads'] > > lapack_info: > Looking in /usr ... > Looking in /usr/local ... > Looking in /opt ... > NOT AVAILABLE > > x11_info: > Looking in /usr ... > FOUND: > include_dirs = ['/usr/X11R6/include/X11'] > library_dirs = ['/usr/X11R6/lib'] > libraries = ['X11'] > ----------------------------------------------------- > > Please check that everything is correct in your output and let me know if > system_info did not discover some libriries that it should have. Or the > results are unexpected in any other way. > > Those who prefer shared libraries, should edit system_info.py file and > set > system_info.static_first = 0 > > before running this test. Here follows the output if I disable > static_first: > > ----------------------------------------------------- > atlas_info: > Looking in /usr ... > Looking in /usr/local ... > FOUND: > libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/lib/atlas'] > > fftw_info: > Looking in /usr ... > FOUND: > include_dirs = ['/usr/include'] > extra_objects = ['/usr/lib/libfftw.so', '/usr/lib/librfftw.so', > '/usr/lib/libfftw_threads.so', '/usr/lib/librfftw_threads.so'] > define_macros = ('SCIPY_FFTW_H', 1) > > x11_info: > Looking in /usr ... > FOUND: > include_dirs = ['/usr/X11R6/include/X11'] > extra_objects = ['/usr/X11R6/lib/libX11.so'] > --------------------------------------------------------- > > Win32 users should note that I cannot test system_info.py on their > platform, in fact, I am not familiar with the system resource issues on > win32. Therefore, I appreciate if you could read various XXX comments > about Win32 in system_info.py and fix the code if necessary. > > Please let me know about any problems that you'll have and we'll try to > resolve them before moving SciPy setup files to use system_info hooks. > > Thanks, > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From eric at scipy.org Thu Feb 21 17:31:38 2002 From: eric at scipy.org (eric) Date: Thu, 21 Feb 2002 17:31:38 -0500 Subject: [SciPy-dev] Request to test system info hooks References: Message-ID: <065b01c1bb27$86fb2850$de5bfea9@ericlaptop> Hey Pearu, This looks nice! > Win32 users should note that I cannot test system_info.py on their > platform, in fact, I am not familiar with the system resource issues on > win32. Therefore, I appreciate if you could read various XXX comments > about Win32 in system_info.py and fix the code if necessary. Yes, right now, it can't find anything on my machine. The idea of a prefix doesn't really exist -- unless it is c:\windows\system32. I'll look into this tonight and see what sane approaches we can take to things. > > Please let me know about any problems that you'll have and we'll try to > resolve them before moving SciPy setup files to use system_info hooks. 
I'll get back to you -- hopefully later tonight. eric From fperez at pizero.colorado.edu Thu Feb 21 18:39:25 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Thu, 21 Feb 2002 16:39:25 -0700 (MST) Subject: [SciPy-dev] Request to test system info hooks In-Reply-To: Message-ID: > To run this test, please update scipy from CVS, or only the file > scipy_distutils/system_info.py. This should have no effect on the local > builds, if you worry about that. I'd love to, but the CVS server doesn't like me today: cvs [server aborted]: cannot open __version__.py for copying: No such file or directory Is the server having a bad day? Yesterday scipy.org was inaccessible for a while, and my first posting about IPython went into a black hole. cheers, f From fperez at pizero.colorado.edu Thu Feb 21 19:45:56 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Thu, 21 Feb 2002 17:45:56 -0700 (MST) Subject: [SciPy-dev] IPython update Message-ID: Hi all, small update. If you are using IPython 0.2.6 (the one I released yesterday) with python 2.1, some features won't work. Sorry, it was a nested scoping problem. Either upgrade to 0.2.7 or simply add from __future__ import nested_scopes at the top of the file Magic.py in the IPython/ directory. You can do it in the already installed system, in /usr/lib/python2.1/site-packages/IPython or in the directory where you unpacked: IPython-0.2.6/IPython. In the latter case, rerun the installation so the updated file goes to your main python directory. This problem does NOT affect users of Python2.2 (which has nested scopes by default). Note to Emacs users (Travis): use 'M-x term' instead of 'M-x shell' to open a terminal process in which to run IPython. Emacs 'eterm' buffers are much more functional than Emacs 'shell' buffers (file pagers work, for example). Many thanks to Prabhu for both finding the bug and the emacs tip. Cheers, f. From eric at scipy.org Thu Feb 21 21:00:34 2002 From: eric at scipy.org (eric) Date: Thu, 21 Feb 2002 21:00:34 -0500 Subject: [SciPy-dev] Request to test system info hooks References: Message-ID: <069d01c1bb44$b754e910$de5bfea9@ericlaptop> __version__.py looks like it is no longer a part of the repository. It's been replaced by __cvs_version__.py. I'm not sure why it's giving you troubles. Try removing your copy of __version__.py and see if that works. Yesterday, the T1 that the server sits on was down for about 5 hours. eric ----- Original Message ----- From: "Fernando Pérez" To: Sent: Thursday, February 21, 2002 6:39 PM Subject: Re: [SciPy-dev] Request to test system info hooks > > To run this test, please update scipy from CVS, or only the file > > scipy_distutils/system_info.py. This should have no effect on the local > > builds, if you worry about that. > > I'd love to, but the CVS server doesn't like me today: > > cvs [server aborted]: cannot open __version__.py for copying: No such file or > directory > > Is the server having a bad day? Yesterday scipy.org was inaccessible for a > while, and my first posting about IPython went into a black hole.
> > cheers, > > f > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From pearu at cens.ioc.ee Fri Feb 22 09:52:19 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 22 Feb 2002 16:52:19 +0200 (EET) Subject: [SciPy-dev] Request to test system info hooks In-Reply-To: <065b01c1bb27$86fb2850$de5bfea9@ericlaptop> Message-ID: Eric, On Thu, 21 Feb 2002, eric wrote: > Yes, right now, it can't find anything on my machine. The idea of a prefix > doesn't really exist -- unless it is c:\windows\system32. I'll look into this > tonight and see what sane approaches we can take to things. Could c:\ be considered as a prefix? Because I see that you are using atlas_library_dirs=['C:\\atlas\\WinNT_PIIISSE1'] and if os.path.isdir('c:\\') would succeed, then system_info should find this atlas library. Is it reasonable to search c:\windows\system32 for libraries? Is that the place where win32 users install third-party libraries? Also, I am not familiar with Cygwin and MinGW issues. Can they be considered to have a similar tree structure to unices? I mean, do they have directories like /usr/, /usr/local, etc? Pearu From pearu at cens.ioc.ee Fri Feb 22 02:08:31 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 22 Feb 2002 09:08:31 +0200 (EET) Subject: [SciPy-dev] Request to test system info hooks In-Reply-To: Message-ID: Fernando, On Thu, 21 Feb 2002, Fernando Pérez wrote: > I'd love to, but the CVS server doesn't like me today: > > cvs [server aborted]: cannot open __version__.py for copying: No such file or > directory I usually update from CVS with the following command cvs -z3 update -P -d that often solves these problems. If there are still some conflicts, rm && cvs update are helpful, as Eric already suggested. Pearu From prabhu at aero.iitm.ernet.in Fri Feb 22 10:39:24 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Fri, 22 Feb 2002 21:09:24 +0530 Subject: [SciPy-dev] system_info.py errors. Message-ID: <15478.26156.238206.219497@monster.linux.in> hi, Here is the output from my system_info.py $ python scipy_distutils/system_info.py atlas_info: Looking in /usr ... Looking in /usr/local ... FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] blas_info: Looking in /usr ... Looking in /usr/local ... NOT AVAILABLE fftw_info: Looking in /usr ... Traceback (most recent call last): File "scipy_distutils/system_info.py", line 337, in ? show_all() File "scipy_distutils/system_info.py", line 334, in show_all r = c.get_info() File "scipy_distutils/system_info.py", line 75, in get_info self.calc_info(p) File "scipy_distutils/system_info.py", line 217, in calc_info dict_append(info,define_macros=('SCIPY_SFFTW_H',1)) File "scipy_distutils/system_info.py", line 323, in dict_append d[k].extend(v) AttributeError: 'tuple' object has no attribute 'extend' prabhu From pearu at cens.ioc.ee Fri Feb 22 10:47:52 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 22 Feb 2002 17:47:52 +0200 (EET) Subject: [SciPy-dev] system_info.py errors. In-Reply-To: <15478.26156.238206.219497@monster.linux.in> Message-ID: On Fri, 22 Feb 2002, Prabhu Ramachandran wrote: > AttributeError: 'tuple' object has no attribute 'extend' Yes, I noticed that too. It is fixed, but I have not committed the fix to CVS yet. If it builds (and imports etc) on my computer, then I'll commit my changes.
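For the record, the fix is essentially to make sure the stored value is a list -- the tuple assigned by an earlier dict_append call has no .extend() method. Passing the macro as a list of tuples, which is the form distutils expects for define_macros anyway, does it. Roughly:

# corrected form of the failing call in calc_info (a sketch of the
# idea, not necessarily the committed diff): wrap the macro tuple in a
# list so dict_append stores a list and later .extend() calls succeed
dict_append(info, define_macros=[('SCIPY_SFFTW_H', 1)])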
Thanks, Pearu From pearu at cens.ioc.ee Fri Feb 22 11:55:37 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 22 Feb 2002 18:55:37 +0200 (EET) Subject: [SciPy-dev] system_info is in effect Message-ID: Hi, I have applied system_info hooks to SciPy setup scripts and committed my changes to CVS. Currently it builds, imports, and all (except 1) tests succeed in my computer (gcc-3.03, Python-2.2). I have tried not to apply these changes for Win32 platforms until system_info.py runs under it successfully. However, I may have overlooked something, so let me know if there are some problems. Regards, Pearu From oliphant.travis at ieee.org Fri Feb 22 13:28:51 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 22 Feb 2002 11:28:51 -0700 Subject: [SciPy-dev] SciPy 0.1 and Numeric 21 In-Reply-To: <200202212010.g1LKAq824232@laplace.astro.cornell.edu> References: <200202212010.g1LKAq824232@laplace.astro.cornell.edu> Message-ID: On Thursday 21 February 2002 01:10 pm, you wrote: > Hi folks- > > Thanks for all the help with the Solaris install; it finally > worked! > > Not content with leaving well enough alone (i.e. with > Numeric 20.1.0), I tried installing with Numeric-21.0b1. > > The build and install goes fine, but "import scipy" fails with: > >>> import scipy > > Traceback (most recent call last): > File "", line 1, in ? > File "/home/laplace/lib/python2.2/site-packages/scipy/__init__.py", line > 41, in ? from handy import * > File "/home/laplace/lib/python2.2/site-packages/scipy/handy.py", line 1, > in ? import Numeric > File "/home/laplace/lib/python2.2/site-packages/Numeric/Numeric.py", line > 124, in ? arrayrange = multiarray.arange > AttributeError: 'module' object has no attribute 'arange' This is strange. It appears to be completely a Numeric issue. Perhaps you have an older version of Numeric as a subdirectory of SciPy. I don't understand how SciPy could cause this error. -Travis From fperez at pizero.colorado.edu Fri Feb 22 13:20:05 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Fri, 22 Feb 2002 11:20:05 -0700 (MST) Subject: [SciPy-dev] system_info is in effect In-Reply-To: Message-ID: > I have applied system_info hooks to SciPy setup scripts and committed my > changes to CVS. Currently it builds, imports, and all (except 1) tests > succeed in my computer (gcc-3.03, Python-2.2). Does this mean that weave works with gcc3? Eric and I tested it a few weeks ago and only gcc2.9x managed to work correctly with weave. cheers, f From eric at scipy.org Fri Feb 22 12:39:09 2002 From: eric at scipy.org (eric) Date: Fri, 22 Feb 2002 12:39:09 -0500 Subject: [SciPy-dev] system_info is in effect References: Message-ID: <072101c1bbc7$d57d01b0$de5bfea9@ericlaptop> I'm betting Pearu ran scipy.test(), which only runs quick tests. A more complete test suite can be run using scipy.test(5) and the full test suite can be run by using scipy.test(10). The default runs only the quick tests so that people will be willing to do it. test(5) and test(10) take multiple minutes to complete. So, I'm betting this says nothing about weave since all of its tests that actually call the compiler are in the higher level test suites. eric ----- Original Message ----- From: "Fernando Pérez" To: Sent: Friday, February 22, 2002 1:20 PM Subject: Re: [SciPy-dev] system_info is in effect > > I have applied system_info hooks to SciPy setup scripts and committed my > > changes to CVS. Currently it builds, imports, and all (except 1) tests
Currently it builds, imports, and all (except 1) tests > > succeed in my computer (gcc-3.03, Python-2.2). > > Does this mean that weave works with gcc3? Eric and I tested it a few weeks > ago and only gcc2.9x managed to work correctly with weave. > > cheers, > > f > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From fperez at pizero.colorado.edu Fri Feb 22 14:10:20 2002 From: fperez at pizero.colorado.edu (=?ISO-8859-1?Q?Fernando_P=E9rez?=) Date: Fri, 22 Feb 2002 12:10:20 -0700 (MST) Subject: [SciPy-dev] system_info is in effect In-Reply-To: <072101c1bbc7$d57d01b0$de5bfea9@ericlaptop> Message-ID: On Fri, 22 Feb 2002, eric wrote: > I'm betting Pearu ran scipy.test() which only runs quick tests. A more complete > test suite can be run using scipy.test(5) and the full test suite can be run by > using scipy.test(10). The default runs only the quick tests so that people will > be willing to do it. test(5) and test(10) take multiple minutes to complete. > > So, I'm betting this says nothing about weave since all of its tests that > actually call the compiler are in the higher level test suites. Well, here's a report. I just did a cvs update, a clean rebuild and a level 10 test. On Tuesday the same thing gave 5 fails, today I get 2 fails. Both problems are actually the same line in linalg.py. For reference, this is under Linux/Python2.2. Log: ====================================================================== ERROR: check_basic (test_basic1a.test_roots) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.2/site-packages/scipy/tests/test_basic1a.py", line 19, in check_basic assert_array_almost_equal(roots(a1),[2,2],11) File "/usr/lib/python2.2/site-packages/scipy/basic1a.py", line 52, in roots roots,dummy = eig(A) File "/usr/lib/python2.2/site-packages/scipy/linalg/linear_algebra.py", line 440, in eig results = ev(a, jobvl='N', jobvr=vchar, lwork=results[-2][0]) error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork ====================================================================== ERROR: check_inverse (test_basic1a.test_roots) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.2/site-packages/scipy/tests/test_basic1a.py", line 25, in check_inverse assert_array_almost_equal(sort(roots(poly(a))),sort(a),5) File "/usr/lib/python2.2/site-packages/scipy/basic1a.py", line 52, in roots roots,dummy = eig(A) File "/usr/lib/python2.2/site-packages/scipy/linalg/linear_algebra.py", line 440, in eig results = ev(a, jobvl='N', jobvr=vchar, lwork=results[-2][0]) error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork ---------------------------------------------------------------------- Ran 319 tests in 338.117s FAILED (errors=2) Cheers, f. From pearu at cens.ioc.ee Fri Feb 22 14:17:38 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 22 Feb 2002 21:17:38 +0200 (EET) Subject: [SciPy-dev] system_info is in effect In-Reply-To: <072101c1bbc7$d57d01b0$de5bfea9@ericlaptop> Message-ID: On Fri, 22 Feb 2002, eric wrote: > I'm betting Pearu ran scipy.test() which only runs quick tests. A more complete You are right. 
here is what I get with test(10): In file included from /home/peterson/opt/lib/python2.2/site-packages/scipy/weave/blitz-20001213/blitz/numinquire.h:60, from /home/peterson/opt/lib/python2.2/site-packages/scipy/weave/blitz-20001213/blitz/array/expr.h:63, from /home/peterson/opt/lib/python2.2/site-packages/scipy/weave/blitz-20001213/blitz/array.h:2469, from /home/peterson/.python22_compiled/128742/sc_5599df30197fe981824ad8ec934a784e0.cpp:3: /home/peterson/opt/lib/python2.2/site-packages/scipy/weave/blitz-20001213/blitz/limits-hack.h:30: multiple definition of `enum std::float_round_style' /home/peterson/opt/include/g++-v3/bits/std_limits.h:866: previous definition here /home/peterson/opt/lib/python2.2/site-packages/scipy/weave/blitz-20001213/blitz/limits-hack.h:31: conflicting types for `round_indeterminate' /home/peterson/opt/include/g++-v3/bits/std_limits.h:867: previous Also, when I run python from $HOME, I get Ewarning: specified build_dir '_bad_path_' does not exist or is or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is or is not writable. Trying default locations .warning: specified build_dir '_bad_path_' does not exist or is or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is or is not writable. Trying default locations Pearu From pearu at cens.ioc.ee Fri Feb 22 14:33:39 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 22 Feb 2002 21:33:39 +0200 (EET) Subject: [SciPy-dev] system_info is in effect In-Reply-To: Message-ID: On Fri, 22 Feb 2002, Fernando P?rez wrote: > For reference, this is under Linux/Python2.2. > > Log: > > ====================================================================== > ERROR: check_basic (test_basic1a.test_roots) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.2/site-packages/scipy/tests/test_basic1a.py", line > 19, in check_basic > assert_array_almost_equal(roots(a1),[2,2],11) > File "/usr/lib/python2.2/site-packages/scipy/basic1a.py", line 52, in roots > roots,dummy = eig(A) > File "/usr/lib/python2.2/site-packages/scipy/linalg/linear_algebra.py", line > 440, in eig > results = ev(a, jobvl='N', jobvr=vchar, lwork=results[-2][0]) > error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork > > ====================================================================== > ERROR: check_inverse (test_basic1a.test_roots) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.2/site-packages/scipy/tests/test_basic1a.py", line > 25, in check_inverse > assert_array_almost_equal(sort(roots(poly(a))),sort(a),5) > File "/usr/lib/python2.2/site-packages/scipy/basic1a.py", line 52, in roots > roots,dummy = eig(A) > File "/usr/lib/python2.2/site-packages/scipy/linalg/linear_algebra.py", line > 440, in eig > results = ev(a, jobvl='N', jobvr=vchar, lwork=results[-2][0]) > error: ((lwork==-1) || (lwork >= MAX(1,4*n))) failed for 3rd keyword lwork > > ---------------------------------------------------------------------- This is what I get also with Python 2.1.2 (#1, Jan 18 2002, 18:05:45) [GCC 2.95.4 (Debian prerelease)] on linux2 and Python 2.2 (#1, Jan 8 2002, 01:13:32) [GCC 2.95.4 20011006 (Debian prerelease)] on linux2 but not with Python 2.2 (#7, Jan 28 2002, 13:08:12) [GCC 3.0.3] on linux2 Note the difference in GCC versions. Also gcc-3.0.3 is run on Suse linux. 
Ok, I'll first try to fix one f2py bug and then come back to linalg2. Eric, do you have complaints about system_info on Win32? Pearu

From magnus at thinkware.se Fri Feb 22 15:25:19 2002 From: magnus at thinkware.se (magnus at thinkware.se) Date: Fri, 22 Feb 2002 21:25:19 +0100 (MET) Subject: [SciPy-dev] (no subject) In-Reply-To: <049001c1ba4b$d0175c70$de5bfea9@ericlaptop> References: Message-ID: <200202222025.g1MKPJn12454@texas.it-center.se>

At 15:18 2002-02-20 -0500, you wrote:
>I've read a mail or two that said some versions of CVS leak 4 file descriptors
>per commit. Maybe we have one of these versions?? Anyone know where I should
>look to see if file descriptors are leaking?

man lsof (List open files -- that should give a clue, I think. But it might not be installed.) -- Magnus Lyckå, Thinkware AB, Älvans väg 99, SE-907 50 UMEÅ tel: 070-582 80 65, fax: 070-612 80 65 http://www.thinkware.se/ mailto:magnus at thinkware.se

From eric at scipy.org Fri Feb 22 15:41:27 2002 From: eric at scipy.org (eric) Date: Fri, 22 Feb 2002 15:41:27 -0500 Subject: [SciPy-dev] Plot_utility References: <000901c1bb1c$1334cb70$0200a8c0@amd> Message-ID: <078d01c1bbe1$4cc7c200$de5bfea9@ericlaptop>

Thank you Fedor, I look forward to playing with this. eric

----- Original Message ----- From: "Fedor Baart" To: Sent: Thursday, February 21, 2002 4:09 PM Subject: Re: RE: Re: [SciPy-dev] Plot_utility
> I published my SVG module on http://www2.sfk.nl/svg
> I used some of the scipy plot module to create some examples. The
> examples are a SVG-scatterplot function and a SVG-histogram.
> These examples are just examples; I know the code looks terrible.
> Please let me know what you think of the SVGdraw module, if you like it,
> see ways for improving it or if you find any bugs.
>
> Thanks,
>
> Fedor

From pearu at cens.ioc.ee Sat Feb 23 04:40:54 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 23 Feb 2002 11:40:54 +0200 (EET) Subject: Python 2.1.x bugs Re: [SciPy-dev] Building scipy on Debian Woody. In-Reply-To: <15476.53076.438069.603434@monster.linux.in> Message-ID:

Hi! On Thu, 21 Feb 2002, Prabhu Ramachandran wrote:
> FWIW, I'd like to mention that somehow Python 2.1.2 has some trouble
> with f2py. I still haven't figured this out. Pearu was saying that
> upgrading to Python 2.2 fixes this.

Yes, now I am pretty convinced that Python 2.1.2 still has some bugs in it that make f2py-generated extension modules crash. I expect that one might have this trouble with any extension module that imports other extension modules, when such modules are used within one Python session or script. Ok, why am I so convinced?

1) There is no problem when using Python 2.0 or Python 2.2. I have checked that.
2) This problem occurs only with Python 2.1.1 and Python 2.1.2 (I think Python 2.1 (or 2.1b) still worked correctly, but I cannot verify that). Python 2.1.2 reports that Python 2.1.1 may crash with extension modules that do something similar to what I described above, and it claims that this bug is fixed. But I suspect that there are still related bugs in Python 2.1.2 that mess up namespaces and cause segmentation faults on Debian, and possibly on other unices as well.

I know that Python 2.1.x is very popular and not all third-party packages are made available for Python 2.2. Therefore users may not be interested in upgrading to Python 2.2.
Nevertheless, I can only recommend upgrading to Python 2.2 in order to solve this issue (or there will be a Python 2.1.3 with a fix someday -- maybe not). I have not reported this bug to the Python bug-tracker for the following reasons: 1) Python 2.2 works. 2) My knowledge of Python internals is too small to report anything more useful than "Python 2.1.2 gives segmentation faults". If someone on the list is willing to work on the bug report, I am happy to share the symptoms of this bug. PS: In order to use scipy_distutils from Python 2.0, one must upgrade distutils to version >= 1.0.2. Regards, Pearu

From pearu at cens.ioc.ee Sat Feb 23 05:48:53 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 23 Feb 2002 12:48:53 +0200 (EET) Subject: [SciPy-dev] Python 2.0 support Message-ID:

Question: Do we want to support Python 2.0? Here are the issues that need to be solved for Python 2.0 support:

1) distutils must be >= 1.0.2
2) the inspect module (which is not a part of Python 2.0) is used in the following places: helpmod.py, weave/catalog.py
3) the Python 2.0 installation script does not build zlib automatically. zlib is used in dumb_shelve.py
4) Python 2.0 comes without unittest, so it must be installed manually.
5) Various scipy tests fail with

TypeError: Comparison of multiarray objects other than rank-0 arrays is not implemented.

(using Numeric 20.3).
6) sys._getframe is used in weave/inline_tools.py, so a try:..except:.. construct is needed to get frames.

Pearu

From pearu at cens.ioc.ee Sat Feb 23 07:08:42 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 23 Feb 2002 14:08:42 +0200 (EET) Subject: [SciPy-dev] NAN in special/cephes breaks CVS build Message-ID:

Hi, The latest CVS does not build on my debian box; it fails with the following message:

/home/users/pearu/src_cvs/scipy/special/cephes/iv.c:85: `NAN' undeclared (first use in this function)

NAN is defined in math.h, which includes bits/nan.h. So I tried to fix it by adding #include <math.h> to scipy/special/cephes/mconf.h. But it has no effect :(. I am confused. Does it work for you? Pearu

From pearu at cens.ioc.ee Sat Feb 23 07:52:48 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 23 Feb 2002 14:52:48 +0200 (EET) Subject: [SciPy-dev] NAN in special/cephes breaks CVS build In-Reply-To: Message-ID:

On Sat, 23 Feb 2002, Pearu Peterson wrote:
> /home/users/pearu/src_cvs/scipy/special/cephes/iv.c:85: `NAN' undeclared
> (first use in this function)

Ok, I found that one needs to use -D_ISOC99_SOURCE when compiling. See /usr/include/features.h for comments and /usr/include/bits/nan.h. Here is a fix that should be the first thing in the header of the mconf.h file:

#ifndef NAN
#define _ISOC99_SOURCE
#include <math.h>
#endif

Pearu

From prabhu at aero.iitm.ernet.in Sat Feb 23 11:38:56 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sat, 23 Feb 2002 22:08:56 +0530 Subject: [SciPy-dev] NAN in special/cephes breaks CVS build In-Reply-To: References: Message-ID: <15479.50592.457520.474955@monster.linux.in>

>>>>> "PP" == Pearu Peterson writes:

PP> Ok, I found that one needs to use -D_ISOC99_SOURCE when
PP> compiling. See /usr/include/features.h for comments and
PP> /usr/include/bits/nan.h.

I just updated my CVS copy and ran a python setup.py build and it seems to build fine on woody.
prabhu

From pearu at cens.ioc.ee Sat Feb 23 11:45:52 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 23 Feb 2002 18:45:52 +0200 (EET) Subject: [SciPy-dev] NAN in special/cephes breaks CVS build In-Reply-To: <15479.50592.457520.474955@monster.linux.in> Message-ID:

On Sat, 23 Feb 2002, Prabhu Ramachandran wrote:
> >>>>> "PP" == Pearu Peterson writes:
>
> PP> Ok, I found that one needs to use -D_ISOC99_SOURCE when
> PP> compiling. See /usr/include/features.h for comments and
> PP> /usr/include/bits/nan.h.
>
> I just updated my CVS copy and ran a python setup.py build and it
> seems to build fine on woody.

Yes, because travo already fixed this. However, now I get

>>> import scipy
No module named distrbutions
Warning: FFT package not found. Some names will not be available
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "scipy/__init__.py", line 95, in ?
    modules2all(__all__, _level2, globals())
  File "scipy/__init__.py", line 48, in modules2all
    exec("import %s" % name, gldict)
  File "<string>", line 1, in ?
  File "scipy/signal/__init__.py", line 74, in ?
    scipy.names2all(__all__, _namespaces, globals())
  File "scipy/__init__.py", line 37, in names2all
    exec("import %s" % name, gldict)
  File "<string>", line 1, in ?
  File "scipy/signal/signaltools.py", line 6, in ?
    from scipy import fft, ifft, ifftshift, fft2d, ifft2d
ImportError: cannot import name fft

I do have fftw though. I suspect that the fft stuff is just not finished yet. Pearu

From oliphant.travis at ieee.org Sat Feb 23 12:40:20 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 23 Feb 2002 10:40:20 -0700 Subject: [SciPy-dev] NAN in special/cephes breaks CVS build In-Reply-To: References: Message-ID:

On Saturday 23 February 2002 09:45 am, you wrote:
> On Sat, 23 Feb 2002, Prabhu Ramachandran wrote:
> > PP> Ok, I found that one needs to use -D_ISOC99_SOURCE when
> > PP> compiling.
> >
> > I just updated my CVS copy and ran a python setup.py build and it
> > seems to build fine on woody.

Actually, const.c defines NAN already, so I don't understand why you would get this error. The file iv.c includes an extern double NAN line.

> Yes, because travo already fixed this. However, now I get

Sorry, I added distributions.py to the __init__ file of stats but forgot to add the file to the CVS tree. It's there now (or you can just delete the reference to distributions in stats/__init__.py) --- the problem is not fft. -Travis

From fperez at pizero.colorado.edu Sat Feb 23 13:53:35 2002 From: fperez at pizero.colorado.edu (Fernando Perez) Date: Sat, 23 Feb 2002 11:53:35 -0700 (MST) Subject: [SciPy-dev] Python 2.0 support In-Reply-To: Message-ID:

On Sat, 23 Feb 2002, Pearu Peterson wrote:
> Question:
> Do we want to support Python 2.0?
>
> 5) Various scipy tests fail with
>
> TypeError: Comparison of multiarray objects other than rank-0 arrays is
> not implemented.
>
> (using Numeric 20.3).
> 6) sys._getframe is used in
> weave/inline_tools.py
> and so a try:..except:.. construct is needed to get frames.

The others aren't so bad, but these last 2 are killers (esp. 5). The moment one routine in scipy uses a [...] Message-ID: <089d01c1bc99$d80ca4d0$de5bfea9@ericlaptop> Hey Pearu, We started SciPy development with the idea that it would support Python 2.1 forward. If someone really needs 2.0 support, then we could look at accepting the patches.
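For what it's worth, item 6 at least has a well-known workaround. A minimal sketch of the try:..except:.. construct it mentions (the usual exception-based frame trick; an assumption on my part, not necessarily what weave does):

import sys
try:
    get_frame = sys._getframe        # Python 2.1 and up
except AttributeError:
    def get_frame(depth=0):
        # Python 2.0 fallback: raise and catch an exception, then
        # walk back from the frame recorded in the traceback.
        try:
            raise ZeroDivisionError
        except ZeroDivisionError:
            frame = sys.exc_info()[2].tb_frame.f_back
        for i in range(depth):
            frame = frame.f_back
        return frame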
However, there are so many other things to work on that I don't see it as a priority. With a project that is so young, it seems like "planning the future" is a better use of time. Most of the things on your list are small, but taken together, they'd be a pain to drag along. Also, (5) *is* a major feature that I'm not sure I'm willing to do without while developing SciPy functions. I'm interested in other people's opinions. eric

----- Original Message ----- From: "Pearu Peterson" To: Sent: Saturday, February 23, 2002 5:48 AM Subject: [SciPy-dev] Python 2.0 support
> Question:
> Do we want to support Python 2.0?
> [...]

From eric at scipy.org Sat Feb 23 14:39:10 2002 From: eric at scipy.org (eric) Date: Sat, 23 Feb 2002 14:39:10 -0500 Subject: [SciPy-dev] Request to test system info hooks References: Message-ID: <08f601c1bca1$ce66ba80$de5bfea9@ericlaptop>

> Could
>
> c:\
>
> be considered as a prefix? Because I see that you are using [...]

Making this switch (which you've already put in the recent CVS, I see) solves the problem for fftw.

> atlas_library_dirs=['C:\\atlas\\WinNT_PIIISSE1']

atlas_info still doesn't pick up my atlas library, because it looks like it doesn't search within the WinNT_PIIISSE1 directory -- only the atlas directory is searched. To solve the problem, I've moved all my libraries up into the atlas directory. Trying to come up with a scheme to calculate the WinNT_PIIISSE1 directory name stuff for atlas is a little more effort than necessary right now -- but we will need this info if we're ever to try and automate downloading of the correct atlas libraries from SciPy.org, etc. On Windows, sticking all libraries in c:\atlas\ isn't that big of a deal because Windows machines are almost always used stand-alone. On Unix machines where you're working on a shared file system, it might be more of an issue. Someone might have multiple atlas versions built for different architectures sitting in their home directory. Handling the ATLAS directory naming scheme for different architectures/processors would be handy for this situation. Still, I think it is a second-order issue.

> and if os.path.isdir('c:\\') would succeed, then system_info should find
> this atlas library.
>
> Is it reasonable to search c:\windows\system32 for libraries? Is that the
> place where win32 users install third-party libraries?

This is where most dll's are stored, but it isn't a normal place to search for .lib, so no, I think it probably isn't a good idea. The c:\\ you use is as good a convention as any.

> Also, I am not familiar with Cygwin and Mingw issues. Can they be
> considered to have a tree structure similar to unices? I mean, do they
> have directories like /usr/, /usr/local, etc?
Yes, I think so -- as long as the person using the machine mainly uses Cygwin and stores all their libraries in a standard Cygwin location. I think people building with Cygwin will probably follow this convention. It looks like system_info works fine on Windows after I moved my atlas files. import is currently failing after I build due to unrelated issues. As soon as these are fixed, I'll run the test suite. I'm betting it will work fine. thanks, eric

From oliphant.travis at ieee.org Sun Feb 24 03:17:15 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Feb 2002 01:17:15 -0700 Subject: [SciPy-dev] NAN in special/cephes breaks CVS build In-Reply-To: References: Message-ID:

For some reason, the fastumath module is being linked against the c_misc library, the cephes library, and the gist library. I thought one of the changes Pearu made was to fix this. Is it still not quite fixed, or should I try to change something? Thanks, -Travis

From oliphant.travis at ieee.org Sun Feb 24 03:24:10 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Feb 2002 01:24:10 -0700 Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) Message-ID:

I finally realized that with a simple change we can use the unary operator on floats and complex numbers to mean complex conjugation. I've made the simple change in the CVS version of fastumath. So, in scipy complex-conjugation is as simple as

~a

if a is a complex number (or a float). The only problem is that if a is an integer it still means bitwise inversion. Is the added convenience worth the possible confusion? The problem is that complex conjugation happens all the time, but bitwise inversion rarely. -Travis

From eric at scipy.org Sun Feb 24 02:31:08 2002 From: eric at scipy.org (eric) Date: Sun, 24 Feb 2002 02:31:08 -0500 Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) References: Message-ID: <095d01c1bd05$3a3b8570$de5bfea9@ericlaptop>

> I finally realized that with a simple change we can use the unary operator on
> floats and complex numbers to mean complex conjugation.
>
> I've made the simple change in the CVS version of fastumath.
>
> So, in scipy complex-conjugation is as simple as
>
> ~a
>
> if a is a complex number (or a float).
>
> The only problem is that if a is an integer it still means bitwise inversion.
>
> Is the added convenience worth the possible confusion? The problem is that
> complex conjugation happens all the time, but bitwise inversion rarely.

-1

I like the idea of having a conjugate operator, but this introduces a dangerous ambiguity. There are many times where arrays are passed around without regard for their numeric typecode. If an integer array is passed into some function that does a conjugate, a bit inversion occurs instead and silently produces invalid results. Are there any other symbols available? eric

From pearu at cens.ioc.ee Sun Feb 24 03:37:49 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 24 Feb 2002 10:37:49 +0200 (EET) Subject: [SciPy-dev] NAN in special/cephes breaks CVS build In-Reply-To: Message-ID:

On Sun, 24 Feb 2002, Travis Oliphant wrote:
> For some reason, the fastumath module is being linked against the c_misc
> library, the cephes library, and the gist library. I thought one of the
> changes Pearu made was to fix this.
>
> Is it still not quite fixed, or should I try to change something?

We fixed it so that the Fortran libraries do not get linked against all modules.
Fixing this also for C libraries requires some deviation from standard distutils, though I think it is worth it. This was left unimplemented in the hope that a better idea would come up. But if linking alien libraries is causing trouble, then we should fix this. Tell me if it is needed and I'll make it a first priority. Pearu

From prabhu at aero.iitm.ernet.in Sun Feb 24 03:40:21 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sun, 24 Feb 2002 14:10:21 +0530 Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) In-Reply-To: <095d01c1bd05$3a3b8570$de5bfea9@ericlaptop> References: <095d01c1bd05$3a3b8570$de5bfea9@ericlaptop> Message-ID: <15480.42741.853549.114037@monster.linux.in>

>>>>> "eric" == eric writes:

eric> I like the idea of having a conjugate operator, but this
eric> introduces a dangerous ambiguity. There are many times

Absolutely, -1 for me too. prabhu

From fperez at pizero.colorado.edu Sun Feb 24 03:43:31 2002 From: fperez at pizero.colorado.edu (Fernando Perez) Date: Sun, 24 Feb 2002 01:43:31 -0700 (MST) Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) Message-ID:

>>>>> "eric" == eric writes:

eric> I like the idea of having a conjugate operator, but this
eric> introduces a dangerous ambiguity. There are many times

-1. It has the potential for nightmare bugs. I'd rather live with an explicit function call. cheers, f

From pearu at cens.ioc.ee Sun Feb 24 03:50:07 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 24 Feb 2002 10:50:07 +0200 (EET) Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) In-Reply-To: <095d01c1bd05$3a3b8570$de5bfea9@ericlaptop> Message-ID:

On Sun, 24 Feb 2002, eric wrote:
> > I finally realized that with a simple change we can use the unary operator on
> > floats and complex numbers to mean complex conjugation.

Is it as easy for integers?

> > Is the added convenience worth the possible confusion? The problem is that
> > complex conjugation happens all the time, but bitwise inversion rarely.
> -1

Yes, I think so too. But see below.

> I like the idea of having a conjugate operator, but this introduces a dangerous
> ambiguity. There are many times where arrays are passed around without regard
> for their numeric typecode. If an integer array is passed into some function
> that does a conjugate, a bit inversion occurs instead and silently produces
> invalid results. Are there any other symbols available?

No, I think there are not. This is one shortcoming of the Python language: one cannot define new operators. However, note that all scipy functions should apply asarray() to their arguments, and if we assume that one never needs bitwise operations within scipy, then a scipy-specific asarray() function could set a flag on the array saying that the ~ operator means complex conjugate also for integer arrays; otherwise ~(integer array) is array(~integers). I am not sure if Numeric would then need a patch for that. Just an idea... as I see, there is already -4 for Travis's patch. Regards, Pearu

From pearu at cens.ioc.ee Sun Feb 24 05:40:52 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 24 Feb 2002 12:40:52 +0200 (EET) Subject: Python 2.1.x bugs Re: [SciPy-dev] Building scipy on Debian Woody. In-Reply-To: Message-ID:

Hi! Thanks to Prabhu's persuasion, I submitted a bug report to the Python bug tracker.
It turned out to be productive in the sense that I got confirmation that this is a bug in Python-2.1.2-2 of Debian Woody, and not a bug in f2py or the standard distribution of Python 2.1.x (I also checked this). For details, see https://sourceforge.net/tracker/index.php?func=detail&aid=521854&group_id=5470&atid=105470 and http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=135461 if you wish to stay updated. Regards, Pearu

From prabhu at aero.iitm.ernet.in Sun Feb 24 06:55:37 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sun, 24 Feb 2002 17:25:37 +0530 Subject: Python 2.1.x bugs Re: [SciPy-dev] Building scipy on Debian Woody. In-Reply-To: References: Message-ID: <15480.54457.698807.687955@monster.linux.in>

>>>>> "PP" == Pearu Peterson writes:

PP> https://sourceforge.net/tracker/index.php?func=detail&aid=521854&group_id=5470&atid=105470
PP> and
PP> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=135461

Thanks a ton for taking the pains to do this!! regards, prabhu

From pearu at cens.ioc.ee Sun Feb 24 09:29:21 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 24 Feb 2002 16:29:21 +0200 (EET) Subject: [SciPy-dev] The next generation of the scipy site In-Reply-To: Message-ID:

Hi Travis, On Wed, 20 Feb 2002, Travis N. Vaught wrote:
> This should be fixed in the next generation of the site along with other
> improvements and fixes. We will hopefully migrate to it later next week.
> BTW if you have any other suggestions about the site, they are welcome.
> Here are some thoughts that are being bandied about:
>
> o Roundup bug-tracker instead of the currently ignored sourceforge tracker
>
> o Better facility for community comments to docs and the site in
> general--perhaps just the mailing lists
>
> o get rid of duplicate functionality (Wikis are similar to comments (now
> disabled) are similar to archived mailing lists--we should probably use one
> for discussion (mailing lists) one for co-generated docs (Probably not wikis
> but I'm not sure--could be some CMF document in users' folders)

Here follows a small wish-list from me:

- In order to stay updated on what is going on in CVS, a scipy-cvs at scipy.org list would be useful. The purpose of this list is to automatically record messages from CVS commits. Being subscribed to this list, one can quickly decide whether one wants to update from CVS or not. For example, when working on some part of scipy, I would like to be aware of new commits to this part, while it may not be very crucial to be updated on other subprojects (which occasionally may introduce building problems and disturb work on the particular part). I hope you get the point. Btw, it would be useful to set the Reply-To field of scipy-cvs to scipy-dev at scipy.org.

- I understand that the sourceforge tracker is ignored because of the current too-rapid development, where bugs come in and go out before one is fast enough to submit a bug report to the tracker. However, some bugs last longer than others, and it would be really useful if they were recorded in a better place than this mailing list (it may be very tedious later to find bug reports that got no replies or fixes). So, I would like it to be easier for users and developers to send bug reports in such a way that the right person(s) get notified of the bug. I don't know what would be the best way to accomplish this; just a few ideas: a) to have a scipy-bug at scipy.org mailing list -- it does not solve all the mentioned problems, but it would keep development and bug messages separate.
b) to urge users and developers to use the sourceforge tracker more often -- personally, I find it less convenient than the mailing list because there are all sorts of formalities, such as having to be logged in, etc. However, if I used it more frequently than I do now, I might start liking it... I have little idea what the differences between the Roundup bug-tracker and the sourceforge tracker are, except that the former is Python-based ;)

What do you think?

PS: I have never gotten the wiki idea either, maybe because I have never felt that it would be somehow useful to learn. Maybe I am plain wrong here and have no idea what I have missed. I am just not a mouse person ;) Pearu

From prabhu at aero.iitm.ernet.in Sun Feb 24 12:11:30 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Sun, 24 Feb 2002 22:41:30 +0530 Subject: [SciPy-dev] The next generation of the scipy site In-Reply-To: References: Message-ID: <15481.7874.298982.120456@monster.linux.in>

>>>>> "PP" == Pearu Peterson writes:

PP> way to accomplish this; just a few ideas: a) to have a
PP> scipy-bug at scipy.org mailing list -- it does not solve all the

Yes, scipy-bug sounds like a good idea.

PP> mentioned problems, but it would keep development and bug
PP> messages separate. b) to urge users and developers to use
PP> the sourceforge tracker more often -- personally, I find it
PP> less convenient than the mailing list because there are
PP> all sorts of formalities, such as having to be logged in,
PP> etc. However, if I used it more frequently than I do now, I
PP> might start liking it... I have little idea what the
PP> differences between the Roundup bug-tracker and the sourceforge
PP> tracker are, except that the former is Python-based ;)

Well, I also prefer email to web-based trackers/bug reports etc. Email is faster and easier. prabhu

From eric at scipy.org Sun Feb 24 13:50:19 2002 From: eric at scipy.org (eric) Date: Sun, 24 Feb 2002 13:50:19 -0500 Subject: [SciPy-dev] The next generation of the scipy site References: Message-ID: <099401c1bd64$1cf82bd0$de5bfea9@ericlaptop>

----- Original Message ----- From: "Pearu Peterson" To: Sent: Sunday, February 24, 2002 9:29 AM Subject: [SciPy-dev] The next generation of the scipy site

> Hi Travis,
>
> On Wed, 20 Feb 2002, Travis N. Vaught wrote:
> [...]
>
> Here follows a small wish-list from me:
>
> - In order to stay updated on what is going on in CVS, a
> scipy-cvs at scipy.org
> list would be useful. The purpose of this list is to automatically record
> messages from CVS commits. Being subscribed to this list, one can quickly
> decide whether one wants to update from CVS or not. For example, when working
> on some part of scipy, I would like to be aware of new commits to this part,
> while it may not be very crucial to be updated on other
> subprojects (which occasionally may introduce building problems and disturb
> work on the particular part). I hope you get the point.
> Btw, it would be useful to set the Reply-To field of scipy-cvs to
> scipy-dev at scipy.org.

This is a good idea. What do we need? I'm guessing a script like the cvs_version update script you built for the CVS would work. Is this correct?

> - I understand that the sourceforge tracker is ignored because of
> the current too-rapid development, where bugs come in and go out before one
> is fast enough to submit a bug report to the tracker. However, some bugs
> last longer than others, and it would be really useful if they
> were recorded in a better place than this mailing list (it may be
> very tedious later to find bug reports that got no replies or fixes).
> So, I would like it to be easier for users and developers to send bug reports
> in such a way that the right person(s) get
> notified of the bug. I don't know what would be the best way to accomplish
> this; just a few ideas:
> a) to have a scipy-bug at scipy.org mailing list -- it does not solve all
> the mentioned problems, but it would keep development and bug messages
> separate.
> b) to urge users and developers to use the sourceforge
> tracker more often -- personally, I find it less convenient than the mailing list
> because there are all sorts of formalities, such as having to be logged in,
> etc. However, if I used it more frequently than I do now, I might start
> liking it... I have little idea what the differences between the Roundup
> bug-tracker and the sourceforge tracker are, except that the former is
> Python-based ;)

My understanding is that roundup uses email very heavily. I think you can submit bugs and get updates all through email without having to look at the website. The website offers an alternative way of entering bugs for those more comfortable with the web, and it also provides a useful reporting interface for sorting, archiving, etc. I've never used it either, but let's give it a try. If after a few months it doesn't seem to be working, we can try an email list or bugzilla or something else. My sense, though, is that a bug list works well for small projects (which, in terms of developers, still describes SciPy) but doesn't scale extremely well (and I'm hoping scaling becomes important :-). On the other end, the sourceforge tracker is a little bureaucratic. Maybe Roundup will fit in the middle nicely.

> What do you think?
>
> PS: I have never gotten the wiki idea either, maybe because I have never
> felt that it would be somehow useful to learn. Maybe I am plain wrong
> here and have no idea what I have missed. I am just not a mouse person ;)

The Zope team likes it, so it definitely fits some people's style. I think Travis Vaught likes it pretty well also. But it hasn't really caught on at Enthought -- we still zip emails around most of the time.

One other thing I've been thinking about is setting up a SciPy chat room. Prabhu and I have occasionally used one at www.debian.org (I think). There have been multiple times when I wished that I could talk with several people at a time instantly instead of waiting for emails to bounce around. The main thing I don't like about this idea is that it isn't archived, and I think this is very important. Has anyone ever written a script to watch an IRC channel and record it? We could keep these archives like the email archives on SciPy. I'm not very familiar with the chat stuff.
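A minimal sketch of such a logging bot, assuming a plain socket-level IRC connection (server, port, nick, channel, and log file name are all placeholders):

import socket, time

HOST, PORT = 'irc.example.net', 6667
NICK, CHANNEL = 'scipy-log', '#scipy'

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('NICK %s\r\n' % NICK)
s.send('USER %s 0 * :SciPy channel logger\r\n' % NICK)
s.send('JOIN %s\r\n' % CHANNEL)

log = open('scipy-irc.log', 'a')
buf = ''
while 1:
    buf = buf + s.recv(4096)
    lines = buf.split('\r\n')
    buf = lines.pop()                     # keep a partial line for the next read
    for line in lines:
        if line[:4] == 'PING':            # answer server pings to stay connected
            s.send('PONG %s\r\n' % line.split(' ', 1)[1])
        elif line.find('PRIVMSG') != -1:  # archive channel traffic with a timestamp
            log.write('%s %s\n' % (time.ctime(time.time()), line))
            log.flush()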
Is there a better alternative than IRC for group chatting? Does this sound useful to others, or should we stick with the email-list-only approach? eric

From loredo at astrosun.astro.cornell.edu Sun Feb 24 18:20:51 2002 From: loredo at astrosun.astro.cornell.edu (Tom Loredo) Date: Sun, 24 Feb 2002 18:20:51 -0500 (EST) Subject: [SciPy-dev] Re: Tests in Scipy Message-ID: <200202242320.g1ONKpo26960@laplace.astro.cornell.edu>

Travis wrote:
> Adding from fastumath import *
>
> after the from Numeric import *
>
> fixes the problem.

Where did you do this fix, Travis? scipy.test() with my build of 0.1 on Solaris has 14 failures, all related to infinitude/NaN checking. Thanks, Tom

From loredo at astrosun.astro.cornell.edu Sun Feb 24 18:27:16 2002 From: loredo at astrosun.astro.cornell.edu (Tom Loredo) Date: Sun, 24 Feb 2002 18:27:16 -0500 (EST) Subject: [SciPy-dev] Re: SciPy 0.1 and Numeric 21 Message-ID: <200202242327.g1ONRGI26963@laplace.astro.cornell.edu>

There appears to be an incompatibility between the scipy 0.1 installation script and recent Numeric install scripts. I wrote:

> The build and install goes fine, but "import scipy" fails with:
> >>> import scipy
>
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "/home/laplace/lib/python2.2/site-packages/scipy/__init__.py", line 41, in ?
>     from handy import *
>   File "/home/laplace/lib/python2.2/site-packages/scipy/handy.py", line 1, in ?
>     import Numeric
>   File "/home/laplace/lib/python2.2/site-packages/Numeric/Numeric.py", line 124, in ?
>     arrayrange = multiarray.arange
> AttributeError: 'module' object has no attribute 'arange'

Travis responded:
> This is strange. It appears to be completely a Numeric issue. Perhaps you
> have an older version of Numeric as a subdirectory of SciPy.
>
> I don't understand how SciPy could cause this error.

Thanks for this insight. Indeed, "import Numeric" turns up the problem. What I did was build SciPy with the latest Numeric dist'n (21.0b1) in place of the "Numerical" directory. But now what I've done is moved the resulting "Numeric" package directory elsewhere, and installed Numeric-21.0b1 "manually" by doing the normal "python setup.py install" in the Numeric-21.0b1 directory. This installs Numeric in a way that works fine. This installation has several subdirectories that somehow did not get created when I let the scipy installation build Numeric. So there must be something in the scipy install script that leaves out some important stuff from recent versions of Numeric. BTW, now that I have a working Numeric 21 installation, I just copied the fastumath module that scipy had created when it installed (incompletely) Numeric 21 to the working installation. Is there anything else scipy installs under the Numeric package that I have to worry about? All scipy tests now pass except some finitude/infinitude/NaN tests. Thanks, Tom Loredo

From oliphant.travis at ieee.org Sun Feb 24 22:07:06 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Feb 2002 20:07:06 -0700 Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) In-Reply-To: <095d01c1bd05$3a3b8570$de5bfea9@ericlaptop> References: <095d01c1bd05$3a3b8570$de5bfea9@ericlaptop> Message-ID:

On Sunday 24 February 2002 12:31 am, you wrote:
> > I finally realized that with a simple change we can use the unary
> > operator on floats and complex numbers to mean complex conjugation.
> >
> > I've made the simple change in the CVS version of fastumath.
> > So, in scipy complex-conjugation is as simple as
> >
> > ~a
> >
> > if a is a complex number (or a float).
> >
> > The only problem is that if a is an integer it still means bitwise
> > inversion.
> >
> > Is the added convenience worth the possible confusion? The problem is
> > that complex conjugation happens all the time, but bitwise inversion
> > rarely.
>
> -1
>
> I like the idea of having a conjugate operator, but this introduces a
> dangerous ambiguity. There are many times where arrays are passed around
> without regard for their numeric typecode. If an integer array is passed
> into some function that does a conjugate, a bit inversion occurs instead
> and silently produces invalid results. Are there any other symbols
> available?

No, the other thing we could do in fastumath (remember this isn't in the default umath in Numeric) is to eliminate ~ as a bit inversion on Numeric arrays. That would eliminate the ambiguity. -Travis

From oliphant.travis at ieee.org Sun Feb 24 22:07:50 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Feb 2002 20:07:50 -0700 Subject: [SciPy-dev] NAN in special/cephes breaks CVS build In-Reply-To: References: Message-ID:

On Sunday 24 February 2002 01:37 am, you wrote:
> On Sun, 24 Feb 2002, Travis Oliphant wrote:
> > For some reason, the fastumath module is being linked against the c_misc
> > library, the cephes library, and the gist library. I thought one of the
> > changes Pearu made was to fix this.
> >
> > Is it still not quite fixed, or should I try to change something?
>
> We fixed it so that the Fortran libraries do not get linked against all
> modules. Fixing this also for C libraries requires some deviation from
> standard distutils, though I think it is worth it. This was left
> unimplemented in the hope that a better idea would come up. But if linking
> alien libraries is causing trouble, then we should fix this. Tell me if it
> is needed and I'll make it a first priority.

I worked around it for now. It's not causing difficulties anymore. -Travis

From oliphant.travis at ieee.org Sun Feb 24 22:17:28 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Feb 2002 20:17:28 -0700 Subject: [SciPy-dev] Re: Tests in Scipy In-Reply-To: <200202242320.g1ONKpo26960@laplace.astro.cornell.edu> References: <200202242320.g1ONKpo26960@laplace.astro.cornell.edu> Message-ID:

On Sunday 24 February 2002 04:20 pm, you wrote:
> Travis wrote:
> > Adding from fastumath import *
> >
> > after the from Numeric import *
> >
> > fixes the problem.
>
> Where did you do this fix, Travis? scipy.test() with my build
> of 0.1 on Solaris has 14 failures, all related to infinitude/NaN
> checking.

Please let us know what these failures are. I doubt we have NaNs working right for Solaris platforms, but I bet we could make it work. -Travis

From oliphant.travis at ieee.org Sun Feb 24 22:22:55 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 24 Feb 2002 20:22:55 -0700 Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) In-Reply-To: References: Message-ID:

On Sunday 24 February 2002 01:24 am, you wrote:
> I finally realized that with a simple change we can use the unary operator
> on floats and complex numbers to mean complex conjugation.
>
> I've made the simple change in the CVS version of fastumath.
>
> So, in scipy complex-conjugation is as simple as
>
> ~a
>
> if a is a complex number (or a float).
>
> The only problem is that if a is an integer it still means bitwise
> inversion.
> Is the added convenience worth the possible confusion? The problem is that
> complex conjugation happens all the time, but bitwise inversion rarely.

So, what if we made ~ consistently mean complex conjugation and eliminated the confusion? invert would still be available as a function call. We could also make changes to any of the other symbols as well while we are at it, if the demand is there. We can do this without making any changes to Numeric. Let me know. If no one likes the idea but likes writing conjugate(myarray) everywhere, then I'll back out the changes. -Travis

From prabhu at aero.iitm.ernet.in Mon Feb 25 00:52:23 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Mon, 25 Feb 2002 11:22:23 +0530 Subject: [SciPy-dev] The next generation of the scipy site In-Reply-To: <099401c1bd64$1cf82bd0$de5bfea9@ericlaptop> References: <099401c1bd64$1cf82bd0$de5bfea9@ericlaptop> Message-ID: <15481.53527.556623.871013@monster.linux.in>

>>>>> "eric" == eric writes:

eric> One other thing I've been thinking about is setting up a
eric> SciPy chat room. Prabhu and I have occasionally used one at
eric> www.debian.org (I think). There have been multiple times
eric> when I wished that I could talk with several people at a
eric> time instantly instead of waiting for emails to bounce
eric> around. The main thing I don't like about this idea is that
eric> it isn't archived, and I think this is very important. Has
eric> anyone ever written a script to watch an IRC channel and
eric> record it? We could keep these archives like the email
eric> archives on SciPy.

Yes, this is a very good idea. I don't know enough about chat to say if archiving is possible at the server. But I do know that a chat client can easily archive a chat session (although I haven't done that in a while). I guess it should be relatively easy to do. At the least we could have one bot or user always in the chat room that simply logs all messages. prabhu

From pearu at cens.ioc.ee Mon Feb 25 03:18:24 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Feb 2002 10:18:24 +0200 (EET) Subject: [SciPy-dev] special/cephes: undefined symbol: psi_ Message-ID:

Hi, When I try the latest CVS build, I get

ImportError: scipy/special/cephes.so: undefined symbol: psi_

It seems that cdflib misses a file psi.f, since cdflib/apser.f uses EXTERNAL psi and psi_ is defined nowhere. Do we need a C wrapper containing

double psi_(double a) { return psi(a); }

? Pearu

From pearu at cens.ioc.ee Mon Feb 25 03:23:19 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Feb 2002 10:23:19 +0200 (EET) Subject: [SciPy-dev] About stats/distributions.xls Message-ID:

Hi again, stats/distributions.xls is Excel; hmm, it must contain very useful information, but how can I see it if I don't have Excel? :( Would it be possible to convert stats/distributions.xls to something more readable? PDF, TXT, TeX, maybe. Thanks, Pearu

From jh at comunit.de Mon Feb 25 03:50:44 2002 From: jh at comunit.de (Janko) Date: Mon, 25 Feb 2002 09:50:44 +0100 Subject: [SciPy-dev] Problem with CVS co Message-ID: <20020225095044.25d70563.jh@comunit.de>

Hi, I thought that the problem with the open file limits was solved? I tried to check out all the great work you had done this morning, but I get:

cvs server: cannot open .cvsignore: Too many open files
cvs server: cannot open .cvswrappers: Too many open files
cvs [server aborted]: cannot open CVS/Repository: Too many open files

The co command is cvs -z7 -q up -P -d Is this a problem with my cvs setup?
Should I try a completely new co? __Janko

From pearu at cens.ioc.ee Mon Feb 25 04:08:28 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Feb 2002 11:08:28 +0200 (EET) Subject: [SciPy-dev] Problem with CVS co In-Reply-To: <20020225095044.25d70563.jh@comunit.de> Message-ID:

On Mon, 25 Feb 2002, Janko wrote:
> Hi, I thought that the problem with the open file limits was solved?
> I tried to check out all the great work you had done this morning, but I get:
>
> cvs server: cannot open .cvsignore: Too many open files
> cvs server: cannot open .cvswrappers: Too many open files
> cvs [server aborted]: cannot open CVS/Repository: Too many open files

This is strange. This seems to happen only for anonymous checkouts. With a login I can check out the whole CVS with no problems. And lsof | wc shows that approx. 1141 files are open during an update, which is really nothing compared to

cat /proc/sys/fs/file-max
16384

So, it seems to be a problem with cvs after all. Any ideas? Does anyone know what the differences between ordinary and anonymous accounts on a cvs server are? Pearu

From pearu at cens.ioc.ee Mon Feb 25 05:31:54 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 25 Feb 2002 12:31:54 +0200 (EET) Subject: [SciPy-dev] Problem with CVS co In-Reply-To: Message-ID:

Eric, From http://groups.google.com/groups?hl=en&threadm=fa.eaeh3mv.ehe8gj%40ifi.uio.no&rnum=5&prev=/groups%3Fhl%3Den%26q%3Dcvs%2B1.10%2Bpserver%2BToo%2Bmany%2Bopen%2Bfiles I get the idea that you should try upgrading the cvs server. Pearu

From prabhu at aero.iitm.ernet.in Mon Feb 25 04:45:08 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Mon, 25 Feb 2002 15:15:08 +0530 Subject: [SciPy-dev] Problem with CVS co In-Reply-To: References: <20020225095044.25d70563.jh@comunit.de> Message-ID: <15482.1956.9730.332350@monster.linux.in>

>>>>> "PP" == Pearu Peterson writes:

PP> Any ideas?
PP> Does anyone know what the differences between
PP> ordinary and anonymous accounts on a cvs server are?

I don't know, but the cvs version on scipy is ancient: Concurrent Versions System (CVS) 1.10.8 (client/server). RPM says the following: Build Date: Wed 12 Jul 2000 06:34:46 AM CDT. And my version of CVS (version 1.11.1p1) has the following changelog entry:

2000-06-03 Larry Jones

    * commit.c (checkaddfile): Plug memory leak.
    * rcs.c (RCS_checkin): Plug memory leaks.
    * server.c (do_cvs_command): Plug file descriptor leaks.
    * tag.c (check_fileproc): Plug memory leak.

I guess this change didn't make it into the RH cvs version. The scipy server is still running RH 7.0, which IIRC had lots of security holes (RH's x.0 releases are usually buggy). So I guess upgrading the scipy server should fix this and other problems. prabhu

From travis at vaught.net Mon Feb 25 11:54:23 2002 From: travis at vaught.net (Travis N. Vaught) Date: Mon, 25 Feb 2002 10:54:23 -0600 Subject: [SciPy-dev] Problem with CVS co In-Reply-To: <15482.1956.9730.332350@monster.linux.in> Message-ID:

Rebooting 'fixed' it -- you'd think it was a windows box. I'm not sure if it's worth it to upgrade CVS or just spend the time on the new box to get it out more quickly (I'm probably two days away -- that might be a couple more reboots of the current box). TV

> -----Original Message-----
> From: scipy-dev-admin at scipy.org [mailto:scipy-dev-admin at scipy.org] On
> Behalf Of Prabhu Ramachandran
> Sent: Monday, February 25, 2002 3:45 AM
> To: scipy-dev at scipy.org
> Subject: Re: [SciPy-dev] Problem with CVS co
> [...]

From oliphant.travis at ieee.org Mon Feb 25 13:23:47 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Feb 2002 11:23:47 -0700 Subject: [SciPy-dev] About stats/distributions.xls In-Reply-To: References: Message-ID:

On Monday 25 February 2002 01:23 am, you wrote:
> Hi again,
>
> stats/distributions.xls is Excel; hmm, it must contain very useful
> information, but how can I see it if I don't have Excel? :(
>
> Would it be possible to convert stats/distributions.xls to something more
> readable? PDF, TXT, TeX, maybe.

Sorry, it's actually just a temporary spreadsheet for me to keep track of where I am in fleshing out distribution support. I'm actually using gnumeric to create it, but I thought Eric might object if I had a gnumeric file in the CVS. You should be able to read it with gnumeric. -Travis

From oliphant.travis at ieee.org Mon Feb 25 13:27:02 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Feb 2002 11:27:02 -0700 Subject: [SciPy-dev] special/cephes: undefined symbol: psi_ In-Reply-To: References: Message-ID:

On Monday 25 February 2002 01:18 am, you wrote:
> Hi,
>
> When I try the latest CVS build, I get
>
> ImportError: scipy/special/cephes.so: undefined symbol: psi_
>
> It seems that cdflib misses a file psi.f, since cdflib/apser.f uses
> EXTERNAL psi
> and psi_ is defined nowhere.
> Do we need a C wrapper containing

Sorry, I didn't add a couple of files to the CVS. There was a name clash between object files in cdflib and cephes that I had to deal with by renaming the files (I didn't add the renamed files to CVS). It should be fixed now.

From heiko at hhenkelmann.de Mon Feb 25 15:48:53 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Mon, 25 Feb 2002 21:48:53 +0100 Subject: [SciPy-dev] log plots Message-ID: <004101c1be3d$d630f620$4761e03e@arrow>

Hello There, are there any plans to include log plots in any of the plot modules in the future? Or did I miss anything in the current version? Heiko

From oliphant.travis at ieee.org Mon Feb 25 16:02:33 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 25 Feb 2002 14:02:33 -0700 Subject: [SciPy-dev] log plots In-Reply-To: <004101c1be3d$d630f620$4761e03e@arrow> References: <004101c1be3d$d630f620$4761e03e@arrow> Message-ID:

On Monday 25 February 2002 01:48 pm, you wrote:
> Hello There,
>
> are there any plans to include log plots in any of the plot modules in the
> future? Or did I miss anything in the current version?
>
> Heiko

Look at xplt.
xplt.logxy(1,1)

turns the current axis into a log-log plot. -Travis

From heiko at hhenkelmann.de Mon Feb 25 16:31:17 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Mon, 25 Feb 2002 22:31:17 +0100 Subject: [SciPy-dev] log plots References: <004101c1be3d$d630f620$4761e03e@arrow> Message-ID: <000901c1be43$c2ca2b00$cdd99e3e@arrow>

Unfortunately I'm tied to a windows box at this point, which doesn't support xplt. Thanx Heiko

----- Original Message ----- From: "Travis Oliphant" To: Sent: Monday, February 25, 2002 10:02 PM Subject: Re: [SciPy-dev] log plots
> [...]
> Look at xplt.
>
> xplt.logxy(1,1)
>
> turns the current axis into a log-log plot.
>
> -Travis

From pearu at cens.ioc.ee Tue Feb 26 11:41:16 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 26 Feb 2002 18:41:16 +0200 (EET) Subject: [SciPy-dev] scipy import failures, what can we do about it? Message-ID:

Hi, I have noticed several times that if the scipy import fails, then it shows 1) unrelated error messages, and 2) the message giving the true reason for the failure is either hidden or presented in an unnoticeable way. The following message illustrates a particular case of all that:

---------- Forwarded message ----------
Date: Tue, 26 Feb 2002 15:43:04 +0100
From: Nils Wagner
Reply-To: scipy-user at scipy.org
To: "scipy-user at scipy.net"
Subject: [SciPy-user] Latest CVS - import scipy fails

>>> import scipy
/usr/local/lib/python2.1/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgetrf
^^^^^^^^^^^^^^^ - this is the true reason for the import failure

Warning: FFT package not found. Some names will not be available
^^^^^^^^^^^^^^^ - this warning is completely wrong. As a result, the FFT package gets a bad name that it does not deserve (I somewhat suspect that this is what happened to fftw).

Traceback (most recent call last):
ImportError: cannot import name eig
>>>
^^^^^^^^^^^^^ - the exception raised here is not related to the true import error above, and one may start looking for errors in the wrong places.

My suggestion is to review how scipy imports its submodules. The current approach may work perfectly when nothing is wrong, but it behaves unpredictably when something *is* wrong. Scipy should be able to fail with at least a sensible message indicating the right direction to look for errors. The current error messages are just too misleading. If we do not fix this, we are going to continuously get scipy import failure reports that have very little to do with scipy bugs; the failures are more likely due to incorrectly installed third-party libraries like atlas, fftw, etc.

Currently, I don't have a good solution to suggest that would transparently fix all the importing issues. However, I would like to learn how others feel about this issue so that I know whether it makes sense to work something out or not (maybe some of you already have a good solution?). Thanks, Pearu

From oliphant at ee.byu.edu Tue Feb 26 10:41:59 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 26 Feb 2002 10:41:59 -0500 (EST) Subject: [SciPy-dev] scipy import failures, what can we do about it?
In-Reply-To: Message-ID:

> Hi,
>
> I have noticed several times that if the scipy import fails, then it shows
> 1) unrelated error messages, and 2) the message giving the true reason
> for the failure is either hidden or presented in an unnoticeable
> way. The following message illustrates a particular case of all that:

I'm not sure of the solution, but it would probably help to wrap the import statements (particularly the ones based on compiled modules) in try/except clauses so that a more helpful error message could be raised. -Travis

From pearu at cens.ioc.ee Tue Feb 26 15:04:54 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 26 Feb 2002 22:04:54 +0200 (EET) Subject: [SciPy-dev] scipy import failures, what can we do about it? In-Reply-To: Message-ID:

On Tue, 26 Feb 2002, Travis Oliphant wrote:
> > I have noticed several times that if the scipy import fails, then it shows
> > 1) unrelated error messages, and 2) the message giving the true reason
> > for the failure is either hidden or presented in an unnoticeable way.
>
> I'm not sure of the solution, but it would probably help to wrap the
> import statements (particularly the ones based on compiled modules) in try
> except clauses so that a more helpful error message could be raised.

Ok, this would be helpful in cases where importing of, say, clapack fails: instead of showing plain messages with missing symbols, a message something like 'fix your atlas installation' is shown. But I am not sure that this will fix the scipy importing failure messages; what you get is just a replaced failure message, and there will still be alien failure messages.

I don't understand the purpose of functions like modules2all, names2all, etc. For example, why does linalg/__init__.py contain

_modules = ['fblas', 'flapack', 'cblas', 'clapack']
_namespaces = ['linear_algebra']
__all__ = []

import scipy
scipy.modules2all(__all__, _modules, globals())
scipy.names2all(__all__, _namespaces, globals())

instead of the plain and explicit

from linear_algebra import __all__
from linear_algebra import *

And if scipy/__init__.py has

from linalg import *

then when doing 'import scipy', the failure of importing clapack would raise an exception showing the direct location of the problem. Sorry if I don't understand the big picture of the scipy structure; I have always thought that keeping modules independent would be a good thing, but the current scipy hooks seem to try to integrate all modules and their namespaces into one big one. I must be missing something obvious... Could someone point me down the right path here if possible? Thanks, Pearu

From eric at scipy.org Tue Feb 26 14:07:56 2002 From: eric at scipy.org (eric) Date: Tue, 26 Feb 2002 14:07:56 -0500 Subject: [SciPy-dev] scipy import failures, what can we do about it? References: Message-ID: <037d01c1bef8$e60e2ca0$6b01a8c0@ericlaptop>

> Currently, I don't have a good solution to suggest that would
> transparently fix all the importing issues. However, I would like to learn
> how others feel about this issue so that I know whether it makes sense
> to work something out or not (maybe some of you already have a good
> solution?).

This seems desirable to me, but I also haven't looked into it much.
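Something along these lines for each compiled module might do (a sketch only; the wording of the hint is illustrative):

try:
    import clapack
except ImportError, details:
    # Re-raise with a hint about the likely cause instead of the bare
    # missing-symbol message.
    raise ImportError, "importing clapack failed (%s); this usually " \
          "means an incorrectly installed ATLAS/LAPACK, not a scipy bug" \
          % details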
eric

From eric at scipy.org Tue Feb 26 14:10:47 2002 From: eric at scipy.org (eric) Date: Tue, 26 Feb 2002 14:10:47 -0500 Subject: [SciPy-dev] log plots References: <004101c1be3d$d630f620$4761e03e@arrow> Message-ID: <038501c1bef9$4f303ca0$6b01a8c0@ericlaptop>

> Hello There,
>
> are there any plans to include log plots in any of the plot modules in the
> future? Or did I miss anything in the current version?

They aren't there now, but they should show up within the next 6 months. Patches to the current version with this feature are welcome. eric

From eric at scipy.org Tue Feb 26 15:07:05 2002 From: eric at scipy.org (eric) Date: Tue, 26 Feb 2002 15:07:05 -0500 Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) References: Message-ID: <043001c1bf01$2d037b80$6b01a8c0@ericlaptop>

> On Sunday 24 February 2002 01:24 am, you wrote:
> > I finally realized that with a simple change we can use the unary operator
> > on floats and complex numbers to mean complex conjugation.
> > [...]
>
> So, what if we made ~ consistently mean complex conjugation and eliminated
> the confusion? invert would still be available as a function call.
>
> We could also make changes to any of the other symbols as well while we are
> at it, if the demand is there. We can do this without making any changes to
> Numeric.
>
> Let me know. If no one likes the idea but likes writing conjugate(myarray)
> everywhere, then I'll back out the changes.

I'm not so fond of conjugate(myarray) and would rather use ~ also, but I can't see a safe way to do it in numerical code without changing Python itself. My view is that numerical operations should work with similar behavior on both arrays and scalars and should produce unambiguous results. I also do not think we should make any changes that affect standard Python code (I don't mind changes to Numeric's behavior, though, when it is appropriate). The following set of examples illustrates the problems of mixing types, etc:

>>> def conj_test(a,b):
...     return ~a + ~b
...

# Try it with two integers
>>> conj_test(1,1)
-4

# Now with two complex values
>>> conj_test(1+0j,1+0j)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in conj_test
TypeError: bad operand type for unary ~

# now try an array -- this produces the results we want
>>> a=array((1+0j))
>>> conj_test(a,a)
(2+0j)

# but mixing an array with an integer returns bogus results
>>> conj_test(a,1)
(-1+0j)

# and mixing with a scalar complex fails.
>>> conj_test(a,1+0j)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in conj_test
TypeError: bad operand type for unary ~

It might be possible to overload the ~ operator in Python 2.2 for complex values so that it worked correctly (I haven't looked into the new type/class stuff much), but if we did, it would effectively be a language change to Python. In the complex number arena, I don't think it would break much code, but it does have that potential. We need to think long and hard about such decisions and would do better to lobby Guido et al.
eric

From rossini at u.washington.edu Tue Feb 26 16:20:51 2002 From: rossini at u.washington.edu (Anthony Rossini) Date: Tue, 26 Feb 2002 13:20:51 -0800 (PST) Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) In-Reply-To: <043001c1bf01$2d037b80$6b01a8c0@ericlaptop> Message-ID:

Yikes. That little operation is a bit nasty for me. Explanation: in S (and hence in R, the stat languages I referred to a while back), ~ is the "model building" operator, i.e. "y ~ x" refers to "y = x + e", where e is the associated error term (go back to linear models or regression, if you will). I'd like to extend SciPy to use this language at some point, and using ~ directly would be better than having to quote it (but quoting is probably where we will end up).

I rarely need conjugation, but I readily acknowledge that other computational areas really need that feature.

best, -tony

On Tue, 26 Feb 2002, eric wrote:

> I'm not so fond of conjugate(myarray) and would rather use ~ also, but I
> can't see a safe way to do it in numerical code without changing Python
> itself.
> [...]
> File "", line 2, in conj_test > TypeError: bad operand type for unary ~ > > # now try an array -- this produces the results we want > >>> a=array((1+0j)) > >>> conj_test(a,a) > (2+0j) > > # but mixing an array with an integer returns bogus results > >>> conj_test(a,1) > (-1+0j) > > # and mixing with a scalar complex fails. > >>> conj_test(a,1+0j) > Traceback (most recent call last): > File "", line 1, in ? > File "", line 2, in conj_test > TypeError: bad operand type for unary ~ > > It might be possible to overload the ~ operator in Python2.2 for complex values > so that it worked correctly (I haven't looked into the new type/class stuff > much), but if we did, it is effectively a language change to Python. In the > complex number arena, I don't think it would break much code, but it does have > that potential. We need to think long and hard about such decisions and would > do better to lobby Guido et al. about such a change (I sorely want a __cmp__ for > complex numbers to work with some default behavior). > > But even if we got it fixed for complex scalars, the fact that conj_test(1,1) > would return completely different results than conj_test(1+0j,1+0j) is a show > stopper to me. I'd be all for ~ universally meaning conjugate, but its been in > the language for to long for this to happen. > > Could we shorten the name to conj()? Its not as good as ~, but it does cut down > on typing and is obvious to those who would use it. > > eric > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From arnd.baecker at physik.uni-ulm.de Tue Feb 26 16:32:28 2002 From: arnd.baecker at physik.uni-ulm.de (arnd.baecker at physik.uni-ulm.de) Date: Tue, 26 Feb 2002 22:32:28 +0100 (MET) Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) In-Reply-To: <043001c1bf01$2d037b80$6b01a8c0@ericlaptop> Message-ID: On Tue, 26 Feb 2002, eric wrote: [...] > It might be possible to overload the ~ operator in Python2.2 for complex values > so that it worked correctly (I haven't looked into the new type/class stuff > much), but if we did, it is effectively a language change to Python. In the > complex number arena, I don't think it would break much code, but it does have > that potential. We need to think long and hard about such decisions and would > do better to lobby Guido et al. about such a change (I sorely want a __cmp__ for > complex numbers to work with some default behavior). > > But even if we got it fixed for complex scalars, the fact that conj_test(1,1) > would return completely different results than conj_test(1+0j,1+0j) is a show > stopper to me. I'd be all for ~ universally meaning conjugate, but its been in > the language for to long for this to happen. > > Could we shorten the name to conj()? Its not as good as ~, but it does cut down > on typing and is obvious to those who would use it. Would there be an easy way for a user to redefine ~ (using conj()) to achieve the functionality (at his own risk) ? And what about "conj()" -> "cc()" (with cc standing for complex conjugate) to make it even shorter ? Just my 0.02 Euro ... 
Arnd

From fperez at pizero.colorado.edu Tue Feb 26 16:37:41 2002 From: fperez at pizero.colorado.edu (Fernando Perez) Date: Tue, 26 Feb 2002 14:37:41 -0700 (MST) Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) In-Reply-To: <043001c1bf01$2d037b80$6b01a8c0@ericlaptop> Message-ID:

On Tue, 26 Feb 2002, eric wrote:

> I'm not so fond of conjugate(myarray) and would rather use ~ also, but I
> can't see a safe way to do it in numerical code without changing Python
> itself. My view is that numerical operations should work with similar
> behavior on both arrays and scalars and should produce unambiguous
> results. I also do not think we should make any changes that affect
> standard Python code (I don't mind changes to Numeric's behavior, though,
> when it is appropriate). The following set of examples illustrates the
> problems of mixing types, etc.:

[snipped]

I think Eric's tests very nicely summarize the problem, and point exactly at the 'nightmare bug' scenarios I mentioned in my earlier post, which were only vaguely defined in my mind at the moment.

I would *love* to see a single-symbol operator for complex conjugation (and for matrix transposition, and for expression differentiation -- thinking of symbolic extensions in the future :). But unfortunately it seems to create a heinous minefield of tricky problems. I'd suggest, for the time being, aliasing conj=conjugate (or even cnj), and perhaps lobbying Guido to allow, _in the future_, overloading of new symbols in the language.

My problem with the ~ business is that it introduces very strict type dependencies into a language which is, by design, philosophy, and usage, very flexible in its handling of types. The 'pythonic' way seems to be: try to operate on things, and either the right thing happens or an exception is raised. But the ~ overloading introduces a situation where things give _silent_ incorrect results, which to me is the worst of all possible outcomes. One thing I love about Python is that either things work, or I'm explicitly told when they don't (in most cases). I would really hate to see silent modes of failure introduced by us.

Just my 0.02, f

From heiko at hhenkelmann.de Tue Feb 26 16:41:38 2002 From: heiko at hhenkelmann.de (Heiko Henkelmann) Date: Tue, 26 Feb 2002 22:41:38 +0100 Subject: [SciPy-dev] log plots References: <004101c1be3d$d630f620$4761e03e@arrow> <038501c1bef9$4f303ca0$6b01a8c0@ericlaptop> Message-ID: <001001c1bf0e$5f66bb20$1bd89e3e@arrow>

I added logscalex() and nologscalex() to gplt.

Heiko

----- Original Message ----- From: "eric" To: Sent: Tuesday, February 26, 2002 8:10 PM Subject: Re: [SciPy-dev] log plots

> > Hello There,
> >
> > are there any plans to include log plots in any of the plot modules in
> > the future? Or did I miss anything in the current version?
>
> They aren't there now, but they should show up within the next 6 months.
> Patches to the current version with this feature are welcome.
>
> eric

-------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: interface.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: pyPlot.py URL:
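Presumably (guessing the semantics from the names: no arguments, just toggling the x-axis scale) usage looks something like:

    from scipy import gplt
    from Numeric import arange
    x = arange(1., 100.)
    gplt.plot(x, x**2)
    gplt.logscalex()      # assumed: switch the x axis to a log scale
    gplt.nologscalex()    # assumed: back to a linear x axis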
From oliphant at ee.byu.edu Tue Feb 26 15:30:17 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 26 Feb 2002 15:30:17 -0500 (EST) Subject: [SciPy-dev] scipy import failures, what can we do about it? In-Reply-To: Message-ID:

> OK, this would be helpful in the case where importing, say, clapack
> fails: instead of a bare message about missing symbols, something like
> 'fix your atlas installation' would be shown.
>
> But I am not sure that this alone will fix the scipy import failure
> messages: what you get is just a replacement failure message, and
> unrelated failure messages will still appear.
>
> I also don't understand the purpose of functions like modules2all,
> names2all, etc.
> [...]
> And if scipy/__init__.py has
>
>     from linalg import *
>
> then, on 'import scipy', a failure to import clapack would raise an
> exception pointing directly at the location of the problem.

Thanks for raising these issues, Pearu. What is currently there is done to make it easy to add new namespaces, and it reflects my current understanding of the way Python deals with packages and names. The reason we don't do as you describe is that I wanted to make sure the __all__ variable was updated any time a new namespace was imported. They are convenience functions: they are not necessary, provided what the code does is done every time.

> Sorry if I don't understand the big picture of the scipy structure. I
> have always thought that keeping modules independent would be a good
> thing, but the current scipy hooks seem to try to integrate all modules
> and their namespaces into one big one. I must be missing something
> obvious... Could someone point me onto the right path here if possible?

The problem is that we want the user to have access to a series of submodules under the scipy namespace, e.g.

    scipy.optimize.fsolve
    scipy.special.gammainc

We also want the user of scipy to have access to some "basic" functions which may actually be defined in another subpackage. We want these in the main scipy namespace, e.g.

    scipy.fft
    scipy.polyval

We also want the user to be able to say

    from scipy import *

and have this be the equivalent of dropping the scipy name in front of all of the previous commands, so that

    optimize.fsolve
    fft
    special.gammainc
    polyval

all work as expected.

The problem is due to faulty file systems on Windows and Mac OS platforms ( :-) ) that mangle the case of file names. This requires that, for packages, modules be explicitly loaded and the __all__ variable be set in the __init__.py file (this is my understanding anyway). Otherwise

    from package import *

won't load all the names accessible as package.XXXX.

That's the general idea. The implementation certainly isn't set in stone.
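For concreteness, the explicit style being discussed might look roughly like this (a sketch only, not the actual file):

    # scipy/linalg/__init__.py, hypothetical explicit variant
    import fblas, flapack, cblas, clapack   # submodules stay reachable
    import linear_algebra
    from linear_algebra import *            # promote the "basic" names
    __all__ = ['fblas', 'flapack', 'cblas', 'clapack']
    __all__ = __all__ + linear_algebra.__all__

With something like this, a broken clapack fails loudly on the import line that names it, which addresses the unnoticeable-error complaint above.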
-Travis

From oliphant at ee.byu.edu Tue Feb 26 17:23:32 2002 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 26 Feb 2002 17:23:32 -0500 (EST) Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) In-Reply-To: <043001c1bf01$2d037b80$6b01a8c0@ericlaptop> Message-ID:

> It might be possible to overload the ~ operator in Python 2.2 for complex
> values so that it worked correctly (I haven't looked into the new
> type/class stuff much), but if we did, it is effectively a language change
> to Python. [...] (I sorely want __cmp__ for complex numbers to work with
> some default behavior).

Ah, for complex scalars. Yes, that would be nice.

> But even if we got it fixed for complex scalars, the fact that
> conj_test(1,1) would return completely different results than
> conj_test(1+0j,1+0j) is a show stopper to me.
> [...]
> Could we shorten the name to conj()? It's not as good as ~, but it does
> cut down on typing and is obvious to those who would use it.

As you'll notice in the latest CVS, this is the solution I finally converged to as well.

-Travis

From eric at scipy.org Wed Feb 27 02:52:43 2002 From: eric at scipy.org (eric) Date: Wed, 27 Feb 2002 02:52:43 -0500 Subject: [SciPy-dev] New in fastumath ~ (means conjugate on floats and complex numbers) References: Message-ID: <05d501c1bf63$bd667300$6b01a8c0@ericlaptop>

> Yikes. That little operation is a bit nasty for me. Explanation: in S
> (and hence in R, the stat languages I referred to a while back), ~ is the
> "model building" operator, i.e. "y ~ x" refers to "y = x + e", where e is
> the associated error term (go back to linear models or regression, if you
> will). I'd like to extend SciPy to use this language at some point, and
> using ~ directly would be better than having to quote it (but quoting is
> probably where we will end up).

I can't see how you'll get around quoting things, really. I can imagine adding this to weave, though -- something like weave.r('y ~ x'). I don't know whether any of the machinery there is helpful for this or not, but it does fit the overall goals of the weave package (mixing other languages into Python).

> I rarely need conjugation, but I readily acknowledge that other
> computational areas really need that feature.

Yep. I guess Travis' main point here is that conjugation is used a whole lot more than bitwise inversion in the scientific/engineering computing worlds, and it would be a better use of the symbol. A quick search of the entire standard library for ~ as bitwise inversion turned up about 8 occurrences; conjugate() shows up that many times in scipy alone, and we're nowhere near finished with it. We'll make do with conj() though. :-)

eric
From nwagner at mecha.uni-stuttgart.de Wed Feb 27 12:44:25 2002 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 27 Feb 2002 18:44:25 +0100 Subject: [SciPy-dev] Missing functions --> Matrix functions, generalized eigenvalue problem, iterative solvers, CWT (Continuous wavelet transform) Message-ID: <3C7D1AF9.62D1085B@mecha.uni-stuttgart.de>

Hi all,

The current output of

    >>> help (linalg)

is

    Linear algebra routines.

    Solving Linear Systems:
      inv      --- Find the inverse of a square matrix
      solve    --- Solve a linear system of equations
      det      --- determinant of a matrix
      pinv     --- Moore-Penrose pseudo inverse (using least-squares)
      pinv2    --- Moore-Penrose pseudo inverse (using SVD)
      lstsq    --- Least-squares solve

    Matrix Factorizations:
      lu       --- LU decomposition
      cholesky --- Cholesky factorization
      qr       --- QR factorization
      schur    --- Schur decomposition
      rsf2csf  --- Real to complex Schur form
      norm     --- vector and matrix norm
      eig      --- eigenvectors and eigenvalues
      eigvals  --- only eigenvalues
      svd      --- singular value decomposition

    Matrix Functions:
      expm     --- exponential (using Pade approximation)
      cosm     --- cosine
      sinm     --- sine
      tanm     --- tangent
      coshm    --- hyperbolic cosine
      sinhm    --- hyperbolic sine
      tanhm    --- hyperbolic tangent
      funm     --- arbitrary function

Is there any progress in implementing the matrix logarithm logm?

Has anyone written functions for the generalized eigenvalue problem (or even more general polynomial eigenvalue problems), that is

    A x = \lambda B x
    (\lambda^n A_n + \dots + \lambda A_1 + A_0) x = 0

Are there any iterative solvers (CG, GMRES, ...)?

I am also looking for discrete and continuous wavelet transforms.

Nils

From pearu at cens.ioc.ee Thu Feb 28 13:26:22 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 28 Feb 2002 20:26:22 +0200 (EET) Subject: [SciPy-dev] scipy structure In-Reply-To: Message-ID:

Hi Travis,

On Tue, 26 Feb 2002, Travis Oliphant wrote:

> The problem is that we want the user to have access to a series of
> submodules under the scipy namespace, e.g.
> [...]
> all work as expected.

That's fine.

> The problem is due to faulty file systems on Windows and Mac OS platforms
> ( :-) ) that mangle the case of file names. This requires that, for
> packages, modules be explicitly loaded and the __all__ variable be set in
> the __init__.py file (this is my understanding anyway).
>
> Otherwise
>
>     from package import *
>
> won't load all the names accessible as package.XXXX.

If I read Section 6.4.1 in

    http://www.python.org/doc/current/tut/node8.html

correctly, then the issue with the Windows file system can be solved using __all__ lists alone: if a package defines __all__ = ['xxxx'], then 'from package import *' will load the name xxxx, even if it is the name of a module stored as XxXx.py (see the minimal sketch below). Problems can occur only if __all__ is not defined, and it is always good to have __all__ anyway: for the help system, for example, and for keeping unnecessary names out of 'from ... import *'.

Travis, can you confirm my understanding of Section 6.4.1? Is it possible to rework the structure of scipy by following the Python recommendations, including Section 6.4.2? I think it would make the scipy code more readable and clean up many dependencies.
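A minimal sketch of what I mean (made-up package and module names):

    # package/__init__.py
    __all__ = ['xxxx']    # 'from package import *' now imports the
                          # submodule under exactly this name, even if
                          # the file system mangles the case of xxxx.py

    # elsewhere
    from package import *   # binds the single name xxxx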
It might mean extra work now (which I am also willing to do), but later the scipy code base may become much larger, and any restructuring will be harder to do. What do you think?

Note that PEPs 235 and 250 might also be relevant for Windows platforms, though they are implemented only for Python 2.2.

Pearu