From ryanlists at gmail.com Wed Feb 1 00:56:47 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 1 Feb 2006 00:56:47 -0500 Subject: [SciPy-user] io.loadmat In-Reply-To: References: Message-ID: If anyone wants to help me and needs the mat file, it is on this webpage: http://www.imdl.gatech.edu/ryan/python/ It is that last link called matfile.tar.gz. It is about 500kb when gzipped. Thanks, Ryan On 1/31/06, Ryan Krauss wrote: > I am having trouble loading Matlab .mat files that loaded just fine > under the old scipy. Here is the error message: > In [1]: test=scipy.io.loadmat('figure5') > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most > recent call last) > > /home/ryan/thesis/actuator_modeling/ > > /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, > dict, appendmat, basename) > 745 if not (0 in test_vals): # MATLAB version 5 format > 746 fid.rewind() > --> 747 thisdict = _loadv5(fid,basename) > 748 if dict is not None: > 749 dict.update(thisdict) > > /usr/lib/python2.4/site-packages/scipy/io/mio.py in _loadv5(fid, basename) > 682 try: > 683 var = var + 1 > --> 684 el, varname = _get_element(fid) > 685 if varname is None: > 686 varname = '%s_%04d' % (basename,var) > > /usr/lib/python2.4/site-packages/scipy/io/mio.py in _get_element(fid) > 642 fid.rewind(1) > 643 # get the data tag > --> 644 raw_tag = fid.read(1,'I') > 645 > 646 # check for compressed > > /usr/lib/python2.4/site-packages/scipy/io/mio.py in read(self, count, > stype, rtype, bs, c_is_b) > 283 if count == 0: > 284 return zeros(0,rtype) > --> 285 retval = numpyio.fread(self, count, stype, rtype, bs) > 286 if len(retval) == 1: > 287 retval = retval[0] > > TypeError: argument 3 must be char, not type > > I attached the message in a previous message, but it was sent to a > moderator because it is 600kb. I thought that was a reasonable > attachment size, but I guess that message limit is 100kb. If anyone > wants the file to try and help me with this, I will gladly send it off > list or post it on my website. > > Thanks, > > Ryan > From oliphant.travis at ieee.org Wed Feb 1 02:01:14 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 01 Feb 2006 00:01:14 -0700 Subject: [SciPy-user] io.loadmat In-Reply-To: References: Message-ID: <43E05CBA.8010306@ieee.org> Ryan Krauss wrote: >I am having trouble loading Matlab .mat files that loaded just fine >under the old scipy. Here is the error message: > > Hi Ryan. With latest SVN of scipy, I'm able to load your matfile just fine. I'm not sure what the issue was, but I did change the getsize_type function in mio.py to make sure that the second argument returned is always a character. There was a chance it could be returned as a type-object. If you can't get the latest SVN version, let me know what version of numpy you are using and I can give you a couple-line edit to the mio.py file that should work. -Travis From morgan.hough at gmail.com Wed Feb 1 07:16:46 2006 From: morgan.hough at gmail.com (Morgan Hough) Date: Wed, 1 Feb 2006 12:16:46 +0000 Subject: [SciPy-user] Updating packages for new numpy/scipy Message-ID: <102408b60602010416x32b26986u60dcfed85e604496@mail.gmail.com> I am having a lot of sucess updating packages by just changing setup.py from scipy_distutils.core to numpy.distutils.core but I was wondering how to change, from scipy import scipy_version for the latest scipy from subversion. Thanks in advance. 
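To be concrete, the part that already works is just the import swap below; what I am unsure about is the version check (the __version__ attributes at the end are a guess to be verified against the SVN build):

# old scipy_distutils-style setup.py
#from scipy_distutils.core import setup
# new numpy.distutils-style setup.py
from numpy.distutils.core import setup

# instead of "from scipy import scipy_version", presumably something like:
import numpy, scipy
print numpy.__version__, scipy.__version__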
Cheers, -Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From josegomez at gmx.net Wed Feb 1 08:58:07 2006 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Wed, 1 Feb 2006 14:58:07 +0100 (MET) Subject: [SciPy-user] Scipy and GDAL on Win32 Message-ID: <24067.1138802287@www036.gmx.net> Hi! I want to use the GDAL library to access geographic data sets. However, the only binaries provided come with a standard python installation. I would like to use a combination of scipy and matplotlib to analyse Geographic datasets, so I need both Scipy (+matplotlib) and GDAL from the fwtools package (I can't compile from source in my work PC). Is there a straightforward way to use both packages together? As I said on the subject, I have Win2k. Many thanks! Jose -- Lust, ein paar Euro nebenbei zu verdienen? Ohne Kosten, ohne Risiko! Satte Provisionen f?r GMX Partner: http://www.gmx.net/de/go/partner From ryanlists at gmail.com Wed Feb 1 10:15:38 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 1 Feb 2006 10:15:38 -0500 Subject: [SciPy-user] io.loadmat In-Reply-To: <43E05CBA.8010306@ieee.org> References: <43E05CBA.8010306@ieee.org> Message-ID: Thanks Travis. I was able to get the latest SVN and that one now loads. I have one more (I think this will be the last). Same place: http://www.imdl.gatech.edu/ryan/python/ I replaced the other link with a file called matfile2.tar.gz It should unpack a mat file called vertical1_11_24_04_lowfreq.mat This is the format that my experimental setup on campus saves. It is supposed to be mat compliant. I was able to load this exact same data file using old scipy in windows on my wife's computer last night. With the latest SVN I get the following: In [3]: temp=io.loadmat('vertical2_11_24_04_lowfreq.mat') --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /home/ryan/thesis/actuator_modeling/torque_testing/data_Nov2004/vertical_data/ /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, dict, appendmat, basename) 790 if (M > 1): 791 fid.close() --> 792 raise ValueError, "Unsupported binary format." 793 if (O != 0): 794 fid.close() ValueError: Unsupported binary format. Just to try and get some more information, I hacked up mio.py starting around line 790: M,rest = divmod(header[0],1000) O,rest = divmod(rest,100) P,rest = divmod(rest,10) T = rest print('M='+str(M)) print('O='+str(O)) print('P='+str(P)) print('T='+str(T)) if (M > 1): print('Bad M') # fid.close() # raise ValueError, "Unsupported binary format." if (O != 0): print('Bad O') # fid.close() # raise ValuError, "Hundreds digit of first integer should be zero." if (T not in [0,1]): print('Bad T') # fid.close() # raise ValueError, "Cannot handle sparse matrices, yet." 
And then I get this: In [1]: temp=io.loadmat('vertical2_11_24_04_lowfreq.mat') M=0 O=0 P=1 T=0 M=1818583 O=9 P=0 T=7 Bad M Bad O Bad T --------------------------------------------------------------------------- exceptions.MemoryError Traceback (most recent call last) /home/ryan/thesis/actuator_modeling/torque_testing/data_Nov2004/vertical_data/ /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, dict, appendmat, basename) 808 storage = {0:'d',1:'f',2:'i',3:'h',4:'H',5:'B'}[P] 809 --> 810 varname = fid.fread(header[-1],'char')[:-1] 811 varname = varname.tostring() 812 defnames.append(varname) /usr/lib/python2.4/site-packages/scipy/io/mio.py in read(self, count, stype, rtype, bs, c_is_b) 283 if count == 0: 284 return zeros(0,rtype) --> 285 retval = numpyio.fread(self, count, stype, rtype, bs) 286 if len(retval) == 1: 287 retval = retval[0] MemoryError: So that it looks like it is passing through with valid values for M,T,O,and P once and then getting bad values on a second pass. I don't know what any of this means. I believe this is a matlab v4 file. Old scipy and octave can both open it. Thanks, Ryan On 2/1/06, Travis Oliphant wrote: > Ryan Krauss wrote: > > >I am having trouble loading Matlab .mat files that loaded just fine > >under the old scipy. Here is the error message: > > > > > Hi Ryan. With latest SVN of scipy, I'm able to load your matfile just > fine. > > I'm not sure what the issue was, but I did change the getsize_type > function in mio.py to make sure that the second argument returned is > always a character. There was a chance it could be returned as a > type-object. > > If you can't get the latest SVN version, let me know what version of > numpy you are using and I can give you a couple-line edit to the mio.py > file that should work. > > -Travis > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From icy.flame.gm at gmail.com Wed Feb 1 13:13:30 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Wed, 1 Feb 2006 18:13:30 +0000 Subject: [SciPy-user] Failed to plot array using gnuplot-py Message-ID: Not sure what is going on, dont even know if this is a glitch cause by gnuplot-py or scipy or python itself, but here is what i found out on my system. Fedora Core 4 gnuplot-4.0.0-7 gnuplot-py-1.7-1 SciPy 0.4.4 It wouldnt plot if i do this: ================================= from scipy import linspace import Gnuplot g = Gnuplot.Gnuplot(debug = 1) x = linspace(0, 1, 20).astype('f') # i.e. array(0.0, 0.05, 0.10, ... 1.0) y = linspace(0, 5, 20).astype('f') d = Gnuplot.Data(x, y) g.plot(d) ================================= However, if i convert the array into a list, then somehow it worked: ================================= from scipy import linspace import Gnuplot g = Gnuplot.Gnuplot(debug = 1) x = linspace(0, 1, 20).astype('f').tolist() y = linspace(0, 5, 20).astype('f').tolist() d = Gnuplot.Data(x, y) g.plot(d) ================================= -- iCy-fLaME The body maybe wounded, but it is the mind that hurts. From robert.kern at gmail.com Wed Feb 1 13:21:35 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 01 Feb 2006 12:21:35 -0600 Subject: [SciPy-user] Failed to plot array using gnuplot-py In-Reply-To: References: Message-ID: <43E0FC2F.1090306@gmail.com> iCy-fLaME wrote: > Not sure what is going on, dont even know if this is a glitch cause by > gnuplot-py or scipy or python itself, but here is what i found out on > my system. 
> > Fedora Core 4 > gnuplot-4.0.0-7 > gnuplot-py-1.7-1 > SciPy 0.4.4 > > It wouldnt plot if i do this: > ================================= > from scipy import linspace > import Gnuplot > g = Gnuplot.Gnuplot(debug = 1) > x = linspace(0, 1, 20).astype('f') # i.e. array(0.0, 0.05, 0.10, ... 1.0) > y = linspace(0, 5, 20).astype('f') > d = Gnuplot.Data(x, y) > g.plot(d) > ================================= Did you see an error message, or did it just do nothing? What version of Numeric do you have installed? I'm pretty sure the Gnuplot module still uses Numeric and may not recognize numpy arrays. Hopefully, it will try to do something like Numeric.asarray(x). If Numeric's version is >=24.0, that should automatically convert the numpy array to a Numeric array efficiently. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From icy.flame.gm at gmail.com Wed Feb 1 13:48:10 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Wed, 1 Feb 2006 18:48:10 +0000 Subject: [SciPy-user] Failed to plot array using gnuplot-py In-Reply-To: <43E0FC2F.1090306@gmail.com> References: <43E0FC2F.1090306@gmail.com> Message-ID: On 2/1/06, Robert Kern wrote: > iCy-fLaME wrote: > > Not sure what is going on, dont even know if this is a glitch cause by > > gnuplot-py or scipy or python itself, but here is what i found out on > > my system. > > > > Fedora Core 4 > > gnuplot-4.0.0-7 > > gnuplot-py-1.7-1 > > SciPy 0.4.4 > > > > It wouldnt plot if i do this: > > ================================= > > from scipy import linspace > > import Gnuplot > > g = Gnuplot.Gnuplot(debug = 1) > > x = linspace(0, 1, 20).astype('f') # i.e. array(0.0, 0.05, 0.10, ... 1.0) > > y = linspace(0, 5, 20).astype('f') > > d = Gnuplot.Data(x, y) > > g.plot(d) > > ================================= > > Did you see an error message, or did it just do nothing? What version of Numeric > do you have installed? I'm pretty sure the Gnuplot module still uses Numeric and > may not recognize numpy arrays. Hopefully, it will try to do something like > Numeric.asarray(x). If Numeric's version is >=24.0, that should automatically > convert the numpy array to a Numeric array efficiently. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > It opened a x11 window, but nothing is plotted. Sometimes it complain about the range. I have a feeling it is converting all float to int, hence all zero and nothing to plot. It plots fine if i make x range >> than 1: ================================= from scipy import linspace import Gnuplot g = Gnuplot.Gnuplot(debug = 1) x = linspace(0, 10, 20).astype('f') # Make the end point >> 1 y = linspace(0, 50, 20).astype('f') d = Gnuplot.Data(x, y) g.plot(d) ================================= Ahh, my Numeric version is '23.7' -- iCy-fLaME The body maybe wounded, but it is the mind that hurts. 
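Until the underlying conversion is sorted out, the .tolist() trick above can be wrapped in a small helper so the plotting code does not have to care. This is a sketch only (the plot_xy helper is made up for illustration), assuming Gnuplot.py is happy with plain Python lists, which the earlier test suggests:

=================================
from scipy import linspace
import Gnuplot

def plot_xy(g, x, y):
    # Hand Gnuplot.py plain Python lists, so the old Numeric-based code
    # never has to convert the new scipy/numpy array type itself.
    g.plot(Gnuplot.Data(x.tolist(), y.tolist()))

g = Gnuplot.Gnuplot(debug = 1)
x = linspace(0, 1, 20).astype('f')
y = linspace(0, 5, 20).astype('f')
plot_xy(g, x, y)
=================================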
From oliphant at ee.byu.edu Wed Feb 1 14:08:39 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 01 Feb 2006 12:08:39 -0700 Subject: [SciPy-user] Failed to plot array using gnuplot-py In-Reply-To: References: <43E0FC2F.1090306@gmail.com> Message-ID: <43E10737.3050008@ee.byu.edu> iCy-fLaME wrote: >On 2/1/06, Robert Kern wrote: > > >>iCy-fLaME wrote: >> >> >>>Not sure what is going on, dont even know if this is a glitch cause by >>>gnuplot-py or scipy or python itself, but here is what i found out on >>>my system. >>> >>>Fedora Core 4 >>>gnuplot-4.0.0-7 >>>gnuplot-py-1.7-1 >>>SciPy 0.4.4 >>> >>>It wouldnt plot if i do this: >>>================================= >>>from scipy import linspace >>>import Gnuplot >>>g = Gnuplot.Gnuplot(debug = 1) >>>x = linspace(0, 1, 20).astype('f') # i.e. array(0.0, 0.05, 0.10, ... 1.0) >>>y = linspace(0, 5, 20).astype('f') >>>d = Gnuplot.Data(x, y) >>>g.plot(d) >>>================================= >>> >>> >>Did you see an error message, or did it just do nothing? What version of Numeric >>do you have installed? I'm pretty sure the Gnuplot module still uses Numeric and >>may not recognize numpy arrays. Hopefully, it will try to do something like >>Numeric.asarray(x). If Numeric's version is >=24.0, that should automatically >>convert the numpy array to a Numeric array efficiently. >> >>-- >>Robert Kern >>robert.kern at gmail.com >> >>"In the fields of hell where the grass grows high >> Are the graves of dreams allowed to die." >> -- Richard Harter >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> >> > > >It opened a x11 window, but nothing is plotted. Sometimes it complain >about the range. I have a feeling it is converting all float to int, >hence all zero and nothing to plot. > >It plots fine if i make x range >> than 1: >================================= >from scipy import linspace >import Gnuplot >g = Gnuplot.Gnuplot(debug = 1) >x = linspace(0, 10, 20).astype('f') # Make the end point >> 1 >y = linspace(0, 50, 20).astype('f') >d = Gnuplot.Data(x, y) >g.plot(d) >================================= > >Ahh, my Numeric version is '23.7' > > Earlier versions of Numeric didn't handle it well if the return of the __array__ method (which I suspect it is calling), was not an actual Numeric ArrayObject. On 32-bit platforms, the ndarray ArrayObject structure is the same as the Numeric ArrayObject structure. However, the PyArray_Descr structure is quite a bit different and so I'm sure the type information is wrong if Numeric is just trying to use the output of the __array__ method on the ndarray object. Numeric should have been checking the output of the __array__ method and raising an error. I know this got changed at some point, but I'm not sure how early. -Travis From p.jensen at virgin.net Wed Feb 1 15:09:40 2006 From: p.jensen at virgin.net (Peter Jensen) Date: Wed, 01 Feb 2006 20:09:40 +0000 Subject: [SciPy-user] BLT for python. Message-ID: <1138824580.4808.3.camel@localhost> I am new to python, and I am writing a program to display simulation and measurements results. I have a raw version of the program which uses gnuplot and Tkinter, however I would like to make the graphs more interactive. Marker etc... I found BLT for python at : http://heim.ifi.uio.no/~hpl/Pmw.Blt/doc/ It looks very good Does anybody have experience with it. Should we add it to http://www.scipy.org/Wiki/Topical_Software ? 
I also looked at Chaco at : http://code.enthought.com/chaco/ Again. There is no link at : http://www.scipy.org/Wiki/Topical_Software I also looked a matplotlib, however it does not look well suited for interactive graphs e.g markers are only functional with wxwindows. For TK they are difficult to get to work properly. Any recommendations ?. Regards Peter From icy.flame.gm at gmail.com Wed Feb 1 15:23:38 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Wed, 1 Feb 2006 20:23:38 +0000 Subject: [SciPy-user] Failed to plot array using gnuplot-py In-Reply-To: <43E10737.3050008@ee.byu.edu> References: <43E0FC2F.1090306@gmail.com> <43E10737.3050008@ee.byu.edu> Message-ID: On 2/1/06, Travis Oliphant wrote: > iCy-fLaME wrote: > > >On 2/1/06, Robert Kern wrote: > > > > > >>iCy-fLaME wrote: > >> > >> > >>>Not sure what is going on, dont even know if this is a glitch cause by > >>>gnuplot-py or scipy or python itself, but here is what i found out on > >>>my system. > >>> > >>>Fedora Core 4 > >>>gnuplot-4.0.0-7 > >>>gnuplot-py-1.7-1 > >>>SciPy 0.4.4 > >>> > >>>It wouldnt plot if i do this: > >>>================================= > >>>from scipy import linspace > >>>import Gnuplot > >>>g = Gnuplot.Gnuplot(debug = 1) > >>>x = linspace(0, 1, 20).astype('f') # i.e. array(0.0, 0.05, 0.10, ... 1.0) > >>>y = linspace(0, 5, 20).astype('f') > >>>d = Gnuplot.Data(x, y) > >>>g.plot(d) > >>>================================= > >>> > >>> > >>Did you see an error message, or did it just do nothing? What version of Numeric > >>do you have installed? I'm pretty sure the Gnuplot module still uses Numeric and > >>may not recognize numpy arrays. Hopefully, it will try to do something like > >>Numeric.asarray(x). If Numeric's version is >=24.0, that should automatically > >>convert the numpy array to a Numeric array efficiently. > >> > >>-- > >>Robert Kern > >>robert.kern at gmail.com > >> > >>"In the fields of hell where the grass grows high > >> Are the graves of dreams allowed to die." > >> -- Richard Harter > >> > >>_______________________________________________ > >>SciPy-user mailing list > >>SciPy-user at scipy.net > >>http://www.scipy.net/mailman/listinfo/scipy-user > >> > >> > >> > > > > > >It opened a x11 window, but nothing is plotted. Sometimes it complain > >about the range. I have a feeling it is converting all float to int, > >hence all zero and nothing to plot. > > > >It plots fine if i make x range >> than 1: > >================================= > >from scipy import linspace > >import Gnuplot > >g = Gnuplot.Gnuplot(debug = 1) > >x = linspace(0, 10, 20).astype('f') # Make the end point >> 1 > >y = linspace(0, 50, 20).astype('f') > >d = Gnuplot.Data(x, y) > >g.plot(d) > >================================= > > > >Ahh, my Numeric version is '23.7' > > > > > > Earlier versions of Numeric didn't handle it well if the return of the > __array__ method (which I suspect it is calling), was not an actual > Numeric ArrayObject. On 32-bit platforms, the ndarray ArrayObject > structure is the same as the Numeric ArrayObject structure. However, > the PyArray_Descr structure is quite a bit different and so I'm sure the > type information is wrong if Numeric is just trying to use the output of > the __array__ method on the ndarray object. > > Numeric should have been checking the output of the __array__ method and > raising an error. I know this got changed at some point, but I'm not > sure how early. 
> > -Travis > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > Upgrade Numeric to 24.2, now everything is working. Thanks for the quick response. -- iCy-fLaME The body maybe wounded, but it is the mind that hurts. From Fernando.Perez at colorado.edu Wed Feb 1 15:35:37 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 01 Feb 2006 13:35:37 -0700 Subject: [SciPy-user] BLT for python. In-Reply-To: <1138824580.4808.3.camel@localhost> References: <1138824580.4808.3.camel@localhost> Message-ID: <43E11B99.1020600@colorado.edu> Peter Jensen wrote: > I also looked a matplotlib, however it > does not look well suited for interactive > graphs e.g markers are only functional > with wxwindows. For TK they are difficult > to get to work properly. This is probably best answered on the mpl lists, though there's a fair amount of common public so you may get lucky here. > Any recommendations ?. It's a wiki, go for it! Cheers, f From evan.monroig at gmail.com Wed Feb 1 18:41:21 2006 From: evan.monroig at gmail.com (Evan Monroig) Date: Thu, 2 Feb 2006 08:41:21 +0900 Subject: [SciPy-user] Failed to plot array using gnuplot-py In-Reply-To: <43E0FC2F.1090306@gmail.com> References: <43E0FC2F.1090306@gmail.com> Message-ID: <20060201234121.GC8930@localhost.localdomain> On Feb.01 12h21, Robert Kern wrote : > I'm pretty sure the Gnuplot module still uses Numeric and may not > recognize numpy arrays. Just for the info, I downloaded the Gnuplot module and searched-and-replaced Numeric by numpy and I can use it without problems. I'm sure there should be a way to patch it so that both are accepted, but I didn't really have the time. Evan From erickt at dslextreme.com Thu Feb 2 01:23:29 2006 From: erickt at dslextreme.com (Erick Tryzelaar) Date: Wed, 01 Feb 2006 22:23:29 -0800 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> Message-ID: <43E1A561.6000103@dslextreme.com> Any chance we could get a minor version bump to fix this and the dtypechar/dtype.char bug in Lib/weave/standard_array_spec.py (both already fixed in svn)? These two changes get my weave test code and then I can release my darwinports package. Thanks, -e David M. Cooke wrote: > Pearu Peterson writes: > > >> On Fri, 27 Jan 2006, Travis Oliphant wrote: >> >> >>>>> /home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: >>>>> /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:38:6: >>>>> #error >>>>> In : A working template implementation is required by Blitz++ >>>>> (you may need to rerun the compiler/bzconfig script) >>>>> >>>>> I've tried to install Blitz 0.9, so that at least weave can find the header >>>>> file >>>>> "gnu/bzconfig.h" but it didn't really help. >>>>> >>>>> Any ideas what is wrong? >>>>> >>> Try moving the gnu/bzconfig.h file from the installation directory to >>> the weave directory >>> >>> /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/gnu >>> >>> Then try again. >>> >>> We may be missing some files when the upgrade in blitz occurred. >>> >> Indeed, scipy/Lib/weave/blitz/blitz/gnu is an empty directory in scipy >> SVN. >> > > Fixed. I added the bzconfig.h that I had used when I updated it last. 
> > From jaonary at free.fr Thu Feb 2 04:48:30 2006 From: jaonary at free.fr (jaonary at free.fr) Date: Thu, 02 Feb 2006 10:48:30 +0100 Subject: [SciPy-user] PCA and Scipy Message-ID: <1138873710.43e1d56ed16ce@imp1-g19.free.fr> Hi all, I'd like to know if the principal component analysis is already implemented elsewhere in scipy. More generaly, how can I find the list of packages and modules in scipy, and the algo implemented? The link to the API documentation on the web site is not working. Thank you, Jaonary From schofield at ftw.at Thu Feb 2 06:56:05 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 02 Feb 2006 12:56:05 +0100 Subject: [SciPy-user] scipy 0.4.6 release? In-Reply-To: <43E1A561.6000103@dslextreme.com> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> <43E1A561.6000103@dslextreme.com> Message-ID: <43E1F355.1050001@ftw.at> Erick Tryzelaar wrote: >Any chance we could get a minor version bump to fix this and the >dtypechar/dtype.char bug in Lib/weave/standard_array_spec.py (both >already fixed in svn)? These two changes get my weave test code and then >I can release my darwinports package. Thanks, > > I think this is a good idea. The most recent release (0.4.4) also isn't compatible with the latest NumPy (0.9.4). I could work on making a new release this weekend if people agree. -- Ed From oliphant.travis at ieee.org Thu Feb 2 09:29:12 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 02 Feb 2006 07:29:12 -0700 Subject: [SciPy-user] scipy 0.4.6 release? In-Reply-To: <43E1F355.1050001@ftw.at> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> <43E1A561.6000103@dslextreme.com> <43E1F355.1050001@ftw.at> Message-ID: <43E21738.4060105@ieee.org> Ed Schofield wrote: >Erick Tryzelaar wrote: > > > >>Any chance we could get a minor version bump to fix this and the >>dtypechar/dtype.char bug in Lib/weave/standard_array_spec.py (both >>already fixed in svn)? These two changes get my weave test code and then >>I can release my darwinports package. Thanks, >> >> >> >> >I think this is a good idea. The most recent release (0.4.4) also isn't >compatible with the latest NumPy (0.9.4). I could work on making a new >release this weekend if people agree. > > I'll roll out NumPy 0.9.5 at the same time so we have two versions that work together. There have been some bug-fixes and a few (minor) feature changes. But, I am running out of numbers for 1.0 release :-) -Travis From schofield at ftw.at Thu Feb 2 10:38:07 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 02 Feb 2006 16:38:07 +0100 Subject: [SciPy-user] [Numpy-discussion] Re: scipy 0.4.6 release? In-Reply-To: <43E21738.4060105@ieee.org> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> <43E1A561.6000103@dslextreme.com> <43E1F355.1050001@ftw.at> <43E21738.4060105@ieee.org> Message-ID: <43E2275F.3050705@ftw.at> Travis Oliphant wrote: > Ed Schofield wrote: > >> I think this is a good idea. The most recent release (0.4.4) also isn't >> compatible with the latest NumPy (0.9.4). I could work on making a new >> release this weekend if people agree. > > I'll roll out NumPy 0.9.5 at the same time so we have two versions > that work together. There have been some bug-fixes and a few (minor) > feature changes. But, I am running out of numbers for 1.0 release :-) That sounds good :) How about a stream of 1.0 release candidates for Numpy, starting with 1.0-rc1? 
For what it's worth, I think we should exercise some patience and caution before releasing a 1.0 version of NumPy, because this is likely to signify an API freeze. The recent dtype changes are a case in point -- the API is cleaner now, but the change required many small changes in SciPy. SciPy is lucky to have helpful developers close to NumPy too, but some other projects won't be able to respond as quickly to compatibility-breaking improvements. Some things I have in mind: stronger type-checking for unsafe casts, and ensuring operations on matrices return matrices ... ;) -- Ed From dmobley at gmail.com Thu Feb 2 12:53:15 2006 From: dmobley at gmail.com (David Mobley) Date: Thu, 2 Feb 2006 09:53:15 -0800 Subject: [SciPy-user] compiling scipy on Fedora Core 4 -- undefined reference to PyInt_Type Message-ID: Dear All, I'm sure I've done something stupid, but I haven't been able to turn up anything useful with Google, so I need to appeal to the list for help. I'm trying to install scipy under RedHat Fedora Core 4 on a Xeon, and the build crashes with a whole bunch of "undefined reference to..." errors when it gets to the step of running g77 on various fft and fftpack object and shared object libraries. The first error is "In function 'int_from_pyobj': build/src/Lib/fftpack/_fftpackmodule.c:109: undefined reference to 'PyInt_Type'. Can anyone help? I already have NumPy installed and in my PYTHONPATH, and also turned up on the web various suggestions about using lapack/blas/atlas even though they're not absolutely required, so I tried with those as well as without them (first with just lapack and blas, then adding ATLAS) and encountered the same problem every time. Am I missing something eseential? Thanks very much, David Mobley -------------- next part -------------- An HTML attachment was scrubbed... URL: From icy.flame.gm at gmail.com Thu Feb 2 13:18:30 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Thu, 2 Feb 2006 18:18:30 +0000 Subject: [SciPy-user] compiling scipy on Fedora Core 4 -- undefined reference to PyInt_Type In-Reply-To: References: Message-ID: On 2/2/06, David Mobley wrote: > Dear All, > > I'm sure I've done something stupid, but I haven't been able to turn up > anything useful with Google, so I need to appeal to the list for help. > > I'm trying to install scipy under RedHat Fedora Core 4 on a Xeon, and the > build crashes with a whole bunch of "undefined reference to..." errors when > it gets to the step of running g77 on various fft and fftpack object and > shared object libraries. The first error is "In function 'int_from_pyobj': > build/src/Lib/fftpack/_fftpackmodule.c:109: undefined > reference to 'PyInt_Type'. > > Can anyone help? I already have NumPy installed and in my PYTHONPATH, and > also turned up on the web various suggestions about using lapack/blas/atlas > even though they're not absolutely required, so I tried with those as well > as without them (first with just lapack and blas, then adding ATLAS) and > encountered the same problem every time. > > Am I missing something eseential? > > Thanks very much, > David Mobley > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > Which version of scipy are you trying to build? tried the 0.4.4? be sure to yum update everything first. also the Numeric package (python-numeric-23.7-2) for FC4 is kinda out of date, grab the latest one from sourceforge. Be sure to upgrade your gcc to at least 4.0.2. 
I am using FC4 on a P4 machine, the following is what i have, not all of them are needed. blas-3.0-36.fc4 atlas-3.6.0-8.fc4 lapack-3.0-36.fc4 gcc-4.0.2-8.fc4 python-numarray-1.3.3-4.fc4 fftw-2.1.5-8 fftw3-3.0.1-3 Numeric 24.2 scipy-0.4.4 -- iCy-fLaME The body maybe wounded, but it is the mind that hurts. From novak at ucolick.org Thu Feb 2 14:30:54 2006 From: novak at ucolick.org (Gregory Novak) Date: Thu, 02 Feb 2006 11:30:54 -0800 Subject: [SciPy-user] What shape should arg 2 of integrate.trapz be? Message-ID: If I do this: (Note: nrange is like arange, except you specify the number of samples instead of dx). In [474]: x = nrange(-1,1,4) In [475]: y = nrange(-1,1,5) In [476]: z = reshape(arange(20.), (4,5)) In [479]: shape(integrate.trapz(z,axis=0)) Out[479]: (5,) the output is as I expect: I integrate away the (length 4) x dimension and am left with an answer for each y value. However, if I try to specify the x axis, I get the following traceback: In [480]: shape(integrate.trapz(z,x,axis=0)) --------------------------------------------------------------------------- /sw/lib/python2.3/site-packages/scipy/integrate/quadrature.py in trapz(y, x, dx, axis) 96 slice1[axis] = slice(1,None) 97 slice2[axis] = slice(None,-1) ---> 98 return add.reduce(d * (y[slice1]+y[slice2])/2.0,axis) 99 100 def cumtrapz(y, x=None, dx=1.0, axis=-1): ValueError: frames are not aligned I can make this work again by explicitly making the x array be 2dim instead of 1dim: In [481]: shape(integrate.trapz(z,x[:,NewAxis],axis=0)) Out[481]: (5,) The question is: Is this the desired behavior? It seems to me that In[479] is well-specified and should give the correct result. Is there some reason that one would want to pass in a multi-d array as the grid for the independent variable? Thanks, Greg From strawman at astraw.com Thu Feb 2 14:34:03 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 02 Feb 2006 11:34:03 -0800 Subject: [SciPy-user] PCA and Scipy In-Reply-To: <1138873710.43e1d56ed16ce@imp1-g19.free.fr> References: <1138873710.43e1d56ed16ce@imp1-g19.free.fr> Message-ID: <43E25EAB.7080206@astraw.com> PCA is easily implemented using singular value decomposition, which is available in numpy. Sorry I don't have any code available at the moment, but hopefully this gives you a start. Cheers! Andrew jaonary at free.fr wrote: >Hi all, >I'd like to know if the principal component analysis is already implemented >elsewhere in scipy. More generaly, how can I find the list of packages and >modules in scipy, and the algo implemented? The link to the API documentation >on the web site is not working. > >Thank you, > > >Jaonary > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From jdhunter at ace.bsd.uchicago.edu Thu Feb 2 14:32:19 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Thu, 02 Feb 2006 13:32:19 -0600 Subject: [SciPy-user] PCA and Scipy In-Reply-To: <43E25EAB.7080206@astraw.com> (Andrew Straw's message of "Thu, 02 Feb 2006 11:34:03 -0800") References: <1138873710.43e1d56ed16ce@imp1-g19.free.fr> <43E25EAB.7080206@astraw.com> Message-ID: <87slr1h7zw.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Andrew" == Andrew Straw writes: Andrew> PCA is easily implemented using singular value Andrew> decomposition, which is available in numpy. Sorry I don't Andrew> have any code available at the moment, but hopefully this Andrew> gives you a start. from matplotlib.mlab .... 
def prepca(P, frac=0):
    """
    Compute the principal components of P.  P is a numVars x
    numObservations numeric array.  frac is the minimum fraction of
    variance that a component must contain to be included.

    Return values are
      Pcomponents : a num components x num observations numeric array
      Trans       : the weights matrix, ie, Pcomponents = Trans*P
      fracVar     : the fraction of the variance accounted for by each
                    component returned
    """
    U,s,v = svd(P)
    varEach = s**2/P.shape[1]
    totVar = asum(varEach)
    fracVar = divide(varEach,totVar)
    ind = int(asum(fracVar>=frac))
    # select the components that are greater
    Trans = transpose(U[:,:ind])
    # The transformed data
    Pcomponents = matrixmultiply(Trans,P)
    return Pcomponents, Trans, fracVar[:ind]

From pajer at iname.com Thu Feb 2 15:16:52 2006 From: pajer at iname.com (Gary) Date: Thu, 02 Feb 2006 15:16:52 -0500 Subject: [SciPy-user] BLT for python. In-Reply-To: <1138824580.4808.3.camel@localhost> References: <1138824580.4808.3.camel@localhost> Message-ID: <43E268B4.5040104@iname.com> Peter Jensen wrote: >I am new to python, and I am writing a >program to display simulation and >measurements results. > >I have a raw version of the program >which uses gnuplot and Tkinter, however >I would like to make the graphs more >interactive. Marker etc... > >I found BLT for python at : > >http://heim.ifi.uio.no/~hpl/Pmw.Blt/doc/ > >It looks very good Does anybody have >experience with it. Should we add it to > >http://www.scipy.org/Wiki/Topical_Software ? > I have a few real-time (strip-chart-like) apps that use BLT. It works well in real time. It seems that BLT's not in development anymore, so I keep waiting for the day when the apps won't run anymore. The "z" (latest) version of BLT is slightly broken under Linux, but there is a workaround. From jh at oobleck.astro.cornell.edu Thu Feb 2 20:28:15 2006 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Thu, 2 Feb 2006 20:28:15 -0500 Subject: [SciPy-user] PCA and Scipy In-Reply-To: (scipy-user-request@scipy.net) References: Message-ID: <200602030128.k131SFH2027465@oobleck.astro.cornell.edu> > I'd like to know if the principal component analysis is already implemented > elsewhere in scipy. If all else fails, and you are on Unix/Linux, try the brute-force approach: find path-to-scipy -type f -print | xargs egrep -i 'pca|principal components' If you don't find what you want in the SciPy package itself, be sure to check the Topical Software listing on the website, and google. Googling "PCA SciPy" shows there were several PCA-related mailing list posts in the past couple of years. --jh-- From elcorto at gmx.net Fri Feb 3 01:07:58 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 03 Feb 2006 07:07:58 +0100 Subject: [SciPy-user] PCA and Scipy In-Reply-To: <1138873710.43e1d56ed16ce@imp1-g19.free.fr> References: <1138873710.43e1d56ed16ce@imp1-g19.free.fr> Message-ID: <43E2F33E.4030503@gmx.net> jaonary at free.fr wrote: > I'd like to know if the principal component analysis is already implemented > elsewhere in scipy. Have a look at http://mdp-toolkit.sourceforge.net. -- "People like Blood Sausage too. People are Morons!" 
-- Phil Connors, Groundhog Day From oliver.tomic at matforsk.no Fri Feb 3 03:49:39 2006 From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no) Date: Fri, 3 Feb 2006 09:49:39 +0100 Subject: [SciPy-user] PCA and Scipy In-Reply-To: <1138873710.43e1d56ed16ce@imp1-g19.free.fr> Message-ID: Hi Jaonary, here is a simple PCA-class that I wrote some time ago and which covers more of that what I needed in my project. I wanted the scores- and loadings values to be the same as in Unscrambler (http://www.camo.com/rt/Products/Unscrambler/unscrambler.html). Besides that I also needed correlation loadings, which I could not find in any other package. I am sure there are more elegant ways to this, but it works and I'd be happy to share it. Hope that helps Oliver (See attached file: pca.py) scipy-user-bounces at scipy.net wrote on 02.02.2006 10:48:30: > Hi all, > I'd like to know if the principal component analysis is already implemented > elsewhere in scipy. More generaly, how can I find the list of packages and > modules in scipy, and the algo implemented? The link to the API documentation > on the web site is not working. > Thank you, > > Jaonary > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -------------- next part -------------- A non-text attachment was scrubbed... Name: pca.py Type: application/octet-stream Size: 3796 bytes Desc: not available URL: From aisaac at american.edu Fri Feb 3 10:20:44 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 3 Feb 2006 10:20:44 -0500 Subject: [SciPy-user] PCA and Scipy In-Reply-To: References: Message-ID: On Fri, 3 Feb 2006, oliver.tomic at matforsk.no apparently wrote: > I'd be happy to share it. Just a reminder: nowadays it is best to say "I'm placing it in the public domain" if that's what you mean. Is it? If not, specifying the license is helpful. http://www.opensource.org/licenses/mit-license.php It is a shame that "sharing" has become so encumbered, but there it is. Cheers, Alan Isaac From oliver.tomic at matforsk.no Fri Feb 3 10:31:30 2006 From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no) Date: Fri, 3 Feb 2006 16:31:30 +0100 Subject: [SciPy-user] PCA and Scipy In-Reply-To: Message-ID: Agreed, I should have expressed myself more clearly. I was just using my school-english :-) And you got me right ... what I meant to say is that I am placing this little piece of code in the public domain. No licenses here. Cheers Oliver scipy-user-bounces at scipy.net wrote on 03.02.2006 16:20:44: > On Fri, 3 Feb 2006, oliver.tomic at matforsk.no apparently wrote: > > I'd be happy to share it. > > Just a reminder: > nowadays it is best to say > "I'm placing it in the public domain" if that's what you mean. > Is it? If not, specifying the license is helpful. > http://www.opensource.org/licenses/mit-license.php > > It is a shame that "sharing" has become so encumbered, > but there it is. > > Cheers, > Alan Isaac > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From ryanlists at gmail.com Fri Feb 3 10:39:12 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 3 Feb 2006 10:39:12 -0500 Subject: [SciPy-user] io.loadmat In-Reply-To: References: <43E05CBA.8010306@ieee.org> Message-ID: Any news on this? It is not a show stopper for me but has forced me to first process my experimental data on my wife's computer with old scipy. 
I don't want to lose this functionality in new scipy. Ryan On 2/1/06, Ryan Krauss wrote: > Thanks Travis. I was able to get the latest SVN and that one now > loads. I have one more (I think this will be the last). Same place: > http://www.imdl.gatech.edu/ryan/python/ > > I replaced the other link with a file called matfile2.tar.gz > It should unpack a mat file called vertical1_11_24_04_lowfreq.mat > > This is the format that my experimental setup on campus saves. It is > supposed to be mat compliant. I was able to load this exact same data > file using old scipy in windows on my wife's computer last night. > > With the latest SVN I get the following: > In [3]: temp=io.loadmat('vertical2_11_24_04_lowfreq.mat') > --------------------------------------------------------------------------- > exceptions.ValueError Traceback (most > recent call last) > > /home/ryan/thesis/actuator_modeling/torque_testing/data_Nov2004/vertical_data/ console> > > /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, > dict, appendmat, basename) > 790 if (M > 1): > 791 fid.close() > --> 792 raise ValueError, "Unsupported binary format." > 793 if (O != 0): > 794 fid.close() > > ValueError: Unsupported binary format. > > Just to try and get some more information, I hacked up mio.py starting > around line 790: > M,rest = divmod(header[0],1000) > O,rest = divmod(rest,100) > P,rest = divmod(rest,10) > T = rest > > print('M='+str(M)) > print('O='+str(O)) > print('P='+str(P)) > print('T='+str(T)) > if (M > 1): > print('Bad M') > # fid.close() > # raise ValueError, "Unsupported binary format." > if (O != 0): > print('Bad O') > # fid.close() > # raise ValuError, "Hundreds digit of first integer should be zero." > > if (T not in [0,1]): > print('Bad T') > # fid.close() > # raise ValueError, "Cannot handle sparse matrices, yet." > > > And then I get this: > In [1]: temp=io.loadmat('vertical2_11_24_04_lowfreq.mat') > M=0 > O=0 > P=1 > T=0 > M=1818583 > O=9 > P=0 > T=7 > Bad M > Bad O > Bad T > --------------------------------------------------------------------------- > exceptions.MemoryError Traceback (most > recent call last) > > /home/ryan/thesis/actuator_modeling/torque_testing/data_Nov2004/vertical_data/ console> > > /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, > dict, appendmat, basename) > 808 storage = {0:'d',1:'f',2:'i',3:'h',4:'H',5:'B'}[P] > 809 > --> 810 varname = fid.fread(header[-1],'char')[:-1] > 811 varname = varname.tostring() > 812 defnames.append(varname) > > /usr/lib/python2.4/site-packages/scipy/io/mio.py in read(self, count, > stype, rtype, bs, c_is_b) > 283 if count == 0: > 284 return zeros(0,rtype) > --> 285 retval = numpyio.fread(self, count, stype, rtype, bs) > 286 if len(retval) == 1: > 287 retval = retval[0] > > MemoryError: > > So that it looks like it is passing through with valid values for > M,T,O,and P once and then getting bad values on a second pass. I > don't know what any of this means. > > I believe this is a matlab v4 file. Old scipy and octave can both open it. > > Thanks, > > Ryan > > On 2/1/06, Travis Oliphant wrote: > > Ryan Krauss wrote: > > > > >I am having trouble loading Matlab .mat files that loaded just fine > > >under the old scipy. Here is the error message: > > > > > > > > Hi Ryan. With latest SVN of scipy, I'm able to load your matfile just > > fine. > > > > I'm not sure what the issue was, but I did change the getsize_type > > function in mio.py to make sure that the second argument returned is > > always a character. 
There was a chance it could be returned as a > > type-object. > > > > If you can't get the latest SVN version, let me know what version of > > numpy you are using and I can give you a couple-line edit to the mio.py > > file that should work. > > > > -Travis > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > From ryanlists at gmail.com Fri Feb 3 12:46:34 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 3 Feb 2006 12:46:34 -0500 Subject: [SciPy-user] symbolic strings Message-ID: I may be getting in over my head, but I think this can work. I do a lot of analysis with matrices using the transfer matrix method. Sometimes I want to do numeric work with them and sometimes I use python to generate an input script for Maxima. Right now I have two methods for my transfer matrix element, one that outputs a numeric matrix and one that outputs a string for Maxima. I am working on a new element and would like to combined these two expressions by creating a symbolic string method so that expressions like a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) make sense whether L0, m1, r1, s, etc are floats or symstrs (my new class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ methods but don't know if there are methods to override to deal with the parenthesis and the unary - out in front. Can this be done with only a little bit more work? Basically I want the output of this expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a string and a float or complex if the variables are numbers. Thanks, Ryan From oliphant.travis at ieee.org Fri Feb 3 13:08:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 03 Feb 2006 11:08:33 -0700 Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: <43E39C21.5080902@ieee.org> Ryan Krauss wrote: >I may be getting in over my head, but I think this can work. I do a >lot of analysis with matrices using the transfer matrix method. >Sometimes I want to do numeric work with them and sometimes I use >python to generate an input script for Maxima. Right now I have two >methods for my transfer matrix element, one that outputs a numeric >matrix and one that outputs a string for Maxima. I am working on a >new element and would like to combined these two expressions by >creating a symbolic string method so that expressions like >a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > >make sense whether L0, m1, r1, s, etc are floats or symstrs (my new >class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ >methods but don't know if there are methods to override to deal with >the parenthesis and the unary - out in front. Can this be done with >only a little bit more work? Basically I want the output of this >expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a >string and a float or complex if the variables are numbers. > > This can be done, but you would basically have to add the parenthesis in yourself as you build up your string... Yes, the unary '-' can be over-written (see __neg__ special method). 
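For instance, a bare-bones sketch of that approach (the class name and the crude wrap-everything parenthesizing are purely illustrative, not the symstr class being described above):

class SymStr(str):
    """String that builds up expression text under arithmetic operators."""
    def _wrap(self, obj):
        # Parenthesize compound operands so operator precedence survives;
        # simple atoms like 'k1' or '2' pass through untouched.
        s = str(obj)
        for op in '+-*/':
            if op in s:
                return '(' + s + ')'
        return s
    def _combine(self, left, op, right):
        return SymStr(self._wrap(left) + op + self._wrap(right))
    def __add__(self, other): return self._combine(self, '+', other)
    def __sub__(self, other): return self._combine(self, '-', other)
    def __mul__(self, other): return self._combine(self, '*', other)
    def __div__(self, other): return self._combine(self, '/', other)  # Python 2 '/' hook
    def __pow__(self, other): return self._combine(self, '**', other)
    def __neg__(self): return SymStr('-' + self._wrap(self))

s, k1, c1 = SymStr('s'), SymStr('k1'), SymStr('c1')
print -(c1*s + k1) / (c1*s + k1)
# -> (-((c1*s)+k1))/((c1*s)+k1)   over-parenthesized but unambiguous

With plain floats bound to the same names, the identical expression just evaluates numerically, which is the point. Right-hand variants (__radd__, __rmul__, __rdiv__, ...) are needed as well whenever a bare number can appear on the left of an operator.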
-Travis From rclewley at cam.cornell.edu Fri Feb 3 13:19:10 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Fri, 3 Feb 2006 13:19:10 -0500 (EST) Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: Hi Ryan, Recently I threw together some fairly elementary symbolic expression classes in python, mainly for my own research use, from which I think you might be able to glean something useful. These classes do not permit assignments in the way you mention, although that would be a simple extension. Also, my code does not enable symbolic matrices, although symbolically defined functions are OK. Otherwise, I think they might help you do what you need. Constructing terms is easy. You can use the constructor method on my symbolic classes, but there are also capitalized symbolic objects in-built for the elementary math functions ('Sin', 'Exp', etc.) which can be called with a float, a string, or another symbolic object. These calls return another symbolic object. I've overridden all of the necessary arithmetic methods on these objects (I think!). There are several other useful goodies associated with these objects... There are no "simplification" tools beyond what the python parser provides for braces using eval(), etc, but it is easy to substitute sub-expressions, floats for symbols, etc. My code does not involve a real parser of its own, and is far from being optimized for efficiency when dealing with massive expressions. With these caveats out of the way, please feel free to check out the classes QuantSpec and Quantity in ModelSpec.py. This file is part of our PyDSTool package at http://pydstool.sourceforge.net although it can be used standalone. (FYI, ModelSpec.py provides other tools for building dynamical systems models from modular "components" made up of symbolic expressions.) The package is provided with a BSD license. The URL takes you to our wiki: if you click on Project Overview you'll see links to the page Symbolic, if you want to read some documentation. If you click on Code Topics on the side-bar you'll be able to download the code. ModelSpec.py can also be run as a script, which will show off some of its features. Let me know if this is of any use or you have any suggestions. I'd like to make this prototype code more useful for other people (and eventually, efficient :). -Rob On Fri, 3 Feb 2006, Ryan Krauss wrote: > I may be getting in over my head, but I think this can work. I do a > lot of analysis with matrices using the transfer matrix method. > Sometimes I want to do numeric work with them and sometimes I use > python to generate an input script for Maxima. Right now I have two > methods for my transfer matrix element, one that outputs a numeric > matrix and one that outputs a string for Maxima. I am working on a > new element and would like to combined these two expressions by > creating a symbolic string method so that expressions like > a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > make sense whether L0, m1, r1, s, etc are floats or symstrs (my new > class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ > methods but don't know if there are methods to override to deal with > the parenthesis and the unary - out in front. Can this be done with > only a little bit more work? Basically I want the output of this > expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a > string and a float or complex if the variables are numbers. 
> > Thanks, > > Ryan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > ----------------------------------- Rob Clewley Research Associate Department of Mathematics and Center for Applied Mathematics Cornell University Ithaca, NY 14853 www.cam.cornell.edu/~rclewley Tel: 607-255-7760 Fax: 607-255-9860 From ryanlists at gmail.com Fri Feb 3 13:23:34 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 3 Feb 2006 13:23:34 -0500 Subject: [SciPy-user] symbolic strings In-Reply-To: <43E39C21.5080902@ieee.org> References: <43E39C21.5080902@ieee.org> Message-ID: Thanks Travis, it sounds like I am on the right track. I defined checks in the __mul__ and __div__ methods to look for compound expressions and add parentheses. I am running into a weird problem though. I broke the expression into a numerator and denominator to get the parentheses to work, but when I try a=anum/aden*-1 I get an error: TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr'. But symstr does have a __div__ method and when I run the code with the ipython pdb magic turned on, I have no trouble executing temp=anum/aden*-1 and getting the correct result. Why does it work in the debugger and not in the script? Below is a chunk of the ipython session. Thanks, Ryan /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat(self, s) 55 anum=L0*m1*r1*s**2-c1*s-k1 56 aden=c1*s+k1 ---> 57 a=anum/aden*-1 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) 59 b=(-1)*((L0*m1*r1**2-L0*L1*m1*r1+Iz1*L0)*s**2+(c1*L1+c1*L0)*s+k1*L1+k1*L0)/(c1*s+k1) TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr' /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() 56 aden=c1*s+k1 ---> 57 a=anum/aden*-1 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) ipdb> temp=anum/aden*-1 ipdb> temp '((Ll0*ml1*rl1*s**2-cj1*s-kj1)/(cj1*s+kj1))*(-1)' ipdb> type(anum) ipdb> type(aden) ipdb> rwkmisc.symstr(1) '1' ipdb> On 2/3/06, Travis Oliphant wrote: > Ryan Krauss wrote: > > >I may be getting in over my head, but I think this can work. I do a > >lot of analysis with matrices using the transfer matrix method. > >Sometimes I want to do numeric work with them and sometimes I use > >python to generate an input script for Maxima. Right now I have two > >methods for my transfer matrix element, one that outputs a numeric > >matrix and one that outputs a string for Maxima. I am working on a > >new element and would like to combined these two expressions by > >creating a symbolic string method so that expressions like > >a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > > >make sense whether L0, m1, r1, s, etc are floats or symstrs (my new > >class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ > >methods but don't know if there are methods to override to deal with > >the parenthesis and the unary - out in front. Can this be done with > >only a little bit more work? Basically I want the output of this > >expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a > >string and a float or complex if the variables are numbers. > > > > > This can be done, but you would basically have to add the parenthesis in > yourself as you build up your string... > > Yes, the unary '-' can be over-written (see __neg__ special method). 
> > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From rclewley at cam.cornell.edu Fri Feb 3 13:28:57 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Fri, 3 Feb 2006 13:28:57 -0500 (EST) Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: Ryan, you'll probably need __rdiv__ and __rmul__ methods to help with this, in case the other objects involved in the construction are not symbolic objects. There are a bunch of cases you have to check to add braces appropriately if one of the operands is "compound", and you can find an example of what's needed in my code for the QuantSpec class. -Rob On Fri, 3 Feb 2006, Ryan Krauss wrote: > Thanks Travis, it sounds like I am on the right track. I defined > checks in the __mul__ and __div__ methods to look for compound > expressions and add parentheses. > > I am running into a weird problem though. I broke the expression into > a numerator and denominator to get the parentheses to work, but when I > try a=anum/aden*-1 > I get an error: TypeError: unsupported operand type(s) for /: 'symstr' > and 'symstr'. But symstr does have a __div__ method and when I run > the code with the ipython pdb magic turned on, I have no trouble > executing temp=anum/aden*-1 and getting the correct result. Why does > it work in the debugger and not in the script? > > Below is a chunk of the ipython session. > > Thanks, > > Ryan > > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat(self, s) > 55 anum=L0*m1*r1*s**2-c1*s-k1 > 56 aden=c1*s+k1 > ---> 57 a=anum/aden*-1 > 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > 59 > b=(-1)*((L0*m1*r1**2-L0*L1*m1*r1+Iz1*L0)*s**2+(c1*L1+c1*L0)*s+k1*L1+k1*L0)/(c1*s+k1) > > TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr' > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() > 56 aden=c1*s+k1 > ---> 57 a=anum/aden*-1 > 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > ipdb> temp=anum/aden*-1 > ipdb> temp > '((Ll0*ml1*rl1*s**2-cj1*s-kj1)/(cj1*s+kj1))*(-1)' > ipdb> type(anum) > > ipdb> type(aden) > > ipdb> rwkmisc.symstr(1) > '1' > ipdb> > > > On 2/3/06, Travis Oliphant wrote: >> Ryan Krauss wrote: >> >>> I may be getting in over my head, but I think this can work. I do a >>> lot of analysis with matrices using the transfer matrix method. >>> Sometimes I want to do numeric work with them and sometimes I use >>> python to generate an input script for Maxima. Right now I have two >>> methods for my transfer matrix element, one that outputs a numeric >>> matrix and one that outputs a string for Maxima. I am working on a >>> new element and would like to combined these two expressions by >>> creating a symbolic string method so that expressions like >>> a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>> >>> make sense whether L0, m1, r1, s, etc are floats or symstrs (my new >>> class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ >>> methods but don't know if there are methods to override to deal with >>> the parenthesis and the unary - out in front. Can this be done with >>> only a little bit more work? Basically I want the output of this >>> expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a >>> string and a float or complex if the variables are numbers. >>> >>> >> This can be done, but you would basically have to add the parenthesis in >> yourself as you build up your string... 
>> >> Yes, the unary '-' can be over-written (see __neg__ special method). >> >> -Travis >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > ----------------------------------- Rob Clewley Research Associate Department of Mathematics and Center for Applied Mathematics Cornell University Ithaca, NY 14853 www.cam.cornell.edu/~rclewley Tel: 607-255-7760 Fax: 607-255-9860 From ryanlists at gmail.com Fri Feb 3 14:16:32 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 3 Feb 2006 14:16:32 -0500 Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: I am checking out QuantSpec and it seems very powerful and like it should be able to do what I need. I am having trouble defining a __pow__ method though: I tried just adding this in the Quant class: def __pow__(self, other): return self.__combine(other, "**") following the example of; def __mul__(self, other): return self.__combine(other, "*") I was originally trying to evaluate this expression: anum=L0*m1*r1*s**2-c1*s-k1 where everything on the right is a ModelSpec.Par I get this message: *** ValueError: Operator ** is not allowed. Please use the pow() call I changed to pow(s,2), but it still didn't work: ipdb> type(s) Out[3]: ipdb> s**2 TypeError: "unsupported operand type(s) for ** or pow(): 'Par' and 'float'" What would need to change to make Par**2 work? Thanks Rob, I think this can be really helpful. Ryan On 2/3/06, Robert Clewley wrote: > Ryan, you'll probably need __rdiv__ and __rmul__ methods to help with > this, in case the other objects involved in the construction are not > symbolic objects. There are a bunch of cases you have to check to add > braces appropriately if one of the operands is "compound", and you can > find an example of what's needed in my code for the QuantSpec > class. > > -Rob > > > On Fri, 3 Feb 2006, Ryan Krauss wrote: > > > Thanks Travis, it sounds like I am on the right track. I defined > > checks in the __mul__ and __div__ methods to look for compound > > expressions and add parentheses. > > > > I am running into a weird problem though. I broke the expression into > > a numerator and denominator to get the parentheses to work, but when I > > try a=anum/aden*-1 > > I get an error: TypeError: unsupported operand type(s) for /: 'symstr' > > and 'symstr'. But symstr does have a __div__ method and when I run > > the code with the ipython pdb magic turned on, I have no trouble > > executing temp=anum/aden*-1 and getting the correct result. Why does > > it work in the debugger and not in the script? > > > > Below is a chunk of the ipython session. 
> > > > Thanks, > > > > Ryan > > > > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat(self, s) > > 55 anum=L0*m1*r1*s**2-c1*s-k1 > > 56 aden=c1*s+k1 > > ---> 57 a=anum/aden*-1 > > 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > 59 > > b=(-1)*((L0*m1*r1**2-L0*L1*m1*r1+Iz1*L0)*s**2+(c1*L1+c1*L0)*s+k1*L1+k1*L0)/(c1*s+k1) > > > > TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr' > > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() > > 56 aden=c1*s+k1 > > ---> 57 a=anum/aden*-1 > > 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > > > ipdb> temp=anum/aden*-1 > > ipdb> temp > > '((Ll0*ml1*rl1*s**2-cj1*s-kj1)/(cj1*s+kj1))*(-1)' > > ipdb> type(anum) > > > > ipdb> type(aden) > > > > ipdb> rwkmisc.symstr(1) > > '1' > > ipdb> > > > > > > On 2/3/06, Travis Oliphant wrote: > >> Ryan Krauss wrote: > >> > >>> I may be getting in over my head, but I think this can work. I do a > >>> lot of analysis with matrices using the transfer matrix method. > >>> Sometimes I want to do numeric work with them and sometimes I use > >>> python to generate an input script for Maxima. Right now I have two > >>> methods for my transfer matrix element, one that outputs a numeric > >>> matrix and one that outputs a string for Maxima. I am working on a > >>> new element and would like to combined these two expressions by > >>> creating a symbolic string method so that expressions like > >>> a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > >>> > >>> make sense whether L0, m1, r1, s, etc are floats or symstrs (my new > >>> class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ > >>> methods but don't know if there are methods to override to deal with > >>> the parenthesis and the unary - out in front. Can this be done with > >>> only a little bit more work? Basically I want the output of this > >>> expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a > >>> string and a float or complex if the variables are numbers. > >>> > >>> > >> This can be done, but you would basically have to add the parenthesis in > >> yourself as you build up your string... > >> > >> Yes, the unary '-' can be over-written (see __neg__ special method). > >> > >> -Travis > >> > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.net > >> http://www.scipy.net/mailman/listinfo/scipy-user > >> > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > ----------------------------------- > Rob Clewley > Research Associate > Department of Mathematics > and Center for Applied Mathematics > Cornell University > Ithaca, NY 14853 > www.cam.cornell.edu/~rclewley > Tel: 607-255-7760 > Fax: 607-255-9860 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From ryanlists at gmail.com Fri Feb 3 14:21:54 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 3 Feb 2006 14:21:54 -0500 Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: So, I tried to get around the pow problem by just using *s*s, since I only need squares in this particular function. 
That gets me as far as I was before: /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() 58 aden=c1*s+k1 ---> 59 temp=anum/aden 60 temp2=temp*-1 ipdb> TypeError: "unsupported operand type(s) for /: 'QuantSpec' and 'QuantSpec'" /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() 58 aden=c1*s+k1 ---> 59 temp=anum/aden 60 temp2=temp*-1 Help. Ryan On 2/3/06, Ryan Krauss wrote: > I am checking out QuantSpec and it seems very powerful and like it > should be able to do what I need. I am having trouble defining a > __pow__ method though: > > I tried just adding this in the Quant class: > def __pow__(self, other): > return self.__combine(other, "**") > > following the example of; > def __mul__(self, other): > return self.__combine(other, "*") > > I was originally trying to evaluate this expression: > anum=L0*m1*r1*s**2-c1*s-k1 > where everything on the right is a ModelSpec.Par > > I get this message: > *** ValueError: Operator ** is not allowed. Please use the pow() call > > I changed to pow(s,2), but it still didn't work: > ipdb> type(s) > Out[3]: > ipdb> s**2 > TypeError: "unsupported operand type(s) for ** or pow(): 'Par' and 'float'" > > What would need to change to make Par**2 work? > > Thanks Rob, I think this can be really helpful. > > Ryan > > On 2/3/06, Robert Clewley wrote: > > Ryan, you'll probably need __rdiv__ and __rmul__ methods to help with > > this, in case the other objects involved in the construction are not > > symbolic objects. There are a bunch of cases you have to check to add > > braces appropriately if one of the operands is "compound", and you can > > find an example of what's needed in my code for the QuantSpec > > class. > > > > -Rob > > > > > > On Fri, 3 Feb 2006, Ryan Krauss wrote: > > > > > Thanks Travis, it sounds like I am on the right track. I defined > > > checks in the __mul__ and __div__ methods to look for compound > > > expressions and add parentheses. > > > > > > I am running into a weird problem though. I broke the expression into > > > a numerator and denominator to get the parentheses to work, but when I > > > try a=anum/aden*-1 > > > I get an error: TypeError: unsupported operand type(s) for /: 'symstr' > > > and 'symstr'. But symstr does have a __div__ method and when I run > > > the code with the ipython pdb magic turned on, I have no trouble > > > executing temp=anum/aden*-1 and getting the correct result. Why does > > > it work in the debugger and not in the script? > > > > > > Below is a chunk of the ipython session. > > > > > > Thanks, > > > > > > Ryan > > > > > > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat(self, s) > > > 55 anum=L0*m1*r1*s**2-c1*s-k1 > > > 56 aden=c1*s+k1 > > > ---> 57 a=anum/aden*-1 > > > 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > > 59 > > > b=(-1)*((L0*m1*r1**2-L0*L1*m1*r1+Iz1*L0)*s**2+(c1*L1+c1*L0)*s+k1*L1+k1*L0)/(c1*s+k1) > > > > > > TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr' > > > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() > > > 56 aden=c1*s+k1 > > > ---> 57 a=anum/aden*-1 > > > 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > > > > > ipdb> temp=anum/aden*-1 > > > ipdb> temp > > > '((Ll0*ml1*rl1*s**2-cj1*s-kj1)/(cj1*s+kj1))*(-1)' > > > ipdb> type(anum) > > > > > > ipdb> type(aden) > > > > > > ipdb> rwkmisc.symstr(1) > > > '1' > > > ipdb> > > > > > > > > > On 2/3/06, Travis Oliphant wrote: > > >> Ryan Krauss wrote: > > >> > > >>> I may be getting in over my head, but I think this can work. 
I do a > > >>> lot of analysis with matrices using the transfer matrix method. > > >>> Sometimes I want to do numeric work with them and sometimes I use > > >>> python to generate an input script for Maxima. Right now I have two > > >>> methods for my transfer matrix element, one that outputs a numeric > > >>> matrix and one that outputs a string for Maxima. I am working on a > > >>> new element and would like to combined these two expressions by > > >>> creating a symbolic string method so that expressions like > > >>> a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > >>> > > >>> make sense whether L0, m1, r1, s, etc are floats or symstrs (my new > > >>> class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ > > >>> methods but don't know if there are methods to override to deal with > > >>> the parenthesis and the unary - out in front. Can this be done with > > >>> only a little bit more work? Basically I want the output of this > > >>> expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a > > >>> string and a float or complex if the variables are numbers. > > >>> > > >>> > > >> This can be done, but you would basically have to add the parenthesis in > > >> yourself as you build up your string... > > >> > > >> Yes, the unary '-' can be over-written (see __neg__ special method). > > >> > > >> -Travis > > >> > > >> _______________________________________________ > > >> SciPy-user mailing list > > >> SciPy-user at scipy.net > > >> http://www.scipy.net/mailman/listinfo/scipy-user > > >> > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.net > > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > ----------------------------------- > > Rob Clewley > > Research Associate > > Department of Mathematics > > and Center for Applied Mathematics > > Cornell University > > Ithaca, NY 14853 > > www.cam.cornell.edu/~rclewley > > Tel: 607-255-7760 > > Fax: 607-255-9860 > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > From dmobley at gmail.com Fri Feb 3 14:23:01 2006 From: dmobley at gmail.com (David Mobley) Date: Fri, 3 Feb 2006 11:23:01 -0800 Subject: [SciPy-user] compiling scipy on Fedora Core 4 -- undefined reference to PyInt_Type --follow-up Message-ID: Following up on the below problems, I'm still trying to compile scipy 0.4.4under RedHat Fedora Core 4. I'm now using gcc 4.0.2 and atlas 3.6.0-8, blas 3.0-36, lapack 3.0-35, numpy 0.94, numarray 1.4.0, and Numeric 24.2. We've also tried on an Opteron (fedora core 4) with no success. I'm still getting the same error when I try to compile as before: > The first error is "In function 'int_from_pyobj': build/src/Lib/fftpack/_fftpackmodule.c:109: undefine d reference to 'PyInt_Type'. Any further suggestions? This is all because I'm hoping to install f2py (although I anticipate using scipy for other reasons later) to be able to interface with some fortran code I'm trying to use for my research relating to computational drug design. Anyway, any help will be greatly appreciated. I still suspect I've done something fundamental wrong, as it doesn't seem like this should be so difficult. Thanks, David On 2/2/06, David Mobley wrote: > > Dear All, > > > > I'm sure I've done something stupid, but I haven't been able to turn up > > anything useful with Google, so I need to appeal to the list for help. 
> > > > I'm trying to install scipy under RedHat Fedora Core 4 on a Xeon, and > the > > build crashes with a whole bunch of "undefined reference to..." errors > when > > it gets to the step of running g77 on various fft and fftpack object and > > shared object libraries. The first error is "In function > 'int_from_pyobj': > > build/src/Lib/fftpack/_fftpackmodule.c:109: undefined > > reference to 'PyInt_Type'. > > > > Can anyone help? I already have NumPy installed and in my PYTHONPATH, > and > > also turned up on the web various suggestions about using > lapack/blas/atlas > > even though they're not absolutely required, so I tried with those as > well > > as without them (first with just lapack and blas, then adding ATLAS) and > > encountered the same problem every time. > > > > Am I missing something eseential? > > > > Thanks very much, > > David Mobley > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > Which version of scipy are you trying to build? tried the 0.4.4? > > be sure to yum update everything first. also the Numeric package > (python-numeric-23.7-2) for FC4 is kinda out of date, grab the latest > one from sourceforge. Be sure to upgrade your gcc to at least 4.0.2. > > I am using FC4 on a P4 machine, the following is what i have, not all > of them are needed. > > blas-3.0-36.fc4 > atlas-3.6.0-8.fc4 > lapack-3.0-36.fc4 > gcc-4.0.2-8.fc4 > python-numarray-1.3.3-4.fc4 > > fftw-2.1.5-8 > fftw3-3.0.1-3 > Numeric 24.2 > > scipy-0.4.4 > > > -- > iCy-fLaME > The body maybe wounded, but it is the mind that hurts. > > ------------------------------ > > Message: 9 > Date: Thu, 02 Feb 2006 11:30:54 -0800 > From: Gregory Novak > Subject: [SciPy-user] What shape should arg 2 of integrate.trapz be? > To: SciPy Users List > Message-ID: > Content-Type: text/plain; charset=us-ascii > > If I do this: > > (Note: nrange is like arange, except you specify the number of samples > instead of dx). > > In [474]: x = nrange(-1,1,4) > In [475]: y = nrange(-1,1,5) > In [476]: z = reshape(arange(20.), (4,5)) > In [479]: shape(integrate.trapz(z,axis=0)) > Out[479]: (5,) > > the output is as I expect: I integrate away the (length 4) x dimension > and am left with an answer for each y value. > > However, if I try to specify the x axis, I get the following traceback: > > In [480]: shape(integrate.trapz(z,x,axis=0)) > > --------------------------------------------------------------------------- > /sw/lib/python2.3/site-packages/scipy/integrate/quadrature.py in trapz(y, > x, dx, axis) > 96 slice1[axis] = slice(1,None) > 97 slice2[axis] = slice(None,-1) > ---> 98 return add.reduce(d * (y[slice1]+y[slice2])/2.0,axis) > 99 > 100 def cumtrapz(y, x=None, dx=1.0, axis=-1): > > ValueError: frames are not aligned > > > I can make this work again by explicitly making the x array be 2dim > instead of 1dim: > > In [481]: shape(integrate.trapz(z,x[:,NewAxis],axis=0)) > Out[481]: (5,) > > The question is: Is this the desired behavior? It seems to me that > In[479] is well-specified and should give the correct result. Is > there some reason that one would want to pass in a multi-d array as > the grid for the independent variable? 
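A compact, self-contained version of the workaround shown in In [481] above -- written here with current numpy names (linspace in place of the nrange helper, np.newaxis in place of Numeric's NewAxis), so the exact spelling is an assumption rather than a copy of the original session:

import numpy as np
from scipy import integrate

x = np.linspace(-1.0, 1.0, 4)        # independent variable along axis 0
z = np.arange(20.0).reshape(4, 5)    # integrand sampled on a 4 x 5 grid

# Giving x an explicit trailing axis lets it broadcast against z along
# axis 0, which avoids the "frames are not aligned" error quoted above.
result = integrate.trapz(z, x[:, np.newaxis], axis=0)
print(result.shape)                  # -> (5,)

integrate.trapz here follows the argument order shown in the traceback above, trapz(y, x, dx, axis); numpy's own trapz accepts the same reshaped x.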
> > Thanks, > Greg > > > > ------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > End of SciPy-user Digest, Vol 30, Issue 3 > ***************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivazquez at ivazquez.net Fri Feb 3 14:30:13 2006 From: ivazquez at ivazquez.net (Ignacio Vazquez-Abrams) Date: Fri, 03 Feb 2006 14:30:13 -0500 Subject: [SciPy-user] compiling scipy on Fedora Core 4 -- undefined reference to PyInt_Type --follow-up In-Reply-To: References: Message-ID: <1138995013.5160.5.camel@ignacio.lan> On Fri, 2006-02-03 at 11:23 -0800, David Mobley wrote: > This is all because I'm hoping to install f2py (although I anticipate > using scipy for other reasons later) to be able to interface with some > fortran code I'm trying to use for my research relating to > computational drug design. The old f2py is already in Fedora Extras. The new f2py is part of numpy which is currently under review for Fedora Extras. -- Ignacio Vazquez-Abrams http://fedora.ivazquez.net/ gpg --keyserver hkp://subkeys.pgp.net --recv-key 38028b72 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From robert.kern at gmail.com Fri Feb 3 14:36:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 03 Feb 2006 13:36:09 -0600 Subject: [SciPy-user] compiling scipy on Fedora Core 4 -- undefined reference to PyInt_Type --follow-up In-Reply-To: References: Message-ID: <43E3B0A9.5000403@gmail.com> David Mobley wrote: > Following up on the below problems, I'm still trying to compile scipy > 0.4.4 under RedHat Fedora Core 4. I'm now using gcc 4.0.2 and atlas > 3.6.0-8, blas 3.0-36, lapack 3.0-35, numpy 0.94, numarray 1.4.0, and > Numeric 24.2. We've also tried on an Opteron (fedora core 4) with no > success. I'm still getting the same error when I try to compile as before: > >> The first error is "In function 'int_from_pyobj': > build/src/Lib/fftpack/_fftpackmodule.c:109: undefine d reference to > 'PyInt_Type'. > > Any further suggestions? Are you sure that you are picking up Python.h? > This is all because I'm hoping to install f2py (although I anticipate > using scipy for other reasons later) to be able to interface with some > fortran code I'm trying to use for my research relating to computational > drug design. Well, if all you need right now is f2py, it's part of numpy, not scipy. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ryanlists at gmail.com Fri Feb 3 14:39:44 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 3 Feb 2006 14:39:44 -0500 Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: So all of my scripts begin with from __future__ import division, so defining a __truediv_ method has my symstr class working. I don't seem to be able to add __truediv__ or __pow__ to QuantSpec or Quantity though. Any thoughts? Ryan On 2/3/06, Ryan Krauss wrote: > So, I tried to get around the pow problem by just using *s*s, since I > only need squares in this particular function. 
That gets me as far as > I was before: > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() > 58 aden=c1*s+k1 > ---> 59 temp=anum/aden > 60 temp2=temp*-1 > > ipdb> > TypeError: "unsupported operand type(s) for /: 'QuantSpec' and 'QuantSpec'" > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() > 58 aden=c1*s+k1 > ---> 59 temp=anum/aden > 60 temp2=temp*-1 > > Help. > > Ryan > > On 2/3/06, Ryan Krauss wrote: > > I am checking out QuantSpec and it seems very powerful and like it > > should be able to do what I need. I am having trouble defining a > > __pow__ method though: > > > > I tried just adding this in the Quant class: > > def __pow__(self, other): > > return self.__combine(other, "**") > > > > following the example of; > > def __mul__(self, other): > > return self.__combine(other, "*") > > > > I was originally trying to evaluate this expression: > > anum=L0*m1*r1*s**2-c1*s-k1 > > where everything on the right is a ModelSpec.Par > > > > I get this message: > > *** ValueError: Operator ** is not allowed. Please use the pow() call > > > > I changed to pow(s,2), but it still didn't work: > > ipdb> type(s) > > Out[3]: > > ipdb> s**2 > > TypeError: "unsupported operand type(s) for ** or pow(): 'Par' and 'float'" > > > > What would need to change to make Par**2 work? > > > > Thanks Rob, I think this can be really helpful. > > > > Ryan > > > > On 2/3/06, Robert Clewley wrote: > > > Ryan, you'll probably need __rdiv__ and __rmul__ methods to help with > > > this, in case the other objects involved in the construction are not > > > symbolic objects. There are a bunch of cases you have to check to add > > > braces appropriately if one of the operands is "compound", and you can > > > find an example of what's needed in my code for the QuantSpec > > > class. > > > > > > -Rob > > > > > > > > > On Fri, 3 Feb 2006, Ryan Krauss wrote: > > > > > > > Thanks Travis, it sounds like I am on the right track. I defined > > > > checks in the __mul__ and __div__ methods to look for compound > > > > expressions and add parentheses. > > > > > > > > I am running into a weird problem though. I broke the expression into > > > > a numerator and denominator to get the parentheses to work, but when I > > > > try a=anum/aden*-1 > > > > I get an error: TypeError: unsupported operand type(s) for /: 'symstr' > > > > and 'symstr'. But symstr does have a __div__ method and when I run > > > > the code with the ipython pdb magic turned on, I have no trouble > > > > executing temp=anum/aden*-1 and getting the correct result. Why does > > > > it work in the debugger and not in the script? > > > > > > > > Below is a chunk of the ipython session. 
> > > > > > > > Thanks, > > > > > > > > Ryan > > > > > > > > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat(self, s) > > > > 55 anum=L0*m1*r1*s**2-c1*s-k1 > > > > 56 aden=c1*s+k1 > > > > ---> 57 a=anum/aden*-1 > > > > 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > > > 59 > > > > b=(-1)*((L0*m1*r1**2-L0*L1*m1*r1+Iz1*L0)*s**2+(c1*L1+c1*L0)*s+k1*L1+k1*L0)/(c1*s+k1) > > > > > > > > TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr' > > > > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() > > > > 56 aden=c1*s+k1 > > > > ---> 57 a=anum/aden*-1 > > > > 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > > > > > > > ipdb> temp=anum/aden*-1 > > > > ipdb> temp > > > > '((Ll0*ml1*rl1*s**2-cj1*s-kj1)/(cj1*s+kj1))*(-1)' > > > > ipdb> type(anum) > > > > > > > > ipdb> type(aden) > > > > > > > > ipdb> rwkmisc.symstr(1) > > > > '1' > > > > ipdb> > > > > > > > > > > > > On 2/3/06, Travis Oliphant wrote: > > > >> Ryan Krauss wrote: > > > >> > > > >>> I may be getting in over my head, but I think this can work. I do a > > > >>> lot of analysis with matrices using the transfer matrix method. > > > >>> Sometimes I want to do numeric work with them and sometimes I use > > > >>> python to generate an input script for Maxima. Right now I have two > > > >>> methods for my transfer matrix element, one that outputs a numeric > > > >>> matrix and one that outputs a string for Maxima. I am working on a > > > >>> new element and would like to combined these two expressions by > > > >>> creating a symbolic string method so that expressions like > > > >>> a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) > > > >>> > > > >>> make sense whether L0, m1, r1, s, etc are floats or symstrs (my new > > > >>> class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ > > > >>> methods but don't know if there are methods to override to deal with > > > >>> the parenthesis and the unary - out in front. Can this be done with > > > >>> only a little bit more work? Basically I want the output of this > > > >>> expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a > > > >>> string and a float or complex if the variables are numbers. > > > >>> > > > >>> > > > >> This can be done, but you would basically have to add the parenthesis in > > > >> yourself as you build up your string... > > > >> > > > >> Yes, the unary '-' can be over-written (see __neg__ special method). 
> > > >> > > > >> -Travis > > > >> > > > >> _______________________________________________ > > > >> SciPy-user mailing list > > > >> SciPy-user at scipy.net > > > >> http://www.scipy.net/mailman/listinfo/scipy-user > > > >> > > > > > > > > _______________________________________________ > > > > SciPy-user mailing list > > > > SciPy-user at scipy.net > > > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > > > ----------------------------------- > > > Rob Clewley > > > Research Associate > > > Department of Mathematics > > > and Center for Applied Mathematics > > > Cornell University > > > Ithaca, NY 14853 > > > www.cam.cornell.edu/~rclewley > > > Tel: 607-255-7760 > > > Fax: 607-255-9860 > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.net > > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > From rclewley at cam.cornell.edu Fri Feb 3 15:28:40 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Fri, 3 Feb 2006 15:28:40 -0500 (EST) Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: >From your output I cannot see how you set up your variables. If I enter at the prompt: >> anum=QuantSpec('anum', 'a') >> aden=QuantSpec('aden', 'c1*s+k1') >> temp=anum/aden >> print temp a/(c1*s+k1) How is what you are doing different to this? Rob On Fri, 3 Feb 2006, Ryan Krauss wrote: > So, I tried to get around the pow problem by just using *s*s, since I > only need squares in this particular function. That gets me as far as > I was before: > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() > 58 aden=c1*s+k1 > ---> 59 temp=anum/aden > 60 temp2=temp*-1 > > ipdb> > TypeError: "unsupported operand type(s) for /: 'QuantSpec' and 'QuantSpec'" > /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() > 58 aden=c1*s+k1 > ---> 59 temp=anum/aden > 60 temp2=temp*-1 > > Help. > > Ryan > > On 2/3/06, Ryan Krauss wrote: >> I am checking out QuantSpec and it seems very powerful and like it >> should be able to do what I need. I am having trouble defining a >> __pow__ method though: >> >> I tried just adding this in the Quant class: >> def __pow__(self, other): >> return self.__combine(other, "**") >> >> following the example of; >> def __mul__(self, other): >> return self.__combine(other, "*") >> >> I was originally trying to evaluate this expression: >> anum=L0*m1*r1*s**2-c1*s-k1 >> where everything on the right is a ModelSpec.Par >> >> I get this message: >> *** ValueError: Operator ** is not allowed. Please use the pow() call >> >> I changed to pow(s,2), but it still didn't work: >> ipdb> type(s) >> Out[3]: >> ipdb> s**2 >> TypeError: "unsupported operand type(s) for ** or pow(): 'Par' and 'float'" >> >> What would need to change to make Par**2 work? >> >> Thanks Rob, I think this can be really helpful. >> >> Ryan >> >> On 2/3/06, Robert Clewley wrote: >>> Ryan, you'll probably need __rdiv__ and __rmul__ methods to help with >>> this, in case the other objects involved in the construction are not >>> symbolic objects. There are a bunch of cases you have to check to add >>> braces appropriately if one of the operands is "compound", and you can >>> find an example of what's needed in my code for the QuantSpec >>> class. >>> >>> -Rob >>> >>> >>> On Fri, 3 Feb 2006, Ryan Krauss wrote: >>> >>>> Thanks Travis, it sounds like I am on the right track. I defined >>>> checks in the __mul__ and __div__ methods to look for compound >>>> expressions and add parentheses. 
>>>> >>>> I am running into a weird problem though. I broke the expression into >>>> a numerator and denominator to get the parentheses to work, but when I >>>> try a=anum/aden*-1 >>>> I get an error: TypeError: unsupported operand type(s) for /: 'symstr' >>>> and 'symstr'. But symstr does have a __div__ method and when I run >>>> the code with the ipython pdb magic turned on, I have no trouble >>>> executing temp=anum/aden*-1 and getting the correct result. Why does >>>> it work in the debugger and not in the script? >>>> >>>> Below is a chunk of the ipython session. >>>> >>>> Thanks, >>>> >>>> Ryan >>>> >>>> /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat(self, s) >>>> 55 anum=L0*m1*r1*s**2-c1*s-k1 >>>> 56 aden=c1*s+k1 >>>> ---> 57 a=anum/aden*-1 >>>> 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>>> 59 >>>> b=(-1)*((L0*m1*r1**2-L0*L1*m1*r1+Iz1*L0)*s**2+(c1*L1+c1*L0)*s+k1*L1+k1*L0)/(c1*s+k1) >>>> >>>> TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr' >>>> /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() >>>> 56 aden=c1*s+k1 >>>> ---> 57 a=anum/aden*-1 >>>> 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>>> >>>> ipdb> temp=anum/aden*-1 >>>> ipdb> temp >>>> '((Ll0*ml1*rl1*s**2-cj1*s-kj1)/(cj1*s+kj1))*(-1)' >>>> ipdb> type(anum) >>>> >>>> ipdb> type(aden) >>>> >>>> ipdb> rwkmisc.symstr(1) >>>> '1' >>>> ipdb> >>>> >>>> >>>> On 2/3/06, Travis Oliphant wrote: >>>>> Ryan Krauss wrote: >>>>> >>>>>> I may be getting in over my head, but I think this can work. I do a >>>>>> lot of analysis with matrices using the transfer matrix method. >>>>>> Sometimes I want to do numeric work with them and sometimes I use >>>>>> python to generate an input script for Maxima. Right now I have two >>>>>> methods for my transfer matrix element, one that outputs a numeric >>>>>> matrix and one that outputs a string for Maxima. I am working on a >>>>>> new element and would like to combined these two expressions by >>>>>> creating a symbolic string method so that expressions like >>>>>> a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>>>>> >>>>>> make sense whether L0, m1, r1, s, etc are floats or symstrs (my new >>>>>> class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ >>>>>> methods but don't know if there are methods to override to deal with >>>>>> the parenthesis and the unary - out in front. Can this be done with >>>>>> only a little bit more work? Basically I want the output of this >>>>>> expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a >>>>>> string and a float or complex if the variables are numbers. >>>>>> >>>>>> >>>>> This can be done, but you would basically have to add the parenthesis in >>>>> yourself as you build up your string... >>>>> >>>>> Yes, the unary '-' can be over-written (see __neg__ special method). 
>>>>> >>>>> -Travis >>>>> >>>>> _______________________________________________ >>>>> SciPy-user mailing list >>>>> SciPy-user at scipy.net >>>>> http://www.scipy.net/mailman/listinfo/scipy-user >>>>> >>>> >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.net >>>> http://www.scipy.net/mailman/listinfo/scipy-user >>>> >>> >>> ----------------------------------- >>> Rob Clewley >>> Research Associate >>> Department of Mathematics >>> and Center for Applied Mathematics >>> Cornell University >>> Ithaca, NY 14853 >>> www.cam.cornell.edu/~rclewley >>> Tel: 607-255-7760 >>> Fax: 607-255-9860 >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.net >>> http://www.scipy.net/mailman/listinfo/scipy-user >>> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > ----------------------------------- Rob Clewley Research Associate Department of Mathematics and Center for Applied Mathematics Cornell University Ithaca, NY 14853 www.cam.cornell.edu/~rclewley Tel: 607-255-7760 Fax: 607-255-9860 From rclewley at cam.cornell.edu Fri Feb 3 15:42:34 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Fri, 3 Feb 2006 15:42:34 -0500 (EST) Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: The inability to use ** is something that I haven't yet addressed, it's true. I have been sidestepping this for now by doing something like anum = L0*blah*'pow(s,2)' - c1*s [strings are valid operands!] There are convoluted historical reasons why I originally tried to avoid allowing ** in expressions (essentially, pow() provided the best compatibility with C and Python language targets for my models, given mixed integer and floating point types), but the more recent versions of my pseudo-parser and target "compiler" could probably handle that without this restriction. Basically, I should go back and support ** Not today, though :) And when I said ModelSpec can be used "standalone", I should have clarified that you'd need some other PyDSTool modules to import from, especially parseUtils.py. It is in this module that the restriction on power syntax is placed (the source of your raised exception). But there be dragons in parseUtils.py. Well, more like Frankenstein's monster... If you dare to fix this before I get to it, I'll model you a medal. -Rob On Fri, 3 Feb 2006, Ryan Krauss wrote: > I am checking out QuantSpec and it seems very powerful and like it > should be able to do what I need. I am having trouble defining a > __pow__ method though: > > I tried just adding this in the Quant class: > def __pow__(self, other): > return self.__combine(other, "**") > > following the example of; > def __mul__(self, other): > return self.__combine(other, "*") > > I was originally trying to evaluate this expression: > anum=L0*m1*r1*s**2-c1*s-k1 > where everything on the right is a ModelSpec.Par > > I get this message: > *** ValueError: Operator ** is not allowed. Please use the pow() call > > I changed to pow(s,2), but it still didn't work: > ipdb> type(s) > Out[3]: > ipdb> s**2 > TypeError: "unsupported operand type(s) for ** or pow(): 'Par' and 'float'" > > What would need to change to make Par**2 work? > > Thanks Rob, I think this can be really helpful. 
> > Ryan > > On 2/3/06, Robert Clewley wrote: >> Ryan, you'll probably need __rdiv__ and __rmul__ methods to help with >> this, in case the other objects involved in the construction are not >> symbolic objects. There are a bunch of cases you have to check to add >> braces appropriately if one of the operands is "compound", and you can >> find an example of what's needed in my code for the QuantSpec >> class. >> >> -Rob >> >> >> On Fri, 3 Feb 2006, Ryan Krauss wrote: >> >>> Thanks Travis, it sounds like I am on the right track. I defined >>> checks in the __mul__ and __div__ methods to look for compound >>> expressions and add parentheses. >>> >>> I am running into a weird problem though. I broke the expression into >>> a numerator and denominator to get the parentheses to work, but when I >>> try a=anum/aden*-1 >>> I get an error: TypeError: unsupported operand type(s) for /: 'symstr' >>> and 'symstr'. But symstr does have a __div__ method and when I run >>> the code with the ipython pdb magic turned on, I have no trouble >>> executing temp=anum/aden*-1 and getting the correct result. Why does >>> it work in the debugger and not in the script? >>> >>> Below is a chunk of the ipython session. >>> >>> Thanks, >>> >>> Ryan >>> >>> /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat(self, s) >>> 55 anum=L0*m1*r1*s**2-c1*s-k1 >>> 56 aden=c1*s+k1 >>> ---> 57 a=anum/aden*-1 >>> 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>> 59 >>> b=(-1)*((L0*m1*r1**2-L0*L1*m1*r1+Iz1*L0)*s**2+(c1*L1+c1*L0)*s+k1*L1+k1*L0)/(c1*s+k1) >>> >>> TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr' >>> /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() >>> 56 aden=c1*s+k1 >>> ---> 57 a=anum/aden*-1 >>> 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>> >>> ipdb> temp=anum/aden*-1 >>> ipdb> temp >>> '((Ll0*ml1*rl1*s**2-cj1*s-kj1)/(cj1*s+kj1))*(-1)' >>> ipdb> type(anum) >>> >>> ipdb> type(aden) >>> >>> ipdb> rwkmisc.symstr(1) >>> '1' >>> ipdb> >>> >>> >>> On 2/3/06, Travis Oliphant wrote: >>>> Ryan Krauss wrote: >>>> >>>>> I may be getting in over my head, but I think this can work. I do a >>>>> lot of analysis with matrices using the transfer matrix method. >>>>> Sometimes I want to do numeric work with them and sometimes I use >>>>> python to generate an input script for Maxima. Right now I have two >>>>> methods for my transfer matrix element, one that outputs a numeric >>>>> matrix and one that outputs a string for Maxima. I am working on a >>>>> new element and would like to combined these two expressions by >>>>> creating a symbolic string method so that expressions like >>>>> a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>>>> >>>>> make sense whether L0, m1, r1, s, etc are floats or symstrs (my new >>>>> class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ >>>>> methods but don't know if there are methods to override to deal with >>>>> the parenthesis and the unary - out in front. Can this be done with >>>>> only a little bit more work? Basically I want the output of this >>>>> expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a >>>>> string and a float or complex if the variables are numbers. >>>>> >>>>> >>>> This can be done, but you would basically have to add the parenthesis in >>>> yourself as you build up your string... >>>> >>>> Yes, the unary '-' can be over-written (see __neg__ special method). 
>>>> >>>> -Travis >>>> >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.net >>>> http://www.scipy.net/mailman/listinfo/scipy-user >>>> >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.net >>> http://www.scipy.net/mailman/listinfo/scipy-user >>> >> >> ----------------------------------- >> Rob Clewley >> Research Associate >> Department of Mathematics >> and Center for Applied Mathematics >> Cornell University >> Ithaca, NY 14853 >> www.cam.cornell.edu/~rclewley >> Tel: 607-255-7760 >> Fax: 607-255-9860 >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > ----------------------------------- Rob Clewley Research Associate Department of Mathematics and Center for Applied Mathematics Cornell University Ithaca, NY 14853 www.cam.cornell.edu/~rclewley Tel: 607-255-7760 Fax: 607-255-9860 From w.northcott at unsw.edu.au Fri Feb 3 19:06:47 2006 From: w.northcott at unsw.edu.au (Bill Northcott) Date: Sat, 4 Feb 2006 11:06:47 +1100 Subject: [SciPy-user] BLT for python. In-Reply-To: References: Message-ID: <3B15B8F0-79EA-472A-9283-B4BBBDF4A2A0@unsw.edu.au> On 04/02/2006, at 1:39 AM, Gary wrote: > > I have a few real-time (strip-chart-like) apps that use BLT. It works > well in real time. It's seems that BLT's not in development anymore, > so I keep waiting for the day when the apps won't run anymore. The > "z" > (latest) version of BLT is slightly broken under Linux, but there > is a > workaround. > The BLT developer, George Howlett, has put a lot of work recently into a version 3.0 of BLT which appears to have support for Aqua on MacOS X. There were quite a few commits to the cvs in November/ December. See http://sf.net/projects/blt Unfortunately as a Mac user myself, one of the Mac files is missing from the cvs, but the code might build under Windows or Linux. The 2.4z version will build on MacOS X with a few fixes which I have posted on the Sourceforge site, but it is far from bug free. George does not respond to email but maybe he reads it. So if a few more people show interest, he might get inspired. Bill Northcott From oliphant.travis at ieee.org Sun Feb 5 03:04:54 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 05 Feb 2006 01:04:54 -0700 Subject: [SciPy-user] io.loadmat In-Reply-To: References: <43E05CBA.8010306@ieee.org> Message-ID: <43E5B1A6.60100@ieee.org> Ryan Krauss wrote: >Any news on this? It is not a show stopper for me but has forced me >to first process my experimental data on my wife's computer with old >scipy. I don't want to lose this functionality in new scipy. > > > I can read the figure5.mat file in the matfile2.tar.gz file linked on your page. >>> import scipy.io >>> scipy.io.loadmat('figure5.mat') {'exp_ph_v2th2': array([-360. 
, -279.81217165, -270.1051963 , ..., -364.58973674, -89.8948037 , -80.18782835]), 'oof_mag_v2th2': array([ 2.29149631e+02, 2.26880824e+02, 2.24656504e+02, ..., 3.82075299e-02, 3.81446544e-02, 3.80819791e-02]), '__header__': 'MATLAB 5.0 MAT-file, Platform: PCWIN, Created on: Fri Feb 25 18:08:42 2005', 'exp_mag_v2th2': array([ 1133.66018 , 5.11327559, 5.88005405, ..., 22.16197655, 5.88005405, 5.11327559]), 'oof_ph_v2th2': array([-90.09687897, -90.0965209 , -90.09617577, ..., -88.48426274, -88.48934695, -88.49440268]), 'exp_freq': array([ 0.00000000e+00, 2.38095238e-02, 4.76190476e-02, ..., 4.99952381e+02, 4.99976190e+02, 5.00000000e+02]), '__version__': '1.0', 'oof_freq': array([ 1.59154943e-02, 1.60746493e-02, 1.62338042e-02, ..., 9.51746560e+01, 9.53338109e+01, 9.54929659e+01])} >>> scipy.__version__ '0.4.5.1596' >>> import numpy >>> numpy.__version__ '0.9.5.2052' -Travis From hgamboa at gmail.com Sun Feb 5 12:32:57 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Sun, 5 Feb 2006 17:32:57 +0000 Subject: [SciPy-user] Submitting images to the wiki Message-ID: <86522b1a0602050932q4caab64fk6a1d6b306dc0f412@mail.gmail.com> What is the process of introducing images in the scipy wiki? I would like to introduce a companion image for an example in the cookbook. Hugo Gamboa From arnd.baecker at web.de Sun Feb 5 14:04:59 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Sun, 5 Feb 2006 20:04:59 +0100 (CET) Subject: [SciPy-user] Submitting images to the wiki In-Reply-To: <86522b1a0602050932q4caab64fk6a1d6b306dc0f412@mail.gmail.com> References: <86522b1a0602050932q4caab64fk6a1d6b306dc0f412@mail.gmail.com> Message-ID: On Sun, 5 Feb 2006, Hugo Gamboa wrote: > What is the process of introducing images in the scipy wiki? On the left hand side: select under "Page" the point "More Actions", which gives a list containing "Attachment". In the text you can refer to an attached image by attachment:image_01.png (In the plotting tutorial inline:simple_plot.png I don't know if there is a relevant difference between these two) > I would like to introduce a companion image for an example in the cookbook. Great! (BTW: are there any recommendations wrt size/format etc.?) Best, Arnd From rclewley at cam.cornell.edu Sun Feb 5 14:58:39 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Sun, 5 Feb 2006 14:58:39 -0500 (EST) Subject: [SciPy-user] symbolic strings In-Reply-To: References: Message-ID: Hi Ryan, I updated ModelSpec.py to use true division, and added support for ** (although it will internally convert to a call to pow()). I had been wanting to improve the behaviour of eval() so that, in the absence of supplied substitutions, uses symbolic definitions in the local scope to provide them. It now does this (although it's a lot slower that way). The new ModelSpec.py is included in a new PyDSTool patch release at SourceForge, and you can diff the old and new files to see what methods have been added, and also to see the new examples at the end. New documentation on these features is also at the Wiki page "Symbolic". Let me know if this works for you. Rob On Fri, 3 Feb 2006, Ryan Krauss wrote: > So all of my scripts begin with from __future__ import division, so > defining a __truediv_ method has my symstr class working. I don't > seem to be able to add __truediv__ or __pow__ to QuantSpec or Quantity > though. Any thoughts? > > Ryan > > On 2/3/06, Ryan Krauss wrote: >> So, I tried to get around the pow problem by just using *s*s, since I >> only need squares in this particular function. 
That gets me as far as >> I was before: >> /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() >> 58 aden=c1*s+k1 >> ---> 59 temp=anum/aden >> 60 temp2=temp*-1 >> >> ipdb> >> TypeError: "unsupported operand type(s) for /: 'QuantSpec' and 'QuantSpec'" >> /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() >> 58 aden=c1*s+k1 >> ---> 59 temp=anum/aden >> 60 temp2=temp*-1 >> >> Help. >> >> Ryan >> >> On 2/3/06, Ryan Krauss wrote: >>> I am checking out QuantSpec and it seems very powerful and like it >>> should be able to do what I need. I am having trouble defining a >>> __pow__ method though: >>> >>> I tried just adding this in the Quant class: >>> def __pow__(self, other): >>> return self.__combine(other, "**") >>> >>> following the example of; >>> def __mul__(self, other): >>> return self.__combine(other, "*") >>> >>> I was originally trying to evaluate this expression: >>> anum=L0*m1*r1*s**2-c1*s-k1 >>> where everything on the right is a ModelSpec.Par >>> >>> I get this message: >>> *** ValueError: Operator ** is not allowed. Please use the pow() call >>> >>> I changed to pow(s,2), but it still didn't work: >>> ipdb> type(s) >>> Out[3]: >>> ipdb> s**2 >>> TypeError: "unsupported operand type(s) for ** or pow(): 'Par' and 'float'" >>> >>> What would need to change to make Par**2 work? >>> >>> Thanks Rob, I think this can be really helpful. >>> >>> Ryan >>> >>> On 2/3/06, Robert Clewley wrote: >>>> Ryan, you'll probably need __rdiv__ and __rmul__ methods to help with >>>> this, in case the other objects involved in the construction are not >>>> symbolic objects. There are a bunch of cases you have to check to add >>>> braces appropriately if one of the operands is "compound", and you can >>>> find an example of what's needed in my code for the QuantSpec >>>> class. >>>> >>>> -Rob >>>> >>>> >>>> On Fri, 3 Feb 2006, Ryan Krauss wrote: >>>> >>>>> Thanks Travis, it sounds like I am on the right track. I defined >>>>> checks in the __mul__ and __div__ methods to look for compound >>>>> expressions and add parentheses. >>>>> >>>>> I am running into a weird problem though. I broke the expression into >>>>> a numerator and denominator to get the parentheses to work, but when I >>>>> try a=anum/aden*-1 >>>>> I get an error: TypeError: unsupported operand type(s) for /: 'symstr' >>>>> and 'symstr'. But symstr does have a __div__ method and when I run >>>>> the code with the ipython pdb magic turned on, I have no trouble >>>>> executing temp=anum/aden*-1 and getting the correct result. Why does >>>>> it work in the debugger and not in the script? >>>>> >>>>> Below is a chunk of the ipython session. 
>>>>> >>>>> Thanks, >>>>> >>>>> Ryan >>>>> >>>>> /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat(self, s) >>>>> 55 anum=L0*m1*r1*s**2-c1*s-k1 >>>>> 56 aden=c1*s+k1 >>>>> ---> 57 a=anum/aden*-1 >>>>> 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>>>> 59 >>>>> b=(-1)*((L0*m1*r1**2-L0*L1*m1*r1+Iz1*L0)*s**2+(c1*L1+c1*L0)*s+k1*L1+k1*L0)/(c1*s+k1) >>>>> >>>>> TypeError: unsupported operand type(s) for /: 'symstr' and 'symstr' >>>>> /home/ryan/rwkpython/TMM/feedback/__init__.py in GetAugMat() >>>>> 56 aden=c1*s+k1 >>>>> ---> 57 a=anum/aden*-1 >>>>> 58 # a=(-1)*(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>>>> >>>>> ipdb> temp=anum/aden*-1 >>>>> ipdb> temp >>>>> '((Ll0*ml1*rl1*s**2-cj1*s-kj1)/(cj1*s+kj1))*(-1)' >>>>> ipdb> type(anum) >>>>> >>>>> ipdb> type(aden) >>>>> >>>>> ipdb> rwkmisc.symstr(1) >>>>> '1' >>>>> ipdb> >>>>> >>>>> >>>>> On 2/3/06, Travis Oliphant wrote: >>>>>> Ryan Krauss wrote: >>>>>> >>>>>>> I may be getting in over my head, but I think this can work. I do a >>>>>>> lot of analysis with matrices using the transfer matrix method. >>>>>>> Sometimes I want to do numeric work with them and sometimes I use >>>>>>> python to generate an input script for Maxima. Right now I have two >>>>>>> methods for my transfer matrix element, one that outputs a numeric >>>>>>> matrix and one that outputs a string for Maxima. I am working on a >>>>>>> new element and would like to combined these two expressions by >>>>>>> creating a symbolic string method so that expressions like >>>>>>> a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1) >>>>>>> >>>>>>> make sense whether L0, m1, r1, s, etc are floats or symstrs (my new >>>>>>> class). I overrode the __mul__, __add__, __div__,__sub__, and __pow__ >>>>>>> methods but don't know if there are methods to override to deal with >>>>>>> the parenthesis and the unary - out in front. Can this be done with >>>>>>> only a little bit more work? Basically I want the output of this >>>>>>> expression to be 'a=-(L0*m1*r1*s**2-c1*s-k1)/(c1*s+k1)' if it is a >>>>>>> string and a float or complex if the variables are numbers. >>>>>>> >>>>>>> >>>>>> This can be done, but you would basically have to add the parenthesis in >>>>>> yourself as you build up your string... >>>>>> >>>>>> Yes, the unary '-' can be over-written (see __neg__ special method). 
>>>>>> >>>>>> -Travis >>>>>> >>>>>> _______________________________________________ >>>>>> SciPy-user mailing list >>>>>> SciPy-user at scipy.net >>>>>> http://www.scipy.net/mailman/listinfo/scipy-user >>>>>> >>>>> >>>>> _______________________________________________ >>>>> SciPy-user mailing list >>>>> SciPy-user at scipy.net >>>>> http://www.scipy.net/mailman/listinfo/scipy-user >>>>> >>>> >>>> ----------------------------------- >>>> Rob Clewley >>>> Research Associate >>>> Department of Mathematics >>>> and Center for Applied Mathematics >>>> Cornell University >>>> Ithaca, NY 14853 >>>> www.cam.cornell.edu/~rclewley >>>> Tel: 607-255-7760 >>>> Fax: 607-255-9860 >>>> >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.net >>>> http://www.scipy.net/mailman/listinfo/scipy-user >>>> >>> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > ----------------------------------- Rob Clewley Research Associate Department of Mathematics and Center for Applied Mathematics Cornell University Ithaca, NY 14853 www.cam.cornell.edu/~rclewley Tel: 607-255-7760 Fax: 607-255-9860 From arnd.baecker at web.de Sun Feb 5 16:33:41 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Sun, 5 Feb 2006 22:33:41 +0100 (CET) Subject: [SciPy-user] Submitting images to the wiki In-Reply-To: References: <86522b1a0602050932q4caab64fk6a1d6b306dc0f412@mail.gmail.com> Message-ID: On Sun, 5 Feb 2006, Arnd Baecker wrote: > On Sun, 5 Feb 2006, Hugo Gamboa wrote: > > > What is the process of introducing images in the scipy wiki? > > On the left hand side: select under "Page" > the point "More Actions", which gives a list containing "Attachment". Sorry, it seems that this is not there for the scipy wiki (I only tested it with my local personal wiki ...). As a workaround one can append ?action=AttachFile to the link in your browser for the page you would like to add an attachment. There must be a simpler way... > In the text you can refer to an attached image by > > attachment:image_01.png > > (In the plotting tutorial > inline:simple_plot.png > I don't know if there is a relevant difference between these two) According to http://www.scipy.org/HelpOnActions/AttachFile: """Instead of attachment:, you can also use inline:, which tries to inline the attachment's content. For pictures (files ending in .gif, .jpg, .jpeg, or .png) there is no difference. For Python sources (i.e. attachments with a .py extension), they are shown as syntax-highlighted source displays. For all other files, the behaviour is the same as for simple attachment: links.""" Would it be good to use file attachments for any python code (instead of putting it inline)? Best, Arnd From robert.kern at gmail.com Sun Feb 5 17:08:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 05 Feb 2006 16:08:00 -0600 Subject: [SciPy-user] Submitting images to the wiki In-Reply-To: References: <86522b1a0602050932q4caab64fk6a1d6b306dc0f412@mail.gmail.com> Message-ID: <43E67740.5080503@gmail.com> Arnd Baecker wrote: > On Sun, 5 Feb 2006, Arnd Baecker wrote: > >>On Sun, 5 Feb 2006, Hugo Gamboa wrote: >> >>>What is the process of introducing images in the scipy wiki? >> >>On the left hand side: select under "Page" >>the point "More Actions", which gives a list containing "Attachment". > > Sorry, it seems that this is not there for the scipy wiki > (I only tested it with my local personal wiki ...). 
> As a workaround one can append > ?action=AttachFile > to the link in your browser for the page you would like to > add an attachment. > There must be a simpler way... The "More Actions" dropdown box *does* have "Attachments". -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From hgamboa at gmail.com Sun Feb 5 20:33:06 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Mon, 6 Feb 2006 01:33:06 +0000 Subject: [SciPy-user] Submitting images to the wiki In-Reply-To: <43E67740.5080503@gmail.com> References: <86522b1a0602050932q4caab64fk6a1d6b306dc0f412@mail.gmail.com> <43E67740.5080503@gmail.com> Message-ID: <86522b1a0602051733s347d6c50l8b3f423114a5380f@mail.gmail.com> Thanks, Arnd and Robert, I managed to submit the .png using the more actions dropdown. Hugo Gamboa On 2/5/06, Robert Kern wrote: > Arnd Baecker wrote: > > On Sun, 5 Feb 2006, Arnd Baecker wrote: > > > >>On Sun, 5 Feb 2006, Hugo Gamboa wrote: > >> > >>>What is the process of introducing images in the scipy wiki? > >> > >>On the left hand side: select under "Page" > >>the point "More Actions", which gives a list containing "Attachment". > > > > Sorry, it seems that this is not there for the scipy wiki > > (I only tested it with my local personal wiki ...). > > As a workaround one can append > > ?action=AttachFile > > to the link in your browser for the page you would like to > > add an attachment. > > There must be a simpler way... > > The "More Actions" dropdown box *does* have "Attachments". > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From nicolist at limare.net Mon Feb 6 10:13:28 2006 From: nicolist at limare.net (Nico) Date: Mon, 06 Feb 2006 16:13:28 +0100 Subject: [SciPy-user] new on the list In-Reply-To: <20060206144906.627F28821B@sc8-sf-spam1.sourceforge.net> References: <20060206144906.627F28821B@sc8-sf-spam1.sourceforge.net> Message-ID: <43E76798.4050602@limare.net> Hi. I'm a new user of the numpy-discussion and scipy-user mailing-lists. So, as I usually do, here are a few words about me and my use of numpy/scipy. I am a doctorate student, in Paris; I will work on numerical analysis, mesh generation and image processing, and I intend to do the prototyping (and maybe everything) of my works with python. I recently choosed python because... - flexible and rich language for array manipulation - seems a good language to help me write clean, clear, bug-free and reusable code - seems possible to make a GUI frontend without too much pain - seems OK to glue with various other C/fortran applications without to much pain - free, as in free beer (I had to work on Matlab previously, and I don't like to force people pay for an expensive licence if they are interested in my work) - free, as in free speech (... I also had serious problems, needing compatibility of Matlab with a linux kernel not officially supported) I use numpy/scipy on Debian/Ubuntu, building from the release tarballs. And I am currently reading the available documentation... Last thing: What about a #scipy irc channel? I feel there are too many people on irc.freenode.org/#python for an efficient use. Happy coding! 
-- Nico -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: OpenPGP digital signature URL: From strawman at astraw.com Mon Feb 6 11:58:35 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 06 Feb 2006 08:58:35 -0800 Subject: [SciPy-user] Submitting images to the wiki In-Reply-To: References: <86522b1a0602050932q4caab64fk6a1d6b306dc0f412@mail.gmail.com> Message-ID: <43E7803B.9060305@astraw.com> Arnd Baecker wrote: >On Sun, 5 Feb 2006, Hugo Gamboa wrote: > > > >>What is the process of introducing images in the scipy wiki? >> >> > >On the left hand side: select under "Page" >the point "More Actions", which gives a list containing "Attachment". > >In the text you can refer to an attached image by > > attachment:image_01.png > >(In the plotting tutorial > inline:simple_plot.png >I don't know if there is a relevant difference between these two) > > > >>I would like to introduce a companion image for an example in the cookbook. >> >> > >Great! > >(BTW: are there any recommendations wrt size/format etc.?) > > The only suggestion I have is to make the uploaded image a reasonable size for embedding in the browser. (400x300 or something.) I say this because when the browser does its own rescaling it's often not anti-aliased (aliased?) and hence looks ugly. Interpolating in an image manipulation program or just generating a smaller image in the first place produces much nicer results. I think most people will do this by default, since I'm not sure in Moin how to get an image to display at non-100% size. But, hey, you asked for recommendations! :) Arnd, the reason you couldn't attach a file is probably that you weren't logged into the wiki. You have to have a login to edit the wiki. Other than turning on "automatic spam detection" this is so far the only measure we've taken against wiki spam. Don't forget with the wiki now accessible at http://scipy.org you may need to login again. Cheers! Andrew From bryan.cole at teraview.com Tue Feb 7 06:07:15 2006 From: bryan.cole at teraview.com (Bryan Cole) Date: Tue, 07 Feb 2006 11:07:15 +0000 Subject: [SciPy-user] Wrapping C libraries for Numeric integration Message-ID: <1139310435.17080.23.camel@bryan.teraview.local> Hi All, What's the best approach for wrapping C libraries where the inputs/outputs are C arrays (which I want to pass to/from Numeric arrays)? I'm a regular SWIG user, but I don't have any typemaps to handle C-array-to-Numeric conversion and I can't see any distributed with SWIG. What approach is used by SciPy? (I know scipy is mostly fortran routines but surely you have a few C ones?). Can anyone point me to existing typemaps for this purpose, to save me writing my own. thanks, Bryan From cimrman3 at ntc.zcu.cz Tue Feb 7 06:48:26 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 07 Feb 2006 12:48:26 +0100 Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: <1139310435.17080.23.camel@bryan.teraview.local> References: <1139310435.17080.23.camel@bryan.teraview.local> Message-ID: <43E8890A.8050004@ntc.zcu.cz> Bryan Cole wrote: > Hi All, > > What's the best approach for wrapping C libraries where the > inputs/outputs are C arrays (which I want to pass to/from Numeric > arrays)? > > I'm a regular SWIG user, but I don't have any typemaps to handle > C-array-to-Numeric conversion and I can't see any distributed with SWIG. > What approach is used by SciPy? 
(I know scipy is mostly fortran routines
> but surely you have a few C ones?).
>
> Can anyone point me to existing typemaps for this purpose, to save me
> writing my own.

You can have a look at the umfpack wrapper file in
scipy/Lib/sandbox/umfpack/umfpack.i.

Hope that helps,
r.

From pearu at scipy.org Tue Feb 7 06:23:59 2006
From: pearu at scipy.org (Pearu Peterson)
Date: Tue, 7 Feb 2006 05:23:59 -0600 (CST)
Subject: [SciPy-user] Wrapping C libraries for Numeric integration
In-Reply-To: <1139310435.17080.23.camel@bryan.teraview.local>
References: <1139310435.17080.23.camel@bryan.teraview.local>
Message-ID:

On Tue, 7 Feb 2006, Bryan Cole wrote:

> What's the best approach for wrapping C libraries where the
> inputs/outputs are C arrays (which I want to pass to/from Numeric
> arrays)?
>
> I'm a regular SWIG user, but I don't have any typemaps to handle
> C-array-to-Numeric conversion and I can't see any distributed with SWIG.
> What approach is used by SciPy? (I know scipy is mostly fortran routines
> but surely you have a few C ones?).

f2py can easily be used for wrapping C libraries too, if writing some
Fortran-like syntax (see the f2py users guide) does not scare you. Here's a
simple example:

/* File foo.c */
void foo(int n, double *x, double *y) {
  int i;
  for (i=0;i<n;i++) {
    y[i] = x[i] + i;
  }
}

! File m.pyf
python module m
  interface
    subroutine foo(n,x,y)
      intent(c) foo ! foo is a C function
      intent(c)     ! all foo arguments are considered as C based
      integer intent(hide), depend(x) :: n=len(x) ! n is the length of input array x
      double precision intent(in) :: x(n)  ! x is input array (or arbitrary sequence)
      double precision intent(out) :: y(n) ! y is output array, see code in foo.c
    end subroutine foo
  end interface
end python module m

# File setup.py
def configuration(parent_package='',top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('',parent_package,top_path)
    config.add_extension('m', sources = ['m.pyf','foo.c'])
    return config
if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(**configuration(top_path='').todict())

Building and testing module m, whose function foo calls the C function:

  python setup.py build_src build_ext --inplace

  python
>>> import m
>>> print m.foo.__doc__
foo - Function signature:
  y = foo(x)
Required arguments:
  x : input rank-1 array('d') with bounds (n)
Return objects:
  y : rank-1 array('d') with bounds (n)
>>> m.foo([1,2,3,4,5])
array([ 1., 3., 5., 7., 9.])
>>>

Regards,
Pearu

From hetland at tamu.edu Tue Feb 7 12:52:30 2006
From: hetland at tamu.edu (Robert Hetland)
Date: Tue, 7 Feb 2006 11:52:30 -0600
Subject: [SciPy-user] Wrapping C libraries for Numeric integration
In-Reply-To:
References: <1139310435.17080.23.camel@bryan.teraview.local>
Message-ID:

This is an excellent example -- I have just been wondering how to do
such things, and I would prefer f2py over swig, since it is more
familiar to me.

However, I can't seem to get the example to work. It compiles fine,
but I get an output array of all zeros (the same size as the input
array). Any suggestions?

-Rob

On Feb 7, 2006, at 5:23 AM, Pearu Peterson wrote:

>
>
> On Tue, 7 Feb 2006, Bryan Cole wrote:
>
>> What's the best approach for wrapping C libraries where the
>> inputs/outputs are C arrays (which I want to pass to/from Numeric
>> arrays)?
>>
>> I'm a regular SWIG user, but I don't have any typemaps to handle
>> C-array-to-Numeric conversion and I can't see any distributed with
>> SWIG.
>> What approach is used by SciPy? (I know scipy is mostly fortran
>> routines
>> but surely you have a few C ones?).
>
> f2py can easily be used for wrapping C libraries too, if writing some
> Fortran-like syntax (see the f2py users guide) does not scare you. Here's a
> simple example:
>
> /* File foo.c */
> void foo(int n, double *x, double *y) {
>   int i;
>   for (i=0;i<n;i++) {
>     y[i] = x[i] + i;
>   }
> }
>
> ! File m.pyf
> python module m
>   interface
>     subroutine foo(n,x,y)
>       intent(c) foo ! foo is a C function
>       intent(c)     ! all foo arguments are considered as C based
>       integer intent(hide), depend(x) :: n=len(x) ! n is the length of input array x
>       double precision intent(in) :: x(n)  ! x is input array (or arbitrary sequence)
>       double precision intent(out) :: y(n) !
y is output array, see > code in foo.c > end subroutine foo > end interface > end python module m > > # File setup.py > def configuration(parent_package='',top_path=None): > from numpy.distutils.misc_util import Configuration > config = Configuration('',parent_package,top_path) > > config.add_extension('m', > sources = ['m.pyf','foo.c']) > return config > if __name__ == "__main__": > from numpy.distutils.core import setup > setup(**configuration(top_path='').todict()) > > Building and testing module m thats function foo calls C function: > > python setup.py build_src build_ext --inplace > > python >>>> import m >>>> print m.foo.__doc__ > foo - Function signature: > y = foo(x) > Required arguments: > x : input rank-1 array('d') with bounds (n) > Return objects: > y : rank-1 array('d') with bounds (n) > >>>> m.foo([1,2,3,4,5]) > array([ 1., 3., 5., 7., 9.]) >>>> > > Regards, > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From rshepard at appl-ecosys.com Tue Feb 7 13:01:19 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 7 Feb 2006 10:01:19 -0800 (PST) Subject: [SciPy-user] Replacing Numeric With NumPy Message-ID: Last evening I downloaded numpy-0.9.4 and scipy-0.4.4. I have an earlier version of Numeric in /usr/lib/python2.4/site-packages/Numeric/. Should I remove all references to Numeric before installing NumPy? Rich -- Richard B. Shepard, Ph.D. | Author of "Quantifying Environmental Applied Ecosystem Services, Inc. (TM) | Impact Assessments Using Fuzzy Logic" Voice: 503-667-4517 Fax: 503-667-8863 From pearu at scipy.org Tue Feb 7 12:02:37 2006 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 7 Feb 2006 11:02:37 -0600 (CST) Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: References: <1139310435.17080.23.camel@bryan.teraview.local> Message-ID: On Tue, 7 Feb 2006, Robert Hetland wrote: > > This is an excellent example -- I have just been wondering how to do > such things, and I would prefer f2py over swig, since it is more > familiar to me. > > However, I can't seem to get the example to work. It complies fine, > but I get an output array of all zeros (the same size as the input > array). Any suggestions? What numpy version are you using? I am using numpy from svn repo. Did you follow exactly the instructions below or did you try something different? Could you send the full output of setup.py build command, may be offlist? Remember to do `rm -rf build *.so` before rebuilding. What platform/compilers are you using? Pearu > On Feb 7, 2006, at 5:23 AM, Pearu Peterson wrote: > >> >> >> On Tue, 7 Feb 2006, Bryan Cole wrote: >> >>> What's the best approach for wrapping C libraries where the >>> inputs/outputs are C arrays (which I want to pass to/from Numeric >>> arrays)? >>> >>> I'm a regular SWIG user, but I don't have any typemaps to handle >>> C-array-to-Numeric conversion and I can't see any distributed with >>> SWIG. >>> What approach is used by SciPy? (I know scipy is mostly fortran >>> routines >>> but surely you have a few C ones?). >> >> f2py can be easily be used also wrapping C libraries if writing some >> Fortran-like syntax (see f2py usersguide) does not scare you. 
Here's a >> simple example: >> >> /* File foo.c */ >> void foo(int n, double *x, double *y) { >> int i; >> for (i=0;i> y[i] = x[i] + i; >> } >> } >> >> ! File m.pyf >> python module m >> interface >> subroutine foo(n,x,y) >> intent(c) foo ! foo is a C function >> intent(c) ! all foo arguments are >> considered as C based >> integer intent(hide), depend(x) :: n=len(x) ! n is the lenght >> of input array x >> double precision intent(in) :: x(n) ! x is input array (or >> arbitrary sequence) >> double precision intent(out) :: y(n) ! y is output array, see >> code in foo.c >> end subroutine foo >> end interface >> end python module m >> >> # File setup.py >> def configuration(parent_package='',top_path=None): >> from numpy.distutils.misc_util import Configuration >> config = Configuration('',parent_package,top_path) >> >> config.add_extension('m', >> sources = ['m.pyf','foo.c']) >> return config >> if __name__ == "__main__": >> from numpy.distutils.core import setup >> setup(**configuration(top_path='').todict()) >> >> Building and testing module m thats function foo calls C function: >> >> python setup.py build_src build_ext --inplace >> >> python >>>>> import m >>>>> print m.foo.__doc__ >> foo - Function signature: >> y = foo(x) >> Required arguments: >> x : input rank-1 array('d') with bounds (n) >> Return objects: >> y : rank-1 array('d') with bounds (n) >> >>>>> m.foo([1,2,3,4,5]) >> array([ 1., 3., 5., 7., 9.]) >>>>> From pearu at scipy.org Tue Feb 7 12:05:24 2006 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 7 Feb 2006 11:05:24 -0600 (CST) Subject: [SciPy-user] Replacing Numeric With NumPy In-Reply-To: References: Message-ID: On Tue, 7 Feb 2006, Rich Shepard wrote: > Last evening I downloaded numpy-0.9.4 and scipy-0.4.4. I have an earlier > version of Numeric in /usr/lib/python2.4/site-packages/Numeric/. Should I > remove all references to Numeric before installing NumPy? No need. Though it is easier to convert your code to numpy if it fails on any attempt to import Numeric. So, I would just hide Numeric after installing numpy. Pearu From rshepard at appl-ecosys.com Tue Feb 7 13:12:13 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 7 Feb 2006 10:12:13 -0800 (PST) Subject: [SciPy-user] Replacing Numeric With NumPy In-Reply-To: References: Message-ID: On Tue, 7 Feb 2006, Pearu Peterson wrote: > No need. Though it is easier to convert your code to numpy if it fails on > any attempt to import Numeric. So, I would just hide Numeric after > installing numpy. Pearu, Since I'm just getting started with these libraries I've nothing to lose yet. Guess that I'll run find and delete all Numeric references just to be a bit cleaner about it. FWIW, my need is to use the eigen() function on a symmetrical matrix to calculate the principal eigenvector. If there are any potential 'gotchas' I certainly would appreciate knowing of them. Thanks, Rich -- Richard B. Shepard, Ph.D. | Author of "Quantifying Environmental Applied Ecosystem Services, Inc. (TM) | Impact Assessments Using Fuzzy Logic" Voice: 503-667-4517 Fax: 503-667-8863 From aisaac at american.edu Tue Feb 7 13:20:56 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 7 Feb 2006 13:20:56 -0500 Subject: [SciPy-user] PyDSTool Message-ID: 1. Is PyDSTool destined to be part of SciPy? This seems like a good idea ... 2. Was PyDSTool's scipy_ode.py forked from SciPy's ode.py for any reason other than lack of communication? 
Cheers, Alan Isaac PS The note in scipy_ode.py reads:: # This is a trivially-adapted version of the original SciPy integrate code. # Changes: (1) removed the annoying print statement whenever # the ODE code is 'found', which is unwanted screen output during # computations. # (2) Made ode a new-style class. # (3) Changed a lambda occurrence in one of the methods to use noneFn instead. From oliphant.travis at ieee.org Tue Feb 7 13:14:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 07 Feb 2006 11:14:53 -0700 Subject: [SciPy-user] Replacing Numeric With NumPy In-Reply-To: References: Message-ID: <43E8E39D.5020605@ieee.org> Rich Shepard wrote: > Last evening I downloaded numpy-0.9.4 and scipy-0.4.4. I have an earlier >version of Numeric in /usr/lib/python2.4/site-packages/Numeric/. Should I >remove all references to Numeric before installing NumPy? > >Rich > > > No need to do that. Numeric and NumPy (import numpy) can live happily together. With versions of Numeric about 24.0, then can even share the same data. -Travis From rshepard at appl-ecosys.com Tue Feb 7 13:17:30 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 7 Feb 2006 10:17:30 -0800 (PST) Subject: [SciPy-user] Replacing Numeric With NumPy In-Reply-To: <43E8E39D.5020605@ieee.org> References: <43E8E39D.5020605@ieee.org> Message-ID: On Tue, 7 Feb 2006, Travis Oliphant wrote: > No need to do that. Numeric and NumPy (import numpy) can live happily > together. With versions of Numeric about 24.0, then can even share the same > data. Travis, Are there advantages to having both on the system? I read the Numeric manual a couple of times, but haven't looked deeply at the division between the two. Many thanks, Rich -- Richard B. Shepard, Ph.D. | Author of "Quantifying Environmental Applied Ecosystem Services, Inc. (TM) | Impact Assessments Using Fuzzy Logic" Voice: 503-667-4517 Fax: 503-667-8863 From hetland at tamu.edu Tue Feb 7 13:23:13 2006 From: hetland at tamu.edu (Robert Hetland) Date: Tue, 7 Feb 2006 12:23:13 -0600 Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: References: <1139310435.17080.23.camel@bryan.teraview.local> Message-ID: <5DC7FBDB-9D76-44D7-9DFD-4461DBF36941@tamu.edu> Here is the relevant information: Mac OS X 10.4.4, gcc, g77, numpy.__version__ = '0.9.5.2044' $ ls foo.c m.pyf setup.py $ python setup.py build_src build_ext --inplace > build.out $ cat build.out running build_src building extension "m" sources f2py options: [] f2py: ./m.pyf Reading fortran codes... Reading file './m.pyf' (format:free) {'this': 'intent', 'after': '(c) ', 'before': ''} Line #5 in ./m.pyf:" intent(c) " analyzeline: no name pattern found in intent statement for ''. Skipping. Post-processing... Block: m Block: foo Post-processing (stage 2)... Building modules... Building module "m"... Constructing wrapper function "foo"... y = foo(x) Wrote C/API module "m" to file "./mmodule.c" adding 'build/src/fortranobject.c' to sources. adding 'build/src' to include_dirs. 
creating build creating build/src copying /Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/ numpy/f2py/src/fortranobject.c -> build/src copying /Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/ numpy/f2py/src/fortranobject.h -> build/src running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'm' extension compiling C sources gcc options: '-fno-strict-aliasing -Wno-long-double -no-cpp-precomp - mno-fused-madd -f no-common -dynamic -DNDEBUG -g -O3 -Wall -Wstrict-prototypes' creating build/temp.darwin-8.4.0-Power_Macintosh-2.4 creating build/temp.darwin-8.4.0-Power_Macintosh-2.4/build creating build/temp.darwin-8.4.0-Power_Macintosh-2.4/build/src compile options: '-Ibuild/src -I/Library/Frameworks/Python.framework/ Versions/2.4/lib/ python2.4/site-packages/numpy/core/include -I/Library/Frameworks/ Python.framework/Vers ions/2.4/include/python2.4 -c' gcc: build/src/fortranobject.c In file included from build/src/fortranobject.h:13, from build/src/fortranobject.c:2: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/numpy/co re/include/numpy/arrayobject.h:134: warning: redefinition of `ushort' /usr/include/sys/types.h:85: warning: `ushort' previously declared here /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/numpy/co re/include/numpy/arrayobject.h:135: warning: redefinition of `uint' /usr/include/sys/types.h:86: warning: `uint' previously declared here gcc: ./foo.c gcc: ./mmodule.c In file included from build/src/fortranobject.h:13, from mmodule.c:17: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/numpy/co re/include/numpy/arrayobject.h:134: warning: redefinition of `ushort' /usr/include/sys/types.h:85: warning: `ushort' previously declared here /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/numpy/co re/include/numpy/arrayobject.h:135: warning: redefinition of `uint' /usr/include/sys/types.h:86: warning: `uint' previously declared here gcc -Wl,-x -bundle -undefined dynamic_lookup build/temp.darwin-8.4.0- Power_Macintosh-2 .4/mmodule.o build/temp.darwin-8.4.0-Power_Macintosh-2.4/foo.o build/ temp.darwin-8.4.0 -Power_Macintosh-2.4/build/src/fortranobject.o -o m.so $ ipython [...snip...] >>> import m >>> m.foo([1.,2.,3]) array([ 0., 0., 0.]) On Feb 7, 2006, at 11:02 AM, Pearu Peterson wrote: > > > On Tue, 7 Feb 2006, Robert Hetland wrote: > >> >> This is an excellent example -- I have just been wondering how to do >> such things, and I would prefer f2py over swig, since it is more >> familiar to me. >> >> However, I can't seem to get the example to work. It complies fine, >> but I get an output array of all zeros (the same size as the input >> array). Any suggestions? > > What numpy version are you using? I am using numpy from svn repo. > > Did you follow exactly the instructions below or did you try something > different? Could you send the full output of setup.py build > command, may > be offlist? Remember to do `rm -rf build *.so` before rebuilding. > What platform/compilers are you using? > > Pearu > >> On Feb 7, 2006, at 5:23 AM, Pearu Peterson wrote: >> >>> >>> >>> On Tue, 7 Feb 2006, Bryan Cole wrote: >>> >>>> What's the best approach for wrapping C libraries where the >>>> inputs/outputs are C arrays (which I want to pass to/from Numeric >>>> arrays)? 
>>>> >>>> I'm a regular SWIG user, but I don't have any typemaps to handle >>>> C-array-to-Numeric conversion and I can't see any distributed with >>>> SWIG. >>>> What approach is used by SciPy? (I know scipy is mostly fortran >>>> routines >>>> but surely you have a few C ones?). >>> >>> f2py can be easily be used also wrapping C libraries if writing some >>> Fortran-like syntax (see f2py usersguide) does not scare you. >>> Here's a >>> simple example: >>> >>> /* File foo.c */ >>> void foo(int n, double *x, double *y) { >>> int i; >>> for (i=0;i>> y[i] = x[i] + i; >>> } >>> } >>> >>> ! File m.pyf >>> python module m >>> interface >>> subroutine foo(n,x,y) >>> intent(c) foo ! foo is a C function >>> intent(c) ! all foo arguments are >>> considered as C based >>> integer intent(hide), depend(x) :: n=len(x) ! n is the lenght >>> of input array x >>> double precision intent(in) :: x(n) ! x is input array (or >>> arbitrary sequence) >>> double precision intent(out) :: y(n) ! y is output array, see >>> code in foo.c >>> end subroutine foo >>> end interface >>> end python module m >>> >>> # File setup.py >>> def configuration(parent_package='',top_path=None): >>> from numpy.distutils.misc_util import Configuration >>> config = Configuration('',parent_package,top_path) >>> >>> config.add_extension('m', >>> sources = ['m.pyf','foo.c']) >>> return config >>> if __name__ == "__main__": >>> from numpy.distutils.core import setup >>> setup(**configuration(top_path='').todict()) >>> >>> Building and testing module m thats function foo calls C function: >>> >>> python setup.py build_src build_ext --inplace >>> >>> python >>>>>> import m >>>>>> print m.foo.__doc__ >>> foo - Function signature: >>> y = foo(x) >>> Required arguments: >>> x : input rank-1 array('d') with bounds (n) >>> Return objects: >>> y : rank-1 array('d') with bounds (n) >>> >>>>>> m.foo([1,2,3,4,5]) >>> array([ 1., 3., 5., 7., 9.]) >>>>>> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From jh at oobleck.astro.cornell.edu Tue Feb 7 13:29:44 2006 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Tue, 7 Feb 2006 13:29:44 -0500 Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: (scipy-user-request@scipy.net) References: Message-ID: <200602071829.k17ITi3Q007183@oobleck.astro.cornell.edu> Pearu, That's a great and very simple example! Would you be willing to turn it into a Cookbook page? Thanks! --jh-- From sransom at nrao.edu Tue Feb 7 13:30:22 2006 From: sransom at nrao.edu (Scott Ransom) Date: Tue, 7 Feb 2006 13:30:22 -0500 Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: <1139310435.17080.23.camel@bryan.teraview.local> References: <1139310435.17080.23.camel@bryan.teraview.local> Message-ID: <200602071330.22768.sransom@nrao.edu> I'm a bit hesitant to post this since the code is a bit old and possibly a bit crusty, but I have several typemaps that I've been using successfully for the past ~6yrs that allow me to do some interfacing of 1-D and 2-D arrays between C and Numeric. What I have allows me to use a C function on an array generated in Numeric, as well as turn an array that was created in C (with malloc) into a usable Numeric array. 
Some of the maps are pretty specific to my code, but if there is enough interest (and if someone is willing to check things out wrt the new numpy), I would be willing to post the typemaps and maybe some usage examples. Scott PS: These typemaps are the same ones I announced several years ago on the numeric mailing list(s). On Tuesday 07 February 2006 06:07, Bryan Cole wrote: > Hi All, > > What's the best approach for wrapping C libraries where the > inputs/outputs are C arrays (which I want to pass to/from Numeric > arrays)? > > I'm a regular SWIG user, but I don't have any typemaps to handle > C-array-to-Numeric conversion and I can't see any distributed with SWIG. > What approach is used by SciPy? (I know scipy is mostly fortran routines > but surely you have a few C ones?). > > Can anyone point me to existing typemaps for this purpose, to save me > writing my own. > > thanks, > Bryan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From rclewley at cam.cornell.edu Tue Feb 7 15:01:02 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Tue, 7 Feb 2006 15:01:02 -0500 (EST) Subject: [SciPy-user] PyDSTool In-Reply-To: References: Message-ID: Alan, to answer your questions: 1. I don't think there's any such destiny :) ... inasmuch as we haven't presumed that our libraries would be considered either useful or well-written enough to be brought into the SciPy fold. Our classes are still effectively prototypes, and I would not like to get roasted for trying to integrate them into SciPy without a lot more time (and probably outside help) to get them up to scratch. Now that we have shown proof-of-concept with some of our constructs, such as a Pointset, and some of the symbolic expression stuff, I expect that these could be broadly useful to scientific users in general. But I personally don't have much time to be a coder any more. If someone wanted to help me and/or give me feedback on what might be of use to SciPy users then perhaps I'll be less hesitant. 2. In short, no. It was a quick fix when we were trying to get any available integrator running with our core libraries, in order to test out our ideas for wrapped integrators in our simulation environment. Other Python developers here at Cornell (Chris Myers, Jim Sethna, Ryan Gutenkunst, and others of the SloppyCell group) have used and adapted SciPy's LSODE solver in their package, and we have discussed the relative merits of trying to improve SciPy's support of dynamical systems through our "fancier" ODE solver interfaces. Apart from a lack of time on our part to invest in maintaining publicly-releasable code, the other issue with PyDSTool is that users would have to invest in our basic classes to work with our wrapped integrators. Even though the UI permits very easy specification of ODE right-hand sides and other options, I am nevertheless worried that users may find the extra baggage of our Pointset, Generator, and Trajectory classes too much to just solve small ODE systems with minimal fuss. I guess I'd like to hear feedback about that stuff. We have lots of working demos in the download for people to try out.... 
My collaborator, Erik Sherwood, writes the PyDSTool SWIG interfaces and the C or Fortran wrappers to legacy ODE integrator codes. His ODE solver classes are themselves wrapped inside those of PyDSTool, and as such his python interface is more along the lines of the present SciPy ODE solvers (input and output is with Numarray arrays, not my bells-and-whistles Trajectory classes). Perhaps that would be the way to go in the short term. Certainly, he has done great work in wrapping high-end solvers like Hairer & Wanner's Dopri 853 and Radau 5 codes to work in Python, which could be very useful for SciPy users. He has even added support for user-defined events and non-constant mass-matrix support in Radau, among other useful extensions. He is in the process of adding more H&W integrators that use the same interface style. But you'd have to ask him if he'd like to be responsible for maintaining SciPy-released code! Of course, the neat thing I think about PyDSTool wrapping these legacy integrators is that it doesn't require the user to write any actual _code_ in order to get super-fast ODE integration of C vector fields (Cf. VODE which uses python function callbacks for the right-hand sides). We use distutils to compile a DLL on the fly, which contains the v.f. and an instance of the integrator. This is automatically generated and reloaded by PyDSTool. Last, but far from least, we have not yet migrated to new SciPy, so that would also be a sticking point. Our code uses a mixture of numarray and old SciPy array functions and classes, and needs some work to port that stuff to the new SciPy. Don't know when that will happen.... Sorry for the long post, but often the shortest questions require the longest answers! Rob On Tue, 7 Feb 2006, Alan G Isaac wrote: > 1. Is PyDSTool destined to be part of SciPy? This seems > like a good idea ... > 2. Was PyDSTool's scipy_ode.py forked from SciPy's ode.py > for any reason other than lack of communication? > > Cheers, > Alan Isaac > > PS The note in scipy_ode.py reads:: > > # This is a trivially-adapted version of the original > SciPy integrate code. > # Changes: (1) removed the annoying print statement whenever > # the ODE code is 'found', which is unwanted screen output during > # computations. > # (2) Made ode a new-style class. > # (3) Changed a lambda occurrence in one of the methods to use noneFn instead. > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > ----------------------------------- Rob Clewley Research Associate Department of Mathematics and Center for Applied Mathematics Cornell University Ithaca, NY 14853 www.cam.cornell.edu/~rclewley Tel: 607-255-7760 Fax: 607-255-9860 From ryanlists at gmail.com Tue Feb 7 15:01:58 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 7 Feb 2006 15:01:58 -0500 Subject: [SciPy-user] installation problem Message-ID: I am trying to install scipy/numpy from SVN on a computer on campus. I have what I thought was an identical Ubuntu installation to the one I have at home where everything is working beautifully. I am getting the following while trying to install numpy: I am getting a message about lapack not being in atlas, which I don't understand since I have the ubuntu packages for both installed (which I think is just like on my home computer). Thanks for your help, Ryan ryan at Lagrange:~/svn/numpy$ sudo python setup.py install Running from numpy source directory. 
Warning: not existing data path in numpy/distutils: site.cfg No module named __svn_version__ F2PY Version 2_2059 blas_opt_info: blas_mkl_info: NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS NOT AVAILABLE atlas_blas_info: FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c include_dirs = ['/usr/include'] running build_src building extension "atlas_version" sources adding 'build/src/atlas_version_-0xf63e660.c' to sources. running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] include_dirs = ['/usr/include'] lapack_opt_info: lapack_mkl_info: mkl_info: NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: numpy.distutils.system_info.atlas_info /home/ryan/svn/numpy/numpy/distutils/system_info.py:811: UserWarning: ********************************************************************* Could not find lapack library within the ATLAS installation. ********************************************************************* warnings.warn(message) FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/lib/sse2'] language = c define_macros = [('ATLAS_WITHOUT_LAPACK', None)] include_dirs = ['/usr/include'] running build_src building extension "atlas_version" sources adding 'build/src/atlas_version_0x39f8be52.c' to sources. running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext lapack_info: FOUND: libraries = ['lapack'] library_dirs = ['/usr/lib'] language = f77 FOUND: libraries = ['f77blas', 'cblas', 'atlas', 'lapack'] library_dirs = ['/usr/lib/sse2', '/usr/lib'] define_macros = [('ATLAS_WITHOUT_LAPACK', None), ('ATLAS_INFO', '"\\"3.6.0\\""')] language = f77 include_dirs = ['/usr/include'] running install running build running config_fc running build_src building py_modules sources building extension "numpy.core.multiarray" sources adding 'build/src/numpy/core/config.h' to sources. adding 'build/src/numpy/core/__multiarray_api.h' to sources. adding 'build/src/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files= ['build/src/numpy/core/src/scalartypes.inc', 'build/src/numpy/core/src/arraytypes.inc', 'build/src/numpy/core/config.h', 'build/src/numpy/core/__multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build/src/numpy/core/config.h' to sources. adding 'build/src/numpy/core/__ufunc_api.h' to sources. adding 'build/src/numpy/core/src' to include_dirs. numpy.core - nothing done with h_files= ['build/src/numpy/core/src/scalartypes.inc', 'build/src/numpy/core/src/arraytypes.inc', 'build/src/numpy/core/config.h', 'build/src/numpy/core/__ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build/src/numpy/core/config.h' to sources. adding 'build/src/numpy/core/__multiarray_api.h' to sources. numpy.core - nothing done with h_files= ['build/src/numpy/core/config.h', 'build/src/numpy/core/__multiarray_api.h'] building extension "numpy.core._dotblas" sources adding 'numpy/core/blasdot/_dotblas.c' to sources. building extension "numpy.lib._compiled_base" sources building extension "numpy.dft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources adding 'numpy/linalg/lapack_litemodule.c' to sources. 
building extension "numpy.random.mtrand" sources building data_files sources running build_py copying build/src/numpy/__config__.py -> build/lib.linux-i686-2.4/numpy copying build/src/numpy/distutils/__config__.py -> build/lib.linux-i686-2.4/numpy/distutils copying numpy/f2py/__svn_version__.py -> build/lib.linux-i686-2.4/numpy/f2py copying numpy/core/__svn_version__.py -> build/lib.linux-i686-2.4/numpy/core running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext building 'numpy.core.multiarray' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-Ibuild/src/numpy/core/src -Inumpy/core/include -Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c' gcc: numpy/core/src/multiarraymodule.c In file included from numpy/core/src/multiarraymodule.c:65: numpy/core/src/arrayobject.c:3527: error: conflicting types for 'PyArray_CheckStrides' build/src/numpy/core/__multiarray_api.h:157: error: previous declaration of 'PyArray_CheckStrides' was here In file included from numpy/core/src/multiarraymodule.c:65: numpy/core/src/arrayobject.c:3527: error: conflicting types for 'PyArray_CheckStrides' build/src/numpy/core/__multiarray_api.h:157: error: previous declaration of 'PyArray_CheckStrides' was here error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -Ibuild/src/numpy/core/src -Inumpy/core/include -Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-i686-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 1 From oliphant.travis at ieee.org Tue Feb 7 15:13:13 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 07 Feb 2006 13:13:13 -0700 Subject: [SciPy-user] installation problem In-Reply-To: References: Message-ID: <43E8FF59.3020303@ieee.org> Ryan Krauss wrote: >I am trying to install scipy/numpy from SVN on a computer on campus. >I have what I thought was an identical Ubuntu installation to the one >I have at home where everything is working beautifully. I am getting >the following while trying to install numpy: > >I am getting a message about lapack not being in atlas, which I don't >understand since I have the ubuntu packages for both installed (which >I think is just like on my home computer). > >Thanks for your help, > > It looks like you need to remove the build directory and try again. In the numpy directory do rm -fr build/ PyArray_CheckStrides recently changed its interface and the setup.py script did not re-generate the header (though it probably should do that automatically). -Travis From kael.fischer at gmail.com Tue Feb 7 16:38:56 2006 From: kael.fischer at gmail.com (Kael Fischer) Date: Tue, 7 Feb 2006 13:38:56 -0800 Subject: [SciPy-user] Plotting libs removed but still out front on scipy.org Message-ID: I think it's potentially confusing to have the pretty picture on the scipy.org main page show functionality that scipy no longer has. More importantly, thanks to whoever added the cov_x to leastsq ! Kael -- Kael Fischer, Ph.D DeRisi Lab - Univ. Of California San Francisco 415-514-4320 -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis at enthought.com Tue Feb 7 16:54:36 2006 From: travis at enthought.com (Travis N. 
Vaught) Date: Tue, 07 Feb 2006 15:54:36 -0600 Subject: [SciPy-user] Plotting libs removed but still out front on scipy.org In-Reply-To: References: Message-ID: <43E9171C.2030507@enthought.com> Kael Fischer wrote: > I think it's potentially confusing to have the pretty picture on the > scipy.org main page show functionality that scipy > no longer has. I agree. Unfortunately, screen shots of scipy just aren't very sexy since the removal of the plotting subpackages. I'll work on a couple of "alternative" screenshots of scipy functionality--recommendations are welcome. Travis V. From pearu at scipy.org Tue Feb 7 16:09:51 2006 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 7 Feb 2006 15:09:51 -0600 (CST) Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: <5DC7FBDB-9D76-44D7-9DFD-4461DBF36941@tamu.edu> References: <1139310435.17080.23.camel@bryan.teraview.local> <5DC7FBDB-9D76-44D7-9DFD-4461DBF36941@tamu.edu> Message-ID: On Tue, 7 Feb 2006, Robert Hetland wrote: > > Here is the relevant information: > > Mac OS X 10.4.4, gcc, g77, numpy.__version__ = '0.9.5.2044' > > $ ls > foo.c m.pyf setup.py > $ python setup.py build_src build_ext --inplace > build.out > $ cat build.out > running build_src > building extension "m" sources > f2py options: [] > f2py: ./m.pyf > Reading fortran codes... > Reading file './m.pyf' (format:free) > {'this': 'intent', 'after': '(c) ', 'before': ''} > Line #5 in ./m.pyf:" intent(c) " Remove all spaces after intent(c) statement. It looks like a silly bug in f2py and I'll look into fixing it. Pearu From hetland at tamu.edu Tue Feb 7 17:40:02 2006 From: hetland at tamu.edu (Robert Hetland) Date: Tue, 7 Feb 2006 16:40:02 -0600 Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: References: <1139310435.17080.23.camel@bryan.teraview.local> <5DC7FBDB-9D76-44D7-9DFD-4461DBF36941@tamu.edu> Message-ID: <9FF8199A-3CAC-439D-AF09-F3C6AC189D71@tamu.edu> On Feb 7, 2006, at 3:09 PM, Pearu Peterson wrote: > Remove all spaces after > > intent(c) > > statement. Yes, this worked. Thanks! ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From elcorto at gmx.net Tue Feb 7 18:28:49 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 08 Feb 2006 00:28:49 +0100 Subject: [SciPy-user] Plotting libs removed but still out front on scipy.org In-Reply-To: <43E9171C.2030507@enthought.com> References: <43E9171C.2030507@enthought.com> Message-ID: <43E92D31.8050002@gmx.net> Travis N. Vaught wrote: > > I'll work on a couple of "alternative" screenshots of scipy > functionality--recommendations are welcome. > The http://scipy.org/Topical_Software page is really cool and helpful. Why not take some matplotlib/ipython/scipy screenshots? I don't mean to dwarf the other packages (I'm also using gnuplot.py) but mpl and ipython belong together in some sort af way (ipython -pylab) and I think most people using these two also use scipy. cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From aisaac at american.edu Tue Feb 7 22:21:38 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 7 Feb 2006 22:21:38 -0500 Subject: [SciPy-user] PyDSTool In-Reply-To: References: Message-ID: Thanks for the extensive reply about PyDSTool. 
I guess my question now becomes: what arrangement is most likely to keep this interesting effort in front of people and possibly bring in new development efforts? My intuition is that a closer association (sandbox?) with SciPy would help. But you'll know best. Cheers, Alan Isaac From rclewley at cam.cornell.edu Wed Feb 8 00:53:25 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Wed, 8 Feb 2006 00:53:25 -0500 (EST) Subject: [SciPy-user] PyDSTool In-Reply-To: References: Message-ID: Alan, It's nice to hear an encouraging voice at this early stage, but it's simply up to SciPy users to take a look at our package and tell us whether they would want to see any of our ideas make it into SciPy (in whatever eventual form) ... I'm all up for a closer association if they are. All I can say is please check out the features on offer on the ProjectOverview page of our wiki, try out the demos from our download, and let us know if anything is of wider interest to the scientific community. We already have the public discussion forum on SourceForge, and people can email me or get a login for our wiki and discuss their views there. If that results in contributions of code or further development collaboration then so much the better. Thanks, Rob On Tue, 7 Feb 2006, Alan G Isaac wrote: > Thanks for the extensive reply about PyDSTool. > > I guess my question now becomes: > what arrangement is most likely to keep this > interesting effort in front of people and > possibly bring in new development efforts? > > My intuition is that a closer association (sandbox?) > with SciPy would help. But you'll know best. > > Cheers, > Alan Isaac > ----------------------------------- Rob Clewley Research Associate Department of Mathematics and Center for Applied Mathematics Cornell University Ithaca, NY 14853 www.cam.cornell.edu/~rclewley Tel: 607-255-7760 Fax: 607-255-9860 From prabhu_r at users.sf.net Wed Feb 8 01:05:13 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Wed, 8 Feb 2006 11:35:13 +0530 Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: <200602071330.22768.sransom@nrao.edu> References: <1139310435.17080.23.camel@bryan.teraview.local> <200602071330.22768.sransom@nrao.edu> Message-ID: <17385.35353.61946.919882@prpc.aero.iitb.ac.in> >>>>> "Scott" == Scott Ransom writes: Scott> I'm a bit hesitant to post this since the code is a bit old Scott> and possibly a bit crusty, but I have several typemaps that Scott> I've been using successfully for the past ~6yrs that allow Scott> me to do some interfacing of 1-D and 2-D arrays between C Scott> and Numeric. Scott> What I have allows me to use a C function on an array Scott> generated in Numeric, as well as turn an array that was Scott> created in C (with malloc) into a usable Numeric array. Scott> Some of the maps are pretty specific to my code, but if Scott> there is enough interest (and if someone is willing to Scott> check things out wrt the new numpy), I would be willing to Scott> post the typemaps and maybe some usage examples. I think this would be useful in general and well worth putting up on the wiki. A while back I cooked up a *very simple* SWIG interface file that demonstrates passing SciPy arrays in and out of a C program using the buffer protocol. I've attached them here just in case. The SWIG interface file can be compiled like so: $ python setup.py build_src build_ext --inplace # Now test it. 
$ python test_buffer.py cheers, prabhu -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: buffer.i URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: setup.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: test_buffer.py URL: From jh at oobleck.astro.cornell.edu Wed Feb 8 01:45:30 2006 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Wed, 8 Feb 2006 01:45:30 -0500 Subject: [SciPy-user] Plotting libs removed but still out front on scipy.org In-Reply-To: (scipy-user-request@scipy.net) References: Message-ID: <200602080645.k186jU9Z010048@oobleck.astro.cornell.edu> We agreed at the 2004 SciPy conference (or were we a workshop then?) and subsequent discussions to have finer-grained packaging (at least on *nix-based platforms), but to have umbrella packages that would use the dependency mechanism to pull in what a user needed. This would allow the faster-developing packages to release more frequently than the stable packages, but the user's system would still always have current stuff. We were also going to officially bless certain packages for plotting, etc., over others by means of documenting their use almost exclusively in the intro docs, including them in the umbrella packages, etc. We don't yet have official package releases for the new NumPy and SciPy, but we did agree that Matplotlib was going to be the plotting package of choice. So, I don't see why we can't put graphics made with it on the front page, or anywhere else. Plus, the site isn't just for SciPy, it's a portal for all scientific computing in Python, so *any* python-generated graphics should be fine (though I'd lobby strongly for open-source only, and "blessed" packages except where otherwise specifically called for). I also think that, in most cases, clicking on a graphic on the site should take you to the recipe for creating it. Aside from giving full disclosure, it is also very educational. --jh-- From rudolphv at gmail.com Wed Feb 8 05:16:15 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Wed, 8 Feb 2006 12:16:15 +0200 Subject: [SciPy-user] Concatenating 1D arrays Message-ID: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> What is the Numpy prefered way of recursively concatenating 1D arrays (vectors) into a 2D array (matrix)? I'm trying to convert the following Matlab code to Python: x=[]; y=[]; for k=1:N, c = get_column_vector_from_somewhere() r = get_row_vector_from_somewhere() x = [x c] y = [y; r] end In other words, x and y get initialized to empty arrays and then grown in-place. x is grown (appended) column wise with vectors c and y is appended (row wise) with vectors r. In Numpy (numpy), how do I initialize an empty array with the correct dimensions which can then be used with functions like: column_stack(), concatenate(), etc. ? 
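(A minimal sketch of one way to do this with numpy, much as the replies further
down the thread show: collect the 1-D arrays in plain Python lists and stack
them once at the end. The get_* helpers and the loop condition below are
placeholders taken from the question, not real functions.)

import numpy

cols = []                              # 1-D arrays destined to become columns
rows = []                              # 1-D arrays destined to become rows
while more_data_available():           # placeholder loop condition
    cols.append(get_column_vector_from_somewhere())
    rows.append(get_row_vector_from_somewhere())

x = numpy.column_stack(cols)           # each 1-D array becomes a column of x
y = numpy.vstack(rows)                 # each 1-D array becomes a row of y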
-- Rudolph van der Merwe From cimrman3 at ntc.zcu.cz Wed Feb 8 05:38:43 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 08 Feb 2006 11:38:43 +0100 Subject: [SciPy-user] Wrapping C libraries for Numeric integration In-Reply-To: <17385.35353.61946.919882@prpc.aero.iitb.ac.in> References: <1139310435.17080.23.camel@bryan.teraview.local> <200602071330.22768.sransom@nrao.edu> <17385.35353.61946.919882@prpc.aero.iitb.ac.in> Message-ID: <43E9CA33.9050004@ntc.zcu.cz> Prabhu Ramachandran wrote: >>>>>>"Scott" == Scott Ransom writes: > > > Scott> I'm a bit hesitant to post this since the code is a bit old > Scott> and possibly a bit crusty, but I have several typemaps that > Scott> I've been using successfully for the past ~6yrs that allow > Scott> me to do some interfacing of 1-D and 2-D arrays between C > Scott> and Numeric. > > Scott> What I have allows me to use a C function on an array > Scott> generated in Numeric, as well as turn an array that was > Scott> created in C (with malloc) into a usable Numeric array. > > Scott> Some of the maps are pretty specific to my code, but if > Scott> there is enough interest (and if someone is willing to > Scott> check things out wrt the new numpy), I would be willing to > Scott> post the typemaps and maybe some usage examples. > > I think this would be useful in general and well worth putting up on > the wiki. I have just created Cookbook/SWIG_and_NumPy page and put my own meager attempt to address this topic there. All Numpy swiggers are encouraged to add their ideas :) r. From a.u.r.e.l.i.a.n at gmx.net Wed Feb 8 06:10:22 2006 From: a.u.r.e.l.i.a.n at gmx.net (aurelian) Date: Wed, 08 Feb 2006 12:10:22 +0100 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> Message-ID: <43E9D19E.2040002@gmx.net> Hi, I know two ways of achieving what you want. The first one, which is the 'closer' translation of the matlab code, reads: ----- from numpy import zeros, concatenate x=zeros((vlen, 0)) y=zeros((vlen, 0)) # Use zeros((vlen, 0)) to get an 'empty' column-oriented 2d array. # vlen is the length of one of your vectors. for k in xrange(0, N): # this will run from 0 to N-1! c = foo(k) r = bar(k) x = concatenate((x, c), axis=1) y = concatenate((y, r), axis=1) ---- The way I prefer is to initialize x and y with full dimension and "fill" it. This is imho easier to read and shorter. ---- from numpy import zeros x = zeros((vlen, N), dtype=footype) y = zeros((vlen, N), dtype=bartype) # note that x and y must have correct datatype for k in xrange(0, N): # take a slice of x and set it to the return values of foo(k) x[:, k] = foo(k) y[:, k] = bar(k) ---- HTH, Johannes Loehnert From rudolphv at gmail.com Wed Feb 8 07:16:00 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Wed, 8 Feb 2006 14:16:00 +0200 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <43E9D19E.2040002@gmx.net> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43E9D19E.2040002@gmx.net> Message-ID: <97670e910602080416t69f04da2l943e9e70fd6a2fea@mail.gmail.com> > The way I prefer is to initialize x and y with full dimension and "fill" > it. This is imho easier to read and shorter. The problem is that the full dimension, i.e. number of vectors to be concatenated is not known apriori. 
I know that growing arrays in-place as apposed to filling a pre-sized array is very inefficient in Matlab (is this also the case for Numpy?), but sometimes it cannot be avoided. R. -- Rudolph van der Merwe From ryanlists at gmail.com Wed Feb 8 10:05:08 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 8 Feb 2006 10:05:08 -0500 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <97670e910602080416t69f04da2l943e9e70fd6a2fea@mail.gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43E9D19E.2040002@gmx.net> <97670e910602080416t69f04da2l943e9e70fd6a2fea@mail.gmail.com> Message-ID: Growing arrays will be less efficient in NumPy as well (I can't think of a way to avoid this in any language), because you need to allocate more memory each time you add a row or column. I don't know if this is slightly better in Python vs. Matlab because Python uses references so much and Matlab makes copies of almost everything. Ryan On 2/8/06, Rudolph van der Merwe wrote: > > The way I prefer is to initialize x and y with full dimension and "fill" > > it. This is imho easier to read and shorter. > > The problem is that the full dimension, i.e. number of vectors to be > concatenated is not known apriori. I know that growing arrays in-place > as apposed to filling a pre-sized array is very inefficient in Matlab > (is this also the case for Numpy?), but sometimes it cannot be > avoided. > > R. > > -- > Rudolph van der Merwe > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From manouchk at gmail.com Wed Feb 8 10:55:28 2006 From: manouchk at gmail.com (manouchk) Date: Wed, 8 Feb 2006 13:55:28 -0200 Subject: [SciPy-user] derivatives of datas? Message-ID: <200602081355.28589.manouchk@gmail.com> Hi, I'd like to calculate derivative curve of datas. Datas is of type first column =X and 2nd one Y=f(X) that can be put in 2 vector (for example). I'd like to know if scipy provides a function data derivatives (that I would have miss?), more advanced than a simple point to point linear derivative, using weighted spline for example... ? Emmanuel From nwagner at mecha.uni-stuttgart.de Wed Feb 8 10:53:29 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 08 Feb 2006 16:53:29 +0100 Subject: [SciPy-user] derivatives of datas? In-Reply-To: <200602081355.28589.manouchk@gmail.com> References: <200602081355.28589.manouchk@gmail.com> Message-ID: <43EA13F9.3000906@mecha.uni-stuttgart.de> manouchk wrote: >Hi, > >I'd like to calculate derivative curve of datas. Datas is of type first column >=X and 2nd one Y=f(X) that can be put in 2 vector (for example). I'd like to >know if scipy provides a function data derivatives (that I would have miss?), >more advanced than a simple point to point linear derivative, using weighted >spline for example... ? > >Emmanuel > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > See interpolate.splev splev(x, tck, der=0) Evaulate a B-spline and its derivatives. Description: Given the knots and coefficients of a B-spline representation, evaluate the value of the smoothing polynomial and it's derivatives. This is a wrapper around the FORTRAN routines splev and splder of FITPACK. Inputs: x (u) -- a 1-D array of points at which to return the value of the smoothed spline or its derivatives. 
If tck was returned from splprep, then the parameter values, u should be given. tck -- A sequence of length 3 returned by splrep or splprep containg the knots, coefficients, and degree of the spline. der -- The order of derivative of the spline to compute (must be less than or equal to k). Outputs: (y, ) y -- an array of values representing the spline function or curve. If tck was returned from splrep, then this is a list of arrays representing the curve in N-dimensional space. Nils From hetland at tamu.edu Wed Feb 8 11:10:37 2006 From: hetland at tamu.edu (Robert Hetland) Date: Wed, 8 Feb 2006 10:10:37 -0600 Subject: [SciPy-user] PyDSTool In-Reply-To: References: Message-ID: <13A03148-E08B-4D26-A7AB-A25D1C50E870@tamu.edu> On Feb 7, 2006, at 11:53 PM, Robert Clewley wrote: > but it's > simply up to SciPy users to take a look at our package and tell us > whether > they would want to see any of our ideas make it into SciPy I, for one, would love to see this. -Rob. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Feb 8 12:26:25 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 08 Feb 2006 11:26:25 -0600 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> Message-ID: <43EA29C1.90807@gmail.com> Rudolph van der Merwe wrote: > What is the Numpy prefered way of recursively concatenating 1D arrays > (vectors) into a 2D array (matrix)? > > I'm trying to convert the following Matlab code to Python: > > x=[]; > y=[]; > for k=1:N, > c = get_column_vector_from_somewhere() > r = get_row_vector_from_somewhere() > x = [x c] > y = [y; r] > end Assuming that you don't actually know N (in which case Johannes' second example is appropriate), x = [] y = [] while have_more_vectors(): x.append(get_column_vector_from_somewhere()) y.append(get_row_vector_from_somewhere()) x = hstack(x) y = vstack(y) -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ryanlists at gmail.com Wed Feb 8 12:31:04 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 8 Feb 2006 12:31:04 -0500 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <43EA29C1.90807@gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43EA29C1.90807@gmail.com> Message-ID: Are hstack and vstack new to numpy? They are pretty cool. On 2/8/06, Robert Kern wrote: > Rudolph van der Merwe wrote: > > What is the Numpy prefered way of recursively concatenating 1D arrays > > (vectors) into a 2D array (matrix)? > > > > I'm trying to convert the following Matlab code to Python: > > > > x=[]; > > y=[]; > > for k=1:N, > > c = get_column_vector_from_somewhere() > > r = get_row_vector_from_somewhere() > > x = [x c] > > y = [y; r] > > end > > Assuming that you don't actually know N (in which case Johannes' second example > is appropriate), > > x = [] > y = [] > > while have_more_vectors(): > x.append(get_column_vector_from_somewhere()) > y.append(get_row_vector_from_somewhere()) > > x = hstack(x) > y = vstack(y) > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." 
> -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed Feb 8 12:33:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 08 Feb 2006 11:33:56 -0600 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43EA29C1.90807@gmail.com> Message-ID: <43EA2B84.5040107@gmail.com> Ryan Krauss wrote: > Are hstack and vstack new to numpy? They are pretty cool. They were in scipy_base for a long time and have been brought into numpy proper because they are that cool. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From schofield at ftw.at Wed Feb 8 13:40:36 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 08 Feb 2006 19:40:36 +0100 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <97670e910602080416t69f04da2l943e9e70fd6a2fea@mail.gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43E9D19E.2040002@gmx.net> <97670e910602080416t69f04da2l943e9e70fd6a2fea@mail.gmail.com> Message-ID: <43EA3B24.7030504@ftw.at> Rudolph van der Merwe wrote: >>The way I prefer is to initialize x and y with full dimension and "fill" >>it. This is imho easier to read and shorter. >> >> >The problem is that the full dimension, i.e. number of vectors to be >concatenated is not known apriori. I know that growing arrays in-place >as apposed to filling a pre-sized array is very inefficient in Matlab >(is this also the case for Numpy?), but sometimes it cannot be >avoided. > > How about a list of arrays? >>> x = []; y = [] >>> for k in xrange(N): ... x += [get_col_vec()] ... y += [get_row_vec()] ... and then, when you're done, convert to matrices with >>> x = mat(x).T >>> y = mat(x) This is slower than pre-allocating an array, but should be MUCH faster than the Matlab construct you cited, which I think reallocates memory and copies the entire matrix each iteration ;) -- Ed From ryanlists at gmail.com Wed Feb 8 16:42:59 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 8 Feb 2006 16:42:59 -0500 Subject: [SciPy-user] strange,frustrating error Message-ID: I am getting an error that is driving me crazy on code that used to work. And the really weird thing is that if I use the debugger to set a trace just before the error and step into the specific line that was causing the error, the code executes correctly. But if I just try to step over the line (debugger n vs. debugger s commands), I get the error. Here is the error message: /home/ryan/thesis/feedback_modeling/experiments/samii_model.py in __init__(self, beamparams, maxsize, symname, symlabel, symsub, usez) 44 class samiiBeam(TMM.beam.BeamElement): 45 def __init__(self,beamparams={'EI':339134.5276,'mu':5.7281,'L':4.6482},maxsize=ms,symname='Ubeam',symlabel='beam',symsub=True,usez=True): ---> 46 return TMM.beam.BeamElement.__init__(self,beamparams,maxsize=maxsize,symlabel=symlabel,symname=symname,symsub=symsub,usez=usez) 47 48 class samiiLink0(TMM.rigid.RigidMass): TypeError: unbound method __init__() must be called with BeamElement instance as first argument (got samiiBeam instance instead) WARNING: Failure executing file: Basically, I have a derived class trying to call its super-class (or parent class) constructor and getting the error that it isn't an instance of its own super-class. 
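(One way this exact TypeError can arise, offered only as a hedged illustration
and not as a diagnosis of the code above: if the module defining the base class
is reloaded or re-executed, the derived class still inherits from the *old*
class object, and Python 2's unbound-method check then rejects the instance. A
self-contained sketch with made-up names:)

# illustrative only; Base stands in for a class like TMM.beam.BeamElement
class Base:
    def __init__(self):
        pass

class Child(Base):                 # built against the first Base class object
    def __init__(self):
        Base.__init__(self)        # looks up the name 'Base' at call time

class Base:                        # re-defining Base simulates a module reload:
    def __init__(self):            # 'Base' now names a brand-new class object
        pass

c = Child()
# TypeError: unbound method __init__() must be called with Base instance
# as first argument (got Child instance instead)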
The super-class is defined in another module and imported (I used to import it with a from statement, but that wasn't working either today). I call this constructor at another place in the code: 67 beam=samiiBeam() and that is the line that generates an error if I just hit n in the debugger, but executes correctly if I step into it and exectute the __init__ method step-by-step. Any help, Ryan From manouchk at gmail.com Wed Feb 8 17:05:02 2006 From: manouchk at gmail.com (manouchk) Date: Wed, 8 Feb 2006 20:05:02 -0200 Subject: [SciPy-user] derivatives of datas? In-Reply-To: <43EA13F9.3000906@mecha.uni-stuttgart.de> References: <200602081355.28589.manouchk@gmail.com> <43EA13F9.3000906@mecha.uni-stuttgart.de> Message-ID: <200602082005.02712.manouchk@gmail.com> Thank you so much Nils!!!!!! I did adore! Sorry for my probable exagerated enthusiasm, I just started to use python for work a few days ago... I'm just getting the position of the "smoothed" derivative of some datas. The smoothing condition (s) though is critical! peace of (bad, unoptimized etc...) code allfiles=listdir(os.getcwd()) allfiles.sort() for filename in allfiles: if filename.find("plot"): f = file(filename) f.readline() # get 7 column of float from file filename data = array([(float(x1),float(x2),float(x3),float(x4),float(x5),float(x6),float(x7)) for (x1,x2,x3,x4,x5,x6,x7) in [l.split() for l in f]]) #X=> col0 #Y=> (col1+col2+col3+col4+col5+col6)/5 X=data[:,0] Y=(data[:,5]+data[:,4]+data[:,3]+data[:,2]+data[:,1])/5 #sX for smoothed_X have more ponts than data file sX= arange(min(X),max(X),(max(X)-min(X))/1000) #calculate B-spline representation rep = scipy.interpolate.splrep(X, Y,s=1e-6) #Evaulate a B-spline and its derivatives on sX sY = scipy.interpolate.splev(sX, rep) sDY = scipy.interpolate.splev(sX, rep, der=1) sD2Y = scipy.interpolate.splev(sX, rep, der=2) #look at the result window(0) plot(X,Y,'x',sX,sY,'o') window(1) plot (sX,sDY,'x') window(2) plot (sX,sD2Y,'x') Ic=sX.tolist()[sDY.tolist().index(max(sDY))] cheers, EMmanuel From ryanlists at gmail.com Wed Feb 8 20:01:23 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 8 Feb 2006 20:01:23 -0500 Subject: [SciPy-user] other strange behavior Message-ID: I am seeing other strange instances (completely unrelated to my other post) of code executing correctly when I step through it with the debugger, but not executing correctly if I just run it as a script. Has anyone else see this before? Ryan From robert.kern at gmail.com Wed Feb 8 20:03:45 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 08 Feb 2006 19:03:45 -0600 Subject: [SciPy-user] other strange behavior In-Reply-To: References: Message-ID: <43EA94F1.5060400@gmail.com> Ryan Krauss wrote: > I am seeing other strange instances (completely unrelated to my other > post) of code executing correctly when I step through it with the > debugger, but not executing correctly if I just run it as a script. > Has anyone else see this before? http://www.catb.org/jargon/html/H/heisenbug.html -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cookedm at physics.mcmaster.ca Wed Feb 8 22:40:33 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Wed, 08 Feb 2006 22:40:33 -0500 Subject: [SciPy-user] Replacing Numeric With NumPy In-Reply-To: (Rich Shepard's message of "Tue, 7 Feb 2006 10:01:19 -0800 (PST)") References: Message-ID: Rich Shepard writes: > Last evening I downloaded numpy-0.9.4 and scipy-0.4.4. I have an earlier > version of Numeric in /usr/lib/python2.4/site-packages/Numeric/. Should I > remove all references to Numeric before installing NumPy? As Pearu said, there's no need. You may wish to make sure you're using Numeric 24.2, though; with that version, passing Numeric arrays to numpy functions and vice versa a very low-cost operation, so you could keep a mixture of the two codes until you finish switching over. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From nokophala at aim.com Thu Feb 9 02:54:39 2006 From: nokophala at aim.com (nokophala at aim.com) Date: Thu, 09 Feb 2006 02:54:39 -0500 Subject: [SciPy-user] CFD in scipy In-Reply-To: References: Message-ID: <8C7FB65365DB516-880-55D1@mblk-r29.sysops.aol.com> Hi, Are there any success stories on the use of Scipy to do CFD or solve the Navier-Stokes equations, simple or complex problems? I would like to try some simple RTD distribution calculations for pulp/slurry flows through mixed batch/continuous reactors, and other relatively simple cases - but have no funds to do this so I cant buy advanced software yet. Thanks in advance, Noko ________________________________________________________________________ Check Out the new free AIM(R) Mail -- 2 GB of storage and industry-leading spam and email virus protection. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rudolphv at gmail.com Thu Feb 9 05:43:22 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Thu, 9 Feb 2006 12:43:22 +0200 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <43EA29C1.90807@gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43EA29C1.90807@gmail.com> Message-ID: <97670e910602090243sc5b9d57x3b44d2f72d9f29a6@mail.gmail.com> On 2/8/06, Robert Kern wrote: > x = [] > y = [] > > while have_more_vectors(): > x.append(get_column_vector_from_somewhere()) > y.append(get_row_vector_from_somewhere()) > > x = hstack(x) > y = vstack(y) The problem (as far as I know) is that all vectors (1-D arrays) are treated as row vectors in Numpy. So the example for y will work, but I'm not sure it will work for x. How do you construct a 1-D column vector which can then be appended to a list and finally hstacked? R. -- Rudolph van der Merwe From martin.wiechert at gmx.de Thu Feb 9 05:41:51 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Thu, 9 Feb 2006 11:41:51 +0100 Subject: [SciPy-user] segfault when calling PyArray_DescrFromType Message-ID: <200602091141.51520.martin.wiechert@gmx.de> Hi list, I'm trying to build an C extension, which uses arrays. It builds, and I can import it from python, but the very first call to a numpy function ea = (PyObject *) PyArray_DescrFromType (PyArray_INT); gives me a segfault. I have absolutely no clue, but nm -l mymodule.so | grep rray gives 000026a0 b PyArray_API /usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h:316 and this line reads static void **PyArray_API=NULL; which looks suspicious to me. Something wrong with my setup.py? Any suggestions? Regards, Martin. 
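A minimal sketch of the column-stacking question from Rudolph's "Concatenating 1D arrays" thread above, assuming a recent numpy; the helper name build_columns is invented for illustration:

import numpy

def build_columns(vectors):
    # each v has shape (n,); reshape to (n, 1) so hstack joins them as columns
    return numpy.hstack([v.reshape(-1, 1) for v in vectors])

cols = [numpy.arange(3), numpy.arange(3, 6)]
a = build_columns(cols)        # shape (3, 2); first list entry becomes column 0
b = numpy.column_stack(cols)   # the same result in a single call
assert a.shape == (3, 2) and (a == b).all()
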
From ryanlists at gmail.com Thu Feb 9 05:51:54 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 9 Feb 2006 05:51:54 -0500 Subject: [SciPy-user] CFD in scipy In-Reply-To: <8C7FB65365DB516-880-55D1@mblk-r29.sysops.aol.com> References: <8C7FB65365DB516-880-55D1@mblk-r29.sysops.aol.com> Message-ID: I typed Python CFD into google and this is the first thing that came up: http://datamining.anu.edu.au/~ole/pypar/py4cfd.pdf On 2/9/06, nokophala at aim.com wrote: > > > Hi, > Are there any success stories on the use of Scipy to do CFD or solve the > Navier-Stokes equations, simple or complex problems? I would like to try > some simple RTD distribution calculations for pulp/slurry flows through > mixed batch/continuous reactors, and other relatively simple cases - but > have no funds to do this so I cant buy advanced software yet. > > Thanks in advance, > Noko > ________________________________ > Check Out the new free AIM(R) Mail -- 2 GB of storage and industry-leading > spam and email virus protection. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > From cimrman3 at ntc.zcu.cz Thu Feb 9 06:08:50 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 09 Feb 2006 12:08:50 +0100 Subject: [SciPy-user] segfault when calling PyArray_DescrFromType In-Reply-To: <200602091141.51520.martin.wiechert@gmx.de> References: <200602091141.51520.martin.wiechert@gmx.de> Message-ID: <43EB22C2.7040006@ntc.zcu.cz> Martin Wiechert wrote: > Hi list, > > I'm trying to build an C extension, which uses arrays. It builds, and I can > import it from python, but the very first call to a numpy function > > ea = (PyObject *) PyArray_DescrFromType (PyArray_INT); > > gives me a segfault. > > I have absolutely no clue, but > > nm -l mymodule.so | grep rray > > gives > > 000026a0 b > PyArray_API /usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h:316 > > and this line reads > > static void **PyArray_API=NULL; > > which looks suspicious to me. Something wrong with my setup.py? > > Any suggestions? > > Regards, Martin. Such wierd errors can happen when you compile a module with a wrong version of numpy headers (that means that PyArray_API function pointers can point to different functions than you expect...). At least that is what had happened to me not so long ago. Try to clear all old numpy/scipy files and rebuild everything. cheers, r. From ryanlists at gmail.com Thu Feb 9 06:17:34 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 9 Feb 2006 06:17:34 -0500 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <97670e910602090243sc5b9d57x3b44d2f72d9f29a6@mail.gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43EA29C1.90807@gmail.com> <97670e910602090243sc5b9d57x3b44d2f72d9f29a6@mail.gmail.com> Message-ID: I am testing this on an old version of scipy on my wife's computer, but at least in version 0.3.2, you are correct: In [3]: v1=arange(1,6,2) In [4]: v1 Out[4]: [1,3,5,] In [5]: shape(v1) Out[5]: (3,) In [6]: v2=arange(2,7,2) In [7]: v2 Out[7]: [2,4,6,] In [8]: shape(v2) Out[8]: (3,) In [10]: hstack([v1,v2]) Out[10]: [1,3,5,2,4,6,] There has to be a better solution, but I have a function I wrote called colwise that basically calls atleast_2d first and then check the number of rows and columns and transposes if necessary to that the number of rows is greater than the number of columns. 
This isn't real robust but it works for most situations for me: In [13]: from rwkmisc import colwise In [14]: hstack([colwise(v1),colwise(v2)]) Out[14]: [[1,2,] [3,4,] [5,6,]] Here is the function if it would help: def colwise(matin,makecopy=1): if makecopy: tempmat=copy.deepcopy(matin) else: tempmat=matin matout=scipy.atleast_2d(tempmat) myshape=scipy.shape(matout) if myshape[0] wrote: > On 2/8/06, Robert Kern wrote: > > x = [] > > y = [] > > > > while have_more_vectors(): > > x.append(get_column_vector_from_somewhere()) > > y.append(get_row_vector_from_somewhere()) > > > > x = hstack(x) > > y = vstack(y) > > The problem (as far as I know) is that all vectors (1-D arrays) are > treated as row vectors in Numpy. So the example for y will work, but > I'm not sure it will work for x. How do you construct a 1-D column > vector which can then be appended to a list and finally hstacked? > > R. > > -- > Rudolph van der Merwe > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From ryanlists at gmail.com Thu Feb 9 07:30:02 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 9 Feb 2006 07:30:02 -0500 Subject: [SciPy-user] installation problem In-Reply-To: <43E8FF59.3020303@ieee.org> References: <43E8FF59.3020303@ieee.org> Message-ID: Thanks Travis, that seems to have done it. On 2/7/06, Travis Oliphant wrote: > Ryan Krauss wrote: > > >I am trying to install scipy/numpy from SVN on a computer on campus. > >I have what I thought was an identical Ubuntu installation to the one > >I have at home where everything is working beautifully. I am getting > >the following while trying to install numpy: > > > >I am getting a message about lapack not being in atlas, which I don't > >understand since I have the ubuntu packages for both installed (which > >I think is just like on my home computer). > > > >Thanks for your help, > > > > > It looks like you need to remove the build directory and try again. In > the numpy directory do rm -fr build/ > > PyArray_CheckStrides recently changed its interface and the setup.py > script did not re-generate the header (though it probably should do that > automatically). > > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From d.howey at imperial.ac.uk Thu Feb 9 08:31:08 2006 From: d.howey at imperial.ac.uk (Howey, David A) Date: Thu, 9 Feb 2006 13:31:08 -0000 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. Message-ID: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> I like Lyx, but one of my colleagues said to me 'why don't you just use latex directly? It's not that difficult'... So I tried latex (with winedt) and it runs just fine. I still haven't quite got my head around how Lyx and latex work together. For example, if I use lyx can I hack the latex code easily? Dave -----Original Message----- From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant Sent: 02 September 2005 22:32 To: SciPy Users List; SciPy Developers List Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. There is a new Lyx Version that works on Windows quite well -- windows is now officially supported. 
So, those of you who have been hesitant to try LyX because it supposedly doesn't work on Windows, have no more excuse ;-) An executable installer is available at http://www.lyx.org The setup process guides you through the other programs that are needed (ImageMagick, Ghostscript, a minimal shell like minsys, Python, and optionally Perl) to run lyx well. -Travis _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From aisaac at american.edu Thu Feb 9 09:25:54 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 9 Feb 2006 09:25:54 -0500 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> Message-ID: On Thu, 9 Feb 2006, David A Howey apparently wrote: > if I use lyx can I hack the latex code easily? LyX can export and import LaTeX. http://www.stat.rice.edu/~helpdesk/howto/lyxguide.html The native format is a bit different. fwiw, Alan Isaac From martin.wiechert at gmx.de Thu Feb 9 09:52:11 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Thu, 9 Feb 2006 15:52:11 +0100 Subject: [SciPy-user] segfault when calling PyArray_DescrFromType In-Reply-To: <200602091141.51520.martin.wiechert@gmx.de> References: <200602091141.51520.martin.wiechert@gmx.de> Message-ID: <200602091552.11896.martin.wiechert@gmx.de> Found it (in the "old" docs). Must #define PY_ARRAY_UNIQUE_SYMBOL and call import_array (). Sorry to bother. Martin. On Thursday 09 February 2006 11:41, Martin Wiechert wrote: > Hi list, > > I'm trying to build an C extension, which uses arrays. It builds, and I can > import it from python, but the very first call to a numpy function > > ea = (PyObject *) PyArray_DescrFromType (PyArray_INT); > > gives me a segfault. > > I have absolutely no clue, but > > nm -l mymodule.so | grep rray > > gives > > 000026a0 b > PyArray_API > /usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api. >h:316 > > and this line reads > > static void **PyArray_API=NULL; > > which looks suspicious to me. Something wrong with my setup.py? > > Any suggestions? > > Regards, Martin. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From nikolai.hlubek at mailbox.tu-dresden.de Thu Feb 9 10:29:12 2006 From: nikolai.hlubek at mailbox.tu-dresden.de (Nikolai Hlubek) Date: Thu, 09 Feb 2006 16:29:12 +0100 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> Message-ID: <43EB5FC8.7010705@mailbox.tu-dresden.de> Howey, David A wrote: > I like Lyx, but one of my colleagues said to me 'why don't you just use > latex directly? It's not that difficult'... So I tried latex (with > winedt) and it runs just fine. I still haven't quite got my head around > how Lyx and latex work together. For example, if I use lyx can I hack > the latex code easily? > Dave Hi there For me plain latex files are like perl -> Write only. Whereas lyx takes the concept that you only should worry about the content not the design one step further since you don't have to worry about latex syntax as well. Besides you can enter tex code anywhere you like in your lyx documents. 
Cheers, Nikolai (*lyx zealot*) -- "1984" is not a howto! From oliphant.travis at ieee.org Thu Feb 9 10:44:34 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 09 Feb 2006 08:44:34 -0700 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> Message-ID: <43EB6362.8070300@ieee.org> Howey, David A wrote: >I like Lyx, but one of my colleagues said to me 'why don't you just use >latex directly? It's not that difficult'... > That's similar to saying to a Python user: "Why don't you just use C directly, it's not that difficult" :-) I used to use LaTeX exclusively, but got tired of the compilation step. I would invariably forget braces or misspell a keyword and then have to debug my document in order to get it to "compile". LyX lets you focus on the content. For example, I love how I can write math using LaTeX symbols and have the math show up on screen as I type. I do most of my derivations now using LyX. Yes, I know there are ways to do automatic compilation with LaTeX. But, LyX just makes it that much easier for me. >I still haven't quite got my head around >how Lyx and latex work together. For example, if I use lyx can I hack >the latex code easily? > > LyX has its own file format for saved files. When you want to "print" the document, Lyx creates a latex file and then processes it using latex. Every LyX construct has some underlying (LaTeX + extra packages) implementation. You can define your own LyX constructs as well to extend the system with additional LaTeX features. Or, most people just insert raw LaTeX code into LyX. This code goes through unchanged into the final LaTeX file. LyX is definitely my favorite way to write a document. -Travis From oliphant.travis at ieee.org Thu Feb 9 10:48:06 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 09 Feb 2006 08:48:06 -0700 Subject: [SciPy-user] [Numpy-discussion] Re: segfault when calling PyArray_DescrFromType In-Reply-To: <200602091552.11896.martin.wiechert@gmx.de> References: <200602091141.51520.martin.wiechert@gmx.de> <200602091552.11896.martin.wiechert@gmx.de> Message-ID: <43EB6436.1050307@ieee.org> Martin Wiechert wrote: >Found it (in the "old" docs). >Must #define PY_ARRAY_UNIQUE_SYMBOL and call import_array (). > > To be clear, you must call import_array() in the modules init function. This is the only requirement. You only have to define PY_ARRAY_UNIQUE_SYMBOL if your extension module uses more than one file. In the files without the module initialization code you also have to define NO_IMPORT_ARRAY. -Travis From robert.kern at gmail.com Thu Feb 9 11:06:54 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 09 Feb 2006 10:06:54 -0600 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <97670e910602090243sc5b9d57x3b44d2f72d9f29a6@mail.gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43EA29C1.90807@gmail.com> <97670e910602090243sc5b9d57x3b44d2f72d9f29a6@mail.gmail.com> Message-ID: <43EB689E.9060303@gmail.com> Rudolph van der Merwe wrote: > On 2/8/06, Robert Kern wrote: > >>x = [] >>y = [] >> >>while have_more_vectors(): >> x.append(get_column_vector_from_somewhere()) >> y.append(get_row_vector_from_somewhere()) >> >>x = hstack(x) >>y = vstack(y) > > The problem (as far as I know) is that all vectors (1-D arrays) are > treated as row vectors in Numpy. 
So the example for y will work, but > I'm not sure it will work for x. How do you construct a 1-D column > vector which can then be appended to a list and finally hstacked? Okay, I presumed get_column_vector_from_somewhere() actually gave you a column vector. If it's actually giving you shape-(n,) arrays instead of shape-(n,1), use x = column_stack(x) instead. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From matthew.brett at gmail.com Thu Feb 9 11:29:21 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 9 Feb 2006 16:29:21 +0000 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43EB6362.8070300@ieee.org> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> Message-ID: <1e2af89e0602090829s1a2de57oa95e5bb89d2da8ad@mail.gmail.com> Hi, > >I like Lyx, but one of my colleagues said to me 'why don't you just use > >latex directly? It's not that difficult'... For the die-hard latex-writer, you can include whole latex files in a LyX document. The latex writer then works on the latex part while the LyXers work on the LyX part. If you want an example I can post one. Best, Matthew From Fernando.Perez at colorado.edu Thu Feb 9 13:54:09 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 09 Feb 2006 11:54:09 -0700 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43EB6362.8070300@ieee.org> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> Message-ID: <43EB8FD1.2000005@colorado.edu> Travis Oliphant wrote: > LyX is definitely my favorite way to write a document. I second that, very much for the same reasons. I haven't written a whole document in raw latex since about 1998, back in the lyx 0.10 days. Even then it was (for me) superior to the most super-customized XEmacs+auctex+the_kitchen_sink, and it's only gotten better. Note that in Lyx, at any point you simply type Ctrl-L (or click a button), and you can start typing raw latex to your heart's content. So if you know how to do something in latex and can't find the equivalent lyx way, you'll never have to stop. This is the killer feature for me: lyx lets me write without worrying about latex 99% of the time, but whenever I need some low-level latex feature, it's just one keystroke away. Try it, see if you like it, and make your own decision. I've also met people who still prefer latex after trying lyx, so it's very much a personal thing. Cheers, f From skip at pobox.com Thu Feb 9 14:20:13 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 9 Feb 2006 13:20:13 -0600 Subject: [SciPy-user] Can't even begin to build/install scipy 0.4.4 Message-ID: <17387.38381.677962.529017@montanaro.dyndns.org> We'd been running an old version of scipy (0.3) in our Solaris 8/9/10 environment. In preparation for migrating to a new GCC (3.4 from 3.3.2), Python (2.4.2 from 2.3.4) and Gtk (2.6 from 2.4), we have been busy rebuilding all our libraries and applications. One application (the one I happen to work on) uses a tiny bit of scipy. The guy doing the building tried to build scipy 0.3.2. That failed. He got to a certain point but found that scipy's distutils apparently fails to honor the --rpath flag. 
I told him I'd take a look at it and decided to try scipy 0.4.4 instead (no sense fiddling with an old version that won't build - might as well see if the new version is any better). I downloaded numpy 0.9.4, but can't even get help. For some reason, distutils believes it needs to compile something just to give me help: $ python setup.py --help Running from numpy source directory. Assuming default configuration (numpy/distutils/command/{setup_command,setup}.py was not found) Appending numpy.distutils.command configuration to numpy.distutils Assuming default configuration (numpy/distutils/fcompiler/{setup_fcompiler,setup}.py was not found) Appending numpy.distutils.fcompiler configuration to numpy.distutils Appending numpy.distutils configuration to numpy Appending numpy.testing configuration to numpy F2PY Version 2_1975 Appending numpy.f2py configuration to numpy blas_opt_info: blas_mkl_info: NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/opt/lib'] language = c include_dirs = ['/opt/include'] running build_src building extension "atlas_version" sources adding 'build/src/atlas_version_0x4568501b.c' to sources. running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'atlas_version' extension compiling C sources gcc options: '-fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-I/opt/include -Inumpy/core/include -I/opt/app/g++lib6/python-2.4/include/python2.4 -c' /opt/lang/gcc-3.4/bin/gcc -shared build/temp.solaris-2.8-i86pc-2.4/build/src/atlas_version_0x4568501b.o -L/opt/lib -lptf77blas -lptcblas -latlas -o build/temp.solaris-2.8-i86pc-2.4/atlas_version.so Text relocation remains referenced against symbol offset in file 0x7 /opt/lib/libatlas.a(ATL_buildinfo.o) 0xc /opt/lib/libatlas.a(ATL_buildinfo.o) 0x11 ... blah blah blah ... ##### msg: error: Command "/opt/lang/gcc-3.4/bin/gcc -shared ##### build/temp.solaris-2.8-i86pc-2.4/build/src/atlas_version_0x4568501b.o ##### -L/opt/lib -lptf77blas -lptcblas -latlas -o ##### build/temp.solaris-2.8-i86pc-2.4/atlas_version.so" failed with exit ##### status 1 error: Command "/opt/lang/gcc-3.4/bin/gcc -shared build/temp.solaris-2.8-i86pc-2.4/build/src/atlas_version_0x4568501b.o -L/opt/lib -lptf77blas -lptcblas -latlas -o build/temp.solaris-2.8-i86pc-2.4/atlas_version.so" failed with exit status 1 FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/opt/lib'] language = c define_macros = [('NO_ATLAS_INFO', 2)] include_dirs = ['/opt/include'] distutils distribution has been initialized, it may be too late to add an extension _dotblas Traceback (most recent call last): File "setup.py", line 73, in ? 
setup_package() File "setup.py", line 58, in setup_package config.add_subpackage('numpy') File "/home/ink/skipm/src/numpy-0.9.4/numpy/distutils/misc_util.py", line 409, in add_subpackage config = self.get_subpackage(subpackage_name,subpackage_path) File "/home/ink/skipm/src/numpy-0.9.4/numpy/distutils/misc_util.py", line 399, in get_subpackage config = setup_module.configuration(*args) File "/home/ink/skipm/src/numpy-0.9.4/numpy/setup.py", line 10, in configuration config.add_subpackage('core') File "/home/ink/skipm/src/numpy-0.9.4/numpy/distutils/misc_util.py", line 409, in add_subpackage config = self.get_subpackage(subpackage_name,subpackage_path) File "/home/ink/skipm/src/numpy-0.9.4/numpy/distutils/misc_util.py", line 399, in get_subpackage config = setup_module.configuration(*args) File "numpy/core/setup.py", line 201, in configuration config.add_data_dir('tests') File "/home/ink/skipm/src/numpy-0.9.4/numpy/distutils/misc_util.py", line 453, in add_data_dir self.add_data_files((ds,filenames)) File "/home/ink/skipm/src/numpy-0.9.4/numpy/distutils/misc_util.py", line 508, in add_data_files dist.data_files.extend(data_dict.items()) AttributeError: 'NoneType' object has no attribute 'extend' What's up with that? Thx, -- Skip Montanaro http://www.musi-cal.com/ skip at pobox.com From novak at ucolick.org Thu Feb 9 15:11:05 2006 From: novak at ucolick.org (Gregory Novak) Date: Thu, 09 Feb 2006 12:11:05 -0800 Subject: [SciPy-user] What shape should arg 2 of integrate.trapz be? Message-ID: If I do this: (Note: nrange is like arange, except you specify the number of samples instead of dx). In [474]: x = nrange(-1,1,4) In [475]: y = nrange(-1,1,5) In [476]: z = reshape(arange(20.), (4,5)) In [479]: shape(integrate.trapz(z,axis=0)) Out[479]: (5,) the output is as I expect: I integrate away the (length 4) x dimension and am left with an answer for each y value. However, if I try to specify the x axis, I get the following traceback: In [480]: shape(integrate.trapz(z,x,axis=0)) --------------------------------------------------------------------------- /sw/lib/python2.3/site-packages/scipy/integrate/quadrature.py in trapz(y, x, dx, axis) 96 slice1[axis] = slice(1,None) 97 slice2[axis] = slice(None,-1) ---> 98 return add.reduce(d * (y[slice1]+y[slice2])/2.0,axis) 99 100 def cumtrapz(y, x=None, dx=1.0, axis=-1): ValueError: frames are not aligned I can make this work again by explicitly making the x array be 2dim instead of 1dim: In [481]: shape(integrate.trapz(z,x[:,NewAxis],axis=0)) Out[481]: (5,) The question is: Is this the desired behavior? It seems to me that In[479] is well-specified and should give the correct result. Is there some reason that one would want to pass in a multi-d array as the grid for the independent variable? Thanks, Greg From oliphant.travis at ieee.org Thu Feb 9 15:55:49 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 09 Feb 2006 13:55:49 -0700 Subject: [SciPy-user] Can't even begin to build/install scipy 0.4.4 In-Reply-To: <17387.38381.677962.529017@montanaro.dyndns.org> References: <17387.38381.677962.529017@montanaro.dyndns.org> Message-ID: <43EBAC55.5040502@ieee.org> skip at pobox.com wrote: >We'd been running an old version of scipy (0.3) in our Solaris 8/9/10 >environment. In preparation for migrating to a new GCC (3.4 from 3.3.2), >Python (2.4.2 from 2.3.4) and Gtk (2.6 from 2.4), we have been busy >rebuilding all our libraries and applications. One application (the one I >happen to work on) uses a tiny bit of scipy. 
The guy doing the building >tried to build scipy 0.3.2. That failed. He got to a certain point but >found that scipy's distutils apparently fails to honor the --rpath flag. > >I told him I'd take a look at it and decided to try scipy 0.4.4 instead (no >sense fiddling with an old version that won't build - might as well see if >the new version is any better). I downloaded numpy 0.9.4, but can't even >get help. For some reason, distutils believes it needs to compile something >just to give me help: > >What's up with that? > > It's the way the configuration was setup. In the SVN version it detects you just want help and doesn't try to compile the little atlas program. If you can please try the SVN version. svn co http://svn.scipy.org/svn/numpy/trunk numpy will give you the latest... A new release is scheduled in a few days. We've recently worked out other compilation issues. -Travis From skip at pobox.com Thu Feb 9 16:06:00 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 9 Feb 2006 15:06:00 -0600 Subject: [SciPy-user] Can't even begin to build/install scipy 0.4.4 In-Reply-To: <43EBAC55.5040502@ieee.org> References: <17387.38381.677962.529017@montanaro.dyndns.org> <43EBAC55.5040502@ieee.org> Message-ID: <17387.44728.3364.460260@montanaro.dyndns.org> Travis> It's the way the configuration was setup. In the SVN version it Travis> detects you just want help and doesn't try to compile the little Travis> atlas program. If you can please try the SVN version. Travis> svn co http://svn.scipy.org/svn/numpy/trunk numpy Travis> will give you the latest... Thanks, that helps with --help. Now to see if I can build Atlas... Skip From oliphant.travis at ieee.org Thu Feb 9 16:23:36 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 09 Feb 2006 14:23:36 -0700 Subject: [SciPy-user] Can't even begin to build/install scipy 0.4.4 In-Reply-To: <17387.44728.3364.460260@montanaro.dyndns.org> References: <17387.38381.677962.529017@montanaro.dyndns.org> <43EBAC55.5040502@ieee.org> <17387.44728.3364.460260@montanaro.dyndns.org> Message-ID: <43EBB2D8.3070106@ieee.org> skip at pobox.com wrote: > Travis> It's the way the configuration was setup. In the SVN version it > Travis> detects you just want help and doesn't try to compile the little > Travis> atlas program. If you can please try the SVN version. > > Travis> svn co http://svn.scipy.org/svn/numpy/trunk numpy > > Travis> will give you the latest... > >Thanks, that helps with --help. Now to see if I can build Atlas... > > > Just to be clear.... You don't *need* ATLAS to compile NumPy, but you do need it to compile SciPy. I suppose you are doing both... 
-Travis From aisaac at american.edu Thu Feb 9 17:19:03 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 9 Feb 2006 17:19:03 -0500 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <20051106015110.GA16522@ssh.cv.nrao.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com><436BAAE6.30903@ee.byu.edu><723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com><6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu><20051106015110.GA16522@ssh.cv.nrao.edu> Message-ID: On Sat, 5 Nov 2005, Scott Ransom apparently wrote: > Just to remind people, there is a very full featured non-linear > least squares fitter that is written in pure python (but based > on MINPACK) available here: > http://cars9.uchicago.edu/software/python/mpfit.html Here's the actual download link: http://cars.uchicago.edu/software/pub/python_epics.tar As far as I can tell, you must get the whole collection together. Cheers, Alan Isaac From dkuhlman at cutter.rexx.com Thu Feb 9 20:50:01 2006 From: dkuhlman at cutter.rexx.com (Dave Kuhlman) Date: Thu, 9 Feb 2006 17:50:01 -0800 Subject: [SciPy-user] SciPy tutorial update Message-ID: <20060210015001.GA73831@cutter.rexx.com> I've updated my SciPy tutorial/course outline. I'd like to add a link to it on the SciPy Wiki Documentation page under tutorials. But, before I do, I'd like an opinion on whether it fits there and whether it would be helpful. Would someone be willing to take a quick look? You can find it here: http://www.rexx.com/~dkuhlman/scipy_course_01.html Dave -- Dave Kuhlman http://www.rexx.com/~dkuhlman From michael.sorich at gmail.com Fri Feb 10 01:34:53 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Fri, 10 Feb 2006 17:04:53 +1030 Subject: [SciPy-user] 'obj2dtype' not found in 'mio.py' Message-ID: <16761e100602092234m16bd9022nf4ff1776c224b0c8@mail.gmail.com> Hi, In the course of trying to use scipy.io.loadmat to load a v4 matlab file, I have run across this error. Does anyone know where obj2dtype is defined? (Using an old version of scipy version (3.x) this matlab file could be loaded.) >> import scipy.io >>> scipy.io.loadmat('D:\\test.mat') Traceback (most recent call last): File "", line 1, in ? File "D:\Python24\Lib\site-packages\scipy\io\mio.py", line 803, in loadmat varname = fid.fread(header[-1],'char')[:-1] File "D:\Python24\Lib\site-packages\scipy\io\mio.py", line 249, in read howmany,stype = getsize_type(stype) File "D:\Python24\Lib\site-packages\scipy\io\mio.py", line 48, in getsize_type mtype = obj2dtype(mtype) NameError: global name 'obj2dtype' is not defined >>> scipy.__version__ '0.4.4' >>> import numpy >>> numpy.__version__ '0.9.4' >>> numpy-0.9.4.win32-py2.4.exe and scipy-0.4.4.win32-py2.4-P4SSE2.exe were used to install the code on windows XP with Python 2.4.2. On a less important note it seems that scipy.io.loadmat does not accept file names separated with '/' on windows >>> scipy.io.loadmat('D:/test.mat') Traceback (most recent call last): File "", line 1, in ? File "D:\Python24\Lib\site-packages\scipy\io\mio.py", line 740, in loadmat raise IOError, "%s not found on the path." % name IOError: test not found on the path. >>> The line 'if os.sep in name:' is evaluated as false as os.sep is ''\\' on windows and thus the code assumes that the file name is not a full path name. However, the os.path module recognises '/' as a path separator. 
Eg >>> import os.path >>> os.path.isabs('D:/test.mat') True >>> os.path.split('D:/test.mat') ('D:/', 'test.mat') >>> If it considered OK to use '\' as a path separator on windows perhaps this line can be changed to 'if os.path.isabs(name):' or maybe 'if os.path.split('D/test.mat')[0] != "":' Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From rudolphv at gmail.com Fri Feb 10 01:35:54 2006 From: rudolphv at gmail.com (Rudolph van der Merwe) Date: Fri, 10 Feb 2006 08:35:54 +0200 Subject: [SciPy-user] Concatenating 1D arrays In-Reply-To: <43EB689E.9060303@gmail.com> References: <97670e910602080216n2c77b043h7a0b21c0c1d32d38@mail.gmail.com> <43EA29C1.90807@gmail.com> <97670e910602090243sc5b9d57x3b44d2f72d9f29a6@mail.gmail.com> <43EB689E.9060303@gmail.com> Message-ID: <97670e910602092235y3d65e1afta4dd962bc25eb9e9@mail.gmail.com> On 2/9/06, Robert Kern wrote: > Okay, I presumed get_column_vector_from_somewhere() actually gave you a column > vector. If it's actually giving you shape-(n,) arrays instead of shape-(n,1), use > > x = column_stack(x) > > instead. Ah... column_stack() does the trick. I guess I should have dug through the documentation a bit longer. Thanks R. -- Rudolph van der Merwe From oliphant.travis at ieee.org Fri Feb 10 01:44:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 09 Feb 2006 23:44:53 -0700 Subject: [SciPy-user] 'obj2dtype' not found in 'mio.py' In-Reply-To: <16761e100602092234m16bd9022nf4ff1776c224b0c8@mail.gmail.com> References: <16761e100602092234m16bd9022nf4ff1776c224b0c8@mail.gmail.com> Message-ID: <43EC3665.4040403@ieee.org> Michael Sorich wrote: > Hi, > > In the course of trying to use scipy.io.loadmat to load a v4 matlab > file, I have run across this error. Does anyone know where obj2dtype > is defined? (Using an old version of scipy version (3.x) this matlab > file could be loaded.) scipy 0.4.4 is not fixed to work with numpy 0.9.4 (unfortunately). It was built from an earlier version of numpy. This will matter less and less, but there are still some growing pains like this. -Travis From massimo.sandal at unibo.it Fri Feb 10 07:25:37 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Fri, 10 Feb 2006 13:25:37 +0100 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43EB8FD1.2000005@colorado.edu> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> Message-ID: <43EC8641.1000703@unibo.it> Fernando Perez wrote: > Try it, see if you like it, and make your own decision. I've also met people > who still prefer latex after trying lyx, so it's very much a personal thing. Well, I'm one of these people :) I learned latex to write my degree thesis. Latex is very straightforward, it's a simple markup language in fact. You can learn it in a few days just by working on your document. For me plain latex is better since I see the *logic* of the document inside the text file. Plain LaTeX files are all but "write only" IMHO: you *see* what's in the document from the logic point of view. I played with LyX a few times but I haven't seen what are the advantages of it over Latex (I'd like to know about them, if there are), but its hybrid (WYSI-not-exactly-WYG) approach seems to me not hiding complexity, but adding confusion. 
There are also a lot of very good graphic Latex development environments like Kile (under Linux/KDE) or TeXnic Center (under Windows) that help much in user-friendliness and avoiding syntaxis typos/errors. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From matthew.brett at gmail.com Fri Feb 10 07:41:47 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 10 Feb 2006 12:41:47 +0000 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43EC8641.1000703@unibo.it> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> Message-ID: <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> Hi, > I played with LyX a few times but I haven't seen what are the advantages > of it over Latex (I'd like to know about them, if there are), but its > hybrid (WYSI-not-exactly-WYG) approach seems to me not hiding > complexity, but adding confusion. Well, just to add a vote to LyX; I was also unconvinced before Fernando persuaded me to use LyX, but am absolutely persuaded now I have used it for a few months. I found that it was much _easier_ to see the logical structure in a Lyx document, because you have all the section subsection etc machinery visible from the page layout, without having to constantly do an implicit parse of stuff like: \subsection{Specific Aims} \item{{\bf Aim 1:} It's simple enough to do in your head, but just soaks up brain processing power that I would prefer to use for the writing. Most of us using LyX used to use LaTeX - so the acid test is - try it for a while... Come on in, the waters warm. Matthew From massimo.sandal at unibo.it Fri Feb 10 09:32:36 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Fri, 10 Feb 2006 15:32:36 +0100 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> Message-ID: <43ECA404.6040903@unibo.it> > you have all the > section subsection etc machinery visible from the page layout, without > having to constantly do an implicit parse of stuff like: > > \subsection{Specific Aims} > \item{{\bf Aim 1:} > > It's simple enough to do in your head, but just soaks up brain > processing power that I would prefer to use for the writing. What parsing? I just read "subsection" and that's exactly what I want to know: that it is a subsection. It's a logical chunk of my document. To see the logical structure of my document exposed is actually more straightforward for me. Probably our brains are just hardwired differently... :) > Most of us using LyX used to use LaTeX - so the acid test is - try it > for a while... Come on in, the waters warm. I promise I'll give it a try, again! m. 
-- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From Fernando.Perez at colorado.edu Fri Feb 10 10:32:20 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 10 Feb 2006 08:32:20 -0700 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43ECA404.6040903@unibo.it> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> <43ECA404.6040903@unibo.it> Message-ID: <43ECB204.5030700@colorado.edu> massimo sandal wrote: >> you have all the >>section subsection etc machinery visible from the page layout, without >>having to constantly do an implicit parse of stuff like: >> >>\subsection{Specific Aims} >>\item{{\bf Aim 1:} >> >>It's simple enough to do in your head, but just soaks up brain >>processing power that I would prefer to use for the writing. > > > What parsing? I just read "subsection" and that's exactly what I want to > know: that it is a subsection. It's a logical chunk of my document. To > see the logical structure of my document exposed is actually more > straightforward for me. > > Probably our brains are just hardwired differently... :) Probably :) For me, it came down to very much what Matthew says (and I'm glad I won him over:). It was the mental overhead of running a latex compiler in my brain in parallel all the time. I found that I could concentrate more on writing and less on dealing with formatting and latex details thanks to lyx. The joys of reading (and getting right in the first place): \begin{eqnarray} g_{n} & = & \mathbf{T}_{n}^{b_{n}}f_{n}+\uparrow\left[\left(\mathbf{T}_{n-1}^{b_{ n-1}}-\mathbf{T}_{n-1}^{\left\lfloor b_{n}/2\right\rfloor }\right)f_{n-1}+\uparrow\left[\left(\mathbf{T}_{n-2}^{b_{n-2}}- \mathbf{T}_{n-2}^{\left\lfloor b_{n-1}/2\right\rfloor }\right)f_{n-2}+\ldots\right.\right.\label{nsf.eq:apply_algo}\\ & & \left.\left.\ldots+\left[\uparrow\left[\left(\mathbf{T}_{0}-\mathbf{T}_{0}^ {\left\lfloor b_{1}/2\right\rfloor }\right)f_{0}\right]\right]\ldots\right]\right ].\nonumber \end{eqnarray} are replaced with a perfectly clear equation on screen, where I don't have to worry about any latex errors and which is visually clear enough that I know _exactly_, immediately, whether I typed what I meant or not. Bonus points for not having to deal manually with the graphicx package, for lyx converting all graphics formats into whatever is actually needed by latex for me, managing bibtex (esp. in conjunction with the joy of pybliographic), etc... I am very much a command-line, life-is-not-worth-living-outside-of-an-Xemacs-buffer kind of guy, but lyx is the one tool where I'm willing to pay the price of a more primitive editor than Xemacs (though I curse every day the lack of dabbrev) for the payoff it gives me. But again, it may not be your cup of tea. I'm just trying to explain why _I_ (and perhaps Travis, Matthew and others with similar inclinations) have found it to be a productive tool for our latex-related needs. 
Best regards, f From robert.kern at gmail.com Fri Feb 10 11:04:36 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 10 Feb 2006 10:04:36 -0600 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43ECB204.5030700@colorado.edu> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> <43ECA404.6040903@unibo.it> <43ECB204.5030700@colorado.edu> Message-ID: <43ECB994.6080907@gmail.com> The only thing I have to add to this thread is that the Scipy project will accept documentation in any reasonable format, and even some unreasonable ones. We may reformat it, but we will accept it. And I do believe that obviates any remaining topicality of the LyX/LaTeX comparison. :-) -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From jdc at uwo.ca Fri Feb 10 11:56:06 2006 From: jdc at uwo.ca (Dan Christensen) Date: Fri, 10 Feb 2006 11:56:06 -0500 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43ECB204.5030700@colorado.edu> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> <43ECA404.6040903@unibo.it> <43ECB204.5030700@colorado.edu> Message-ID: <87acczw3tl.fsf@uwo.ca> Fernando Perez writes: > The joys of reading (and getting right in the first place): > > \begin{eqnarray} > g_{n} & = & \mathbf{T}_{n}^{b_{n}}f_{n}+\uparrow\left[\left(\mathbf{T}_{n-1}^{b_{ > n-1}}-\mathbf{T}_{n-1}^{\left\lfloor b_{n}/2\right\rfloor > }\right)f_{n-1}+\uparrow\left[\left(\mathbf{T}_{n-2}^{b_{n-2}}- > \mathbf{T}_{n-2}^{\left\lfloor b_{n-1}/2\right\rfloor > }\right)f_{n-2}+\ldots\right.\right.\label{nsf.eq:apply_algo}\\ > & & \left.\left.\ldots+\left[\uparrow\left[\left(\mathbf{T}_{0}-\mathbf{T}_{0}^ > {\left\lfloor b_{1}/2\right\rfloor }\right)f_{0}\right]\right]\ldots\right]\right > ].\nonumber > > \end{eqnarray} > > are replaced with a perfectly clear equation on screen, where I don't have to > worry about any latex errors and which is visually clear enough that I know > _exactly_, immediately, whether I typed what I meant or not. It's exactly the awful latex that lyx produces that makes me prefer latex. The logical structure of that formula is completely hidden, and it was probably hard to be sure that you got all of the superscripts and subscripts correct. I much prefer typing something like: \[ \begin{split} g_n =\ &\T{n}{b_n} f_n + \uparrow \lb \TT{n-1}{n} + \uparrow \lb \TT{n-2}{n-1} + \cdots \\ &\cdots + \uparrow \lb \TTT{0}{1}{} \rb \cdots \rb \rb \end{split} \] Sure, this requires some ugly macros at the top of the file (which I've included below); but most likely, many of these subexpressions occur throughout your paper, so it'll save you typing and reduce the chance of typos if you get them right in one spot. Also makes it easy to adjust things, e.g. if you decide to change the font of the T. I'd say my way is more pythonic, to get this a bit more on topic for the list... :-) By the way, my version gets rid of an unnecessary pair of square brackets, uses correctly centered dots, has better spacing around the equals sign, and removes the \label (since the equation is unnumbered). 
:-) You probably already know that emacs can display the rendered math inline, so in some ways you get the best of both worlds... Now if lyx was smarter about producing human readable latex, then I might be convinced to switch... Dan The ugly macros: \newcommand{\flr}[1]{\left\lfloor #1 \right\rfloor} \newcommand{\T}[2]{\mathbf{T}_{#1}^{#2}} \newcommand{\TTT}[3] {\left(\T{#1}{#3} - \T{#1}{\flr{b_{#2}/2}}\right) f_{#1}} \newcommand{\TT}[2]{\TTT{#1}{#2}{b_{#1}}} \newcommand{\lb}{\Bigl[} \newcommand{\rb}{\Bigr]} From pajer at iname.com Fri Feb 10 11:56:05 2006 From: pajer at iname.com (Gary) Date: Fri, 10 Feb 2006 11:56:05 -0500 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43ECB994.6080907@gmail.com> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> <43ECA404.6040903@unibo.it> <43ECB204.5030700@colorado.edu> <43ECB994.6080907@gmail.com> Message-ID: <43ECC5A5.7010904@iname.com> Robert Kern wrote: >The only thing I have to add to this thread is that the Scipy project will >accept documentation in any reasonable format, and even some unreasonable ones. >We may reformat it, but we will accept it. > >And I do believe that obviates any remaining topicality of the LyX/LaTeX >comparison. :-) > > > Before the door slams shut, can I ask Travis if he wrote the scipy manual in LyX ? (I guess I *can* ask :) ) I'll shut up now. Apologies, -gary From jh at oobleck.astro.cornell.edu Fri Feb 10 11:59:30 2006 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Fri, 10 Feb 2006 11:59:30 -0500 Subject: [SciPy-user] SciPy tutorial update Message-ID: <200602101659.k1AGxUp5026736@oobleck.astro.cornell.edu> Dave, great work. Boy, I hope that Section 8 is machine-generated! I just posted a long message to numpy-discussion about workflow on the web site. Rather than cross-posting, I'll just summarize, and refer folks to the archives for the full argument. The basic idea is, initially post a project in Developer_Zone. When it is at the 1.0 stage, link it under the appropriate main-site page. When it's mature and/or you are no longer looking for help, delete it from Developer_Zone. The decision of when to take each step is yours/the community's. I'd hope big projects would post a message to scipy-dev (and possibly the other lists as well) soliciting reviews before linking to the main site, and after doing major revisions. The idea is to keep the main site clean enough of "under construction" notices that it doesn't scare away non-expert users, and to attract contributors to help out on projects that are in their growth stages. So, post it in DevZone right away. Since we don't have much current documentation, I think you should link it into the Documentation page as well as soon as someone can give it a positive review. My main concern would be to have someone on a different box test all the examples, from the installations up. --jh-- From icy.flame.gm at gmail.com Fri Feb 10 12:07:33 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Fri, 10 Feb 2006 17:07:33 +0000 Subject: [SciPy-user] savespace equivalent in numpy? Message-ID: I am in the process of moving my Numeric based code to the new numpy, i came across several places where i deliberately tried to restrict the auto up casting function. 
While it may not be the end of the world to live without the "savespace" option, i am wondering if there is an equivalent option with numpy, so i can have a smoother transition to numpy. -- iCy-fLaME The body maybe wounded, but it is the mind that hurts. From robert.kern at gmail.com Fri Feb 10 12:20:49 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 10 Feb 2006 11:20:49 -0600 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43ECC5A5.7010904@iname.com> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> <43ECA404.6040903@unibo.it> <43ECB204.5030700@colorado.edu> <43ECB994.6080907@gmail.com> <43ECC5A5.7010904@iname.com> Message-ID: <43ECCB71.4030605@gmail.com> Gary wrote: > Before the door slams shut, can I ask Travis if he wrote the scipy > manual in LyX ? (I guess I *can* ask :) ) Yes, he did. You can even purchase the LyX layout file he's using for $5. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Paul.Ray at nrl.navy.mil Fri Feb 10 13:08:18 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Fri, 10 Feb 2006 13:08:18 -0500 Subject: [SciPy-user] 'obj2dtype' not found in 'mio.py' In-Reply-To: <43EC3665.4040403@ieee.org> References: <16761e100602092234m16bd9022nf4ff1776c224b0c8@mail.gmail.com> <43EC3665.4040403@ieee.org> Message-ID: <1FBAFC6D-7A19-4720-A7C7-78847F94D3F5@nrl.navy.mil> On Feb 10, 2006, at 1:44 AM, Travis Oliphant wrote: > scipy 0.4.4 is not fixed to work with numpy 0.9.4 > (unfortunately). It > was built from an earlier version of numpy. This will matter less > and > less, but there are still some growing pains like this. This is quite a bad situation now that the new scipy.org is public. A new user that downloads the latest release of NumPy and SciPy won't have compatible versions. Very confusing... Cheers, -- Paul -- Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From perry at stsci.edu Fri Feb 10 15:27:03 2006 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 10 Feb 2006 15:27:03 -0500 Subject: [SciPy-user] savespace equivalent in numpy? In-Reply-To: References: Message-ID: On Feb 10, 2006, at 12:07 PM, iCy-fLaME wrote: > I am in the process of moving my Numeric based code to the new numpy, > i came across several places where i deliberately tried to restrict > the auto up casting function. > > While it may not be the end of the world to live without the > "savespace" option, i am wondering if there is an equivalent option > with numpy, so i can have a smoother transition to numpy. I'm a little puzzled as to why you still feel it is necessary to have this feature. Savespace was mainly intended to prevent scalars from upcasting the type of the array. But they don't anymore. Given that, you can simply downcast any arrays to the type you need explicitly in expressions to avoid any unnecessary upcasts. Perry Greenfield From oliphant at ee.byu.edu Fri Feb 10 17:05:45 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 10 Feb 2006 15:05:45 -0700 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. 
In-Reply-To: <43ECC5A5.7010904@iname.com> References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> <43ECA404.6040903@unibo.it> <43ECB204.5030700@colorado.edu> <43ECB994.6080907@gmail.com> <43ECC5A5.7010904@iname.com> Message-ID: <43ED0E39.4010904@ee.byu.edu> Gary wrote: >Before the door slams shut, can I ask Travis if he wrote the scipy >manual in LyX ? (I guess I *can* ask :) ) > > > Yes... -Travis From oliphant at ee.byu.edu Fri Feb 10 17:10:00 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 10 Feb 2006 15:10:00 -0700 Subject: [SciPy-user] savespace equivalent in numpy? In-Reply-To: References: Message-ID: <43ED0F38.60307@ee.byu.edu> iCy-fLaME wrote: >I am in the process of moving my Numeric based code to the new numpy, >i came across several places where i deliberately tried to restrict >the auto up casting function. > > >While it may not be the end of the world to live without the >"savespace" option, i am wondering if there is an equivalent option >with numpy, so i can have a smoother transition to numpy. > > > Not entirely. If you used the savespace functionality you will have a little more work to do to convert. There isn't as much need for it, though because scalar multiplies never upcast an array. Show the code you are trying to convert for specific help. -Travis From cookedm at physics.mcmaster.ca Fri Feb 10 18:59:08 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 10 Feb 2006 18:59:08 -0500 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. In-Reply-To: <43ECA404.6040903@unibo.it> (massimo sandal's message of "Fri, 10 Feb 2006 15:32:36 +0100") References: <056D32E9B2D93B49B01256A88B3EB218011608F5@icex2.ic.ac.uk> <43EB6362.8070300@ieee.org> <43EB8FD1.2000005@colorado.edu> <43EC8641.1000703@unibo.it> <1e2af89e0602100441jcf45073sbf2f31c98aa88c8e@mail.gmail.com> <43ECA404.6040903@unibo.it> Message-ID: massimo sandal writes: >> you have all the >> section subsection etc machinery visible from the page layout, without >> having to constantly do an implicit parse of stuff like: >> \subsection{Specific Aims} >> \item{{\bf Aim 1:} >> It's simple enough to do in your head, but just soaks up brain >> processing power that I would prefer to use for the writing. > > What parsing? I just read "subsection" and that's exactly what I want > to know: that it is a subsection. It's a logical chunk of my document. > To see the logical structure of my document exposed is actually more > straightforward for me. It's even easier with a current version of Emacs and syntax highlighting: sections, etc. will be in a bigger, sans serif font, of a different colour. AUCTeX is great, along with bibcite and reftex modes. Also, subscripts and superscripts are lowered and raised, and if you use the preview-latex mode, you can get your equations typeset and put in the buffer so you're not looking at huge undecipherable gibberish like Fernando mentions (although a lot of that can be removed by using appropiate use of identing and spaces). I haven't tried it yet, but the flymake mode for Emacs should do on-the-fly syntax checking of LaTeX. I also define a bunch of commands and environments as necessary to keep things clean (typing $\dint{x}{x^2}$ instead of $\int x^2\,\mathrm{d}x$, for instance). 
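A minimal sketch of what such a convenience macro might look like — the actual definition isn't shown above, so the argument order here (integration variable first, then integrand) is only a guess:

    % hypothetical definition of \dint: #1 = integration variable, #2 = integrand
    \newcommand{\dint}[2]{\int #2\,\mathrm{d}#1}

With this, $\dint{x}{x^2}$ expands to $\int x^2\,\mathrm{d}x$, so the source stays readable while the typeset output keeps the upright d.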
latex-not-LyX-ly yours, -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From dkuhlman at cutter.rexx.com Fri Feb 10 21:00:17 2006 From: dkuhlman at cutter.rexx.com (Dave Kuhlman) Date: Fri, 10 Feb 2006 18:00:17 -0800 Subject: [SciPy-user] SciPy tutorial update In-Reply-To: <200602101659.k1AGxUp5026736@oobleck.astro.cornell.edu> References: <200602101659.k1AGxUp5026736@oobleck.astro.cornell.edu> Message-ID: <20060211020017.GA89514@cutter.rexx.com> On Fri, Feb 10, 2006 at 11:59:30AM -0500, Joe Harrington wrote: > Dave, great work. Boy, I hope that Section 8 is machine-generated! > Thanks. [snip] > > So, post it in DevZone right away. Since we don't have much current > documentation, I think you should link it into the Documentation page > as well as soon as someone can give it a positive review. My main > concern would be to have someone on a different box test all the > examples, from the installations up. > Done. There is now a link in the Developer_Zone page, under "Projects". Dave -- Dave Kuhlman http://www.rexx.com/~dkuhlman From marcus_mklo at hotmail.com Sat Feb 11 08:18:45 2006 From: marcus_mklo at hotmail.com (Lo Marcus) Date: Sat, 11 Feb 2006 21:18:45 +0800 Subject: [SciPy-user] Conversion of scipy array to numarray array Message-ID: I am trying to convert a scipy array to numarray, without success. What I do is the following: data = scipy.asarray(pyfits.getdata('j6mi10021_crj.fits', 'sci')) nadata = numarray.asarray(data) >>> import scipy >>> import numarray >>> import pyfits >>> data = scipy.asarray(pyfits.getdata('j6mi10021_crj.fits', 'sci')) >>> nadata = numarray.asarray(data) >>> print data [[ 22.37952614 10.05145359 21.6661377 ..., 11.43289948 5.52833033 17.19585037] [ 24.8036232 -6.39581585 12.05591202 ..., -10.39195728 3.03895354 8.70327473] [ 5.82899857 14.48854351 11.53882504 ..., 12.11074352 20.28095627 20.44524193] ..., [ 19.73381042 28.00988197 25.39971733 ..., -1.82318735 2.51478434 10.70707226] [ 24.23790932 -2.87376475 18.93963051 ..., 7.07922268 3.27009034 10.84160519] [ 1.5277524 19.00962639 26.82912445 ..., 3.01001763 12.66541958 5.03601742]] >>> print nadata [[ 2.20320337e+03 -2.62657490e+01 3.32307458e+00 ..., 2.63358146e-14 9.39821272e-26 6.01923401e-23] [ -2.55308349e+11 -6.42477458e-35 5.38966492e-36 ..., 2.49919068e+32 1.51550048e-05 -2.54168066e-21] [ 1.50687839e-14 5.28608336e-27 2.39567241e-34 ..., -3.26330855e-22 2.26241264e+23 -8.08608425e+16] ..., [ -1.95813585e+15 3.63466777e-02 -3.78610938e-20 ..., 2.07603975e-07 1.84727460e-03 7.39564714e-13] [ 1.13161571e-01 -4.70435547e+02 1.19427208e+18 ..., -9.09748823e+37 4.47014387e-14 1.47328838e-05] [ 2.09204790e+22 -8.79733216e-06 2.50890015e-31 ..., 1.11300791e-18 -1.62988577e-29 2.06619783e-30]] data is an 1024 x 1024 float32 array. It doesn't have any problem in small array, as I have tested it by hand. And if I deliberately use float64 array, there is no problem. I have installed the latest numarray(1.5.1) and scipy(0.4.5.1598)/numpy(0.9.5.2055) package. Marcus Lo From jens_brandenburg at gmx.net Sat Feb 11 09:05:35 2006 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Sat, 11 Feb 2006 15:05:35 +0100 Subject: [SciPy-user] from scipy.sandbox import xplt Message-ID: <43EDEF2F.4040507@gmx.net> Hi, after successfull numpy+scipy installation (svn co @ 11.02.2006) I was not able to import xplt... 
In [1]: from scipy.sandbox import xplt --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/jens-home/ ImportError: cannot import name xplt What could be the problem??? cheers Jens Brandenburg From pearu at scipy.org Sat Feb 11 08:45:22 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 11 Feb 2006 07:45:22 -0600 (CST) Subject: [SciPy-user] from scipy.sandbox import xplt In-Reply-To: <43EDEF2F.4040507@gmx.net> References: <43EDEF2F.4040507@gmx.net> Message-ID: On Sat, 11 Feb 2006, Jens Brandenburg wrote: > Hi, > > after successfull numpy+scipy installation (svn co @ 11.02.2006) I was > not able to import xplt... > > In [1]: from scipy.sandbox import xplt > --------------------------------------------------------------------------- > exceptions.ImportError Traceback (most > recent call last) > > /home/jens-home/ > > ImportError: cannot import name xplt > > What could be the problem??? You must enable building xplt in scipy/Lib/sandbox/setup.py file. Pearu From martin.wiechert at gmx.de Sat Feb 11 13:42:50 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Sat, 11 Feb 2006 19:42:50 +0100 Subject: [SciPy-user] bug with NO_IMPORT_ARRAY / PY_ARRAY_UNIQUE_SYMBOL? was Re: segfault when calling PyArray_DescrFromType In-Reply-To: <43EB6436.1050307@ieee.org> References: <200602091141.51520.martin.wiechert@gmx.de> <200602091552.11896.martin.wiechert@gmx.de> <43EB6436.1050307@ieee.org> Message-ID: <200602111942.50704.martin.wiechert@gmx.de> Hi Travis, thanks for your help! I think there is a small bug with NO_IMPORT_ARRAY / PY_ARRAY_UNIQUE_SYMBOL in numpy-0.9.4. For ease of reference I've pasted part of __multiarray_api.h below. The problem I ran into is, that my "non-importing" source files, the ones defining NO_IMPORT_ARRAY, cannot see PyArray_API, because they obviously cannot know which name I chose in the importing file. E.g. I do #define PY_ARRAY_UNIQUE_SYMBOL my_name in the file which calls import_array (). Then the object generated will not have the symbol PyArray_API, because PyArray_API is replaced with my_name. But the sources with NO_IMPORT_ARRAY look for PyArray_API, because for them it is not replaced. Indeed inserting #define PyArray_API my_name into these files seems to fix the problem for me. Regards, Martin. #if defined(PY_ARRAY_UNIQUE_SYMBOL) #define PyArray_API PY_ARRAY_UNIQUE_SYMBOL #endif #if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY) extern void **PyArray_API; #else #if defined(PY_ARRAY_UNIQUE_SYMBOL) void **PyArray_API; #else static void **PyArray_API=NULL; #endif #endif On Thursday 09 February 2006 16:48, Travis Oliphant wrote: > Martin Wiechert wrote: > >Found it (in the "old" docs). > >Must #define PY_ARRAY_UNIQUE_SYMBOL and call import_array (). > > To be clear, you must call import_array() in the modules init function. > This is the only requirement. > > You only have to define PY_ARRAY_UNIQUE_SYMBOL if your extension module > uses more than one file. In the files without the module initialization > code you also have to define NO_IMPORT_ARRAY. > > -Travis From gerard.vermeulen at grenoble.cnrs.fr Sun Feb 12 08:25:25 2006 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Sun, 12 Feb 2006 14:25:25 +0100 Subject: [SciPy-user] ANN: first release of IVuPy-0.1 Message-ID: <20060212142525.45b6b924.gerard.vermeulen@grenoble.cnrs.fr> I am proud to announce IVuPy-0.1 (I-View-Py). 
IVuPy is a Python extension module developed to write Python programs for 3D visualization of large data sets using Qt and PyQt. Python is extended by IVuPy with more than 600 classes of two of the Coin3D C++ class libraries: Coin and SoQt. Coin is compatible with the Open Inventor API. Open Inventor is an object-oriented 3D toolkit built on OpenGL that provides a 3D scene database, a built-in event model for user interaction, and the ability to print objects and exchange data with other graphics formats. The SoQt library interfaces Coin to Qt. See http://www.coin3d.org for more information on Coin3D. IVuPy requires at least one of the Numerical Python extension modules: NumPy, Numeric, or numarray (IVuPy works with all of them at once). Data transfer between the Numerical Python arrays and the Coin data structures has been implemented by copying. The design of the Open Inventor API favors ease of use over performance. The API is a natural match for Python, and in my opinion it is fun to program with IVuPy. The performance penalty of the design choice is small. The first example at http://ivupy.sourceforge.net/examples.html demonstrates this: NumPy calculates a surface with a million nodes in 1.7 seconds and Coin3D redisplays the surface in 0.3 seconds on my Linux system with a 3.6 GHz Pentium and a nVidea graphics card (NV41.1). The Inventor Mentor ( http://www.google.com/search?q=inventor+mentor ) is essential for learning IVuPy. The IVuPy documentation supplements the Inventor Mentor. IVuPy includes all C++ examples from the Inventor Mentor and their Python translations. There are also more advanced examples to show the integration of IVuPy and PyQt. IVuPy has been used for almost 6 months on Linux and Windows in the development of a preprocessor for a finite element flow solver and has been proven to be very stable. Prerequisites for IVuPy are: - Python-2.4.x or -2.3.x - at least one of NumPy, numarray, or Numeric - Qt-3.3.x, -3.2.x, or -3.1.x - SIP-4.3.x or -4.2.1 - PyQt-3.15.x or -3.14.1 - Coin-2.4.4 or -2.4.3 - SoQt-1.3.0 or -1.2.0 IVuPy is licensed under the terms of the GPL. Contact me, if the GPL is an obstacle for you. http://ivupy.sourceforge.net is the home page of IVuPy. Have fun -- Gerard Vermeulen From matthew.brett at gmail.com Sun Feb 12 10:03:35 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 12 Feb 2006 15:03:35 +0000 Subject: [SciPy-user] Web site down? Message-ID: <1e2af89e0602120703w29013ef0r2b3415da53ab8e07@mail.gmail.com> Hi, Is anyone else getting a error connecting to http://www.scipy.org? 502 Bad Gateway: The proxy server received an invalid response from an upstream server. Matthew From schofield at ftw.at Sun Feb 12 10:22:06 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 12 Feb 2006 16:22:06 +0100 Subject: [SciPy-user] Web site down? In-Reply-To: <1e2af89e0602120703w29013ef0r2b3415da53ab8e07@mail.gmail.com> References: <1e2af89e0602120703w29013ef0r2b3415da53ab8e07@mail.gmail.com> Message-ID: <43EF529E.3020702@ftw.at> Matthew Brett wrote: >Hi, > >Is anyone else getting a error connecting to http://www.scipy.org? > >502 Bad Gateway: The proxy server received an invalid response from an >upstream server. > > Yes, and svn.scipy.org is down too ... -- Ed From pajer at iname.com Sun Feb 12 10:22:21 2006 From: pajer at iname.com (Gary) Date: Sun, 12 Feb 2006 10:22:21 -0500 Subject: [SciPy-user] Web site down? 
In-Reply-To: <1e2af89e0602120703w29013ef0r2b3415da53ab8e07@mail.gmail.com> References: <1e2af89e0602120703w29013ef0r2b3415da53ab8e07@mail.gmail.com> Message-ID: <43EF52AD.6080502@iname.com> Matthew Brett wrote: >Hi, > >Is anyone else getting a error connecting to http://www.scipy.org? > >502 Bad Gateway: The proxy server received an invalid response from an >upstream server. > > Same response here. >Matthew > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From jens_brandenburg at gmx.net Sun Feb 12 12:57:24 2006 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Sun, 12 Feb 2006 18:57:24 +0100 Subject: [SciPy-user] from scipy.sandbox import xplt In-Reply-To: References: <43EDEF2F.4040507@gmx.net> Message-ID: <43EF7704.1010705@gmx.net> Hello again, > You must enable building xplt in scipy/Lib/sandbox/setup.py file. __of cause__ I think now it works ;-) Pearu, I saw your request about a simple surface plot for matplotlib. Is there something going on??? cheers Jens From Fernando.Perez at colorado.edu Sun Feb 12 13:22:23 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 12 Feb 2006 11:22:23 -0700 Subject: [SciPy-user] ANN: first release of IVuPy-0.1 In-Reply-To: <20060212142525.45b6b924.gerard.vermeulen@grenoble.cnrs.fr> References: <20060212142525.45b6b924.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <43EF7CDF.7020904@colorado.edu> Gerard Vermeulen wrote: > I am proud to announce IVuPy-0.1 (I-View-Py). > > IVuPy is a Python extension module developed to write Python programs > for 3D visualization of large data sets using Qt and PyQt. Python is > extended by IVuPy with more than 600 classes of two of the Coin3D C++ > class libraries: Coin and SoQt. It looks very nice. Perhaps you could add a link to the topical software wiki (when the site comes back up :), along with a brief note on how it differs from VTK? I'm actually curious on that last one (so feel free to reply on the list if you prefer to putting it on the wiki), as I'm much more familiar with VTK, so I'd love to hear what the differences are. Regards, f From eric at enthought.com Sun Feb 12 14:08:21 2006 From: eric at enthought.com (eric jones) Date: Sun, 12 Feb 2006 13:08:21 -0600 Subject: [SciPy-user] Web site down? In-Reply-To: <43EF52AD.6080502@iname.com> References: <1e2af89e0602120703w29013ef0r2b3415da53ab8e07@mail.gmail.com> <43EF52AD.6080502@iname.com> Message-ID: <43EF87A5.6000703@enthought.com> I'll get someone to look at this ASAP. eric Gary wrote: >Matthew Brett wrote: > > > >>Hi, >> >>Is anyone else getting a error connecting to http://www.scipy.org? >> >>502 Bad Gateway: The proxy server received an invalid response from an >>upstream server. >> >> >> >> > >Same response here. 
> > > > > >>Matthew >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> >> >> >> > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From gerard.vermeulen at grenoble.cnrs.fr Sun Feb 12 16:39:03 2006 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Sun, 12 Feb 2006 22:39:03 +0100 Subject: [SciPy-user] ANN: first release of IVuPy-0.1 In-Reply-To: <43EF7CDF.7020904@colorado.edu> References: <20060212142525.45b6b924.gerard.vermeulen@grenoble.cnrs.fr> <43EF7CDF.7020904@colorado.edu> Message-ID: <20060212223903.378d2c79.gerard.vermeulen@grenoble.cnrs.fr> On Sun, 12 Feb 2006 11:22:23 -0700 Fernando Perez wrote: > Gerard Vermeulen wrote: > > I am proud to announce IVuPy-0.1 (I-View-Py). > > > > IVuPy is a Python extension module developed to write Python programs > > for 3D visualization of large data sets using Qt and PyQt. Python is > > extended by IVuPy with more than 600 classes of two of the Coin3D C++ > > class libraries: Coin and SoQt. > > It looks very nice. Perhaps you could add a link to the topical software wiki > (when the site comes back up :), along with a brief note on how it differs > from VTK? I'm actually curious on that last one (so feel free to reply on the > list if you prefer to putting it on the wiki), as I'm much more familiar with > VTK, so I'd love to hear what the differences are. > My knowledge of VTK is practically zero, but I will try. The motivation for developing IVuPy originated in the request of a PyQwt-user (the owner of a company specializing in high-end technical software) complaining that VTK was: (1) slow, (2) memory hungry, and (3) hard to program. Besides that, VTK does not integrate very well with PyQt (I have tried to help to fix it, see the QVTKRender*.py files in VTK). You have to do the right thing to get it working. Addressing (1) and (2): The PyQwt-user would love to display data consisting of 1e7 vertices (IVuPy is getting sluggish in displaying 1e6 vertices on my hardware). I have summarized some of my "benchmark results" of VTK, Coin3D and OpenSceneGraph at http://ivupy.sourceforge.net. It could be that VTK was treated unfairly, because I was not yet aware of: http://www.vtk.org/Wiki/VTK_FAQ#How_to_handle_large_data_sets_in_VTK but Coin3D is tuned out of the box for large data sets. I would be happy to take the fastest VTK version of the first example of http://ivupy.sourceforge.net/examples.html into account in the comparison. Addressing (3): The concepts needed to program VTK and Coin3D seem to be quite different. Coin3D is a scene graph library. Nodes in the scene graph can be for instance: (1) simple shapes such as cones, spheres, cylinders (2) different type of surfaces (3) manipulators (for interaction with the user) (4) materials (color, lightning) (5) separators A manipulator node (or material node) affects only the nodes in the scene graph below it. A separator is used to limit the scope of a node above it. http://ivupy.sourceforge.net contains a few links explaining why scene graph libraries optimize OpenGL programs. Anyhow, this programming model suits the PyQwt-user very well and I like it too. Disadvantages of Coin3D: Coin3D is more low-level than VTK. 
For instance, a 3D axis frame is missing from Coin3D (it is on my todo list for IVuPy) and I believe that VTK comes with more algorithms such as polygonizing an implicit 3D surface or contouring levels on a 3D surface. Regards -- Gerard PS: I will try to put something shorter on the Wiki (I am new to Wikis, though). From Fernando.Perez at colorado.edu Sun Feb 12 17:43:21 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 12 Feb 2006 15:43:21 -0700 Subject: [SciPy-user] ANN: first release of IVuPy-0.1 In-Reply-To: <20060212223903.378d2c79.gerard.vermeulen@grenoble.cnrs.fr> References: <20060212142525.45b6b924.gerard.vermeulen@grenoble.cnrs.fr> <43EF7CDF.7020904@colorado.edu> <20060212223903.378d2c79.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <43EFBA09.3080203@colorado.edu> Gerard Vermeulen wrote: [...] > PS: I will try to put something shorter on the Wiki (I am new to Wikis, though). Excellent, many thanks for this. I'm by no means a VTK expert, but I was genuinely curious. Indeed, some of this info would make a good wiki entry for those looking into 3d visualization with python. As time goes by, we'd all like to have it fast, good looking and easy to use :) And since we do have VTK experts here, they may also contribute their point of view (in particular, Prabhu's recent TVTK is a LOT nicer to write code for than the 'raw' VTK python bindings, from all the examples I've seen). Regards, f From eric at enthought.com Sun Feb 12 19:02:11 2006 From: eric at enthought.com (eric jones) Date: Sun, 12 Feb 2006 18:02:11 -0600 Subject: [SciPy-user] Web site down? In-Reply-To: <43EF87A5.6000703@enthought.com> References: <1e2af89e0602120703w29013ef0r2b3415da53ab8e07@mail.gmail.com> <43EF52AD.6080502@iname.com> <43EF87A5.6000703@enthought.com> Message-ID: <43EFCC83.6060704@enthought.com> Word is that Apache crashed sometime around 4:00 am this morning and couldn't restart itself. Mike got it back up and running sometime around 3:00 pm. Thanks for the heads up, and sorry it took so long to get running. We'll try to get better alerts in place so that it isn't down so long next time. thanks, eric eric jones wrote: >I'll get someone to look at this ASAP. > >eric > >Gary wrote: > > > >>Matthew Brett wrote: >> >> >> >> >> >>>Hi, >>> >>>Is anyone else getting a error connecting to http://www.scipy.org? >>> >>>502 Bad Gateway: The proxy server received an invalid response from an >>>upstream server. >>> >>> >>> >>> >>> >>> >>Same response here. >> >> >> >> >> >> >> >>>Matthew >>> >>>_______________________________________________ >>>SciPy-user mailing list >>>SciPy-user at scipy.net >>>http://www.scipy.net/mailman/listinfo/scipy-user >>> >>> >>> >>> >>> >>> >>> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> >> >> > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From skip at pobox.com Sun Feb 12 20:25:30 2006 From: skip at pobox.com (skip at pobox.com) Date: Sun, 12 Feb 2006 19:25:30 -0600 Subject: [SciPy-user] Web site down? 
In-Reply-To: <43EFCC83.6060704@enthought.com> References: <1e2af89e0602120703w29013ef0r2b3415da53ab8e07@mail.gmail.com> <43EF52AD.6080502@iname.com> <43EF87A5.6000703@enthought.com> <43EFCC83.6060704@enthought.com> Message-ID: <17391.57354.550513.606525@montanaro.dyndns.org> eric> We'll try to get better alerts in place so that it isn't down so eric> long next time. Nagios is your friend: http://www.nagios.org/ Skip From stephen.walton at csun.edu Sun Feb 12 23:39:08 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Sun, 12 Feb 2006 20:39:08 -0800 Subject: [SciPy-user] Conversion of scipy array to numarray array In-Reply-To: References: Message-ID: <43F00D6C.7080201@csun.edu> Lo Marcus wrote: > I am trying to convert a scipy array to numarray, without success. > What I do is the following: > > data = scipy.asarray(pyfits.getdata('j6mi10021_crj.fits', 'sci')) > nadata = numarray.asarray(data) I'm not sure where the problem in your original post is, but you do realize that pyfits returns a numarray array by default? From prabhu_r at users.sf.net Sun Feb 12 23:43:00 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Mon, 13 Feb 2006 10:13:00 +0530 Subject: [SciPy-user] ANN: first release of IVuPy-0.1 In-Reply-To: <20060212223903.378d2c79.gerard.vermeulen@grenoble.cnrs.fr> References: <20060212142525.45b6b924.gerard.vermeulen@grenoble.cnrs.fr> <43EF7CDF.7020904@colorado.edu> <20060212223903.378d2c79.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <17392.3668.559660.565891@prpc.aero.iitb.ac.in> >>>>> "Gerard" == Gerard Vermeulen writes: Gerard> On Sun, 12 Feb 2006 11:22:23 -0700 Fernando Perez Gerard> wrote: >> Gerard Vermeulen wrote: > I am proud to announce IVuPy-0.1 >> (I-View-Py). Gerard> My knowledge of VTK is practically zero, but I will try. Gerard> The motivation for developing IVuPy originated in the Gerard> request of a PyQwt-user (the owner of a company Gerard> specializing in high-end technical software) complaining Gerard> that VTK was: (1) slow, (2) memory hungry, and (3) hard to Gerard> program. (1) and (2) depends on what is done and how it is done. (3) is matter of personal opinion. :-) Gerard> Besides that, VTK does not integrate very well with PyQt Gerard> (I have tried to help to fix it, see the QVTKRender*.py Gerard> files in VTK). You have to do the right thing to get it Gerard> working. I thought you sent in some patches to VTK that fixed these issues? Or were there more problems? [...] Gerard> I would be happy to take the fastest VTK version of the Gerard> first example of Gerard> http://ivupy.sourceforge.net/examples.html into account in Gerard> the comparison. The VTK folks would also be interested in real benchmarks. There are a lot of developers working on VTK so if it is possible to improve performance, they will. The VTK sphere benchmark might be of use here: http://www.barre.nom.fr/vtk/bench.html It should not be very hard to create a simple Python script to do this with IVuPy and VTK and compare the results. It would be great if someone did the benchmarking and sent the results to the VTK folks also. I'm not volunteering any more of my time on this, so count me out. [...] Gerard> The concepts needed to program VTK and Coin3D seem to be Gerard> quite different. Gerard> Coin3D is a scene graph library. Nodes in the scene graph Gerard> can be for instance: (1) simple shapes such as cones, [...] Gerard> Coin3D is more low-level than VTK. 
For instance, a 3D Gerard> axis frame is missing from Coin3D (it is on my todo list Gerard> for IVuPy) and I believe that VTK comes with more Gerard> algorithms such as polygonizing an implicit 3D surface or Gerard> contouring levels on a 3D surface. Right, VTK provides a ton of visualization and imaging algorithms for a wide variety of data sets. Coin3D is as you say a lower level toolkit that targets a different problem domain. One more problem with Coin3D is that it is distributed under the GPL, and unless I am mistaken, building commercial apps requires a license from SIM. The same applies to Qt. Also, Coin3D is Qt/Xt only. No wxPython etc. I don't think that should be a big problem though since the number of Qt specific classes are relatively small. However, VTK does try to work with other GUI toolkits and already supports GTK, wxPython, Tkinter and Qt (apparently badly). That said, IVuPy does sound very interesting indeed. Thanks. cheers, prabhu From d.howey at imperial.ac.uk Mon Feb 13 05:31:36 2006 From: d.howey at imperial.ac.uk (Howey, David A) Date: Mon, 13 Feb 2006 10:31:36 -0000 Subject: [SciPy-user] New Lyx Version eas(ier) to install on Windows. Message-ID: <056D32E9B2D93B49B01256A88B3EB2180116090A@icex2.ic.ac.uk> Thanks for all the tips everyone! Sorry to have caused a storm. Let's consider it subject closed now. I'll give both lyx and latex a try and see which one takes my fancy for what I need it for. Cheers Dave From jelle.feringa at ezct.net Mon Feb 13 09:10:41 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Mon, 13 Feb 2006 15:10:41 +0100 Subject: [SciPy-user] Octtree / Quadtree Message-ID: <006701c630a7$470ddb80$0b01a8c0@JELLE> Dear Group, Is anyone aware of an efficient Octree and / or Quadtree implemention based on either Numpy, Numarray or Numeric? Both do exists in some very rudimentary form (octtree->twistedmatrix/quadtree->pygame), but are crude and slow implementations. Cheers, -jelle -------------- next part -------------- An HTML attachment was scrubbed... URL: From Derrick.Snowden at noaa.gov Mon Feb 13 12:35:26 2006 From: Derrick.Snowden at noaa.gov (Derrick Snowden) Date: Mon, 13 Feb 2006 12:35:26 -0500 Subject: [SciPy-user] installation on fedora core 4 x86_64 Message-ID: <1139852126.1523.94.camel@okeanos.aoml.noaa.gov> Hi all, I'm not having much luck installing numpy with atlas although I am able to install numpy alone. Attached is my site.cfg file and the location of my atlas libraries is shown below. > locate atlas | grep lib /usr/lib64/atlas /usr/lib64/atlas/libf77blas.so.3.0 /usr/lib64/atlas/libatlas.so.3 /usr/lib64/atlas/libatlas.so.3.0 /usr/lib64/atlas/libblas.so.3 /usr/lib64/atlas/libblas.so.3.0 /usr/lib64/atlas/libcblas.so.3 /usr/lib64/atlas/libcblas.so.3.0 /usr/lib64/atlas/libf77blas.so.3 /usr/lib64/atlas/liblapack.so.3.0 /usr/lib64/atlas/liblapack.so.3 /usr/lib64/atlas/liblapack_atlas.so.3 /usr/lib64/atlas/liblapack_atlas.so.3.0 I'm following the directions below, except I already have atlas installed. I just can't make the installation scripts see them. http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 Any ideas would be appreciated. 
Thanks, Derrick -- Derrick Snowden PhOD/AOML/NOAA 4301 Rickenbacker Causeway Miami, FL 33149 Office: 305.361.4322 Fax: 305.361.4392 -------------- next part -------------- [DEFAULT] library_dirs = /usr/lib:/usr/local/lib:/opt/lib:/usr/lib64 include_dirs = /usr/include:/usr/local/include:/opt/include src_dirs = /usr/local/src:/opt/src # search static libraries (.a) in preference to shared ones (.so) search_static_first = 0 [fftw] fftw_libs = rfftw3, fftw3 fftw_opt_libs = rfftw3_threads, fftw3_threads # if the above aren't found, look for {s,d}fftw_libs and {s,d}fftw_opt_libs [atlas] library_dirs = /usr/lib64/atlas # for overriding the names of the atlas libraries atlas_libs = lapack, f77blas, cblas, atlas, blas, lapack_atlas From dd55 at cornell.edu Mon Feb 13 12:50:20 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 13 Feb 2006 12:50:20 -0500 Subject: [SciPy-user] installation on fedora core 4 x86_64 In-Reply-To: <1139852126.1523.94.camel@okeanos.aoml.noaa.gov> References: <1139852126.1523.94.camel@okeanos.aoml.noaa.gov> Message-ID: <200602131250.20597.dd55@cornell.edu> Have you created a site.cfg file in your numpy/distutils? This might be all you need: [DEFAULT] library_dirs = /usr/lib:/usr/lib/atlas I see there is no sample site.cfg file in the numpy distutils directory. It does exist in scipy-0.3.2, does anyone know why it has been removed? On Monday 13 February 2006 12:35, Derrick Snowden wrote: > Hi all, > > I'm not having much luck installing numpy with atlas although I am able > to install numpy alone. Attached is my site.cfg file and the location > of my atlas libraries is shown below. > > > locate atlas | grep lib > > /usr/lib64/atlas > /usr/lib64/atlas/libf77blas.so.3.0 > /usr/lib64/atlas/libatlas.so.3 > /usr/lib64/atlas/libatlas.so.3.0 > /usr/lib64/atlas/libblas.so.3 > /usr/lib64/atlas/libblas.so.3.0 > /usr/lib64/atlas/libcblas.so.3 > /usr/lib64/atlas/libcblas.so.3.0 > /usr/lib64/atlas/libf77blas.so.3 > /usr/lib64/atlas/liblapack.so.3.0 > /usr/lib64/atlas/liblapack.so.3 > /usr/lib64/atlas/liblapack_atlas.so.3 > /usr/lib64/atlas/liblapack_atlas.so.3.0 > > > I'm following the directions below, except I already have atlas > installed. I just can't make the installation scripts see them. > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > > Any ideas would be appreciated. > > Thanks, > Derrick -- Darren S. Dale, Ph.D. Cornell High Energy Synchrotron Source Cornell University 200L Wilson Lab Rt. 366 & Pine Tree Road Ithaca, NY 14853 dd55 at cornell.edu office: (607) 255-9894 fax: (607) 255-9001 From agn at noc.soton.ac.uk Mon Feb 13 12:54:52 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Mon, 13 Feb 2006 17:54:52 +0000 Subject: [SciPy-user] installation on fedora core 4 x86_64 In-Reply-To: <1139852126.1523.94.camel@okeanos.aoml.noaa.gov> References: <1139852126.1523.94.camel@okeanos.aoml.noaa.gov> Message-ID: <754EC8FC-B013-4D1E-8D65-21E9FF6D7B56@noc.soton.ac.uk> Derrick, You need to have the site.cfg in the (installed) numpy/distutils directory. Otherwise I'm not sure; possibly you might have to link your libatlas.so.3 etc to libatlas.so -George Nurser. 
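Pulling the two suggestions together, a minimal sketch for the Fedora x86_64 layout shown above (the paths and library names are taken from the original post; whether both steps are needed depends on how the numpy build resolves the libraries, and the symlinks alone are often enough). First give the linker the unversioned names it expects, then point numpy at the directory from numpy/distutils/site.cfg:

    # as root: create unversioned symlinks next to the versioned ATLAS libraries
    cd /usr/lib64/atlas
    for lib in atlas blas cblas f77blas lapack lapack_atlas; do
        ln -s lib$lib.so.3 lib$lib.so
    done

    # numpy/distutils/site.cfg
    [atlas]
    library_dirs = /usr/lib64/atlas
    atlas_libs = lapack, f77blas, cblas, atlas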
From dd55 at cornell.edu Mon Feb 13 13:17:40 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 13 Feb 2006 13:17:40 -0500 Subject: [SciPy-user] installation on fedora core 4 x86_64 In-Reply-To: <200602131250.20597.dd55@cornell.edu> References: <1139852126.1523.94.camel@okeanos.aoml.noaa.gov> <200602131250.20597.dd55@cornell.edu> Message-ID: <200602131317.40708.dd55@cornell.edu> On Monday 13 February 2006 12:50, Darren Dale wrote: > I see there is no sample site.cfg file in the numpy distutils directory. It > does exist in scipy-0.3.2, does anyone know why it has been removed? After scanning system_info.py, I see there is a discussion related to site.cfg. Perhaps this discussion should be moved or copied to README.txt in the main numpy directory. Also, why not make numpy use a site.cfg file by default? site.cfg could include configs for various distro's and a user could just uncomment the appropriate section. Darren From josegomez at gmx.net Mon Feb 13 14:30:02 2006 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Mon, 13 Feb 2006 20:30:02 +0100 (MET) Subject: [SciPy-user] Using C code in scipy-best practice Message-ID: <12630.1139859002@www064.gmx.net> Hi! After using f2py to be able to call some Fortran 77 code from Python, I now have a need to do the same with some C function. I was wondering what the best practice is to wrap this function into some Python callable object. I read a number of different approaches are possible, but while with f2py I know what to expect, with C, it appears that SWIG, or Pyrex, or...? Many thanks! -- 10 GB Mailbox, 100 FreeSMS/Monat http://www.gmx.net/de/go/topmail +++ GMX - die erste Adresse f?r Mail, Message, More +++ From ryanlists at gmail.com Mon Feb 13 14:36:52 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 13 Feb 2006 14:36:52 -0500 Subject: [SciPy-user] Using C code in scipy-best practice In-Reply-To: <12630.1139859002@www064.gmx.net> References: <12630.1139859002@www064.gmx.net> Message-ID: You can use f2py to wrap C code. There was a thread about this last week. On 2/13/06, Jose Luis Gomez Dans wrote: > Hi! > After using f2py to be able to call some Fortran 77 code from Python, I now > have a need to do the same with some C function. I was wondering what the > best practice is to wrap this function into some Python callable object. I > read a number of different approaches are possible, but while with f2py I > know what to expect, with C, it appears that SWIG, or Pyrex, or...? > > Many thanks! > > -- > 10 GB Mailbox, 100 FreeSMS/Monat http://www.gmx.net/de/go/topmail > +++ GMX - die erste Adresse f?r Mail, Message, More +++ > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > From ryanlists at gmail.com Mon Feb 13 14:52:12 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 13 Feb 2006 14:52:12 -0500 Subject: [SciPy-user] Variable precision Message-ID: Is there anyway to constrain optimize.fmin to use single precision or even fewer digits in its internal calculations? I am trying to show that one approach to a problem is less prone to round off errors than another approach, but the results aren't all that dramatic with double precision numbers. 
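fmin itself has no precision switch as far as I know, so one rough workaround is to force the objective function to evaluate in float32 and compare against the float64 version; note that the simplex bookkeeping inside fmin still happens in doubles, so this only degrades the function values, not the optimizer's own arithmetic. The Rosenbrock function below is just a stand-in objective, not code from this thread:

    import numpy as np
    from scipy import optimize

    def rosen_single(x):
        # cast the parameters to float32 so the function value is computed in single precision
        x = np.asarray(x, dtype=np.float32)
        return float(np.sum(100.0*(x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

    xopt = optimize.fmin(rosen_single, [1.3, 0.7, 0.8], xtol=1e-6)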
Thanks, Ryan From manouchk at gmail.com Mon Feb 13 15:31:48 2006 From: manouchk at gmail.com (manouchk) Date: Mon, 13 Feb 2006 18:31:48 -0200 Subject: [SciPy-user] SciPy on mandriva Message-ID: <200602131831.48931.manouchk@gmail.com> I'm wondering if there are SciPy mandriva users. I'm quite new to python and SciPy. It seems that mandriva have obsolete scientific related python packages. python-SciPy-0.3.2-5mdk.src.rpm python-numarray-1.5.1-1mdk.src.rpm python-scientific-2.4.9-5mdk.src.rpm python-numeric-24.0-2mdk.src.rpm aparently mandriva still didn't switch to NumPy, it seems that matplotlib is not included etc... I've seen numpy provides a src.rpm packages that could be probably installed. Can NumPy be installed in conjonction with numeric? or must replace numeric? I'm wondering if there are some "clean" src.rpm for mandriva somewhere for all thoose packages (matplotlib, NumPy, SciPy) in their recent version? (if not I could work on them!) thank you, Emmanuel Favre-Nicolin From manouchk at gmail.com Mon Feb 13 16:41:57 2006 From: manouchk at gmail.com (manouchk) Date: Mon, 13 Feb 2006 19:41:57 -0200 Subject: [SciPy-user] SciPy on mandriva In-Reply-To: <200602131831.48931.manouchk@gmail.com> References: <200602131831.48931.manouchk@gmail.com> Message-ID: <200602131941.57836.manouchk@gmail.com> Has anyone compiled/installed successfully SciPy 0.4.4 on mandriva 2005? From gruben at bigpond.net.au Mon Feb 13 21:27:27 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Tue, 14 Feb 2006 13:27:27 +1100 Subject: [SciPy-user] wiki problem Message-ID: <43F1400F.2090505@bigpond.net.au> The AstroPy link on the Topical_Software page is broken. The original AstroPy page says "AstroPy has moved to this page on SciPy's wiki site (at their request)." but it isn't there. Gary R. From perry at stsci.edu Mon Feb 13 22:10:35 2006 From: perry at stsci.edu (Perry Greenfield) Date: Mon, 13 Feb 2006 22:10:35 -0500 Subject: [SciPy-user] wiki problem In-Reply-To: <43F1400F.2090505@bigpond.net.au> Message-ID: That page hasn't been migrated yet. In few days perhaps. The old one is accessible at: http://old.scipy.org/wikis/topical_software/Astronomy Perry Greenfield > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net]On Behalf Of Gary Ruben > Sent: Monday, February 13, 2006 9:27 PM > To: SciPy Users List > Subject: [SciPy-user] wiki problem > > > The AstroPy link on the Topical_Software page is broken. The original > AstroPy page says "AstroPy has moved to this page on SciPy's wiki site > (at their request)." but it isn't there. > > Gary R. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From gerard.vermeulen at grenoble.cnrs.fr Tue Feb 14 04:08:57 2006 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Tue, 14 Feb 2006 10:08:57 +0100 Subject: [SciPy-user] SciPy on mandriva In-Reply-To: <200602131941.57836.manouchk@gmail.com> References: <200602131831.48931.manouchk@gmail.com> <200602131941.57836.manouchk@gmail.com> Message-ID: <20060214100857.469ff77f.gerard.vermeulen@grenoble.cnrs.fr> On Mon, 13 Feb 2006 19:41:57 -0200 manouchk wrote: > Has anyone compiled/installed successfully SciPy 0.4.4 on mandriva 2005? I build RPMs from SVN snapshots of numpy and scipy on Mandriva-2005 and I don't recall any problems around the time SciPy-0.4.4 has been released. Once you've got all dependencies installed, it is really easy. 
That is: find all development RPMs and install atlas (which is most difficult if you want a really optimized atlas, Mandriva has a 'universal' atlas package, I believe). Gerard From josegomez at gmx.net Tue Feb 14 05:47:12 2006 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Tue, 14 Feb 2006 11:47:12 +0100 (MET) Subject: [SciPy-user] Using C code in scipy-best practice References: Message-ID: <22028.1139914032@www093.gmx.net> Hi! > You can use f2py to wrap C code. There was a thread about this last week. D'oh!!!! Sorry, didn't realize that. Many thanks! Jose -- Telefonieren Sie schon oder sparen Sie noch? NEU: GMX Phone_Flat http://www.gmx.net/de/go/telefonie From schofield at ftw.at Tue Feb 14 07:33:21 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 14 Feb 2006 13:33:21 +0100 Subject: [SciPy-user] Sparse matrix questions In-Reply-To: <43F1C635.4070904@mecha.uni-stuttgart.de> References: <43F1A7AB.7060908@mecha.uni-stuttgart.de> <43F1B8A3.3090306@ntc.zcu.cz> <43F1C635.4070904@mecha.uni-stuttgart.de> Message-ID: <43F1CE11.6060900@ftw.at> Nils Wagner wrote: >from scipy.sparse import * >from scipy import * >A = rand(4,4) >sparseA = csc_matrix(A) ># ># How can I build the norm of a sparse matrix without using todense ? ># >eps = linalg.norm(sparseA.todense()) > > This is a non-trivial problem, and I don't think we have a function for it. Could you write one and contribute it? ;) ># How do I scale a sparse matrix object ? ># >#sparseA = sparseA/0.5 # doesn't work > > > Interesting. I'll fix this in SVN. For now, just use multiplication instead: >>> sparseA *= 2.0 ># How do I initialize a sparse matrix of order n \times m with zeros ># >B = zeros((4,5),Float) >#sparseB = csc_matrix(B) # doesn't work > > It's filled with zeros already. Just use >>> sparseB = csc_matrix((4,5)) But for efficiency I suggest you use dok_matrix to construct and manipulate matrices, then convert to csc or csr for solvers or matrix multiplication. For diagonals, you can use dok_matrix.setdiag(): >>> sparseB = dok_matrix((5,5)) >>> sparseB.setdiag([1]*5) >>> sparseB.todense() array([[ 1., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0.], [ 0., 0., 1., 0., 0.], [ 0., 0., 0., 1., 0.], [ 0., 0., 0., 0., 1.]]) I hope this helps. In the future, please post questions like these on the scipy-user list, not on scipy-dev :) -- Ed From nwagner at mecha.uni-stuttgart.de Tue Feb 14 07:48:32 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 14 Feb 2006 13:48:32 +0100 Subject: [SciPy-user] Sparse matrix questions In-Reply-To: <43F1CE11.6060900@ftw.at> References: <43F1A7AB.7060908@mecha.uni-stuttgart.de> <43F1B8A3.3090306@ntc.zcu.cz> <43F1C635.4070904@mecha.uni-stuttgart.de> <43F1CE11.6060900@ftw.at> Message-ID: <43F1D1A0.9090203@mecha.uni-stuttgart.de> Ed Schofield wrote: >Nils Wagner wrote: > >>from scipy.sparse import * >>from scipy import * > >>A = rand(4,4) >>sparseA = csc_matrix(A) >># >># How can I build the norm of a sparse matrix without using todense ?
>># >>eps = linalg.norm(sparseA.todense()) >> >> >> >This is a non-trivial problem, and I don't think we have a function for >it. Could you write one and contribute it? ;) > > > Well, my programming skills are not a quarter as good (at least for integration in scipy). >># How do I scale a sparse matrix object ? >># >>#sparseA = sparseA/0.5 # doesn't work >> >> >> >> >Interesting. I'll fix this in SVN. For now, just use multiplication >instead: > >>>>sparseA *= 2.0 >>>> > > File "/usr/local/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 214, in __getattr__ raise AttributeError, attr + " not found" AttributeError: __float__ not found >># How do I initialize a sparse matrix of order n \times m with zeros >># >>B = zeros((4,5),Float) >>#sparseB = csc_matrix(B) # doesn't work >> >> >> >It's filled with zeros already. Just use > >>>>sparseB = csc_matrix((4,5)) >>>> > >But for efficiency I suggest you use dok_matrix to construct and >manipulate matrices, then convert to csc or csr for solvers or matrix >multiplication. > >For diagonals, you can use dok_matrix.setdiag(): > > >>>>sparseB = dok_matrix((5,5)) >>>>sparseB.setdiag([1]*5) >>>>sparseB.todense() >>>> >array([[ 1., 0., 0., 0., 0.], > [ 0., 1., 0., 0., 0.], > [ 0., 0., 1., 0., 0.], > [ 0., 0., 0., 1., 0.], > [ 0., 0., 0., 0., 1.]]) > > >I hope this helps. In the future, please post questions like these on >the scipy-user list, not on scipy-dev :) > >-- Ed > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > From manouchk at gmail.com Tue Feb 14 08:41:59 2006 From: manouchk at gmail.com (manouchk) Date: Tue, 14 Feb 2006 11:41:59 -0200 Subject: [SciPy-user] SciPy on mandriva In-Reply-To: <20060214100857.469ff77f.gerard.vermeulen@grenoble.cnrs.fr> References: <200602131831.48931.manouchk@gmail.com> <200602131941.57836.manouchk@gmail.com> <20060214100857.469ff77f.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <200602141141.59563.manouchk@gmail.com> Le Mardi 14 F?vrier 2006 07:08, Gerard Vermeulen a ?crit?: > On Mon, 13 Feb 2006 19:41:57 -0200 > > manouchk wrote: > > Has anyone compiled/installed successfully SciPy 0.4.4 on mandriva 2005? > > I build RPMs from SVN snapshots of numpy and scipy on Mandriva-2005 and I > don't recall any problems around the time SciPy-0.4.4 has been released. It finally worked, after installing dependancies to create matplotlib (and maybe some others), I think there should be some missing dependancies in cooker rpm SciPy-0.4.3, maybe tk-devel, tcl-devel? If someone is interested to recompile (or give a try etc...), anyway I put rpms and srpms here : http://emmanuelfavrenicolin.free.fr/Public/Ordinateur/Mandriva/MDV2005LE/SRPMS/python-SciPy-0.4.4-0.1.102mdk.src.rpm http://emmanuelfavrenicolin.free.fr/Public/Ordinateur/Mandriva/MDV2005LE/RPMS/python-SciPy-0.4.4-0.1.102mdk.i586.rpm > Once you've got all dependencies installed, it is really easy. That is: > find all development RPMs and install atlas (which is most difficult if > you want a really optimized atlas, Mandriva has a 'universal' atlas > package, I believe). I fortunatly don't need specially an optimized atlas! Cheers, Emmanuel (Favre-Nicolin) > Gerard From epost2 at gmail.com Tue Feb 14 15:38:19 2006 From: epost2 at gmail.com (Bruce) Date: Tue, 14 Feb 2006 21:38:19 +0100 Subject: [SciPy-user] scipy_base Message-ID: Hi, Im new to scipy. 
I installed it and it seems to work, at least I can >>> import scipy The tutorial at scipy.org/Wiki immediately start talking about the package scipy_base but I can`t seem to find it. What`s that about? thx From robert.kern at gmail.com Tue Feb 14 15:45:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 14 Feb 2006 14:45:58 -0600 Subject: [SciPy-user] scipy_base In-Reply-To: References: Message-ID: <43F24186.8010003@gmail.com> Bruce wrote: > Hi, > Im new to scipy. I installed it and it seems to work, at least I can > >>>>import scipy > > > The tutorial at scipy.org/Wiki immediately start talking about the > package scipy_base but I can`t seem to find it. What`s that about? Which tutorial? scipy recently underwent some changes with the Numeric->numpy switch, and some of the documentation hasn't caught up. Notably, scipy_tutorial.pdf hasn't been updated to reflect the recent changes. Most of the functions in scipy_base have been folded into numpy. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From jkgruet at sandia.gov Tue Feb 14 16:03:52 2006 From: jkgruet at sandia.gov (James K. Gruetzner) Date: Tue, 14 Feb 2006 14:03:52 -0700 Subject: [SciPy-user] scipy install fails: numpy.ndarray Message-ID: <43F245B8.4070707@sandia.gov> Any ideas how to correct this? I've just installed scipy 0.4.4, and am having some problems with it. The key appears to be in the references to numpy.ndarray. When running scipy.test(level=0), numerous errors of: AttributeError: 'numpy.ndarray' object has no attribute 'dtypechar' Installation of scipy 0.4.4 does have many warnings, but appears to complete OK. - Many warnings that variables "might be used uninitialized" in various functions. - Many warnings that "ignoring return value of 'fscanf'" - Several warnings that "POSIX_C_SOURCE" redefined. - Many "unused variable" warnings Information follows below. Thanks in advance. James ---------------------------- system/platform: posix linux2 Fedora Core 4 python 2.4.1 numpy 0.9.4 LAPACK ATLAS 3.7.11 NOTE 1 gcc 4.0.2 fftw 3.0.1 1. Installed in the local tree. In ../scipy/Lib/linalg, the command python setup_atlas_version.py build_ext --inplace --force fails with the error message: /usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py:896: UserWarning: Use Configuration('linalg','',top_path=None) instead of deprecated default_config_dict('linalg','',None) warnings.warn('Use Configuration(%s,%s,top_path=%s) instead of '\ Traceback (most recent call last): File "setup_atlas_version.py", line 29, in ? 
setup(**configuration()) File "setup_atlas_version.py", line 13, in configuration del config['fortran_libraries'] KeyError: 'fortran_libraries' ----------------- Output of python INSTALLDIR/numpy/distutils/system_info.py command: _pkg_config_info: NOT AVAILABLE agg2_info: ( src_dirs = .:/usr/local/src ) NOT AVAILABLE atlas_blas_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/atlas ) ( paths: /usr/lib/sse2 ) ( paths: /usr/local/lib/atlas/libf77blas.a ) ( paths: /usr/local/lib/atlas/libcblas.a ) ( paths: /usr/local/lib/atlas/libatlas.a ) ( include_dirs = /usr/local/include:/usr/include ) ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/atlas ) ( paths: /usr/lib/sse2 ) FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c atlas_blas_threads_info: Setting PTATLAS=ATLAS ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/atlas ) ( paths: /usr/lib/sse2 ) ( paths: /usr/local/lib/atlas/libatlas.a ) NOT AVAILABLE atlas_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/atlas ) ( paths: /usr/lib/sse2 ) ( paths: /usr/local/lib/atlas/libf77blas.a ) ( paths: /usr/local/lib/atlas/libcblas.a ) ( paths: /usr/local/lib/atlas/libatlas.a ) ( paths: /usr/local/lib/atlas/liblapack.a ) system_info.atlas_info ( include_dirs = /usr/local/include:/usr/include ) ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/atlas ) ( paths: /usr/lib/sse2 ) FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = f77 atlas_threads_info: Setting PTATLAS=ATLAS ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/atlas ) ( paths: /usr/lib/sse2 ) ( paths: /usr/local/lib/atlas/libatlas.a ) system_info.atlas_threads_info NOT AVAILABLE blas_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/libblas.a ) ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['blas'] library_dirs = ['/usr/local/lib'] language = f77 blas_mkl_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) NOT AVAILABLE blas_opt_info: running build_src building extension "atlas_version" sources adding 'build/src/atlas_version_-0x32261224.c' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] blas_src_info: ( src_dirs = .:/usr/local/src ) NOT AVAILABLE boost_python_info: ( src_dirs = .:/usr/local/src ) NOT AVAILABLE dfftw_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) dfftw not found NOT AVAILABLE dfftw_threads_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) dfftw threads not found NOT AVAILABLE djbfft_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) NOT AVAILABLE fft_opt_info: fftw3_info: FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] fftw2_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) fftw2 not found NOT AVAILABLE fftw_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) ( paths: /usr/local/lib/libfftw3.a ) ( paths: /usr/local/include/fftw3.h ) ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] fftw_threads_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) fftw threads not found NOT AVAILABLE freetype2_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['freetype', 'z'] define_macros = [('FREETYPE2_INFO', '"\\"9.7.3\\""'), ('FREETYPE2_VERSION_9_7_3', None)] include_dirs = ['/usr/include/freetype2'] gdk_2_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['gdk-x11-2.0', 'gdk_pixbuf-2.0', 'm', 'pangoxft-1.0', 'pangox-1.0', 'pango-1.0', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] define_macros = [('GDK_2_INFO', '"\\"2.6.10\\""'), ('GDK_VERSION_2_6_10', None), ('XTHREADS', None), ('_REENTRANT', None), ('XUSE_MTSAFE_API', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/lib/gtk-2.0/include', '/usr/X11R6/include', '/usr/include/pango-1.0', '/usr/include/freetype2', '/usr/include/freetype2/config', '/usr/include/glib-2.0', '/usr/lib/glib-2.0/include'] gdk_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['gdk', 'Xi', 'Xext', 'X11', 'm', 'glib'] library_dirs = ['/usr/X11R6/lib'] define_macros = [('GDK_INFO', '"\\"1.2.10\\""'), ('GDK_VERSION_1_2_10', None)] include_dirs = ['/usr/include/gtk-1.2', '/usr/X11R6/include', '/usr/include/glib-1.2', '/usr/lib/glib/include'] gdk_pixbuf_2_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['gdk_pixbuf-2.0', 'm', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] define_macros = [('GDK_PIXBUF_2_INFO', '"\\"2.6.10\\""'), ('GDK_PIXBUF_VERSION_2_6_10', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/include/glib-2.0', '/usr/lib/glib-2.0/include'] gdk_pixbuf_xlib_2_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['gdk_pixbuf_xlib-2.0', 'gdk_pixbuf-2.0', 'm', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] extra_link_args = ['-Wl,--export-dynamic'] define_macros = 
[('GDK_PIXBUF_XLIB_2_INFO', '"\\"2.6.10\\""'), ('GDK_PIXBUF_XLIB_VERSION_2_6_10', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/include/glib-2.0', '/usr/lib/glib-2.0/include'] gdk_x11_2_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['gdk-x11-2.0', 'gdk_pixbuf-2.0', 'm', 'pangoxft-1.0', 'pangox-1.0', 'pango-1.0', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] define_macros = [('GDK_X11_2_INFO', '"\\"2.6.10\\""'), ('GDK_X11_VERSION_2_6_10', None), ('XTHREADS', None), ('_REENTRANT', None), ('XUSE_MTSAFE_API', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/lib/gtk-2.0/include', '/usr/X11R6/include', '/usr/include/pango-1.0', '/usr/include/freetype2', '/usr/include/freetype2/config', '/usr/include/glib-2.0', '/usr/lib/glib-2.0/include'] gtkp_2_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['gtk-x11-2.0', 'gdk-x11-2.0', 'atk-1.0', 'gdk_pixbuf-2.0', 'm', 'pangoxft-1.0', 'pangox-1.0', 'pango-1.0', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] define_macros = [('GTKP_2_INFO', '"\\"2.6.10\\""'), ('GTK_VERSION_2_6_10', None), ('XTHREADS', None), ('_REENTRANT', None), ('XUSE_MTSAFE_API', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/lib/gtk-2.0/include', '/usr/X11R6/include', '/usr/include/atk-1.0', '/usr/include/pango-1.0', '/usr/include/freetype2', '/usr/include/freetype2/config', '/usr/include/glib-2.0', '/usr/lib/glib-2.0/include'] gtkp_x11_2_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['gtk-x11-2.0', 'gdk-x11-2.0', 'atk-1.0', 'gdk_pixbuf-2.0', 'm', 'pangoxft-1.0', 'pangox-1.0', 'pango-1.0', 'gobject-2.0', 'gmodule-2.0', 'dl', 'glib-2.0'] define_macros = [('GTKP_X11_2_INFO', '"\\"2.6.10\\""'), ('GTK_X11_VERSION_2_6_10', None), ('XTHREADS', None), ('_REENTRANT', None), ('XUSE_MTSAFE_API', None)] include_dirs = ['/usr/include/gtk-2.0', '/usr/lib/gtk-2.0/include', '/usr/X11R6/include', '/usr/include/atk-1.0', '/usr/include/pango-1.0', '/usr/include/freetype2', '/usr/include/freetype2/config', '/usr/include/glib-2.0', '/usr/lib/glib-2.0/include'] lapack_atlas_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/atlas ) ( paths: /usr/lib/sse2 ) ( paths: /usr/local/lib/atlas/libf77blas.a ) ( paths: /usr/local/lib/atlas/libcblas.a ) ( paths: /usr/local/lib/atlas/libatlas.a ) system_info.lapack_atlas_info NOT AVAILABLE lapack_atlas_threads_info: Setting PTATLAS=ATLAS ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/atlas ) ( paths: /usr/lib/sse2 ) ( paths: /usr/local/lib/atlas/libatlas.a ) system_info.lapack_atlas_threads_info NOT AVAILABLE lapack_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( paths: /usr/local/lib/liblapack.a ) ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['lapack'] library_dirs = ['/usr/local/lib'] language = f77 lapack_mkl_info: mkl_info: NOT AVAILABLE NOT AVAILABLE lapack_opt_info: running build_src building extension "atlas_version" sources adding 'build/src/atlas_version_0x39ddef23.c' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] lapack_src_info: ( src_dirs = .:/usr/local/src ) ( paths: /usr/local/src/LAPACK/SRC ) ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: sources = ['/usr/local/src/LAPACK/SRC/sbdsdc.f', '/usr/local/src/LAPAC ... ... /LAPACK/SRC/izmax1.f', '/usr/local/src/LAPACK/SRC/dzsum1.f'] language = f77 numarray_info: ( include_dirs = /usr/include/python2.4 ) ( paths: /usr/include/python2.4/numarray/arrayobject.h ) ( library_dirs = ) FOUND: define_macros = [('NUMARRAY_VERSION', '"\\"1.5.1\\""'), ('NUMARRAY', None)] include_dirs = ['/usr/include/python2.4'] numpy_info: ( include_dirs = /usr/include/python2.4 ) ( library_dirs = ) FOUND: define_macros = [('NUMPY_VERSION', '"\\"0.9.4\\""'), ('NUMPY', None)] sfftw_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) sfftw not found NOT AVAILABLE sfftw_threads_info: ( library_dirs = /usr/local/lib:/usr/lib ) ( include_dirs = /usr/local/include:/usr/include ) sfftw threads not found NOT AVAILABLE wx_info: Could not locate executable wx-config File not found: wx-config. Cannot determine wx info. NOT AVAILABLE x11_info: ( library_dirs = /usr/X11R6/lib:/usr/lib ) ( include_dirs = /usr/X11R6/include:/usr/include ) ( paths: /usr/X11R6/lib/libX11.so ) ( paths: /usr/X11R6/include/X11/X.h ) ( library_dirs = /usr/X11R6/lib:/usr/lib ) FOUND: libraries = ['X11'] library_dirs = ['/usr/X11R6/lib'] include_dirs = ['/usr/X11R6/include'] xft_info: ( library_dirs = /usr/local/lib:/usr/lib ) FOUND: libraries = ['Xft', 'X11', 'freetype', 'Xrender', 'fontconfig'] library_dirs = ['/usr/X11R6/lib'] define_macros = [('XFT_INFO', '"\\"2.1.2.2\\""'), ('XFT_VERSION_2_1_2_2', None)] include_dirs = ['/usr/X11R6/include', '/usr/include/freetype2', '/usr/include/freetype2/config'] From epost2 at gmail.com Tue Feb 14 16:07:19 2006 From: epost2 at gmail.com (Bruce) Date: Tue, 14 Feb 2006 22:07:19 +0100 Subject: [SciPy-user] scipy_base In-Reply-To: <43F24186.8010003@gmail.com> References: <43F24186.8010003@gmail.com> Message-ID: scipy_tutorial.pdf is the one. Nothing wrong wiith my installation then, thanks On 2/14/06, Robert Kern wrote: > Bruce wrote: > > Hi, > > Im new to scipy. I installed it and it seems to work, at least I can > > > >>>>import scipy > > > > > > The tutorial at scipy.org/Wiki immediately start talking about the > > package scipy_base but I can`t seem to find it. What`s that about? > > Which tutorial? scipy recently underwent some changes with the Numeric->numpy > switch, and some of the documentation hasn't caught up. Notably, > scipy_tutorial.pdf hasn't been updated to reflect the recent changes. Most of > the functions in scipy_base have been folded into numpy. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." 
> -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From robert.kern at gmail.com Tue Feb 14 16:09:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 14 Feb 2006 15:09:58 -0600 Subject: [SciPy-user] scipy install fails: numpy.ndarray In-Reply-To: <43F245B8.4070707@sandia.gov> References: <43F245B8.4070707@sandia.gov> Message-ID: <43F24726.6000203@gmail.com> James K. Gruetzner wrote: > Any ideas how to correct this? I've just installed scipy 0.4.4, and am > having some problems with it. The key appears to be in the references > to numpy.ndarray. > > When running scipy.test(level=0), numerous errors of: > AttributeError: 'numpy.ndarray' object has no attribute 'dtypechar' This is a known issue. Our releases of numpy and scipy got out of sync. Current SVN checkouts of both packages should work together, though. I think we are gearing up for another (synced!) pair of releases. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From jkgruet at sandia.gov Tue Feb 14 19:41:20 2006 From: jkgruet at sandia.gov (James K. Gruetzner) Date: Tue, 14 Feb 2006 17:41:20 -0700 Subject: [SciPy-user] scipy install fails: numpy.ndarray In-Reply-To: <43F24726.6000203@gmail.com> References: <43F245B8.4070707@sandia.gov> <43F24726.6000203@gmail.com> Message-ID: <43F278B0.7080408@sandia.gov> Thanks for the info. I've never used subversion before, but the indicated statement gives an error---I presume due to firewall restrictions. Alas. There may be a way to get past it, but I don't know how yet. $ svn co http://svn.scipy.org/svn/numpy/trunk numpy svn: PROPFIND request failed on '/svn/numpy/trunk' svn: PROPFIND of '/svn/numpy/trunk': could not connect to server (http://svn.scipy.org) Well, I hope the next release comes quickly. Thanks again. James ------------------------------------ Robert Kern wrote: >James K. Gruetzner wrote: > > >>Any ideas how to correct this? I've just installed scipy 0.4.4, and am >>having some problems with it. The key appears to be in the references >>to numpy.ndarray. >> >>When running scipy.test(level=0), numerous errors of: >> AttributeError: 'numpy.ndarray' object has no attribute 'dtypechar' >> >> > >This is a known issue. Our releases of numpy and scipy got out of sync. Current >SVN checkouts of both packages should work together, though. I think we are >gearing up for another (synced!) pair of releases. > > > From robert.kern at gmail.com Tue Feb 14 20:09:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 14 Feb 2006 19:09:21 -0600 Subject: [SciPy-user] scipy install fails: numpy.ndarray In-Reply-To: <43F278B0.7080408@sandia.gov> References: <43F245B8.4070707@sandia.gov> <43F24726.6000203@gmail.com> <43F278B0.7080408@sandia.gov> Message-ID: <43F27F41.20207@gmail.com> James K. Gruetzner wrote: > Thanks for the info. I've never used subversion before, but the > indicated statement gives an error---I presume due to firewall > restrictions. Alas. There may be a way to get past it, but I don't > know how yet. > > $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > svn: PROPFIND request failed on '/svn/numpy/trunk' > svn: PROPFIND of '/svn/numpy/trunk': could not connect to server > (http://svn.scipy.org) The server seems to be working for me, so I don't think that's the problem. Do you have to use a proxy for outgoing HTTP connections? 
If so, can you try the steps given here (found by haphazard Googling, so there may be better resources sitting around somewhere): http://wiki.apache.org/geronimo/GettingSourceCode """Access through a proxy The Subversion client can go through a proxy, if you configure it to do so. First, edit your "servers" configuration file to indicate which proxy to use. The files location depends on your operating system. On Linux or Unix it is located in the directory "~/.subversion". On Windows it is in "%APPDATA%\Subversion". (Try "echo %APPDATA%", note this is a hidden directory.) There are comments in the file explaining what to do. If you don't have that file, get the latest Subversion client and run any command; this will cause the configuration directory and template files to be created. Example : Edit the 'servers' file and add something like : [global] http-proxy-host = your.proxy.name http-proxy-port = 3128 """ -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ottomaddox at fastmail.fm Tue Feb 14 21:09:50 2006 From: ottomaddox at fastmail.fm (Otto Maddox) Date: Wed, 15 Feb 2006 02:09:50 +0000 Subject: [SciPy-user] Telling setup.py where to find stuff Message-ID: <1139969390.20066.254398196@webmail.messagingengine.com> I'm having a few troubles getting SciPy built, mostly due to mixing and matching compilers out of necessity. I installed Python 2.4.2 from source using Apple's GCC 4. Then, to get g77, installed GCC 3.4.5 from source into /opt/local/gcc34 (normally not in my PATH). Numpy 0.9.4 seemed to install fine by doing F77=/opt/local/gcc34/bin/g77 python setup.py install The problem is that the same thing doesn't work for getting SciPy's setup.py to find g77. Also, I do not know how to tell it where my fftw lives (under /opt/local/lib, /opt/local/include). Does anybody have any advice, please? -- http://www.fastmail.fm - Access all of your messages and folders wherever you are From schofield at ftw.at Wed Feb 15 05:51:15 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 15 Feb 2006 11:51:15 +0100 Subject: [SciPy-user] scipy install fails: numpy.ndarray In-Reply-To: <43F278B0.7080408@sandia.gov> References: <43F245B8.4070707@sandia.gov> <43F24726.6000203@gmail.com> <43F278B0.7080408@sandia.gov> Message-ID: <43F307A3.1090706@ftw.at> James K. Gruetzner wrote: >Thanks for the info. I've never used subversion before, but the >indicated statement gives an error---I presume due to firewall >restrictions. Alas. There may be a way to get past it, but I don't >know how yet. > >$ svn co http://svn.scipy.org/svn/numpy/trunk numpy >svn: PROPFIND request failed on '/svn/numpy/trunk' >svn: PROPFIND of '/svn/numpy/trunk': could not connect to server >(http://svn.scipy.org) > >Well, I hope the next release comes quickly. > > I've posted a pre-release of 0.4.6 online: http://userver.ftw.at/~ejs/scipy-0.4.6pre.tar.gz I've tested it against the SVN version of numpy from yesterday, but it may (should?) also be compatible with the 0.9.4 numpy release. 
So if you're impatient, you can try this :) -- Ed From wjdandreta at att.net Wed Feb 15 08:32:33 2006 From: wjdandreta at att.net (Bill Dandreta) Date: Wed, 15 Feb 2006 08:32:33 -0500 Subject: [SciPy-user] Telling setup.py where to find stuff In-Reply-To: <1139969390.20066.254398196@webmail.messagingengine.com> References: <1139969390.20066.254398196@webmail.messagingengine.com> Message-ID: <43F32D71.8030707@att.net> Otto Maddox wrote: >I'm having a few troubles getting SciPy built, mostly due to mixing and >matching compilers out of necessity. > >I installed Python 2.4.2 from source using Apple's GCC 4. Then, to get >g77, installed GCC 3.4.5 from source into /opt/local/gcc34 (normally not >in my PATH). > >Numpy 0.9.4 seemed to install fine by doing > >F77=/opt/local/gcc34/bin/g77 python setup.py install > >The problem is that the same thing doesn't work for getting SciPy's >setup.py to find g77. Also, I do not know how to tell it where my fftw >lives (under /opt/local/lib, /opt/local/include). > >Does anybody have any advice, please? > Try: PATH=$PATH:/opt/local/lib:/opt/local/include export PATH Bill From nwagner at mecha.uni-stuttgart.de Wed Feb 15 11:22:25 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 15 Feb 2006 17:22:25 +0100 Subject: [SciPy-user] Numpy for Matlab users Message-ID: <43F35541.2060008@mecha.uni-stuttgart.de> Hi all, I am converting a Matlab code into a numpy/scipy code. How can I translate H(k+1,:) = [ ] into numpy ? Nils From cimrman3 at ntc.zcu.cz Wed Feb 15 11:25:52 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 15 Feb 2006 17:25:52 +0100 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35541.2060008@mecha.uni-stuttgart.de> References: <43F35541.2060008@mecha.uni-stuttgart.de> Message-ID: <43F35610.2080209@ntc.zcu.cz> Nils Wagner wrote: > Hi all, > > I am converting a Matlab code into a numpy/scipy code. > > How can I translate > > H(k+1,:) = [ ] > > into numpy ? H = H[:k+1,:] ? r. From robert.kern at gmail.com Wed Feb 15 11:25:36 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 10:25:36 -0600 Subject: [SciPy-user] Telling setup.py where to find stuff In-Reply-To: <43F32D71.8030707@att.net> References: <1139969390.20066.254398196@webmail.messagingengine.com> <43F32D71.8030707@att.net> Message-ID: <43F35600.90209@gmail.com> Bill Dandreta wrote: > Otto Maddox wrote: > >>I'm having a few troubles getting SciPy built, mostly due to mixing and >>matching compilers out of necessity. >> >>I installed Python 2.4.2 from source using Apple's GCC 4. Then, to get >>g77, installed GCC 3.4.5 from source into /opt/local/gcc34 (normally not >>in my PATH). >> >>Numpy 0.9.4 seemed to install fine by doing >> >>F77=/opt/local/gcc34/bin/g77 python setup.py install Since numpy doesn't have any FORTRAN code, we don't expect setting the FORTRAN compiler to have any effect. >>The problem is that the same thing doesn't work for getting SciPy's >>setup.py to find g77. We don't use environment variables to determine the compiler executable. numpy.distutils needs more information to determine the appropriate flags. $ python setup.py config_fc --fcompiler=gnu --f77exec=/opt/local/gcc34/bin/g77 install [All on one line, of course.] >> Also, I do not know how to tell it where my fftw >>lives (under /opt/local/lib, /opt/local/include). >> >>Does anybody have any advice, please? 
> > Try: > > PATH=$PATH:/opt/local/lib:/opt/local/include > export PATH I'm pretty sure numpy.distutils won't check the PATH variable (which is meant for executables) for library and header paths. You can pass -L and -I arguments to build_clib and build_ext. Or you can set the FFTW environment variable to /opt/local . Or you can create a site.cfg file in numpy/distutils/ directory in the source[1]; see the file numpy/distutils/system_info.py for an example. [1] Are we still using this location? Why aren't we looking in os.getcwd()? Are we ever not in the root of the source tree when site.cfg is supposed to be read? -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From robert.kern at gmail.com Wed Feb 15 11:26:42 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 10:26:42 -0600 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35541.2060008@mecha.uni-stuttgart.de> References: <43F35541.2060008@mecha.uni-stuttgart.de> Message-ID: <43F35642.9080803@gmail.com> Nils Wagner wrote: > Hi all, > > I am converting a Matlab code into a numpy/scipy code. > > How can I translate > > H(k+1,:) = [ ] > > into numpy ? Describe what it does in Matlab. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cimrman3 at ntc.zcu.cz Wed Feb 15 11:30:00 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 15 Feb 2006 17:30:00 +0100 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35610.2080209@ntc.zcu.cz> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35610.2080209@ntc.zcu.cz> Message-ID: <43F35708.3050709@ntc.zcu.cz> Robert Cimrman wrote: > Nils Wagner wrote: > >>Hi all, >> >>I am converting a Matlab code into a numpy/scipy code. >> >>How can I translate >> >>H(k+1,:) = [ ] >> >>into numpy ? > > > H = H[:k+1,:] ? Whoops, I see ':' even on places where are none... :) nevertheless the idea holds - copy all you need: H = H[indx,:], where indx are all indices except 'k+1' or try to use the masked arrays. r. From nwagner at mecha.uni-stuttgart.de Wed Feb 15 11:33:59 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 15 Feb 2006 17:33:59 +0100 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35642.9080803@gmail.com> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35642.9080803@gmail.com> Message-ID: <43F357F7.8070606@mecha.uni-stuttgart.de> Robert Kern wrote: >Nils Wagner wrote: > >>Hi all, >> >>I am converting a Matlab code into a numpy/scipy code. >> >>How can I translate >> >>H(k+1,:) = [ ] >> >>into numpy ? >> > >Describe what it does in Matlab. > > It removes the last row of the matrix. a = rand(4,3) a(4,:) = [] Now a is a 3 \times 3 matrix Nils From robert.kern at gmail.com Wed Feb 15 11:40:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 10:40:57 -0600 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F357F7.8070606@mecha.uni-stuttgart.de> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35642.9080803@gmail.com> <43F357F7.8070606@mecha.uni-stuttgart.de> Message-ID: <43F35999.4050809@gmail.com> Nils Wagner wrote: > Robert Kern wrote: > >>Nils Wagner wrote: >> >>>Hi all, >>> >>>I am converting a Matlab code into a numpy/scipy code. >>> >>>How can I translate >>> >>>H(k+1,:) = [ ] >>> >>>into numpy ? >> >>Describe what it does in Matlab. 
> > It removes the last row of the matrix. > > a = rand(4,3) > a(4,:) = [] > Now a is a 3 \times 3 matrix Nothing will change the dimensions like this inplace. Instead, use slices to get a new object. In [1]: a = rand(4,3) In [2]: a Out[2]: array([[ 0.09976441, 0.15777107, 0.06565412], [ 0.39381145, 0.66810943, 0.12464937], [ 0.16995962, 0.86150518, 0.67137479], [ 0.13825453, 0.22677347, 0.04551626]]) In [3]: a = a[:3,:] In [4]: a Out[4]: array([[ 0.09976441, 0.15777107, 0.06565412], [ 0.39381145, 0.66810943, 0.12464937], [ 0.16995962, 0.86150518, 0.67137479]]) -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Wed Feb 15 11:45:43 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 15 Feb 2006 17:45:43 +0100 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35999.4050809@gmail.com> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35642.9080803@gmail.com> <43F357F7.8070606@mecha.uni-stuttgart.de> <43F35999.4050809@gmail.com> Message-ID: <43F35AB7.6090901@mecha.uni-stuttgart.de> Robert Kern wrote: >Nils Wagner wrote: > >>Robert Kern wrote: >> >> >>>Nils Wagner wrote: >>> >>> >>>>Hi all, >>>> >>>>I am converting a Matlab code into a numpy/scipy code. >>>> >>>>How can I translate >>>> >>>>H(k+1,:) = [ ] >>>> >>>>into numpy ? >>>> >>>Describe what it does in Matlab. >>> >>It removes the last row of the matrix. >> >>a = rand(4,3) >>a(4,:) = [] >>Now a is a 3 \times 3 matrix >> > >Nothing will change the dimensions like this inplace. Instead, use slices to get >a new object. > >In [1]: a = rand(4,3) > >In [2]: a >Out[2]: >array([[ 0.09976441, 0.15777107, 0.06565412], > [ 0.39381145, 0.66810943, 0.12464937], > [ 0.16995962, 0.86150518, 0.67137479], > [ 0.13825453, 0.22677347, 0.04551626]]) > >In [3]: a = a[:3,:] > >In [4]: a >Out[4]: >array([[ 0.09976441, 0.15777107, 0.06565412], > [ 0.39381145, 0.66810943, 0.12464937], > [ 0.16995962, 0.86150518, 0.67137479]]) > > This might be straightforward if I want to remove the first or last row. a=a[1:,:] a=a[:3,:] but how can I efficiently remove the second or third row w.r.t. to your example ? Nils From robert.kern at gmail.com Wed Feb 15 11:52:25 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 10:52:25 -0600 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35AB7.6090901@mecha.uni-stuttgart.de> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35642.9080803@gmail.com> <43F357F7.8070606@mecha.uni-stuttgart.de> <43F35999.4050809@gmail.com> <43F35AB7.6090901@mecha.uni-stuttgart.de> Message-ID: <43F35C49.1050105@gmail.com> Nils Wagner wrote: > This might be straightforward if I want to remove the first or last row. > > a=a[1:,:] > a=a[:3,:] > > but how can I efficiently remove the second or third row w.r.t. to your > example ? Robert Cimrman gave you the answer. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From nwagner at mecha.uni-stuttgart.de Wed Feb 15 12:00:49 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 15 Feb 2006 18:00:49 +0100 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35C49.1050105@gmail.com> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35642.9080803@gmail.com> <43F357F7.8070606@mecha.uni-stuttgart.de> <43F35999.4050809@gmail.com> <43F35AB7.6090901@mecha.uni-stuttgart.de> <43F35C49.1050105@gmail.com> Message-ID: <43F35E41.9020102@mecha.uni-stuttgart.de> Robert Kern wrote: >Nils Wagner wrote: > > >>This might be straightforward if I want to remove the first or last row. >> >>a=a[1:,:] >>a=a[:3,:] >> >>but how can I efficiently remove the second or third row w.r.t. to your >>example ? >> > >Robert Cimrman gave you the answer. > > >>> a=rand(4,3) >>> a array([[ 0.91343331, 0.78947381, 0.39115252], [ 0.17682632, 0.75310694, 0.0081875 ], [ 0.82873565, 0.08136179, 0.46704486], [ 0.38479685, 0.54347342, 0.21294132]]) >>> a=a[(0,2,3),:] >>> a array([[ 0.91343331, 0.78947381, 0.39115252], [ 0.82873565, 0.08136179, 0.46704486], [ 0.38479685, 0.54347342, 0.21294132]]) But it might be easier to say what I would like to remove instead of what I want to keep ... Do you agree ? Nils From robert.kern at gmail.com Wed Feb 15 12:06:47 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 11:06:47 -0600 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35E41.9020102@mecha.uni-stuttgart.de> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35642.9080803@gmail.com> <43F357F7.8070606@mecha.uni-stuttgart.de> <43F35999.4050809@gmail.com> <43F35AB7.6090901@mecha.uni-stuttgart.de> <43F35C49.1050105@gmail.com> <43F35E41.9020102@mecha.uni-stuttgart.de> Message-ID: <43F35FA7.5080604@gmail.com> Nils Wagner wrote: > Robert Kern wrote: > >>Nils Wagner wrote: >> >>>This might be straightforward if I want to remove the first or last row. >>> >>>a=a[1:,:] >>>a=a[:3,:] >>> >>>but how can I efficiently remove the second or third row w.r.t. to your >>>example ? >>> >> >>Robert Cimrman gave you the answer. >> >>>>a=rand(4,3) >>>>a > > array([[ 0.91343331, 0.78947381, 0.39115252], > [ 0.17682632, 0.75310694, 0.0081875 ], > [ 0.82873565, 0.08136179, 0.46704486], > [ 0.38479685, 0.54347342, 0.21294132]]) > >>>>a=a[(0,2,3),:] >>>>a > > array([[ 0.91343331, 0.78947381, 0.39115252], > [ 0.82873565, 0.08136179, 0.46704486], > [ 0.38479685, 0.54347342, 0.21294132]]) > > But it might be easier to say what I would like to remove instead of > what I want to keep ... > > Do you agree ? Sure. Think about lists and the .remove() method. Then you can wrap it into a function to make it convenient. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ryanlists at gmail.com Wed Feb 15 13:04:26 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 15 Feb 2006 13:04:26 -0500 Subject: [SciPy-user] round off errors Message-ID: I am trying to understand the effects of round off error and floating point arithmetic on my research. I am reading through "What Every Computer Scientist Should Know About Floating-Point Arithmetic" and have some questions on my understanding of it and how it applies to SciPy/NumPy. My specific problem has to do with trying to find the roots of (sinh(b)+sin(b))**2-(sinh(b)-sin(b))**2 and other problematic expressions. 
(Actually it is more like the determinant of mat[0,0]=(a/b)*(sinh(b)+sin(b)) mat[0,1]=-a**2*d/(b**3)*(sinh(b)-sin(b)) mat[1,0]=b/d*(sinh(b)-sin(b)) mat[1,1]=-(a/b)*(sinh(b)-sin(b)) It turns out that with some symbolic manipulation, this can be rearranged to trying to find the roots of sinh(b)*sin(b) which is much better behaved. How will SciPy handle sinh(b)+ (or -) sin(b) when sinh(b) is very large and sin(b) is obviously between +/-1? As I understand it, the IEEE standard requires exact rounding for this operation which seems difficult and costly to me. What I am seeing is that when the determinant whose zeros I need to find is symbolically rearranged to sinh(b)*sin(b), the curve is smooth and has well defined places where it goes very near zero (I actually need to look for minima of the abs of this value in real problems because the results are complex valued). Note that this symbolic rearrangement might only be possible for simple systems. I am analyzing this simple system to get a handle on these errors. If I don't do the symbolic rearrangement, the determinant curve is very noisy for increasing b and the determinant might have a local minimum, but that minimum is not very close to zero. Is there a way to create a single precision number and do single precision math so that I can compare good and bad answers to each step in this problem (i.e. for a certain range of b, I could consider the double precision value to be "exact" and see what problems are arising because of round-off)? I am trying to justify a pragmatic approach to this problem. Assuming that the expression is too complicated to look at and pick out floating-point problems and rearrange it, I want to be able to say that I can trust the roots (actually minima) if the following are true: 1. You plot the function you are trying to find minima of and it doesn't look noisy over the range of input values you are concerned with 2. The value of the expression at the minima is actually fairly small 3. Numerical searching starting at points in a region around the minima all converge to the same minima (basically there is a sort of numerical domain of attraction) Does this seem valid? I am a mechanical engineer and neither a computer scientist nor a mathematician. So, I don't know if I can or want to prove this rigorously, but I want to be convinced and convince my thesis committee at least that it is trustworthy. My biggest fear is this: is there a case where floating-point precision problems would lead to well-behaved, stable, WRONG answers? Especially for cases involving trying to find small differences between large numbers like (a-b)**2-(a+b) for a>>>>b. Thanks for any thoughts on how to think about this, Ryan From robert.kern at gmail.com Wed Feb 15 13:38:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 12:38:39 -0600 Subject: [SciPy-user] round off errors In-Reply-To: References: Message-ID: <43F3752F.5080802@gmail.com> Ryan Krauss wrote: > How will SciPy handle sinh(b)+ (or -) sin(b) when sinh(b) is very > large and sin(b) is obviously between +/-1? As I understand it, the > IEEE standard requires exact rounding for this operation which seems > difficult and costly to me. Scipy doesn't really do anything special here. We just add/subtract the values in C. After that, it's the FPU's job. One thing to note is that the C functions that compute sinh() and sin() are usually defined in double precision only.
If you give them a single precision array as input, the function will upcast it to a double, do the computation in double precision, and downcast the answer to a single precision value again. Also, some processors may do addition and subtraction with 80-bit extended precision in the intermediate stages. Raymond Hettinger wrote a floating point simulator to help explore the silliness of floating point math. You might find it useful. http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/265894 One thing you might want to consider is breaking apart the sums into lists of values. You can then do various things to make the total sum behave better. Google "Kahan summation." You may also want to look into extended/unlimited precision libraries. The SAGE environment has bindings to lots of good stuff for this kind of work. http://sage.sourceforge.net/ -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From skip at pobox.com Wed Feb 15 14:32:39 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 15 Feb 2006 13:32:39 -0600 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? Message-ID: <17395.33239.806646.855808@montanaro.dyndns.org> I'm trying to build numpy an svn sandbox (just updated a couple minutes ago). If I grub around in numpy/distutils/system_info.py it says something about creating a site.cfg file with (for example) information about locating atlas. It says nothing about where this file belongs. I took a stab and placed it in my numpy source tree, right next to setup.py, with these lines: [atlas] library_dirs = /home/titan/skipm/src/ATLAS/lib/SunOS_Babe include_dirs = /home/titan/skipm/src/ATLAS/include/SunOS_Babe # for overriding the names of the atlas libraries atlas_libs = lapack, f77blas, cblas, atlas I then verified that those directories exist, contain apparently useful contents, and that I didn't have a conflicting environment variable named ATLAS: % ls /home/titan/skipm/src/ATLAS/lib/SunOS_Babe Make.inc libatlas.a libf77blas.a libtstatlas.a Makefile libcblas.a liblapack.a % ls /home/titan/skipm/src/ATLAS/include/SunOS_Babe atlas_buildinfo.h atlas_dmv.h atlas_smvT.h atlas_zr1.h atlas_cNCmm.h atlas_dmvN.h atlas_sr1.h atlas_zsysinfo.h atlas_cacheedge.h atlas_dmvS.h atlas_ssysinfo.h atlas_ztrsmXover.h atlas_cmv.h atlas_dmvT.h atlas_strsmXover.h cXover.h atlas_cmvN.h atlas_dr1.h atlas_trsmNB.h cmm.h atlas_cmvS.h atlas_dsysinfo.h atlas_type.h dXover.h atlas_cmvT.h atlas_dtrsmXover.h atlas_zNCmm.h dmm.h atlas_cr1.h atlas_sNCmm.h atlas_zmv.h sXover.h atlas_csysinfo.h atlas_smv.h atlas_zmvN.h smm.h atlas_ctrsmXover.h atlas_smvN.h atlas_zmvS.h zXover.h atlas_dNCmm.h atlas_smvS.h atlas_zmvT.h zmm.h % env | grep ATLAS % Then I tried "python setup.py build": Script started on Wed Feb 15 13:23:21 2006 babe:% python setup.py build Running from numpy source directory. No module named __svn_version__ F2PY Version 2_2110 blas_opt_info: blas_mkl_info: NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['ptf77blas', 'ptcblas', 'atlas'] library_dirs = ['/opt/lib'] language = c include_dirs = ['/opt/include'] running build_src building extension "atlas_version" sources adding 'build/src/atlas_version_0x4568501b.c' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'atlas_version' extension compiling C sources gcc options: '-fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' compile options: '-I/opt/include -Inumpy/core/include -I/opt/app/g++lib6/python-2.4/include/python2.4 -c' /opt/lang/gcc-3.4/bin/gcc -shared build/temp.solaris-2.8-i86pc-2.4/build/src/atlas_version_0x4568501b.o -L/opt/lib -lptf77blas -lptcblas -latlas -o build/temp.solaris-2.8-i86pc-2.4/atlas_version.so Text relocation remains referenced against symbol offset in file 0x7 /opt/lib/libatlas.a(ATL_buildinfo.o) 0xc /opt/lib/libatlas.a(ATL_buildinfo.o) 0x11 /opt/lib/libatlas.a(ATL_buildinfo.o) 0x16 /opt/lib/libatlas.a(ATL_buildinfo.o) How come distutils is ignoring my site.cfg file? Neither of these commands: python setup.py --help python setup.py build --help mention site.cfg. Failing all this, is there some way to build numpy/scipy without atlas? At this point I just want the damn thing to build. I'll worry about performance later (if at all). Thx, -- Skip Montanaro http://www.musi-cal.com/ skip at pobox.com From robert.kern at gmail.com Wed Feb 15 14:39:39 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 13:39:39 -0600 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? In-Reply-To: <17395.33239.806646.855808@montanaro.dyndns.org> References: <17395.33239.806646.855808@montanaro.dyndns.org> Message-ID: <43F3837B.5040603@gmail.com> skip at pobox.com wrote: > I'm trying to build numpy an svn sandbox (just updated a couple minutes > ago). If I grub around in numpy/distutils/system_info.py it says something > about creating a site.cfg file with (for example) information about locating > atlas. It says nothing about where this file belongs. Sure it does. "The file 'site.cfg' in the same directory as this module is read for configuration options." I think it's a really bad place for it to be, but that is the state of affairs right now. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant at ee.byu.edu Wed Feb 15 16:20:32 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 15 Feb 2006 14:20:32 -0700 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? In-Reply-To: <43F3837B.5040603@gmail.com> References: <17395.33239.806646.855808@montanaro.dyndns.org> <43F3837B.5040603@gmail.com> Message-ID: <43F39B20.50502@ee.byu.edu> Robert Kern wrote: >skip at pobox.com wrote: > > >>I'm trying to build numpy an svn sandbox (just updated a couple minutes >>ago). If I grub around in numpy/distutils/system_info.py it says something >>about creating a site.cfg file with (for example) information about locating >>atlas. It says nothing about where this file belongs. >> >> > >Sure it does. "The file 'site.cfg' in the same directory as this module is read >for configuration options." I think it's a really bad place for it to be, but >that is the state of affairs right now. > > > So, in particular, does this mean that it is read from (relative to the location of the main setup.py file) numpy/distutils/site.cfg ?? Yes, that is a bad place. We need some suggestions as to where site.cfg should be read from. I think you can set the environment variable ATLAS to 'None' and it will ignore ATLAS... I believe this is true of any of the configuration options. 
-Travis From oliphant at ee.byu.edu Wed Feb 15 16:26:46 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 15 Feb 2006 14:26:46 -0700 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? In-Reply-To: <17395.33239.806646.855808@montanaro.dyndns.org> References: <17395.33239.806646.855808@montanaro.dyndns.org> Message-ID: <43F39C96.6080506@ee.byu.edu> skip at pobox.com wrote: >I'm trying to build numpy an svn sandbox (just updated a couple minutes >ago). If I grub around in numpy/distutils/system_info.py it says something >about creating a site.cfg file with (for example) information about locating >atlas. It says nothing about where this file belongs. I took a stab and >placed it in my numpy source tree, right next to setup.py, with these lines: > >Failing all this, is there some way to build numpy/scipy without atlas? At >this point I just want the damn thing to build. I'll worry about >performance later (if at all). > > Yes. You need to set the appropriate environment variables to 'None' In particular, on my system (which is multithreaded and has a BLAS picked up from /usr/lib and an unthreaded ATLAS that the system will find) export PTATLAS='None' export ATLAS='None' export BLAS='None' did the trick. -Travis From ryanlists at gmail.com Wed Feb 15 17:27:14 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 15 Feb 2006 17:27:14 -0500 Subject: [SciPy-user] fft numerical precision In-Reply-To: <43A9B522.7020109@gmail.com> References: <200512211133.03969.dd55@cornell.edu> <43A984F9.4090809@gmail.com> <43A9B522.7020109@gmail.com> Message-ID: Sorry to bring this back up again (this thread has been dormant since 12/21/05), but I think this is the key road block between me and really understanding this problem (and I think it went over my head when it was originally posted). If double precision numbers use 52 bits for the mantissa, what does it mean to say that 2**-52 is the approximate fractional precision of double precision floats? How does this affect optimization routines like fzero and fmin so that their accuracy seems limited to 2**-52 when the smallest possible floating point number should be more like 2**-1074. Does this somehow relate to how floating point subtraction is done (as relates to my post from earlier today). Thanks again, Ryan ========================= Hi Ryan, Note that 1e-16 is the approximate _fractional_ precision of double precision floats. You can get absolute numbers much smaller than that. There are a bunch of good references to floating point accuracy etc. Some of them are linked here: http://babbage.cs.qc.edu/courses/cs341/IEEE-754references.html Note that the FFTW website lists FFT accuracies in both single and double precision here: http://www.fftw.org/accuracy/ Scott - Show quoted text - On Wednesday 21 December 2005 02:53 pm, Ryan Krauss wrote: > I have run into this as well. 1e-16 seems large to me for double > precision. Robert can you explain this a bit more please. Is there > an IEEE spec or something that specifies how much of the 64bits is > for exponent and how much is for the mantissa (I think that is the > right word)? I was playing with some FORTRAN code and it seems like > there was a big difference with complex vs. double complex. It seems > like 1e-16 was the magnitude floor of complex and double complex was > better. 
> > Ryan > > On 12/21/05, Robert Kern wrote: > > Darren Dale wrote: > > > I am running into problems related, I think, to numerical > > > precision of fast Fourier transforms. If I Fourier transform a > > > gaussian distribution: > > > > > > absolute(fft(stats.norm.pdf(linspace(-10,10,512), loc=0, > > > scale=1))) > > > > > > I find a floor of about 1e-16. Does anyone know of a way to > > > improve the precision? > > > > 1e-16 is the best you can do with double precision. > > > > -- > > Robert Kern > > robert.kern at gmail.com > > > > "In the fields of hell where the grass grows high > > Are the graves of dreams allowed to die." > > -- Richard Harter > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 On 12/21/05, Robert Kern wrote: > Ryan Krauss wrote: > > I have run into this as well. 1e-16 seems large to me for double > > precision. Robert can you explain this a bit more please. Is there > > an IEEE spec or something that specifies how much of the 64bits is for > > exponent and how much is for the mantissa (I think that is the right > > word)? I was playing with some FORTRAN code and it seems like there > > was a big difference with complex vs. double complex. It seems like > > 1e-16 was the magnitude floor of complex and double complex was > > better. > > IEEE-754 double precision has 11 bits for exponent, 1 bit for sign, and 52 for > the rest of the mantissa. 2.**-52 ~= 2.22e-16 > > Goldberg, David. What Every Computer Scientist Should Know About Floating Point > Arithmetic. 1991. > http://www.physics.ohio-state.edu/~dws/grouplinks/floating_point_math.pdf > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed Feb 15 17:45:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 16:45:27 -0600 Subject: [SciPy-user] fft numerical precision In-Reply-To: References: <200512211133.03969.dd55@cornell.edu> <43A984F9.4090809@gmail.com> <43A9B522.7020109@gmail.com> Message-ID: <43F3AF07.2000901@gmail.com> Ryan Krauss wrote: > Sorry to bring this back up again (this thread has been dormant since > 12/21/05), but I think this is the key road block between me and > really understanding this problem (and I think it went over my head > when it was originally posted). If double precision numbers use 52 > bits for the mantissa, what does it mean to say that 2**-52 is the > approximate fractional precision of double precision floats? How does > this affect optimization routines like fzero and fmin so that their > accuracy seems limited to 2**-52 when the smallest possible floating > point number should be more like 2**-1074. Does this somehow relate > to how floating point subtraction is done (as relates to my post from > earlier today). 
I don't have time right now to go into a lecture on condition numbers and other such stuff, but let give you an example to chew on and a good reference: In [23]: a = rand(1024) In [24]: ifft(fft(a)) Out[24]: array([ 0.27261246 -4.33680869e-17j, 0.08520686 -1.04083409e-17j, 0.66654776 +6.60887690e-17j, ..., 0.05895311 +8.37004077e-17j, 0.40454606 -2.34796237e-17j, 0.38051768 +8.63024929e-17j]) In [25]: a Out[25]: array([ 0.27261246, 0.08520686, 0.66654776, ..., 0.05895311, 0.40454606, 0.38051768]) In [26]: a *= 1e16 In [27]: ifft(fft(a)) Out[27]: array([ 2.72612457e+15+0.21875j , 8.52068614e+14-0.359375j , 6.66547756e+15-0.11579423j, ..., 5.89531079e+14+1.16015625j, 4.04546057e+15+0.7886458j , 3.80517684e+15+0.71875j ]) The first volume of _Numerical Computation_ by Christoph W. Ueberhuber (_Computer-Numerik 1_ in the original German) covers these topics quite well and extensively. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From skip at pobox.com Wed Feb 15 20:28:11 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 15 Feb 2006 19:28:11 -0600 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? In-Reply-To: <43F3837B.5040603@gmail.com> References: <17395.33239.806646.855808@montanaro.dyndns.org> <43F3837B.5040603@gmail.com> Message-ID: <17395.54571.182806.525979@montanaro.dyndns.org> >> It says nothing about where this file belongs. Robert> Sure it does. "The file 'site.cfg' in the same directory as this Robert> module is read for configuration options." Thanks. My post was made partly in frustration. Kinda hard to see straight when your blood is boiling... Any reason for site.cfg to not be searched for in the current directory? It seems that might allow better multi-platform builds (e.g., I'm working on Solaris 10 now, but will eventually build on at least one variety of Linux). Skip From robert.kern at gmail.com Wed Feb 15 20:33:26 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 19:33:26 -0600 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? In-Reply-To: <17395.54571.182806.525979@montanaro.dyndns.org> References: <17395.33239.806646.855808@montanaro.dyndns.org> <43F3837B.5040603@gmail.com> <17395.54571.182806.525979@montanaro.dyndns.org> Message-ID: <43F3D666.1040905@gmail.com> skip at pobox.com wrote: > >> It says nothing about where this file belongs. > > Robert> Sure it does. "The file 'site.cfg' in the same directory as this > Robert> module is read for configuration options." > > Thanks. My post was made partly in frustration. Kinda hard to see straight > when your blood is boiling... Any reason for site.cfg to not be searched > for in the current directory? It seems that might allow better > multi-platform builds (e.g., I'm working on Solaris 10 now, but will > eventually build on at least one variety of Linux). Try the latest SVN. http://projects.scipy.org/scipy/numpy/changeset/2112 -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From skip at pobox.com Wed Feb 15 20:38:28 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 15 Feb 2006 19:38:28 -0600 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? 
In-Reply-To: <43F39B20.50502@ee.byu.edu> References: <17395.33239.806646.855808@montanaro.dyndns.org> <43F3837B.5040603@gmail.com> <43F39B20.50502@ee.byu.edu> Message-ID: <17395.55188.130052.595605@montanaro.dyndns.org> Travis> Yes, that is a bad place. We need some suggestions as to where Travis> site.cfg should be read from. First place to look should be `pwd`. Travis> I think you can set the environment variable ATLAS to 'None' and Travis> it will ignore ATLAS... Thank you, thank you, thank you. I now have numpy built... I'll tackle the rest of scipy ma?ana. Skip From skip at pobox.com Wed Feb 15 20:42:46 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 15 Feb 2006 19:42:46 -0600 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? In-Reply-To: <43F3D666.1040905@gmail.com> References: <17395.33239.806646.855808@montanaro.dyndns.org> <43F3837B.5040603@gmail.com> <17395.54571.182806.525979@montanaro.dyndns.org> <43F3D666.1040905@gmail.com> Message-ID: <17395.55446.333396.992138@montanaro.dyndns.org> Robert> Try the latest SVN. Robert> http://projects.scipy.org/scipy/numpy/changeset/2112 Thanks. Next time I'm in a masochistic mood and want to build with atlas I'll give it a whirl. While I think of it, is there any reason that include_dirs in site.cfg is ignored? Skip From robert.kern at gmail.com Wed Feb 15 20:54:32 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Feb 2006 19:54:32 -0600 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? In-Reply-To: <17395.55446.333396.992138@montanaro.dyndns.org> References: <17395.33239.806646.855808@montanaro.dyndns.org> <43F3837B.5040603@gmail.com> <17395.54571.182806.525979@montanaro.dyndns.org> <43F3D666.1040905@gmail.com> <17395.55446.333396.992138@montanaro.dyndns.org> Message-ID: <43F3DB58.3080602@gmail.com> skip at pobox.com wrote: > Robert> Try the latest SVN. > > Robert> http://projects.scipy.org/scipy/numpy/changeset/2112 > > Thanks. Next time I'm in a masochistic mood and want to build with atlas > I'll give it a whirl. While I think of it, is there any reason that > include_dirs in site.cfg is ignored? It shouldn't be. Exactly what is failing for you? -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From strawman at astraw.com Wed Feb 15 23:24:32 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 15 Feb 2006 20:24:32 -0800 Subject: [SciPy-user] Where do I put site.cfg? What should it contain? Can I build w/o atlas? In-Reply-To: <17395.55446.333396.992138@montanaro.dyndns.org> References: <17395.33239.806646.855808@montanaro.dyndns.org> <43F3837B.5040603@gmail.com> <17395.54571.182806.525979@montanaro.dyndns.org> <43F3D666.1040905@gmail.com> <17395.55446.333396.992138@montanaro.dyndns.org> Message-ID: <43F3FE80.3020500@astraw.com> skip at pobox.com wrote: > Robert> Try the latest SVN. > > Robert> http://projects.scipy.org/scipy/numpy/changeset/2112 > >Thanks. Next time I'm in a masochistic mood and want to build with atlas >I'll give it a whirl. While I think of it, is there any reason that >include_dirs in site.cfg is ignored? > > I haven't personally figured out how/if site.cfg is related to setup.cfg, but in setup.cfg, there was an issue that was fixed just yesterday regarding include_dirs: http://projects.scipy.org/scipy/numpy/ticket/4 (I apparently gave the ticket a wrong title referring to site.cfg -- I mean setup.cfg.) 
I wonder if that fixes your issue or if there's another one... From stephen.walton at csun.edu Wed Feb 15 23:57:37 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Wed, 15 Feb 2006 20:57:37 -0800 Subject: [SciPy-user] fft numerical precision In-Reply-To: <43F3AF07.2000901@gmail.com> References: <200512211133.03969.dd55@cornell.edu> <43A984F9.4090809@gmail.com> <43A9B522.7020109@gmail.com> <43F3AF07.2000901@gmail.com> Message-ID: <43F40641.70404@csun.edu> Robert Kern wrote: >Ryan Krauss wrote: > > >>Sorry to bring this back up again (this thread has been dormant since >>12/21/05), but I think this is the key road block between me and >>really understanding this problem (and I think it went over my head >>when it was originally posted). If double precision numbers use 52 >>bits for the mantissa, what does it mean to say that 2**-52 is the >>approximate fractional precision of double precision floats? >> >I don't have time right now to go into a lecture on condition numbers and other >such stuff, but let give you an example to chew on and a good reference: > > If I can jump in and show something even simpler: In [1]: x,sum=0.1,0 In [2]: for m in range(10): ...: sum = sum + x ...: In [3]: sum Out[3]: 0.99999999999999989 In [4]: sum-1 Out[4]: -1.1102230246251565e-16 As for books, I've always liked Kahaner, Moler and Nash. Steve From nwagner at mecha.uni-stuttgart.de Thu Feb 16 03:14:42 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Feb 2006 09:14:42 +0100 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F35FA7.5080604@gmail.com> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35642.9080803@gmail.com> <43F357F7.8070606@mecha.uni-stuttgart.de> <43F35999.4050809@gmail.com> <43F35AB7.6090901@mecha.uni-stuttgart.de> <43F35C49.1050105@gmail.com> <43F35E41.9020102@mecha.uni-stuttgart.de> <43F35FA7.5080604@gmail.com> Message-ID: <43F43472.9050008@mecha.uni-stuttgart.de> Robert Kern wrote: >Nils Wagner wrote: > >>Robert Kern wrote: >> >> >>>Nils Wagner wrote: >>> >>> >>>>This might be straightforward if I want to remove the first or last row. >>>> >>>>a=a[1:,:] >>>>a=a[:3,:] >>>> >>>>but how can I efficiently remove the second or third row w.r.t. to your >>>>example ? >>>> >>>> >>>Robert Cimrman gave you the answer. >>> >>> >>>>>a=rand(4,3) >>>>>a >>>>> >>array([[ 0.91343331, 0.78947381, 0.39115252], >> [ 0.17682632, 0.75310694, 0.0081875 ], >> [ 0.82873565, 0.08136179, 0.46704486], >> [ 0.38479685, 0.54347342, 0.21294132]]) >> >> >>>>>a=a[(0,2,3),:] >>>>>a >>>>> >>array([[ 0.91343331, 0.78947381, 0.39115252], >> [ 0.82873565, 0.08136179, 0.46704486], >> [ 0.38479685, 0.54347342, 0.21294132]]) >> >>But it might be easier to say what I would like to remove instead of >>what I want to keep ... >> >>Do you agree ? >> > >Sure. Think about lists and the .remove() method. Then you can wrap it into a >function to make it convenient. > > It's a one-liner in Matlab. A[i,:] = [] Is numpy competitive w.r.t. this task ;-) ? 
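A minimal sketch of the "wrap it into a function" suggestion from earlier in this thread (the helper name delete_row is only an illustration, not a numpy or scipy function; it builds the list of row indices to keep and then uses the same fancy indexing shown above):

import numpy

def delete_row(a, i):
    # Return a copy of the 2-D array `a` with row `i` removed.
    keep = list(range(a.shape[0]))   # all row indices
    keep.remove(i)                   # say which row to drop; keep the rest
    return a[keep]                   # fancy indexing along axis 0 returns a copy

a = numpy.array([[1., 2., 3.],
                 [4., 5., 6.],
                 [7., 8., 9.],
                 [10., 11., 12.]])
print(delete_row(a, 2))              # drops the third row, like a(3,:) = [] in Matlab

Unlike the Matlab idiom, this returns a new array instead of resizing in place, so you reassign: a = delete_row(a, 2).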
Nils From robert.kern at gmail.com Thu Feb 16 03:20:22 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 16 Feb 2006 02:20:22 -0600 Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F43472.9050008@mecha.uni-stuttgart.de> References: <43F35541.2060008@mecha.uni-stuttgart.de> <43F35642.9080803@gmail.com> <43F357F7.8070606@mecha.uni-stuttgart.de> <43F35999.4050809@gmail.com> <43F35AB7.6090901@mecha.uni-stuttgart.de> <43F35C49.1050105@gmail.com> <43F35E41.9020102@mecha.uni-stuttgart.de> <43F35FA7.5080604@gmail.com> <43F43472.9050008@mecha.uni-stuttgart.de> Message-ID: <43F435C6.9060806@gmail.com> Nils Wagner wrote: > It's a one-liner in Matlab. > A[i,:] = [] > Is numpy competitive w.r.t. this task ;-) ? Who cares? I don't. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From josegomez at gmx.net Thu Feb 16 05:08:17 2006 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Thu, 16 Feb 2006 11:08:17 +0100 (MET) Subject: [SciPy-user] Installation on cygwin Message-ID: <22932.1140084497@www012.gmx.net> Hi! I am trying to install Scipy (latest tarball from scipy.org) on cygwin (updated to today). As I understand it, I need to first install Numpy, and then Scipy. I can't install numpy (the ufuncs directory has a myriad of undefined symbols), and I see on the mailing list that a few other people hit the same problem. Has anyone successfully compiled numpy+scipy on cygwin? Does it make sense to use an older version of either? Many thanks! jose -- DSL-Aktion wegen gro?er Nachfrage bis 28.2.2006 verl?ngert: GMX DSL-Flatrate 1 Jahr kostenlos* http://www.gmx.net/de/go/dsl From bldrake at adaptcs.com Thu Feb 16 07:46:20 2006 From: bldrake at adaptcs.com (Barry Drake) Date: Thu, 16 Feb 2006 04:46:20 -0800 (PST) Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <43F435C6.9060806@gmail.com> Message-ID: <20060216124620.6371.qmail@web205.biz.mail.re2.yahoo.com> Why doesn't Nils Wagner post to the list so that we can all benefit from the full conversation? Regards. Barry Drake --- Robert Kern wrote: > Nils Wagner wrote: > > > It's a one-liner in Matlab. > > A[i,:] = [] > > Is numpy competitive w.r.t. this task ;-) ? > > Who cares? I don't. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From arnd.baecker at web.de Thu Feb 16 08:00:41 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 16 Feb 2006 14:00:41 +0100 (CET) Subject: [SciPy-user] Numpy for Matlab users In-Reply-To: <20060216124620.6371.qmail@web205.biz.mail.re2.yahoo.com> References: <20060216124620.6371.qmail@web205.biz.mail.re2.yahoo.com> Message-ID: Hi Barry, On Thu, 16 Feb 2006, Barry Drake wrote: > Why doesn't Nils Wagner post to the list so that we > can all benefit from the full conversation? > > Regards. > Barry Drake > > --- Robert Kern wrote: > > > Nils Wagner wrote: > > > > > It's a one-liner in Matlab. > > > A[i,:] = [] > > > Is numpy competitive w.r.t. this task ;-) ? If you refer to this statement, this was posted http://www.scipy.net/pipermail/scipy-user/2006-February/007039.html I think I did not miss any of his messages. 
Best, Arnd From nwagner at mecha.uni-stuttgart.de Thu Feb 16 08:04:14 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Feb 2006 14:04:14 +0100 Subject: [SciPy-user] Dense matrices versus sparse matrices Message-ID: <43F4784E.5000701@mecha.uni-stuttgart.de> Hi all, I am going to solve l a r g e Sylvester equations A X + X B = C and I should use sparse matrices instead of dense matrices. However, I started with a dense version (ggsl-dense.py) and the results are very promising. But if I use the sparse version (ggsl-sparse.py) the results are different (floating point numbers differ in the last 7 digits). I guess that something went wrong during the transition process. Who can help me with this task ? The codes are available upon request. :-) Cheers, Nils From ryanlists at gmail.com Thu Feb 16 08:41:14 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 16 Feb 2006 08:41:14 -0500 Subject: [SciPy-user] fft numerical precision In-Reply-To: <43F40641.70404@csun.edu> References: <200512211133.03969.dd55@cornell.edu> <43A984F9.4090809@gmail.com> <43A9B522.7020109@gmail.com> <43F3AF07.2000901@gmail.com> <43F40641.70404@csun.edu> Message-ID: Thanks, I am feeling much better about this. I actually found something like this on the internet that is really helpful to me: In [1]: m=1.0 In [2]: e=m In [3]: while m+e>m: ...: e=e/2.0 ...: In [4]: e Out[4]: 1.1102230246251565e-16 In [5]: m=1e-4 In [6]: e=m In [8]: while m+e>m: ...: e=e/2.0 ...: In [9]: e Out[9]: 5.551115123125783e-21 In [10]: m*(2.0**-52) Out[10]: 2.2204460492503132e-20 Ryan On 2/15/06, Stephen Walton wrote: > Robert Kern wrote: > > >Ryan Krauss wrote: > > > > > >>Sorry to bring this back up again (this thread has been dormant since > >>12/21/05), but I think this is the key road block between me and > >>really understanding this problem (and I think it went over my head > >>when it was originally posted). If double precision numbers use 52 > >>bits for the mantissa, what does it mean to say that 2**-52 is the > >>approximate fractional precision of double precision floats? > >> > >I don't have time right now to go into a lecture on condition numbers and other > >such stuff, but let give you an example to chew on and a good reference: > > > > > If I can jump in and show something even simpler: > > In [1]: x,sum=0.1,0 > > In [2]: for m in range(10): > ...: sum = sum + x > ...: > > In [3]: sum > Out[3]: 0.99999999999999989 > > In [4]: sum-1 > Out[4]: -1.1102230246251565e-16 > > As for books, I've always liked Kahaner, Moler and Nash. > > Steve > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From oliphant.travis at ieee.org Thu Feb 16 09:26:09 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 16 Feb 2006 07:26:09 -0700 Subject: [SciPy-user] Installation on cygwin In-Reply-To: <22932.1140084497@www012.gmx.net> References: <22932.1140084497@www012.gmx.net> Message-ID: <43F48B81.3070801@ieee.org> Jose Luis Gomez Dans wrote: >Hi! >I am trying to install Scipy (latest tarball from scipy.org) on cygwin >(updated to today). As I understand it, I need to first install Numpy, and >then Scipy. I can't install numpy (the ufuncs directory has a myriad of >undefined symbols), and I see on the mailing list that a few other people >hit the same problem. > > Yes, you need to to install NumPy first. SciPy also needs a functional LAPACK and BLAS installed (or use ATLAS). 
I have not heard of any successful CYGWIN installation. But that doesn't mean it's not possible --- nobody has yet contributed the details necessary to get it to work. I think we just need to get a couple of auto-configuration things fixed. 1) What are the errors you are getting? 2) There is going to be some need to define the appropriate functions in ufuncobject.h to handle the IEEE error-setting correctly. -Travis From oliphant.travis at ieee.org Thu Feb 16 09:29:19 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 16 Feb 2006 07:29:19 -0700 Subject: [SciPy-user] fft numerical precision In-Reply-To: References: <200512211133.03969.dd55@cornell.edu> <43A984F9.4090809@gmail.com> <43A9B522.7020109@gmail.com> <43F3AF07.2000901@gmail.com> <43F40641.70404@csun.edu> Message-ID: <43F48C3F.3010809@ieee.org> Ryan Krauss wrote: >Thanks, I am feeling much better about this. > >I actually found something like this on the internet that is really >helpful to me: >In [1]: m=1.0 >In [2]: e=m >In [3]: while m+e>m: > ...: e=e/2.0 > ...: > >In [4]: e >Out[4]: 1.1102230246251565e-16 > >In [5]: m=1e-4 >In [6]: e=m > >In [8]: while m+e>m: > ...: e=e/2.0 > ...: > >In [9]: e >Out[9]: 5.551115123125783e-21 > >In [10]: m*(2.0**-52) >Out[10]: 2.2204460492503132e-20 > >Ryan > > There is some code in numpy/lib/machar.py that you might find useful in this regard. -Travis From nwagner at mecha.uni-stuttgart.de Thu Feb 16 11:04:30 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Feb 2006 17:04:30 +0100 Subject: [SciPy-user] Trace of a sparse matrix ? Message-ID: <43F4A28E.4000302@mecha.uni-stuttgart.de> Hi all, Is it possible to compute the trace of a sparse matrix say A without detouring spur=trace(A.todense()) ? Nils From arnd.baecker at web.de Thu Feb 16 11:12:28 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 16 Feb 2006 17:12:28 +0100 (CET) Subject: [SciPy-user] Trace of a sparse matrix ? In-Reply-To: <43F4A28E.4000302@mecha.uni-stuttgart.de> References: <43F4A28E.4000302@mecha.uni-stuttgart.de> Message-ID: On Thu, 16 Feb 2006, Nils Wagner wrote: > Hi all, > > Is it possible to compute the trace of a sparse matrix say A > without detouring > > spur=trace(A.todense()) ? what about summing up all diagonal elements which exist in A? Arnd From nwagner at mecha.uni-stuttgart.de Thu Feb 16 11:43:08 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Feb 2006 17:43:08 +0100 Subject: [SciPy-user] Trace of a sparse matrix ? In-Reply-To: References: <43F4A28E.4000302@mecha.uni-stuttgart.de> Message-ID: <43F4AB9C.4060101@mecha.uni-stuttgart.de> Arnd Baecker wrote: >On Thu, 16 Feb 2006, Nils Wagner wrote: > > >>Hi all, >> >>Is it possible to compute the trace of a sparse matrix say A >>without detouring >> >>spur=trace(A.todense()) ? >> > >what about summing up all diagonal elements which exist in A? > and are not equal to zero. It would be nice to have a built-in function "sptrace(A)" for this task within scipy :-) The same holds for the norm of a sparse matrix spnorm(A) linalg.norm(A.todense()) is not optimal or even impossible if A is really large. Nils >Arnd > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > From travis at enthought.com Thu Feb 16 14:17:08 2006 From: travis at enthought.com (Travis N. 
Vaught) Date: Thu, 16 Feb 2006 13:17:08 -0600 Subject: [SciPy-user] ANN: Python Enthought Edition Version 0.9.2 Released Message-ID: <43F4CFB4.1080305@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 0.9.2 (http://code.enthought.com/enthon/) -- a python distribution for Windows. This is a kitchen-sink-included Python distribution including the following packages/tools out of the box: Numeric 24.2 SciPy 0.3.3 IPython 0.6.15 Enthought Tool Suite 1.0.2 wxPython 2.6.1.0 PIL 1.1.4 mingw 20030504-1 f2py 2.45.241_1926 MayaVi 1.5 Scientific Python 2.4.5 VTK 4.4 and many more... 0.9.2 Release Notes Summary --------------------------- Version 0.9.2 of Python Enthought Edition is the first to include the Enthought Tool Suite Package (http://code.enthought.com/ets/). Other changes include upgrading to Numeric 24.2, including MayaVi 1.5 (rather than 1.3) and removing a standalone PyCrust package in favor of the one included with wxPython. Also, elementtree and celementtree have been added to the distribution. Notably, this release is still based on Python 2.3.5 and still includes SciPy 0.3.3. You'll also notice that we have changed the version numbering to a major.minor.point format (from a build number format). see full release notes at: http://code.enthought.com/release/changelog-enthon0.9.2.shtml Best, Travis N. Vaught From oliphant.travis at ieee.org Thu Feb 16 19:10:34 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 16 Feb 2006 17:10:34 -0700 Subject: [SciPy-user] Release of NumPy 0.9.5 Message-ID: <43F5147A.3020202@ieee.org> I'm pleased to announce the release of NumPy 0.9.5 The release notes and download site can be found at http://www.scipy.org Best regards, -Travis Oliphant From cookedm at physics.mcmaster.ca Thu Feb 16 20:30:36 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 16 Feb 2006 20:30:36 -0500 Subject: [SciPy-user] fft numerical precision In-Reply-To: <43F3AF07.2000901@gmail.com> (Robert Kern's message of "Wed, 15 Feb 2006 16:45:27 -0600") References: <200512211133.03969.dd55@cornell.edu> <43A984F9.4090809@gmail.com> <43A9B522.7020109@gmail.com> <43F3AF07.2000901@gmail.com> Message-ID: Robert Kern writes: > Ryan Krauss wrote: >> Sorry to bring this back up again (this thread has been dormant since >> 12/21/05), but I think this is the key road block between me and >> really understanding this problem (and I think it went over my head >> when it was originally posted). If double precision numbers use 52 >> bits for the mantissa, what does it mean to say that 2**-52 is the >> approximate fractional precision of double precision floats? How does >> this affect optimization routines like fzero and fmin so that their >> accuracy seems limited to 2**-52 when the smallest possible floating >> point number should be more like 2**-1074. Does this somehow relate >> to how floating point subtraction is done (as relates to my post from >> earlier today). > > I don't have time right now to go into a lecture on condition numbers and other > such stuff, but let give you an example to chew on and a good reference: > [snip example] > > The first volume of _Numerical Computation_ by Christoph W. Ueberhuber > (_Computer-Numerik 1_ in the original German) covers these topics quite well and > extensively. Another reference: I've been working through _Accuracy and Stability of Numerical Algorithms_ (2nd ed.), by Nicholas J. Higham, which I'm finding to be pretty good. 
Ryan mentioned that he was working through "What Every Computer Scientist Should Know About Floating Point" in another thread, which is good place to start. Looking at the stuff on William Kahan's site at http://www.cs.berkeley.edu/~wkahan/ can give you a good idea of what problems you can run into with floating point. Basically, when doing numerical computations, there are two levels of analysis: numerical analysis, where you're concerned about which algorithm to use, how stable it is, etc., and then the nitty-gritty things about floating point. For the last bit, you mainly worry about if rounding errors, etc. are big enough to worry about :) (In my experience, most people don't do that bit, but they're usually ok, except when they're not.) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From ryanlists at gmail.com Thu Feb 16 23:33:41 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 16 Feb 2006 23:33:41 -0500 Subject: [SciPy-user] casting complex numbers Message-ID: Is there a way to make python return a complex number number instead of nan when taking the sqrt of a negative float? I have an automatically generated function based on Maxima symbolic output that is giving bad output because of this. Basically, part of the symoblic expression includes sqrt(-s**2*x) where s is complex and this expression would evaluate nicely in python. For whatever reason, Maxima prefers to output this expression as sqrt(-x)*s, which would also evaluate correctly if sqrt(-x) gave 1.0j*sqrt(x), but since x is a float it is returning nan. I am hoping for flag similar to from __future__ import division instead of having to declare all my variables as complex. Thanks, Ryan From oliphant.travis at ieee.org Fri Feb 17 01:55:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 16 Feb 2006 23:55:33 -0700 Subject: [SciPy-user] casting complex numbers In-Reply-To: References: Message-ID: <43F57365.70405@ieee.org> Ryan Krauss wrote: >Is there a way to make python return a complex number number instead >of nan when taking the sqrt of a negative float? I have an >automatically generated function based on Maxima symbolic output that >is giving bad output because of this. Basically, part of the symoblic >expression includes sqrt(-s**2*x) where s is complex and this >expression would evaluate nicely in python. For whatever reason, >Maxima prefers to output this expression as sqrt(-x)*s, which would >also evaluate correctly if sqrt(-x) gave 1.0j*sqrt(x), but since x is >a float it is returning nan. I am hoping for flag similar to from >__future__ import division instead of having to declare all my >variables as complex. > > > import numpy.lib.scimath as nls nls.sqrt(-1) -Travis From hgamboa at gmail.com Fri Feb 17 07:15:33 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Fri, 17 Feb 2006 12:15:33 +0000 Subject: [SciPy-user] Cluster of workstation Message-ID: <86522b1a0602170415o403a1a78i6a71c3a63624cfc1@mail.gmail.com> Hi to all, I'm starting to need a small cluster to divide some of my computing tasks. What is the best direction to follow? pyMPI, cow.py, seppo with pyro? I've started to look at cow.py that presented a very simple approach, aparently well integrated with scipy, but I falied to start the cluster. Is cow abandoned? Thanks for your support. 
Hugo Gamboa From ryanlists at gmail.com Fri Feb 17 09:33:20 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 17 Feb 2006 09:33:20 -0500 Subject: [SciPy-user] casting complex numbers In-Reply-To: <43F57365.70405@ieee.org> References: <43F57365.70405@ieee.org> Message-ID: Perfect. Thanks Travis. On 2/17/06, Travis Oliphant wrote: > Ryan Krauss wrote: > > >Is there a way to make python return a complex number number instead > >of nan when taking the sqrt of a negative float? I have an > >automatically generated function based on Maxima symbolic output that > >is giving bad output because of this. Basically, part of the symoblic > >expression includes sqrt(-s**2*x) where s is complex and this > >expression would evaluate nicely in python. For whatever reason, > >Maxima prefers to output this expression as sqrt(-x)*s, which would > >also evaluate correctly if sqrt(-x) gave 1.0j*sqrt(x), but since x is > >a float it is returning nan. I am hoping for flag similar to from > >__future__ import division instead of having to declare all my > >variables as complex. > > > > > > > import numpy.lib.scimath as nls > > nls.sqrt(-1) > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From tom.denniston at alum.dartmouth.org Fri Feb 17 09:34:03 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Fri, 17 Feb 2006 08:34:03 -0600 Subject: [SciPy-user] Cluster of workstation In-Reply-To: <86522b1a0602170415o403a1a78i6a71c3a63624cfc1@mail.gmail.com> References: <86522b1a0602170415o403a1a78i6a71c3a63624cfc1@mail.gmail.com> Message-ID: try linda. http://www-users.cs.york.ac.uk/~aw/pylinda/ On 2/17/06, Hugo Gamboa wrote: > > Hi to all, > > I'm starting to need a small cluster to divide some of my computing tasks. > > What is the best direction to follow? pyMPI, cow.py, seppo with pyro? > > I've started to look at cow.py that presented a very simple approach, > aparently well integrated with scipy, but I falied to start the > cluster. > > Is cow abandoned? > > Thanks for your support. > > Hugo Gamboa > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Fri Feb 17 09:41:04 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 17 Feb 2006 09:41:04 -0500 Subject: [SciPy-user] casting complex numbers In-Reply-To: References: <43F57365.70405@ieee.org> Message-ID: Actually, I may have a small problem in that nls.sqrt cannot take a vector input. On 2/17/06, Ryan Krauss wrote: > Perfect. Thanks Travis. > > On 2/17/06, Travis Oliphant wrote: > > Ryan Krauss wrote: > > > > >Is there a way to make python return a complex number number instead > > >of nan when taking the sqrt of a negative float? I have an > > >automatically generated function based on Maxima symbolic output that > > >is giving bad output because of this. Basically, part of the symoblic > > >expression includes sqrt(-s**2*x) where s is complex and this > > >expression would evaluate nicely in python. For whatever reason, > > >Maxima prefers to output this expression as sqrt(-x)*s, which would > > >also evaluate correctly if sqrt(-x) gave 1.0j*sqrt(x), but since x is > > >a float it is returning nan. 
I am hoping for flag similar to from > > >__future__ import division instead of having to declare all my > > >variables as complex. > > > > > > > > > > > import numpy.lib.scimath as nls > > > > nls.sqrt(-1) > > > > -Travis > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > From travis at enthought.com Fri Feb 17 11:45:17 2006 From: travis at enthought.com (Travis N. Vaught) Date: Fri, 17 Feb 2006 10:45:17 -0600 Subject: [SciPy-user] [Numpy-discussion] ANN: Python Enthought Edition Version 0.9.2 Released In-Reply-To: References: <43F4CFB4.1080305@enthought.com> Message-ID: <43F5FD9D.2090706@enthought.com> Arnd Baecker wrote: > On Thu, 16 Feb 2006, Travis N. Vaught wrote: > > >> Enthought is pleased to announce the release of Python Enthought Edition >> Version 0.9.2 (http://code.enthought.com/enthon/) -- a python >> distribution for Windows. This is a kitchen-sink-included Python >> distribution including the following packages/tools out of the box: >> >> Numeric 24.2 >> SciPy 0.3.3 >> IPython 0.6.15 >> Enthought Tool Suite 1.0.2 >> wxPython 2.6.1.0 >> PIL 1.1.4 >> mingw 20030504-1 >> f2py 2.45.241_1926 >> MayaVi 1.5 >> Scientific Python 2.4.5 >> VTK 4.4 >> and many more... >> > > Brilliant - many thanks for the effort! > > I was just about to ask for the plans about numpy/scipy, > but the changelog at > http://code.enthought.com/release/changelog-enthon0.9.2.shtml > shows quite a bit of activity in this direction! > > Do you have an estimate about when a numpy/scipy version > of the Enthought Edition might happen? > > Many thanks, > > Arnd > It's a bit difficult to say with much accuracy, so I'll be transparent but imprecise. Our release of Enthon versions typically tracks the state of the platform we are using for the custom software development we do to pay the bills. Thus, our current project code typically has to be ported to build and run on a cobbled-together build of the newer versions before we do a release. I realize this is a drag on the release schedule for Enthon, but it's how we allocate resources to the builds. Enough excuses, though--we are working on the migration of our project code now (Pearu Peterson) and I expect in weeks (rather than months) we'll have an Enthon release candidate with Python 2.4.2, and the latest SciPy and NumPy on Windows. Robert Kern is already working on a project that is based on this tool chain, so the wedge is in place. Thanks for the interest! (and sorry for the cross-post) Travis From bgranger at scu.edu Fri Feb 17 13:00:33 2006 From: bgranger at scu.edu (Brian Granger) Date: Fri, 17 Feb 2006 10:00:33 -0800 Subject: [SciPy-user] Cluster of workstation In-Reply-To: References: <86522b1a0602170415o403a1a78i6a71c3a63624cfc1@mail.gmail.com> Message-ID: Tom, I don't think that cow has been maintained in a while. I tried to get it running over a year ago and had no luck. The closest thing to cow right now are the parallel capabilities in IPython. There are a couple of presentations about this on the IPython homepage: http://ipython.scipy.org/ The parallel parts of IPython (which we are calling the "IPython Kernel") are still under heavy development. But the current version does work very well. However, the kernel stuff is not packaged with the standard IPython. We have a separate development branch where the parallel stuff is living. 
You can check it out from our subversion repository: svn co http://ipython.scipy.org/svn/ipython/ipython/branches/chainsaw ipython1 We are calling this ipython1, as the kernel will eventually become the core of ipython for the version 1.0 release. You can find install details and documentation in the INSTALL and README files. It does require Zope interface, Twisted and the current release of IPython, but those details are given in the INSTALL file. Sorry there is no web presence for this stuff yet. We are completely focused on development right now. Cheers, Brian On Feb 17, 2006, at 6:34 AM, Tom Denniston wrote: > try linda. http://www-users.cs.york.ac.uk/~aw/pylinda/ > > > > On 2/17/06, Hugo Gamboa wrote: Hi to all, > > I'm starting to need a small cluster to divide some of my computing > tasks. > > What is the best direction to follow? pyMPI, cow.py, seppo with pyro? > > I've started to look at cow.py that presented a very simple approach, > aparently well integrated with scipy, but I falied to start the > cluster. > > Is cow abandoned? > > Thanks for your support. > > Hugo Gamboa > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From stephen.walton at csun.edu Fri Feb 17 18:46:11 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Fri, 17 Feb 2006 15:46:11 -0800 Subject: [SciPy-user] fft numerical precision In-Reply-To: References: <200512211133.03969.dd55@cornell.edu> <43A984F9.4090809@gmail.com> <43A9B522.7020109@gmail.com> <43F3AF07.2000901@gmail.com> Message-ID: <43F66043.8060600@csun.edu> David M. Cooke wrote: > Basically, when doing numerical computations, there are two levels of > >analysis: numerical analysis, where you're concerned about which >algorithm to use, how stable it is, etc., and then the nitty-gritty >things about floating point. For the last bit, you mainly worry about >if rounding errors, etc. are big enough to worry about :) (In my >experience, most people don't do that bit, but they're usually ok, >except when they're not.) > > > Which, to bring the discussion back to SciPy, it is such an excellent idea that the basic algorithms in it are from the well tested NETLIB and related routine libraries. The people who wrote those did worry about rounding errors and the like, and the code is well crafted. Don't re-invent the wheel and all that. From ckkart at hoc.net Fri Feb 17 22:47:15 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 18 Feb 2006 12:47:15 +0900 Subject: [SciPy-user] casting complex numbers In-Reply-To: References: <43F57365.70405@ieee.org> Message-ID: <43F698C3.6010808@hoc.net> Ryan Krauss wrote: > Actually, I may have a small problem in that nls.sqrt cannot take a > vector input. > You could try this: sqrt = vectorize(nls.sqrt) sqrt([-1,3]) array([ 0. +1.j, 1.73205081+0.j]) Christian From ckkart at hoc.net Sat Feb 18 00:53:56 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sat, 18 Feb 2006 14:53:56 +0900 Subject: [SciPy-user] ANN: Python Enthought Edition Version 0.9.2 Released In-Reply-To: <43F4CFB4.1080305@enthought.com> References: <43F4CFB4.1080305@enthought.com> Message-ID: <43F6B674.4010402@hoc.net> Travis N. 
Vaught wrote: > Enthought is pleased to announce the release of Python Enthought Edition > Version 0.9.2 (http://code.enthought.com/enthon/) -- a python > distribution for Windows. This is a kitchen-sink-included Python > distribution including the following packages/tools out of the box: > > Numeric 24.2 > SciPy 0.3.3 > IPython 0.6.15 > Enthought Tool Suite 1.0.2 > wxPython 2.6.1.0 > PIL 1.1.4 > mingw 20030504-1 > f2py 2.45.241_1926 > MayaVi 1.5 > Scientific Python 2.4.5 > VTK 4.4 > and many more... Very nice package, however I can't get mayavi running correctly on XP. It looks like it is mayavi2 rather than 1.5, right? I tried to load heart.vtk and got an error message telling that I had to create a VTK scene first, although I did create one before. I noticed that the usual pipeline tree does not show up and there are some console error messages which I forgot to record, sorry. Any ideas? Regards, Christian From oliphant.travis at ieee.org Sat Feb 18 06:52:54 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 18 Feb 2006 04:52:54 -0700 Subject: [SciPy-user] casting complex numbers In-Reply-To: References: <43F57365.70405@ieee.org> Message-ID: <43F70A96.6090708@ieee.org> Ryan Krauss wrote: >Actually, I may have a small problem in that nls.sqrt cannot take a >vector input. > > It originally did when it was in scipy_base. I'm not sure why it changed. I'll look in to it. It's supposed to take an array. -Travis From stefan at sun.ac.za Sat Feb 18 08:53:35 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 18 Feb 2006 15:53:35 +0200 Subject: [SciPy-user] Cluster of workstation In-Reply-To: References: <86522b1a0602170415o403a1a78i6a71c3a63624cfc1@mail.gmail.com> Message-ID: <20060218135335.GA12042@alpha> Also take a look at the MPI wrappings at http://www.penzilla.net/mmpi/ Cheers St?fan On Fri, Feb 17, 2006 at 10:00:33AM -0800, Brian Granger wrote: > Tom, > > I don't think that cow has been maintained in a while. I tried to > get it running over a year ago and had no luck. The closest thing to > cow right now are the parallel capabilities in IPython. There are a > couple of presentations about this on the IPython homepage: > > http://ipython.scipy.org/ From ckkart at hoc.net Sat Feb 18 10:36:02 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sun, 19 Feb 2006 00:36:02 +0900 Subject: [SciPy-user] numpy weave problems resolved Message-ID: <43F73EE2.6040504@hoc.net> Hi, some weeks ago I reported problems with the new numpy/weave when using the automatic type conversion. I finally boiled it down to some few lines which demonstrate the problem: import scipy.weave as weave from scipy.weave import converters import numpy code = """ double c = a2(ind(2),0); """ ind = numpy.array([0,3,2]) #ind = numpy.array([0,3,2],'i') a2 = numpy.arange(9) a2 = numpy.reshape(a2,(3,3)) weave.inline(code, ['a2','ind'], type_converters=converters.blitz) I found out that you cannot use the elements of an integer array to index another array if the former is not explicitly set to dtype='i'. Thus, if you uncomment the second array intialization above, the code compiles and runs. Once compiled, however, you can remove the dtype argument again. Apparently the types are compatible during runtime, but the compiler is checking compatibility more strictly. I'd prefer a relaxed type checking, if possible. 
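Until the type checking is relaxed, a low-effort workaround (only a guess, not tested against that weave revision) is to hand inline an index array that is already of C int type, either by constructing it with the 'i' typecode as above or by casting just before the call:

ind = numpy.array([0, 3, 2]).astype('i')   # force a C 'int' array so the blitz indexing compiles
weave.inline(code, ['a2', 'ind'], type_converters=converters.blitz)

Once the extension has been compiled for the 'i' case the cached version is reused, which matches the observation that the dtype argument can be dropped again afterwards.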
Regards, Christian From ryanlists at gmail.com Sat Feb 18 11:40:04 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 18 Feb 2006 11:40:04 -0500 Subject: [SciPy-user] RREF - upper triangular Message-ID: Is there a scipy function to find the upper triangular form of a matrix? I need to find the null space or null spaces of a matrix and have messed around with SVD in the past, but it seems like doing row reduction on something like [A]x=[0] has worked well. I have written my own RREF algorithm in Python, but it seems like I have re-invented the wheel and I would prefer to use something someone else already wrote who is smarter about round-off error and I would also prefer something in FORTRAN of C for speed. It seems like linalg.solve almost does what I want except I would need it to return the matrix I assume it is calculating instead of or along with the solution. Thanks, Ryan From prabhu_r at users.sf.net Sat Feb 18 13:05:25 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Sat, 18 Feb 2006 23:35:25 +0530 Subject: [SciPy-user] ANN: Python Enthought Edition Version 0.9.2 Released In-Reply-To: <43F6B674.4010402@hoc.net> References: <43F4CFB4.1080305@enthought.com> <43F6B674.4010402@hoc.net> Message-ID: <17399.25061.572106.473705@monster.iitb.ac.in> >>>>> "Christian" == Christian Kristukat writes: Christian> Travis N. Vaught wrote: >> Enthought is pleased to announce the release of Python >> Enthought Edition Version 0.9.2 [...] Christian> Very nice package, however I can't get mayavi running Christian> correctly on XP. It looks like it is mayavi2 rather Christian> than 1.5, right? I tried to load heart.vtk and got an Christian> error message telling that I had to create a VTK scene Christian> first, although I did create one before. I noticed that Christian> the usual pipeline tree does not show up and there are Christian> some console error messages which I forgot to record, Christian> sorry. Any ideas? Right, there are some nasty problems with 0.9.2 and mayavi. MayaVi-1.5 is not installed at all and the installed version of mayavi2 seems to have pretty bad problems. If you don't mind tracking ETS via SVN you can just delete (or rename) Lib/site-packages/enthought and separately get the Enthought package from SVN http://svn.enthought.com/svn/enthought/trunk. This is described here: http://www.enthought.com/enthought/wiki/GrabbingAndBuilding and also here: http://www.scipy.org/ArndBaecker/MayaVi2 HTH. cheers, prabhu From travis at enthought.com Sat Feb 18 13:41:01 2006 From: travis at enthought.com (Travis N. Vaught) Date: Sat, 18 Feb 2006 12:41:01 -0600 Subject: [SciPy-user] ANN: Python Enthought Edition Version 0.9.2 Released In-Reply-To: <43F6B674.4010402@hoc.net> References: <43F4CFB4.1080305@enthought.com> <43F6B674.4010402@hoc.net> Message-ID: <43F76A3D.1070001@enthought.com> Christian Kristukat wrote: > Travis N. Vaught wrote: > >> Enthought is pleased to announce the release of Python Enthought Edition >> Version 0.9.2 (http://code.enthought.com/enthon/) -- a python >> distribution for Windows. This is a kitchen-sink-included Python >> distribution including the following packages/tools out of the box: >> >> Numeric 24.2 >> SciPy 0.3.3 >> IPython 0.6.15 >> Enthought Tool Suite 1.0.2 >> wxPython 2.6.1.0 >> PIL 1.1.4 >> mingw 20030504-1 >> f2py 2.45.241_1926 >> MayaVi 1.5 >> Scientific Python 2.4.5 >> VTK 4.4 >> and many more... >> > > Very nice package, however I can't get mayavi running correctly on XP. It looks > like it is mayavi2 rather than 1.5, right? 
I tried to load heart.vtk and got an > error message telling that I had to create a VTK scene first, although I did > create one before. I noticed that the usual pipeline tree does not show up and > there are some console error messages which I forgot to record, sorry. > Any ideas? > ... I just launched mayavi.py from the scripts directory and you're right--it is MayaVi 2. Looking around a bit, I don't see 1.5 anywhere. We must have dropped 1.5 in the last build |:( I'll get it fixed for the next release--many apologies. Travis From travis at enthought.com Sat Feb 18 13:44:01 2006 From: travis at enthought.com (Travis N. Vaught) Date: Sat, 18 Feb 2006 12:44:01 -0600 Subject: [SciPy-user] ANN: Python Enthought Edition Version 0.9.2 Released In-Reply-To: <17399.25061.572106.473705@monster.iitb.ac.in> References: <43F4CFB4.1080305@enthought.com> <43F6B674.4010402@hoc.net> <17399.25061.572106.473705@monster.iitb.ac.in> Message-ID: <43F76AF1.5020703@enthought.com> Yep, I'm seeing problems with MayaVi2 as well...we'll try to turn around a build in about a week and a half (Bryce is off skiing this coming week). Prabhu Ramachandran wrote: >>>>>> "Christian" == Christian Kristukat writes: >>>>>> > > Christian> Travis N. Vaught wrote: > >> Enthought is pleased to announce the release of Python > >> Enthought Edition Version 0.9.2 > [...] > Christian> Very nice package, however I can't get mayavi running > Christian> correctly on XP. It looks like it is mayavi2 rather > Christian> than 1.5, right? I tried to load heart.vtk and got an > Christian> error message telling that I had to create a VTK scene > Christian> first, although I did create one before. I noticed that > Christian> the usual pipeline tree does not show up and there are > Christian> some console error messages which I forgot to record, > Christian> sorry. Any ideas? > > Right, there are some nasty problems with 0.9.2 and mayavi. > MayaVi-1.5 is not installed at all and the installed version of > mayavi2 seems to have pretty bad problems. If you don't mind tracking > ETS via SVN you can just delete (or rename) > Lib/site-packages/enthought and separately get the Enthought package > from SVN http://svn.enthought.com/svn/enthought/trunk. This is > described here: > > http://www.enthought.com/enthought/wiki/GrabbingAndBuilding > > and also here: > > http://www.scipy.org/ArndBaecker/MayaVi2 > > HTH. > cheers, > prabhu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- ........................ Travis N. Vaught CEO Enthought, Inc. http://www.enthought.com ........................ From oliphant.travis at ieee.org Sat Feb 18 16:24:15 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 18 Feb 2006 14:24:15 -0700 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: References: Message-ID: <43F7907F.90909@ieee.org> Ryan Krauss wrote: >Is there a scipy function to find the upper triangular form of a >matrix? > Are you trying to find a LU decomposition? Or perhaps a QR decomposition? U is upper triangular and so is R in those decompositions. From your later description of row-reduction it sounds like you are trying to perform an LU decomposition. 
scipy.linalg.lu If you are trying to use LU decomposition to solve systems of equations, then you should try scipy.linalg.lu_factor --- find decomposition scipy.linalg.lu_solve -- use a decomposition to solve for a particular RHS (right-hand-side) -Travis From robert.kern at gmail.com Sat Feb 18 16:49:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 18 Feb 2006 15:49:57 -0600 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: References: Message-ID: <43F79685.7050606@gmail.com> Ryan Krauss wrote: > Is there a scipy function to find the upper triangular form of a > matrix? I need to find the null space or null spaces of a matrix and > have messed around with SVD in the past, but it seems like doing row > reduction on something like [A]x=[0] has worked well. I have written > my own RREF algorithm in Python, but it seems like I have re-invented > the wheel and I would prefer to use something someone else already > wrote who is smarter about round-off error and I would also prefer > something in FORTRAN of C for speed. It seems like linalg.solve > almost does what I want except I would need it to return the matrix I > assume it is calculating instead of or along with the solution. Well, there's rank-revealing QR, which does essentially what you want. The appropriate LAPACK functions are xGEQP3. It is not currently wrapped in scipy.linalg, but it should be fairly easy to add. The LU and (regular) QR decompositions won't help you because they don't do any ordering to demonstrate the rank. RRQR will order R, the upper-triangular matrix, such that R[rank(A):,:] are all 0. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Mon Feb 20 04:30:12 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Feb 2006 10:30:12 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy Message-ID: <43F98C24.4010004@mecha.uni-stuttgart.de> Hi all, I need some advice on operations between dense and sparse matrix "objects". Which operations are possible in scipy ? Assume that A and B are sparse while C and D are dense matrices of appropriate dimension. dense sparse dot(C,D) A*B linalg.norm(C) linalg.norm(C.todense()) linalg.kron(C,D) ? identity(n), eye(n) How about interactions of sparse and dense matrices - multiplication of a sparse matrix A with a dense matrix C - Kronecker product of sparse matrices ? - Kronecker product of a dense and a sparse matrix ? - sparse matrix functions expm(A),... ? - sparse eigensolver - to be continued Can I use different sparse formats like csc and csr w.r.t. to multiplication, addition,... And what is the output of A = io.mmread("A.mtx") ? I mean sparse or dense. Any comments ? Nils From cimrman3 at ntc.zcu.cz Mon Feb 20 09:11:39 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 20 Feb 2006 15:11:39 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43F98C24.4010004@mecha.uni-stuttgart.de> References: <43F98C24.4010004@mecha.uni-stuttgart.de> Message-ID: <43F9CE1B.8060903@ntc.zcu.cz> Nils Wagner wrote: > dense sparse > > dot(C,D) A*B > linalg.norm(C) linalg.norm(C.todense()) > linalg.kron(C,D) ? > identity(n), eye(n) Hi Nils, I have just added speye, spidentity to the scipy.sparse (using spdiags). You can test them in svn. r. 
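Assuming the two new functions are exported from scipy.sparse under exactly those names (an untested sketch against that svn revision), usage would look like:

from scipy import sparse

I5 = sparse.spidentity(5)        # 5x5 identity in CSC format
E  = sparse.speye(5, 7, k=1)     # 5x7 matrix with ones on the first superdiagonal
print I5.todense()
print E.todense()

Both are thin wrappers around the spdiags routine mentioned above, which is why they come back in CSC format.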
From nwagner at mecha.uni-stuttgart.de Mon Feb 20 09:15:06 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Feb 2006 15:15:06 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43F9CE1B.8060903@ntc.zcu.cz> References: <43F98C24.4010004@mecha.uni-stuttgart.de> <43F9CE1B.8060903@ntc.zcu.cz> Message-ID: <43F9CEEA.7020004@mecha.uni-stuttgart.de> Robert Cimrman wrote: >Nils Wagner wrote: > > >>dense sparse >> >>dot(C,D) A*B >>linalg.norm(C) linalg.norm(C.todense()) >>linalg.kron(C,D) ? >>identity(n), eye(n) >> > >Hi Nils, >I have just added speye, spidentity to the scipy.sparse (using spdiags). >You can test them in svn. > >r. > > Hi Robert, Great ! Thank you very much ! :-) Nils >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > From nwagner at mecha.uni-stuttgart.de Mon Feb 20 09:30:38 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 20 Feb 2006 15:30:38 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43F9CEEA.7020004@mecha.uni-stuttgart.de> References: <43F98C24.4010004@mecha.uni-stuttgart.de> <43F9CE1B.8060903@ntc.zcu.cz> <43F9CEEA.7020004@mecha.uni-stuttgart.de> Message-ID: <43F9D28E.2030203@mecha.uni-stuttgart.de> Nils Wagner wrote: >Robert Cimrman wrote: > >>Nils Wagner wrote: >> >> >> >>>dense sparse >>> >>>dot(C,D) A*B >>>linalg.norm(C) linalg.norm(C.todense()) >>>linalg.kron(C,D) ? >>>identity(n), eye(n) >>> >>> >>Hi Nils, >>I have just added speye, spidentity to the scipy.sparse (using spdiags). >>You can test them in svn. >> >>r. >> >> >> >Hi Robert, > >Great ! Thank you very much ! :-) > >Nils > > >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> > > > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Hi Robert, Is there a special reason why you have used the CSC format ? speye(n, m=None, k=0, dtype='d') speye( n, m ) returns a (n, m) matrix tored in CSC sparse matrix format, where the k-th diagonal is all ones, and everything else is zeros. It should be stored instead of tored ;-) spidentity(n, dtype='d') spidentity( n ) returns the identity matrix of shape (n, n) stored in CSC sparse matrix format. Thank you again. Nils From cimrman3 at ntc.zcu.cz Mon Feb 20 09:48:15 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 20 Feb 2006 15:48:15 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43F9D28E.2030203@mecha.uni-stuttgart.de> References: <43F98C24.4010004@mecha.uni-stuttgart.de> <43F9CE1B.8060903@ntc.zcu.cz> <43F9CEEA.7020004@mecha.uni-stuttgart.de> <43F9D28E.2030203@mecha.uni-stuttgart.de> Message-ID: <43F9D6AF.60401@ntc.zcu.cz> > Hi Robert, > > Is there a special reason why you have used the CSC format ? yes, because these functions are simple wrappers of the 'spdiags' fortran function which returns the CSC matrix. > speye(n, m=None, k=0, dtype='d') > speye( n, m ) returns a (n, m) matrix tored > in CSC sparse matrix format, where the k-th diagonal is all ones, > and everything else is zeros. > > It should be stored instead of tored ;-) fixed, thanks. r. 
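Coming back to the RREF/null-space question earlier in the thread: with what is already wrapped, the usual recipe is the SVD route Ryan mentioned. A rough sketch follows; the function name, the eps tolerance and the variable names are all made up here, and eps should really be scaled to the problem at hand.

import numpy
from scipy import linalg

def null_space(A, eps=1e-12):
    # Rows of Vh whose singular values are (numerically) zero span the
    # null space of A.  Pad s so it lines up with the rows of Vh when A
    # has more columns than rows.
    u, s, vh = linalg.svd(A)
    padded = numpy.zeros(vh.shape[0], float)
    padded[:len(s)] = s
    mask = padded <= eps
    ns = numpy.compress(mask, vh, axis=0)
    return numpy.transpose(numpy.conjugate(ns))   # columns span the null space

This avoids writing an RREF by hand and leaves the rounding-error worries to LAPACK, at the price of an O(n^3) decomposition.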
From gnata at obs.univ-lyon1.fr Mon Feb 20 11:54:37 2006 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Mon, 20 Feb 2006 17:54:37 +0100 Subject: [SciPy-user] scipy cvs : bug in fftconvolve ? Message-ID: <43F9F44D.1010207@obs.univ-lyon1.fr> Hi, (If it is a repost, please just ignore). Current scipy CVS complains about a "NameError: global name 'fftn' is not defined" calling scipy.signal.fftconvolve(I,K) where I and K are 2d numpy arrays. Looking at signaltools.py, you can see that we only import the following fft related stuff: from scipy.fftpack import fft, ifft, ifftshift, fft2, ifft2 Looks like fftn is just missing. Anyway, the code is buggy because fftn is not defined anywhere. BTW, is fftconvole the best way to perform a 2D fft-based convolution with scipy? (If not, naming conventions are not very intuitive ;)) Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 58 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From schofield at ftw.at Mon Feb 20 12:23:10 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 20 Feb 2006 18:23:10 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43F98C24.4010004@mecha.uni-stuttgart.de> References: <43F98C24.4010004@mecha.uni-stuttgart.de> Message-ID: <43F9FAFE.10408@ftw.at> Nils Wagner wrote: >I need some advice on operations between dense and sparse matrix "objects". >Which operations are possible in scipy ? > > > >- multiplication of a sparse matrix A with a dense matrix C >- Kronecker product of sparse matrices ? >- Kronecker product of a dense and a sparse matrix ? >- sparse matrix functions expm(A),... ? >- sparse eigensolver > > I don't know about all of these, but currently multiplication works between arrays by sparse matrices (and should be efficient) and you can achieve exponentiation for integer powers by repeated multiplication. But currently multiplication a * b does NOT work if 'a' is a dense *matrix* and 'b' is sparse. I'd like to fix this, but it's not easy, and we'd need to discuss how. >Can I use different sparse formats like csc and csr w.r.t. to >multiplication, addition,... > > Yes, it should work for any two formats, but one matrix will usually be converted internally to the same format as the other, so it's more efficient to use the same format unless you know what you're doing. >And what is the output of A = io.mmread("A.mtx") ? I mean sparse or dense. > > It should be the same format that it was stored in. -- Ed From nodwell at physics.ubc.ca Mon Feb 20 12:29:21 2006 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Mon, 20 Feb 2006 09:29:21 -0800 Subject: [SciPy-user] Fwd: io.read_array and columns of strings In-Reply-To: References: Message-ID: Hi, I notice that io.read_array (scipy version 0.4.4, numpy version 0.9.4) does not seem to support reading a column of strings. For example: >>> A = io.read_array ('r1683.log.layers', atype=[(string,10), Float]) Traceback (most recent call last): File "", line 1, in ? 
File "/sw/lib/python2.4/site-packages/scipy/io/array_import.py", line 358, in read_array if not atype[k] in typecodes: TypeError: 'in ' requires string as left operand Perhaps this is intended behaviour, although standard numpy arrays do support string types, for example A = array (('one', 'two'), (string,10)) Of course, it's easy to work around this by reading the data with a loop instead of read_array. So this is "wish-list" kind of item for convenience and consistency. Any thoughts? thanks, Eric From schofield at ftw.at Mon Feb 20 12:29:27 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 20 Feb 2006 18:29:27 +0100 Subject: [SciPy-user] ANN: SciPy 0.4.6 released Message-ID: <43F9FC77.6060209@ftw.at> =========================== SciPy 0.4.6 Scientific tools for Python =========================== I'm pleased to announce the release of SciPy 0.4.6. This release adds support for the latest NumPy (version 0.9.5), features a new maximum entropy package, improved spline evaluation, and various bug fixes for signaltools, sparse matrices, weave, conjugate gradients, and more. It is available for download from http://www.scipy.org/Download as a source tarball for Linux/Solaris/OS X/BSD/Windows (64-bit and 32-bit) and as an executable installer for Win32. More information on SciPy is available at http://www.scipy.org/ =========================== SciPy is an Open Source library of scientific tools for Python. It contains a variety of high-level science and engineering modules, including modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, genetic algorithms, ODE solvers, special functions, and more. From erickt at dslextreme.com Mon Feb 20 17:04:28 2006 From: erickt at dslextreme.com (Erick Tryzelaar) Date: Mon, 20 Feb 2006 14:04:28 -0800 Subject: [SciPy-user] problem with test check_algebraic_log_weight in 0.4.6? Message-ID: <43FA3CEC.6040605@dslextreme.com> I just tried scipy.test on the just-released 0.4.6, and it ends up exiting my interactive python with this message: >>> scipy.test(verbosity=5) ... [snip] ... check_algebraic_log_weight (scipy.integrate.quadpack.test_quadpack.test_quad) Adjust D1MACH by uncommenting data statements appropriate for your machine. STOP 779 Any idea what is happening? I'm running on mac os x, if that helps. Thanks, -e From david.huard at gmail.com Mon Feb 20 17:09:31 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 20 Feb 2006 17:09:31 -0500 Subject: [SciPy-user] mtrand no longer in scipy.stats Message-ID: <91cf711d0602201409m5295e5dfq@mail.gmail.com> Hi, I just updated from svn to the new release and I can't import mtrand anymore from scipy.stats. Does someone know where it went ? I simply want to set the seed before generating random values using the rvs method. Thanks, David From robert.kern at gmail.com Mon Feb 20 17:16:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Feb 2006 16:16:43 -0600 Subject: [SciPy-user] mtrand no longer in scipy.stats In-Reply-To: <91cf711d0602201409m5295e5dfq@mail.gmail.com> References: <91cf711d0602201409m5295e5dfq@mail.gmail.com> Message-ID: <43FA3FCB.9070307@gmail.com> David Huard wrote: > Hi, > > I just updated from svn to the new release and I can't import mtrand > anymore from scipy.stats. Does someone know where it went ? I simply > want to set the seed before generating random values using the rvs > method. In [10]: numpy.random.seed? 
Type: builtin_function_or_method Base Class: String Form: Namespace: Interactive Docstring: Seed the generator. seed(seed=None) seed can be an integer, an array (or other sequence) of integers of any length, or None. If seed is None, then RandomState will try to read data from /dev/urandom (or the Windows analogue) if available or seed from the clock otherwise. It should not be exposed in scipy.stats. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From fonnesbeck at gmail.com Mon Feb 20 18:15:31 2006 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Mon, 20 Feb 2006 18:15:31 -0500 Subject: [SciPy-user] ANN: SciPy 0.4.6 released In-Reply-To: <43F9FC77.6060209@ftw.at> References: <43F9FC77.6060209@ftw.at> Message-ID: <723eb6930602201515u3bb92415ue3d1eeb5a5498498@mail.gmail.com> On 2/20/06, Ed Schofield wrote: > > It is available for download from > > http://www.scipy.org/Download > > as a source tarball for Linux/Solaris/OS X/BSD/Windows (64-bit and > 32-bit) and as an executable installer for Win32. I have added binary installers for Mac OS X 10.4 as well. C. -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us From david.huard at gmail.com Mon Feb 20 19:33:07 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 20 Feb 2006 19:33:07 -0500 Subject: [SciPy-user] mtrand no longer in scipy.stats In-Reply-To: <43FA3FCB.9070307@gmail.com> References: <91cf711d0602201409m5295e5dfq@mail.gmail.com> <43FA3FCB.9070307@gmail.com> Message-ID: <91cf711d0602201633g458cd85cy@mail.gmail.com> Thanks, David 2006/2/20, Robert Kern : > David Huard wrote: > > Hi, > > > > I just updated from svn to the new release and I can't import mtrand > > anymore from scipy.stats. Does someone know where it went ? I simply > > want to set the seed before generating random values using the rvs > > method. > > In [10]: numpy.random.seed? > Type: builtin_function_or_method > Base Class: > String Form: > Namespace: Interactive > Docstring: > Seed the generator. > > seed(seed=None) > > seed can be an integer, an array (or other sequence) of integers of any > length, or None. If seed is None, then RandomState will try to read data > from /dev/urandom (or the Windows analogue) if available or seed from > the clock otherwise. > > It should not be exposed in scipy.stats. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From wbaxter at gmail.com Mon Feb 20 20:59:30 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 21 Feb 2006 10:59:30 +0900 Subject: [SciPy-user] Right way to import scipy Message-ID: I posted this on the numpy list, but it seems more a scipy issue. If you do this: import numpy as N import scipy as S Then S.linalg looks to be identical to N.linalg. In particular it doesn't have things like lu, qr, gmres, etc. If you subsequently do 'from scipy.linalg import lu', then suddenly S.linalglooks different from N.linalg, and it has not just lu, but all the other scipy linalg goodies too. Is this a bug? 
I would like to add info about getting at these functions to the Scipy for Matlab Users wiki page, but I'd like to know first if A) this is the expected behavior, or just a bug and B) if these import statements are really the best way to get at scipy.linalg.lu et al. import numpy as N import scipy as S from scipy.linalg import lu (This is with Numpy 0.9.5 and Scipy 0.4.6. With numpy 0.9.4/scipy 0.4.4 the above imports make scipy.linalg.lu et al accessible, but they throw exceptions trying to access ndarray.dtypechar when you try to use them.) Thanks, Bill Baxter -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastien.dementen at alumni.insead.edu Mon Feb 20 21:39:47 2006 From: sebastien.dementen at alumni.insead.edu (DE MENTEN Sebastien) Date: Tue, 21 Feb 2006 10:39:47 +0800 Subject: [SciPy-user] low-discrepancy sequences Message-ID: <521836FA9A33D147A2C81D0B95480CA70186188A@MEDUSA3.SGP.insead.intra> Hi all, Has anyone worked on low-discrepancy sequences in Scipy? Sebastien -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Feb 20 22:35:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Feb 2006 21:35:46 -0600 Subject: [SciPy-user] Right way to import scipy In-Reply-To: References: Message-ID: <43FA8A92.8000800@gmail.com> Bill Baxter wrote: > I posted this on the numpy list, but it seems more a scipy issue. > > If you do this: > > import numpy as N > import scipy as S > > Then S.linalg looks to be identical to N.linalg. In particular it > doesn't have things like lu, qr, gmres, etc. > > If you subsequently do 'from scipy.linalg import lu', then suddenly > S.linalg looks different from N.linalg, and it has not just lu, but all > the other scipy linalg goodies too. > > Is this a bug? I would like to add info about getting at these > functions to the Scipy for Matlab Users wiki page, but I'd like to know > first if > A) this is the expected behavior, or just a bug and It's expected if you know what's going on, but not really desirable. scipy/__init__.py does a "from numpy import *" which is pulling in the linalg package. I'm pretty sure we don't want that. In fact, we already have a "del lib" statement to clear that conflicting module, so we should just add a "del linalg" statement, too. > B) if these import statements are really the best way to get at > scipy.linalg.lu et al. > import numpy as N > import scipy as S > from scipy.linalg import lu Yup. Or just: from scipy.linalg import lu lu(A) or even: from scipy import linalg linalg.lu(A) -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From wbaxter at gmail.com Mon Feb 20 23:25:18 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 21 Feb 2006 13:25:18 +0900 Subject: [SciPy-user] Right way to import scipy In-Reply-To: <43FA8A92.8000800@gmail.com> References: <43FA8A92.8000800@gmail.com> Message-ID: On 2/21/06, Robert Kern wrote: > > Bill Baxter wrote: > > Is this a bug? I would like to add info about getting at these > > functions to the Scipy for Matlab Users wiki page, but I'd like to know > > first if > > A) this is the expected behavior, or just a bug and > > It's expected if you know what's going on, but not really desirable. > scipy/__init__.py does a "from numpy import *" which is pulling in the > linalg > package. I'm pretty sure we don't want that. 
In fact, we already have a > "del > lib" statement to clear that conflicting module, so we should just add a > "del > linalg" statement, too. Thanks for the explanation. > B) if these import statements are really the best way to get at > > scipy.linalg.lu et al. > > import numpy as N > > import scipy as S > > from scipy.linalg import lu > > Yup. Or just: > > from scipy.linalg import lu > lu(A) > > or even: > > from scipy import linalg > linalg.lu(A) That last one doesn't work: [starting from a clean environment[ >>> from scipy import linalg >>> linalg.lu Traceback (most recent call last): File "", line 1, in ? AttributeError: 'module' object has no attribute 'lu' >>> from scipy import version >>> version.version '0.4.6' --Bill Baxter -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Feb 20 23:36:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 20 Feb 2006 22:36:41 -0600 Subject: [SciPy-user] Right way to import scipy In-Reply-To: References: <43FA8A92.8000800@gmail.com> Message-ID: <43FA98D9.3090403@gmail.com> Bill Baxter wrote: > That last one doesn't work: > > [starting from a clean environment[ >>>> from scipy import linalg >>>> linalg.lu > Traceback (most recent call last): > File "", line 1, in ? > AttributeError: 'module' object has no attribute 'lu' >>>> from scipy import version >>>> version.version > '0.4.6' Okay, it *will* work with the latest SVN. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From strawman at astraw.com Tue Feb 21 00:06:16 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 20 Feb 2006 21:06:16 -0800 Subject: [SciPy-user] Right way to import scipy In-Reply-To: <43FA8A92.8000800@gmail.com> References: <43FA8A92.8000800@gmail.com> Message-ID: <43FA9FC8.6030709@astraw.com> Robert Kern wrote: >Bill Baxter wrote: > > >>I posted this on the numpy list, but it seems more a scipy issue. >> >>If you do this: >> >>import numpy as N >>import scipy as S >> >>Then S.linalg looks to be identical to N.linalg. In particular it >>doesn't have things like lu, qr, gmres, etc. >> >>If you subsequently do 'from scipy.linalg import lu', then suddenly >>S.linalg looks different from N.linalg, and it has not just lu, but all >>the other scipy linalg goodies too. >> >>Is this a bug? I would like to add info about getting at these >>functions to the Scipy for Matlab Users wiki page, but I'd like to know >>first if >>A) this is the expected behavior, or just a bug and >> >> > >It's expected if you know what's going on, but not really desirable. >scipy/__init__.py does a "from numpy import *" which is pulling in the linalg >package. I'm pretty sure we don't want that. In fact, we already have a "del >lib" statement to clear that conflicting module, so we should just add a "del >linalg" statement, too. > > +1 I was bitten by this one today... Just as a point of reference to explain how: I was looking for the fastest SVD code and figured if it what was in scipy was different than numpy, I'd use the scipy version which would presumably be using optimizations that couldn't be in numpy. A quick numpy.linalg.svd == scipy.linalg.svd led me to believe scipy had nothing on numpy and simply used the numpy version. Perhaps I'm right about the equivalence of scipy and numpy's SVD functions (and perhaps I'm wrong), but it's clear to me now that my "test" didn't test anything and thus I'll have to test again. 
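A quick way to see which linalg you have actually ended up with, since the names can shadow each other as described above (a small sketch; the printed paths are only illustrative):

>>> import numpy.linalg
>>> from scipy import linalg as slinalg
>>> numpy.linalg.__file__    # lives under .../numpy/linalg/
>>> slinalg.__file__         # should live under .../scipy/linalg/ once the import is right
>>> hasattr(slinalg, 'lu')   # True only for scipy's package; numpy's has no lu/qr/gmres

If the second path still points into numpy, you are seeing the shadowing problem rather than a genuinely identical implementation.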
From robert.kern at gmail.com Tue Feb 21 01:02:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 21 Feb 2006 00:02:01 -0600 Subject: [SciPy-user] low-discrepancy sequences In-Reply-To: <521836FA9A33D147A2C81D0B95480CA70186188A@MEDUSA3.SGP.insead.intra> References: <521836FA9A33D147A2C81D0B95480CA70186188A@MEDUSA3.SGP.insead.intra> Message-ID: <43FAACD9.3050709@gmail.com> DE MENTEN Sebastien wrote: > Hi all, > > Has anyone worked on low-discrepancy sequences in Scipy? I don't think so. Contributions are welcome! -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nwagner at mecha.uni-stuttgart.de Tue Feb 21 03:48:29 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Feb 2006 09:48:29 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43F9FAFE.10408@ftw.at> References: <43F98C24.4010004@mecha.uni-stuttgart.de> <43F9FAFE.10408@ftw.at> Message-ID: <43FAD3DD.3070503@mecha.uni-stuttgart.de> Ed Schofield wrote: >Nils Wagner wrote: > > >>I need some advice on operations between dense and sparse matrix "objects". >>Which operations are possible in scipy ? >> >> >> >>- multiplication of a sparse matrix A with a dense matrix C >>- Kronecker product of sparse matrices ? >>- Kronecker product of a dense and a sparse matrix ? >>- sparse matrix functions expm(A),... ? >>- sparse eigensolver >> >> >> >I don't know about all of these, but currently multiplication works >between arrays by sparse matrices (and should be efficient) and you can >achieve exponentiation for integer powers by repeated multiplication. >But currently multiplication a * b does NOT work if 'a' is a dense >*matrix* and 'b' is sparse. I'd like to fix this, but it's not easy, >and we'd need to discuss how. > > >>Can I use different sparse formats like csc and csr w.r.t. to >>multiplication, addition,... >> >> >> >Yes, it should work for any two formats, but one matrix will usually be >converted internally to the same format as the other, so it's more >efficient to use the same format unless you know what you're doing. > > >>And what is the output of A = io.mmread("A.mtx") ? I mean sparse or dense. >> >> >> >It should be the same format that it was stored in. > >-- Ed > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Hi Ed, Thank you for your valuable comments. The multiplication a*b, where a is dense a b is sparse, is certainly of importance. For example Block Krylov methods make heavily use of such products. Anyway, it would be nice if scipy can handle such products :-) Any comments or suggestions ? Cheers, Nils Some References http://math.nist.gov/spblas/ Maybe we can benift from octave in some sense. http://octave.sourceforge.net/index/f/krylov.html Sparse Kronecker product http://octave.sourceforge.net/index/f/spkron.html Sparse random matrices http://octave.sourceforge.net/index/f/sprand.html . From nwagner at mecha.uni-stuttgart.de Tue Feb 21 04:18:21 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Feb 2006 10:18:21 +0100 Subject: [SciPy-user] Matrix exponential for sparse matrices Message-ID: <43FADADD.2080800@mecha.uni-stuttgart.de> Hi Ed, Here you can find some details on matrix exponentials for both, dense and sparse matrices. 
http://www.maths.uq.edu.au/expokit/guide.html#Usage Nils From nwagner at mecha.uni-stuttgart.de Tue Feb 21 04:38:26 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Feb 2006 10:38:26 +0100 Subject: [SciPy-user] Basic linear algebra for sparse matrices Message-ID: <43FADF92.1010807@mecha.uni-stuttgart.de> Hi Ed, *Also, SparseM* provides some basic R functionality for linear algebra with sparse matrices. http://cran.r-project.org/src/contrib/Descriptions/SparseM.html Nils From nwagner at mecha.uni-stuttgart.de Tue Feb 21 04:53:22 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Feb 2006 10:53:22 +0100 Subject: [SciPy-user] TypeError: unsupported operand type(s) for +: 'slice' and 'int' Message-ID: <43FAE312.7060703@mecha.uni-stuttgart.de> Hi all, The operation (see below for details) works fine with dense matrices but not with sparse matrices. shape(V) (317, 60) shape(R0), type(R0) (317, 10) Traceback (most recent call last): File "ggsl_sparse.py", line 116, in ? V,H,P = g_arnoldi(A,B,R0,n,s,k) File "ggsl_sparse.py", line 68, in g_arnoldi V[:,:s] = R0/beta File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 1191, in __setitem__ self.indptr = resize1d(self.indptr, row+2) TypeError: unsupported operand type(s) for +: 'slice' and 'int' How can I fix this problem ? Nils From schofield at ftw.at Tue Feb 21 06:33:20 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 21 Feb 2006 12:33:20 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43FAD3DD.3070503@mecha.uni-stuttgart.de> References: <43F98C24.4010004@mecha.uni-stuttgart.de> <43F9FAFE.10408@ftw.at> <43FAD3DD.3070503@mecha.uni-stuttgart.de> Message-ID: <43FAFA80.5010708@ftw.at> Nils Wagner wrote: >Hi Ed, > >Thank you for your valuable comments. >The multiplication a*b, where a is dense a b is sparse, is certainly of >importance. >For example Block Krylov methods make heavily use of such products. >Anyway, it would be nice if scipy can handle such products :-) > > It does work now where 'a' is an *array*, but not when 'a' is a (dense) matrix. For now, just use a dense array :) The reason is that the matrix object overrides the * operator and calls dot(a, b). I'll think about this problem when I have time and make some proposals on scipy-dev and/or numpy-discussion for how to solve it... -- Ed From nwagner at mecha.uni-stuttgart.de Tue Feb 21 07:04:20 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Feb 2006 13:04:20 +0100 Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43FAFA80.5010708@ftw.at> References: <43F98C24.4010004@mecha.uni-stuttgart.de> <43F9FAFE.10408@ftw.at> <43FAD3DD.3070503@mecha.uni-stuttgart.de> <43FAFA80.5010708@ftw.at> Message-ID: <43FB01C4.6020608@mecha.uni-stuttgart.de> Ed Schofield wrote: >Nils Wagner wrote: > > >>Hi Ed, >> >>Thank you for your valuable comments. >>The multiplication a*b, where a is dense a b is sparse, is certainly of >>importance. >>For example Block Krylov methods make heavily use of such products. >>Anyway, it would be nice if scipy can handle such products :-) >> >> >> >It does work now where 'a' is an *array*, but not when 'a' is a (dense) >matrix. For now, just use a dense array :) > >The reason is that the matrix object overrides the * operator and calls >dot(a, b). I'll think about this problem when I have time and make some >proposals on scipy-dev and/or numpy-discussion for how to solve it... 
> >-- Ed > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Hi Ed, May I send you my (sparse/dense) codes for solving large Sylvester equations. The results of the dense version are o.k. while the sparse version doesn't work as expected. I hope you can drop some hints ;-) . Cheers, Nils From bblais at bryant.edu Tue Feb 21 07:24:01 2006 From: bblais at bryant.edu (Brian Blais) Date: Tue, 21 Feb 2006 07:24:01 -0500 Subject: [SciPy-user] algorithm, optimization, or other problem? Message-ID: <43FB0661.1040202@bryant.edu> Hello, I am trying to translate some Matlab/mex code to Python, for doing neural simulations. This application is definitely computing-time limited, and I need to optimize at least one inner loop of the code, or perhaps even rethink the algorithm. The procedure is very simple, after initializing any variables: 1) select a random input vector, which I will call "x". right now I have it as an array, and I choose columns from that array randomly. in other cases, I may need to take an image, select a patch, and then make that a column vector. 2) calculate an output value, which is the dot product of the "x" and a weight vector, "w", so y=dot(x,w) 3) modify the weight vector based on a matrix equation, like: w=w+ eta * (y*x - y**2*w) ^ | +---- learning rate constant 4) repeat steps 1-3 many times I've organized it like: for e in 100: # outer loop for i in 1000: # inner loop (steps 1-3) display things. so that the bulk of the computation is in the inner loop, and is amenable to converting to a faster language. This is my issue: straight python, in the example posted below for 250000 inner-loop steps, takes 20 seconds for each outer-loop step. I tried Pyrex, which should work very fast on such a problem, takes about 8.5 seconds per outer-loop step. The same code as a C-mex file in matlab takes 1.5 seconds per outer-loop step. Given the huge difference between the Pyrex and the Mex, I feel that there is something I am doing wrong, because the C-code for both should run comparably. Perhaps the approach is wrong? I'm willing to take any suggestions! I don't mind coding some in C, but the Python API seemed a bit challenging to me. One note: I am using the Numeric package, not numpy, only because I want to be able to use the Enthought version for Windows. I develop on Linux, and haven't had a chance to see if I can compile numpy using the Enthought Python for Windows. If there is anything else anyone needs to know, I'll post it. I put the main script, and a dohebb.pyx code below. thanks! 
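As an aside on step 3: written against numpy (rather than the Numeric that Brian is using), one pass of the inner loop amounts to roughly the following. This is a sketch of the update rule as stated above (essentially Oja's rule), not Brian's actual code, and it makes no attempt at the optimizations discussed later in the thread:

from numpy import dot
from numpy.random import rand, randint

eta, tau = 0.001, 100.0
x = rand(100, 1000)    # 1000 random input vectors, one per column
w = rand(100)          # weight vector
th = rand(1)           # running variance estimate

for pat in randint(0, x.shape[1], 1000):
    xx = x[:, pat]                       # step 1: pick a random input column
    y = dot(xx, w)                       # step 2: scalar output
    w += eta * (y * xx - y * y * w)      # step 3: weight update
    th += (1.0 / tau) * (y * y - th)     # running variance estimate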
Brian Blais -- ----------------- bblais at bryant.edu http://web.bryant.edu/~bblais # Main script: from dohebb import * import pylab as p from Numeric import * from RandomArray import * import time x=random((100,1000)) # 1000 input vectors numpats=x.shape[0] w=random((numpats,1)); th=random((1,1)) params={} params['eta']=0.001; params['tau']=100.0; old_mx=0; for e in range(100): rnd=randint(0,numpats,250000) t1=time.time() if 0: # straight python for i in range(len(rnd)): pat=rnd[i] xx=reshape(x[:,pat],(1,-1)) y=matrixmultiply(xx,w) w=w+params['eta']*(y*transpose(xx)-y**2*w); th=th+(1.0/params['tau'])*(y**2-th); else: # pyrex dohebb(params,w,th,x,rnd) print time.time()-t1 p.plot(w,'o-') p.xlabel('weights') p.show() #============================================= # dohebb.pyx cdef extern from "Numeric/arrayobject.h": struct PyArray_Descr: int type_num, elsize char type ctypedef class Numeric.ArrayType [object PyArrayObject]: cdef char *data cdef int nd cdef int *dimensions, *strides cdef object base cdef PyArray_Descr *descr cdef int flags def dohebb(params,ArrayType w,ArrayType th,ArrayType X,ArrayType rnd): cdef int num_iterations cdef int num_inputs cdef int offset cdef double *wp,*xp,*thp cdef int *rndp cdef double eta,tau eta=params['eta'] # learning rate tau=params['tau'] # used for variance estimate cdef double y num_iterations=rnd.dimensions[0] num_inputs=w.dimensions[0] # get the pointers wp=w.data xp=X.data rndp=rnd.data thp=th.data for it from 0 <= it < num_iterations: offset=rndp[it]*num_inputs # calculate the output y=0.0 for i from 0 <= i < num_inputs: y=y+wp[i]*xp[i+offset] # change in the weights for i from 0 <= i < num_inputs: wp[i]=wp[i]+eta*(y*xp[i+offset] - y*y*wp[i]) # estimate the variance thp[0]=thp[0]+(1.0/tau)*(y**2-thp[0]) From arnd.baecker at web.de Tue Feb 21 07:47:09 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 21 Feb 2006 13:47:09 +0100 (CET) Subject: [SciPy-user] Interaction of sparse and dense matrices in scipy In-Reply-To: <43FB01C4.6020608@mecha.uni-stuttgart.de> References: <43F98C24.4010004@mecha.uni-stuttgart.de> <43F9FAFE.10408@ftw.at> <43FAD3DD.3070503@mecha.uni-stuttgart.de> <43FAFA80.5010708@ftw.at> <43FB01C4.6020608@mecha.uni-stuttgart.de> Message-ID: Hi Nils, On Tue, 21 Feb 2006, Nils Wagner wrote: [...] > May I send you my (sparse/dense) codes for solving large Sylvester > equations. > The results of the dense version are o.k. while the sparse version > doesn't work as expected. > I hope you can drop some hints ;-) . It seems that you are becoming the sparse expert! Would it be possible that you collect the information on sparse matrices in the http://www.scipy.org/Cookbook? Something like http://www.scipy.org/Cookbook/SparseMatrices with (e.g.) topics like: - sparse matrix formats - operations with sparse matrices - operations on sparse matrics - solving sparse matrix eigenvalue problems - presumably best on a subpage http://www.scipy.org/Cookbook/SparseMatrices/EigenvalueProblems would be extremely helpful. Your code on solving large Sylvester equations would surely be interesting as well. Best, Arnd From ckkart at hoc.net Tue Feb 21 08:46:40 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 21 Feb 2006 22:46:40 +0900 Subject: [SciPy-user] algorithm, optimization, or other problem? 
In-Reply-To: <43FB0661.1040202@bryant.edu> References: <43FB0661.1040202@bryant.edu> Message-ID: <43FB19C0.4080608@hoc.net> Hi Brian, Brian Blais wrote: > > Given the huge difference between the Pyrex and the Mex, I feel that there is > something I am doing wrong, because the C-code for both should run comparably. > Perhaps the approach is wrong? I'm willing to take any suggestions! I don't mind > coding some in C, but the Python API seemed a bit challenging to me. Have you tried weave.inline? I'm using it a lot. Here's a nice article about its performance: http://old.scipy.org/documentation/weave/weaveperformance.html Regards,Christian From cimrman3 at ntc.zcu.cz Tue Feb 21 09:07:14 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 21 Feb 2006 15:07:14 +0100 Subject: [SciPy-user] 3D plotting Message-ID: <43FB1E92.6030705@ntc.zcu.cz> Hi, I need to do some simple 3D plots, so I tried the two Cookbook recipes (xplt, mplot3d) and failed: * xplt: does not work when using a custom installation directory such as python setup.py install --root=/home/share/software apparently the gistdata directory path is wrong: gist.error: failed to create drawing -- Gist work.gs style sheet missing * mplot3d: does not work with my version of matplotlib (0.80). I have made the changes mentioned in the Cookbook to no avail. (Axes.__init__() args apparently changed, as well as some other matplotlib object attributes) Any ideas? mplot3d looks great, I would really like to use it! thanks, r. From ryanlists at gmail.com Tue Feb 21 09:10:32 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 21 Feb 2006 09:10:32 -0500 Subject: [SciPy-user] 3D plotting In-Reply-To: <43FB1E92.6030705@ntc.zcu.cz> References: <43FB1E92.6030705@ntc.zcu.cz> Message-ID: I have had some success with Gnuplot.py and Mayavi, but both are probably more work than matplotlib's 3D surface stuff (because they have a lot more capability). If you want to pursue matplotlib, I would ask this question over there (I think somebody asked something very similar last week - so look for that thread). Ryan On 2/21/06, Robert Cimrman wrote: > Hi, > > I need to do some simple 3D plots, so I tried the two Cookbook recipes > (xplt, mplot3d) and failed: > > * xplt: does not work when using a custom installation directory such as > python setup.py install --root=/home/share/software > > apparently the gistdata directory path is wrong: > gist.error: failed to create drawing -- Gist work.gs style sheet missing > > * mplot3d: does not work with my version of matplotlib (0.80). I have > made the changes mentioned in the Cookbook to no avail. (Axes.__init__() > args apparently changed, as well as some other matplotlib object attributes) > > Any ideas? mplot3d looks great, I would really like to use it! > > thanks, > r. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From ryanlists at gmail.com Tue Feb 21 09:28:28 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 21 Feb 2006 09:28:28 -0500 Subject: [SciPy-user] inverse of column_stack Message-ID: Is there an inverse function to column_stack? i.e. I want to iterate over the columns of a matrix. I could first transpose it and then call tolist, but I was wondering if there was some clean, fast, built in way that I wasn't aware of. 
Thanks, Ryan From arnd.baecker at web.de Tue Feb 21 09:37:29 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 21 Feb 2006 15:37:29 +0100 (CET) Subject: [SciPy-user] 3D plotting In-Reply-To: <43FB1E92.6030705@ntc.zcu.cz> References: <43FB1E92.6030705@ntc.zcu.cz> Message-ID: On Tue, 21 Feb 2006, Robert Cimrman wrote: > Hi, > > I need to do some simple 3D plots, so I tried the two Cookbook recipes > (xplt, mplot3d) and failed: > > * xplt: does not work when using a custom installation directory such as > python setup.py install --root=/home/share/software > > apparently the gistdata directory path is wrong: > gist.error: failed to create drawing -- Gist work.gs style sheet missing > > * mplot3d: does not work with my version of matplotlib (0.80). I have > made the changes mentioned in the Cookbook to no avail. (Axes.__init__() > args apparently changed, as well as some other matplotlib object attributes) > > Any ideas? mplot3d looks great, I would really like to use it! Have a try with tvtk/MayaVi2, see http://www.scipy.org/Cookbook/MayaVi/ for some examples. If anythin is not clear, please let me know - it is very fresh and work in progress. Best, Arnd From nwagner at mecha.uni-stuttgart.de Tue Feb 21 09:58:09 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 21 Feb 2006 15:58:09 +0100 Subject: [SciPy-user] inverse of column_stack In-Reply-To: References: Message-ID: <43FB2A81.8040803@mecha.uni-stuttgart.de> Ryan Krauss wrote: >Is there an inverse function to column_stack? i.e. I want to iterate >over the columns of a matrix. I could first transpose it and then >call tolist, but I was wondering if there was some clean, fast, built >in way that I wasn't aware of. > >Thanks, > >Ryan > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > >>> a = rand(4*3) >>> a array([ 0.45210275, 0.45554869, 0.27096599, 0.72289623, 0.28874549, 0.60064951, 0.36406786, 0.97709256, 0.68812732, 0.73081783, 0.07033917, 0.50299293]) >>> a = transpose(reshape(a,(4,3))) >>> a array([[ 0.45210275, 0.72289623, 0.36406786, 0.73081783], [ 0.45554869, 0.28874549, 0.97709256, 0.07033917], [ 0.27096599, 0.60064951, 0.68812732, 0.50299293]]) Nils From ryanlists at gmail.com Tue Feb 21 10:11:18 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 21 Feb 2006 10:11:18 -0500 Subject: [SciPy-user] inverse of column_stack In-Reply-To: <43FB2A81.8040803@mecha.uni-stuttgart.de> References: <43FB2A81.8040803@mecha.uni-stuttgart.de> Message-ID: I think we misunderstood one another. I want to iterate over the columns of a matrix I already have. 
This does what I want, I just wondered if there was one built-in function that does this (or if tolist could have an axis argument): In [54]: a=rand(4,3) In [55]: a Out[55]: array([[ 0.98852747, 0.63751158, 0.49660263], [ 0.46143525, 0.19731989, 0.9668494 ], [ 0.78013891, 0.5187305 , 0.74449523], [ 0.67888293, 0.80072502, 0.10930396]]) In [56]: a[0] Out[56]: array([ 0.98852747, 0.63751158, 0.49660263]) In [57]: b=(a.tra a.trace a.transpose In [57]: b=(a.transpose()).tolist() In [58]: b Out[58]: [[0.98852746688978599, 0.4614352515957133, 0.7801389135899337, 0.67888292673582395], [0.63751157798107616, 0.19731989336625744, 0.5187305017688113, 0.80072502069726714], [0.49660262671527189, 0.96684939764919231, 0.74449523248153027, 0.10930396251760111]] In [59]: b[0] Out[59]: [0.98852746688978599, 0.4614352515957133, 0.7801389135899337, 0.67888292673582395] Ryan On 2/21/06, Nils Wagner wrote: > Ryan Krauss wrote: > >Is there an inverse function to column_stack? i.e. I want to iterate > >over the columns of a matrix. I could first transpose it and then > >call tolist, but I was wondering if there was some clean, fast, built > >in way that I wasn't aware of. > > > >Thanks, > > > >Ryan > > > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.net > >http://www.scipy.net/mailman/listinfo/scipy-user > > > >>> a = rand(4*3) > >>> a > array([ 0.45210275, 0.45554869, 0.27096599, 0.72289623, 0.28874549, > 0.60064951, 0.36406786, 0.97709256, 0.68812732, 0.73081783, > 0.07033917, 0.50299293]) > >>> a = transpose(reshape(a,(4,3))) > >>> a > array([[ 0.45210275, 0.72289623, 0.36406786, 0.73081783], > [ 0.45554869, 0.28874549, 0.97709256, 0.07033917], > [ 0.27096599, 0.60064951, 0.68812732, 0.50299293]]) > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From jelle.feringa at ezct.net Tue Feb 21 10:21:53 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 21 Feb 2006 16:21:53 +0100 Subject: [SciPy-user] 3D plotting In-Reply-To: Message-ID: <003301c636fa$8c3d9d90$0e01a8c0@JELLE> http://pivy.tammura.at/ & http://pyqwt.sourceforge.net/ might be of your interest. From stefan at sun.ac.za Tue Feb 21 11:29:14 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 21 Feb 2006 18:29:14 +0200 Subject: [SciPy-user] inverse of column_stack In-Reply-To: References: <43FB2A81.8040803@mecha.uni-stuttgart.de> Message-ID: <20060221162914.GD14470@alpha> You can always do def rows(A): nr_rows = x.shape[0] for r in range(nr_rows): yield A[r,:] for r in rows(A): print r Cheers St?fan On Tue, Feb 21, 2006 at 10:11:18AM -0500, Ryan Krauss wrote: > I think we misunderstood one another. I want to iterate over the > columns of a matrix I already have. This does what I want, I just > wondered if there was one built-in function that does this (or if > tolist could have an axis argument): From ryanlists at gmail.com Tue Feb 21 11:39:03 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 21 Feb 2006 11:39:03 -0500 Subject: [SciPy-user] inverse of column_stack In-Reply-To: <20060221162914.GD14470@alpha> References: <43FB2A81.8040803@mecha.uni-stuttgart.de> <20060221162914.GD14470@alpha> Message-ID: A.tolist() will already iterate over the rows of a matrix, but your suggestion could easily be adapted to a cols function. 
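A minimal cols() counterpart of Stefan's rows() generator (an illustrative helper, not a scipy built-in; note that the rows() sketch reads x.shape[0] where it presumably means A.shape[0]):

from numpy import arange, reshape

def cols(A):
    # yield each column of a 2-D array as a view, without copying the data
    nr_cols = A.shape[1]
    for c in range(nr_cols):
        yield A[:, c]

a = reshape(arange(12), (3, 4))
for c in cols(a):
    print(c)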
On 2/21/06, Stefan van der Walt wrote: > You can always do > > def rows(A): > nr_rows = x.shape[0] > for r in range(nr_rows): > yield A[r,:] > > for r in rows(A): > print r > > Cheers > St?fan > > On Tue, Feb 21, 2006 at 10:11:18AM -0500, Ryan Krauss wrote: > > I think we misunderstood one another. I want to iterate over the > > columns of a matrix I already have. This does what I want, I just > > wondered if there was one built-in function that does this (or if > > tolist could have an axis argument): > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From stefan at sun.ac.za Tue Feb 21 11:52:45 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 21 Feb 2006 18:52:45 +0200 Subject: [SciPy-user] inverse of column_stack In-Reply-To: References: <43FB2A81.8040803@mecha.uni-stuttgart.de> <20060221162914.GD14470@alpha> Message-ID: <20060221165245.GE14470@alpha> On Tue, Feb 21, 2006 at 11:39:03AM -0500, Ryan Krauss wrote: > A.tolist() will already iterate over the rows of a matrix, but your > suggestion could easily be adapted to a cols function. Also, the generator below does not copy data, unlike tolist. > On 2/21/06, Stefan van der Walt wrote: > > You can always do > > > > def rows(A): > > nr_rows = x.shape[0] > > for r in range(nr_rows): > > yield A[r,:] > > > > for r in rows(A): > > print r From strawman at astraw.com Tue Feb 21 12:06:46 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 21 Feb 2006 09:06:46 -0800 Subject: [SciPy-user] algorithm, optimization, or other problem? In-Reply-To: <43FB0661.1040202@bryant.edu> References: <43FB0661.1040202@bryant.edu> Message-ID: <43FB48A6.7090300@astraw.com> Brian Blais wrote: >Given the huge difference between the Pyrex and the Mex, I feel that there is >something I am doing wrong, because the C-code for both should run comparably. >Perhaps the approach is wrong? I'm willing to take any suggestions! I don't mind >coding some in C, but the Python API seemed a bit challenging to me. > > > Dear Brian, The answer is that most of your code is still using the (slow) Python API. Take a look at the c files that pyrex produces, particularly for your time critical loop. I suspect you'll be horrified to see how much stuff like the following there is. __pyx_3 = PyObject_CallObject(__pyx_12, __pyx_1); if (!__pyx_3) {__pyx_fi\lename = __pyx_f[0]; __pyx_lineno = 469; goto __pyx_L1;} These are calling the Python interpreter, which is quite expensive, as you've noticed. Fortunately, you'll be able to get rid of 99% of this stuff by clever use of Pyrex. Still, I'm actually fairly impressed that you got as much of a speedup as you did. What you need to do is convert your code to raw C array-element access to bypass the Python interpreter as much as possible. See the http://scipy.org/Wiki/Cookbook/Pyrex_and_NumPy demo for an example, particularly the print_elements() function. You can reduce the complexity of that code considerably if you deal only with contiguous, 1D, single dtype arrays. Finally, it's possible that implementing this in weave or some other wrapping solution would more straightforward. I prefer Pyrex for its general Python/C binding properties, not necessarily for any potential ease-of-array-manipulation. (Plus it has a syntax I like! :) For that matter, you could implement your function in pure C and use Pyrex (or whatever) simply to interface it. 
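For completeness, here is roughly what the weave.inline route mentioned above looks like for the inner dot product. This is a hedged sketch based on the weave conventions of that era (with the blitz type converters a 1-D array x is indexed as x(i) inside the C++ snippet and its length is available as Nx[0]); check the weave documentation before relying on the details:

from numpy import array
from scipy import weave
from scipy.weave import converters

def dot_inline(x, w):
    # dot product of two 1-D double arrays, with the loop done in compiled code
    code = """
    double y = 0.0;
    for (int i = 0; i < Nx[0]; ++i) {
        y += x(i) * w(i);
    }
    return_val = y;
    """
    return weave.inline(code, ['x', 'w'], type_converters=converters.blitz)

print(dot_inline(array([1.0, 2.0, 3.0]), array([4.0, 5.0, 6.0])))   # expect 32.0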
From ryanlists at gmail.com Tue Feb 21 12:08:59 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 21 Feb 2006 12:08:59 -0500 Subject: [SciPy-user] inverse of column_stack In-Reply-To: <20060221165245.GE14470@alpha> References: <43FB2A81.8040803@mecha.uni-stuttgart.de> <20060221162914.GD14470@alpha> <20060221165245.GE14470@alpha> Message-ID: That's a good point. Thanks Stefan. I guess my main question is, is there a built-in way to do this already. If not, a cols function like your rows is what I will likely do. (For now I am actually doing (A.transpose()).tolist()). Thanks, Ryan On 2/21/06, Stefan van der Walt wrote: > On Tue, Feb 21, 2006 at 11:39:03AM -0500, Ryan Krauss wrote: > > A.tolist() will already iterate over the rows of a matrix, but your > > suggestion could easily be adapted to a cols function. > > Also, the generator below does not copy data, unlike tolist. > > > On 2/21/06, Stefan van der Walt wrote: > > > You can always do > > > > > > def rows(A): > > > nr_rows = x.shape[0] > > > for r in range(nr_rows): > > > yield A[r,:] > > > > > > for r in rows(A): > > > print r > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From ryanlists at gmail.com Tue Feb 21 12:11:53 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 21 Feb 2006 12:11:53 -0500 Subject: [SciPy-user] algorithm, optimization, or other problem? In-Reply-To: <43FB48A6.7090300@astraw.com> References: <43FB0661.1040202@bryant.edu> <43FB48A6.7090300@astraw.com> Message-ID: I really like fortran and f2py for speeding up the guts of some time comsuming for loop (but there is the steep learning or relearning curve of FORTRAN). Ryan On 2/21/06, Andrew Straw wrote: > Brian Blais wrote: > > >Given the huge difference between the Pyrex and the Mex, I feel that there is > >something I am doing wrong, because the C-code for both should run comparably. > >Perhaps the approach is wrong? I'm willing to take any suggestions! I don't mind > >coding some in C, but the Python API seemed a bit challenging to me. > > > > > > > > Dear Brian, > > The answer is that most of your code is still using the (slow) Python > API. Take a look at the c files that pyrex produces, particularly for > your time critical loop. I suspect you'll be horrified to see how much > stuff like the following there is. > > __pyx_3 = PyObject_CallObject(__pyx_12, __pyx_1); if (!__pyx_3) > {__pyx_fi\lename = __pyx_f[0]; __pyx_lineno = 469; goto __pyx_L1;} > > These are calling the Python interpreter, which is quite expensive, as > you've noticed. Fortunately, you'll be able to get rid of 99% of this > stuff by clever use of Pyrex. Still, I'm actually fairly impressed that > you got as much of a speedup as you did. > > What you need to do is convert your code to raw C array-element access > to bypass the Python interpreter as much as possible. See the > http://scipy.org/Wiki/Cookbook/Pyrex_and_NumPy demo for an example, > particularly the print_elements() function. You can reduce the > complexity of that code considerably if you deal only with contiguous, > 1D, single dtype arrays. > > Finally, it's possible that implementing this in weave or some other > wrapping solution would more straightforward. I prefer Pyrex for its > general Python/C binding properties, not necessarily for any potential > ease-of-array-manipulation. (Plus it has a syntax I like! 
:) For that > matter, you could implement your function in pure C and use Pyrex (or > whatever) simply to interface it. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From agn at noc.soton.ac.uk Tue Feb 21 12:50:39 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Tue, 21 Feb 2006 17:50:39 +0000 Subject: [SciPy-user] algorithm, optimization, or other problem? In-Reply-To: References: <43FB0661.1040202@bryant.edu> <43FB48A6.7090300@astraw.com> Message-ID: <6EA37E5B-6B79-4967-B8DD-4A228BCDF371@noc.soton.ac.uk> On 21 Feb 2006, at 17:11, Ryan Krauss wrote: > I really like fortran and f2py for speeding up the guts of some time > comsuming for loop (but there is the steep learning or relearning > curve of FORTRAN). > > Ryan And f2py is absurdly easy to use:) -George. From robert.kern at gmail.com Tue Feb 21 12:52:30 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 21 Feb 2006 11:52:30 -0600 Subject: [SciPy-user] inverse of column_stack In-Reply-To: References: Message-ID: <43FB535E.80401@gmail.com> Ryan Krauss wrote: > Is there an inverse function to column_stack? i.e. I want to iterate > over the columns of a matrix. I could first transpose it and then > call tolist, but I was wondering if there was some clean, fast, built > in way that I wasn't aware of. You can iterate over arrays. In [1]: a = reshape(arange(12),(3,4)) In [2]: a Out[2]: array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) In [3]: for x in a: ...: print x ...: ...: [0 1 2 3] [4 5 6 7] [ 8 9 10 11] In [4]: for x in transpose(a): ...: print x ...: ...: [0 4 8] [1 5 9] [ 2 6 10] [ 3 7 11] -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From robert.kern at gmail.com Tue Feb 21 23:49:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 21 Feb 2006 22:49:04 -0600 Subject: [SciPy-user] low-discrepancy sequences In-Reply-To: <521836FA9A33D147A2C81D0B95480CA70186188A@MEDUSA3.SGP.insead.intra> References: <521836FA9A33D147A2C81D0B95480CA70186188A@MEDUSA3.SGP.insead.intra> Message-ID: <43FBED40.5010408@gmail.com> DE MENTEN Sebastien wrote: > Hi all, > > Has anyone worked on low-discrepancy sequences in Scipy? Which sequences would you like to see? Halton? Hammersley? Sobol? Niederreiter-Xing? I have a small Halton sequence generator, now, since it was so easy to implement. The others may take some more time. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From sebastien.dementen at alumni.insead.edu Wed Feb 22 00:59:44 2006 From: sebastien.dementen at alumni.insead.edu (DE MENTEN Sebastien) Date: Wed, 22 Feb 2006 13:59:44 +0800 Subject: [SciPy-user] low-discrepancy sequences Message-ID: <521836FA9A33D147A2C81D0B95480CA7018D30C7@MEDUSA3.SGP.insead.intra> > > DE MENTEN Sebastien wrote: > > Hi all, > > > > Has anyone worked on low-discrepancy sequences in Scipy? > > Which sequences would you like to see? Halton? Hammersley? Sobol? > Niederreiter-Xing? > > I have a small Halton sequence generator, now, since it was so easy to > implement. The others may take some more time. Apparently, there are a couple of algorithms in fortran already written for this at http://www.csit.fsu.edu/~burkardt/f_src/f_src.html and also in numerical recipes I think. What could be the step for integration in Scipy? 
Best, Seb From robert.kern at gmail.com Wed Feb 22 01:10:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 22 Feb 2006 00:10:38 -0600 Subject: [SciPy-user] low-discrepancy sequences In-Reply-To: <521836FA9A33D147A2C81D0B95480CA7018D30C7@MEDUSA3.SGP.insead.intra> References: <521836FA9A33D147A2C81D0B95480CA7018D30C7@MEDUSA3.SGP.insead.intra> Message-ID: <43FC005E.4090309@gmail.com> DE MENTEN Sebastien wrote: >>DE MENTEN Sebastien wrote: >> >>>Hi all, >>> >>>Has anyone worked on low-discrepancy sequences in Scipy? >> >>Which sequences would you like to see? Halton? Hammersley? Sobol? >>Niederreiter-Xing? >> >>I have a small Halton sequence generator, now, since it was so easy to >>implement. The others may take some more time. > > Apparently, there are a couple of algorithms in fortran already written > for this at http://www.csit.fsu.edu/~burkardt/f_src/f_src.html and also > in numerical recipes I think. > > What could be the step for integration in Scipy? Finding implementations with clear, acceptable licenses, or implementing them yourself from the literature. Because Burkardt's codebase is so extensive, I keep trying to convince myself that his code is acceptable if he were to just give it a license. However, he's mostly doing language conversions from other code with clearly unacceptable licenses, so I just don't trust the provenance of what he's written. Numerical Recipes code is right out. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From robert.kern at gmail.com Wed Feb 22 01:19:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 22 Feb 2006 00:19:10 -0600 Subject: [SciPy-user] low-discrepancy sequences In-Reply-To: <521836FA9A33D147A2C81D0B95480CA7018D30C7@MEDUSA3.SGP.insead.intra> References: <521836FA9A33D147A2C81D0B95480CA7018D30C7@MEDUSA3.SGP.insead.intra> Message-ID: <43FC025E.1000901@gmail.com> DE MENTEN Sebastien wrote: > Apparently, there are a couple of algorithms in fortran already written > for this at http://www.csit.fsu.edu/~burkardt/f_src/f_src.html and also > in numerical recipes I think. > > What could be the step for integration in Scipy? Also, I'm pretty sure that we want to avoid Fortran 90 code in the main package. There just isn't a good, free Fortran 90 compiler out there (gfortran is not good, and g95 is not legally distributable (and probably isn't all that good, either)). -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nadavh at visionsense.com Wed Feb 22 01:57:47 2006 From: nadavh at visionsense.com (Nadav Horesh) Date: Wed, 22 Feb 2006 08:57:47 +0200 Subject: [SciPy-user] [Numpy-discussion] algorithm, optimization, or other problem? Message-ID: <07C6A61102C94148B8104D42DE95F7E8C8EF39@exchange2k.envision.co.il> You may get a significant boost by replacing the line: w=w+ eta * (y*x - y**2*w) with w *= 1.0 - eta*y*y w += eta*y*x I ran a test on a similar expression and got 5 fold speed increase. The dot() function runs faster if you compile with dotblas. Nadav. -----Original Message----- From: numpy-discussion-admin at lists.sourceforge.net on behalf of Bruce Southey Sent: Tue 21-Feb-06 17:15 To: Brian Blais Cc: python-list at python.org; numpy-discussion at lists.sourceforge.net; scipy-user at scipy.net Subject: Re: [Numpy-discussion] algorithm, optimization, or other problem? 
Hi, In the current version, note that Y is scalar so replace the squaring (Y**2) with Y*Y as you do in the dohebb function. On my system without blas etc removing the squaring removes a few seconds (16.28 to 12.4). It did not seem to help factorizing Y. Also, eta and tau are constants so define them only once as scalars outside the loops and do the division outside the loop. It only saves about 0.2 seconds but these add up. The inner loop probably can be vectorized because it is just vector operations on a matrix. You are just computing over the ith dimension of X. I think that you could be able to find the matrix version on the net. Regards Bruce On 2/21/06, Brian Blais wrote: > Hello, > > I am trying to translate some Matlab/mex code to Python, for doing neural > simulations. This application is definitely computing-time limited, and I need to > optimize at least one inner loop of the code, or perhaps even rethink the algorithm. > The procedure is very simple, after initializing any variables: > > 1) select a random input vector, which I will call "x". right now I have it as an > array, and I choose columns from that array randomly. in other cases, I may need to > take an image, select a patch, and then make that a column vector. > > 2) calculate an output value, which is the dot product of the "x" and a weight > vector, "w", so > > y=dot(x,w) > > 3) modify the weight vector based on a matrix equation, like: > > w=w+ eta * (y*x - y**2*w) > ^ > | > +---- learning rate constant > > 4) repeat steps 1-3 many times > > I've organized it like: > > for e in 100: # outer loop > for i in 1000: # inner loop > (steps 1-3) > > display things. > > so that the bulk of the computation is in the inner loop, and is amenable to > converting to a faster language. This is my issue: > > straight python, in the example posted below for 250000 inner-loop steps, takes 20 > seconds for each outer-loop step. I tried Pyrex, which should work very fast on such > a problem, takes about 8.5 seconds per outer-loop step. The same code as a C-mex > file in matlab takes 1.5 seconds per outer-loop step. > > Given the huge difference between the Pyrex and the Mex, I feel that there is > something I am doing wrong, because the C-code for both should run comparably. > Perhaps the approach is wrong? I'm willing to take any suggestions! I don't mind > coding some in C, but the Python API seemed a bit challenging to me. > > One note: I am using the Numeric package, not numpy, only because I want to be able > to use the Enthought version for Windows. I develop on Linux, and haven't had a > chance to see if I can compile numpy using the Enthought Python for Windows. > > If there is anything else anyone needs to know, I'll post it. I put the main script, > and a dohebb.pyx code below. > > > thanks! 
> > Brian Blais > > -- > ----------------- > > bblais at bryant.edu > http://web.bryant.edu/~bblais > > > > > # Main script: > > from dohebb import * > import pylab as p > from Numeric import * > from RandomArray import * > import time > > x=random((100,1000)) # 1000 input vectors > > numpats=x.shape[0] > w=random((numpats,1)); > > th=random((1,1)) > > params={} > params['eta']=0.001; > params['tau']=100.0; > old_mx=0; > for e in range(100): > > rnd=randint(0,numpats,250000) > t1=time.time() > if 0: # straight python > for i in range(len(rnd)): > pat=rnd[i] > xx=reshape(x[:,pat],(1,-1)) > y=matrixmultiply(xx,w) > w=w+params['eta']*(y*transpose(xx)-y**2*w); > th=th+(1.0/params['tau'])*(y**2-th); > else: # pyrex > dohebb(params,w,th,x,rnd) > print time.time()-t1 > > > p.plot(w,'o-') > p.xlabel('weights') > p.show() > > > #============================================= > > # dohebb.pyx > > cdef extern from "Numeric/arrayobject.h": > > struct PyArray_Descr: > int type_num, elsize > char type > > ctypedef class Numeric.ArrayType [object PyArrayObject]: > cdef char *data > cdef int nd > cdef int *dimensions, *strides > cdef object base > cdef PyArray_Descr *descr > cdef int flags > > > def dohebb(params,ArrayType w,ArrayType th,ArrayType X,ArrayType rnd): > > > cdef int num_iterations > cdef int num_inputs > cdef int offset > cdef double *wp,*xp,*thp > cdef int *rndp > cdef double eta,tau > > eta=params['eta'] # learning rate > tau=params['tau'] # used for variance estimate > > cdef double y > num_iterations=rnd.dimensions[0] > num_inputs=w.dimensions[0] > > # get the pointers > wp=w.data > xp=X.data > rndp=rnd.data > thp=th.data > > for it from 0 <= it < num_iterations: > > offset=rndp[it]*num_inputs > > # calculate the output > y=0.0 > for i from 0 <= i < num_inputs: > y=y+wp[i]*xp[i+offset] > > # change in the weights > for i from 0 <= i < num_inputs: > wp[i]=wp[i]+eta*(y*xp[i+offset] - y*y*wp[i]) > > # estimate the variance > thp[0]=thp[0]+(1.0/tau)*(y**2-thp[0]) > > > > > > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > ------------------------------------------------------- This SF.net email is sponsored by: Splunk Inc. Do you grep through log files for problems? Stop! Download the new AJAX search engine that makes searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! http://sel.as-us.falkag.net/sel?cmd=k&kid3432&bid#0486&dat1642 _______________________________________________ Numpy-discussion mailing list Numpy-discussion at lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/numpy-discussion From nwagner at mecha.uni-stuttgart.de Wed Feb 22 03:36:04 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 22 Feb 2006 09:36:04 +0100 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: <43F79685.7050606@gmail.com> References: <43F79685.7050606@gmail.com> Message-ID: <43FC2274.1090200@mecha.uni-stuttgart.de> Robert Kern wrote: >Ryan Krauss wrote: > >>Is there a scipy function to find the upper triangular form of a >>matrix? 
I need to find the null space or null spaces of a matrix and >>have messed around with SVD in the past, but it seems like doing row >>reduction on something like [A]x=[0] has worked well. I have written >>my own RREF algorithm in Python, but it seems like I have re-invented >>the wheel and I would prefer to use something someone else already >>wrote who is smarter about round-off error and I would also prefer >>something in FORTRAN of C for speed. It seems like linalg.solve >>almost does what I want except I would need it to return the matrix I >>assume it is calculating instead of or along with the solution. >> > >Well, there's rank-revealing QR, which does essentially what you want. The >appropriate LAPACK functions are xGEQP3. It is not currently wrapped in >scipy.linalg, but it should be fairly easy to add. > > Robert, Please, can you integrate it in scipy - it is still on my wishlist ;-) linalg.rrqr sounds good to me. Nils >The LU and (regular) QR decompositions won't help you because they don't do any >ordering to demonstrate the rank. RRQR will order R, the upper-triangular >matrix, such that R[rank(A):,:] are all 0. > > From robert.kern at gmail.com Wed Feb 22 04:06:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 22 Feb 2006 03:06:01 -0600 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: <43FC2274.1090200@mecha.uni-stuttgart.de> References: <43F79685.7050606@gmail.com> <43FC2274.1090200@mecha.uni-stuttgart.de> Message-ID: <43FC2979.9030408@gmail.com> Nils Wagner wrote: > Robert, > > Please, can you integrate it in scipy - it is still on my wishlist ;-) It has been so for quite some time, if I remember correctly. Surely, in that time you could have figured out how to do it. Asking me to do it over and over again won't accomplish anything other than to annoy me. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cimrman3 at ntc.zcu.cz Wed Feb 22 05:10:09 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 22 Feb 2006 11:10:09 +0100 Subject: [SciPy-user] 3D plotting In-Reply-To: <003301c636fa$8c3d9d90$0e01a8c0@JELLE> References: <003301c636fa$8c3d9d90$0e01a8c0@JELLE> Message-ID: <43FC3881.4060808@ntc.zcu.cz> Thanks for all your suggestions! r. From nwagner at mecha.uni-stuttgart.de Wed Feb 22 05:44:43 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 22 Feb 2006 11:44:43 +0100 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: <43FC2979.9030408@gmail.com> References: <43F79685.7050606@gmail.com> <43FC2274.1090200@mecha.uni-stuttgart.de> <43FC2979.9030408@gmail.com> Message-ID: <43FC409B.7020901@mecha.uni-stuttgart.de> Robert Kern wrote: >Nils Wagner wrote: > > >>Robert, >> >>Please, can you integrate it in scipy - it is still on my wishlist ;-) >> > >It has been so for quite some time, if I remember correctly. Surely, in that >time you could have figured out how to do it. Asking me to do it over and over >again won't accomplish anything other than to annoy me. > > here is a code. However it doesn't make use of LAPACK. So it might be to slow. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: rrqr.py Type: text/x-python Size: 1001 bytes Desc: not available URL: From fullung at gmail.com Wed Feb 22 09:04:32 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 22 Feb 2006 16:04:32 +0200 Subject: [SciPy-user] savemat not working in SciPy 0.4.6? 
Message-ID: <5eec5f300602220604t5893fa0dyb59779542dc38554@mail.gmail.com> Hello all I'm trying to use the savemat function in SciPy 0.4.6. The following code doesn't work: from numpy import array from scipy.io import savemat savemat('foo.mat', {'x' : array([0])}) The traceback: Traceback (most recent call last): File "frags.py", line 3, in ? savemat('foo.mat', {'x' : array([0])}) File "C:\Python24\Lib\site-packages\scipy\io\mio.py", line 875, in savemat fid.fwrite(variable+'\x00','char') File "C:\Python24\Lib\site-packages\scipy\io\mio.py", line 223, in write numpyio.fwrite(self,count,data,mtype,bs) numpyio.error: Does not support extended types. Is anybody else experiencing this problem? Regards Albert From nwagner at mecha.uni-stuttgart.de Wed Feb 22 09:40:25 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 22 Feb 2006 15:40:25 +0100 Subject: [SciPy-user] Help on special.hankel1 Message-ID: <43FC77D9.3040902@mecha.uni-stuttgart.de> Hi all, The output of help (special.hankel1) is not very informative. Help on ufunc: hankel1 = I need the Hankel function of the first kind and zero order H_0^{(1)} (arg), where arg is a complex number. How can I compute this Hankel function with scipy ? Nils From skip at pobox.com Wed Feb 22 09:54:41 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 22 Feb 2006 08:54:41 -0600 Subject: [SciPy-user] Do I need to get and install LAPACK if I have ATLAS? Message-ID: <17404.31537.881323.594234@montanaro.dyndns.org> I seem to have properly built versions of both ATLAS and numpy. (Thank you Robert and Travis.) On to SciPy. The SciPy installation directions say: * Build ATLAS:: ... * Make optimized LAPACK library:: ... When I look in the lib directory ATLAS created I see a LAPACK library though: % ls -l total 22182 lrwxrwxrwx 1 skipm develop 41 Feb 22 06:29 Make.inc -> ... -rw-r--r-- 1 skipm develop 1529 Feb 22 06:29 Makefile -rw-r--r-- 1 skipm develop 10059380 Feb 22 08:24 libatlas.a -rw-r--r-- 1 skipm develop 274044 Feb 22 08:17 libcblas.a -rw-r--r-- 1 skipm develop 322928 Feb 22 08:23 libf77blas.a -rw-r--r-- 1 skipm develop 340352 Feb 22 08:23 liblapack.a -rw-r--r-- 1 skipm develop 300840 Feb 22 06:35 libtstatlas.a Are the SciPy directions out-of-date or is the ATLAS version of LAPACK incomplete? Thx, Skip From arnd.baecker at web.de Wed Feb 22 09:54:54 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 22 Feb 2006 15:54:54 +0100 (CET) Subject: [SciPy-user] Help on special.hankel1 In-Reply-To: <43FC77D9.3040902@mecha.uni-stuttgart.de> References: <43FC77D9.3040902@mecha.uni-stuttgart.de> Message-ID: On Wed, 22 Feb 2006, Nils Wagner wrote: > Hi all, > > The output of > > help (special.hankel1) is not very informative. > Help on ufunc: > > hankel1 = Use a proper interactive shell, e.g.: http://ipython.scipy.org/ In [1]: import scipy.special In [2]: scipy.special.hankel1? Type: ufunc String Form: Namespace: Interactive Docstring: y = hankel1(x1,x2) y=hankel1(v,z) returns the Hankel function of the first kind for real order v and complex argument z. (Not sure why the normal help does not show the docstring, In [3]: scipy.special.hankel1.__doc__ Out[3]: 'y = hankel1(x1,x2) y=hankel1(v,z) returns the Hankel function of the first kind for real order v and complex argument z.' ) TH, Arnd > I need the Hankel function of the first kind and zero order H_0^{(1)} > (arg), > where arg is a complex number. > > How can I compute this Hankel function with scipy ? 
> > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From skip at pobox.com Wed Feb 22 10:16:31 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 22 Feb 2006 09:16:31 -0600 Subject: [SciPy-user] Do I need to get and install LAPACK if I have ATLAS? In-Reply-To: <17404.31537.881323.594234@montanaro.dyndns.org> References: <17404.31537.881323.594234@montanaro.dyndns.org> Message-ID: <17404.32847.834057.300668@montanaro.dyndns.org> skip> Are the SciPy directions out-of-date or is the ATLAS version of skip> LAPACK incomplete? Never mind. I found the bit in the ATLAS doc about merging ATLAS and LAPACK. S From skip at pobox.com Wed Feb 22 11:01:43 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 22 Feb 2006 10:01:43 -0600 Subject: [SciPy-user] Do I need to get and install LAPACK if I have ATLAS? In-Reply-To: <17404.32847.834057.300668@montanaro.dyndns.org> References: <17404.31537.881323.594234@montanaro.dyndns.org> <17404.32847.834057.300668@montanaro.dyndns.org> Message-ID: <17404.35559.213728.466489@montanaro.dyndns.org> skip> Are the SciPy directions out-of-date or is the ATLAS version of skip> LAPACK incomplete? skip> Never mind. I found the bit in the ATLAS doc about merging ATLAS skip> and LAPACK. Of course, for the first time in my entire computing lifetime I get a core dump when I try to merge the ATLAS .o files into the LAPACK .a file. What a pisser this is turning out to be. Skip From i.failed.turing.test at gmail.com Wed Feb 22 12:16:17 2006 From: i.failed.turing.test at gmail.com (David Treadwell) Date: Wed, 22 Feb 2006 12:16:17 -0500 Subject: [SciPy-user] SciPy Install failure: dfftpack not found Message-ID: Help, Please! SciPy installation is giving me fits. After a recent HD crash, I'm trying to reinstall my Python system. Installing using he bash command in OS X 10.4.5 Foo:~/Desktop/scipy-0.4.6 bar$ sudo python setup.py bdist_egg proceeds just fine until it tries to compile the fftpack. The dfftpack can't be found, even tho the directory ./Lib/fftpack/ dfftpack exists and contains the Fortran source files. See ERROR below. The same error occurs for Foo:~/Desktop/scipy-0.4.6 bar$ sudo python setup.py install I've read and followed, I believe, the instructions from the Installation web page. The info for my Python system as it stands is given below. If there's any more diagnostic info needed, I'll be happy to pass it along. I'd appreciate any help in solving this problem. Thanks, --David Treadwell My Python system information: * ActivePython 2.4.2 Build 10 (ActiveState Corp.) based on Python 2.4.2 (#1, Jan 17 2006, 17:03:20) * GCC 3.3 20030304 (Apple Computer, Inc. 
build 1666) * fftw2.1.5 * g77 installation: Reading specs from /usr/local/lib/gcc/powerpc-apple- darwin7.9.0/3.4.4/specs Configured with: ../gcc/configure --enable-threads=posix -- enable-languages=f77 gcc version 3.4.4 *NumPy installation: ERROR: /usr/bin/ld: can't locate file for: -ldfftpack collect2: ld returned 1 exit status /usr/bin/ld: can't locate file for: -ldfftpack collect2: ld returned 1 exit status error: Command "/usr/bin/g77 -undefined dynamic_lookup -bundle build/temp.darwin-8.5.0-Power_Macintosh-2.4/build/src/Lib/fftpack/ _fftpackmodule.o build/temp.darwin-8.5.0-Power_Macintosh-2.4/Lib/ fftpack/src/zfft.o build/temp.darwin-8.5.0-Power_Macintosh-2.4/Lib/ fftpack/src/drfft.o build/temp.darwin-8.5.0-Power_Macintosh-2.4/Lib/ fftpack/src/zrfft.o build/temp.darwin-8.5.0-Power_Macintosh-2.4/Lib/ fftpack/src/zfftnd.o build/temp.darwin-8.5.0-Power_Macintosh-2.4/ build/src/fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/ powerpc-apple-darwin7.9.0/3.4.4 -Lbuild/temp.darwin-8.5.0- Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/ lib.darwin-8.5.0-Power_Macintosh-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1 From robert.kern at gmail.com Wed Feb 22 12:28:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 22 Feb 2006 11:28:27 -0600 Subject: [SciPy-user] Do I need to get and install LAPACK if I have ATLAS? In-Reply-To: <17404.35559.213728.466489@montanaro.dyndns.org> References: <17404.31537.881323.594234@montanaro.dyndns.org> <17404.32847.834057.300668@montanaro.dyndns.org> <17404.35559.213728.466489@montanaro.dyndns.org> Message-ID: <43FC9F3B.9070009@gmail.com> skip at pobox.com wrote: > skip> Are the SciPy directions out-of-date or is the ATLAS version of > skip> LAPACK incomplete? > > skip> Never mind. I found the bit in the ATLAS doc about merging ATLAS > skip> and LAPACK. > > Of course, for the first time in my entire computing lifetime I get a core > dump when I try to merge the ATLAS .o files into the LAPACK .a file. What a > pisser this is turning out to be. I'm impressed. I would think ar(1) should be fairly stable on any platform. Do you need an optimized BLAS/LAPACK, or will the reference implementations do? If the latter, then you can do the following: $ mkdir -p ~/src/blas $ cd ~/src/blas $ wget http://www.netlib.org/blas/blas.tgz $ tar zxf blas.tgz $ cd ~/src $ wget http://www.netlib.org/lapack/lapack.tgz $ tar zxf lapack.tgz $ export BLAS_SRC=~/src $ export LAPACK_SRC=~/src Remove from your site.cfg any reference to ATLAS. Also, if you defined the environment variable $ATLAS or one of the related ones, clear them. Then you should be able to build scipy, and the build system will take care of compiling all of the FORTRAN files from BLAS and LAPACK for you. I'm sorry this is turning out to be such a bear for you. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From robert.kern at gmail.com Wed Feb 22 12:35:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 22 Feb 2006 11:35:17 -0600 Subject: [SciPy-user] SciPy Install failure: dfftpack not found In-Reply-To: References: Message-ID: <43FCA0D5.3070908@gmail.com> David Treadwell wrote: > Help, Please! SciPy installation is giving me fits. After a recent HD > crash, I'm trying to reinstall my Python system. 
Chris Fonnesbeck was having the same problem over on the SciPy-dev list, so I'm just going to copy-and-paste my response to him: The dependency handling between numpy.distutils commands is not the greatest. Sometimes you have to specify all of the intermediate commands. For example, all of my build lines look like this (any other commands being elided by ...): $ python setup.py ... build_src build_clib build_ext build ... In particular, you are missing the build_clib command which will build libdfftpack.a for you. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From i.failed.turing.test at gmail.com Wed Feb 22 13:50:37 2006 From: i.failed.turing.test at gmail.com (David Treadwell) Date: Wed, 22 Feb 2006 13:50:37 -0500 Subject: [SciPy-user] SciPy Install failure: dfftpack not found In-Reply-To: <43FCA0D5.3070908@gmail.com> References: <43FCA0D5.3070908@gmail.com> Message-ID: <41690B34-410B-4585-847D-6613C890E993@gmail.com> On Feb 22, 2006, at 12:35 PM, Robert Kern wrote: > David Treadwell wrote: >> Help, Please! SciPy installation is giving me fits. After a recent HD >> crash, I'm trying to reinstall my Python system. > > Chris Fonnesbeck was having the same problem over on the SciPy-dev > list, so I'm > just going to copy-and-paste my response to him: > > The dependency handling between numpy.distutils commands is not the > greatest. > Sometimes you have to specify all of the intermediate commands. For > example, all > of my build lines look like this (any other commands being elided > by ...): > > $ python setup.py ... build_src build_clib build_ext build ... > > In particular, you are missing the build_clib command which will build > libdfftpack.a for you. Thanks for the help, Robert. The egg builds and installs just fine now with the commands $ sudo python setup.py build_clib bdist_egg $ sudo easy_install ~/Desktop/scipy-0.4.6/dist/scipy-0.4.6-py2.4- macosx-10.4-ppc.egg BUT... there is a problem with two of the modules: 'signal' and 'fftpack'. The build looks like it proceeded w/o error, but I could have missed something. The error looks similar to one you helped out with back in May 2005: http://www.scipy.net/pipermail/scipy-user/2005-May/ 004450.html , but I'm not sure it's what's going on here. I am using gcc3.3 and the latest release of NumPy and SciPy. One other possibility Google has suggested is a problem with python- ldap, which, frankly is beyond my understanding. :( http:// sourceforge.net/mailarchive/forum.php? forum_id=4346&max_rows=25&style=flat&viewmonth=200506 Any suggestions on this one? 
--DRT >>> import numpy >>> import scipy >>> scipy.test(10,1) import signal -> failed: Failure linking new module: /Users/drt/ Library/Python2.4/site-packages/scipy-0.4.6-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub Referenced from: /Users/drt/Library/Python2.4/site-packages/ scipy-0.4.6-py2.4-macosx-10.4-ppc.egg/scipy/fftpack/_fftpack.so Expected in: dynamic lookup import fftpack -> failed: Failure linking new module: /Users/drt/ Library/Python2.4/site-packages/scipy-0.4.6-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub Referenced from: /Users/drt/Library/Python2.4/site-packages/ scipy-0.4.6-py2.4-macosx-10.4-ppc.egg/scipy/fftpack/_fftpack.so Expected in: dynamic lookup From aisaac at american.edu Wed Feb 22 14:13:36 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 22 Feb 2006 14:13:36 -0500 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: <43FC409B.7020901@mecha.uni-stuttgart.de> References: <43F79685.7050606@gmail.com> <43FC2274.1090200@mecha.uni-stuttgart.de><43FC2979.9030408@gmail.com><43FC409B.7020901@mecha.uni-stuttgart.de> Message-ID: On Wed, 22 Feb 2006, Nils Wagner apparently wrote: > here is a code. However it doesn't make use of LAPACK. So > it might be to slow. This exchange suggests to me that Robert thinks that wrapping the LAPACK code is approximately trivial but that Nils does not see it that way. Apologies for any misunderstanding. Anyway, coming from GAUSS as a *user*, I would not know where to begin if I needed to wrap a new LAPACK function. Cheers, Alan Isaac From fullung at gmail.com Wed Feb 22 16:34:04 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 22 Feb 2006 23:34:04 +0200 Subject: [SciPy-user] signaltools missing imports Message-ID: <5eec5f300602221334q759604a6ue0473935d23ae53a@mail.gmail.com> Hello all It seems signaltools.py is missing some imports in SciPy 0.4.6. The following code: from scipy.signal import triang print triang(3) causes the following traceback: Traceback (most recent call last): File "frags.py", line 10, in ? print triang(3) File "C:\Python24\Lib\site-packages\scipy\signal\signaltools.py", line 579, in triang w = numpy.r_[w, w[-2::-1]] NameError: global name 'numpy' is not defined At the top of signaltools.py we have: import numpy as Numeric This seems incorrect. I've filed this issue in Trac: http://projects.scipy.org/scipy/scipy/ticket/10 Regards Albert From fullung at gmail.com Wed Feb 22 16:58:07 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 22 Feb 2006 23:58:07 +0200 Subject: [SciPy-user] Broken links in WindowsBuild wiki page Message-ID: <5eec5f300602221358tc3b4ae7s8d5fbcc511614812@mail.gmail.com> Hello all The WindowsBuild wiki page refers to http://www.scipy.org/download/atlasbinaries/winnt/ which opens up a "page does not exist" wiki page, instead of the directory. On my personal site I use the following .htaccess to prevent this from happening: RewriteEngine On RewriteRule ^$ FrontPage [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ /cgi-bin/moin.cgi/$1 [L] Maybe something similar can be done for the SciPy site. Meanwhile, broken links can be found at old.scipy.org, for example: http://old.scipy.org/download/atlasbinaries/winnt/ Regards Albert From oliphant at ee.byu.edu Wed Feb 22 17:02:39 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 22 Feb 2006 15:02:39 -0700 Subject: [SciPy-user] savemat not working in SciPy 0.4.6? 
In-Reply-To: <5eec5f300602220604t5893fa0dyb59779542dc38554@mail.gmail.com> References: <5eec5f300602220604t5893fa0dyb59779542dc38554@mail.gmail.com> Message-ID: <43FCDF7F.7060700@ee.byu.edu> Albert Strasheim wrote: >Hello all > >I'm trying to use the savemat function in SciPy 0.4.6. > >The following code doesn't work: > >from numpy import array >from scipy.io import savemat >savemat('foo.mat', {'x' : array([0])}) > >The traceback: > >Traceback (most recent call last): > File "frags.py", line 3, in ? > savemat('foo.mat', {'x' : array([0])}) > File "C:\Python24\Lib\site-packages\scipy\io\mio.py", line 875, in savemat > fid.fwrite(variable+'\x00','char') > File "C:\Python24\Lib\site-packages\scipy\io\mio.py", line 223, in write > numpyio.fwrite(self,count,data,mtype,bs) >numpyio.error: Does not support extended types. > >Is anybody else experiencing this problem? > > This was a bug. This is updated in SVN. Here's a work-around. Replace data = asarray(data) the write method of fopen class with if isinstance(data, str): N, buf = len(data), buffer(data) data = ndarray(shape=(N,),dtype='B',buffer=buf) else: data = asarray(data) This is near line 218 of scipy/io/mio.py -Travis From arnd.baecker at web.de Wed Feb 22 17:11:02 2006 From: arnd.baecker at web.de (arnd.baecker at web.de) Date: Wed, 22 Feb 2006 23:11:02 +0100 (CET) Subject: [SciPy-user] RREF - upper triangular In-Reply-To: References: <43F79685.7050606@gmail.com> <43FC2274.1090200@mecha.uni-stuttgart.de><43FC2979.9030408@gmail.com><43FC409B.7020901@mecha.uni-stuttgart.de> Message-ID: On Wed, 22 Feb 2006, Alan G Isaac wrote: > On Wed, 22 Feb 2006, Nils Wagner apparently wrote: > > here is a code. However it doesn't make use of LAPACK. So > > it might be to slow. > > This exchange suggests to me that Robert thinks that > wrapping the LAPACK code is approximately trivial but that > Nils does not see it that way. Apologies for any > misunderstanding. But in this particular case previous exposure to wrapping lapack functions is granted: http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2016895 (for dgehrd) The number of arguments of DGEQP3 is the same as for dgehrd, so on this level it is not more difficult. > Anyway, coming from GAUSS as a *user*, I would not know > where to begin if I needed to wrap a new LAPACK function. Thanks to f2py, wrapping Fortran code is (with a bit of effort) trivial in many cases. For complicated functions requiring many arguments the wrapper can become longish. Fortunately, many things can be learnt from looking at ``scipy/Lib/linalg/generic_flapack.pyf`` In particular, the documentation at http://cens.ioc.ee/projects/f2py2e/ is excellent. I also found the f2py notes by Fernando Perez very helpful, http://cens.ioc.ee/pipermail/f2py-users/2003-April/000472.html Let me try to give some general remarks on how to start (the real authority on all this is of course Pearu, so please correct me if I got things wrong here): - first find a routine which will do the job you want: - If the lapack documentation is installed properly on Linux you could do apropos keyword - www.netlib.org provides a nice decision tree - make sure that that it does not exist in scipy: from scipy.lib import lapack lapack.clapack. (assuming Ipython) lapack.clapack. Remark: routines starting with c/z are for double/single complex and routines for d/s for double/single real numbers. The calling sequence for c/z and d/s are (I think always) the same and sometimes they are also the same for the real and complex case. 
- Then one has to download the fortran file for the lapack routine of interest. - Generate wrapper by calling f2py -m wrap_lap -h wrap_lap.pyf lapack_routine.f - Generate library f2py -c wrap_lap.pyf lapack_routine.f -latlas -llapack -lblas - You can use this by import wrap_lap Note, that this is not yet polished (this is the part on which has to spent some effort ;-), i.e. one has to tell which variables are input, which are output and which are optional. In addition temporary storage has to be provided with the right dimensions as described in the documentation part of the lapack routine. Concrete (and very simple) example (non-lapack): Wrapping Hermite polynomials ---------------------------- Download code (found after hours of googling ;-):: http://cdm.unimo.it/home/matematica/funaro.daniele/splib.txt and extract ``hermite.f`` Generate wrapper framework:: # only run the following line _once_ # (and never again, otherwise the hand-modified hermite.pyf # goes down the drains) f2py -m hermite -h hermite.pyf hermite.f Then modify ``hermite.pyf`` Create the module:: f2py2.3 -c hermite.pyf hermite.f # add this if you want: -DF2PY_REPORT_ON_ARRAY_COPY=1 -DF2PY_REPORT_ATEXIT Simple test:: import hermite hermite.vahepo(2,2.0) import scipy scipy.special.hermite(2)(2.0) A more complicated example about how to wrap routines for band matrices can be found at http://www.physik.tu-dresden.de/~baecker/comp_talks.html under "Python and Co - some recent developments". It would be nice if someone could add notes on all this to the scipy wiki. Best, Arnd From fullung at gmail.com Wed Feb 22 18:30:07 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 23 Feb 2006 01:30:07 +0200 Subject: [SciPy-user] Lib\special\cephes\const.c doesn't compile with Visual Studio Message-ID: <5eec5f300602221530t14d01a92jdc22d19d424f19d0@mail.gmail.com> Hello all I'm having problems recompiling SciPy from SVN with Visual Studio .NET 2003. When compiling Lib\special\cephes\const.c, the following error occurs: Lib\special\cephes\const.c(92) : error C2099: initializer is not a constant Lib\special\cephes\const.c(97) : error C2099: initializer is not a constant This problem was discussed in comp.lang.c as "initializer is not a constant error ??" way back in 2004, when somebody else was also trying to compile some variation on this same code. http://groups.google.com/group/comp.lang.c/browse_thread/thread/23303c76b160ca90/e6ec7db9b3287aaa%23e6ec7db9b3287aaa libxml2's trionan.c has some ideas for how to fix this problem. http://cvs.gnome.org/viewcvs/libxml2/trionan.c?rev=1.14&view=markup I added a ticket to Trac: http://projects.scipy.org/scipy/scipy/ticket/12 Regards Albert From strawman at astraw.com Thu Feb 23 03:02:16 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 23 Feb 2006 00:02:16 -0800 Subject: [SciPy-user] Broken links in WindowsBuild wiki page In-Reply-To: <5eec5f300602221358tc3b4ae7s8d5fbcc511614812@mail.gmail.com> References: <5eec5f300602221358tc3b4ae7s8d5fbcc511614812@mail.gmail.com> Message-ID: <43FD6C08.9050603@astraw.com> Albert Strasheim wrote: >Hello all > >The WindowsBuild wiki page refers to > >http://www.scipy.org/download/atlasbinaries/winnt/ > > I fixed the link in the wiki. Thanks for pointing this out. 
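As a quick illustration of the "make sure that it does not exist in scipy" step from the f2py walkthrough a few messages above, a sketch along these lines (assuming the 2006-era layout in which scipy.lib.lapack exposes the flapack/clapack extension modules, as described in the posts above) lists what is already wrapped before you write a new .pyf signature:

    # Sketch only: see which LAPACK routines SciPy already wraps before
    # writing a new f2py wrapper.  Whether geqp3 shows up depends on the
    # SciPy revision in use.
    from scipy.lib import lapack

    wrapped = sorted(name for name in dir(lapack.flapack)
                     if not name.startswith('_'))
    print("%d routines wrapped in scipy.lib.lapack.flapack" % len(wrapped))
    print([name for name in wrapped if "geqrf" in name or "geqp3" in name])
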
From nwagner at mecha.uni-stuttgart.de Thu Feb 23 03:17:57 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 23 Feb 2006 09:17:57 +0100 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: References: <43F79685.7050606@gmail.com> <43FC2274.1090200@mecha.uni-stuttgart.de><43FC2979.9030408@gmail.com><43FC409B.7020901@mecha.uni-stuttgart.de> Message-ID: <43FD6FB5.3060700@mecha.uni-stuttgart.de> arnd.baecker at web.de wrote: >On Wed, 22 Feb 2006, Alan G Isaac wrote: > > >>On Wed, 22 Feb 2006, Nils Wagner apparently wrote: >> >>>here is a code. However it doesn't make use of LAPACK. So >>>it might be to slow. >>> >>This exchange suggests to me that Robert thinks that >>wrapping the LAPACK code is approximately trivial but that >>Nils does not see it that way. Apologies for any >>misunderstanding. >> > >But in this particular case previous exposure to wrapping lapack >functions is granted: >http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2016895 >(for dgehrd) > >The number of arguments of DGEQP3 is the same as for dgehrd, >so on this level it is not more difficult. > > >>Anyway, coming from GAUSS as a *user*, I would not know >>where to begin if I needed to wrap a new LAPACK function. >> > >Thanks to f2py, wrapping Fortran code is (with a bit of effort) >trivial in many cases. For complicated functions >requiring many arguments the wrapper can become longish. >Fortunately, many things can be learnt from >looking at ``scipy/Lib/linalg/generic_flapack.pyf`` >In particular, the documentation at >http://cens.ioc.ee/projects/f2py2e/ >is excellent. >I also found the f2py notes by Fernando Perez very helpful, >http://cens.ioc.ee/pipermail/f2py-users/2003-April/000472.html > >Let me try to give some general remarks on how to start >(the real authority on all this is of course Pearu, so >please correct me if I got things wrong here): >- first find a routine which will do the job you want: > - If the lapack documentation is installed properly > on Linux you could do > apropos keyword > - www.netlib.org provides a nice decision tree >- make sure that that it does not exist in scipy: > from scipy.lib import lapack > lapack.clapack. (assuming Ipython) > lapack.clapack. > > Remark: routines starting with c/z are for double/single complex > and routines for d/s for double/single real numbers. > The calling sequence for c/z and d/s are (I think always) the same and > sometimes they are also the same for the real and complex case. >- Then one has to download the fortran file for the lapack routine > of interest. >- Generate wrapper by calling > f2py -m wrap_lap -h wrap_lap.pyf lapack_routine.f > >- Generate library > f2py -c wrap_lap.pyf lapack_routine.f -latlas -llapack -lblas > >- You can use this by > > import wrap_lap > > Note, that this is not yet polished (this is the part on > which has to spent some effort ;-), i.e. one > has to tell which variables are input, which are output > and which are optional. In addition temporary > storage has to be provided with the right dimensions > as described in the documentation part of the lapack routine. 
> >Concrete (and very simple) example (non-lapack): > >Wrapping Hermite polynomials >---------------------------- > >Download code (found after hours of googling ;-):: > > http://cdm.unimo.it/home/matematica/funaro.daniele/splib.txt > >and extract ``hermite.f`` > >Generate wrapper framework:: > > # only run the following line _once_ > # (and never again, otherwise the hand-modified hermite.pyf > # goes down the drains) > f2py -m hermite -h hermite.pyf hermite.f > >Then modify ``hermite.pyf`` > >Create the module:: > > f2py2.3 -c hermite.pyf hermite.f > > # add this if you want: > -DF2PY_REPORT_ON_ARRAY_COPY=1 -DF2PY_REPORT_ATEXIT > >Simple test:: > > import hermite > hermite.vahepo(2,2.0) > import scipy > scipy.special.hermite(2)(2.0) > >A more complicated example about >how to wrap routines for band matrices can be found at >http://www.physik.tu-dresden.de/~baecker/comp_talks.html >under "Python and Co - some recent developments". > >It would be nice if someone could add notes >on all this to the scipy wiki. > >Best, Arnd > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Hi Arnd, If it is so trivial to add the rank revealing QR decomposition to scipy, I would be glad if the several steps on building a wrapper for DGEQP3 could be documented on the Wiki as an exercise. The best way to learn something is to have a well-elaborated example. :-) Nils From arnd.baecker at web.de Thu Feb 23 05:24:38 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 23 Feb 2006 11:24:38 +0100 (CET) Subject: [SciPy-user] RREF - upper triangular In-Reply-To: <43FD6FB5.3060700@mecha.uni-stuttgart.de> References: <43F79685.7050606@gmail.com> <43FC2274.1090200@mecha.uni-stuttgart.de><43FC2979.9030408@gmail.com><43FC409B.7020901@mecha.uni-stuttgart.de> <43FD6FB5.3060700@mecha.uni-stuttgart.de> Message-ID: Hi Nils, On Thu, 23 Feb 2006, Nils Wagner wrote: > arnd.baecker at web.de wrote: > >On Wed, 22 Feb 2006, Alan G Isaac wrote: > > > >>On Wed, 22 Feb 2006, Nils Wagner apparently wrote: > >> > >>>here is a code. However it doesn't make use of LAPACK. So > >>>it might be to slow. > >>> > >>This exchange suggests to me that Robert thinks that > >>wrapping the LAPACK code is approximately trivial but that > >>Nils does not see it that way. Apologies for any > >>misunderstanding. > >> > >But in this particular case previous exposure to wrapping lapack > >functions is granted: > >http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2016895 > >(for dgehrd) [... snipped my remarks on how to use f2py ...] > If it is so trivial to add the rank revealing QR decomposition to scipy, > I would be glad if the several steps on building a wrapper for DGEQP3 > could be documented on the Wiki as an exercise. > The best way to learn something is to have a well-elaborated example. :-) Have you *studied* the material I wrote in the snipped part, *including* the texts I referred to? Also, maybe you remember, that the wrapper *you* posted at http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/201689 and claimed to be *yours* was in fact written by *me* for *you*. We had an off-list exchange with over 20 mails on this in which I tried to guide you step-by-step through the Hessenberg wrapping. In course of that I also send you a detailed step-by-step example on how to wrap band matrices. So the exercise you propose (including the addition to the wiki) is really left to you. 
I am sure that this list will help you with any concrete questions. I better stop now, before I write things which I might regret later ... Arnd From nwagner at mecha.uni-stuttgart.de Thu Feb 23 08:07:25 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 23 Feb 2006 14:07:25 +0100 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: References: <43F79685.7050606@gmail.com> <43FC2274.1090200@mecha.uni-stuttgart.de><43FC2979.9030408@gmail.com><43FC409B.7020901@mecha.uni-stuttgart.de> <43FD6FB5.3060700@mecha.uni-stuttgart.de> Message-ID: <43FDB38D.2050208@mecha.uni-stuttgart.de> Arnd Baecker wrote: >Hi Nils, > >On Thu, 23 Feb 2006, Nils Wagner wrote: > > >>arnd.baecker at web.de wrote: >> >>>On Wed, 22 Feb 2006, Alan G Isaac wrote: >>> >>> >>>>On Wed, 22 Feb 2006, Nils Wagner apparently wrote: >>>> >>>> >>>>>here is a code. However it doesn't make use of LAPACK. So >>>>>it might be to slow. >>>>> >>>>> >>>>This exchange suggests to me that Robert thinks that >>>>wrapping the LAPACK code is approximately trivial but that >>>>Nils does not see it that way. Apologies for any >>>>misunderstanding. >>>> >>>> >>>But in this particular case previous exposure to wrapping lapack >>>functions is granted: >>>http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2016895 >>>(for dgehrd) >>> > >[... snipped my remarks on how to use f2py ...] > > >>If it is so trivial to add the rank revealing QR decomposition to scipy, >>I would be glad if the several steps on building a wrapper for DGEQP3 >>could be documented on the Wiki as an exercise. >>The best way to learn something is to have a well-elaborated example. :-) >> > >Have you *studied* the material I wrote in the snipped part, >*including* the texts I referred to? > >Also, maybe you remember, that the wrapper *you* posted >at http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/201689 >and claimed to be *yours* was in fact written by *me* for *you*. >We had an off-list exchange with over 20 mails on this >in which I tried to guide you step-by-step >through the Hessenberg wrapping. In course >of that I also send you a detailed step-by-step example >on how to wrap band matrices. > >So the exercise you propose (including the addition to the wiki) >is really left to you. >I am sure that this list will help you with any concrete questions. > >I better stop now, before I write things which I might regret later ... > >Arnd > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Hi Arnd, It was not my intention to claim that I have build the wrapper. Sorry for that matter. Afaik Pearu finally integrated hessenberg into scipy. Nils From alopez at imim.es Thu Feb 23 13:55:03 2006 From: alopez at imim.es (=?ISO-8859-1?Q?Adri=E1n_L=F3pez?=) Date: Thu, 23 Feb 2006 19:55:03 +0100 Subject: [SciPy-user] (no subject) Message-ID: Hi people, I use SciPy since a few months ago. I use it mainly to integrate a system of ODEs. 
Until now this piece of code was running nicely with the previous version of SciPy: from Numeric import * from scipy.integrate import odeint def func(x, t, *args): xdot = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] xdot[0] = + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 xdot[1] = - (x[1] * beta) + x[4] * beta xdot[2] = - (x[2] * beta) + x[5] * beta xdot[3] = + x[6] * beta - (x[3] * beta) xdot[4] = + alpha0 + (alpha + (x[3])**( n) * alpha1) / ((K)** ( n) + (x[3])**( n)) - (k1 * x[4]) xdot[5] = + alpha0 + (alpha + (x[1])**( n) * alpha1) / ((K)** ( n) + (x[1])**( n)) - (k1 * x[5]) xdot[6] = - (k1 * x[6]) + alpha0 + (alpha + (x[2])**( n) * alpha1) / ((K)**( n) + (x[2])**( n)) g.write('%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n'%(t, xdot[0], xdot [1], xdot[2], xdot[3], xdot[4], xdot[5], xdot[6])) return xdot t = arange(0, 100.01, 0.01) alpha0 = 0.0 alpha1 = 0.0 K = 1.0 beta = 5.0 n = 2.1 k1 = 1.0 alpha = 250.0 cell = 1.0 parameters = [alpha0, alpha1, K, beta, n, k1, alpha, cell] x_0 = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] g = open('repressilator.veloc.out', 'w') args = (parameters, g) x = odeint(func, x_0, t, args) But now, with the new version, it is giving me the ugly error of [alopez at thymus tmp]$ python file.py Segmentation fault on a linux machine and Macintosh-3:~/work/tmp adrianlopez$ python test.py Bus error on a mac. There is something wrong on my code? Is there some problem on odeint from scipy.integrate? Could be any trouble calling also Numeric? I will appreciate any comment, Thanks for your help, -- Adri?n L?pez Garc?a de Lomana, Graduate Student Computational Biochemistry and Biophysics Lab Research Group on Biomedical Informatics (GRIB) - IMIM / UPF c/ Dr. Aiguader, 80; 08003 Barcelona (Spain) http://diana.imim.es From oliphant.travis at ieee.org Thu Feb 23 14:23:10 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 23 Feb 2006 12:23:10 -0700 Subject: [SciPy-user] (no subject) In-Reply-To: References: Message-ID: <43FE0B9E.6070606@ieee.org> Adri?n L?pez wrote: >Hi people, > >I use SciPy since a few months ago. I use it mainly to integrate a >system of ODEs. Until now this piece of code was running nicely with >the previous version of SciPy: > > > You need to tell us what version of SciPy and what version of Numeric (or NumPy which is different) you are running. The errors you are seeing look like installation trouble (i.e. the wrong backend for SciPy). -Travis From alopez at imim.es Thu Feb 23 14:41:14 2006 From: alopez at imim.es (=?ISO-8859-1?Q?Adri=E1n_L=F3pez?=) Date: Thu, 23 Feb 2006 20:41:14 +0100 Subject: [SciPy-user] (no subject) In-Reply-To: <43FE0B9E.6070606@ieee.org> References: <43FE0B9E.6070606@ieee.org> Message-ID: Sorry for not saying that before. I have scipy-0.4.6, numpy-0.9.5 and Numeric-24.2. Also, if I run >>numpy.test(1, 1) Ran 329 tests in 0.476s OK >>scipy.test(10) ... WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** .................. ---------------------------------------------------------------------- Ran 1081 tests in 83.732s OK Thanks for your help, -- Adri?n L?pez Garc?a de Lomana, Graduate Student Computational Biochemistry and Biophysics Lab Research Group on Biomedical Informatics (GRIB) - IMIM / UPF c/ Dr. 
Aiguader, 80; 08003 Barcelona (Spain) http://diana.imim.es On Feb 23, 2006, at 8:23 PM, Travis Oliphant wrote: > Adri?n L?pez wrote: > >> Hi people, >> >> I use SciPy since a few months ago. I use it mainly to integrate a >> system of ODEs. Until now this piece of code was running nicely with >> the previous version of SciPy: >> >> >> > You need to tell us what version of SciPy and what version of Numeric > (or NumPy which is different) you are running. > > The errors you are seeing look like installation trouble (i.e. the > wrong > backend for SciPy). > > -Travis > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Thu Feb 23 15:19:41 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 23 Feb 2006 13:19:41 -0700 Subject: [SciPy-user] (no subject) In-Reply-To: References: <43FE0B9E.6070606@ieee.org> Message-ID: <43FE18DD.4030603@ieee.org> Adri?n L?pez wrote: > Sorry for not saying that before. I have scipy-0.4.6, numpy-0.9.5 > and Numeric-24.2. Also, if I run You don't need to import Numeric. So, replace the first line with from numpy import * Now, make sure you don't have any other version of scipy or numpy installed. You might be grabbing the wrong library. I ran your example fine on my system. However, I did have to change a line in the function to xdot[3] = x[6] * beta - x[3] * beta Or else I kept getting errors (but not segfaults) during the function call. -Travis From oliphant.travis at ieee.org Thu Feb 23 15:37:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 23 Feb 2006 13:37:45 -0700 Subject: [SciPy-user] (no subject) In-Reply-To: References: Message-ID: <43FE1D19.7020000@ieee.org> Adri?n L?pez wrote: >Hi people, > >I use SciPy since a few months ago. I use it mainly to integrate a >system of ODEs. Until now this piece of code was running nicely with >the previous version of SciPy: > > >from Numeric import * > > from numpy import * >from scipy.integrate import odeint > >def func(x, t, *args): > > You don't actually use the *args variable so don't define it. All of your parameters are being grabbed from the module name-space. def func(x, t0): > xdot = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] > > xdot[0] = + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 > > I have no idea why you are doing this instead of xdot[0]=0. > xdot[1] = - (x[1] * beta) + x[4] * beta > > xdot[2] = - (x[2] * beta) + x[5] * beta > > xdot[3] = + x[6] * beta - (x[3] * beta) > > This line kept giving me errors. I'm not sure why.... 
> xdot[4] = + alpha0 + (alpha + (x[3])**( n) * alpha1) / ((K)** >( n) + (x[3])**( n)) - (k1 * x[4]) > > xdot[5] = + alpha0 + (alpha + (x[1])**( n) * alpha1) / ((K)** >( n) + (x[1])**( n)) - (k1 * x[5]) > > xdot[6] = - (k1 * x[6]) + alpha0 + (alpha + (x[2])**( n) * >alpha1) / ((K)**( n) + (x[2])**( n)) > > g.write('%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n'%(t, xdot[0], xdot >[1], xdot[2], xdot[3], xdot[4], xdot[5], xdot[6])) > > return xdot > >t = arange(0, 100.01, 0.01) > >alpha0 = 0.0 >alpha1 = 0.0 >K = 1.0 >beta = 5.0 >n = 2.1 >k1 = 1.0 >alpha = 250.0 >cell = 1.0 > >parameters = [alpha0, alpha1, K, beta, n, k1, alpha, cell] > >x_0 = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] > >g = open('repressilator.veloc.out', 'w') > >args = (parameters, g) > >x = odeint(func, x_0, t, args) > > > You don't actually use the arguments in the function call but instead the use global variables inside the function, so you don't need the args (or parameters) variables and can just write x = odeint(func, x_0, t) >[alopez at thymus tmp]$ python file.py >Segmentation fault > >on a linux machine and > >Macintosh-3:~/work/tmp adrianlopez$ python test.py >Bus error > > Can you tell when the error is occurring. Put print statements in file.py to see when the segfault is actually happening. If it's on import of scipy (which I suspect), then it's a problem with your installation (the wrong NumPy C-API is being used for some reason). -Travis From i.failed.turing.test at gmail.com Thu Feb 23 18:34:06 2006 From: i.failed.turing.test at gmail.com (David Treadwell) Date: Thu, 23 Feb 2006 18:34:06 -0500 Subject: [SciPy-user] 'Expected in: dynamic lookup' error for scipy.signal and .fftpack Message-ID: <1E138133-ACEA-42AD-A5D0-4724C23EBDCB@gmail.com> With Robert's help, I got SciPy-0.4.6 to install. See original post: http://www.scipy.net/pipermail/scipy-user/2006-February/007137.html SciPy now works great, with the exception of 'signal' and 'fftpack'. When those packages are imported, an error is thrown: "Expected in: dynamic lookup". See ERROR below. The build looks like it proceeded w/o error, but I could have missed something. The error looks similar to in http://www.scipy.net/pipermail/scipy-user/2005-May/004450.html I suspected a mismatch of using different compilers for fftw2.1.5, NumPy and SciPy, so I removed these three packages and recompiled them with gcc3.3 and the latest release snapshots of NumPy and SciPy. One other possibility Google has suggested is a problem with python- ldap. I'm not too sure what that means. :( http://sourceforge.net/mailarchive/forum.php? forum_id=4346&max_rows=25&style=flat&viewmonth=200506 Any suggestions on this one? --DRT ERROR: >>> import numpy >>> import scipy >>> scipy.test(10,1) import signal -> failed: Failure linking new module: ~/Library/ Python2.4/site-packages/scipy-0.4.6-py2.4-macosx-10.4-ppc.egg/scipy/ fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub Referenced from:~/Library/Python2.4/site-packages/scipy-0.4.6- py2.4-macosx-10.4-ppc.egg/scipy/fftpack/_fftpack.so Expected in: dynamic lookup import fftpack -> failed: Failure linking new module: ~/Library/ Python2.4/site-packages/scipy-0.4.6-py2.4-macosx-10.4-ppc.egg/scipy/ fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub Referenced from: ~/Library/Python2.4/site-packages/scipy-0.4.6- py2.4-macosx-10.4-ppc.egg/scipy/fftpack/_fftpack.so Expected in: dynamic lookup From cookedm at physics.mcmaster.ca Thu Feb 23 19:02:04 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Thu, 23 Feb 2006 19:02:04 -0500 Subject: [SciPy-user] RREF - upper triangular In-Reply-To: (arnd baecker's message of "Wed, 22 Feb 2006 23:11:02 +0100 (CET)") References: <43F79685.7050606@gmail.com> <43FC2274.1090200@mecha.uni-stuttgart.de> <43FC2979.9030408@gmail.com> <43FC409B.7020901@mecha.uni-stuttgart.de> Message-ID: arnd.baecker at web.de writes: > It would be nice if someone could add notes > on all this to the scipy wiki. Done; as Cookbook/F2Py -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From skip at pobox.com Thu Feb 23 21:29:31 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 23 Feb 2006 20:29:31 -0600 Subject: [SciPy-user] Do I need to get and install LAPACK if I have ATLAS? In-Reply-To: <43FC9F3B.9070009@gmail.com> References: <17404.31537.881323.594234@montanaro.dyndns.org> <17404.32847.834057.300668@montanaro.dyndns.org> <17404.35559.213728.466489@montanaro.dyndns.org> <43FC9F3B.9070009@gmail.com> Message-ID: <17406.28555.422670.792192@montanaro.dyndns.org> Robert> I'm impressed. I would think ar(1) should be fairly stable on any platform. Me too. I eventually got it to work. Dunno what the problem was. We have Solaris 8/9/10 so we have to do our compiles and links on 8 for backward compatibility. I ran the ar(1) command on 10 and it worked. I think the core dump was on 8. Robert> Do you need an optimized BLAS/LAPACK, or will the reference Robert> implementations do? Maybe. I had so much time invested in ATLAS wrangling, so I kept at it. I now have ATLAS+LAPACK+numpy+scipy built with GCC 3.4.1 and Python 2.4.2 installed and apparently working okay on Solaris. I added some terse directions on the wiki at work for others who may one day trod the same harrowing path I've been down. Is there a public wiki where I can submit a suitably site-independent version of these instructions? Skip From alopez at imim.es Fri Feb 24 04:23:15 2006 From: alopez at imim.es (=?ISO-8859-1?Q?Adri=E1n_L=F3pez?=) Date: Fri, 24 Feb 2006 10:23:15 +0100 Subject: [SciPy-user] (no subject) In-Reply-To: <43FE1D19.7020000@ieee.org> References: <43FE1D19.7020000@ieee.org> Message-ID: <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> Hi, the code file is generated automatically, that was the reason for the strange lines as xdot[0] = + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 On the other hand, the reason for the arguments was because I'm saving the velocities and the trajectories on two different files, and I was passing them as part of the arguments. 
Nevertheless, my background is not computer science and all I learnt about programming was by myself, so please feel free to tell me how to do things properly ;-) Here is exactly the code I'm running: from numpy import * from scipy.integrate import odeint def func(x, t, *args): xdot = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] xdot[0] = 0 xdot[1] = - (x[1] * beta) + x[4] * beta xdot[2] = - (x[2] * beta) + x[5] * beta xdot[3] = + x[6] * beta - (x[3] * beta) xdot[4] = + alpha0 + (alpha + (x[3])**( n) * alpha1) / ((K)** ( n) + (x[3])**( n)) - (k1 * x[4]) xdot[5] = + alpha0 + (alpha + (x[1])**( n) * alpha1) / ((K)** ( n) + (x[1])**( n)) - (k1 * x[5]) xdot[6] = - (k1 * x[6]) + alpha0 + (alpha + (x[2])**( n) * alpha1) / ((K)**( n) + (x[2])**( n)) g.write('%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n'%(t, xdot[0], xdot [1], xdot[2], xdot[3], xdot[4], xdot[5], xdot[6])) return xdot t = arange(0, 100.01, 0.01) alpha0 = 0.0 alpha1 = 0.0 K = 1.0 beta = 5.0 n = 2.1 k1 = 1.0 alpha = 250.0 cell = 1.0 parameters = [alpha0, alpha1, K, beta, n, k1, alpha, cell] x_0 = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] g = open('repressilator.veloc.out', 'w') args = (parameters, g) print 'message1' x = odeint(func, x_0, t, args) print 'message 2' g.close() f = open('repressilator.out', 'w') for i in range(len(t)): f.write('%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n'%(t[i], x[i][0], x [i][1], x[i][2], x[i][3], x[i][4], x[i][5], x[i][6])) f.close() That gives me this error: [alopez at rosmarinus tmp]$ python file.py message 1 Segmentation fault when I run it on a machine Fedora Core 2 with numpy-0.9.5, Python-2.4.2 and scipy-0.4.6. As you can see, it does not give any error while importing numpy or scipy. As you said there might be some problems during installation. If it could be of any help, I can tell you that when I install Python-2.4.2, numpy-0.9.5 and scipy-0.4.6 (in this order) I had the problem documented here http://www.scipy.org/FAQ#head-8371c35ef08b877875217aaac5489fc747b4aceb about "BLAS sources shipped with LAPACK are incomplete". But after I did what it is described on the Building General Instructions about BLAS and LAPACK from NetLib the import was clean. Thanks for your help, Adri?n. On Feb 23, 2006, at 9:37 PM, Travis Oliphant wrote: > Adri?n L?pez wrote: > >> Hi people, >> >> I use SciPy since a few months ago. I use it mainly to integrate a >> system of ODEs. Until now this piece of code was running nicely with >> the previous version of SciPy: >> >> >> from Numeric import * >> >> > from numpy import * > >> from scipy.integrate import odeint >> >> def func(x, t, *args): >> >> > You don't actually use the *args variable so don't define it. All of > your parameters are being grabbed from the module name-space. > > def func(x, t0): > >> xdot = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] >> >> xdot[0] = + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 >> >> > I have no idea why you are doing this instead of xdot[0]=0. > >> xdot[1] = - (x[1] * beta) + x[4] * beta >> >> xdot[2] = - (x[2] * beta) + x[5] * beta >> >> xdot[3] = + x[6] * beta - (x[3] * beta) >> >> > This line kept giving me errors. I'm not sure why.... 
> >> xdot[4] = + alpha0 + (alpha + (x[3])**( n) * alpha1) / ((K)** >> ( n) + (x[3])**( n)) - (k1 * x[4]) >> >> xdot[5] = + alpha0 + (alpha + (x[1])**( n) * alpha1) / ((K)** >> ( n) + (x[1])**( n)) - (k1 * x[5]) >> >> xdot[6] = - (k1 * x[6]) + alpha0 + (alpha + (x[2])**( n) * >> alpha1) / ((K)**( n) + (x[2])**( n)) >> >> g.write('%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n'%(t, xdot[0], xdot >> [1], xdot[2], xdot[3], xdot[4], xdot[5], xdot[6])) >> >> return xdot >> >> t = arange(0, 100.01, 0.01) >> >> alpha0 = 0.0 >> alpha1 = 0.0 >> K = 1.0 >> beta = 5.0 >> n = 2.1 >> k1 = 1.0 >> alpha = 250.0 >> cell = 1.0 >> >> parameters = [alpha0, alpha1, K, beta, n, k1, alpha, cell] >> >> x_0 = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] >> >> g = open('repressilator.veloc.out', 'w') >> >> args = (parameters, g) >> >> x = odeint(func, x_0, t, args) >> >> >> > You don't actually use the arguments in the function call but instead > the use global variables inside the function, so you don't need the > args > (or parameters) variables and can just write > > x = odeint(func, x_0, t) > >> [alopez at thymus tmp]$ python file.py >> Segmentation fault >> >> on a linux machine and >> >> Macintosh-3:~/work/tmp adrianlopez$ python test.py >> Bus error >> >> > > Can you tell when the error is occurring. Put print statements in > file.py to see when the segfault is actually happening. If it's on > import of scipy (which I suspect), then it's a problem with your > installation (the wrong NumPy C-API is being used for some reason). > > -Travis > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From oliphant.travis at ieee.org Fri Feb 24 04:51:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 24 Feb 2006 02:51:07 -0700 Subject: [SciPy-user] (no subject) In-Reply-To: <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> References: <43FE1D19.7020000@ieee.org> <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> Message-ID: <43FED70B.9030208@ieee.org> Adri?n L?pez wrote: >from numpy import * > >from scipy.integrate import odeint > >def func(x, t, *args): > > xdot = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] > > xdot[0] = 0 > > xdot[1] = - (x[1] * beta) + x[4] * beta > > xdot[2] = - (x[2] * beta) + x[5] * beta > > xdot[3] = + x[6] * beta - (x[3] * beta) > > Try changing this line to x[6] * beta - x[3] * beta I was getting a strange error from it that has me perplexed. Perhaps an un-handled exception is causing the segfault. It's hard to say. If you have gdb installed you can really help out by running under gdb so that when it segfaults we find out where... Here's how... gdb file /usr/bin/python (or whever Python is installed). run execfile('file.py') -Travis From alopez at imim.es Fri Feb 24 06:20:37 2006 From: alopez at imim.es (=?ISO-8859-1?Q?Adri=E1n_L=F3pez?=) Date: Fri, 24 Feb 2006 12:20:37 +0100 Subject: [SciPy-user] (no subject) In-Reply-To: <43FED70B.9030208@ieee.org> References: <43FE1D19.7020000@ieee.org> <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> <43FED70B.9030208@ieee.org> Message-ID: <069B2FA9-9B03-421B-84EB-95E7460A1873@imim.es> Very strange ... If I change just that single line as you said, it runs correctly !!! THANKS!!! How did you know the problem was on that line? And also, another interesting thing, that maybe is easier to explain. 
The code I posted, with the line corrected as you suggested gives me this output [alopez at thymus tmp]$ tail new/repressilator.out 99.91 0.0 106.561968929 12.8532193268 0.657464014072 120.022734481 10.2873375884 0.690593561295 99.92 0.0 107.232334762 12.7255621105 0.659198461945 120.585673723 10.1851132999 0.695453697878 99.93 0.0 107.897250845 12.5991702799 0.661090230937 121.140028167 10.0839043882 0.7005127881 99.94 0.0 108.556560119 12.4740313944 0.66314142979 121.685624094 9.98370079793 0.705774044541 99.95 0.0 109.210104618 12.3501331324 0.665354223171 122.222286288 9.88449257258 0.711240755997 99.96 0.0 109.857725438 12.2274632897 0.667730832672 122.749838018 9.78626985374 0.716916288878 99.97 0.0 110.499262751 12.1060097788 0.670273537691 123.268101064 9.6890228799 0.722804088444 99.98 0.0 111.134555765 11.985760628 0.672984676503 123.776895697 9.59274198564 0.728907680303 99.99 0.0 111.763442835 11.8667039806 0.67586664717 124.276040954 9.49741760048 0.735230671778 100.0 0.0 112.385761359 11.7488280938 0.678921908743 124.76535447 9.40304024813 0.741776753543 and the same code using Numeric instead of numpy, an extra line: [alopez at thymus tmp]$ tail old/repressilator.out 99.92 0.0 107.232334762 12.7255621105 0.659198461945 120.585673723 10.1851132999 0.695453697878 99.93 0.0 107.897250845 12.5991702799 0.661090230937 121.140028167 10.0839043882 0.7005127881 99.94 0.0 108.556560119 12.4740313944 0.66314142979 121.685624094 9.98370079793 0.705774044541 99.95 0.0 109.210104618 12.3501331324 0.665354223171 122.222286288 9.88449257258 0.711240755997 99.96 0.0 109.857725438 12.2274632897 0.667730832672 122.749838018 9.78626985374 0.716916288878 99.97 0.0 110.499262751 12.1060097788 0.670273537691 123.268101064 9.6890228799 0.722804088444 99.98 0.0 111.134555765 11.985760628 0.672984676503 123.776895697 9.59274198564 0.728907680303 99.99 0.0 111.763442835 11.8667039806 0.67586664717 124.276040954 9.49741760048 0.735230671778 100.0 0.0 112.385761359 11.7488280938 0.678921908743 124.76535447 9.40304024813 0.741776753543 100.01 0.0 113.001347666 11.6321213375 0.682152982467 125.244652321 9.30960054565 0.74854970128 Why? Finally I've tried the tool you pointed me. I run it importing numpy and with *the* line changed as you said. The output is here: [alopez at thymus new]$ gdb GNU gdb Red Hat Linux (6.0post-0.20040223.19rh) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i386-redhat-linux-gnu". (gdb) file /home/alopez/software/bin/python Reading symbols from /home/alopez/software/bin/python...done. Using host libthread_db library "/lib/tls/libthread_db.so.1". (gdb) run Starting program: /home/alopez/software/bin/python [Thread debugging using libthread_db enabled] [New Thread -1208276480 (LWP 8532)] Python 2.4.2 (#1, Feb 22 2006, 17:29:56) [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> execfile('file.py') message 1 message2 >>> seems correct, doesn't it? Thanks again and I'll report strange behavior as these ones if I experience them, -- Adri?n L?pez Garc?a de Lomana, Graduate Student Computational Biochemistry and Biophysics Lab Research Group on Biomedical Informatics (GRIB) - IMIM / UPF c/ Dr. 
Aiguader, 80; 08003 Barcelona (Spain) http://diana.imim.es On Feb 24, 2006, at 10:51 AM, Travis Oliphant wrote: > Adri?n L?pez wrote: > >> from numpy import * >> >> from scipy.integrate import odeint >> >> def func(x, t, *args): >> >> xdot = [0.0, 5.0, 0.0, 15.0, 0.0, 0.0, 0.0] >> >> xdot[0] = 0 >> >> xdot[1] = - (x[1] * beta) + x[4] * beta >> >> xdot[2] = - (x[2] * beta) + x[5] * beta >> >> xdot[3] = + x[6] * beta - (x[3] * beta) >> >> > Try changing this line to > > x[6] * beta - x[3] * beta > > I was getting a strange error from it that has me perplexed. Perhaps > an un-handled exception is causing the segfault. > > It's hard to say. > > If you have gdb installed you can really help out by running under gdb > so that when it segfaults we find out where... > > Here's how... > > gdb > file /usr/bin/python (or whever Python is installed). > run > execfile('file.py') > > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnata at obs.univ-lyon1.fr Fri Feb 24 06:42:26 2006 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Fri, 24 Feb 2006 12:42:26 +0100 Subject: [SciPy-user] numpy : Fromfile should trigger an exception Message-ID: <43FEF122.9020502@obs.univ-lyon1.fr> Hi, Trying to read more value than those that are available in a binary file, numpy fromfile function prints a warning message but no exception is triggered import numpy a=numpy.arange(10) a.tofile('toto.dat') b = numpy.fromfile('toto.dat',numpy.Int32,11) shows 11 items requested but only 10 read But there is no easy way to know that something went wrong (or I'm missing something). Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 58 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From oliphant.travis at ieee.org Fri Feb 24 06:50:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 24 Feb 2006 04:50:04 -0700 Subject: [SciPy-user] (no subject) In-Reply-To: <069B2FA9-9B03-421B-84EB-95E7460A1873@imim.es> References: <43FE1D19.7020000@ieee.org> <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> <43FED70B.9030208@ieee.org> <069B2FA9-9B03-421B-84EB-95E7460A1873@imim.es> Message-ID: <43FEF2EC.2070904@ieee.org> Adri?n L?pez wrote: > Very strange ... If I change just that single line as you said, it > runs correctly !!! THANKS!!! How did you know the problem was on that > line? I'm not sure what the issue is exactly but it might be a bug in Python itself (though I don't tend to jump to that conclusion quickly). I figured that line might be a problem, because I kept getting errors (but not segfaults) from the code at that line. The errors made no sense to me, but I wondered if on your system, whatever was causing the problem I was seeing was showing up as a segfault. It's an interesting test-case, though. I'll look into it some more. I don't like to leave things like this floating because it may be pointing to a problem I'm overlooking... It's hard to say why the numpy version prints an extra line. It probably has something to do with rounding modes and the fact that in numpy x[3] is not a Python float but an array-scalar float that (for now) uses array-math instead of scalar math. The scalar math is coming. 
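For reference, a minimal self-contained sketch of the odeint calling pattern discussed in this thread, with the parameters passed through the args tuple rather than read from module globals; the toy right-hand side and the parameter name "rate" are purely illustrative and stand in for the repressilator model above:

    # Illustrative only: pass parameters via odeint's args tuple.
    from numpy import arange
    from scipy.integrate import odeint

    def decay(x, t, rate):
        # toy right-hand side, dx/dt = -rate * x
        return -rate * x

    t = arange(0.0, 10.01, 0.01)
    x = odeint(decay, [1.0], t, args=(0.5,))
    print(x[-1])
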
-Travis From oliphant.travis at ieee.org Fri Feb 24 06:51:40 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 24 Feb 2006 04:51:40 -0700 Subject: [SciPy-user] numpy : Fromfile should trigger an exception In-Reply-To: <43FEF122.9020502@obs.univ-lyon1.fr> References: <43FEF122.9020502@obs.univ-lyon1.fr> Message-ID: <43FEF34C.10805@ieee.org> Xavier Gnata wrote: >Hi, > >Trying to read more value than those that are available in a binary >file, numpy fromfile function prints a warning message but no exception >is triggered > >import numpy >a=numpy.arange(10) >a.tofile('toto.dat') >b = numpy.fromfile('toto.dat',numpy.Int32,11) > >shows >11 items requested but only 10 read > >But there is no easy way to know that something went wrong (or I'm >missing something). > > This should probably be changed to a true warning so that you can catch it if you want to. -Travis From oliphant.travis at ieee.org Fri Feb 24 07:33:24 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 24 Feb 2006 05:33:24 -0700 Subject: [SciPy-user] (no subject) In-Reply-To: <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> References: <43FE1D19.7020000@ieee.org> <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> Message-ID: <43FEFD14.5090800@ieee.org> Adri?n L?pez wrote: >Hi, > >the code file is generated automatically, that was the reason for the >strange lines as > > > Found the ultimate problem. It was not a bug in Python (it usually isn't). It was a bug in array scalars. The +x[6] was causing the problem. This calls the C-equivalent of __pos__ which I had defined badly using a function that had two arguments (instead of one). The second argument was being filled with whatever was on the stack. On my system it just caused a funny argument error, but on yours it was causing a segfault. This should now be fixed in SVN. In the mean-time, avoid using +(anything that produces an array scalar like x[6]) Thanks again for the good test. -Travis From nwagner at mecha.uni-stuttgart.de Fri Feb 24 08:55:50 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 24 Feb 2006 14:55:50 +0100 Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) Message-ID: <43FF1066.4080704@mecha.uni-stuttgart.de> Hi all, I was trying to wrap geqp3...(my homework as suggested by Arnd ;-) ) It turns out that it is not straightforward (at least for me). I have used an example taken from generic_flapack.pyf subroutine geqrf(m,n,a,tau,work,lwork,info) ! qr_a,tau,work,info = geqrf(a,lwork=3*n,overwrite_a=0) ! Compute a QR factorization of a real M-by-N matrix A: ! A = Q * R. callstatement (*f2py_func)(&m,&n,a,&m,tau,work,&lwork,&info) callprotoargument int*,int*,*,int*,*,*,int*,int* integer intent(hide),depend(a):: m = shape(a,0) integer intent(hide),depend(a):: n = shape(a,1) dimension(m,n),intent(in,out,copy,out=qr) :: a dimension(MIN(m,n)),intent(out) :: tau integer optional,intent(in),depend(n),check(lwork>=n||lwork==-1) :: lwork=3*n dimension(MAX(lwork,1)),intent(out),depend(lwork) :: work integer intent(out) :: info end subroutine geqrf Some observations: The number of arguments differ wr.t. to complex (C,Z) and real matrices (S,D). How is this handled with scipy ? SUBROUTINE CGEQP3( M, N, A, LDA, JPVT, TAU, WORK, LWORK, RWORK, $ INFO ) SUBROUTINE DGEQP3( M, N, A, LDA, JPVT, TAU, WORK, LWORK, INFO ) Here is my first (incomplete !!) attempt to adapt the wrapper . How can I proceed ? ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module wrap_lap ! in interface ! 
in :wrap_lap subroutine geqp3(m,n,a,lda,jpvt,tau,work,lwork,info) ! in :wrap_lap:dgeqp3.f ! rrqr_a,tau,work,info = geqp3(a,lwork=...,overwrite_a=0) ! Compute the a QR factorization with column pivoting of a matrix A ! A*P = Q*R using Level 3 BLAS callstatement (*f2py_func)(&m,&n,a,&m,tau,work,&lwork,&info) callprotoargument int*,int*,*,int*,*,int*,int* integer intent(hide),depend(a):: m=shape(a,0) integer intent(hide),depend(a):: n=shape(a,1) dimension(m,n), intent(in,out,copy,out=rrqr) :: a dimension(MIN(m,n)),intent(out) :: tau integer dimension(*) :: jpvt double precision dimension(*), intent(out) :: tau double precision dimension(*), intent(out) :: work dimension(lwork),intent(hide,cache),depend(lwork):: work integer intent(out):: info end subroutine geqp3 end interface end python module wrap_lap ! This file was auto-generated with f2py (version:2_2163). ! See http://cens.ioc.ee/projects/f2py2e/ ~ Nils From pearu at scipy.org Fri Feb 24 08:18:02 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 24 Feb 2006 07:18:02 -0600 (CST) Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) In-Reply-To: <43FF1066.4080704@mecha.uni-stuttgart.de> References: <43FF1066.4080704@mecha.uni-stuttgart.de> Message-ID: On Fri, 24 Feb 2006, Nils Wagner wrote: > Hi all, > > I was trying to wrap geqp3...(my homework as suggested by Arnd ;-) ) > It turns out that it is not straightforward (at least for me). > > I have used an example taken from generic_flapack.pyf First, all addition to lapack wrapping should be commited to scipy/Lib/lib/lapack. Signatures for geqp3 should be added to flapack_llsc.pyf.src file in scipy/Lib/lib/lapack. > Some observations: > The number of arguments differ wr.t. to complex (C,Z) and real matrices > (S,D). > How is this handled with scipy ? > > SUBROUTINE CGEQP3( M, N, A, LDA, JPVT, TAU, WORK, LWORK, RWORK, > $ INFO ) > SUBROUTINE DGEQP3( M, N, A, LDA, JPVT, TAU, WORK, LWORK, INFO ) See the first signature in flapack_esv.pyf.src that demonstrates how to deal with such a situation. Basically, you should use <_1=,,rwork\,,\2> instead of rwork, in the argument list of geqp3. Check out also the corresponing bits in callstatement, callprotoargument are handled in ev. HTH, Pearu From alopez at imim.es Fri Feb 24 10:18:32 2006 From: alopez at imim.es (=?ISO-8859-1?Q?Adri=E1n_L=F3pez?=) Date: Fri, 24 Feb 2006 16:18:32 +0100 Subject: [SciPy-user] (no subject) In-Reply-To: <43FEFD14.5090800@ieee.org> References: <43FE1D19.7020000@ieee.org> <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> <43FEFD14.5090800@ieee.org> Message-ID: <36AEE679-F443-4351-A2D1-6713FACFF471@imim.es> Great !!! Using some similar code, it was running OK on a machine while in another was giving me a strange error. lsoda-- repeated calls with istate = 1 and tout = t (=r1) In above message, R1 = 0.0000000000000E+00 lsoda-- run aborted.. apparent infinite loop Was yours similar? At any rate, if I run svn co http://svn.scipy.org/svn/scipy/trunk scipy and python setup.py install I imagine I won't have these errors anymore, am I wrong? Thanks again for your help, -- Adri?n L?pez Garc?a de Lomana, Graduate Student Computational Biochemistry and Biophysics Lab Research Group on Biomedical Informatics (GRIB) - IMIM / UPF c/ Dr. Aiguader, 80; 08003 Barcelona (Spain) http://diana.imim.es On Feb 24, 2006, at 1:33 PM, Travis Oliphant wrote: > Adri?n L?pez wrote: > >> Hi, >> >> the code file is generated automatically, that was the reason for the >> strange lines as >> >> >> > Found the ultimate problem. 
It was not a bug in Python (it usually > isn't). It was a bug in array scalars. The +x[6] was causing the > problem. This calls the C-equivalent of __pos__ which I had defined > badly using a function that had two arguments (instead of one). The > second argument was being filled with whatever was on the stack. > On my > system it just caused a funny argument error, but on yours it was > causing a segfault. > > This should now be fixed in SVN. In the mean-time, avoid using > +(anything that produces an array scalar like x[6]) > > Thanks again for the good test. > > -Travis > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From nwagner at mecha.uni-stuttgart.de Fri Feb 24 10:19:17 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 24 Feb 2006 16:19:17 +0100 Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) In-Reply-To: References: <43FF1066.4080704@mecha.uni-stuttgart.de> Message-ID: <43FF23F5.70503@mecha.uni-stuttgart.de> Pearu Peterson wrote: >On Fri, 24 Feb 2006, Nils Wagner wrote: > > >>Hi all, >> >>I was trying to wrap geqp3...(my homework as suggested by Arnd ;-) ) >>It turns out that it is not straightforward (at least for me). >> >>I have used an example taken from generic_flapack.pyf >> > >First, all addition to lapack wrapping should be commited to >scipy/Lib/lib/lapack. Signatures for geqp3 should be added to >flapack_llsc.pyf.src file in scipy/Lib/lib/lapack. > > > >>Some observations: >>The number of arguments differ wr.t. to complex (C,Z) and real matrices >>(S,D). >>How is this handled with scipy ? >> >> SUBROUTINE CGEQP3( M, N, A, LDA, JPVT, TAU, WORK, LWORK, RWORK, >> $ INFO ) >> SUBROUTINE DGEQP3( M, N, A, LDA, JPVT, TAU, WORK, LWORK, INFO ) >> > > >See the first signature in flapack_esv.pyf.src that demonstrates how to >deal with such a situation. Basically, you should use > <_1=,,rwork\,,\2> >instead of > rwork, >in the argument list of geqp3. Check out also the corresponing >bits in callstatement, callprotoargument are handled in ev. > > >HTH, >Pearu > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Pearu, Thank you for your quick reply ! The next issues are 1. How do I handle JPVT ? 2. How do I find the size of LWORK in *geqp3 ? I am willing to learn it :-) but the contents of flapack_esv.pyf.src is somehow cryptic in many places e.g. callstatement if(irange_capi==Py_None);else{irange[0]++;irange[1]++;}(*f2py_func)((compute_v?"V":"N"),(vrange_capi==Py_None?(irange_capi==Py_None?"A":"I"):"V"),(lower?"L":"U"),&n,a,&n,vrange,vrange+1,irange,irange+1,&atol,&m,w,z,&ldz,isuppz,work,&lwork,<_2=,,rwork\,&lrwork\,,\2>iwork,&liwork,&info);if(irange_capi==Py_None);else{irange[0]--;irange[1]--;}if(vrange_capi==Py_None);else {capi_w_tmp-\>dimensions[0]=capi_z_tmp-\>dimensions[1]=m;/*capi_z_tmp-\>strides[0]=m*capi_z_tmp-\>descr-\>elsize;*/} Nils From pearu at scipy.org Fri Feb 24 09:33:31 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 24 Feb 2006 08:33:31 -0600 (CST) Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) In-Reply-To: <43FF23F5.70503@mecha.uni-stuttgart.de> References: <43FF1066.4080704@mecha.uni-stuttgart.de> <43FF23F5.70503@mecha.uni-stuttgart.de> Message-ID: On Fri, 24 Feb 2006, Nils Wagner wrote: > The next issues are > > 1. How do I handle JPVT ? > > 2. How do I find the size of LWORK in *geqp3 ? 
Read the headers of ?geqp3.f files where the arguments of related subroutines are explained. > I am willing to learn it :-) > but the contents of flapack_esv.pyf.src is somehow cryptic in many > places e.g. > > callstatement > if(irange_capi==Py_None);else{irange[0]++;irange[1]++;}(*f2py_func)((compute_v?"V":"N"),(vrange_capi==Py_None?(irange_capi==Py_None?"A":"I"):"V"),(lower?"L":"U"),&n,a,&n,vrange,vrange+1,irange,irange+1,&atol,&m,w,z,&ldz,isuppz,work,&lwork,<_2=,,rwork\,&lrwork\,,\2>iwork,&liwork,&info);if(irange_capi==Py_None);else{irange[0]--;irange[1]--;}if(vrange_capi==Py_None);else > {capi_w_tmp-\>dimensions[0]=capi_z_tmp-\>dimensions[1]=m;/*capi_z_tmp-\>strides[0]=m*capi_z_tmp-\>descr-\>elsize;*/} That is simple C code in one line and takes care of the offset between C and Fortran array indices: if (irange_capi==Py_None) ; else { irange[0]++; irange[1]++; } (*f2py_func)((compute_v?"V":"N"), (vrange_capi==Py_None?(irange_capi==Py_None?"A":"I"):"V"), (lower?"L":"U"), &n,a,&n,vrange,vrange+1,irange,irange+1,&atol,&m, w,z,&ldz,isuppz,work,&lwork, <_2=,,rwork\,&lrwork\,,\2>iwork,&liwork,&info); if (irange_capi==Py_None) ; else { irange[0]--; irange[1]--; } if (vrange_capi==Py_None) ; else { capi_w_tmp-\>dimensions[0]=capi_z_tmp-\>dimensions[1]=m; /*capi_z_tmp-\>strides[0]=m*capi_z_tmp-\>descr-\>elsize;*/ } See also numpy.distutils.from_template.__doc__. Pearu From nwagner at mecha.uni-stuttgart.de Fri Feb 24 10:51:17 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 24 Feb 2006 16:51:17 +0100 Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) In-Reply-To: References: <43FF1066.4080704@mecha.uni-stuttgart.de> <43FF23F5.70503@mecha.uni-stuttgart.de> Message-ID: <43FF2B75.8040707@mecha.uni-stuttgart.de> Pearu Peterson wrote: >On Fri, 24 Feb 2006, Nils Wagner wrote: > > >>The next issues are >> >>1. How do I handle JPVT ? >> >>2. How do I find the size of LWORK in *geqp3 ? >> > >Read the headers of ?geqp3.f files where the arguments of related >subroutines are explained. > > Sure. I mean the special handling inside the wrapper. I found several ways for different routines to declare _lwork <_lwork=3*n-1,\0,2*n-1,\2> integer optional,intent(in),depend(n) :: lwork=<_lwork> check(<_lwork>\<=lwork) lwork ! <_lwork=(compute_vl||compute_vr)?4*n:3*n,\0,2*n,\2> integer optional,intent(in),depend(n,compute_vl,compute_vr) :: lwork=<_lwork> check(lwork>=<_lwork>) :: lwork ! <_lwork=3*minmn+MAX(2*minmn\,MAX(maxmn\,nrhs)),\0,2*minmn+MAX(maxmn\,nrhs),\2> integer optional,intent(in),depend(nrhs,minmn,maxmn), & check(lwork>=1) & :: lwork=<_lwork> !check(lwork>=<_lwork>) ! <_lwork=n,\0,\0,\0> integer optional,intent(in),depend(n),check(lwork\>=<_lwork>) :: lwork=<_lwork> dimension(lwork),intent(hide,cache),depend(lwork) :: work Which way is suitable here ? Nils >>I am willing to learn it :-) >>but the contents of flapack_esv.pyf.src is somehow cryptic in many >>places e.g. 
>> >> callstatement >>if(irange_capi==Py_None);else{irange[0]++;irange[1]++;}(*f2py_func)((compute_v?"V":"N"),(vrange_capi==Py_None?(irange_capi==Py_None?"A":"I"):"V"),(lower?"L":"U"),&n,a,&n,vrange,vrange+1,irange,irange+1,&atol,&m,w,z,&ldz,isuppz,work,&lwork,<_2=,,rwork\,&lrwork\,,\2>iwork,&liwork,&info);if(irange_capi==Py_None);else{irange[0]--;irange[1]--;}if(vrange_capi==Py_None);else >>{capi_w_tmp-\>dimensions[0]=capi_z_tmp-\>dimensions[1]=m;/*capi_z_tmp-\>strides[0]=m*capi_z_tmp-\>descr-\>elsize;*/} >> > >That is simple C code in one line and takes care of the offset between C >and Fortran array indices: > >if (irange_capi==Py_None) > ; >else { > irange[0]++; > irange[1]++; >} >(*f2py_func)((compute_v?"V":"N"), > (vrange_capi==Py_None?(irange_capi==Py_None?"A":"I"):"V"), > (lower?"L":"U"), > &n,a,&n,vrange,vrange+1,irange,irange+1,&atol,&m, > w,z,&ldz,isuppz,work,&lwork, > <_2=,,rwork\,&lrwork\,,\2>iwork,&liwork,&info); >if (irange_capi==Py_None) > ; >else { > irange[0]--; > irange[1]--; >} >if (vrange_capi==Py_None) > ; >else { > capi_w_tmp-\>dimensions[0]=capi_z_tmp-\>dimensions[1]=m; > /*capi_z_tmp-\>strides[0]=m*capi_z_tmp-\>descr-\>elsize;*/ >} > >See also numpy.distutils.from_template.__doc__. > >Pearu > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > From pearu at scipy.org Fri Feb 24 10:00:21 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 24 Feb 2006 09:00:21 -0600 (CST) Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) In-Reply-To: <43FF2B75.8040707@mecha.uni-stuttgart.de> References: <43FF1066.4080704@mecha.uni-stuttgart.de> <43FF23F5.70503@mecha.uni-stuttgart.de> <43FF2B75.8040707@mecha.uni-stuttgart.de> Message-ID: On Fri, 24 Feb 2006, Nils Wagner wrote: >> Read the headers of ?geqp3.f files where the arguments of related >> subroutines are explained. >> >> > Sure. I mean the special handling inside the wrapper. > I found several ways for different routines to declare _lwork > > <_lwork=3*n-1,\0,2*n-1,\2> > integer optional,intent(in),depend(n) :: lwork=<_lwork> > check(<_lwork>\<=lwork) lwork > > ! <_lwork=(compute_vl||compute_vr)?4*n:3*n,\0,2*n,\2> > integer optional,intent(in),depend(n,compute_vl,compute_vr) :: > lwork=<_lwork> > check(lwork>=<_lwork>) :: lwork > > ! > <_lwork=3*minmn+MAX(2*minmn\,MAX(maxmn\,nrhs)),\0,2*minmn+MAX(maxmn\,nrhs),\2> > integer optional,intent(in),depend(nrhs,minmn,maxmn), & > check(lwork>=1) & > :: lwork=<_lwork> > !check(lwork>=<_lwork>) > > ! <_lwork=n,\0,\0,\0> > integer optional,intent(in),depend(n),check(lwork\>=<_lwork>) :: > lwork=<_lwork> > dimension(lwork),intent(hide,cache),depend(lwork) :: work > > > Which way is suitable here ? Read numpy.distutils.from_template.__doc__ about the syntax of <..> blocks. And read ?geqp3.f to find out what are proper _lwork values for each version of geqp3 subroutine. Pearu From nwagner at mecha.uni-stuttgart.de Fri Feb 24 11:12:21 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 24 Feb 2006 17:12:21 +0100 Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) In-Reply-To: References: <43FF1066.4080704@mecha.uni-stuttgart.de> <43FF23F5.70503@mecha.uni-stuttgart.de> <43FF2B75.8040707@mecha.uni-stuttgart.de> Message-ID: <43FF3065.1050406@mecha.uni-stuttgart.de> Pearu Peterson wrote: >On Fri, 24 Feb 2006, Nils Wagner wrote: > > >>>Read the headers of ?geqp3.f files where the arguments of related >>>subroutines are explained. >>> >>> >>> >>Sure. 
I mean the special handling inside the wrapper. >>I found several ways for different routines to declare _lwork >> >><_lwork=3*n-1,\0,2*n-1,\2> >> integer optional,intent(in),depend(n) :: lwork=<_lwork> >> check(<_lwork>\<=lwork) lwork >> >> ! <_lwork=(compute_vl||compute_vr)?4*n:3*n,\0,2*n,\2> >> integer optional,intent(in),depend(n,compute_vl,compute_vr) :: >>lwork=<_lwork> >> check(lwork>=<_lwork>) :: lwork >> >> ! >><_lwork=3*minmn+MAX(2*minmn\,MAX(maxmn\,nrhs)),\0,2*minmn+MAX(maxmn\,nrhs),\2> >> integer optional,intent(in),depend(nrhs,minmn,maxmn), & >> check(lwork>=1) & >> :: lwork=<_lwork> >> !check(lwork>=<_lwork>) >> >>! <_lwork=n,\0,\0,\0> >> integer optional,intent(in),depend(n),check(lwork\>=<_lwork>) :: >>lwork=<_lwork> >> dimension(lwork),intent(hide,cache),depend(lwork) :: work >> >> >>Which way is suitable here ? >> > >Read numpy.distutils.from_template.__doc__ about the syntax of <..> >blocks. And read ?geqp3.f to find out what are proper _lwork values >for each version of geqp3 subroutine. > >Pearu > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > >>> numpy.distutils.from_template.__doc__ Traceback (most recent call last): File "", line 1, in ? AttributeError: 'module' object has no attribute 'from_template' >>> dir (numpy.distutils) ['ScipyTest', '_INSTALLED', '__builtins__', '__config__', '__doc__', '__file__', '__name__', '__path__', '__version__', 'ccompiler', 'exec_command', 'log', 'misc_util', 'test', 'unixccompiler'] Am I missing something ? Nils From pearu at scipy.org Fri Feb 24 10:19:56 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 24 Feb 2006 09:19:56 -0600 (CST) Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) In-Reply-To: <43FF3065.1050406@mecha.uni-stuttgart.de> References: <43FF1066.4080704@mecha.uni-stuttgart.de> <43FF23F5.70503@mecha.uni-stuttgart.de> <43FF3065.1050406@mecha.uni-stuttgart.de> Message-ID: On Fri, 24 Feb 2006, Nils Wagner wrote: >> Read numpy.distutils.from_template.__doc__ about the syntax of <..> >> blocks. And read ?geqp3.f to find out what are proper _lwork values >> for each version of geqp3 subroutine. >> >>>> numpy.distutils.from_template.__doc__ > Traceback (most recent call last): > File "", line 1, in ? > AttributeError: 'module' object has no attribute 'from_template' > >>>> dir (numpy.distutils) > ['ScipyTest', '_INSTALLED', '__builtins__', '__config__', '__doc__', > '__file__', '__name__', '__path__', '__version__', 'ccompiler', > 'exec_command', 'log', 'misc_util', 'test', 'unixccompiler'] > > Am I missing something ? Yes. Have you tried >>> import numpy.distutils.from_template as m >>> print m.__doc__ or >>> help('numpy.distutils.from_template') or $ pydoc numpy.distutils.from_template ? Pearu From strawman at astraw.com Fri Feb 24 14:08:49 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 24 Feb 2006 11:08:49 -0800 Subject: [SciPy-user] Do I need to get and install LAPACK if I have ATLAS? In-Reply-To: <17406.28555.422670.792192@montanaro.dyndns.org> References: <17404.31537.881323.594234@montanaro.dyndns.org> <17404.32847.834057.300668@montanaro.dyndns.org> <17404.35559.213728.466489@montanaro.dyndns.org> <43FC9F3B.9070009@gmail.com> <17406.28555.422670.792192@montanaro.dyndns.org> Message-ID: <43FF59C1.8030204@astraw.com> skip at pobox.com wrote: > Robert> I'm impressed. I would think ar(1) should be fairly stable on any platform. > >Me too. I eventually got it to work. 
Dunno what the problem was. We have >Solaris 8/9/10 so we have to do our compiles and links on 8 for backward >compatibility. I ran the ar(1) command on 10 and it worked. I think the >core dump was on 8. > > Robert> Do you need an optimized BLAS/LAPACK, or will the reference > Robert> implementations do? > >Maybe. I had so much time invested in ATLAS wrangling, so I kept at it. I >now have ATLAS+LAPACK+numpy+scipy built with GCC 3.4.1 and Python 2.4.2 >installed and apparently working okay on Solaris. I added some terse >directions on the wiki at work for others who may one day trod the same >harrowing path I've been down. Is there a public wiki where I can submit a >suitably site-independent version of these instructions? > It would be great if you could update http://www.scipy.org which is now a wiki. I suggest creating the page http://www.scipy.org/Installing_SciPy/Solaris Cheers! Andrew From Doug.LATORNELL at mdsinc.com Fri Feb 24 16:53:29 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Fri, 24 Feb 2006 13:53:29 -0800 Subject: [SciPy-user] Confused about enhanced-scalar dtypes Message-ID: <34090E25C2327C4AA5D276799005DDE0E35178@SMDMX0501.mds.mdsinc.com> I built NumPy from SVN this morning and got a surprise concerning dtype arguments: In [24]: import numpy In [25]: numpy.__version__ Out[25]: '0.9.6.2165' In [26]: a = numpy.fromfile("profiles-O0", dtype="float", sep=' ') ------------------------------------------------------------------------ --- exceptions.TypeError Traceback (most recent call last) /ocean/dlatorne/SoG/dlatornell06/profiles/ TypeError: data type not understood In [27]: a = numpy.fromfile("profiles-O0", dtype="float_", sep=' ') In [28]: a[:5] Out[28]: array([ 282.8318, 27.2176, 20.932 , 5.9051, 0. ]) But on a several week old build on an other machine: In [1]: import numpy In [2]: numpy.__version__ Out[2]: '0.9.3.1868' In [3]: a = numpy.fromfile("test", dtype="float", sep=" ") In [4]: a = numpy.fromfile("test", dtype="float_", sep=" ") ------------------------------------------------------------------------ --- exceptions.TypeError Traceback (most recent call last) /home/doug/ TypeError: data type not understood In [5]: Based on the tip in the middle of pg 19 of my copy of the NumPy book, I expected both dtype="float" and dtype="float_" to work. I know there has been a bunch of work lately on things related to dtype, but I don't recall seeing anything about breaking the equivalence of the enhanced-scalar types (like float_) and the corresponding standard Python types (float, in this case). Did I miss, or misunderstand something, get unluck with the timing of my checkout this morning, or is this a bug? Doug This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. 
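A minimal sketch of the two dtype spellings compared in the message above, side by side. The file name "profiles.txt" is made up for illustration, and exactly which bare strings (such as "float") are accepted has varied between 0.9.x snapshots, so passing the Python type object or the numpy scalar type sidesteps the string question:

import numpy
# whitespace-separated text data; the file name is hypothetical
a = numpy.fromfile("profiles.txt", dtype=float, sep=" ")          # Python type object
b = numpy.fromfile("profiles.txt", dtype=numpy.float_, sep=" ")   # numpy scalar type
print a.dtype, b.dtype                                            # both should report float64

Both calls should produce double-precision arrays; only the spelling of the dtype argument differs.
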
From miles at lubin.us Fri Feb 24 16:58:52 2006 From: miles at lubin.us (Miles Lubin) Date: Fri, 24 Feb 2006 16:58:52 -0500 Subject: [SciPy-user] using tcdf Message-ID: <43FF819C.2060200@lubin.us> Hi, I've looked through the documentation and the help for the stats module, but I haven't be able to find any examples of how to use the cdf function. Let's say I want n degrees of freedom, and want the cumulative probability from the left tail to -x (standard deviations) on a t-distribution, how would I generate it? On the TI-83+ there's a simple tcdf() function, but I'm lost on how to do it in scipy. Thanks, Miles From cookedm at physics.mcmaster.ca Fri Feb 24 17:27:33 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 24 Feb 2006 17:27:33 -0500 Subject: [SciPy-user] using tcdf In-Reply-To: <43FF819C.2060200@lubin.us> (Miles Lubin's message of "Fri, 24 Feb 2006 16:58:52 -0500") References: <43FF819C.2060200@lubin.us> Message-ID: Miles Lubin writes: > Hi, > > I've looked through the documentation and the help for the stats module, > but I haven't be able to find any examples of how to use the cdf > function. Let's say I want n degrees of freedom, and want the cumulative > probability from the left tail to -x (standard deviations) on a > t-distribution, how would I generate it? > On the TI-83+ there's a simple tcdf() function, but I'm lost on how to > do it in scipy. The distributions in scipy.stats are objects that have methods that calculate the interesting stuff. In your case, I think you want scipy.stats.t.cdf(-x, n) help(scipy.stats.t) will give you more info about what's implemented. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From brian.lee.hawthorne at gmail.com Fri Feb 24 19:08:20 2006 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Fri, 24 Feb 2006 16:08:20 -0800 Subject: [SciPy-user] in-place data permutation In-Reply-To: <796269930602241517l6aebac51lc3d8725e6531e747@mail.gmail.com> References: <796269930602241517l6aebac51lc3d8725e6531e747@mail.gmail.com> Message-ID: <796269930602241608s1bf0e0e9rcd5c8b44f225d4a5@mail.gmail.com> Hello, I'm looking for an efficient implementation of permuting the positions of data in an array which doesn't require copying the array. Specifically, I am shifting an array by half its length along the 0th dimension, in order to put "textbook" ordered frequency data into the packed format expected by inverse_fft. I'm surprised this is not already implemented in numpy... seems like it would be a common need. Am I missing something obvious? Brian Hawthorne UC Berkeley Brain Imaging Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.lee.hawthorne at gmail.com Fri Feb 24 19:09:42 2006 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Fri, 24 Feb 2006 16:09:42 -0800 Subject: [SciPy-user] in-place data permutation In-Reply-To: <796269930602241517l6aebac51lc3d8725e6531e747@mail.gmail.com> References: <796269930602241517l6aebac51lc3d8725e6531e747@mail.gmail.com> Message-ID: <796269930602241609k5b515e5etc90928a671dee733@mail.gmail.com> Hello, I'm looking for an efficient implementation of permuting the positions of data in an array which doesn't require copying the array. 
Specifically, I am shifting an array by half its length along the 0th dimension, in order to put "textbook" ordered frequency data into the packed format expected by inverse_fft. I'm surprised this is not already implemented in numpy... seems like it would be a common need. Am I missing something obvious? Brian Hawthorne UC Berkeley Brain Imaging Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Fri Feb 24 19:13:38 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 24 Feb 2006 17:13:38 -0700 Subject: [SciPy-user] (no subject) In-Reply-To: <36AEE679-F443-4351-A2D1-6713FACFF471@imim.es> References: <43FE1D19.7020000@ieee.org> <0B567BCB-3E42-434A-B5C4-2E84CAFC68B7@imim.es> <43FEFD14.5090800@ieee.org> <36AEE679-F443-4351-A2D1-6713FACFF471@imim.es> Message-ID: <43FFA132.2030504@ee.byu.edu> Adri?n L?pez wrote: >Great !!! Using some similar code, it was running OK on a machine >while in another was giving me a strange error. > >lsoda-- repeated calls with istate = 1 and tout = t (=r1) > In above message, R1 = 0.0000000000000E+00 >lsoda-- run aborted.. apparent infinite loop > >Was yours similar? >At any rate, if I run > >svn co http://svn.scipy.org/svn/scipy/trunk scipy > > Actually you just need to install a new numpy. (But, because current svn numpy as a changed C-API) you will also have to install scipy too). So, I would do svn co http://svn.scipy.org/svn/numpy/trunk numpy cd numpy python setup.py install Then svn co http://svn.scipy.org/svn/scipy/trunk scipy python setup.py install (But, you will need ATLAS installed for the second one to work --- and the first one to give optimized linear-algebra). -Travis From oliphant at ee.byu.edu Fri Feb 24 19:17:05 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 24 Feb 2006 17:17:05 -0700 Subject: [SciPy-user] Confused about enhanced-scalar dtypes In-Reply-To: <34090E25C2327C4AA5D276799005DDE0E35178@SMDMX0501.mds.mdsinc.com> References: <34090E25C2327C4AA5D276799005DDE0E35178@SMDMX0501.mds.mdsinc.com> Message-ID: <43FFA201.5070803@ee.byu.edu> LATORNELL, Doug wrote: >I built NumPy from SVN this morning and got a surprise concerning dtype >arguments: > >In [24]: import numpy > >In [25]: numpy.__version__ >Out[25]: '0.9.6.2165' > >In [26]: a = numpy.fromfile("profiles-O0", dtype="float", sep=' ') >------------------------------------------------------------------------ >--- >exceptions.TypeError Traceback (most >recent call last) > >/ocean/dlatorne/SoG/dlatornell06/profiles/ > >TypeError: data type not understood > > I don't recommend using strings like "float". Just use float (which is a type-object). I don't think "float" is actually an accepted string. There are a few strings that are recognized and more could be added, but I would just use dtype=float. >Based on the tip in the middle of pg 19 of my copy of the NumPy book, I >expected both dtype="float" and dtype="float_" to work. > > dtype=float and dtype=numpy.float_ are similar. Note the absence of "" >I know there has been a bunch of work lately on things related to dtype, >but I don't recall seeing anything about breaking the equivalence of the >enhanced-scalar types (like float_) and the corresponding standard >Python types (float, in this case). Did I miss, or misunderstand >something, get unluck with the timing of my checkout this morning, or is >this a bug? > > I'm surprised "float" worked before and then doesn't work now. But, there have been enough changes between 0.9.3 and 0.9.5 that it's possible. 
But, don't do it that way :-) -Travis From oliphant at ee.byu.edu Fri Feb 24 19:18:17 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 24 Feb 2006 17:18:17 -0700 Subject: [SciPy-user] in-place data permutation In-Reply-To: <796269930602241608s1bf0e0e9rcd5c8b44f225d4a5@mail.gmail.com> References: <796269930602241517l6aebac51lc3d8725e6531e747@mail.gmail.com> <796269930602241608s1bf0e0e9rcd5c8b44f225d4a5@mail.gmail.com> Message-ID: <43FFA249.8060703@ee.byu.edu> Brian Hawthorne wrote: > Hello, > I'm looking for an efficient implementation of permuting the positions > of data in > an array which doesn't require copying the array. Specifically, I am > shifting an > array by half its length along the 0th dimension, in order to put > "textbook" ordered > frequency data into the packed format expected by inverse_fft. I'm > surprised > this is not already implemented in numpy... seems like it would be a > common > need. Am I missing something obvious? dft.fftfreq dft.fftshift dft.ifftshift -Travis From nwagner at mecha.uni-stuttgart.de Sat Feb 25 02:54:43 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Sat, 25 Feb 2006 08:54:43 +0100 Subject: [SciPy-user] Wrapping geqp3 (Rank revealing QR) In-Reply-To: References: <43FF1066.4080704@mecha.uni-stuttgart.de> <43FF23F5.70503@mecha.uni-stuttgart.de> <43FF3065.1050406@mecha.uni-stuttgart.de> Message-ID: On Fri, 24 Feb 2006 09:19:56 -0600 (CST) Pearu Peterson wrote: > > > On Fri, 24 Feb 2006, Nils Wagner wrote: > >>> Read numpy.distutils.from_template.__doc__ about the >>>syntax of <..> >>> blocks. And read ?geqp3.f to find out what are proper >>>_lwork values >>> for each version of geqp3 subroutine. >>> >>>>> numpy.distutils.from_template.__doc__ >> Traceback (most recent call last): >> File "", line 1, in ? >> AttributeError: 'module' object has no attribute >>'from_template' >> >>>>> dir (numpy.distutils) >> ['ScipyTest', '_INSTALLED', '__builtins__', >>'__config__', '__doc__', >> '__file__', '__name__', '__path__', '__version__', >>'ccompiler', >> 'exec_command', 'log', 'misc_util', 'test', >>'unixccompiler'] >> >> Am I missing something ? > > Yes. Have you tried > > >>> import numpy.distutils.from_template as m > >>> print m.__doc__ > > or > > >>> help('numpy.distutils.from_template') > > or > > $ pydoc numpy.distutils.from_template > > ? > > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user Hi Pearu, Thank you for your hint. I will continue my work on Monday. Thank you again Nils From pgmdevlist at mailcan.com Sat Feb 25 18:36:19 2006 From: pgmdevlist at mailcan.com (pgmdevlist at mailcan.com) Date: Sat, 25 Feb 2006 18:36:19 -0500 Subject: [SciPy-user] Messing with missing values Message-ID: <200602251836.20406.pgmdevlist@mailcan.com> Folks, Most of the data I work with have missing values, and I rely on 'ma' a lot. Unfortunately, most of the numpy/scipy functions don't handle missing values nicely. Not a problem I thought, I just have to adapt the functions I need. I have 2 options: wrapping the initial functions in tests, or recode the initial functions. * According to your experience, which is the most efficient way to go ? * I have a function that outputs either a float or MA.masked. I call this function recursively, appending the result to a list, and then trying to process the list as an array. 
How could I mask the values corresponding to MA.masked in the final list, without having to check every single element? Thanks for your ideas -- Pierre GERARD-MARCHANT Driftmier Engineering Center UGA Athens, GA 30602 -- "It's more fun to compute" - Kraftwerk From ckkart at hoc.net Sun Feb 26 00:54:35 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sun, 26 Feb 2006 14:54:35 +0900 Subject: [SciPy-user] D1MACH not set Message-ID: <4401429B.8050200@hoc.net> Hi, using a recent version of numpy/scipy from svn I get the following error when calling any of the optimize.fmin_* minimizers. tnc: Version 1.2, (c) 2002-2003, Jean-Sebastien Roy (js at jeannot.org) tnc: RCS ID: @(#) $Jeannot: tnc.c,v 1.201 2004/04/02 22:36:25 js Exp $ Adjust D1MACH by uncommenting data statements appropriate for your machine. STOP 779 In [2]: numpy.__version__ Out[2]: '0.9.6.2168' In [4]: scipy.__version__ Out[4]: '0.4.7.1620' Regards, Christian From schofield at ftw.at Sun Feb 26 07:36:04 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 26 Feb 2006 13:36:04 +0100 Subject: [SciPy-user] scipy.org site down? Message-ID: <4401A0B4.7020208@ftw.at> It seems www.scipy.org and svn.scipy.org are both down again. They are replying to pings, but I get a 502 Bad Gateway error when trying to access www.scipy.org. Can anyone confirm this? -- Ed From oliver.tomic at matforsk.no Sun Feb 26 07:38:25 2006 From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no) Date: Sun, 26 Feb 2006 13:38:25 +0100 Subject: [SciPy-user] scipy.org site down? In-Reply-To: <4401A0B4.7020208@ftw.at> Message-ID: Same thing here. OLI scipy-user-bounces at scipy.net wrote on 26.02.2006 13:36:04: > > It seems www.scipy.org and svn.scipy.org are both down again. They are > replying to pings, but I get a 502 Bad Gateway error when trying to > access www.scipy.org. Can anyone confirm this? > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From nvf at MIT.EDU Sun Feb 26 07:45:22 2006 From: nvf at MIT.EDU (Nick Fotopoulos) Date: Sun, 26 Feb 2006 07:45:22 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 30, Issue 43 In-Reply-To: References: Message-ID: <05DDFF93-3D3D-4F4B-A2B6-F05E5BB5F95F@mit.edu> Confirmed. Nick On Feb 26, 2006, at 6:35 AM, scipy-user-request at scipy.net wrote: > Date: Sun, 26 Feb 2006 13:36:04 +0100 > From: Ed Schofield > Subject: [SciPy-user] scipy.org site down? > To: SciPy Users List > Message-ID: <4401A0B4.7020208 at ftw.at> > Content-Type: text/plain; charset=UTF-8 > > > It seems www.scipy.org and svn.scipy.org are both down again. They > are > replying to pings, but I get a 502 Bad Gateway error when trying to > access www.scipy.org. Can anyone confirm this? > > -- Ed From steve at shrogers.com Sun Feb 26 10:27:33 2006 From: steve at shrogers.com (Steven H. Rogers) Date: Sun, 26 Feb 2006 08:27:33 -0700 Subject: [SciPy-user] SciPy-user Digest, Vol 30, Issue 43 In-Reply-To: <05DDFF93-3D3D-4F4B-A2B6-F05E5BB5F95F@mit.edu> References: <05DDFF93-3D3D-4F4B-A2B6-F05E5BB5F95F@mit.edu> Message-ID: <4401C8E5.9010109@shrogers.com> The old Plone SciPy site (http://old.scipy.org/) is up. Is it on a different server? Steve Nick Fotopoulos wrote: > Confirmed. > > Nick > > On Feb 26, 2006, at 6:35 AM, scipy-user-request at scipy.net wrote: > >> Date: Sun, 26 Feb 2006 13:36:04 +0100 >> From: Ed Schofield >> Subject: [SciPy-user] scipy.org site down? 
>> To: SciPy Users List >> Message-ID: <4401A0B4.7020208 at ftw.at> >> Content-Type: text/plain; charset=UTF-8 >> >> >> It seems www.scipy.org and svn.scipy.org are both down again. They >> are >> replying to pings, but I get a 502 Bad Gateway error when trying to >> access www.scipy.org. Can anyone confirm this? >> >> -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense." -- John McCarthy From travis at enthought.com Sun Feb 26 11:14:00 2006 From: travis at enthought.com (Travis N. Vaught) Date: 26 Feb 2006 10:14:00 -0600 Subject: [SciPy-user] scipy.org site down? Message-ID: <3223793675.177344@www.chatteremail.com> Looking into it--thanks for pointing this out. -----Original Message----- From: oliver.tomic at matforsk.no Date: Sunday, Feb 26, 2006 6:39 am Subject: Re: [SciPy-user] scipy.org site down? Same thing here. OLI scipy-user-bounces at scipy.net wrote on 26.02.2006 13:36:04: > It seems www.scipy.org and svn.scipy.org are both down again. They are replying to pings, but I get a 502 Bad Gateway error when trying to access www.scipy.org. Can anyone confirm this? > -- Ed > _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From travis at enthought.com Sun Feb 26 13:06:07 2006 From: travis at enthought.com (Travis N. Vaught) Date: Sun, 26 Feb 2006 12:06:07 -0600 Subject: [SciPy-user] scipy.org site down? In-Reply-To: <3223793675.177344@www.chatteremail.com> References: <3223793675.177344@www.chatteremail.com> Message-ID: <4401EE0F.50803@enthought.com> I just started up Apache (not running) and it seems to work fine now. Thanks, Travis Travis N. Vaught wrote: > Looking into it--thanks for pointing this out. > > > -----Original Message----- > From: oliver.tomic at matforsk.no > Date: Sunday, Feb 26, 2006 6:39 am > Subject: Re: [SciPy-user] scipy.org site down? > > Same thing here. > > OLI > > scipy-user-bounces at scipy.net wrote on 26.02.2006 13:36:04: > > > It seems www.scipy.org and svn.scipy.org are both down again. They are > replying to pings, but I get a 502 Bad Gateway error when trying to > access www.scipy.org. Can anyone confirm this? > > >> -- Ed >> > > >> _______________________________________________ >> > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- ........................ Travis N. Vaught CEO Enthought, Inc. http://www.enthought.com ........................ From pgmdevlist at mailcan.com Mon Feb 27 00:29:46 2006 From: pgmdevlist at mailcan.com (pgmdevlist at mailcan.com) Date: Mon, 27 Feb 2006 00:29:46 -0500 Subject: [SciPy-user] Slow medians ? 
Message-ID: <200602270029.47061.pgmdevlist@mailcan.com> Folks, I'm trying to implement by brute force some median-based linear regression techinques, requiring a lot of median computations (for n elements, I have about n*n medians to compute). And as I have a lot of data to process that way, I'm a bit pecky about optimization. I've noticed that for 1D and 2D arrays, the basic numpy version of median is far faster than the nD version of scipy.stats (you can still choose to compute along any of the 2 axes with numpy.median by playing with transpose). Would it be useful to modify the scipy.stats version, or should it be left to the user ? And on related matter, numpy based this time: the numpy.median version seems a bit strange: what's the idea behind using `msort` twice (around line 810 in the 0.9.5 version) ? Thanks for you feedback in advance -- Pierre GM From nwagner at mecha.uni-stuttgart.de Mon Feb 27 03:11:58 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 27 Feb 2006 09:11:58 +0100 Subject: [SciPy-user] Sparse matrix file format converter Message-ID: <4402B44E.2080407@mecha.uni-stuttgart.de> I thought it might be of interest... From: "Mark Hoemmen" Date: Fri, 24 Feb 2006 14:38:06 -0800 Subject: Sparse matrix file format converter As part of my work for the UC Berkeley Benchmarking and Optimization Group ("BeBOP"), I've written the BeBOP Sparse Matrix Converter, a library and standalone utility for converting between Harwell-Boeing, MatrixMarket and Matlab (ASCII) sparse matrix file formats. I am releasing the library under a BSD license. Please try it out, and feel free to send me suggestions at mhoemmen at cs.berkeley.edu. Here is the website for the converter: http://www.cs.berkeley.edu/~mhoemmen/bebop/smc.html This library was put together with help from Richard Vuduc, Rajesh Nishtala, Ankit Jain, and other BeBOP colleagues. Mark Hoemmen mhoemmen at cs.berkeley.edu From schofield at ftw.at Mon Feb 27 04:21:26 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 27 Feb 2006 10:21:26 +0100 Subject: [SciPy-user] New sparse matrix functionality Message-ID: <4402C496.90204@ftw.at> Hi all, I'd created a new SVN branch with some new functionality and changes to sparse matrices. This post describes what's new and what's different versus the main branch. If you have an existing SVN tree, you can pull in my branch with the command "svn switch http://svn.scipy.org/svn/scipy/branches/ejs". One of the conclusions of Jonathan Guyer's comparison between scipy.sparse and PySparse in November (http://article.gmane.org/gmane.comp.python.scientific.devel/3187/match=scipy+pysparse+wiki and http://old.scipy.org/wikis/featurerequests/SparseSolvers) was that SciPy's support for efficient construction of sparse matrices is weak. My patch adds a new data type, lil_matrix, that stores non-zeros as a list of sorted Python lists. For the simple benchmark of creating a new 10^3 x 10^5 matrix with 10^4 non-zero elements in random locations and converting to CSR and CSC matrices, the lil_matrix format is slightly more than twice as fast as dok_matrix. It's a row-wise format, so conversion to CSR is very fast, whereas conversion to CSC goes through CSR internally. Index lookups use binary searches, so they take log time. I think the implementation is already complete enough for most uses -- so please try it out and tell me how you go! Another of Jonathan's observations was that SciPy had no support for slicing matrices with A[i, :] like in PySparse. 
My patch adds support for slice notation and NumPy-style fancy indexing to dok_matrix and lil_matrix objects. With this it's possible to build sparse matrices quickly -- for example: >>> a = lil_matrix((3,5)) >>> a[1,[2,3,4]] = range(3) >>> a[2,:] = 3 * a[1,:] A third new feature is a .sum() method, which takes a single axis argument like in NumPy. My branch also changes one aspect of the existing behaviour: the todense() method now returns a dense (NumPy) matrix, rather than a dense array. Converting to a dense array is now available under a toarray() method (or .A attribute) instead. The rationale behind this change is to emphasize that sparse matrices are closer to dense matrices than to dense arrays in their attributes (e.g. .T for transpose) and behaviour (e.g. * means inner product). I've also been careful to make multiplication between sparse matrices and dense row or column vectors (matrix objects with unit length or height) return matrices of the correct dimensions, rather than arrays. Several unit tests relied on the old behaviour, and I've changed these accordingly. Most of these test changes are just simplifications -- for example assert_array_equal((a*c).todense(), a.todense() * c) instead of assert_array_equal((a*c).todense(), dot(a.todense(), c)) -- but I'd appreciate some criticism and feedback on which behaviour people prefer. These changes have highlighted a problem present in both the main trunk and my branch: that multiplying a dense matrix 'a' by a sparse matrix 'b' is not possible using the syntax 'a*b'. I'll follow this up with a proposal to numpy-discussion on how we can solve this. -- Ed From nwagner at mecha.uni-stuttgart.de Mon Feb 27 04:42:37 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 27 Feb 2006 10:42:37 +0100 Subject: [SciPy-user] New sparse matrix functionality In-Reply-To: <4402C496.90204@ftw.at> References: <4402C496.90204@ftw.at> Message-ID: <4402C98D.3020607@mecha.uni-stuttgart.de> Ed Schofield wrote: >Hi all, > >I'd created a new SVN branch with some new functionality and changes to >sparse matrices. This post describes what's new and what's different >versus the main branch. If you have an existing SVN tree, you can pull >in my branch with the command "svn switch >http://svn.scipy.org/svn/scipy/branches/ejs". > >One of the conclusions of Jonathan Guyer's comparison between >scipy.sparse and PySparse in November >(http://article.gmane.org/gmane.comp.python.scientific.devel/3187/match=scipy+pysparse+wiki >and http://old.scipy.org/wikis/featurerequests/SparseSolvers) was that >SciPy's support for efficient construction of sparse matrices is weak. >My patch adds a new data type, lil_matrix, that stores non-zeros as a >list of sorted Python lists. For the simple benchmark of creating a new >10^3 x 10^5 matrix with 10^4 non-zero elements in random locations and >converting to CSR and CSC matrices, the lil_matrix format is slightly >more than twice as fast as dok_matrix. It's a row-wise format, so >conversion to CSR is very fast, whereas conversion to CSC goes through >CSR internally. Index lookups use binary searches, so they take log >time. I think the implementation is already complete enough for most >uses -- so please try it out and tell me how you go! > >Another of Jonathan's observations was that SciPy had no support for >slicing matrices with A[i, :] like in PySparse. My patch adds support >for slice notation and NumPy-style fancy indexing to dok_matrix and >lil_matrix objects. 
With this it's possible to build sparse matrices >quickly -- for example: > > >>>>a = lil_matrix((3,5)) >>>>a[1,[2,3,4]] = range(3) >>>>a[2,:] = 3 * a[1,:] >>>> > >A third new feature is a .sum() method, which takes a single axis >argument like in NumPy. > >My branch also changes one aspect of the existing behaviour: the >todense() method now returns a dense (NumPy) matrix, rather than a dense >array. Converting to a dense array is now available under a toarray() >method (or .A attribute) instead. The rationale behind this change is >to emphasize that sparse matrices are closer to dense matrices than to >dense arrays in their attributes (e.g. .T for transpose) and behaviour >(e.g. * means inner product). I've also been careful to make >multiplication between sparse matrices and dense row or column vectors >(matrix objects with unit length or height) return matrices of the >correct dimensions, rather than arrays. Several unit tests relied on >the old behaviour, and I've changed these accordingly. Most of these >test changes are just simplifications -- for example > > assert_array_equal((a*c).todense(), a.todense() * c) > >instead of > > assert_array_equal((a*c).todense(), dot(a.todense(), c)) > >-- but I'd appreciate some criticism and feedback on which behaviour >people prefer. > >These changes have highlighted a problem present in both the main trunk >and my branch: that multiplying a dense matrix 'a' by a sparse matrix >'b' is not possible using the syntax 'a*b'. I'll follow this up with a >proposal to numpy-discussion on how we can solve this. > >-- Ed > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Hi Ed, That sounds very interesting ! How do I install your branch ? And can I use the current implementation of sparse once I have installed your branch ? Nils From schofield at ftw.at Mon Feb 27 05:00:05 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 27 Feb 2006 11:00:05 +0100 Subject: [SciPy-user] New sparse matrix functionality In-Reply-To: <4402C98D.3020607@mecha.uni-stuttgart.de> References: <4402C496.90204@ftw.at> <4402C98D.3020607@mecha.uni-stuttgart.de> Message-ID: <4402CDA5.2090206@ftw.at> Nils Wagner wrote: > Hi Ed, > > That sounds very interesting ! > How do I install your branch ? > And can I use the current implementation of sparse once I have installed > your branch ? > You can switch to it using: svn switch http://svn.scipy.org/svn/scipy/branches/ejs from your current SVN directory. Then type "python setup.py install" as normal. To revert to the mainline, use: svn switch http://svn.scipy.org/svn/scipy/trunk and reinstall. -- Ed From cimrman3 at ntc.zcu.cz Mon Feb 27 05:23:32 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 27 Feb 2006 11:23:32 +0100 Subject: [SciPy-user] New sparse matrix functionality In-Reply-To: <4402C496.90204@ftw.at> References: <4402C496.90204@ftw.at> Message-ID: <4402D324.4090207@ntc.zcu.cz> Hi Ed, Ed Schofield wrote: > My branch also changes one aspect of the existing behaviour: the > todense() method now returns a dense (NumPy) matrix, rather than a dense > array. Converting to a dense array is now available under a toarray() > method (or .A attribute) instead. The rationale behind this change is > to emphasize that sparse matrices are closer to dense matrices than to > dense arrays in their attributes (e.g. .T for transpose) and behaviour > (e.g. * means inner product). 
I've also been careful to make > multiplication between sparse matrices and dense row or column vectors > (matrix objects with unit length or height) return matrices of the > correct dimensions, rather than arrays. Several unit tests relied on > the old behaviour, and I've changed these accordingly. Most of these > test changes are just simplifications -- for example > > assert_array_equal((a*c).todense(), a.todense() * c) > > instead of > > assert_array_equal((a*c).todense(), dot(a.todense(), c)) > -- but I'd appreciate some criticism and feedback on which behaviour > people prefer. well done! I think it's better your way - if a vector (i.e. 2D array) is put in, it is expected that a vector gous out as well. Do you also plan to add the c-based linked-list matrix as in PySparse (ll_mat.c there)? This could be even faster than using the Python lists (IMHO...). > These changes have highlighted a problem present in both the main trunk > and my branch: that multiplying a dense matrix 'a' by a sparse matrix > 'b' is not possible using the syntax 'a*b'. I'll follow this up with a > proposal to numpy-discussion on how we can solve this. I am very curious how to do this. Surely NumPy will have to be aware of existence of sparse matrix objects, right? r. From alec at obtext.com Mon Feb 27 08:01:39 2006 From: alec at obtext.com (Alec Edgington) Date: Mon, 27 Feb 2006 13:01:39 -0000 Subject: [SciPy-user] Missing 'plt' Message-ID: Hi, I'm hoping someone can help me get started with plotting using 'plt' in scipy. Aplogogies in advance for being an idiot. I'm running on up-to-date Windows XP; I've downloaded and installed python-2.4.2, numpy-0.9.5, wxPython-2.6 and scipy-0.4.6. But the 'plt' sub-package just doesn't seem to be there. Nor does gui_thread. So when I start up IDLE and try to follow the instructions in http://sg0.chem.upenn.edu/adam/howto/scipy/mini-plt.shtml, all that happens is: >>> import gui_thread Traceback (most recent call last): File "", line 1, in -toplevel- import gui_thread ImportError: No module named gui_thread >>> from scipy import * >>> from scipy import plt Traceback (most recent call last): File "", line 1, in -toplevel- from scipy import plt ImportError: cannot import name plt >>> I checked in C:\Python24\Lib\site-packages\scipy and found no folder called 'plt'. What am I doing wrong? Many thanks, Alec From martin.wiechert at gmx.de Mon Feb 27 08:36:12 2006 From: martin.wiechert at gmx.de (Martin Wiechert) Date: Mon, 27 Feb 2006 14:36:12 +0100 Subject: [SciPy-user] object members Message-ID: <200602271436.12659.martin.wiechert@gmx.de> Hi developers, any plans on when object members will be back? a = ndarray (shape = (10,), dtype = {'names': ['x'], 'formats': ['|O4']}) TypeError: fields with object members not yet supported. in numpy 0.9.5. Of course, this is better than the segfault in 0.9.4, but it would be quite inconvenient for my project to not have object members. My C code still produces dtypes with object members. Can I safely use them, as long as I make sure new arrays are properly initialised? Thanks, Martin. 
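Whether a workaround helps depends on how the object members are used, but a minimal sketch of the distinction the error message draws (object members inside a structured dtype versus a plain object array with no named fields) might look like the following. It is an assumption that plain object arrays built from an already initialised Python list are fully supported in this 0.9.5 snapshot, and the array contents here are made up:

import numpy
# the structured form that currently raises TypeError in 0.9.5:
#   numpy.ndarray(shape=(10,), dtype={'names': ['x'], 'formats': ['|O4']})
# a plain object array, no named fields, built from an initialised list:
a = numpy.array([None] * 10, dtype=object)
a[0] = {'x': 1.0}        # each element can hold an arbitrary Python object
print a[0]['x']
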
From ryanlists at gmail.com Mon Feb 27 09:29:29 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 27 Feb 2006 09:29:29 -0500 Subject: [SciPy-user] Missing 'plt' In-Reply-To: References: Message-ID: The documentation may not reflect this yet, but scipy recommends 2D plotting be handled by matplotlib: http://matplotlib.sourceforge.net/ and instead of dealing with gui_thread directly, use the ipython shell which has lots of other nice features: http://ipython.scipy.org/ launch ipython with the -pylab flag and it will handle all of your threading needs. In windows that most likely means looking at the properties of the desktop short cut and adding -pylab to the end of the part that actually launches ipython. In linux that means typing ipython -pylab at the command prompt. Ryan On 2/27/06, Alec Edgington wrote: > Hi, > > I'm hoping someone can help me get started with plotting using 'plt' in > scipy. Aplogogies in advance for being an idiot. > > I'm running on up-to-date Windows XP; I've downloaded and installed > python-2.4.2, numpy-0.9.5, wxPython-2.6 and scipy-0.4.6. > > But the 'plt' sub-package just doesn't seem to be there. Nor does > gui_thread. So when I start up IDLE and try to follow the instructions in > http://sg0.chem.upenn.edu/adam/howto/scipy/mini-plt.shtml, all that happens > is: > > >>> import gui_thread > > Traceback (most recent call last): > File "", line 1, in -toplevel- > import gui_thread > ImportError: No module named gui_thread > >>> from scipy import * > >>> from scipy import plt > > Traceback (most recent call last): > File "", line 1, in -toplevel- > from scipy import plt > ImportError: cannot import name plt > >>> > > I checked in C:\Python24\Lib\site-packages\scipy and found no folder called > 'plt'. > > What am I doing wrong? > > Many thanks, > > Alec > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From arnd.baecker at web.de Mon Feb 27 09:32:28 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 27 Feb 2006 15:32:28 +0100 (CET) Subject: [SciPy-user] Missing 'plt' In-Reply-To: References: Message-ID: Hi Alec, On Mon, 27 Feb 2006, Alec Edgington wrote: > Hi, > > I'm hoping someone can help me get started with plotting using 'plt' in > scipy. Aplogogies in advance for being an idiot. > > I'm running on up-to-date Windows XP; I've downloaded and installed > python-2.4.2, numpy-0.9.5, wxPython-2.6 and scipy-0.4.6. > > But the 'plt' sub-package just doesn't seem to be there. Nor does > gui_thread. So when I start up IDLE and try to follow the instructions in > http://sg0.chem.upenn.edu/adam/howto/scipy/mini-plt.shtml, all that happens > is: > > >>> import gui_thread > > Traceback (most recent call last): > File "", line 1, in -toplevel- > import gui_thread > ImportError: No module named gui_thread > >>> from scipy import * > >>> from scipy import plt > > Traceback (most recent call last): > File "", line 1, in -toplevel- > from scipy import plt > ImportError: cannot import name plt > >>> > > I checked in C:\Python24\Lib\site-packages\scipy and found no folder called > 'plt'. > > What am I doing wrong? Nothing - the "problem" is that gui_thread and all plot packages (i.e. `plt`, `xplt` and `gplt`) have been essentially taken out of the new scipy. See http://www.scipy.org/Cookbook under "Graphics" They still exist under sandbox, but their installation has to be done separately. 
Unless you heavily rely on special features of `plt` the general recommendation is to use matplotlib http://matplotlib.sourceforge.net/ See also http://www.scipy.org/Plotting_Tutorial http://www.scipy.org/Cookbook/Matplotlib http://matplotlib.sourceforge.net/screenshots.html for examples. HTH, Arnd From alec at obtext.com Mon Feb 27 10:04:22 2006 From: alec at obtext.com (Alec Edgington) Date: Mon, 27 Feb 2006 15:04:22 -0000 Subject: [SciPy-user] Missing 'plt' In-Reply-To: Message-ID: Ah, great, many thanks! Alec -----Original Message----- From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net]On Behalf Of Arnd Baecker Sent: 27 February 2006 14:32 To: SciPy Users List Subject: Re: [SciPy-user] Missing 'plt' > What am I doing wrong? Nothing - the "problem" is that gui_thread and all plot packages (i.e. `plt`, `xplt` and `gplt`) have been essentially taken out of the new scipy. See http://www.scipy.org/Cookbook under "Graphics" They still exist under sandbox, but their installation has to be done separately. Unless you heavily rely on special features of `plt` the general recommendation is to use matplotlib http://matplotlib.sourceforge.net/ See also http://www.scipy.org/Plotting_Tutorial http://www.scipy.org/Cookbook/Matplotlib http://matplotlib.sourceforge.net/screenshots.html for examples. HTH, Arnd From nwagner at mecha.uni-stuttgart.de Mon Feb 27 12:51:50 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 27 Feb 2006 18:51:50 +0100 Subject: [SciPy-user] New sparse matrix functionality In-Reply-To: <4402CDA5.2090206@ftw.at> References: <4402C496.90204@ftw.at> <4402C98D.3020607@mecha.uni-stuttgart.de> <4402CDA5.2090206@ftw.at> Message-ID: On Mon, 27 Feb 2006 11:00:05 +0100 Ed Schofield wrote: > Nils Wagner wrote: >> Hi Ed, >> >> That sounds very interesting ! >> How do I install your branch ? >> And can I use the current implementation of sparse once >>I have installed >> your branch ? >> > You can switch to it using: > svn switch >http://svn.scipy.org/svn/scipy/branches/ejs > from your current SVN directory. Then type "python >setup.py install" as > normal. > > To revert to the mainline, use: > svn switch http://svn.scipy.org/svn/scipy/trunk > and reinstall. > > -- Ed > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user Hi Ed, Just curious. Is it planned to integrate it in the main tree soon ? Nils From Doug.LATORNELL at mdsinc.com Mon Feb 27 12:53:23 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Mon, 27 Feb 2006 09:53:23 -0800 Subject: [SciPy-user] Confused about enhanced-scalar dtypes Message-ID: <34090E25C2327C4AA5D276799005DDE0E35195@SMDMX0501.mds.mdsinc.com> Thanks, Travis. I'm not sure how I got it into my head that the dtype argument took the name of the type rather than the type object. The latter is certainly obvious to me now when I go back and read the Data-Type Descriptors section in the NumPy book. dtype=float it shall be! 
Doug > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant > Sent: February 24, 2006 16:17 > To: SciPy Users List > Subject: Re: [SciPy-user] Confused about enhanced-scalar dtypes > > LATORNELL, Doug wrote: > > >I built NumPy from SVN this morning and got a surprise > concerning dtype > >arguments: > > > >In [24]: import numpy > > > >In [25]: numpy.__version__ > >Out[25]: '0.9.6.2165' > > > >In [26]: a = numpy.fromfile("profiles-O0", dtype="float", sep=' ') > >------------------------------------------------------------- > ---------- > >- > >--- > >exceptions.TypeError Traceback (most > >recent call last) > > > >/ocean/dlatorne/SoG/dlatornell06/profiles/ > > > >TypeError: data type not understood > > > > > > I don't recommend using strings like "float". Just use float > (which is a type-object). > > I don't think "float" is actually an accepted string. There > are a few strings that are recognized and more could be > added, but I would just use dtype=float. > > >Based on the tip in the middle of pg 19 of my copy of the > NumPy book, I > >expected both dtype="float" and dtype="float_" to work. > > > > > dtype=float and dtype=numpy.float_ are similar. Note the > absence of "" > > >I know there has been a bunch of work lately on things related to > >dtype, but I don't recall seeing anything about breaking the > >equivalence of the enhanced-scalar types (like float_) and the > >corresponding standard Python types (float, in this case). > Did I miss, > >or misunderstand something, get unluck with the timing of my > checkout > >this morning, or is this a bug? > > > > > I'm surprised "float" worked before and then doesn't work > now. But, there have been enough changes between 0.9.3 and > 0.9.5 that it's > possible. But, don't do it that way :-) > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. From klemm at phys.ethz.ch Mon Feb 27 14:48:14 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Mon, 27 Feb 2006 20:48:14 +0100 Subject: [SciPy-user] scipy.test problem Message-ID: Hi there, I am new to scipy and have the following installation problem: I work on a dual processor Xeon box, and try to install scipy. After various unsuccessful attempts, I now compiled BLAS, LAPACK ATLAS and FFTW3.1 from scratch, then installed numpy-0.9.5 and then installed scipy-0.4.6 under python 2.4.2. 
The installation went fine but after import scipy scipy.test(level=1) I get the following error message: ====================================================================== FAIL: check_simple (scipy.linalg.decomp.test_decomp.test_eig) ---------------------------------------------------------------------- Traceback (most recent call last): File "/scratch/python2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 89, in check_simple assert_array_almost_equal(w,exact_w) File "/scratch/python2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 183, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 66.6666666667%): Array 1: [ 9.4371906+0.j -0.1153653+0.j -0.3218254+0.j] Array 2: [ 9.3218254 0. -0.3218254] ====================================================================== FAIL: check_simple (scipy.linalg.decomp.test_decomp.test_eigvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/scratch/python2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 37, in check_simple assert_array_almost_equal(w,exact_w) File "/scratch/python2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 183, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 66.6666666667%): Array 1: [ 9.4371906+0.j -0.1153653+0.j -0.3218254+0.j] Array 2: [ 9.3218254 0. -0.3218254] ====================================================================== FAIL: check_simple_tr (scipy.linalg.decomp.test_decomp.test_eigvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/scratch/python2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 45, in check_simple_tr assert_array_almost_equal(w,exact_w) File "/scratch/python2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 183, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 66.6666666667%): Array 1: [ 9.4371906+0.j -0.1153653+0.j -0.3218254+0.j] Array 2: [ 9.3218254 0. -0.3218254] ---------------------------------------------------------------------- Ran 1067 tests in 2.250s FAILED (failures=3) Could anybody please tell me what to do with that? If that is important: I compiled everything with gcc. gcc --version gives: gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-52) In order to get scipy to use the atlas libraries I had to add an option -fPIC in the compiler flags in atlas and friends. Any help would be much appreciated! Thanks in advance, Hanno -- Hanno Klemm klemm at phys.ethz.ch From vincefn at users.sourceforge.net Mon Feb 27 16:36:38 2006 From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin) Date: Mon, 27 Feb 2006 22:36:38 +0100 Subject: [SciPy-user] [Matplotlib-users] switch to subversion tomorrow Tuesday 28 Feb In-Reply-To: <44036173.9020709@astraw.com> References: <44036173.9020709@astraw.com> Message-ID: <200602272236.39176.vincefn@users.sourceforge.net> Hi, > I will endeavor to switch matplotlib to a subversion repository (newly > available courtesy of SourceForge) tomorrow afternoon/evening > (US/Pacific time) unless I hear pleas from developers with unmerged > changes. > > I believe the new command for anonymous checkout of matplotlib will be: > svn co https://svn.sourceforge.net/svnroot/matplotlib/trunk/matplotlib > > I'll post the developer checkout command on the matplotlib-devel list > when I figure it out. 
It's actually the same command. Only, when committing changes you have to supply username and password using, e.g. --username and --password options. But a subversion GUI (I'm happy with esvn for linux & windows) will do that for you. What's really nice, there are no more delay between developers and anonymous access. Vincent -- Vincent Favre-Nicolin Universit? Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From jdhunter at ace.bsd.uchicago.edu Mon Feb 27 16:39:21 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 27 Feb 2006 15:39:21 -0600 Subject: [SciPy-user] [Matplotlib-users] switch to subversion tomorrow Tuesday 28 Feb In-Reply-To: <200602272236.39176.vincefn@users.sourceforge.net> (Vincent Favre-Nicolin's message of "Mon, 27 Feb 2006 22:36:38 +0100") References: <44036173.9020709@astraw.com> <200602272236.39176.vincefn@users.sourceforge.net> Message-ID: <87psl8zdkm.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Vincent" == Vincent Favre-Nicolin writes: Vincent> It's actually the same command. Only, when committing Vincent> changes you have to supply username and password using, Vincent> e.g. --username and --password options. But a subversion Or when you checkout the repository, if you specify these flags when you go to commit later they will be saved. Vincent> that for you. What's really nice, there are no more Vincent> delay between developers and anonymous access. Hear, hear! JDH From schofield at ftw.at Mon Feb 27 18:21:41 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 28 Feb 2006 00:21:41 +0100 Subject: [SciPy-user] New sparse matrix functionality In-Reply-To: <4402D324.4090207@ntc.zcu.cz> References: <4402C496.90204@ftw.at> <4402D324.4090207@ntc.zcu.cz> Message-ID: <664793EC-254A-4381-906F-72AD5CD1A918@ftw.at> Hi Robert, On 27/02/2006, at 11:23 AM, Robert Cimrman wrote: > well done! I think it's better your way - if a vector (i.e. 2D > array) is > put in, it is expected that a vector goes out as well. Okay, great. This was probably the only change that could be controversial, so if you're happy (and nobody else objects) I'll merge the whole patch. > Do you also plan to add the c-based linked-list matrix as in PySparse > (ll_mat.c there)? This could be even faster than using the Python > lists > (IMHO...). Well, I guess it would be nice to have, and the code's already written, but I don't know how we'd make it derive from the spmatrix base class, which is written in Python. Travis mentioned back in October that this is possible but not easy. So it would require some work. I don't need the extra speed personally -- the new class seems to be fast enough for my needs (the bottleneck for my work is now elsewhere :) >> These changes have highlighted a problem present in both the main >> trunk >> and my branch: that multiplying a dense matrix 'a' by a sparse matrix >> 'b' is not possible using the syntax 'a*b'. I'll follow this up >> with a >> proposal to numpy-discussion on how we can solve this. > > I am very curious how to do this. Surely NumPy will have to be > aware of > existence of sparse matrix objects, right? An update: I've changed the matrix.__mul__ function in NumPy SVN to return NotImplemented if the right operand defines __rmul__ and isn't a NumPy-compatible type. This seems to work fine for * now. 
Functions like numpy.dot() still won't work on sparse matrices, but I don't really have a problem with this ;) -- Ed From ryanlists at gmail.com Mon Feb 27 20:34:43 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 27 Feb 2006 20:34:43 -0500 Subject: [SciPy-user] poly1d equality test Message-ID: Is there an easy way to test for the equality of two poly1d objects? I have the following problem: In [36]: p1 Out[36]: poly1d([ 6.66666667, 0. , -49. ]) In [37]: p2 Out[37]: poly1d([ 6.66666667, 0. , -49. ]) In [38]: p1==p2 Out[38]: False In [39]: (p1.coeffs==p2.coeffs).all() Out[39]: True It seems like == is only testing if they are the same object. I am trying to write a class for transfer functions which model a system as a numerator and denominator poly1d. When I multiply or divide them I need to check if the numerator and denominators cancel exactly. Thanks, Ryan From ckkart at hoc.net Mon Feb 27 20:40:51 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 28 Feb 2006 10:40:51 +0900 Subject: [SciPy-user] poly1d equality test In-Reply-To: References: Message-ID: <4403AA23.2080007@hoc.net> Ryan Krauss wrote: > Is there an easy way to test for the equality of two poly1d objects? > > I have the following problem: > In [36]: p1 > Out[36]: poly1d([ 6.66666667, 0. , -49. ]) > > In [37]: p2 > Out[37]: poly1d([ 6.66666667, 0. , -49. ]) > > In [38]: p1==p2 > Out[38]: False > > In [39]: (p1.coeffs==p2.coeffs).all() > Out[39]: True > > It seems like == is only testing if they are the same object. I am > trying to write a class for transfer functions which model a system as > a numerator and denominator poly1d. When I multiply or divide them I > need to check if the numerator and denominators cancel exactly. Hi Ryan, you could subclass poly1d and define the special method __eq__ to which is called in situations lika a==b: In [3]: class p(poly1d): ...: def __eq__(self,other): ...: return (self.coeffs==other.coeffs).all() ...: ...: In [4]: p1 = p([ 6.66666667, 0. , -49. ]) In [5]: p2 = p([ 6.66666667, 0. , -49. ]) In [6]: p1 == p2 Out[6]: True Regards, Christian From cookedm at physics.mcmaster.ca Mon Feb 27 21:36:40 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 27 Feb 2006 21:36:40 -0500 Subject: [SciPy-user] poly1d equality test In-Reply-To: <4403AA23.2080007@hoc.net> (Christian Kristukat's message of "Tue, 28 Feb 2006 10:40:51 +0900") References: <4403AA23.2080007@hoc.net> Message-ID: Christian Kristukat writes: > Ryan Krauss wrote: >> Is there an easy way to test for the equality of two poly1d objects? >> >> I have the following problem: >> In [36]: p1 >> Out[36]: poly1d([ 6.66666667, 0. , -49. ]) >> >> In [37]: p2 >> Out[37]: poly1d([ 6.66666667, 0. , -49. ]) >> >> In [38]: p1==p2 >> Out[38]: False >> >> In [39]: (p1.coeffs==p2.coeffs).all() >> Out[39]: True >> >> It seems like == is only testing if they are the same object. I am >> trying to write a class for transfer functions which model a system as >> a numerator and denominator poly1d. When I multiply or divide them I >> need to check if the numerator and denominators cancel exactly. > > Hi Ryan, > you could subclass poly1d and define the special method __eq__ to which is > called in situations lika a==b: > > In [3]: class p(poly1d): > ...: def __eq__(self,other): > ...: return (self.coeffs==other.coeffs).all() > ...: > ...: > > In [4]: p1 = p([ 6.66666667, 0. , -49. ]) > > In [5]: p2 = p([ 6.66666667, 0. , -49. 
]) > > In [6]: p1 == p2 > Out[6]: True I've added __eq__ and __ne__ methods to the poly1d class in svn. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From nwagner at mecha.uni-stuttgart.de Tue Feb 28 03:34:09 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 28 Feb 2006 09:34:09 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 Message-ID: Hi all, Where can I find some hints w.r.t. to ATLAS and scipy/numpy ? AFAIK the new website has no information on building ATLAS. Do I need version 3 of gcc ? I am on SuSE Linux 10.0 with gcc v4.0.2 If I try to install ATLAS I get .. Looking for compilers (this may take a while): /usr/bin/gcc : v4.0.2 /usr/bin/gcc : v4.0.2 F77 = /usr/bin/g77 -fomit-frame-pointer -O -m64 CC = /usr/bin/gcc -fomit-frame-pointer -O -mfpmath=387 -m64 MCC = /usr/bin/gcc -fomit-frame-pointer -O -mfpmath=387 -m64 ATLAS has detected that you are running on an x86 architecture (eg., Athlon, P4, PIII, etc), but do not have a pre-2.96 gcc installed. Using gcc 3/2.96 will result in a performance loss of 10-20% IN THE BEST CASE over a gcc previous to 2.96. If your install goes slightly wrong, performance drops of 50% are more normal with these compilers. Full details of this problem can be found at: http://www.cs.utk.edu/~rwhaley/ATLAS/gcc30.html In the meantime, we suggest you abondon this ATLAS install, and install a gcc that delivers decent x86 performance, and restart the install then. You do not need to be root to install, and full instructions, along with all needed files, are given in the ATLAS errata file at: http://math-atlas.sourceforge.net/errata.html#UnixGccInst Stop ATLAS install? [y]: How can I continue ? Nils From klemm at phys.ethz.ch Tue Feb 28 03:48:12 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Tue, 28 Feb 2006 09:48:12 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: Message-ID: Nils, The website you cite says that the problem was fixed with version 3.1 of gcc. Why don't you just ry it and see what happens, or does the compile break? Hanno Nils Wagner said: > Hi all, > > Where can I find some hints w.r.t. to ATLAS and > scipy/numpy ? > AFAIK the new website has no information on building > ATLAS. > > Do I need version 3 of gcc ? > > I am on SuSE Linux 10.0 with gcc v4.0.2 > > If I try to install ATLAS I get .. > > Looking for compilers (this may take a while): > /usr/bin/gcc : v4.0.2 > /usr/bin/gcc : v4.0.2 > F77 = /usr/bin/g77 -fomit-frame-pointer -O -m64 > CC = /usr/bin/gcc -fomit-frame-pointer -O -mfpmath=387 > -m64 > MCC = /usr/bin/gcc -fomit-frame-pointer -O -mfpmath=387 > -m64 > > > > ATLAS has detected that you are running on an x86 > architecture > (eg., Athlon, P4, PIII, etc), but do not have a pre-2.96 > gcc installed. > Using gcc 3/2.96 will result in a performance loss of > 10-20% IN THE BEST CASE > over a gcc previous to 2.96. If your install goes > slightly wrong, performance > drops of 50% are more normal with these compilers. Full > details of this > problem can be found at: > http://www.cs.utk.edu/~rwhaley/ATLAS/gcc30.html > In the meantime, we suggest you abondon this ATLAS > install, and install a gcc > that delivers decent x86 performance, and restart the > install then. 
You > do not need to be root to install, and full instructions, > along with all > needed files, are given in the ATLAS errata file at: > http://math-atlas.sourceforge.net/errata.html#UnixGccInst > > Stop ATLAS install? [y]: > > How can I continue ? > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- Hanno Klemm klemm at phys.ethz.ch From cournape at atr.jp Tue Feb 28 03:55:36 2006 From: cournape at atr.jp (Cournapeau David) Date: Tue, 28 Feb 2006 17:55:36 +0900 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: References: Message-ID: <1141116936.4817.64.camel@localhost.localdomain> On Tue, 2006-02-28 at 09:34 +0100, Nils Wagner wrote: > Hi all, > > Where can I find some hints w.r.t. to ATLAS and > scipy/numpy ? > AFAIK the new website has no information on building > ATLAS. > > Do I need version 3 of gcc ? > > I am on SuSE Linux 10.0 with gcc v4.0.2 > > If I try to install ATLAS I get .. > > Looking for compilers (this may take a while): > /usr/bin/gcc : v4.0.2 > /usr/bin/gcc : v4.0.2 > F77 = /usr/bin/g77 -fomit-frame-pointer -O -m64 > CC = /usr/bin/gcc -fomit-frame-pointer -O -mfpmath=387 > -m64 > MCC = /usr/bin/gcc -fomit-frame-pointer -O -mfpmath=387 > -m64 On a different distribution (ubuntu), I install ATLAS package, and modifiy the static library to build a full LAPACK library (it should be somewhere on the wiki, I can give more details if you need help). If SUSE does not have atlas package, then try to find if a gcc-2.95 package is available. If neither of them are available, I would advise you to just build ATLAS with your current version of gcc. >From my POV, the best solution is to use a pre-compiled ATLAS from your distribution: then, it should have been tested, and you are less likely to make a error which can appear later in a very subtle way. Compiling and installing gcc yourself is a PITA if you have never done it yourself before (it is easy to break many things). Even building atlas yourself is a bit long, and I think it is easy to mess things up, at least from a performance point of view. David From vincefn at users.sourceforge.net Tue Feb 28 03:58:47 2006 From: vincefn at users.sourceforge.net (Favre-Nicolin Vincent) Date: Tue, 28 Feb 2006 09:58:47 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: References: Message-ID: <200602280958.47719.vincefn@users.sourceforge.net> Hi, > ATLAS has detected that you are running on an x86 > architecture > (eg., Athlon, P4, PIII, etc), but do not have a pre-2.96 > gcc installed. > Using gcc 3/2.96 will result in a performance loss of > 10-20% IN THE BEST CASE > over a gcc previous to 2.96. If your install goes > slightly wrong, performance > drops of 50% are more normal with these compilers. Full > details of this > problem can be found at: > http://www.cs.utk.edu/~rwhaley/ATLAS/gcc30.html If you actually click on the link you can read that "all of these problems appear to be fixed in gcc 3.1". So the message appears to be outdated, and should not apply to gcc 4.0.2. Do not be stopped by a mere warning ! But I am not an atlas expert, just one of many people who is annoyed by the incomplete blas/lapack libraries, so that one cannot invert a matrix in scipy without compiling atlas, which can be a major pain. Of course I have not yet switched to the new scipy, so maybe things are better now. 
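For what it's worth, a quick sanity check of whether the installed LAPACK/ATLAS is complete enough for everyday use is simply to try a small inversion. This snippet is only an illustration of that check, not something from the thread:

from numpy import array, dot
from scipy import linalg

a = array([[2.0, 1.0],
           [1.0, 3.0]])
ainv = linalg.inv(a)     # this is the step that fails with an incomplete LAPACK
print(dot(a, ainv))      # should print something very close to the 2x2 identity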
Or packagers have finally decided to ban incomplete blas/lapack libraries and therefore build a better world where the sky is blue and matrices invert. Vincent -- Vincent Favre-Nicolin Universit? Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From emsellem at obs.univ-lyon1.fr Tue Feb 28 04:05:28 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Tue, 28 Feb 2006 10:05:28 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 Message-ID: <44041258.3060302@obs.univ-lyon1.fr> Hi, to summarize: just go on with gcc4. It works (although things may not be optimal, but I would guess the difference may be only 10% or so). ==> below is the way I compile things on Suse 10 with gcc4 starting with Lapack and going up to matplotlib. I have no problem with it although compiling ATLAS is a real pain (it just takes forever). Hope this helps. Please adapt things to your machine and system (using the right architecture name for ATLAS). Things that I don't do anymore are commands starting with "#". ALso please update some of the version numbers (ipython, matplotlib). A cleaned and updated version of such a simple procedure would be very helpful, if someone can post it. cheers Eric ============================= ####################### # LAPACK # ##################### # Build Lapack from the web since then you are sure it is complete # It is in the Suse distribution but I don't trust it wget http://www.netlib.org/lapack/lapack.tgz mdkir install_lapack cd install_lapack tar xvfz ../lapack.tgz cp INSTALL/make.inc.LINUX make.inc ## This will build a lapack_LINUX.a ## That you will use later on for ATLAS make lapacklib ######################## # ATLAS # ###################### tar xvfz atlas3.6.0.tar.gz cd ATLAS ## You will have to answer a few questions... # don't care about the warning on gcc make # change "arch" with the right one # you will be asked a few question there # don't care about the warning on gcc make install arch=Linux_UNKNOWNSSE2 ## Create a new directory where to put the include and lib files ## (root) mkdir /usr/local/lib/atlas ## then copy all the include and archive (*.a) files ## under this new dir cp include/* /usr/local/lib/atlas cp *.a /usr/local/lib/atlas # in .cshrc # and don't forget to set it up, this is for tcsh. For bash use "set .. = .." setenv ATLAS /usr/local/lib/atlas ## save the lapack lib built from ATLAS (which is incomplete) cd /usr/local/lib/atlas cp liblapack.a liblapack.a_ATLAS # then create a new temporary dir which will allow you to complete # lapack mkdir tmp cd tmp # detar the atlas lapack lib ar x ../liblapack.a # then cp the true lapack linux lib into the liblapack.a file cp /install_lapack/lapack_LINUX.a ../liblapack.a # and add the files from the atlas lib # you will then have a full lapack lib with the atlas files too ar r ../liblapack.a *.o cd .. # remove the temp dir rm -rf tmp ## copy all *.a into your local atlas dir cp *.a /usr/local/lib/atlas # ###################### ## Added all fftw libraries from yast2 # ###################### ############################################################## ###### NO NEED TO INSTALL NUMERIC IF YOU HAVE ALREADY the new NUMPY... 
##### HOWEVER Numarray is required for pyfits ##### NUmeric ###cvs -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy login ###cvs -z3 -d:pserver:anonymous at cvs.sourceforge.net:/cvsroot/numpy co -P Numerical ### #### THEN EDIT your customize.py ############################################################## ##### Customization variables for installing Numeric ### ###use_system_lapack = 1 ###lapack_library_dirs = ['/usr/local/lib/atlas'] ###lapack_libraries = ['lapack', 'cblas', 'f77blas', 'atlas', 'g2c'] ###lapack_extra_link_args = [] ### #### Set use_dotblas to 1 to use BLAS for the matrix multiplication routines. #### Put the directory that 'cblas.h' is in into dotblas_include_dirs ###use_dotblas = 1 ###dotblas_include_dirs = ['/usr/local/lib/atlas'] ###dotblas_cblas_header = '' ###dotblas_library_dirs = lapack_library_dirs ###dotblas_libraries = lapack_libraries ###dotblas_extra_link_args = [] ############################################################## ### ###python setup.py install ########################################################### # NUMPY # ######### svn co http://svn.scipy.org/svn/numpy/trunk numpy cd numpy python setup.py install # ###################### ## INSTALLED IPYTHON # ###################### tar xvfz ipython-0.6.15.tar.gz cd ipython-0.6.15 python setup.py install # ###################### ## F2PY # ###################### (CVS password: guest) cvs -d :pserver:anonymous at cens.ioc.ee:/home/cvs login cvs -z6 -d :pserver:anonymous at cens.ioc.ee:/home/cvs checkout f2py2e cd f2py2e make install # ###################### ## SCIPY # ###################### ## Remove any scipy in your python before.. ## ALSO REMOVE the "build" directories from the "core" and "scipy" directories before ## doing the install!! svn co http://svn.scipy.org/svn/scipy/trunk scipy cd scipy python setup.py install ## Test:: cd # Start python and run as tests: # (level 10 will take a while ...) import scipy scipy.test(1,verbosity=10) scipy.test(10,verbosity=10) ################################# # Matplotlib 0.86 ################################# # Note that some matplotlibrc options have changed # such that tick ==> xtick + ytick # add one line in setupext.py: o.tcl_inc="/usr/include/" # before if not os.path.exists(o.tcl_inc): print 'cannot find tcl/tk headers. giving up.' sys.exit() return o # and run the usual python setup.py install -- =============================================================== Observatoire de Lyon emsellem at obs.univ-lyon1.fr 9 av. Charles-Andre tel: +33 4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem =============================================================== From ckkart at hoc.net Tue Feb 28 04:29:44 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 28 Feb 2006 18:29:44 +0900 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: References: Message-ID: <44041808.7050900@hoc.net> Nils Wagner wrote: > Hi all, > > Where can I find some hints w.r.t. to ATLAS and > scipy/numpy ? > AFAIK the new website has no information on building > ATLAS. Well, at least it provides a link to a page with install instructions: http://www.scipy.org/Installing_SciPy/Linux > Do I need version 3 of gcc ? > > I am on SuSE Linux 10.0 with gcc v4.0.2 > In addition to what was already said, I noticed that its important, that the gfortran compiler is used and not g77. The easiest way to achieve that, is to make sure that compat-g77.rpm is not installed, but gfortran.rpm is. 
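Before building, it can help to confirm which Fortran compilers are actually visible on the PATH. This is only an illustrative standard-library snippet (the executable names g77 and gfortran are the ones discussed in this thread), not something from Christian's message:

from distutils.spawn import find_executable

for name in ('g77', 'gfortran'):
    # None means the compiler is not on the PATH
    print('%s -> %s' % (name, find_executable(name)))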
And yes, gcc4 works without problems though I can't say anything about performance. Eric gave instructions to compile f2py. That, however is not needed with numpy as it's already included. Regards, Christian From nwagner at mecha.uni-stuttgart.de Tue Feb 28 04:40:38 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 28 Feb 2006 10:40:38 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: <44041808.7050900@hoc.net> References: <44041808.7050900@hoc.net> Message-ID: On Tue, 28 Feb 2006 18:29:44 +0900 Christian Kristukat wrote: > Nils Wagner wrote: >> Hi all, >> >> Where can I find some hints w.r.t. to ATLAS and >> scipy/numpy ? >> AFAIK the new website has no information on building >> ATLAS. > > Well, at least it provides a link to a page with install >instructions: > > http://www.scipy.org/Installing_SciPy/Linux > >> Do I need version 3 of gcc ? >> >> I am on SuSE Linux 10.0 with gcc v4.0.2 >> > > In addition to what was already said, I noticed that its >important, that the > gfortran compiler is used and not g77. The easiest way >to achieve that, is to > make sure that compat-g77.rpm is not installed, but >gfortran.rpm is. > > And yes, gcc4 works without problems though I can't say >anything about performance. > > Eric gave instructions to compile f2py. That, however is >not needed with numpy > as it's already included. > > Regards, Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user Hi Christian, SuSE 10.0 comes with gcc-fortran Name : gcc-fortran Relocations: (not relocatable) Version : 4.0.2_20050901 Vendor: SUSE LINUX Products GmbH, Nuernberg, Germany Release : 3 Build Date: Fri 09 Sep 2005 06:08:56 PM CEST Install date: Mon 27 Feb 2006 01:01:19 PM CET Build Host: fatou.suse.de Group : Development/Languages/Fortran Source RPM: gcc-4.0.2_20050901-3.src.rpm Size : 7490931 License: GPL Signature : DSA/SHA1, Fri 09 Sep 2005 07:04:43 PM CEST, Key ID a84edae89c800aca Packager : http://www.suse.de/feedback URL : http://gcc.gnu.org/ Summary : The GNU Fortran Compiler and Support Files Description : This is the Fortran compiler of the GNU Compiler Collection (GCC). Authors: -------- The GCC team. Distribution: SUSE LINUX 10.0 (X86-64) Nils From ckkart at hoc.net Tue Feb 28 04:58:03 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 28 Feb 2006 18:58:03 +0900 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: References: <44041808.7050900@hoc.net> Message-ID: <44041EAB.4070404@hoc.net> Nils Wagner wrote: > On Tue, 28 Feb 2006 18:29:44 +0900 > Christian Kristukat wrote: >> Nils Wagner wrote: >>> Hi all, >>> >>> Where can I find some hints w.r.t. to ATLAS and >>> scipy/numpy ? >>> AFAIK the new website has no information on building >>> ATLAS. >> Well, at least it provides a link to a page with install >> instructions: >> >> http://www.scipy.org/Installing_SciPy/Linux >> >>> Do I need version 3 of gcc ? >>> >>> I am on SuSE Linux 10.0 with gcc v4.0.2 >>> >> In addition to what was already said, I noticed that its >> important, that the >> gfortran compiler is used and not g77. The easiest way >> to achieve that, is to >> make sure that compat-g77.rpm is not installed, but >> gfortran.rpm is. >> >> And yes, gcc4 works without problems though I can't say >> anything about performance. >> >> Eric gave instructions to compile f2py. That, however is >> not needed with numpy >> as it's already included. 
>> >> Regards, Christian >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user > > > Hi Christian, > > SuSE 10.0 comes with gcc-fortran > > Name : gcc-fortran Relocations: You're right. But the executable is called 'gfortran'. I mixed that up. Christian From nwagner at mecha.uni-stuttgart.de Tue Feb 28 05:05:00 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 28 Feb 2006 11:05:00 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: <44041EAB.4070404@hoc.net> References: <44041808.7050900@hoc.net> <44041EAB.4070404@hoc.net> Message-ID: On Tue, 28 Feb 2006 18:58:03 +0900 Christian Kristukat wrote: > Nils Wagner wrote: >> On Tue, 28 Feb 2006 18:29:44 +0900 >> Christian Kristukat wrote: >>> Nils Wagner wrote: >>>> Hi all, >>>> >>>> Where can I find some hints w.r.t. to ATLAS and >>>> scipy/numpy ? >>>> AFAIK the new website has no information on building >>>> ATLAS. >>> Well, at least it provides a link to a page with install >>> instructions: >>> >>> http://www.scipy.org/Installing_SciPy/Linux >>> >>>> Do I need version 3 of gcc ? >>>> >>>> I am on SuSE Linux 10.0 with gcc v4.0.2 >>>> >>> In addition to what was already said, I noticed that its >>> important, that the >>> gfortran compiler is used and not g77. The easiest way >>> to achieve that, is to >>> make sure that compat-g77.rpm is not installed, but >>> gfortran.rpm is. >>> >>> And yes, gcc4 works without problems though I can't say >>> anything about performance. >>> >>> Eric gave instructions to compile f2py. That, however is >>> not needed with numpy >>> as it's already included. >>> >>> Regards, Christian >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.net >>> http://www.scipy.net/mailman/listinfo/scipy-user >> >> >> Hi Christian, >> >> SuSE 10.0 comes with gcc-fortran >> >> Name : gcc-fortran Relocations: > > You're right. But the executable is called 'gfortran'. I >mixed that up. > I have already build blas and lapack libraries using g77 according to the installation hints. Can I use them or must I recompile blas and lapack with gfortran ? Nils > Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From ckkart at hoc.net Tue Feb 28 05:20:55 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 28 Feb 2006 19:20:55 +0900 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: References: <44041808.7050900@hoc.net> <44041EAB.4070404@hoc.net> Message-ID: <44042407.8090106@hoc.net> Nils Wagner wrote: >>> >>> SuSE 10.0 comes with gcc-fortran >>> >>> Name : gcc-fortran Relocations: >> You're right. But the executable is called 'gfortran'. I >> mixed that up. >> > I have already build blas and lapack libraries using g77 > according to the installation hints. > > Can I use them or must I recompile blas and lapack with > gfortran ? I'm not an expert on this subject, just sharing my experiences. If you're using ATLAS you don't need BLAS as far as I understood. I followed the instructions in the link I posted before. Compared to the compilation time of ATLAS, LAPACK builds almost instantaneously, so it doesn't hurt to rebuild LAPACK, I just don't know wether g77 built LAPACK works together with gfortran ATLAS. At least I know, that if you're using gfrotran all the time, everything works fine on SuSE 10.0. 
Good luck, Christian From nwagner at mecha.uni-stuttgart.de Tue Feb 28 05:29:41 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 28 Feb 2006 11:29:41 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: <44042407.8090106@hoc.net> References: <44041808.7050900@hoc.net> <44041EAB.4070404@hoc.net> <44042407.8090106@hoc.net> Message-ID: On Tue, 28 Feb 2006 19:20:55 +0900 Christian Kristukat wrote: > Nils Wagner wrote: >>>> >>>> SuSE 10.0 comes with gcc-fortran >>>> >>>> Name : gcc-fortran Relocations: >>> You're right. But the executable is called 'gfortran'. I >>> mixed that up. >>> >> I have already build blas and lapack libraries using g77 >> according to the installation hints. >> >> Can I use them or must I recompile blas and lapack with >> gfortran ? > > I'm not an expert on this subject, just sharing my >experiences. If you're using > ATLAS you don't need BLAS as far as I understood. I >followed the instructions in > the link I posted before. Compared to the compilation >time of ATLAS, LAPACK > builds almost instantaneously, so it doesn't hurt to >rebuild LAPACK, I just > don't know wether g77 built LAPACK works together with >gfortran ATLAS. > At least I know, that if you're using gfrotran all the >time, everything works > fine on SuSE 10.0. > > Good luck, Christian > Christian, I have recompiled both blas and lapack. There is a difference in the size of the libraries -rw-r--r-- 1 root root 7887268 2006-02-28 09:19 libflapack.a_g77 -rw-r--r-- 1 root root 7833516 2006-02-28 11:20 libflapack.a_gfortran -rw-r--r-- 1 root root 629322 2006-02-28 11:06 libfblas.a_g77 -rw-r--r-- 1 root root 615722 2006-02-28 11:28 libfblas.a_gfortran Nils From nwagner at mecha.uni-stuttgart.de Tue Feb 28 06:32:34 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 28 Feb 2006 12:32:34 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: <44042407.8090106@hoc.net> References: <44041808.7050900@hoc.net> <44041EAB.4070404@hoc.net> <44042407.8090106@hoc.net> Message-ID: On Tue, 28 Feb 2006 19:20:55 +0900 Christian Kristukat wrote: > Nils Wagner wrote: >>>> >>>> SuSE 10.0 comes with gcc-fortran >>>> >>>> Name : gcc-fortran Relocations: >>> You're right. But the executable is called 'gfortran'. I >>> mixed that up. >>> >> I have already build blas and lapack libraries using g77 >> according to the installation hints. >> >> Can I use them or must I recompile blas and lapack with >> gfortran ? > > I'm not an expert on this subject, just sharing my >experiences. If you're using > ATLAS you don't need BLAS as far as I understood. I >followed the instructions in > the link I posted before. Compared to the compilation >time of ATLAS, LAPACK > builds almost instantaneously, so it doesn't hurt to >rebuild LAPACK, I just > don't know wether g77 built LAPACK works together with >gfortran ATLAS. > At least I know, that if you're using gfrotran all the >time, everything works > fine on SuSE 10.0. > > Good luck, Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user I followed your advice and rebuilt blas and lapack using gfortran. Compilation of ATLAS seems to work. Also I have build a complete lapack http://math-atlas.sourceforge.net/errata.html#completelp I am on SuSE 10.0 x86_64 GNU/Linux However python setup.py build failed Running from numpy source directory. 
Warning: not existing path in numpy/distutils: site.cfg No module named __svn_version__ F2PY Version 2_2179 blas_opt_info: blas_mkl_info: /usr/local/svn/numpy/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['mkl', 'vml', 'guide'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS /usr/local/svn/numpy/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['ptf77blas', 'ptcblas', 'atlas'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ /usr/local/svn/numpy/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['ptf77blas', 'ptcblas', 'atlas'] found_libs=['/usr/local/lib/atlas/libatlas.a'] warnings.warn("Library error: libs=%s found_libs=%s" % \ NOT AVAILABLE atlas_blas_info: /usr/local/svn/numpy/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['f77blas', 'cblas', 'atlas'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c running build_src building extension "atlas_version" sources creating build creating build/src adding 'build/src/atlas_version_0x5b360788cdd9eddc.c' to sources. running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'atlas_version' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' creating build/temp.linux-x86_64-2.4 creating build/temp.linux-x86_64-2.4/build creating build/temp.linux-x86_64-2.4/build/src compile options: '-Inumpy/core/include -I/usr/include/python2.4 -c' gcc: build/src/atlas_version_0x5b360788cdd9eddc.c gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o build/temp.linux-x86_64-2.4/atlas_version.so /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/atlas/libatlas.a: could not read symbols: Bad value collect2: ld returned 1 exit status /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/atlas/libatlas.a: could not read symbols: Bad value collect2: ld returned 1 exit status ##### msg: error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o build/temp.linux-x86_64-2.4/atlas_version.so" failed with exit status 1 error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o build/temp.linux-x86_64-2.4/atlas_version.so" failed with exit status 1 FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c define_macros = [('NO_ATLAS_INFO', 2)] Warning: distutils distribution has been initialized, it may be too late to add an extension _dotblas Traceback (most recent call last): File "setup.py", line 76, in ?x86_64 GNU/Linux setup_package() File "setup.py", line 63, in setup_package 
config.add_subpackage('numpy') File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 592, in add_subpackage config_list = self.get_subpackage(subpackage_name,subpackage_path) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 582, in get_subpackage subpackage_path) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 539, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "/usr/local/svn/numpy/numpy/setup.py", line 10, in configuration config.add_subpackage('core') File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 592, in add_subpackage config_list = self.get_subpackage(subpackage_name,subpackage_path) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 582, in get_subpackage subpackage_path) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 539, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "numpy/core/setup.py", line 217, in configuration config.add_data_dir('tests') File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 636, in add_data_dir self.add_data_files((ds,filenames)) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 702, in add_data_files dist.data_files.extend(data_dict.items()) AttributeError: 'NoneType' object has no attribute 'extend' Any suggestion ? Nils From ckkart at hoc.net Tue Feb 28 08:11:51 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 28 Feb 2006 22:11:51 +0900 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: References: <44041808.7050900@hoc.net> <44041EAB.4070404@hoc.net> <44042407.8090106@hoc.net> Message-ID: <44044C17.8010108@hoc.net> Nils Wagner wrote: > On Tue, 28 Feb 2006 19:20:55 +0900 > Christian Kristukat wrote: >> Nils Wagner wrote: >>>>> SuSE 10.0 comes with gcc-fortran >>>>> >>>>> Name : gcc-fortran Relocations: >>>> You're right. But the executable is called 'gfortran'. I >>>> mixed that up. >>>> >>> I have already build blas and lapack libraries using g77 >>> according to the installation hints. >>> >>> Can I use them or must I recompile blas and lapack with >>> gfortran ? >> I'm not an expert on this subject, just sharing my >> experiences. If you're using >> ATLAS you don't need BLAS as far as I understood. I >> followed the instructions in >> the link I posted before. Compared to the compilation >> time of ATLAS, LAPACK >> builds almost instantaneously, so it doesn't hurt to >> rebuild LAPACK, I just >> don't know wether g77 built LAPACK works together with >> gfortran ATLAS. >> At least I know, that if you're using gfrotran all the >> time, everything works >> fine on SuSE 10.0. >> >> Good luck, Christian >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user > > > > I followed your advice and rebuilt blas and lapack using > gfortran. Compilation of ATLAS seems to work. Also > I have build a complete lapack > http://math-atlas.sourceforge.net/errata.html#completelp > > I am on SuSE 10.0 x86_64 GNU/Linux > > However > > python setup.py build failed > > Running from numpy source directory. 
> Warning: not existing path in numpy/distutils: site.cfg > No module named __svn_version__ > F2PY Version 2_2179 > blas_opt_info: > blas_mkl_info: > /usr/local/svn/numpy/numpy/distutils/system_info.py:531: > UserWarning: Library error: libs=['mkl', 'vml', 'guide'] > found_libs=[] > warnings.warn("Library error: libs=%s found_libs=%s" % > \ > NOT AVAILABLE > > atlas_blas_threads_info: > Setting PTATLAS=ATLAS > /usr/local/svn/numpy/numpy/distutils/system_info.py:531: > UserWarning: Library error: libs=['ptf77blas', 'ptcblas', > 'atlas'] found_libs=[] > warnings.warn("Library error: libs=%s found_libs=%s" % > \ > /usr/local/svn/numpy/numpy/distutils/system_info.py:531: > UserWarning: Library error: libs=['ptf77blas', 'ptcblas', > 'atlas'] found_libs=['/usr/local/lib/atlas/libatlas.a'] > warnings.warn("Library error: libs=%s found_libs=%s" % > \ > NOT AVAILABLE > > atlas_blas_info: > /usr/local/svn/numpy/numpy/distutils/system_info.py:531: > UserWarning: Library error: libs=['f77blas', 'cblas', > 'atlas'] found_libs=[] > warnings.warn("Library error: libs=%s found_libs=%s" % > \ > FOUND: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/lib/atlas'] > language = c > > running build_src > building extension "atlas_version" sources > creating build > creating build/src > adding 'build/src/atlas_version_0x5b360788cdd9eddc.c' > to sources. > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > building 'atlas_version' extension > compiling C sources > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 > -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' > creating build/temp.linux-x86_64-2.4 > creating build/temp.linux-x86_64-2.4/build > creating build/temp.linux-x86_64-2.4/build/src > compile options: '-Inumpy/core/include > -I/usr/include/python2.4 -c' > gcc: build/src/atlas_version_0x5b360788cdd9eddc.c > gcc -pthread -shared > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o > build/temp.linux-x86_64-2.4/atlas_version.so > /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: > /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): > relocation R_X86_64_32 against `a local symbol' can not be > used when making a shared object; recompile with -fPIC > /usr/local/lib/atlas/libatlas.a: could not read symbols: > Bad value > collect2: ld returned 1 exit status > /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: > /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): > relocation R_X86_64_32 against `a local symbol' can not be > used when making a shared object; recompile with -fPIC > /usr/local/lib/atlas/libatlas.a: could not read symbols: > Bad value > collect2: ld returned 1 exit status > ##### msg: error: Command "gcc -pthread -shared > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o > build/temp.linux-x86_64-2.4/atlas_version.so" failed with > exit status 1 > error: Command "gcc -pthread -shared > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o > build/temp.linux-x86_64-2.4/atlas_version.so" failed with > exit status 1 > FOUND: > libraries = ['f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/lib/atlas'] > language = c > define_macros = [('NO_ATLAS_INFO', 2)] > > Warning: distutils distribution has been initialized, it > may be too late to add an 
extension _dotblas > Traceback (most recent call last): > File "setup.py", line 76, in ?x86_64 GNU/Linux > setup_package() > File "setup.py", line 63, in setup_package > config.add_subpackage('numpy') > File > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > 592, in add_subpackage > config_list = > self.get_subpackage(subpackage_name,subpackage_path) > File > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > 582, in get_subpackage > subpackage_path) > File > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > 539, in _get_configuration_from_setup_py > config = setup_module.configuration(*args) > File "/usr/local/svn/numpy/numpy/setup.py", line 10, in > configuration > config.add_subpackage('core') > File > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > 592, in add_subpackage > config_list = > self.get_subpackage(subpackage_name,subpackage_path) > File > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > 582, in get_subpackage > subpackage_path) > File > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > 539, in _get_configuration_from_setup_py > config = setup_module.configuration(*args) > File "numpy/core/setup.py", line 217, in configuration > config.add_data_dir('tests') > File > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > 636, in add_data_dir > self.add_data_files((ds,filenames)) > File > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > 702, in add_data_files > dist.data_files.extend(data_dict.items()) > AttributeError: 'NoneType' object has no attribute > 'extend' > I can't say more than that I followed exactly the instructions here: http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 (the full LAPACK issue is covered there, too) and it worked. Christian From mfmorss at aep.com Tue Feb 28 10:51:19 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Tue, 28 Feb 2006 10:51:19 -0500 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: Message-ID: My own preference would be to use g77 or some other Fortran 77 compiler, not gfortran, to compile BLAS and LAPACK. My reasons are (1) LAPACK and BLAS are written in Fortran 77; (2) the gcc-4 is generally considered to be somewhat less trustworthy than much more mature gcc-3; and gfortran is entirely new with gcc-4, while g77 is ancient and well-tested; and (3) gfortran is particularly new. Perhaps I express the obvious when I say that g77 and gfortran are frontends for gcc; g77 compiles Fortran 77; gfortran compiles a GNU implementation, somewhat partial and peculiar, so I understand, of Fortran 95. It is supposed to be compatible with Fortran 77, but so far as I know, nobody has really used it very much. My impression is that post-77 versions of Fortran are regarded with some degree of suspicion for pure, fast numerics, so why complicate the compilation of LAPACK and BLAS with these issues? I might be more willing to try gfortran if it had some sort of flag to optimize pure Fortran 77 code. Concerning slow compilation on the native Suse compiler: am I wrong, or is Suse compiled for the i386? If you are running on an i686, you would achieve faster compilation if you built gcc optimized for your machine's architecture. I have built gcc-3.4.5 from source both on Linux and AIX, and I didn't find it terribly painful in either case. However, if you don't do much compilation, it probably wouldn't be worth the trouble. Mark F. 
Morss Principal Analyst, Market Risk American Electric Power "Nils Wagner" To Sent by: SciPy Users List scipy-user-bounce s at scipy.net cc Subject 02/28/2006 06:32 Re: [SciPy-user] ATLAS and gcc AM v4.0.2 Please respond to SciPy Users List On Tue, 28 Feb 2006 19:20:55 +0900 Christian Kristukat wrote: > Nils Wagner wrote: >>>> >>>> SuSE 10.0 comes with gcc-fortran >>>> >>>> Name : gcc-fortran Relocations: >>> You're right. But the executable is called 'gfortran'. I >>> mixed that up. >>> >> I have already build blas and lapack libraries using g77 >> according to the installation hints. >> >> Can I use them or must I recompile blas and lapack with >> gfortran ? > > I'm not an expert on this subject, just sharing my >experiences. If you're using > ATLAS you don't need BLAS as far as I understood. I >followed the instructions in > the link I posted before. Compared to the compilation >time of ATLAS, LAPACK > builds almost instantaneously, so it doesn't hurt to >rebuild LAPACK, I just > don't know wether g77 built LAPACK works together with >gfortran ATLAS. > At least I know, that if you're using gfrotran all the >time, everything works > fine on SuSE 10.0. > > Good luck, Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user I followed your advice and rebuilt blas and lapack using gfortran. Compilation of ATLAS seems to work. Also I have build a complete lapack http://math-atlas.sourceforge.net/errata.html#completelp I am on SuSE 10.0 x86_64 GNU/Linux However python setup.py build failed Running from numpy source directory. Warning: not existing path in numpy/distutils: site.cfg No module named __svn_version__ F2PY Version 2_2179 blas_opt_info: blas_mkl_info: /usr/local/svn/numpy/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['mkl', 'vml', 'guide'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS /usr/local/svn/numpy/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['ptf77blas', 'ptcblas', 'atlas'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ /usr/local/svn/numpy/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['ptf77blas', 'ptcblas', 'atlas'] found_libs=['/usr/local/lib/atlas/libatlas.a'] warnings.warn("Library error: libs=%s found_libs=%s" % \ NOT AVAILABLE atlas_blas_info: /usr/local/svn/numpy/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['f77blas', 'cblas', 'atlas'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c running build_src building extension "atlas_version" sources creating build creating build/src adding 'build/src/atlas_version_0x5b360788cdd9eddc.c' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'atlas_version' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' creating build/temp.linux-x86_64-2.4 creating build/temp.linux-x86_64-2.4/build creating build/temp.linux-x86_64-2.4/build/src compile options: '-Inumpy/core/include -I/usr/include/python2.4 -c' gcc: build/src/atlas_version_0x5b360788cdd9eddc.c gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o build/temp.linux-x86_64-2.4/atlas_version.so /usr/lib64/gcc/x86_64-suse-linux/4.0.2 /../../../../x86_64-suse-linux/bin/ld: /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/atlas/libatlas.a: could not read symbols: Bad value collect2: ld returned 1 exit status /usr/lib64/gcc/x86_64-suse-linux/4.0.2 /../../../../x86_64-suse-linux/bin/ld: /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/atlas/libatlas.a: could not read symbols: Bad value collect2: ld returned 1 exit status ##### msg: error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o build/temp.linux-x86_64-2.4/atlas_version.so" failed with exit status 1 error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o build/temp.linux-x86_64-2.4/atlas_version.so" failed with exit status 1 FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c define_macros = [('NO_ATLAS_INFO', 2)] Warning: distutils distribution has been initialized, it may be too late to add an extension _dotblas Traceback (most recent call last): File "setup.py", line 76, in ?x86_64 GNU/Linux setup_package() File "setup.py", line 63, in setup_package config.add_subpackage('numpy') File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 592, in add_subpackage config_list = self.get_subpackage(subpackage_name,subpackage_path) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 582, in get_subpackage subpackage_path) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 539, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "/usr/local/svn/numpy/numpy/setup.py", line 10, in configuration config.add_subpackage('core') File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 592, in add_subpackage config_list = self.get_subpackage(subpackage_name,subpackage_path) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 582, in get_subpackage subpackage_path) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 539, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "numpy/core/setup.py", line 217, in configuration config.add_data_dir('tests') File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 636, in add_data_dir self.add_data_files((ds,filenames)) File "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line 702, in add_data_files dist.data_files.extend(data_dict.items()) AttributeError: 'NoneType' object has no 
attribute 'extend' Any suggestion ? Nils _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From klemm at phys.ethz.ch Tue Feb 28 12:49:06 2006 From: klemm at phys.ethz.ch (Hanno Klemm) Date: Tue, 28 Feb 2006 18:49:06 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: <44044C17.8010108@hoc.net> References: <44041808.7050900@hoc.net> <44041EAB.4070404@hoc.net> <44042407.8090106@hoc.net> , Message-ID: Hi Christian, you output looks like a problem I had: On my 64bit architecture I had to modify the make file for atlas and append an -fPIC at the compiler flags (I tired that at the architecture flags, the g77 and the c flags. That did the job). Otherwise it seems not possible to use the libraries with scipy. Good luck, Hanno Christian Kristukat said: > Nils Wagner wrote: > > On Tue, 28 Feb 2006 19:20:55 +0900 > > Christian Kristukat wrote: > >> Nils Wagner wrote: > >>>>> SuSE 10.0 comes with gcc-fortran > >>>>> > >>>>> Name : gcc-fortran Relocations: > >>>> You're right. But the executable is called 'gfortran'. I > >>>> mixed that up. > >>>> > >>> I have already build blas and lapack libraries using g77 > >>> according to the installation hints. > >>> > >>> Can I use them or must I recompile blas and lapack with > >>> gfortran ? > >> I'm not an expert on this subject, just sharing my > >> experiences. If you're using > >> ATLAS you don't need BLAS as far as I understood. I > >> followed the instructions in > >> the link I posted before. Compared to the compilation > >> time of ATLAS, LAPACK > >> builds almost instantaneously, so it doesn't hurt to > >> rebuild LAPACK, I just > >> don't know wether g77 built LAPACK works together with > >> gfortran ATLAS. > >> At least I know, that if you're using gfrotran all the > >> time, everything works > >> fine on SuSE 10.0. > >> > >> Good luck, Christian > >> > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.net > >> http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > I followed your advice and rebuilt blas and lapack using > > gfortran. Compilation of ATLAS seems to work. Also > > I have build a complete lapack > > http://math-atlas.sourceforge.net/errata.html#completelp > > > > I am on SuSE 10.0 x86_64 GNU/Linux > > > > However > > > > python setup.py build failed > > > > Running from numpy source directory. 
> > Warning: not existing path in numpy/distutils: site.cfg > > No module named __svn_version__ > > F2PY Version 2_2179 > > blas_opt_info: > > blas_mkl_info: > > /usr/local/svn/numpy/numpy/distutils/system_info.py:531: > > UserWarning: Library error: libs=['mkl', 'vml', 'guide'] > > found_libs=[] > > warnings.warn("Library error: libs=%s found_libs=%s" % > > \ > > NOT AVAILABLE > > > > atlas_blas_threads_info: > > Setting PTATLAS=ATLAS > > /usr/local/svn/numpy/numpy/distutils/system_info.py:531: > > UserWarning: Library error: libs=['ptf77blas', 'ptcblas', > > 'atlas'] found_libs=[] > > warnings.warn("Library error: libs=%s found_libs=%s" % > > \ > > /usr/local/svn/numpy/numpy/distutils/system_info.py:531: > > UserWarning: Library error: libs=['ptf77blas', 'ptcblas', > > 'atlas'] found_libs=['/usr/local/lib/atlas/libatlas.a'] > > warnings.warn("Library error: libs=%s found_libs=%s" % > > \ > > NOT AVAILABLE > > > > atlas_blas_info: > > /usr/local/svn/numpy/numpy/distutils/system_info.py:531: > > UserWarning: Library error: libs=['f77blas', 'cblas', > > 'atlas'] found_libs=[] > > warnings.warn("Library error: libs=%s found_libs=%s" % > > \ > > FOUND: > > libraries = ['f77blas', 'cblas', 'atlas'] > > library_dirs = ['/usr/local/lib/atlas'] > > language = c > > > > running build_src > > building extension "atlas_version" sources > > creating build > > creating build/src > > adding 'build/src/atlas_version_0x5b360788cdd9eddc.c' > > to sources. > > running build_ext > > customize UnixCCompiler > > customize UnixCCompiler using build_ext > > building 'atlas_version' extension > > compiling C sources > > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 > > -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' > > creating build/temp.linux-x86_64-2.4 > > creating build/temp.linux-x86_64-2.4/build > > creating build/temp.linux-x86_64-2.4/build/src > > compile options: '-Inumpy/core/include > > -I/usr/include/python2.4 -c' > > gcc: build/src/atlas_version_0x5b360788cdd9eddc.c > > gcc -pthread -shared > > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o > > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o > > build/temp.linux-x86_64-2.4/atlas_version.so > > /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: > > /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): > > relocation R_X86_64_32 against `a local symbol' can not be > > used when making a shared object; recompile with -fPIC > > /usr/local/lib/atlas/libatlas.a: could not read symbols: > > Bad value > > collect2: ld returned 1 exit status > > /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: > > /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): > > relocation R_X86_64_32 against `a local symbol' can not be > > used when making a shared object; recompile with -fPIC > > /usr/local/lib/atlas/libatlas.a: could not read symbols: > > Bad value > > collect2: ld returned 1 exit status > > ##### msg: error: Command "gcc -pthread -shared > > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o > > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o > > build/temp.linux-x86_64-2.4/atlas_version.so" failed with > > exit status 1 > > error: Command "gcc -pthread -shared > > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o > > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o > > build/temp.linux-x86_64-2.4/atlas_version.so" failed with > > exit status 1 > > FOUND: > > libraries = ['f77blas', 'cblas', 'atlas'] > > library_dirs 
= ['/usr/local/lib/atlas'] > > language = c > > define_macros = [('NO_ATLAS_INFO', 2)] > > > > Warning: distutils distribution has been initialized, it > > may be too late to add an extension _dotblas > > Traceback (most recent call last): > > File "setup.py", line 76, in ?x86_64 GNU/Linux > > setup_package() > > File "setup.py", line 63, in setup_package > > config.add_subpackage('numpy') > > File > > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > > 592, in add_subpackage > > config_list = > > self.get_subpackage(subpackage_name,subpackage_path) > > File > > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > > 582, in get_subpackage > > subpackage_path) > > File > > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > > 539, in _get_configuration_from_setup_py > > config = setup_module.configuration(*args) > > File "/usr/local/svn/numpy/numpy/setup.py", line 10, in > > configuration > > config.add_subpackage('core') > > File > > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > > 592, in add_subpackage > > config_list = > > self.get_subpackage(subpackage_name,subpackage_path) > > File > > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > > 582, in get_subpackage > > subpackage_path) > > File > > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > > 539, in _get_configuration_from_setup_py > > config = setup_module.configuration(*args) > > File "numpy/core/setup.py", line 217, in configuration > > config.add_data_dir('tests') > > File > > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > > 636, in add_data_dir > > self.add_data_files((ds,filenames)) > > File > > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line > > 702, in add_data_files > > dist.data_files.extend(data_dict.items()) > > AttributeError: 'NoneType' object has no attribute > > 'extend' > > > > I can't say more than that I followed exactly the instructions here: > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > (the full LAPACK issue is covered there, too) > and it worked. > > Christian > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- Hanno Klemm klemm at phys.ethz.ch From nwagner at mecha.uni-stuttgart.de Tue Feb 28 14:28:43 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 28 Feb 2006 20:28:43 +0100 Subject: [SciPy-user] ATLAS and gcc v4.0.2 In-Reply-To: References: <44041808.7050900@hoc.net> <44041EAB.4070404@hoc.net> <44042407.8090106@hoc.net> Message-ID: On Tue, 28 Feb 2006 18:49:06 +0100 "Hanno Klemm" wrote: > > Hi Christian, > > you output looks like a problem I had: > > On my 64bit architecture I had to modify the make file >for atlas and > append an -fPIC at the compiler flags (I tired that at >the > architecture flags, the g77 and the c flags. That did >the job). > Otherwise it seems not possible to use the libraries >with scipy. > > Good luck, > Hanno Hi Hanno, Thank you for your valuable comment. Did you also apply the patch to ATLAS/bin/uumtst.c ? The sanity_test failed here. http://math-atlas.sourceforge.net/errata.html#gcc4 Cheers, Nils > > Christian Kristukat said: > >> Nils Wagner wrote: >> > On Tue, 28 Feb 2006 19:20:55 +0900 >> > Christian Kristukat wrote: >> >> Nils Wagner wrote: >> >>>>> SuSE 10.0 comes with gcc-fortran >> >>>>> >> >>>>> Name : gcc-fortran >> Relocations: >> >>>> You're right. But the executable is called >>'gfortran'. I >> >>>> mixed that up. 
>> >>>> >> >>> I have already build blas and lapack libraries using >>g77 >> >>> according to the installation hints. >> >>> >> >>> Can I use them or must I recompile blas and lapack >>with >> >>> gfortran ? >> >> I'm not an expert on this subject, just sharing my >> >> experiences. If you're using >> >> ATLAS you don't need BLAS as far as I understood. I >> >> followed the instructions in >> >> the link I posted before. Compared to the compilation >> >> time of ATLAS, LAPACK >> >> builds almost instantaneously, so it doesn't hurt to >> >> rebuild LAPACK, I just >> >> don't know wether g77 built LAPACK works together >>with >> >> gfortran ATLAS. >> >> At least I know, that if you're using gfrotran all >>the >> >> time, everything works >> >> fine on SuSE 10.0. >> >> >> >> Good luck, Christian >> >> >> >> _______________________________________________ >> >> SciPy-user mailing list >> >> SciPy-user at scipy.net >> >> http://www.scipy.net/mailman/listinfo/scipy-user >> > >> > >> > >> > I followed your advice and rebuilt blas and lapack >>using >> > gfortran. Compilation of ATLAS seems to work. Also >> > I have build a complete lapack >> > >>http://math-atlas.sourceforge.net/errata.html#completelp >> > >> > I am on SuSE 10.0 x86_64 GNU/Linux >> > >> > However >> > >> > python setup.py build failed >> > >> > Running from numpy source directory. >> > Warning: not existing path in numpy/distutils: >>site.cfg >> > No module named __svn_version__ >> > F2PY Version 2_2179 >> > blas_opt_info: >> > blas_mkl_info: >> > >>/usr/local/svn/numpy/numpy/distutils/system_info.py:531: >> > UserWarning: Library error: libs=['mkl', 'vml', >>'guide'] >> > found_libs=[] >> > warnings.warn("Library error: libs=%s >>found_libs=%s" % >> > \ >> > NOT AVAILABLE >> > >> > atlas_blas_threads_info: >> > Setting PTATLAS=ATLAS >> > >>/usr/local/svn/numpy/numpy/distutils/system_info.py:531: >> > UserWarning: Library error: libs=['ptf77blas', >>'ptcblas', >> > 'atlas'] found_libs=[] >> > warnings.warn("Library error: libs=%s >>found_libs=%s" % >> > \ >> > >>/usr/local/svn/numpy/numpy/distutils/system_info.py:531: >> > UserWarning: Library error: libs=['ptf77blas', >>'ptcblas', >> > 'atlas'] >>found_libs=['/usr/local/lib/atlas/libatlas.a'] >> > warnings.warn("Library error: libs=%s >>found_libs=%s" % >> > \ >> > NOT AVAILABLE >> > >> > atlas_blas_info: >> > >>/usr/local/svn/numpy/numpy/distutils/system_info.py:531: >> > UserWarning: Library error: libs=['f77blas', 'cblas', >> > 'atlas'] found_libs=[] >> > warnings.warn("Library error: libs=%s >>found_libs=%s" % >> > \ >> > FOUND: >> > libraries = ['f77blas', 'cblas', 'atlas'] >> > library_dirs = ['/usr/local/lib/atlas'] >> > language = c >> > >> > running build_src >> > building extension "atlas_version" sources >> > creating build >> > creating build/src >> > adding >>'build/src/atlas_version_0x5b360788cdd9eddc.c' >> > to sources. 
>> > running build_ext >> > customize UnixCCompiler >> > customize UnixCCompiler using build_ext >> > building 'atlas_version' extension >> > compiling C sources >> > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG >>-O2 >> > -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' >> > creating build/temp.linux-x86_64-2.4 >> > creating build/temp.linux-x86_64-2.4/build >> > creating build/temp.linux-x86_64-2.4/build/src >> > compile options: '-Inumpy/core/include >> > -I/usr/include/python2.4 -c' >> > gcc: build/src/atlas_version_0x5b360788cdd9eddc.c >> > gcc -pthread -shared >> > > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o >> > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o >> > build/temp.linux-x86_64-2.4/atlas_version.so >> > > /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: > >> > /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): >> > relocation R_X86_64_32 against `a local symbol' can >>not be >> > used when making a shared object; recompile with -fPIC >> > /usr/local/lib/atlas/libatlas.a: could not read >>symbols: >> > Bad value >> > collect2: ld returned 1 exit status >> > > /usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: > >> > /usr/local/lib/atlas/libatlas.a(ATL_buildinfo.o): >> > relocation R_X86_64_32 against `a local symbol' can >>not be >> > used when making a shared object; recompile with -fPIC >> > /usr/local/lib/atlas/libatlas.a: could not read >>symbols: >> > Bad value >> > collect2: ld returned 1 exit status >> > ##### msg: error: Command "gcc -pthread -shared >> > > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o >> > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o >> > build/temp.linux-x86_64-2.4/atlas_version.so" failed >>with >> > exit status 1 >> > error: Command "gcc -pthread -shared >> > > build/temp.linux-x86_64-2.4/build/src/atlas_version_0x5b360788cdd9eddc.o >> > -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o >> > build/temp.linux-x86_64-2.4/atlas_version.so" failed >>with >> > exit status 1 >> > FOUND: >> > libraries = ['f77blas', 'cblas', 'atlas'] >> > library_dirs = ['/usr/local/lib/atlas'] >> > language = c >> > define_macros = [('NO_ATLAS_INFO', 2)] >> > >> > Warning: distutils distribution has been initialized, >>it >> > may be too late to add an extension _dotblas >> > Traceback (most recent call last): >> > File "setup.py", line 76, in ?x86_64 GNU/Linux >> > setup_package() >> > File "setup.py", line 63, in setup_package >> > config.add_subpackage('numpy') >> > File >> > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", >>line >> > 592, in add_subpackage >> > config_list = >> > self.get_subpackage(subpackage_name,subpackage_path) >> > File >> > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", >>line >> > 582, in get_subpackage >> > subpackage_path) >> > File >> > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", >>line >> > 539, in _get_configuration_from_setup_py >> > config = setup_module.configuration(*args) >> > File "/usr/local/svn/numpy/numpy/setup.py", line >>10, in >> > configuration >> > config.add_subpackage('core') >> > File >> > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", >>line >> > 592, in add_subpackage >> > config_list = >> > self.get_subpackage(subpackage_name,subpackage_path) >> > File >> > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", >>line >> > 582, in get_subpackage >> > subpackage_path) >> > File >> > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", >>line >> > 539, in 
>> > config = setup_module.configuration(*args)
>> > File "numpy/core/setup.py", line 217, in configuration
>> > config.add_data_dir('tests')
>> > File
>> > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line
>> > 636, in add_data_dir
>> > self.add_data_files((ds,filenames))
>> > File
>> > "/usr/local/svn/numpy/numpy/distutils/misc_util.py", line
>> > 702, in add_data_files
>> > dist.data_files.extend(data_dict.items())
>> > AttributeError: 'NoneType' object has no attribute
>> > 'extend'
>> >
>>
>> I can't say more than that I followed exactly the instructions here:
>> http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97
>> (the full LAPACK issue is covered there, too)
>> and it worked.
>>
>> Christian
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.net
>> http://www.scipy.net/mailman/listinfo/scipy-user
>>
>
>
> --
> Hanno Klemm
> klemm at phys.ethz.ch
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user

From klemm at phys.ethz.ch  Tue Feb 28 14:44:57 2006
From: klemm at phys.ethz.ch (Hanno Klemm)
Date: Tue, 28 Feb 2006 20:44:57 +0100
Subject: [SciPy-user] ATLAS and gcc v4.0.2
In-Reply-To:
References: <44041808.7050900@hoc.net> <44041EAB.4070404@hoc.net>
	<44042407.8090106@hoc.net>
Message-ID:

Hi Nils,

no, I compiled with gcc 3.x, therefore I didn't run into that problem.

Hanno

On Feb 28, 2006, at 8:28 PM, Nils Wagner wrote:

> On Tue, 28 Feb 2006 18:49:06 +0100
>  "Hanno Klemm" wrote:
>>
>> Hi Christian,
>>
>> your output looks like a problem I had:
>>
>> On my 64bit architecture I had to modify the make file
>> for atlas and
>> append an -fPIC at the compiler flags (I tried that at
>> the
>> architecture flags, the g77 and the c flags. That did
>> the job).
>> Otherwise it seems not possible to use the libraries
>> with scipy.
>>
>> Good luck,
>> Hanno
>
> Hi Hanno,
>
> Thank you for your valuable comment. Did you also
> apply the patch to ATLAS/bin/uumtst.c? The sanity_test
> failed here.
> http://math-atlas.sourceforge.net/errata.html#gcc4
>
> Cheers,
>
> Nils
>
>>
>> Christian Kristukat said:
>>
>>> Nils Wagner wrote:
>>>> On Tue, 28 Feb 2006 19:20:55 +0900
>>>>  Christian Kristukat wrote:
>>>>> Nils Wagner wrote:
>>>>>>>> SuSE 10.0 comes with gcc-fortran
>>>>>>>>
>>>>>>>> Name : gcc-fortran    Relocations:
>>>>>>> You're right. But the executable is called
>>>>>>> 'gfortran'. I mixed that up.
>>>>>>>
>>>>>> I have already built blas and lapack libraries using g77
>>>>>> according to the installation hints.
>>>>>>
>>>>>> Can I use them or must I recompile blas and lapack with
>>>>>> gfortran ?
>>>>> I'm not an expert on this subject, just sharing my
>>>>> experiences. If you're using
>>>>> ATLAS you don't need BLAS as far as I understood. I
>>>>> followed the instructions in
>>>>> the link I posted before. Compared to the compilation
>>>>> time of ATLAS, LAPACK
>>>>> builds almost instantaneously, so it doesn't hurt to
>>>>> rebuild LAPACK, I just
>>>>> don't know whether g77-built LAPACK works together with
>>>>> gfortran ATLAS.
>>>>> At least I know that if you're using gfortran all the
>>>>> time, everything works
>>>>> fine on SuSE 10.0.
>>>>>
>>>>> Good luck, Christian
>>>>>
>>>>> _______________________________________________
>>>>> SciPy-user mailing list
>>>>> SciPy-user at scipy.net
>>>>> http://www.scipy.net/mailman/listinfo/scipy-user
>>>>
>>>> [build log, traceback, and signatures snipped -- identical to the
>>>> output quoted in full above]
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user
>

--
Hanno Klemm
klemm at itp.phys.ethz.ch          ETH Zurich
tel: +41-1-6332580               Institute for theoretical physics
mobile: +41-79-4500428           http://www.mth.kcl.ac.uk/~klemm