From lists at UltimateG.com Thu Aug 1 07:27:16 2002 From: lists at UltimateG.com (Mark Evans) Date: Thu, 1 Aug 2002 06:27:16 -0500 Subject: [SciPy-dev] Open Office Message-ID: <12215839706.20020801062716@UltimateG.com> Hello SciPy, Regarding the new plotting work I'd like to point out that similar projects are underway at OpenOffice.org. http://graphics.openoffice.org/chart/chart.html There may be room to leverage labor and talent on both sides. If nothing else, their design documents may be worth reading. OpenOffice uses a component model called UNO and there is a Python wrapper for it. http://polysorbate.org/?work/pyuno Forgive me for speaking out of school as it were. I have not followed SciPy and in fact am a newcomer. It's been a while since I was deep into Python, but the site is encouraging. Mark From prabhu at aero.iitm.ernet.in Thu Aug 1 13:29:50 2002 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 1 Aug 2002 22:59:50 +0530 Subject: [SciPy-dev] Open Office In-Reply-To: <12215839706.20020801062716@UltimateG.com> References: <12215839706.20020801062716@UltimateG.com> Message-ID: <15689.28686.65846.413850@monster.linux.in> >>>>> "ME" == Mark Evans writes: ME> Hello SciPy, Regarding the new plotting work I'd like to point ME> out that similar projects are underway at OpenOffice.org. ME> http://graphics.openoffice.org/chart/chart.html ME> There may be room to leverage labor and talent on both sides. ME> If nothing else, their design documents may be worth reading. ME> OpenOffice uses a component model called UNO and there is a ME> Python wrapper for it. 
http://polysorbate.org/?work/pyuno This sounds really good but OO.o is a huge project. I'm not sure it is going to be easy to simply join without investing a huge amount of time on this. I also heard that building OO.o is a big pain. I'm not sure what the others think of this. cheers, prabhu From lists at UltimateG.com Fri Aug 2 15:51:48 2002 From: lists at UltimateG.com (Mark Evans) Date: Fri, 2 Aug 2002 14:51:48 -0500 Subject: [SciPy-dev] Re: Open Office Message-ID: <591246522.20020802145148@UltimateG.com> > This sounds really good but OO.o is a huge project. I'm not sure it > is going to be easy to simply join without investing a huge amount of > time on this. I also heard that building OO.o is a big pain. I'm not > sure what the others think of this. > > cheers, > prabhu Those are fair points but don't you think joining OO would be easier than constructing an entire chart API from scratch, all alone? I don't think SciPy would have to build all of OO, just the UNO interface layer. Someone has already done a Python wrapper for UNO. UNO components are like COM components (and in fact there exists a COM/UNO bridge). Regarding building OO I think some pain is natural with a 6.7 million line cross-platform package, although these comments may invalidate some previous criticisms -- http://udk.openoffice.org/common/man/uno.html "StarOffice mainly uses the C++ -in-process functionality of UNO. Before UNO, the StarOffice development suffered very much from incompatible changes (e.g., adding a new virtual method or a new member to a class) in one of the base libraries. 
This forced a complete rebuild of the product, which roughly consumed 2 days and was done only once a week. These incompatible updates were reduced considerably by using UNO, and as a result the whole work became more efficient. Please have a look at this document, Uno_the_Idea, for a more complete explanation." From eric at enthought.com Fri Aug 2 16:37:06 2002 From: eric at enthought.com (eric jones) Date: Fri, 2 Aug 2002 15:37:06 -0500 Subject: [SciPy-dev] Re: Open Office In-Reply-To: <591246522.20020802145148@UltimateG.com> Message-ID: <001a01c23a64$5e316030$6b01a8c0@ericlaptop> Hey Mark, Thanks for the pointer to the OO project. In general, I like to keep as much code at the Python level as possible. Adding a dependency on a big honking library that handles many things in C++ that are easily handled in Python makes for slower development and more difficult builds. Joining the efforts obviously has benefits, but I expect the needs of the communities are not entirely coincident (business vs. scientific plotting). If building a solid plotting framework were a huge job (multiple person years), then working to share the effort would be well worth it. But, building a solid plotting framework is what I'd consider a "medium sized" job -- about one person year to build a robust Python API that is easy for others to extend. In this sort of project, putting up with the C++ hassles and the subset of 6.7 million lines that we would need is a large price to pay. To top things off, most of the plot architecture stuff falls in the category of things that Python does really really well. Chaco is already reasonably far along with the low level API, and Dave Morrill has a first cut at a very impressive high level API working. Dave's code isn't in CVS yet because it is wxPython specific, so you'll have to take my word for it for now. He started from plt, but I think only a few of the axis layout algorithms survived his re-write. 
Dave's current code is an intermediate step towards the final Chaco implementation, but, as an intermediate step, it is very far along. We will start re-working/integrating this work into Chaco in the 2nd half of August. Dave's high level API is currently 6000-7000 lines of pure Python. Chaco's low level API is another few thousand lines. They'll both grow, but I doubt they'll more than triple in the final version and even that would be surprising. This would mean 30000 lines of mostly Python code (hmm I'm leaving out the freetype C source code in this count). This is probably 1/10 the size of the code base if we used OO. All this to say, the plan is to stick as close as possible to a pure python library with C/C++ used only in critical sections. Thanks for the pointer to the OO effort though. It is always good to have other APIs as examples -- especially as we move into designing the higher level portion of Chaco. And, I'll be very interested in playing with OO when a solid Python API is around for it. I just don't want to be the one debugging it for the sake of a plotting package. :) see ya, eric > -----Original Message----- > From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On > Behalf Of Mark Evans > Sent: Friday, August 02, 2002 2:52 PM > To: scipy-dev at scipy.net > Subject: [SciPy-dev] Re: Open Office > > > This sounds really good but OO.o is a huge project. I'm not sure it > > is going to be easy to simply join without investing a huge amount of > > time on this. I also heard that building OO.o is a big pain. I'm not > > sure what the others think of this. > > > > cheers, > > prabhu > > Those are fair points but don't you think joining OO would be easier > than constructing an entire chart API from scratch, all alone? > > I don't think SciPy would have to build all of OO, just the UNO > interface layer. Someone has already done a Python wrapper for UNO. 
> UNO components are like COM components (and in fact there exists a > COM/UNO bridge). > > Regarding building OO I think some pain is natural with a 6.7 million > line cross-platform package, although these comments may invalidate > some previous criticisms -- > > http://udk.openoffice.org/common/man/uno.html > > "StarOffice mainly uses the C++ -in-process functionality of UNO. > Before UNO, the StarOffice development suffered very much from > incompatible changes (e.g., adding a new virtual method or a new > member to a class) in one of the base libraries. This forced a > complete rebuild of the product, which roughly consumed 2 days and was > done only once a week. These incompatible updates were reduced > considerably by using UNO, and as a result the whole work became more > efficient. Please have a look at this document, Uno_the_Idea, for a > more complete explanation." > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From lists at UltimateG.com Fri Aug 2 16:43:08 2002 From: lists at UltimateG.com (Mark Evans) Date: Fri, 2 Aug 2002 15:43:08 -0500 Subject: [SciPy-dev] Digital Mars C++ Compiler In-Reply-To: <591246522.20020802145148@UltimateG.com> References: <591246522.20020802145148@UltimateG.com> Message-ID: <54326070.20020802154308@UltimateG.com> This is a fantastic, five-star, free C++ compiler for Windows. Longest continuously developed C/C++ compiler for Windows. Full source code to runtime libraries. A debugger and IDE are available on a cheap CD-ROM. The console tools are all free for download. Many third party debuggers will also work. http://www.DigitalMars.com "Digital Mars C++ outperforms Visual C++ 6.0 with a factor greater than 2. It outperforms Borland C++ Builder 5.0 with a factor of at least 2." 
http://www.janknepper.com/Programming/C++/C++.html From eric at enthought.com Fri Aug 2 18:09:45 2002 From: eric at enthought.com (eric jones) Date: Fri, 2 Aug 2002 17:09:45 -0500 Subject: [SciPy-dev] Digital Mars C++ Compiler In-Reply-To: <54326070.20020802154308@UltimateG.com> Message-ID: <001b01c23a71$4f316d70$6b01a8c0@ericlaptop> I just downloaded this and tried it on a few files. It looks dang fast for the compile step. That would be very nice for weave, although MSVC does pretty well on windows (gcc is slower, but still works fine). No word on how fast the code runs because I didn't try anything too fancy. Now if distutils only had support for it... I'm very ready for a blessed gcc 3.1 release for mingw32. eric > -----Original Message----- > From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On > Behalf Of Mark Evans > Sent: Friday, August 02, 2002 3:43 PM > To: scipy-dev at scipy.net > Subject: [SciPy-dev] Digital Mars C++ Compiler > > This is a fantastic, five-star, free C++ compiler for Windows. > Longest continuously developed C/C++ compiler for Windows. Full > source code to runtime libraries. A debugger and IDE are available on > a cheap CD-ROM. The console tools are all free for download. Many > third party debuggers will also work. > http://www.DigitalMars.com > > "Digital Mars C++ outperforms Visual C++ 6.0 with a factor greater than > 2. It outperforms Borland C++ Builder 5.0 with a factor of at least > 2." 
> http://www.janknepper.com/Programming/C++/C++.html > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From lists at UltimateG.com Sat Aug 3 14:15:53 2002 From: lists at UltimateG.com (Mark Evans) Date: Sat, 3 Aug 2002 13:15:53 -0500 Subject: [SciPy-dev] Re: Open Office In-Reply-To: <20020803170000.2857.76628.Mailman@shaft> References: <20020803170000.2857.76628.Mailman@shaft> Message-ID: <237054674.20020803131553@UltimateG.com> All right Eric. I'm not very well acquainted with Chaco. It will take a while to learn about this project. The only points I would make in OO's defence are that it already offers a well engineered component / multi-language architecture. The whole emphasis of their new chart effort is a UNO-compliant chart layer. UNO supports Java and Python along with C++. I would advise not forming fixed opinions until you have actually investigated their code and documents. Possibly they might even adopt Chaco as their engine. With UNO it's not a question of disentangling huge C++ libraries. It's more a question of writing UNO plug-ins in any supported language. Another thing -- do not force Chaco users into one class library like wxPython. There are many others. The OO views framework is one. FOX is another. And I'm getting more and more interested in a library called Visual Component Framework on Sourceforge which to my mind has the best architecture of any open source project including wx and Qt. I agree that keeping as much code as possible in Python is a good thing. This is after all Scientific Python. Mark From eric at enthought.com Sat Aug 3 16:46:17 2002 From: eric at enthought.com (eric jones) Date: Sat, 3 Aug 2002 15:46:17 -0500 Subject: [SciPy-dev] Re: Open Office In-Reply-To: <237054674.20020803131553@UltimateG.com> Message-ID: <002201c23b2e$d11cb890$6b01a8c0@ericlaptop> Hey Mark, > > All right Eric. I'm not very well acquainted with Chaco. 
It will > take a while to learn about this project. For info on Chaco, look here: http://www.scipy.org/site_content/chaco > The only points I would make in OO's defence are that it already > offers a well engineered component / multi-language architecture. > The whole emphasis of their new chart effort is a UNO-compliant > chart layer. UNO supports Java and Python along with C++. > > I would advise not forming fixed opinions until you have > actually investigated their code and documents. > Possibly they might even adopt Chaco as their engine. With UNO it's > not a question of disentangling huge C++ libraries. It's more a > question of writing UNO plug-ins in any supported language. As far as examining the docs of OO, it is always a matter of where to invest time. In the "research phase" before we began coding, I looked at probably 15 different graphics libraries for a couple of weeks (I didn't find OO at the time). The chosen direction was based on my best judgment of what balanced power, implementation effort, and ease in portability. I'm happy to say that I still agree with the decision to build a DisplayPDF engine. This isn't always the case with early design decisions that I have to live with later in the project. :-) There are a zillion component architectures out there -- COM, XPCOM, CORBA, in a way, Zope3, UNO, etc. Hopefully, once Chaco is mature, and if it proves useful beyond the Python community, interfaces for these architectures will be created because components are extremely powerful. For now, we're focusing on building the graphics engine in a platform independent way. > > Another thing -- do not force Chaco users into one class library > like wxPython. There are many others. The OO views framework is one. > FOX is another. And I'm getting more and more interested in a library > called Visual Component Framework on Sourceforge which to my mind has > the best architecture of any open source project including wx and Qt. 
One of Chaco's primary design goals is to be platform independent. Currently, we have decent functionality in wxPython, PDF, and native Mac OSX. OpenGL also has a reasonable start. I'd be very glad to see OO backends written, etc. If they have a "drawing" canvas, then it is a matter of mapping the low level API of Chaco onto their canvas. The low level API is based on DisplayPDF. Thanks for all your comments, eric
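[Editorial note: Eric's point above about backend portability — that supporting a new toolkit "is a matter of mapping the low level API of Chaco onto their canvas" — is easy to illustrate. The sketch below is not Chaco's actual API; every class and method name in it is hypothetical, chosen only to show the pattern of plot code written against an abstract DisplayPDF-style drawing interface with pluggable backends.]

```python
# Hypothetical sketch of a backend-independent drawing layer.
# None of these names are Chaco's; they only illustrate the pattern.

class GraphicsContext:
    """Abstract low-level drawing API (a DisplayPDF-flavoured path model)."""
    def move_to(self, x, y): raise NotImplementedError
    def line_to(self, x, y): raise NotImplementedError
    def stroke_path(self): raise NotImplementedError

class RecordingContext(GraphicsContext):
    """A trivial 'backend' that just records the drawing commands it receives.
    A wxPython, PDF, or OpenGL backend would implement the same three methods
    against its own canvas."""
    def __init__(self):
        self.commands = []
    def move_to(self, x, y): self.commands.append(("move_to", x, y))
    def line_to(self, x, y): self.commands.append(("line_to", x, y))
    def stroke_path(self): self.commands.append(("stroke_path",))

def draw_polyline(gc, points):
    """Plot code written only against the abstract API; any backend works."""
    x0, y0 = points[0]
    gc.move_to(x0, y0)
    for x, y in points[1:]:
        gc.line_to(x, y)
    gc.stroke_path()

gc = RecordingContext()
draw_polyline(gc, [(0, 0), (1, 2), (2, 1)])
print(gc.commands[0])   # ('move_to', 0, 0)
```

Because the plotting code sees only the abstract interface, adding an OO backend (as Eric suggests) would mean implementing one small class, not porting the plot architecture.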
From lists at UltimateG.com Mon Aug 5 00:39:50 2002 From: lists at UltimateG.com (Mark Evans) Date: Sun, 4 Aug 2002 23:39:50 -0500 Subject: [SciPy-dev] Re: Open Office In-Reply-To: <20020804170001.12807.79008.Mailman@shaft> References: <20020804170001.12807.79008.Mailman@shaft> Message-ID: <2240111647.20020804233950@UltimateG.com> eric> In the "research phase" before we began coding, I looked eric> at probably 15 different graphics libraries for a couple of weeks Then the OO people could benefit from your wisdom and I plead with you to share it with them. The transfer of wisdom can go both ways. They are in preliminary stages (?) of designing their new chart architecture. To me the low level drawing API is not nearly so important as the high level API. I've used packages like Mathematica for years and years, and each of them has its problems. I don't care how lines are drawn but I care a lot about how the API lets me structure charts. eric> There are a zillion component architectures out there -- COM, XPCOM, eric> CORBA, in a way, Zope3, UNO, etc. Hopefully, once Chaco is mature Nah...only half a dozen major players; and not all of those are platform independent. 
Mark From asfandyar_k at yahoo.com Tue Aug 6 03:21:03 2002 From: asfandyar_k at yahoo.com (Asfandyar Khan) Date: Tue, 6 Aug 2002 00:21:03 -0700 (PDT) Subject: [SciPy-dev] Problems using Gplt Message-ID: <20020806072103.15049.qmail@web11501.mail.yahoo.com> Hi, I am running Scipy on: Platform information: Redhat Linux 7.0 (posix linux2) Python version: 2.2 Python Numeric version: 20.2.1 Whenever I try to plot using gplt, e.g., from scipy import gplt from Numeric import * a=array([1,2,3]) b=array([4,5,6]) gplt.plot(a,b) I get the following error: IOError: [Errno 32] Broken Pipe Traceback shows: gplt/interface.py line 106 in plot gplt/pyPlot.py line 132 in plot gplt/pyPlot.py line 702 in _init_plot gplt/pyPlot.py line 820 in _send def plot(*data): _validate_active() apply(_active.plot,data) <----- def plot(self,*data): self._init_plot() self._plot(data) <----- def _init_plot(self): self.m_rmin = BIG self.m_rmax = SMALL self._send('reset') <----- self.grid('on') self.angles(self.m_angle) def _send(self,cmd): if(len(cmd) < 200): self.g.write(cmd + '\n') self.g.flush() <----- else: filename = tempfile.mktemp() f = open(filename, 'w') f.write(cmd + '\n') f.close(); fn = string.replace(filename,'\\','/') self.g.write('load "' + fn + '"\n') self.g.flush() self.m_tmpfiles.append(filename) time.sleep(.15) Regards, Asfandyar. From eric at enthought.com Sat Aug 10 03:56:57 2002 From: eric at enthought.com (eric jones) Date: Sat, 10 Aug 2002 02:56:57 -0500 Subject: [SciPy-dev] multi-variable constrained minimization? 
Message-ID: <00e001c24043$92aec930$6b01a8c0@ericlaptop> Is there a constrained minimization function for multi-variable functions? optimize.fmin_bound is for scalar functions only from what I can tell. If the answer is "no", the next question is whether this is because there isn't a good Fortran/C function for this in the libraries we've wrapped, or is it because the ones there aren't any good. The next question would be, does anyone have a suggestion for a library we should use for this capability? There is a library that I've used before called CFSQP by Andre Tits that is very good, but it is not open source and can't be distributed with scipy (you can get the source by emailing the author). This pretty much rules it out. Any other suggestions? Thanks, eric From rossini at blindglobe.net Sat Aug 10 12:51:30 2002 From: rossini at blindglobe.net (A.J. Rossini) Date: 10 Aug 2002 09:51:30 -0700 Subject: [SciPy-dev] multi-variable constrained minimization? In-Reply-To: <00e001c24043$92aec930$6b01a8c0@ericlaptop> References: <00e001c24043$92aec930$6b01a8c0@ericlaptop> Message-ID: <87r8h64pyl.fsf@jeeves.blindglobe.net> >>>>> "eric" == eric jones writes: eric> Is there a constrained minimization function for eric> multi-variable functions? optimize.fmin_bound is for scalar eric> functions only from what I can tell. eric> If the answer is "no", the next question is whether this is eric> because there isn't a good Fortran/C function for this in eric> the libraries we've wrapped, or is it because the ones there eric> aren't any good. It's hard to do, and at least I've not found one yet. I've been looking for years for one (my thesis, back in the days of old, used CFSQP, and I can't distribute the results without such a beast!) There are variants, i.e. using penalty functions when the constraints are broken. LibWN (I might have this wrong, but it's a library by Will Naylor) had such a beast, combined with a CG approach. 
eric> The next question would be, does anyone have a suggestion eric> for a library we should use for this capability? There is a eric> library that I've used before called CFSQP by Andre Tits eric> that is very good, but it is not open source and can't be eric> distributed with scipy (you can get the source by emailing eric> the author). This pretty much rules it out. Any other eric> suggestions? The one I suggested above. But it's sort of "personalized" (i.e. by Will). I don't have a current link to it (it's been 3+ years since I looked). I'm sure there might be others, but there probably are very few. best, -tony -- A.J. Rossini Rsrch. Asst. Prof. of Biostatistics U. of Washington Biostatistics rossini at u.washington.edu FHCRC/SCHARP/HIV Vaccine Trials Net rossini at scharp.org -------------- http://software.biostat.washington.edu/ ---------------- FHCRC: M: 206-667-7025 (fax=4812)|Voicemail is pretty sketchy/use Email UW: Th: 206-543-1044 (fax=3286)|Change last 4 digits of phone to FAX (my tuesday/wednesday/friday locations are completely unpredictable.) From kern at caltech.edu Sat Aug 10 13:09:11 2002 From: kern at caltech.edu (Robert Kern) Date: Sat, 10 Aug 2002 10:09:11 -0700 Subject: [SciPy-dev] multi-variable constrained minimization? In-Reply-To: <00e001c24043$92aec930$6b01a8c0@ericlaptop> References: <00e001c24043$92aec930$6b01a8c0@ericlaptop> Message-ID: <20020810170910.GA30021@taliesen.caltech.edu> On Sat, Aug 10, 2002 at 02:56:57AM -0500, eric jones wrote: [snip] > The next question would be, does anyone have a suggestion for a library > we should use for this capability? 
I'm not familiar enough with the problem domain to make recommendations, but have you looked at the Decision Tree for Optimization Software? http://plato.la.asu.edu/guide.html -- Robert Kern Ruddock House President kern at caltech.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From eric at enthought.com Sat Aug 10 17:50:24 2002 From: eric at enthought.com (eric jones) Date: Sat, 10 Aug 2002 16:50:24 -0500 Subject: [SciPy-dev] multi-variable constrained minimization? In-Reply-To: <20020810170910.GA30021@taliesen.caltech.edu> Message-ID: <00e601c240b7$eeb38350$6b01a8c0@ericlaptop> Hey Robert, A useful source -- thanks. This looks like the section of interest: http://plato.la.asu.edu/topics/problems/nlores.html#QP-problem If anyone has comments on any of the packages listed there, I'd be glad to hear them. eric > -----Original Message----- > From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On > Behalf Of Robert Kern > Sent: Saturday, August 10, 2002 12:09 PM > To: scipy-dev at scipy.net > Subject: Re: [SciPy-dev] multi-variable constrained minimization? > > On Sat, Aug 10, 2002 at 02:56:57AM -0500, eric jones wrote: > > [snip] > > > The next question would be, does anyone have a suggestion for a library > > we should use for this capability? > > I'm not familiar enough with the problem domain to make recommendations, > but have you looked at the Decision Tree for Optimization Software? > > http://plato.la.asu.edu/guide.html > > -- > Robert Kern > Ruddock House President > kern at caltech.edu > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From rossini at blindglobe.net Sat Aug 10 18:21:50 2002 From: rossini at blindglobe.net (A.J. 
Rossini) Date: 10 Aug 2002 15:21:50 -0700 Subject: [SciPy-dev] multi-variable constrained minimization? In-Reply-To: <00e601c240b7$eeb38350$6b01a8c0@ericlaptop> References: <00e601c240b7$eeb38350$6b01a8c0@ericlaptop> Message-ID: <87znvue4n5.fsf@jeeves.blindglobe.net> >>>>> "eric" == eric jones writes: eric> Hey Robert, eric> A useful source -- thanks. eric> This looks like the section of interest: eric> http://plato.la.asu.edu/topics/problems/nlores.html#QP-problem eric> If anyone has comments on any of the packages listed there, I'd be glad eric> to hear them. Check licenses carefully. I had to reject a number of them (donlp, in the same section as CFSQP), because of license problems. That being said, there are a number of new ones I'd not seen before. best, -tony -- A.J. Rossini Rsrch. Asst. Prof. of Biostatistics U. of Washington Biostatistics rossini at u.washington.edu FHCRC/SCHARP/HIV Vaccine Trials Net rossini at scharp.org -------------- http://software.biostat.washington.edu/ ---------------- FHCRC: M: 206-667-7025 (fax=4812)|Voicemail is pretty sketchy/use Email UW: Th: 206-543-1044 (fax=3286)|Change last 4 digits of phone to FAX (my tuesday/wednesday/friday locations are completely unpredictable.) From pearu at cens.ioc.ee Mon Aug 12 04:04:49 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 12 Aug 2002 11:04:49 +0300 (EEST) Subject: [SciPy-dev] Re: [Numpy-discussion] QR algorithms in Python In-Reply-To: <200208120000.00948.james.analytis@physics.ox.ac.uk> Message-ID: On Mon, 12 Aug 2002, James G Analytis wrote: > Hi, > I'm looking for a QR algorithm that can decompose complex matrices. The > LinearAlgebra package from Numeric doesn't seem to have it. SciPy's linalg > package is still on the way, so I can't use that either. 
Try SciPy's linalg package again (from CVS), the QR function is working now: >>> from scipy import linalg >>> from Numeric import dot >>> q,r=linalg.qr([[1,2],[3j,4]]) >>> print q [[-0.31622777+0.j -0.78935222-0.52623481j] [ 0. -0.9486833j -0.1754116 +0.26311741j]] >>> print r [[-3.16227766+0.j -0.63245553+3.79473319j] [ 0. +0.j -2.28035085+0.j ]] >>> print dot(q,r) [[ 1. +0.00000000e+00j 2. -8.71264866e-16j] [ 0. +3.00000000e+00j 4. +3.09051829e-16j]] Regards, Pearu From oliphant.travis at ieee.org Mon Aug 12 12:35:02 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 12 Aug 2002 10:35:02 -0600 Subject: [SciPy-dev] multi-variable constrained minimization? In-Reply-To: <00e001c24043$92aec930$6b01a8c0@ericlaptop> References: <00e001c24043$92aec930$6b01a8c0@ericlaptop> Message-ID: <1029170104.19510.4.camel@travis> On Sat, 2002-08-10 at 01:56, eric jones wrote: > Is there a constrained minimization function for multi-variable > functions? optimize.fmin_bound is for scalar functions only from what I > can tell. No. > > If the answer is "no", the next question is whether this is because > there isn't a good Fortran/C function for this in the libraries we've > wrapped, or is it because the ones there aren't any good. > I haven't been able to locate any open source constrained minimization code but I haven't looked recently. This is an area of active research, I believe, and so codes aren't readily available. > The next question would be, does anyone have a suggestion for a library > we should use for this capability? There is a library that I've used > before called CFSQP by Andre Tits that is very good, but it is not open > source and can't be distributed with scipy (you can get the source by > emailing the author). This pretty much rules it out. Any other > suggestions? Perhaps we will have to write our own code for this. 
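[Editorial note: the penalty-function idea Tony mentioned earlier in the thread — add a cost when constraints are violated, then minimize unconstrained — is simple enough to sketch in pure Python. This is an illustration of the general technique only, not SciPy code; the function names, the quadratic penalty form, and the weight schedule are all mine.]

```python
# Quadratic-penalty sketch for constrained minimization (hypothetical code).
# Minimize f(x) = x0^2 + x1^2 subject to x0 + x1 >= 1.
# The exact constrained optimum is x0 = x1 = 0.5.

def gradient_descent(grad, x, lr, iters=5000):
    """Plain fixed-step gradient descent on a smooth function."""
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def penalized_grad(mu):
    """Gradient of f(x) + mu * max(0, 1 - x0 - x1)^2."""
    def grad(x):
        v = max(0.0, 1.0 - x[0] - x[1])   # constraint violation; 0 if feasible
        return [2.0 * x[0] - 2.0 * mu * v,
                2.0 * x[1] - 2.0 * mu * v]
    return grad

x = [0.0, 0.0]
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Increase the penalty weight, warm-starting from the previous solution;
    # the step size shrinks with mu to keep the iteration stable.
    x = gradient_descent(penalized_grad(mu), x, lr=0.5 / (1.0 + 2.0 * mu))

print(x)   # close to [0.5, 0.5]
```

As mu grows, the minimizer of the penalized objective (here mu/(1 + 2*mu) in each coordinate) approaches the constrained optimum. Serious codes like the SQP family that the thread discusses are far more sophisticated, but this is the fallback when no distributable library exists.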
I'd prefer to use somebody else's code but they may be unwilling to open it because this is an area that financial people use to accomplish their ends and are probably willing to pay $$ for the capability. -Travis From james.analytis at physics.ox.ac.uk Tue Aug 13 07:35:04 2002 From: james.analytis at physics.ox.ac.uk (James G Analytis) Date: Tue, 13 Aug 2002 12:35:04 +0100 Subject: [SciPy-dev] Re: [Numpy-discussion] QR algorithms in Python In-Reply-To: References: Message-ID: <200208131235.05006.james.analytis@physics.ox.ac.uk> Dear Pearu, SciPy's linalg modules are just what I need. However, I get the following problem on importing SciPy: [analytis at toomey analytis]$ python2 Python 2.2 (#1, Apr 12 2002, 15:29:57) [GCC 2.96 20000731 (Red Hat Linux 7.2 2.96-109)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy exceptions.ImportError: /usr/lib/python2.2/site-packages/scipy/linalg/flapack.so: undefined symbol: sgesdd_ exceptions.ImportError: /usr/lib/python2.2/site-packages/scipy/linalg/_flinalg.so: undefined symbol: dlaswp_ Traceback (most recent call last): File "<stdin>", line 1, in ? File "/tmp/SciPyTest/linux2/lib/python2.2/site-packages/scipy/__init__.py", line 42, in ? File "/usr/lib/python2.2/site-packages/scipy/special/__init__.py", line 328, in ? import orthogonal File "/usr/lib/python2.2/site-packages/scipy/special/orthogonal.py", line 59, in ? from scipy.linalg import eig File "/tmp/SciPyTest/linux2/lib/python2.2/site-packages/scipy/linalg/__init__.py", line 40, in ? File "/tmp/SciPyTest/linux2/lib/python2.2/site-packages/scipy/linalg/basic.py", line 17, in ? ImportError: /usr/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so: undefined symbol: ieeeck_ >>> I am running standard Red Hat 7.3 with a smp kernel, on a dual-processor PIII. Is there anything I should watch out for (compiler issues perhaps)? I appreciate the help! 
Cheers, James On Monday 12 Aug 2002 9:04 am, Pearu Peterson wrote: > On Mon, 12 Aug 2002, James G Analytis wrote: > > Hi, > > I'm looking for a QR algorithm that can decompose complex matrices. The > > LinearAlgebra package from Numeric doesn't seem to have it. SciPy's > > linalg package is still on the way, so I can't use that either. > > Try SciPy's linalg package again (from CVS), the QR functions is working > > now: > >>> from scipy import linalg > >>> from Numeric import dot > >>> q,r=linalg.qr([[1,2],[3j,4]]) > >>> print q > > [[-0.31622777+0.j -0.78935222-0.52623481j] > [ 0. -0.9486833j -0.1754116 +0.26311741j]] > > >>> print r > > [[-3.16227766+0.j -0.63245553+3.79473319j] > [ 0. +0.j -2.28035085+0.j ]] > > >>> print dot(q,r) > > [[ 1. +0.00000000e+00j 2. -8.71264866e-16j] > [ 0. +3.00000000e+00j 4. +3.09051829e-16j]] > > Regards, > Pearu From pearu at cens.ioc.ee Tue Aug 13 07:46:34 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 13 Aug 2002 14:46:34 +0300 (EEST) Subject: [SciPy-dev] Re: [Numpy-discussion] QR algorithms in Python In-Reply-To: <200208131235.05006.james.analytis@physics.ox.ac.uk> Message-ID: On Tue, 13 Aug 2002, James G Analytis wrote: > Dear Pearu, > SciPy's linalg modules are just what I need. However, I get the following > problem on importing SciPy: > > [analytis at toomey analytis]$ python2 > Python 2.2 (#1, Apr 12 2002, 15:29:57) > [GCC 2.96 20000731 (Red Hat Linux 7.2 2.96-109)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > exceptions.ImportError: > /usr/lib/python2.2/site-packages/scipy/linalg/flapack.so : undefined > symbol: sgesdd_ ^^^^^^^^^^^^^^^ > I am running standard Red Hat 7.3 with a smp kernel, on a dual-processor > PIII. > Is there anything I should watch out for (compiler issues perhaps)? > I appreciate the help! The problem seems to be with the lapack installation. 
Did you follow the instructions in scipy/INSTALL.txt file, especially the ones related to ATLAS? In order to give more hints, could you send the output of the following command: python scipy_distutils/system_info.py and also the location of your ATLAS (or BLAS and LAPACK) libraries? Pearu From nwagner at mecha.uni-stuttgart.de Tue Aug 13 07:59:37 2002 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 13 Aug 2002 13:59:37 +0200 Subject: [SciPy-dev] Re: [Numpy-discussion] QR algorithms in Python References: <200208131235.05006.james.analytis@physics.ox.ac.uk> Message-ID: <3D58F4A9.BFE43265@mecha.uni-stuttgart.de> James G Analytis schrieb: > > Dear Pearu, > SciPy's linalg modules are just what I need. However, I get the following > problem on importing SciPy: > ATLAS is also required. http://math-atlas.sourceforge.net/ Nils > [analytis at toomey analytis]$ python2 > Python 2.2 (#1, Apr 12 2002, 15:29:57) > [GCC 2.96 20000731 (Red Hat Linux 7.2 2.96-109)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > exceptions.ImportError: > /usr/lib/python2.2/site-packages/scipy/linalg/flapack.so : undefined > symbol: sgesdd_ > exceptions.ImportError: > /usr/lib/python2.2/site-packages/scipy/linalg/_flinalg.s o: undefined > symbol: dlaswp_ > Traceback (most recent call last): > File "", line 1, in ? > File "/tmp/SciPyTest/linux2/lib/python2.2/site-packages/scipy/__init__.py", > li ne 42, in ? > File "/usr/lib/python2.2/site-packages/scipy/special/__init__.py", line 328, > i n ? > import orthogonal > File "/usr/lib/python2.2/site-packages/scipy/special/orthogonal.py", line > 59, in ? > from scipy.linalg import eig > File > "/tmp/SciPyTest/linux2/lib/python2.2/site-packages/scipy/linalg/__init__. > py", line 40, in ? > File > "/tmp/SciPyTest/linux2/lib/python2.2/site-packages/scipy/linalg/basic.py" > , line 17, in ? 
> ImportError: /usr/lib/python2.2/site-packages/scipy/linalg/calc_lwork.so: > undefi ned symbol: ieeeck_ > >>> > > I am running standard Red Hat 7.3 with a smp kernel, on a dual-processor PIII. > Is there anything I should watch out for (compiler issues perhaps)? > I appreciate the help! > Cheers, > James > > On Monday 12 Aug 2002 9:04 am, Pearu Peterson wrote: > > On Mon, 12 Aug 2002, James G Analytis wrote: > > > Hi, > > > I'm looking for a QR algorithm that can decompose complex matrices. The > > > LinearAlgebra package from Numeric doesn't seem to have it. SciPy's > > > linalg package is still on the way, so I can't use that either. > > > > Try SciPy's linalg package again (from CVS), the QR functions is working > > > > now: > > >>> from scipy import linalg > > >>> from Numeric import dot > > >>> q,r=linalg.qr([[1,2],[3j,4]]) > > >>> print q > > > > [[-0.31622777+0.j -0.78935222-0.52623481j] > > [ 0. -0.9486833j -0.1754116 +0.26311741j]] > > > > >>> print r > > > > [[-3.16227766+0.j -0.63245553+3.79473319j] > > [ 0. +0.j -2.28035085+0.j ]] > > > > >>> print dot(q,r) > > > > [[ 1. +0.00000000e+00j 2. -8.71264866e-16j] > > [ 0. +3.00000000e+00j 4. +3.09051829e-16j]] > > > > Regards, > > Pearu > > ------------------------------------------------------- > This sf.net email is sponsored by: Dice - The leading online job board > for high-tech professionals. Search and apply for tech jobs today! > http://seeker.dice.com/seeker.epl?rel_code1 > _______________________________________________ > Numpy-discussion mailing list > Numpy-discussion at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/numpy-discussion From pearu at cens.ioc.ee Tue Aug 13 08:01:18 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Tue, 13 Aug 2002 15:01:18 +0300 (EEST) Subject: [SciPy-dev] Re: [Numpy-discussion] QR algorithms in Python In-Reply-To: <3D58F4A9.BFE43265@mecha.uni-stuttgart.de> Message-ID: On Tue, 13 Aug 2002, Nils Wagner wrote: > ATLAS is also required. 
Not really, but it is highly recommended. SciPy builds and works fine with
standard BLAS and LAPACK libraries. But when using ATLAS some routines may
gain a speed-up of up to 15 times.

Pearu

From james.analytis at physics.ox.ac.uk  Tue Aug 13 13:17:02 2002
From: james.analytis at physics.ox.ac.uk (James G Analytis)
Date: Tue, 13 Aug 2002 18:17:02 +0100
Subject: [SciPy-dev] Re: [Numpy-discussion] QR algorithms in Python
In-Reply-To: 
References: 
Message-ID: <200208131817.02457.james.analytis@physics.ox.ac.uk>

Dear Pearu,

> The problem seems to be with the lapack installation. Did you follow the
> instructions in scipy/INSTALL.txt file, especially the ones related to
> ATLAS?

I followed the instructions on
http://www.scipy.org/Members/fperez/PerezCVSBuild.htm
but I noted the problems described in scipy/INSTALL.txt and thought I had
installed ATLAS correctly.

> In order to give more hints, could you send the
> output of the following command:
>
> python scipy_distutils/system_info.py
>

[analytis at toomey analytis]$ python2 /home/analytis/tmp/scipy/scipy_distutils/system_info.py
atlas_info:
  FOUND:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/local/lib']
blas_info:
  FOUND:
    libraries = ['blas']
    library_dirs = ['/usr/lib']
blas_src_info:
  NOT AVAILABLE
fftw_info:
  FOUND:
    libraries = ['fftw', 'rfftw', 'fftw_threads', 'rfftw_threads', 'sfftw', 'srfftw', 'sfftw_threads', 'srfftw_threads']
    library_dirs = ['/usr/local/lib']
    define_macros = [('SCIPY_FFTW_H', 1), ('SCIPY_SFFTW_H', 1)]
    include_dirs = ['/usr/local/include']
lapack_info:
  FOUND:
    libraries = ['lapack']
    library_dirs = ['/usr/local/lib']
lapack_src_info:
  NOT AVAILABLE
x11_info:
  FOUND:
    libraries = ['X11']
    library_dirs = ['/usr/X11R6/lib']
    include_dirs = ['/usr/X11R6/include']

So everything is there except blas_src_info and lapack_src_info.

> and also the location of your ATLAS (or BLAS and LAPACK) libraries?
I've got two copies of liblapack.a - one that came with ATLAS and one that
came with RedHat 7.3. The ATLAS liblapack is in /usr/local/lib/ and the
system one is in /usr/lib/. I will remove the RedHat ones and recompile
ATLAS. I couldn't initially remove the RedHat ones because Octave required
them, but I've since discovered that Octave can be removed.
Hopefully this should work (?)
I'm using a standard install of RedHat 7.3 with gcc version 2.96, but this
can be changed if you recommend it.
Cheers,
James

From pearu at cens.ioc.ee  Tue Aug 13 13:41:39 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Tue, 13 Aug 2002 20:41:39 +0300 (EEST)
Subject: [SciPy-dev] Re: [Numpy-discussion] QR algorithms in Python
In-Reply-To: <200208131817.02457.james.analytis@physics.ox.ac.uk>
Message-ID: 

On Tue, 13 Aug 2002, James G Analytis wrote:

> Dear Pearu,
>
> > The problem seems to be with the lapack installation. Did you follow the
> > instructions in scipy/INSTALL.txt file, especially the ones related to
> > ATLAS?
> I followed the instructions on
> http://www.scipy.org/Members/fperez/PerezCVSBuild.htm
> But I noted the problems described in scipy/INSTALL.txt and thought I had
> installed ATLAS correctly.

Ok, now I see where the problem is, I think. Try

  ls -l /usr/local/lib/liblapack.a

The size of this file should be around 6MB. If it is much less than that,
then you have an incomplete lapack. To get a complete one, follow these
instructions:

  http://math-atlas.sourceforge.net/errata.html#completelp

Fernando, can you add the remark above also to

  http://www.scipy.org/Members/fperez/PerezCVSBuild.htm

? Also, the manual linking part in the fftw section is unnecessary:
system_info.py should be able to handle it now.
Finally, can you also make a link to the INSTALL.txt file? It should be
the most up-to-date document about building SciPy.

>
> and also the location of your ATLAS (or BLAS and LAPACK) libraries?
>
> I've got two copies of liblapack.a - one that came with ATLAS and one that
> came with RedHat 7.3. The ATLAS liblapack is in /usr/local/lib/ and the
> system one is in /usr/lib/. I will remove the RedHat ones and recompile
> ATLAS. I couldn't initially remove the RedHat ones because Octave required
> them. But I've since discovered that Octave can be removed.
> Hopefully this should work (?)
> I'm using standard install of RedHat 7.3 with gcc version 2.96, but this
> can be changed if you recommend it.

Let's first try to get scipy working without changing the system too much.
According to the system_info.py output, nothing needs to be removed from
your RedHat installation, I think.

Pearu

From james.analytis at physics.ox.ac.uk  Wed Aug 14 05:08:56 2002
From: james.analytis at physics.ox.ac.uk (James G Analytis)
Date: Wed, 14 Aug 2002 10:08:56 +0100
Subject: [SciPy-dev] Re: [Numpy-discussion] QR algorithms in Python
In-Reply-To: 
References: 
Message-ID: <200208141008.56222.james.analytis@physics.ox.ac.uk>

Dear Pearu,

Thanks! Scipy seems to be working fine, but I haven't tried anything too
large yet. A few things to note, though: Lapack installed with a few
testing errors. However, the libraries are all there and about 6MB, so I'm
not too concerned. I had to manually copy liblapack.a from the updated
ATLAS folder to the appropriate path, /usr/local/lib/.
I'm pretty happy with what I've seen so far. Thanks for your help!
James

On Tuesday 13 Aug 2002 6:41 pm, you wrote:
> On Tue, 13 Aug 2002, James G Analytis wrote:
> > Dear Pearu,
> >
> > > The problem seems to be with the lapack installation. Did you follow
> > > the instructions in scipy/INSTALL.txt file, especially the ones related
> > > to ATLAS?
> >
> > I followed the instructions on
> > http://www.scipy.org/Members/fperez/PerezCVSBuild.htm
> > But I noted the problems described in scipy/INSTALL.txt and thought I had
> > installed ATLAS correctly.
>
> Ok, now I see where is the problem, I think.
Try > > ls -l /usr/local/lib/liblapack.a > > The size of this file should be aroung 6MB. If it is much less than that > then you have incomplete lapack. To get a complete one, follow these > instructions > > http://math-atlas.sourceforge.net/errata.html#completelp > > > Fernando, can you add the remark above also to > > http://www.scipy.org/Members/fperez/PerezCVSBuild.htm > > ? Also, the manual linking part in the fftw section is > unneccasary: system_info.py should be able to handle it now. > Finally, can you make also a link to INSTALL.txt file? This file should be > the most up-to-date document about building SciPy. > > > > and also the location of your ATLAS (or BLAS and LAPACK) libraries? > > > > I've got two copies of liblapack.a - one that came with ATLAS and one > > that came with RedHat 7.3. The ATLAS liblapack is in /usr/local/lib/ and > > the system one is in /usr/lib/. I will remove the RedHat ones and > > recompile ATLAS. I couldn't initially remove the RedHat ones because > > Octave required them. But I've since discovered that Octave can be > > removed. > > Hopefully this should work (?) > > I'm using standard install of RedHat 7.3 with gcc version 2.96, but this > > can be changed if you recommend it. > > Let's first try to get scipy working without too much changing the system. > According to the system_info.py output, nothing needs to be removed from > your RedHat installation, I think. > > Pearu From nwagner at mecha.uni-stuttgart.de Thu Aug 15 08:56:16 2002 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 15 Aug 2002 14:56:16 +0200 Subject: [SciPy-dev] Matlab function load Message-ID: <3D5BA4F0.A20C00F0@mecha.uni-stuttgart.de> Hi, It seems to me that the Matlab function load is also very useful. Is there a trend to implement this function in scipy ? 
Nils

From nwagner at mecha.uni-stuttgart.de  Thu Aug 15 09:34:43 2002
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Thu, 15 Aug 2002 15:34:43 +0200
Subject: [SciPy-dev] plt versus gplt
Message-ID: <3D5BADF3.D5D0F43C@mecha.uni-stuttgart.de>

Hi,

What can I do to "freeze" a picture on the screen when using plt?
The picture vanishes very quickly, as opposed to gplt.

Nils

From eric at enthought.com  Fri Aug 16 03:11:29 2002
From: eric at enthought.com (eric)
Date: Fri, 16 Aug 2002 01:11:29 -0600
Subject: [SciPy-dev] problems with amin/amax when nan present
Message-ID: <000001c244f4$248fad60$7501a8c0@drklahn>

Dave and I just ran across this NaN issue while working on Chaco:

>>> a = array([-1,0,1.])
>>> b = a/0.0
>>> b
array([ -inf,   nan,   inf])
>>> amin(b)
inf

This should be -inf.

>>> c = array((-1,b[1],1))
>>> c
array([-1.,  nan,  1.])
>>> amin(c)
1.0

This should be -1. It looks like the NaNs are what cause the problems.
If only -inf and inf are present, they behave correctly:

>>> d=b.copy()
>>> d[1] = 0
>>> d
array([ -inf,  0.00000000e+000,   inf])
>>> amin(d)
-inf
>>> amax(d)
inf

So, I've found an ugly fix listed at the end of this message. It will make
amin/amax slower (probably 2X) even for the case when NaNs aren't present,
because it has to test whether any NaNs exist. Correct is more important
than fast, but I was wondering if anyone has a better solution. We could
test within the minimum/maximum methods in C using isnan(), I suppose, to
save the extra array creation.
Thanks,
eric

# fix.py
from scipy_base import *

def my_min(a, axis=-1):
    nans = isnan(a)
    has_nans = sometrue(ravel(nans))
    if has_nans:
        inf_in_array = sometrue(a == Inf, axis=axis)
        b = a.copy()
        putmask(b, nans, Inf)
    else:
        b = a
    result = amin(b, axis)
    if has_nans:
        if type(result) is ArrayType:
            # result is Inf only because every entry was NaN (no true Inf)
            putmask(result, logical_and(result == Inf,
                                        logical_not(inf_in_array)), NaN)
        elif (not inf_in_array) and (result == Inf):
            result = NaN
    return result

def my_max(a, axis=-1):
    nans = isnan(a)
    has_nans = sometrue(ravel(nans))
    if has_nans:
        neg_inf_in_array = sometrue(a == -Inf, axis=axis)
        b = a.copy()
        putmask(b, nans, -Inf)
    else:
        b = a
    result = amax(b, axis)
    if has_nans:
        if type(result) is ArrayType:
            # result is -Inf only because every entry was NaN
            putmask(result, logical_and(result == -Inf,
                                        logical_not(neg_inf_in_array)), NaN)
        elif (not neg_inf_in_array) and (result == -Inf):
            result = NaN
    return result

a = array((-1., 0., 1.))
print 'a:', a
print 'min(a):', my_min(a)
print 'max(a):', my_max(a)
b = a/0.
print 'b:', b
print 'min(b):', my_min(b)
print 'max(b):', my_max(b)
c = array((0., 0., 0.))/0.
print 'c:', c
print 'min(c):', my_min(c)
print 'max(c):', my_max(c)

# [eric at enthoughtaus1 tmp]$ python fix.py
# a: [-1.  0.  1.]
# min(a): -1.0
# max(a): 1.0
# b: [ -inf   nan   inf]
# min(b): -inf
# max(b): inf
# c: [ nan  nan  nan]
# min(c): nan
# max(c): nan

From nwagner at mecha.uni-stuttgart.de  Fri Aug 16 03:45:12 2002
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Fri, 16 Aug 2002 09:45:12 +0200
Subject: [SciPy-dev] help for plt and gplt
Message-ID: <3D5CAD88.E215D66A@mecha.uni-stuttgart.de>

Hi,

Is there a detailed description of the functional range of both gplt and plt?
>>> scipy.__version__
'0.2.0_alpha_111.3887'
>>> help(gplt)
None
>>> help(plt)
None
>>>

Nils

From nwagner at mecha.uni-stuttgart.de  Fri Aug 16 04:05:14 2002
From: nwagner at mecha.uni-stuttgart.de (Nils Wagner)
Date: Fri, 16 Aug 2002 10:05:14 +0200
Subject: [SciPy-dev] Interactive versus batch mode
Message-ID: <3D5CB23A.49AEB2CE@mecha.uni-stuttgart.de>

Hi,

As far as I know, the features of plt are only useful in interactive mode.
If I execute

#! /usr/local/bin/python
from scipy import *
import gui_thread
from scipy import plt
from scipy import gplt
x = arange(0.,pi,0.01)
gplt.plot(sin(x))
plt.plot(sin(x))

the picture produced by plt disappears, as opposed to the picture produced
by gplt.
What can I do to benefit from the zoom function of plt in batch mode?

Nils

From prabhu at aero.iitm.ernet.in  Fri Aug 16 10:32:59 2002
From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran)
Date: Fri, 16 Aug 2002 20:02:59 +0530
Subject: [SciPy-dev] Interactive versus batch mode
In-Reply-To: <3D5CB23A.49AEB2CE@mecha.uni-stuttgart.de>
References: <3D5CB23A.49AEB2CE@mecha.uni-stuttgart.de>
Message-ID: <15709.3355.747567.393723@monster.linux.in>

>>>>> "NW" == Nils Wagner writes:

    NW> Hi, As far as I know the features of plt are only useful in an
    NW> interactive mode.  If I execute
[snip]
    NW> What can I do to benefit from the zoom function of plt in
    NW> batch mode ?

Hmm, what I used to do is add a line like so:

  raw_input('\nPress enter to finish.')

and that would keep the window alive.

prabhu

From finnefro at sas.upenn.edu  Fri Aug 16 10:58:12 2002
From: finnefro at sas.upenn.edu (Adam C. Finnefrock)
Date: 16 Aug 2002 10:58:12 -0400
Subject: [SciPy-dev] Interactive versus batch mode
In-Reply-To: <3D5CB23A.49AEB2CE@mecha.uni-stuttgart.de>
References: <3D5CB23A.49AEB2CE@mecha.uni-stuttgart.de>
Message-ID: <41znvmvojf.fsf@localhost.localdomain>

Nils Wagner writes:

> If I execute
>
> #!
/usr/local/bin/python
> from scipy import *
> import gui_thread
> from scipy import plt
> from scipy import gplt
> x = arange(0.,pi,0.01)
> gplt.plot(sin(x))
> plt.plot(sin(x))
>
> the picture produced by plt disappear as opposed to the picture produced
> by gplt.

I think you might want to put

>>> import gui_thread

FIRST, before anything else. If you put the plt import first, the plt
window crashes on my system (Python 2.2.1, 0.2.0_alpha_105.3694,
Redhat 7.3 + ximian): the plt window doesn't disappear, it crashes.
Might the quick disappearance of the plt window mean that it is being
killed?

> What can I do to benefit from the zoom function of plt in batch mode ?

How about slicing your data to what would be the zoom region?

Adam

From pearu at cens.ioc.ee  Fri Aug 16 11:50:14 2002
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Fri, 16 Aug 2002 18:50:14 +0300 (EEST)
Subject: [SciPy-dev] problems with amin/amax when nan present
In-Reply-To: <000001c244f4$248fad60$7501a8c0@drklahn>
Message-ID: 

On Fri, 16 Aug 2002, eric wrote:

> So, I've found an ugly fix listed at the end of this message. It will
> make amin/amax slower (probably 2X) even for the case when NaNs aren't
> present because it has to test for if any NaNs exists. Correct is more
> important than fast, but I was wondering if anyone has a better
> solution. We could test within the minimum/maximum methods in C using
> isnan(), I suppose, to save the extra array creation.

Does it make sense to fix this in Numeric? I am not sure whether Numeric
is supposed to support NaNs, but considering that Numeric will soon (?)
be unmaintained (see Numarray's design), and that (I guess) it would be
rather difficult to make the transformation from Numeric to Numarray
quickly, we could start taking over the Numeric array stuff by fixing
these kinds of issues.
It sounds radical (which I don't like), and I really hope that starting to
use Numarray in SciPy will be easier (for me it would mean implementing
Numarray support for f2py, and here I have no idea how difficult that will
turn out to be).

On the other hand, amax and amin are probably not heavily used in large
scale calculations, so the above may not be worth the trouble.
Maybe amin/amax should have an extra flag for disabling the isnan check?

Just some random thoughts ...
	Pearu

From eric at enthought.com  Fri Aug 16 16:05:27 2002
From: eric at enthought.com (eric jones)
Date: Fri, 16 Aug 2002 15:05:27 -0500
Subject: [SciPy-dev] problems with amin/amax when nan present
In-Reply-To: 
Message-ID: <005401c24560$43a15a20$6b01a8c0@ericlaptop>

On one level, it isn't that hard to put in the C, because we already have
an altered version of the pertinent Numeric file in our CVS (fastumath).
This was necessary to allow SciPy to handle NaNs within arrays without
throwing errors because, as you noted, Numeric doesn't support this. So,
we can add the isnan() check to the maximum/minimum functions in fastumath
without touching Numeric.

But, for now, I think we can stick with the "simple" versions I wrote, and
move them to C later if they are clogging a critical application. I'll
check them into scipy_base. I guess we could also add a flag, but I'm not
excited about adding such options to standard routines.

As for numarray, I'm all for moving to it as soon as it is feasible. Right
now, there is still a large amount of optimization required before it can
replace Numeric in SciPy. Here are some timings comparing Numeric to the
latest numarray (0.3.6):

>>> def time_add(a,N):
...     X = range(N)
...     t1 = time.time()
...     for i in X:
...         b = a + a
...     t2 = time.time()
...     return t2-t1
...
>>> a = Numeric.arange(10000,typecode=Numeric.Float32)
>>> time_add(a,100)
0.019999980926513672
>>> a = numarray.arange(10000,type=numarray.Float32)
>>> time_add(a,100)
0.13099992275238037
>>> a = Numeric.arange(100,typecode=Numeric.Float32)
>>> time_add(a,10000)
0.060000061988830566
>>> a = numarray.arange(100,type=numarray.Float32)
>>> time_add(a,10000)
10.025000095367432

So for small arrays, you can see the price is high, and even for
medium-sized arrays there is still a 6x slowdown. It should be noted that
optimization isn't STScI's goal right now -- there are plenty of other
things to work on as far as getting the rest of the compatibility issues
and new features working. I have high hopes that numarray will become as
fast as, or faster than, Numeric in the future, but I haven't looked hard
at what this entails. I also think it will not happen in the near term.
Perry may have more comments on this.

eric

> -----Original Message-----
> From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On
> Behalf Of Pearu Peterson
> Sent: Friday, August 16, 2002 10:50 AM
> To: scipy-dev at scipy.net
> Subject: Re: [SciPy-dev] problems with amin/amax when nan present
>
> On Fri, 16 Aug 2002, eric wrote:
>
> > So, I've found an ugly fix listed at the end of this message. It will
> > make amin/amax slower (probably 2X) even for the case when NaNs aren't
> > present because it has to test for if any NaNs exists. Correct is more
> > important than fast, but I was wondering if anyone has a better
> > solution. We could test within the minimum/maximum methods in C using
> > isnan(), I suppose, to save the extra array creation.
>
> Does it makes sense to fix this in Numeric? I am not sure if Numeric is
> supposed to support NaNs but considering that soon (?)
Numeric will be > unmaintained (see Numarray's design) and (I guess) it would be rather > difficult to make the transformation from Numeric to Numarray quickly, > then > we could start taking over Numeric array stuff by fixing this kind of > issues. It sounds radical (which I don't like) and I really hope that > starting to use Numarray in SciPy would be easier (for me it would mean > implementing Numarray support for f2py and here I have no idea how > difficult it turns out to be). > > On the other hand, amax and amin are probably not heavily used in large > scale calculations so that the above would be no worth of trouble. > May be amin/amax should have an extra flag for disabling isnan checking? > > Just some random thoughts ... > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From pearu at cens.ioc.ee Sun Aug 18 14:30:07 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 18 Aug 2002 21:30:07 +0300 (EEST) Subject: [SciPy-dev] fftpack revisited Message-ID: Hi scipy devels, Due to emerged need for the fft algorithm in my work, I have returned to the question whether the fftpack module provided by scipy is optimal. Leaving aside license issues, I considered fftw again. Running some performance tests with fftpack and fftw, I found to my surprise that fftpack is actually 2-2.5 times faster (!)
than fftw (could someone verify this as well?). I checked the fftw site
for comparison, and there fftpack was found to be about 2-3 times slower
than fftw.

I don't have a good explanation for this difference, only a few notes:
1) The fftw site timings were obtained with the rather old gcc-2.7.2.
   Maybe gcc has become faster since then (I am using gcc-2.95.4).
2) scipy uses rather aggressive Fortran compiler flags, some of which were
   used on the fftw site as well. So, compiler flags should not be the
   (only) reason.
3) The fftw tests were run on PIIs at 200MHz and 300MHz with Redhat 5.0;
   I am running them on a PII at 400MHz with Debian Woody. Other
   parameters seem to be the same (including the compiler flags for
   building fftw).

Next, I needed to implement some transforms based on the DFT (like
differentiation of periodic sequences, hilbert transforms, tilbert
transforms) that have to be very efficient and stable (they will be called
tens of thousands of times on rather large data sets).

The current interface of fftpack is rather complete and general; Travis O.
has done a remarkable job in creating it, but it turns out to be not
straightforward enough for my needs.

Therefore, I have created another interface to the Fortran fftpack library
using f2py; let's call it fftpack2 for now. Currently it exposes the same
functions to Python as the fftpack written by Travis O. but differs in the
following aspects:

1) y = fftpack.rfft(x) (and its inverse) is replaced with
   y = fftpack2.fftr(x), which returns a real data array of length len(x):
     y = [y(0),Re(y(1)),Im(y(1)),...]

   The difference is that fftpack.rfft returns a complex array of length
     len(x)/2     if len(x) is even,
     (len(x)-1)/2 if len(x) is odd,
   and reads as follows:
     y = [y(0),Re(y(1))+1j*Im(y(1)),...]

   In my opinion, fftpack2.fftr has the following advantages:
   a) the property ifft(fft(x))=x holds whatever the length of x.
      With fftpack.rfft one has to specify the length when doing the
      inverse, i.e. ifft(fft(x),len(x))=x holds, in general.
      I find this a possible source of confusion and bugs.
   b) there is no real->complex->real conversion in fftpack2.fftr, which
      makes the interface more straightforward (good for performance,
      crucial for time-critical algorithms).
   However, in certain situations fftpack.rfft has an advantage (and this
   is the reason why I introduced a different name for this function; both
   rfft and fftr should be available).

2) fftpack2 does not support the axis keyword argument. The purpose of the
   axis argument is that the FFT can be applied in one C loop to the rows
   of an array. This feature is missing in the original Fortran fftpack
   library. On the other hand, Matlab's fft has this feature (where the
   FFT is applied to the columns of a matrix, though).

   Why have I not added this feature to fftpack2? First, I am not sure
   this feature will be used often enough to justify the code complexity.
   In SciPy, it is used only in signal/signaltools.py for the hilbert
   transform, where an array transpose could be used instead. Second, I
   don't see any remarkable performance gain from doing the loop in C:
   most of the time will probably be spent doing the FFT or converting the
   input array to a contiguous one. So, the loop could easily be done in
   Python without too much performance loss.

   So, my questions are: Is the axis keyword argument actually used among
   scipy users? Should I add this feature to fftpack2 just because Matlab
   has it? (But notice that Matlab uses a different convention from scipy,
   so Matlab users will be confused anyway.)

   Right now I am in a position of keeping the fftpack functions as simple
   as possible and adding additional features only when they are
   explicitly asked for.

3) Finally, fftpack2 is easier to maintain due to using f2py. I have found
   a number of bugs in fftpack causing either infinite loops or segfaults,
   and fixing them can be a tedious task.
   The Fortran fftpack library also provides other transforms (sin, cos,
   etc.) that could be exposed to scipy. This would be really easy when
   using f2py.
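The two real-FFT packing conventions under discussion can be made concrete with a toy transform. This is a naive O(n^2) DFT in pure Python, NOT the fftpack code, and the exact length/packing details of the 2002 fftpack.rfft may differ from this sketch; the point is only the shape of the two outputs.

```python
import cmath

def dft(x):
    """Plain complex DFT of a real or complex sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def rfft_style(x):
    """Half-spectrum as complex numbers (roughly the rfft convention)."""
    n = len(x)
    return dft(x)[: n // 2 + 1]

def fftr_style(x):
    """Real-packed spectrum [y0, Re y1, Im y1, ...]: len(out) == len(x)."""
    n = len(x)
    y = dft(x)
    packed = [y[0].real]                   # DC term is real for real input
    for j in range(1, (n + 1) // 2):
        packed.extend([y[j].real, y[j].imag])
    if n % 2 == 0:
        packed.append(y[n // 2].real)      # Nyquist term is real too
    return packed

x = [0.0, 1.0, 2.0, 3.0]
print(len(rfft_style(x)), len(fftr_style(x)))   # 3 4
```

The invertibility point follows directly from the shapes: the fftr-style output always has exactly len(x) numbers, so no extra length argument is needed for the inverse.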
The alternative of manual wrapping and the subsequent maintenance pain is
not very attractive for me.

In conclusion, I am a bit in the middle of deciding whether ...

* I should proceed with fftpack2? It has the advantage of a simpler code
  base, and maintaining it will also be easier. But I would like to drop
  some of the features (like the axis argument) from fftpack if there is
  no need for them.

* Or should I try to fix the current fftpack bugs and extend it while
  keeping all the current features? This has the disadvantage that fftpack
  will contain two types of C code, one manually written and the other
  wrapped with f2py. This setup will make the fftpack module seemingly
  more complex than necessary, both from the usage and from the
  maintenance points of view.

So, what do you think of all that? I'll appreciate any comments that would
help to solve my dilemma.

Thanks,
	Pearu

From eric at enthought.com  Sun Aug 18 16:18:31 2002
From: eric at enthought.com (eric jones)
Date: Sun, 18 Aug 2002 15:18:31 -0500
Subject: [SciPy-dev] fftpack revisited
In-Reply-To: 
Message-ID: <006b01c246f4$6bc4a850$6b01a8c0@ericlaptop>

Hey Pearu,

Here are my thoughts:

1. Given a choice, I like f2py wrappers over hand wrappers because they
are easier to maintain.

2. I think the axis argument is important. It allows users to choose how
they lay out their data instead of SciPy defining how they should do it.
The axis should default to -1 so that the default behavior operates on
contiguous memory without copying, for efficiency. This is the default
used throughout the SciPy library.

As far as whether looping over the data sets in C or Python has much
impact on speed, a simple test would be to rewrite one of Travis'
functions in Python (such as fft) and then compare the two. If Python is
within a few percent of the wrapped fft when testing "normal sized" 2d and
3d arrays, then the benefit of having the code in C is not too great.
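The kind of comparison eric proposes can be sketched with a tiny harness. The names are hypothetical and a cheap doubling transform stands in for the real FFT, since only the per-row Python looping overhead versus a single whole-array pass is being measured here.

```python
import time

def transform_1d(row):
    return [v * 2.0 for v in row]          # stand-in for a 1-d transform

def rows_python_loop(data):
    # loop over the rows in Python, one "wrapped call" per row
    return [transform_1d(row) for row in data]

def rows_single_pass(data):
    # "native"-style: one pass over a flattened buffer, then reshape
    ncols = len(data[0])
    flat = [v * 2.0 for row in data for v in row]
    return [flat[i:i + ncols] for i in range(0, len(flat), ncols)]

def timed(fn, data, repeats=20):
    t0 = time.perf_counter()
    for _ in range(repeats):
        out = fn(data)
    return time.perf_counter() - t0, out

# a "normal sized" 2-d data set: 200 rows of 256 samples
data = [[float(i + j) for j in range(256)] for i in range(200)]
t_loop, out_loop = timed(rows_python_loop, data)
t_flat, out_flat = timed(rows_single_pass, data)
print(out_loop == out_flat)                # True: same results either way
```

If `t_loop` stays within a few percent of `t_flat` for realistic sizes, that supports doing the axis loop in Python; a large gap supports keeping it in C.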
So, I'd be happy to see fftpack become an f2py-generated library as long
as (1) we're very sure that the new version is at least as robust as the
old version, and (2) there is very little penalty for using Python
looping. Checking the repository, I didn't see any unit tests for the
fftpack library, which will make verifying (1) more difficult.

As for your new fftr function, it sounds like a good addition to me, but I
don't like the name that well. It is too similar to the original function
name (rfft vs. fftr) and could cause confusion. I'd go for rfft2, except
that it sounds like it is doing a 2d rfft. No other suggestions come
immediately to mind though.

On exposing more transforms from fftpack, that sounds like a good idea
that is more easily done with f2py than by hand. I wonder, though, whether
the standard jpeg library doesn't have a blazing fast cosine transform
function that we might use. Perhaps it is not worth the trouble of adding
another set of source code...

Summary: fftpack2 sounds like a good thing to pursue, but it would need to
support axis arguments if it is to become a replacement for fftpack. Is
there any other functionality you were thinking of dropping? That would
need to be discussed also.

Thanks,
eric

> -----Original Message-----
> From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On
> Behalf Of Pearu Peterson
> Sent: Sunday, August 18, 2002 1:30 PM
> To: scipy-dev at scipy.org
> Subject: [SciPy-dev] fftpack revisited
>
> Hi scipy devels,
>
> Due to emerged need for the fft algorithm in my work, I have returned to
> the question whether the fftpack module provided by scipy is optimal.
>
> Leaving aside license issues, I considered fftw again. Running some
> performance tests with fftpack and fftw, I found to my surprise
> that fftpack is actually 2-2.5 times faster (!) than fftw (could someone
> verify this as well?). I checked fftw site for comparison and there
> fftpack was found to be about 2-3 times slower than fftw.
> > I don't have a good explanation for this difference, only few notes: > 1) fftw site timings were obtained with rather old gcc-2.7.2. May be gcc > has become faster since then (I am using gcc-2.95.4). > 2) scipy uses rather aggressive Fortran compiler flags, some of them was > used in fftw site as well. So, compiler flags should not be the > (only) reason. > 3) fftw tests were run on PII with 200MHz and 300MHz and Redhat 5.0, I am > running them on PII with 400MHz and Debian Woody. Other parameters seem to > be the same (including compiler flags for building fftw). > > > Next, I needed to implement some transforms based on DFT (like > differentiation of periodic sequences, hilbert transforms, tilbert > transforms) that has to be very efficient and stable (they will be called > tens of thousands times on rather large data set). > > The current interface of fftpack is rather complete and general, > Travis O. has made remarkable job in creating it, but it turns out to be > not straightforward enough for my needs. > > Therefore I, have created another interface to the Fortran fftpack > library using f2py, let's call it fftpack2 for now. Currently it exposes > the same functions to Python as fftpack written by Travis O. but differs > from the following aspects > > 1) y = fftpack.rfft(x) (and its inverse) is replaced with > y = fftpack2.fftr(x) that returns real data array of length len(x): > y = [y(0),Re(y(1)),Im(y(1)),...] > > The differences with the fftpack.rfft is that fftpack.rfft returns a > complex array of length > len(x)/2 if len(x) is even > (len(x)-1)/2 if len(x) is odd > and reads as follows > y = [y(0),Re(y(1))+1j*Im(y(1)),...] > > To my opinion, fftpack2.fftr has the following advantages: > a) the property ifft(fft(x))=x holds what ever is the length of x. > With fftpack.rfft one has to specify the length when doing inverse, > i.e. ifft(fft(x),len(x))=x holds, in general. I find it as a possible > source of confusion and bugs. 
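The length ambiguity behind point (a) can be made concrete with a toy DFT: a real input of length n has only n//2 + 1 independent complex coefficients, and that count is the same for n = 4 and n = 5, so an rfft-style inverse cannot recover the original length unless it is passed explicitly. A pure-Python sketch (naive DFT, names invented for illustration):

```python
import cmath

def dft(x):
    # Naive O(n^2) DFT of a sequence x.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n))
            for k in range(n)]

def half_spectrum(x):
    # For real x, y[n - k] == conj(y[k]), so only the first n//2 + 1
    # coefficients carry information; this is what rfft-style functions
    # return.
    return dft(x)[:len(x) // 2 + 1]

print(len(half_spectrum([1.0, 2.0, 3.0, 4.0])))       # 3
print(len(half_spectrum([1.0, 2.0, 3.0, 4.0, 5.0])))  # 3
# Lengths 4 and 5 both compress to 3 coefficients, hence the extra
# len(x) argument in ifft(fft(x), len(x)); a format like fftr, which
# returns len(x) real numbers, sidesteps the ambiguity.
```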
> b) there is no real->complex->real conversion in fftpack2.fftr that > makes the interface more straightforward (good for performance, > crucial for time critical algorithms). > However, in certain situations fftpack.rfft has an advantage (and this is > the reason why I introduced different name for this function, both rfft > and fftr should be available). > > 2) fftpack2 does not support axis keyword argument. The purpose of axis > argument is that FFT can be applied in one C loop to the rows of an > array. This feature is missing in the original Fortran fftpack library. On > the other hand, Matlab's fft has this feature (where FFT is applied to the > columns of a matrix, though). > > Why I have not added this feature to fftpack2? First, I am not sure how > often this feature will be used so that code complexity would be > justified. In SciPy, it used only in signal/signaltools.py for hilbert > transform where array transpose could be used instead. Second, I don't see > any remarkable performance gain for doing the loop in C: most of the time > will be spent probably in doing FFT or converting the input array to a > contiguous one. So, the loop could be easily done in Python without > too much of performance loss. > > So, my questions are: > Is the axis keyword argument actually used among scipy users? Should I add > this feature also to fftpack2 because Matlab has one? (But notice that > Matlab uses different convention from scipy, and so Matlab users will be > confused anyway). > > Right now I am in a position of keeping fftpack functions as simple as > possible and add additional features only when they are explicitly asked > for. > > 3) Finally, fftpack2 is easier to maintain due to using f2py. I have found > number bugs in fftpack causing either infinite loops or segfaults and > fixing them can be a tedious task. > Fortran fftpack provides also other transforms like sin,cos,etc that > could be exposed to scipy. This would be really easy when using f2py. 
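The remark that "array transpose could be used instead" of an axis keyword amounts to the pattern below, sketched here with nested lists (with Numeric arrays one would use transpose(); the helper names are invented):

```python
def transform_rows(a, fn):
    # Apply a 1-D transform fn along the last axis (each row).
    return [fn(row) for row in a]

def transform_axis0(a, fn):
    # Transform along axis 0 by transposing, applying the row version,
    # and transposing back: the workaround when no axis argument exists.
    transposed = [list(col) for col in zip(*a)]
    return [list(row) for row in zip(*transform_rows(transposed, fn))]

# Running sum as a stand-in 1-D transform, so the axis choice is visible.
cumsum = lambda row: [sum(row[:i + 1]) for i in range(len(row))]
a = [[1, 2, 3],
     [4, 5, 6]]
print(transform_rows(a, cumsum))   # [[1, 3, 6], [4, 9, 15]]
print(transform_axis0(a, cumsum))  # [[1, 2, 3], [5, 7, 9]]
```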
> The alternative of manual wrapping and subsequent maintainance pain is > not very attractive for me. > > > In conclusion, I am a bit in the middle of deciding whether ... > * I should proceed with fftpack2? It has an advantage of consisting > simpler code base and also maintaining it will be easier. > But I would like to drop some of the features (like axis argument) from > fftpack if there is no need for it. > * Or should I try to fix the current fftpack bugs, extend it but > keeping all the current features? It has a disadvantage that > the fftpack will contain two types of C codes, one manually written and > other wrapped with f2py. This setup will make the fftpack module seemingly > more complex than necessary: I find it both from the usage as well from > the maintainance point of views. > > So, what do you think of all that? I'll appreciate any comments that > would help to solve my dilemma. > > Thanks, > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From eric at enthought.com Sun Aug 18 16:21:55 2002 From: eric at enthought.com (eric jones) Date: Sun, 18 Aug 2002 15:21:55 -0500 Subject: [SciPy-dev] fftpack revisited In-Reply-To: Message-ID: <006c01c246f4$e5386c80$6b01a8c0@ericlaptop> By the way, interesting timing results on fftpack and fftw. I'm glad to see the penalty paid for using fftpack is either non existent or at least not obvious. Any chance you can get the standard LAPACK to be as fast as ATLAS? :-) That sure would save a lot of wailing and gnashing of teeth. 
eric

From pearu at cens.ioc.ee Sun Aug 18 18:19:17 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 19 Aug 2002 01:19:17 +0300 (EEST) Subject: [SciPy-dev] fftpack revisited In-Reply-To: <006b01c246f4$6bc4a850$6b01a8c0@ericlaptop> Message-ID:

On Sun, 18 Aug 2002, eric jones wrote:

> Hey Pearu,
> Here are my thoughts:
> 1. Given a choice, I like f2py wrappers over hand wrappers because they are easier to maintain.
> 2. I think the axis argument is important. It allows users to choose how they lay out their data instead of SciPy defining how they should do it. The axis should default to -1 so that the default behavior operates on contiguous memory without copying for efficiency. This is the default used throughout the SciPy library.

Ok then. Now that I look at it more closely, adding axis support is pretty much copy-paste from the current fftpack.

> As far as whether looping over the data sets in C or Python has much impact on speed, a simple test would be to rewrite one of Travis' functions in Python (such as fft) and then compare the two. If Python is within a few percent of the wrapped fft when testing "normal sized" 2d and 3d arrays, then the benefit of having the code in C is not too great.
> So, I'd be happy to see fftpack become an f2py generated library as long as (1) we're very sure that the new version is at least as robust as the old version, and (2) there is very little penalty for using Python looping. Checking the repository, I didn't see any unit tests for the fftpack library which will make verifying (1) more difficult.

(1) I'll create the testing suite while proceeding with fftpack2
(2) Actually, I can create this loop also in C while still using f2py (sometimes f2py strikes me with its flexibility ;).
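The test suite promised in (1) could start from the round-trip identity both sides care about. A minimal sketch with the standard unittest module, using naive pure-Python DFTs as stand-ins for the wrapped routines (illustration only, not the eventual scipy test file):

```python
import cmath
import unittest

def fft(x):
    # Naive O(n^2) forward DFT, standing in for the wrapped routine.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def ifft(y):
    # Naive inverse DFT with the 1/n normalization.
    n = len(y)
    return [sum(y[k] * cmath.exp(2j * cmath.pi * j * k / n)
                for k in range(n)) / n for j in range(n)]

class TestRoundTrip(unittest.TestCase):
    def test_ifft_fft_identity(self):
        # ifft(fft(x)) == x must hold for every length, even and odd.
        for n in (4, 5, 8, 9):
            x = [float(i) for i in range(n)]
            back = ifft(fft(x))
            for orig, got in zip(x, back):
                self.assertAlmostEqual(orig, got.real, places=8)
                self.assertAlmostEqual(0.0, got.imag, places=8)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestRoundTrip)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```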
> As far as your new fftr function, it sounds like a good addition to me, > but I don't like the name that well. It is too similar to the original > function name (rfft and fftr) and could cause confusion. I'd go for > rfft2, except that it sounds like it is doing a 2d rfft. No other > suggestions come immediately to mind though. How about rrfft,irrfft ? Read it as 'real real fft' -- the second 'real' for the fact that input is assumed to be real and the first 'real' for the fact that the returned array is really real ;-) Hmm, currently fftpack provides the following set of fft functions: fft,ifft rfft,irfft hfft,ihfft fftn,ifftn rfftn,irfftn fft2,ifft2 rfft2,irfft2 These would be the additions: rrfft,irrfft rrfftn,irrfftn rhfft,irhfft rrfft2,irrfft2 + various cos/sin transforms. Is this pattern acceptable? In addition, I would like to introduce the following functions to fftpack, all use fft extensively: y = diff(x, k=1, period=2*pi) - k-th derivative (integral) of a periodic sequence. Exact for trigonometric polynomials. y = hilbert(x) - Hilbert transform (and its inverse as ihilbert) y = tilbert(x,a=1) - Tilbert transform (and its inverse as itilbert) y = difftilbert(x,k=1,a=1,period=2*pi) - optimized diff(tilbert(..),..). > On exposing more transforms from fftpack, that sounds like a good idea > that is easily done with f2py than by hand. I wonder though, if the > standard jpeg library doesn't have a blazing fast cosine transform > function that we might use. Perhaps it is not worth the trouble of > adding another set of source code... > > Summary: fftpack2 sounds like a good thing to pursue, but it would need > to support axis arguments if it is to become a replacement for fftpack. > Is there any other functionality you were thinking of dropping? These > would need to be discussed also. I'll keep the axis arguments. There is only one thing in the interface that I would like to discuss. 
For example, fftpack.fft has three arguments:
fft(a, n=None, axis=-1)
The second argument n stands for the size of the Fourier transform (matrix). It seems to me that this argument could be dropped, as this information is already contained in a.shape and axis: n = a.shape[axis]. In fftpack._raw_fft this is a special case and, in general, the following rules are applied:
* If n > a.shape[axis] then a is padded with zeros.
* If n < a.shape[axis] then a is truncated.
[...]

References: Message-ID: <1029789954.2752.15.camel@travis>

On Sun, 2002-08-18 at 12:30, Pearu Peterson wrote:

> Hi scipy devels,
> Due to emerged need for the fft algorithm in my work, I have returned to the question whether the fftpack module provided by scipy is optimal.
> Leaving aside license issues, I considered fftw again. Running some performance tests with fftpack and fftw, I found to my surprise that fftpack is actually 2-2.5 times faster (!) than fftw (could someone verify this as well?). I checked the fftw site for comparison and there fftpack was found to be about 2-3 times slower than fftw.

This is interesting; anecdotally I've noticed fftpack to be "fast enough" for me, so that I haven't been using fftw at all. What tests did you run?

> I don't have a good explanation for this difference, only a few notes:
> 1) fftw site timings were obtained with the rather old gcc-2.7.2. Maybe gcc has become faster since then (I am using gcc-2.95.4).
> 2) scipy uses rather aggressive Fortran compiler flags, some of which were used on the fftw site as well. So, compiler flags should not be the (only) reason.
> 3) fftw tests were run on a PII with 200MHz and 300MHz and Redhat 5.0; I am running them on a PII with 400MHz and Debian Woody. Other parameters seem to be the same (including compiler flags for building fftw).

No idea what could be happening.
> > > Next, I needed to implement some transforms based on DFT (like > differentiation of periodic sequences, hilbert transforms, tilbert > transforms) that has to be very efficient and stable (they will be called > tens of thousands times on rather large data set). > > The current interface of fftpack is rather complete and general, > Travis O. has made remarkable job in creating it, but it turns out to be > not straightforward enough for my needs. Actually I should not take credit here. This is just the Numeric interface contributed by several people. I only moved it over to SciPy once ND fft's were supported. > > Therefore I, have created another interface to the Fortran fftpack > library using f2py, let's call it fftpack2 for now. Currently it exposes > the same functions to Python as fftpack written by Travis O. but differs > from the following aspects > I don't have a problem conceptually with using f2py for this. In fact, I would like to see most if not all of the interfaces (even the ones that I did contribute :-) written with f2py) > 1) y = fftpack.rfft(x) (and its inverse) is replaced with > y = fftpack2.fftr(x) that returns real data array of length len(x): > y = [y(0),Re(y(1)),Im(y(1)),...] > > The differences with the fftpack.rfft is that fftpack.rfft returns a > complex array of length > len(x)/2 if len(x) is even > (len(x)-1)/2 if len(x) is odd > and reads as follows > y = [y(0),Re(y(1))+1j*Im(y(1)),...] > > To my opinion, fftpack2.fftr has the following advantages: > a) the property ifft(fft(x))=x holds what ever is the length of x. > With fftpack.rfft one has to specify the length when doing inverse, > i.e. ifft(fft(x),len(x))=x holds, in general. I find it as a possible > source of confusion and bugs. This is a long-standing ambiguity with fft's. You always have to check what the definition is. What does Matlab do? We need an interface that reduces surprise... 
> b) there is no real->complex->real conversion in fftpack2.fftr that > makes the interface more straightforward (good for performance, > crucial for time critical algorithms). > However, in certain situations fftpack.rfft has an advantage (and this is > the reason why I introduced different name for this function, both rfft > and fftr should be available). Yes, I think this is reasonable. > > 2) fftpack2 does not support axis keyword argument. The purpose of axis > argument is that FFT can be applied in one C loop to the rows of an > array. This feature is missing in the original Fortran fftpack library. On > the other hand, Matlab's fft has this feature (where FFT is applied to the > columns of a matrix, though). > I think the axis keyword argument is necessary. Sure it may not always be used, but I very often do FFT's along one direction or another and I know several other folks who code in MATLAB do as well. The default should be the one that gives the most speed, but we need to allow this feature. It would be O.K. to not have this feature in the low-level interface, but the command exposed to the casual user in Python, must have it. I'm not opposed to a new f2py-generated interface to fftpack as long as the command most users will use has the axis keyword. Like I said it's O.K. if this has to be a Python function with the low-level wrapper having no axis keyword. I think a new interface is better than a mixed f2py + current interface. Just my opinion. -Travis From pearu at cens.ioc.ee Mon Aug 19 03:02:36 2002 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 19 Aug 2002 10:02:36 +0300 (EEST) Subject: [SciPy-dev] fftpack revisited In-Reply-To: Message-ID: On Mon, 19 Aug 2002, Pearu Peterson wrote: > There is only one thing in the interface that I would like to discuss. For > example, fftpack.fft has three arguments: > fft(a, n=None, axis=-1) > The second argument n stands for the size of the Fourier transform > (matrix). 
It seems to me that this argument could be dropped, as this
> information is already contained in a.shape and axis:
> n = a.shape[axis]
> In fftpack._raw_fft this is a special case and, in general, the following rules are applied:
> * If n > a.shape[axis] then a is padded with zeros.
> * If n < a.shape[axis] then a is truncated.
> Both cases create a new array that is non-contiguous (except in special cases). So, in total there will be two copies made of the input data before passing it to Fortran.
> I wonder if we could move this functionality out from the fft functions? I think that the padding functionality is already covered by the fftpack.zeropad function, and truncating can be done by slicing.

Another argument for removing this feature is that it makes it easy to produce hard-to-find bugs. For instance, fft always succeeds whatever the value of the integer n. If one uses a value that is not correct, then the calculations either succeed with totally incorrect results or fail far from the call to fft. Debugging such cases can be difficult, as it is not obvious what will happen just by reading the code.

Just another thought...

Pearu

From pearu at scipy.org Mon Aug 19 18:19:13 2002 From: pearu at scipy.org (Pearu) Date: Mon, 19 Aug 2002 17:19:13 -0500 (CDT) Subject: [SciPy-dev] fftpack revisited In-Reply-To: <1029789954.2752.15.camel@travis> Message-ID:

On 19 Aug 2002, Travis Oliphant wrote:

> What tests did you run?

Attached are test codes. Usage:
1) Unpack fftw2.tgz and cd fftw2
2) To build, run python ./setup_fftw.py build --build-platlib=.
3) To test, run python bench_fft.py

Here are the results that I get:
n= 1000
Testing ... 0.6153 seconds
Testing ... 0.2199 seconds
Testing ... 0.5865 seconds
Testing ... 0.2179 seconds
n= 1024
Testing ... 0.5279 seconds
Testing ... 0.2158 seconds
Testing ... 0.5033 seconds
Testing ...
0.2146 seconds where refers to fftw routine refers to scipy.fftpack.fft function > > 1) y = fftpack.rfft(x) (and its inverse) is replaced with > > y = fftpack2.fftr(x) that returns real data array of length len(x): > > y = [y(0),Re(y(1)),Im(y(1)),...] > > > > The differences with the fftpack.rfft is that fftpack.rfft returns a > > complex array of length > > len(x)/2 if len(x) is even > > (len(x)-1)/2 if len(x) is odd > > and reads as follows > > y = [y(0),Re(y(1))+1j*Im(y(1)),...] > > > > To my opinion, fftpack2.fftr has the following advantages: > > a) the property ifft(fft(x))=x holds what ever is the length of x. > > With fftpack.rfft one has to specify the length when doing inverse, > > i.e. ifft(fft(x),len(x))=x holds, in general. I find it as a possible > > source of confusion and bugs. > > This is a long-standing ambiguity with fft's. You always have to check > what the definition is. What does Matlab do? We need an interface that > reduces surprise... In matlab: ++++++++++++++++++++++++ >> help fft FFT Discrete Fourier transform. FFT(X) is the discrete Fourier transform (DFT) of vector X. For matrices, the FFT operation is applied to each column. For N-D arrays, the FFT operation operates on the first non-singleton dimension. FFT(X,N) is the N-point FFT, padded with zeros if X has less than N points and truncated if it has more. FFT(X,[],DIM) or FFT(X,N,DIM) applies the FFT operation across the dimension DIM. >> help fftn FFTN N-dimensional discrete Fourier Transform. FFTN(X) returns the N-dimensional discrete Fourier transform of the N-D array X. If X is a vector, the output will have the same orientation. FFTN(X,SIZ) pads X so that its size vector is SIZ before performing the transform. If any element of SIZ is smaller than the corresponding dimension of X, then X will be cropped in that dimension. ++++++++++++++++++++++++++++++++++ In addition, matlab has functions IFFT, FFT2, IFFT2, IFFTN. For matlab functions, ifft(fft(x))=x always holds. 
Also I think the current fftpack provides too many fft functions (see the list in my previous mail) compared to what Matlab provides. Optimization decisions which Fortran fftpack functions to call for given input data can be partly made in fft routines. I'll see if some of the routines can be dropped if their functionality can be implemented in some more generic fft routine. Basically, here follows what we have: * fft(x),ifft(x) should accept arbitrary type sequences (int,real,complex) and they always return complex arrays of length len(x). Also fft(ifft(x))=x must hold. Internally, the most optimal fftpack routine is chosen based on the type of x. * rfft(x) accepts only real sequences. Raises an exception if x is complex. Returns a complex array of length len(x)/2. irfft(x) accepts any sequences, returns a real array of length 2*len(x) (I have assumed that len(x) is even). irfft(rfft(x),len(x)) = x holds. * rrfft(x),irrfft(x) accept only real sequences. Both return real arrays of length len(x). rrfft(irrfft(x))=x holds. * hfft(x),ihfft(x) -- opposite to rfft/irfft functions. And then there are 2D and N-D fft routines. Remarks: 1) What shall we decide about the signatures of fft functions? Matlab fft has basically the following signature fft(x[,n[,axis]]) -> y Shall use the same? Initially I was thinking of fft(x[,axis[,n]]) -> y with the possibility of dropping the n argument. Though Matlab functions do truncating and zero-padding if x.shape[axis]!=n, I find it dangerous: if one uses incorrect n, then fft function still succeeds. As a result, the calculations will be incorrect (users may not even notice that) or some exception is raise later, far from the fft call. Such cases are difficult to debug and therefore I would prefer if fft takes input with the correct size, truncating and zero-padding should be done explicitely by the user, scipy can only provide the corresponding functions. What do you think? 2) Notice that fft(x)[:len(x)/2] == rfft(x). 
Do we actually need the rfft(x) function exposed in scipy?

Pearu

-------------- next part -------------- A non-text attachment was scrubbed... Name: fftw2.tgz Type: application/x-gzip Size: 2668 bytes Desc: URL:

From pearu at scipy.org Mon Aug 19 20:18:14 2002 From: pearu at scipy.org (Pearu) Date: Mon, 19 Aug 2002 19:18:14 -0500 (CDT) Subject: [SciPy-dev] fftpack preliminary testing results Message-ID:

Hi,

I have started to create a testing site for fftpack and here are some preliminary benchmark results:

Fast Fourier Transform
==================================
 size | fftpack2 |  FFT | scipy | fftw2
  100 |   1.55   | 1.07 | 1.65  | 0.42   (secs for 7000 calls)
 1000 |   1.77   | 1.82 | 3.01  | 0.65   (secs for 2000 calls)
  512 |   3.97   | 4.56 | 3.20  | 1.50   (secs for 10000 calls)
 2048 |   1.30   | 1.82 | 1.26  | 0.68   (secs for 1000 calls)
 8192 |   5.65   | 6.61 | 5.68  | 2.39   (secs for 500 calls)

fftpack2 refers to the f2py'd version of fftpack
FFT refers to fftpack (the f2c version of fftpack) shipped with Numeric
scipy refers to scipy.fftpack
fftw2 refers to a quickly-put-together interface to FFTW

All tests are carried out on complex 1D contiguous data. All fft routines pass the correctness check (i.e. they all return the same result).

Notes:
* Timings for fftpack2 and scipy fluctuate strongly. On average, their performance is the same.
* fftpack2/scipy perform better than FFT for large data sets.
* FFTW beats them all (which makes more sense than my earlier tests).

Pearu

From nwagner at mecha.uni-stuttgart.de Wed Aug 21 04:29:43 2002 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 21 Aug 2002 10:29:43 +0200 Subject: [SciPy-dev] Resultant of two polynomials - Sylvester matrix. Message-ID: <3D634F77.7E2AE23D@mecha.uni-stuttgart.de>

Hi,

I am looking for a function which computes the resultant of two polynomials. It is defined to be the determinant of their Sylvester matrix. A small example would be appreciated. Thanks in advance.
Nils

From pearu at scipy.org Fri Aug 23 06:27:25 2002 From: pearu at scipy.org (Pearu) Date: Fri, 23 Aug 2002 05:27:25 -0500 (CDT) Subject: [SciPy-dev] djbfft - The fastest FFT in the universe ;) Message-ID:

Hi,

I have found an fft library, djbfft by D. J. Bernstein, that is even faster than FFTW. I have implemented (currently optional) support for it in fftpack2. djbfft provides power-of-2 FFTs for single/double real/complex data. Therefore, djbfft must live alongside some other fft library that provides non-power-of-2 FFTs. However, there is up to a 1.7 times speed improvement for power-of-2 FFTs when using djbfft. The djbfft home page is at http://cr.yp.to/djbfft.html

Eric, can you check if there are any license issues with djbfft that would prevent us from including it in scipy? At least to me it is not clear whether djbfft is in the public domain or not. The author notes that "djbfft can be used in my own code" but does not specify whether it can be distributed in a package like scipy. See also the relevant notes in http://cr.yp.to/distributors.html, but that page does not mention djbfft.
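The split Pearu describes, djbfft for power-of-2 sizes and a general-length library for the rest, comes down to a size check like the sketch below (function names invented; the real glue in fftpack2 lives at the C level):

```python
def is_power_of_two(n):
    # True for 1, 2, 4, 8, ...: a positive integer with exactly one bit set.
    return n > 0 and (n & (n - 1)) == 0

def fft_dispatch(x, djb_fft, general_fft):
    # Route power-of-2 sizes to the djbfft-style routine, everything else
    # to the general-length library, as fftpack2 optionally does.
    if is_power_of_two(len(x)):
        return djb_fft(x)
    return general_fft(x)

print(is_power_of_two(1024), is_power_of_two(1000))  # True False
```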
Here are the latest testing results (only with complex input):

fftpack2 = f2py interface to fftpack
FFT = f2c version of fftpack (from NumPy)
scipy = current scipy interface to Fortran fftpack

Fast Fourier Transform
==================================
 size | fftpack2 |  FFT | scipy | fftw2
  100 |   0.36   | 0.39 | 0.37  | skipped  (secs for 7000 calls)
 1000 |   0.56   | 0.63 | 0.53  | skipped  (secs for 2000 calls)
  256 |   0.60   | 0.74 | 0.61  | skipped  (secs for 10000 calls)
  512 |   0.96   | 1.29 | 0.98  | skipped  (secs for 10000 calls)
 1024 |   0.17   | 0.25 | 0.17  | skipped  (secs for 1000 calls)
 2048 |   0.35   | 0.49 | 0.35  | skipped  (secs for 1000 calls)
 4096 |   0.52   | 0.70 | 0.52  | skipped  (secs for 500 calls)
 8192 |   2.50   | 2.83 | 2.59  | skipped  (secs for 500 calls)

fftpack2 = djbfft & fftpack

Fast Fourier Transform
==================================
 size | fftpack2 |  FFT | scipy | fftw2
  100 |   0.37   | 0.39 | 0.37  | skipped  (secs for 7000 calls)
 1000 |   0.57   | 0.63 | 0.53  | skipped  (secs for 2000 calls)
  256 |   0.57   | 0.74 | 0.61  | skipped  (secs for 10000 calls)
  512 |   0.81   | 1.28 | 0.97  | skipped  (secs for 10000 calls)
 1024 |   0.14   | 0.25 | 0.17  | skipped  (secs for 1000 calls)
 2048 |   0.27   | 0.58 | 0.35  | skipped  (secs for 1000 calls)
 4096 |   0.42   | 0.70 | 0.51  | skipped  (secs for 500 calls)
 8192 |   1.40   | 2.99 | 2.66  | skipped  (secs for 500 calls)
fftpack2 = djbfft & fftw

Fast Fourier Transform
==================================
 size | fftpack2 |  FFT | scipy | fftw2
  100 |   0.35   | 0.39 | 0.37  | skipped  (secs for 7000 calls)
 1000 |   0.47   | 0.63 | 0.53  | skipped  (secs for 2000 calls)
  256 |   0.57   | 0.74 | 0.61  | skipped  (secs for 10000 calls)
  512 |   0.82   | 1.29 | 0.98  | skipped  (secs for 10000 calls)
 1024 |   0.14   | 0.25 | 0.17  | skipped  (secs for 1000 calls)
 2048 |   0.27   | 0.48 | 0.35  | skipped  (secs for 1000 calls)
 4096 |   0.42   | 0.71 | 0.52  | skipped  (secs for 500 calls)
 8192 |   1.49   | 2.95 | 2.57  | skipped  (secs for 500 calls)

Regards, Pearu

From eric at enthought.com Fri Aug 23 16:16:32 2002 From: eric at enthought.com (eric jones) Date: Fri, 23 Aug 2002 15:16:32 -0500 Subject: [SciPy-dev] license for djbfft? Message-ID: <011901c24ae1$f9024dd0$6b01a8c0@ericlaptop>

Dr. Bernstein,

I've copied the scipy-dev list on this mail. We develop an Open Source (BSD-style) package of scientific computing modules for Python at www.scipy.org. We've been using fftpack and, optionally, fftw for our fft module. fftw's license is GPL; otherwise we would be using it as the standard package. One of the community developers (Pearu Peterson) has been working hard on improving the fft module lately and ran across your fft package. Congrats on making such a nice tool. Combining your algorithms for power-of-2 lengths with fftpack's generic-length ffts provides pretty much equivalent or better performance than fftw on most fft lengths. Pearu posted benchmarks below: http://www.scipy.org/site_content/mailman?fn=scipy-dev/2002-August/001101.html

I searched for a license in the djbfft source but didn't see one. I also didn't see any mention of djbfft on http://cr.yp.to/distributors.html. So, the question is: can we include the djbfft source code in SciPy and distribute it under its BSD license? The SciPy community is certainly willing to contribute every change we make to the source code.
Our changes are typically made to help with compatibility across platforms and only occasionally to the core algorithms. Thanks for making your tools available on the web; I look forward to your reply regarding the licensing issue.

Regards,
eric

From steven.robbins at videotron.ca  Fri Aug 23 16:09:32 2002
From: steven.robbins at videotron.ca (Steve M. Robbins)
Date: Fri, 23 Aug 2002 16:09:32 -0400
Subject: [SciPy-dev] scipy / SGI MIPSpro compiler
Message-ID: <20020823200932.GI11939@nyongwa.montreal.qc.ca>

Hello,

I'm trying to build scipy on an IRIX 6.5 system using the vendor compiler. (That's what I built python with.) The compiler complained bitterly about signal/S_bspline_util.c because function prototypes are mixed with old-style (K&R) function declarations. Here's a patch that changes them all to "new style" (ANSI prototype) declarations.

Cheers,
-Steve

Index: signal/S_bspline_util.c
===================================================================
RCS file: /home/cvsroot/world/scipy/signal/S_bspline_util.c,v
retrieving revision 1.4
diff -u -b -B -r1.4 S_bspline_util.c
--- signal/S_bspline_util.c	30 Jun 2002 23:55:05 -0000	1.4
+++ signal/S_bspline_util.c	23 Aug 2002 20:11:36 -0000
@@ -28,12 +28,8 @@
 /* with a given starting value loaded into the array */
 void
-S_IIR_order1 (a1, a2, x, y, N, stridex, stridey)
-     float a1;
-     float a2;
-     float *x;
-     float *y;
-     int N, stridex, stridey;
+S_IIR_order1 ( float a1, float a2, float *x, float *y,
+               int N, int stridex, int stridey )
 {
     float *yvec = y+stridey;
     float *xvec = x+stridex;
@@ -51,13 +47,8 @@
 /* y[n] = a1 * x[n] + a2 * y[n-1] + a3 * y[n-2] */
 /* with two starting values loaded into the array */
 void
-S_IIR_order2 (a1, a2, a3, x, y, N, stridex, stridey)
-     float a1;
-     float a2;
-     float a3;
-     float *x;
-     float *y;
-     int N, stridex, stridey;
+S_IIR_order2 ( float a1, float a2, float a3, float *x, float *y,
+               int N, int stridex, int stridey )
 {
     float *yvec = y+2*stridey;
     float *xvec = x+2*stridex;
@@ -86,14 +77,8 @@
 */
 void
-S_IIR_order2_cascade (cs, z1, z2, y1_0, x, yp, N, stridex, stridey)
-     float cs;
-     float z1;
-     float z2;
-     float y1_0;
-     float *x;
-     float *yp;
-     int N, stridex, stridey;
+S_IIR_order2_cascade ( float cs, float z1, float z2, float y1_0, float *x,
+                       float *yp, int N, int stridex, int stridey)
 {
     float *yvec = yp+stridey;
     float *xvec = x+stridex;
@@ -137,13 +122,8 @@
 */
 int
-S_IIR_forback1 (c0, z1, x, y, N, stridex, stridey, precision)
-     float c0;
-     float z1;
-     float *x;
-     float *y;
-     int N, stridex, stridey;
-     float precision;
+S_IIR_forback1 ( float c0, float z1, float *x, float *y,
+                 int N, int stridex, int stridey, float precision )
 {
     float *yp = NULL;
     float *xptr = x;
@@ -189,12 +169,8 @@
 /* h must be odd length */
 /* strides in units of sizeof(float) bytes */
 void
-S_FIR_mirror_symmetric (in, out, N, h, Nh, instride, outstride)
-     float *in;
-     float *out;
-     int N, Nh;
-     float *h;
-     int instride, outstride;
+S_FIR_mirror_symmetric ( float *in, float *out, int N, float *h, int Nh,
+                         int instride, int outstride )
 {
     int n, k;
     int Nhdiv2 = Nh >> 1;
@@ -254,14 +230,9 @@
 }
 int
-S_separable_2Dconvolve_mirror(in, out, M, N, hr, hc, Nhr,
-                              Nhc, instrides, outstrides)
-     float *in;
-     float *out;
-     int M, N;
-     float *hr, *hc;
-     int Nhr, Nhc;
-     int *instrides, *outstrides;
+S_separable_2Dconvolve_mirror( float *in, float *out, int M, int N,
+                               float *hr, float *hc, int Nhr, int Nhc,
+                               int *instrides, int *outstrides )
 {
     int m, n;
     float *tmpmem;
@@ -305,10 +276,7 @@
 static float S_hs(int,float,double,double);
 float
-S_hc(k, cs, r, omega)
-     int k;
-     float cs;
-     double r, omega;
+S_hc( int k, float cs, double r, double omega )
 {
     if (k < 0) return 0.0;
     if (omega == 0.0)
@@ -319,10 +287,7 @@
 }
 float
-S_hs(k, cs, rsq, omega)
-     int k;
-     float cs;
-     double rsq, omega;
+S_hs( int k, float cs, double rsq, double omega )
 {
     float cssq;
     float c0;
@@ -383,12 +348,8 @@
 */
 int
-S_IIR_forback2 (r, omega, x, y, N, stridex, stridey, precision)
-     double r,omega;
-     float *x;
-     float *y;
-     int N, stridex, stridey;
-     float precision;
+S_IIR_forback2 ( double r, double omega, float *x, float *y,
+                 int N, int stridex, int stridey, float precision )
 {
     float cs;
     float *yp = NULL;
@@ -497,13 +458,8 @@
 */
 int
-S_cubic_spline2D(image, coeffs, M, N, lambda, strides, cstrides, precision)
-     float *image;
-     float *coeffs;
-     int M, N;
-     double lambda;
-     int *strides, *cstrides;
-     float precision;
+S_cubic_spline2D( float *image, float *coeffs, int M, int N, double lambda,
+                  int *strides, int *cstrides, float precision )
 {
     double r, omega;
     float *inptr;
@@ -590,13 +546,9 @@
 */
 int
-S_quadratic_spline2D(image, coeffs, M, N, lambda, strides, cstrides, precision)
-     float *image;
-     float *coeffs;
-     int M, N;
-     double lambda;
-     int *strides, *cstrides;
-     float precision;
+S_quadratic_spline2D( float *image, float *coeffs, int M, int N,
+                      double lambda, int *strides, int *cstrides,
+                      float precision )
 {
     double r;
     float *inptr;

From steven.robbins at videotron.ca  Fri Aug 23 16:23:09 2002
From: steven.robbins at videotron.ca (Steve M. Robbins)
Date: Fri, 23 Aug 2002 16:23:09 -0400
Subject: [SciPy-dev] scipy / SGI MIPSpro compiler (part 2)
Message-ID: <20020823202309.GJ11939@nyongwa.montreal.qc.ca>

Hi again,

Got stuck in scipy_base/_compiled_base.c with three errors and a warning.

cc -O -OPT:Olimit=0 -I/usr/local/unstable/include/python2.1 -c /tmp/smr/scipy/scipy_base/_compiled_base.c -o build/temp.irix64-6.5-2.1/_compiled_base.o
error: command 'cc' failed with exit status 2

swmgr at wart{scipy}cc -O -OPT:Olimit=0 -I/usr/local/unstable/include/python2.1 -c /tmp/smr/scipy/scipy_base/_compiled_base.c

cc-3316 cc: ERROR File = /tmp/smr/scipy/scipy_base/_compiled_base.c, Line = 45
  The expression must be a pointer to a complete object type.

    for (k=0; k < asize; k++,ip+=instride) {
                             ^

cc-3316 cc: ERROR File = /tmp/smr/scipy/scipy_base/_compiled_base.c, Line = 48
  The expression must be a pointer to a complete object type.
    for (j=0; j < copied; j++,op2+=elsize) {
                              ^

cc-3316 cc: ERROR File = /tmp/smr/scipy/scipy_base/_compiled_base.c, Line = 58
  The expression must be a pointer to a complete object type.

    op += elsize;  /* Get ready to put next match */
       ^

cc-1174 cc: WARNING File = /tmp/smr/scipy/scipy_base/_compiled_base.c, Line = 87
  The variable "f1" was declared but never referenced.

    PyObject *m, *d, *s, *f1;
                          ^

My understanding is that you can't do pointer arithmetic on void* types. I changed them to char*, assuming that the strides are given in bytes.

Index: scipy_base/_compiled_base.c
===================================================================
RCS file: /home/cvsroot/world/scipy/scipy_base/_compiled_base.c,v
retrieving revision 1.1
diff -u -b -B -r1.1 _compiled_base.c
--- scipy_base/_compiled_base.c	16 Aug 2002 13:09:42 -0000	1.1
+++ scipy_base/_compiled_base.c	23 Aug 2002 20:29:44 -0000
@@ -15,8 +15,8 @@
     int asize, abytes, new;
     int copied=0, nd;
     int instride=0, elsize, k, j, dims[1];
-    void *ip, *op;    /* Current memory buffer */
-    void *op2;
+    char *ip, *op;    /* Current memory buffer */
+    char *op2;
 
     static char *kwlist[] = {"input", NULL};
@@ -84,7 +84,7 @@
 };
 
 DL_EXPORT(void) init_compiled_base(void) {
-    PyObject *m, *d, *s, *f1;
+    PyObject *m, *d, *s;
 
     /* Create the module and add the functions */
     m = Py_InitModule("_compiled_base", methods);

From steven.robbins at videotron.ca  Sat Aug 24 16:24:00 2002
From: steven.robbins at videotron.ca (Steve M. Robbins)
Date: Sat, 24 Aug 2002 16:24:00 -0400
Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3)
Message-ID: <20020824202359.GA20451@nyongwa.montreal.qc.ca>

Hi again,

I've managed to build and install scipy (using the two patches sent previously) on an IRIX 6.5 system with SGI's MIPSpro C compiler and GNU Fortran (GCC 3.2). I'm using CVS scipy. I built *without* ATLAS, using the BLAS and LAPACK sources from netlib. (Many thanks to whoever made it possible for scipy to build blas & lapack from source.)
I've moved on to running the scipy tests. I'm not sure which failures are worth worrying about. For instance, a simple "import scipy" results in a warning:

>>> import scipy
/home/bic/stever/irix-6/lib/python2.1/site-packages/scipy/linalg/lapack.py:24: UserWarning: exceptions.ImportError: No module named clapack
  warnings.warn(clapack.__doc__)
exceptions.ImportError: No module named cblas

I don't think this is a problem -- the comments in lapack.py suggest that either Fortran or C lapack is okay. I'm not sure why the exception is raised. (??) I'm not sure where the exception about cblas comes from -- is cblas required? (I'm using the Fortran BLAS sources, compiled during the scipy build.)

Now, when I run the test suite itself,

    python -c 'import scipy; scipy.test(level=1)' 2>err | tee out

I get some more worrying messages. Here are the standard output and standard error, respectively. I'd appreciate any comments if I've done something wrong in building scipy. Are these messages about "No test suite found for ..." normal?
--------------------- out ----------------------------------
exceptions.ImportError: No module named cblas
No test suite found for scipy.helper
No test suite found for scipy.__cvs_version__
No test suite found for scipy.helpmod
creating test suite for: scipy.common
No test suite found for scipy.scipy_tempfile
No test suite found for scipy.sync
No test suite found for scipy.pilutil
No test suite found for scipy.proc
No test suite found for scipy.ga
No test suite found for scipy.io
No test suite found for scipy.cow
No test suite found for scipy.integrate
creating test suite for: scipy.stats.distributions
No test suite found for scipy.stats.pstat
creating test suite for: scipy.stats.morestats
No test suite found for scipy.stats.rv
creating test suite for: scipy.stats.stats
No test suite found for scipy.stats.rv2
creating test suite for: scipy_base.limits
creating test suite for: scipy_base.index_tricks
creating test suite for: scipy_base.shape_base
creating test suite for: scipy_base.function_base
creating test suite for: scipy_base.matrix_base
No test suite found for scipy_base.scimath
creating test suite for: scipy_base.type_check
No test suite found for scipy_base.polynomial
No test suite found for scipy.special
No test suite found for scipy.cluster
FAILURE to import scipy.weave.accelerate_tools
:0: AttributeError: 'scipy.weave' module has no attribute 'accelerate_tools' (in ?)
No test suite found for scipy.weave.scalar_info
No test suite found for scipy.weave.blitz_info
No test suite found for scipy.weave.cxx_info
creating test suite for: scipy.weave.catalog
No test suite found for scipy.weave.blitz_spec
No test suite found for scipy.weave.dumbdbm_patched
creating test suite for: scipy.weave.scalar_spec
No test suite found for scipy.weave.lib2def
creating test suite for: scipy.weave.sequence_spec
No test suite found for scipy.weave.misc
No test suite found for scipy.weave.code_blocks
No test suite found for scipy.weave.inline_info
No test suite found for scipy.weave.converters
No test suite found for scipy.weave.unicode_spec
creating test suite for: scipy.weave.inline_tools
No test suite found for scipy.weave.bytecodecompiler
creating test suite for: scipy.weave.slice_handler
No test suite found for scipy.weave.unicode_info
creating test suite for: scipy.weave.build_tools
creating test suite for: scipy.weave.blitz_tools
No test suite found for scipy.weave.base_info
No test suite found for scipy.weave.swig_info
creating test suite for: scipy.weave.ast_tools
creating test suite for: scipy.weave.ext_tools
building extensions here: /home/bic/stever/.python21_compiled/4190480
building extensions here: /home/bic/stever/.python21_compiled/4190481
No test suite found for scipy.weave.wx_info
No test suite found for scipy.weave.common_info
No test suite found for scipy.weave.wx_spec
creating test suite for: scipy.weave.common_spec
No test suite found for scipy.weave.dumb_shelve
No test suite found for scipy.weave.base_spec
No test suite found for scipy.weave.conversion_code
No test suite found for scipy.weave.standard_array_info
creating test suite for: scipy.weave.size_check
No test suite found for scipy.weave.conversion_code_old
creating test suite for: scipy.weave.standard_array_spec
No test suite found for scipy.interpolate
No test suite found for scipy.signal
No test suite found for anneal
No test suite found for scipy.optimize.minpack
No test suite found for scipy.optimize.optimize
No test suite found for scipy.optimize.common_routines
creating test suite for: scipy.optimize.zeros
No test suite found for scipy.linalg.interface_gen
creating test suite for: scipy.linalg.blas
No test suite found for scipy.linalg.lapack
creating test suite for: scipy.linalg.basic
exceptions.ImportError: No module named cblas
No test suite found for scipy.linalg.flinalg
creating test suite for: scipy.linalg.decomp
creating test suite for: scipy.linalg.matfuncs
!! FAILURE building test for scipy.linalg.matfuncs
:1: ImportError: No module named test_matfuncs (in ?)
Testing uniform
Testing stnorm
Testing norm
Testing lognorm
Testing expon
Testing beta
Testing power
Testing bradford
Testing burr
Testing fisk
Testing cauchy
Testing halfcauchy
Testing foldcauchy
Testing gamma
Testing gengamma
Testing loggamma
Testing alpha
Testing anglit
Testing arcsine
Testing betaprime
Testing erlang
Testing dgamma
Testing extreme3
Testing exponweib
Testing exponpow
Testing frechet
Testing gilbrat
Testing f
Testing ncf
Testing chi2
Testing chi
Testing nakagami
Testing genpareto
Testing genextreme
Testing genhalflogistic
Testing pareto
Testing lomax
Testing halfnorm
Testing halflogistic
Testing fatiguelife
Testing foldnorm
Testing ncx2
Testing t
Testing nct
Testing weibull
Testing dweibull
Testing maxwell
Testing rayleigh
Testing genlogistic
Testing logistic
Testing gumbel
Testing gompertz
Testing hypsecant
Testing laplace
Testing reciprocal
Testing triang
Testing tukeylambda
Ties preclude use of exact statistic.
Ties preclude use of exact statistic.
warning: specified build_dir '_bad_path_' does not exist or is or is not writable. Trying default locations
warning: specified build_dir '..' does not exist or is or is not writable. Trying default locations
warning: specified build_dir '_bad_path_' does not exist or is or is not writable. Trying default locations
warning: specified build_dir '..' does not exist or is or is not writable. Trying default locations
TESTING CONVERGENCE
zero should be 1

function f2
cc.bisect : 1.0000000000001952
cc.ridder : 1.0000000000004661
cc.brenth : 0.9999999999999997
cc.brentq : 0.9999999999999577

function f3
cc.bisect : 1.0000000000001952
cc.ridder : 1.0000000000000000
cc.brenth : 1.0000000000000009
cc.brentq : 1.0000000000000011

function f4
cc.bisect : 1.0000000000001952
cc.ridder : 1.0000000000001454
cc.brenth : 0.9999999999993339
cc.brentq : 0.9999999999993339

function f5
cc.bisect : 1.0000000000001952
cc.ridder : 1.0000000000004574
cc.brenth : 0.9999999999991444
cc.brentq : 0.9999999999991444

function f6
cc.bisect : 1.0000000000001952
cc.ridder : 1.0000000000004949
cc.brenth : 0.9999999999990239
cc.brentq : 1.0000000000001117

----------------------------- err -----------------------------------
/home/bic/stever/irix-6/lib/python2.1/site-packages/scipy/linalg/lapack.py:24: UserWarning: exceptions.ImportError: No module named clapack
  warnings.warn(clapack.__doc__)
/home/bic/stever/irix-6/lib/python2.1/site-packages/scipy/linalg/lapack.py:24: UserWarning: exceptions.ImportError: No module named clapack
  warnings.warn(clapack.__doc__)
...................................................................F..................................................................................................................................................................................................................EE..........................................................................................................................F..FF......scopy:n=4
..scopy:n=3
....dcopy:n=4
..dcopy:n=3
....ccopy:n=4
..ccopy:n=3
....zcopy:n=4
..zcopy:n=3
.....saxpy:n=4
..saxpy:n=3
.....daxpy:n=4
..daxpy:n=3
.....caxpy:n=4
..caxpy:n=3
.....zaxpy:n=4
..zaxpy:n=3
...sscal:n=4
...dscal:n=4
...cscal:n=4
...zscal:n=4
....sswap:n=4
..sswap:n=3
....dswap:n=4
..dswap:n=3
....cswap:n=4
..cswap:n=3
....zswap:n=4
..zswap:n=3
..............................................................
======================================================================
ERROR: check_add_function_ordered (test_catalog.test_catalog)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/weave/tests/test_catalog.py", line 286, in check_add_function_ordered
    q.add_function('f',string.upper)
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/weave/catalog.py", line 575, in add_function
    self.add_function_persistent(code,function)
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/weave/catalog.py", line 615, in add_function_persistent
    module = getmodule(function)
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/weave/catalog.py", line 66, in getmodule
    if mod and object in mod.__dict__.values():
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy_distutils/misc_util.py", line 43, in __getattr__
    raise self._info[0],self._info[1]
ImportError: No module named cblas
======================================================================
ERROR: Test persisting a function in the default catalog
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/weave/tests/test_catalog.py", line 274, in check_add_function_persistent1
    q.add_function_persistent('code',i)
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/weave/catalog.py", line 615, in add_function_persistent
    module = getmodule(function)
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/weave/catalog.py", line 66, in getmodule
    if mod and object in mod.__dict__.values():
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy_distutils/misc_util.py", line 43, in __getattr__
    raise self._info[0],self._info[1]
ImportError: No module named cblas
======================================================================
FAIL: check_basic (test_morestats.test_shapiro)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/stats/tests/test_morestats.py", line 30, in check_basic
    assert_almost_equal(w,0.90047299861907959,8)
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
DESIRED: 0.900472998619
ACTUAL: 0.900472760201
======================================================================
FAIL: check_asum (test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 59, in check_asum
    assert_almost_equal(f([3,-4,5]),12)
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
DESIRED: 12
ACTUAL: 0.0
======================================================================
FAIL: check_dot (test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 66, in check_dot
    assert_almost_equal(f([3,-4,5],[2,5,1]),-9)
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
DESIRED: -9
ACTUAL: 0.0
======================================================================
FAIL: check_nrm2 (test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_nrm2
    assert_almost_equal(f([3,-4,5]),math.sqrt(50))
  File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
DESIRED: 7.07106781187
ACTUAL: 3.73560645334e+270
----------------------------------------------------------------------
Ran 559 tests in 11.403s

FAILED (failures=4, errors=2)
----------------------------------------------------------------------

Thanks,
-Steve

From pearu at scipy.org  Sat Aug 24 18:03:19 2002
From: pearu at scipy.org (Pearu)
Date: Sat, 24 Aug 2002 17:03:19 -0500 (CDT)
Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3)
In-Reply-To: <20020824202359.GA20451@nyongwa.montreal.qc.ca>
Message-ID:

On Sat, 24 Aug 2002, Steve M. Robbins wrote:

> Hi again,
>
> I've managed to build and install scipy (using the two patches sent
> previously) on an IRIX 6.5 system with SGI's MIPSpro C compiler and
> GNU Fortran (GCC 3.2). I'm using CVS scipy.

Thanks for the patches. I think nobody has applied them yet, but I'll do it as soon as I get time for it.

> For instance, a simple "import scipy" results in a warning:
>
> >>> import scipy
> /home/bic/stever/irix-6/lib/python2.1/site-packages/scipy/linalg/lapack.py:24: UserWarning: exceptions.ImportError: No module named clapack
>   warnings.warn(clapack.__doc__)
> exceptions.ImportError: No module named cblas
>
> I don't think this is a problem -- the comments in lapack.py suggest
> that either Fortran or C lapack is okay. I'm not sure why the exception
> is raised. (??) I'm not sure where the exception about cblas comes
> from -- is cblas required? (I'm using the Fortran BLAS sources, compiled
> during the scipy build)

Actually, no exception is raised; only the exception message is shown, as a warning about the missing cblas.
cblas (and clapack) are built only if ATLAS is available. I am not sure yet whether this warning should be displayed or not; sometimes it can indicate real problems...

> Now, when I run the test suite itself,
>
>     python -c 'import scipy; scipy.test(level=1)' 2>err | tee out
>
> I get some more worrying messages. Here are the standard output
> and standard error, respectively. I'd appreciate any comments if
> I've done something wrong in building scipy. Are these messages
> about "No test suite found for ..." normal?

Yes, they are normal, but only temporarily: all these missing test suites should eventually be implemented. Only the failures at the end of this message are crucial and worth worrying about.

What f2py version are you using? Could you get the latest f2py from CVS and run the following tests:

  cd f2py2e && python setup.py install
  cd f2py2e/tests/f77
  python return_integer.py --quite
  python return_logical.py --quite
  python return_real.py --quite
  python return_complex.py --quite
  python return_character.py --quite

All these commands should display `ok'. If not, then remove the --quite option and run again to see more verbose output. You can send the stdout/stderr messages directly to me.
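[Editorial aside: the behaviour Pearu describes -- an ImportError surfacing as a warning at import time and as a real exception only when the missing module is actually used -- can be mimicked with a small placeholder object. This is a sketch of the general pattern, not scipy_distutils' actual code.]

```python
import warnings

class FailedImport:
    """Stand-in for a module that failed to import: warn once when the
    placeholder is created, and raise the original error on any real use."""
    def __init__(self, name, error):
        self._name = name
        self._error = error
        warnings.warn("ImportError: %s" % error)  # shown at import time

    def __getattr__(self, attr):
        # Called only for attributes not set in __init__, i.e. any
        # attempt to actually use the "module".
        raise ImportError(self._error)

try:
    import cblas  # assumed absent, as on Steve's non-ATLAS build
except ImportError as err:
    cblas = FailedImport("cblas", str(err))
```

Importing merely emits the UserWarning; only touching an attribute of `cblas` raises the ImportError, which is why the test run shows the message without stopping.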
Pearu

> ======================================================================
> FAIL: check_asum (test_blas.test_fblas1_simple)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 59, in check_asum
>     assert_almost_equal(f([3,-4,5]),12)
>   File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal
>     assert round(abs(desired - actual),decimal) == 0, msg
> AssertionError:
> Items are not equal:
> DESIRED: 12
> ACTUAL: 0.0
> ======================================================================
> FAIL: check_dot (test_blas.test_fblas1_simple)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 66, in check_dot
>     assert_almost_equal(f([3,-4,5],[2,5,1]),-9)
>   File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal
>     assert round(abs(desired - actual),decimal) == 0, msg
> AssertionError:
> Items are not equal:
> DESIRED: -9
> ACTUAL: 0.0
> ======================================================================
> FAIL: check_nrm2 (test_blas.test_fblas1_simple)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_nrm2
>     assert_almost_equal(f([3,-4,5]),math.sqrt(50))
>   File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal
>     assert round(abs(desired - actual),decimal) == 0, msg
> AssertionError:
> Items are not equal:
> DESIRED: 7.07106781187
> ACTUAL: 3.73560645334e+270
> ----------------------------------------------------------------------

From pearu at scipy.org  Sat Aug 24 18:08:23 2002
From: pearu at scipy.org (Pearu)
Date: Sat, 24 Aug 2002 17:08:23 -0500 (CDT)
Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3)
In-Reply-To:
Message-ID:

On Sat, 24 Aug 2002, Pearu wrote:

> python return_integer.py --quite
                             ^^^^^^^

I meant here the --quiet option, of course.

Pearu

From pearu at scipy.org  Sun Aug 25 07:30:58 2002
From: pearu at scipy.org (Pearu)
Date: Sun, 25 Aug 2002 06:30:58 -0500 (CDT)
Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3)
In-Reply-To: <20020825005834.GD20451@nyongwa.montreal.qc.ca>
Message-ID:

Steve,

On Sat, 24 Aug 2002, Steve M. Robbins wrote:

> On Sat, Aug 24, 2002 at 05:03:19PM -0500, Pearu wrote:
> > Actually there is no exception raised. Only the exception message is shown
> > as a warning of missing cblas. cblas (and clapack) is build only if ATLAS
> > is available.
>
> Oh. So you're saying that both "No module named clapack" and "No
> module named cblas" came from a single warning message, and that I
> simply mis-parsed them as two?

No. When running the tests there were also the same warnings for clapack. I see no problem here, though.

> > Could you get the latest
> > f2py from CVS and run the following tests:
> >
> > cd f2py2e && python setup.py install
>
> This bombed with a message about scipy_distutils missing.
> I made a symlink scipy_distutils -> ../scipy-cvs/scipy_distutils
> where the scipy-cvs directory holds scipy checked out from
> CVS within the last day or so. Is this OK?

That is OK. Normally scipy_distutils is shipped also with f2py. To install scipy, one needs f2py first; f2py installs the f2py2e and scipy_distutils packages, and then scipy re-installs scipy_distutils.

> Incidentally, it looks like f2py installs scipy_distutils itself.
> Am I right? If so, is there a way to inhibit this?

Yes, you are right.
f2py and scipy_distutils are developed very closely, so usually there is no problem if either scipy or f2py installs scipy_distutils; just make sure that you have the latest scipy_distutils from scipy CVS. I also use a symlink from scipy to the f2py2e directory while working with the codes. However, if you really want to stop f2py from re-installing scipy_distutils, then you must edit f2py2e/setup.py.

> > cd f2py2e/tests/f77
> > python return_integer.py --quiet
>
> Sadly, this failed. Here is the non-quiet output from
> "python return_integer.py 2>&1":
>
> cc-3303 cc: WARNING File = /tmp/@425198.0/ext_return_integermodule.c, Line = 179
>   A type qualifier on a return type is meaningless.
>
>   const void (*f2py_func)(int*,int*)) {
>   ^

These warnings are now fixed.

> Traceback (most recent call last):
>   File "return_integer.py", line 99, in ?
>     runtest(t)
>   File "return_integer.py", line 56, in runtest
>     assert t(-123)==-123
> AssertionError
>
> I ran the test "t(-123)" by hand, and it is only the "t1"
> version that fails:
>
> >>> t1(-123)
> 133
>
> So the integer*1 is unsigned, perhaps? The SGI cc manual says "The
> default is to treat values of type char as if they had type unsigned
> char". I guess the generated code should use "signed char".

Yes, fixed in f2py CVS.

> > python return_real.py --quiet
>
> This time I got a bus error from python. :-(
> Running the tests manually, I find that
>
>     abs(t([234])-234)<=err
>
> with t=t0 is the first test that coredumps.

This is a known bug with SGI compilers (when using optimization flags) and Python 2.1. See

http://sourceforge.net/tracker/index.php?func=detail&aid=435026&group_id=5470&atid=105470

I have worked around this bug for f2py and SGI.

> > python return_complex.py --quiet
>
> try: raise RuntimeError,`t(10l**400)`
> RuntimeError: (inf+0j)

Fixed.

> > python return_character.py --quiet
>
> Traceback (most recent call last):
>   File "return_character.py", line 62, in ?
>     runtest(t)
>   File "return_character.py", line 50, in runtest
>     assert t(array(77))=='M'
> AssertionError

Fixed.

> Both t0(array(77)) and t1(array(77)) return ''.
>
> There may be some other weirdness going on. While running some of
> the "return_character.py" tests by hand, I had the python process
> killed by the system, with a note in the syslog:
>
> Aug 24 20:48:14 1A:wart unix: |$(0x6dd)ALERT: Process [python] 428471 generated trap, but has signal 11 held or ignored
> Aug 24 20:48:14 6A:wart unix: epc 0xfa3a38c ra 0xfa3abd4 badvaddr 0x4973f130
> Aug 24 20:48:14 6A:wart unix: Process has been killed to prevent infinite loop
> Aug 24 20:49:10 1A:wart unix: |$(0x6dd)ALERT: Process [python] 428406 generated trap, but has signal 11 held or ignored
> Aug 24 20:49:10 6A:wart unix: epc 0xfa3a38c ra 0xfa3abd4 badvaddr 0x4973f180
> Aug 24 20:49:10 6A:wart unix: Process has been killed to prevent infinite loop
>
> This diagnostic is a new one to me.

Hmm, can you reproduce this one and send the details to me?

Thanks for the bug reports. I believe they are all fixed now in f2py CVS. Could you verify that?

Thanks,
Pearu

From steven.robbins at videotron.ca  Sun Aug 25 11:26:48 2002
From: steven.robbins at videotron.ca (Steve M. Robbins)
Date: Sun, 25 Aug 2002 11:26:48 -0400
Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3)
In-Reply-To: <20020825005834.GD20451@nyongwa.montreal.qc.ca>
Message-ID: <20020825152648.GA1135@nyongwa.montreal.qc.ca>

Hello Pearu,

Thanks so much for the f2py patches. All the tests/f77 are now "ok". I've installed the new f2py and am now rebuilding scipy to see if some of the latter's problems go away ...

On Sun, Aug 25, 2002 at 06:30:58AM -0500, Pearu wrote:

> Yes, you are right. f2py and scipy_distutils are developed very closely,
> so usually there is no problem if either scipy or f2py installs
> scipy_distutils, just make sure that you have the latest scipy_distutils
> from scipy CVS. I also use symlink from scipy to f2py2e directory while
> working with the codes. However, if you really want to stop f2py to
> re-installing scipy_distutils, then you must edit f2py2e/setup.py.

This is fine when working with both sources from CVS, but how about the binary packages of f2py and scipy?

One of the other hats I wear is "Debian developer", and I'm interested in getting a package of scipy into Debian. [And f2py, since scipy requires it to build.] Joe Reinhardt has made some experimental packages, but they have not yet been uploaded to Debian proper.

It is possible to have the f2py and scipy packages both install scipy_distutils, but this is a nuisance. I would be worried that at some point the packages' copies get out of sync, and that installing "f2py" would cause "scipy" to stop working, or vice versa.

I can think of two ways to uncouple the two packages. One is to have f2py install/use the distutils under a different name, e.g. f2py_distutils. The other way is to split out scipy_distutils into a separately-installable package that would be a prerequisite for both scipy and f2py.

I realise that either solution causes work for you, and that binary packaging isn't your primary concern at the moment. However, I thought I'd test the waters to see whether decoupling the packages is at all feasible. Perhaps there are other options that I'm overlooking.

> > There may be some other weirdness going on. While running some of
> > the "return_character.py" tests by hand, I had the python process
> > killed by the system, with a note in the syslog:
> >
> > Aug 24 20:48:14 1A:wart unix: |$(0x6dd)ALERT: Process [python] 428471 generated trap, but has signal 11 held or ignored
> > Aug 24 20:48:14 6A:wart unix: epc 0xfa3a38c ra 0xfa3abd4 badvaddr 0x4973f130
> > Aug 24 20:48:14 6A:wart unix: Process has been killed to prevent infinite loop
> > Aug 24 20:49:10 1A:wart unix: |$(0x6dd)ALERT: Process [python] 428406 generated trap, but has signal 11 held or ignored
> > Aug 24 20:49:10 6A:wart unix: epc 0xfa3a38c ra 0xfa3abd4 badvaddr 0x4973f180
> > Aug 24 20:49:10 6A:wart unix: Process has been killed to prevent infinite loop
> >
> > This diagnostic is a new one to me.
>
> Hmm, can you reproduce this one and send the details to me?

Unfortunately, it was one of those transient bugs. I was running python interactively and the process got killed at different times, sometimes after I had just typed in something innocuous like "t=t1". I may have a look again later today, but in reality, now that f2py is working, it is not high on my priority list.

-Steve

From pearu at scipy.org  Sun Aug 25 13:37:23 2002
From: pearu at scipy.org (Pearu)
Date: Sun, 25 Aug 2002 12:37:23 -0500 (CDT)
Subject: [SciPy-dev] scipy / f2py / debian
In-Reply-To: <20020825152648.GA1135@nyongwa.montreal.qc.ca>
Message-ID:

On Sun, 25 Aug 2002, Steve M. Robbins wrote:

> On Sun, Aug 25, 2002 at 06:30:58AM -0500, Pearu wrote:
>
> > Yes, you are right. f2py and scipy_distutils are developed very closely,
> > so usually there is no problem if either scipy or f2py installs
> > scipy_distutils, just make sure that you have the latest scipy_distutils
> > from scipy CVS. I also use symlink from scipy to f2py2e directory while
> > working with the codes. However, if you really want to stop f2py to
> > re-installing, then you must edit f2py2e/setup.py.
> > This is fine when working with both sources from CVS, but how about > the binary packages of f2py and scipy? > > One of the other hats I wear is "Debian developer", and I'm interested > in getting a package of scipy in Debian. [And f2py, since scipy > requires it to build.] Joe Reinhardt has made some experimental > packages, but they have not yet been uploaded to Debian proper. > > It is possible to have the f2py and scipy packages both install > scipy_distutils, but this is a nuisance. I would be worried that at > some point the packages' copies get out of sync, and that installing > "f2py" would cause "scipy" to stop working, or vice-versa. > > I can think of two ways to uncouple the two packages. One is to > have f2py install/use the distutils under a different name, e.g. > f2py_distutils. That way makes little sense, since scipy_distutils would then hardly be used in the scipy building process, and the required patch to f2py would be non-trivial. > The other way is to split out scipy_distutils into > a separately-installable package that would be a prerequisite for > both scipy and f2py. That makes more sense. Both scipy and f2py setup.py scripts can be easily patched to not install scipy_distutils, and scipy_distutils can be installed as a separate package. > I realise that either solution causes work for you, and that binary > packaging isn't your primary concern at the moment. However, I thought > I'd test the waters to see whether decoupling the packages is at all > feasible. Perhaps there are other options that I'm overlooking. Note that the binary package of scipy does not use scipy_distutils. So, once scipy is built, it does not need scipy_distutils anymore. Therefore installing "f2py" with either older or newer scipy_distutils cannot affect how "scipy" works. The reverse, however, is possible, especially when one installs scipy with an older scipy_distutils. Actually, now I don't see why scipy should install scipy_distutils.
I would suggest removing scipy_distutils from the separate_packages list in scipy/setup.py to avoid possible conflicts. There might also be a third way. Both scipy and f2py setup.py scripts can compare the versions of scipy_distutils they carry against what is already installed. Only if the installed scipy_distutils is older does either package re-install scipy_distutils. Hmm, again, if one already has f2py installed but scipy installs the newer scipy_distutils then there is a possibility that the f2py installation will be broken. But not vice versa. So, my conclusion is that scipy should not install scipy_distutils. Other thoughts? Pearu From steven.robbins at videotron.ca Sun Aug 25 14:00:33 2002 From: steven.robbins at videotron.ca (Steve M. Robbins) Date: Sun, 25 Aug 2002 14:00:33 -0400 Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: <20020825152648.GA1135@nyongwa.montreal.qc.ca> References: <20020825005834.GD20451@nyongwa.montreal.qc.ca> <20020825152648.GA1135@nyongwa.montreal.qc.ca> Message-ID: <20020825180033.GA1552@nyongwa.montreal.qc.ca> On Sun, Aug 25, 2002 at 11:26:48AM -0400, Steve M. Robbins wrote: > Hello Pearu, > > Thanks so much for the f2py patches. All the tests/f77 are now "ok". > I've installed the new f2py and am now rebuilding scipy to see if > some of the latter's problems go away ... Sadly, they did not. In fact, I now have one EXTRA failure :-( ====================================================================== FAIL: check_normal (test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/unstable/lib/python2.1/site-packages/scipy/stats/tests/test_morestats.py", line 46, in check_normal assert(scipy.all(A < crit[-2:])) AssertionError ====================================================================== I'm going to have a closer look at these failures now. Any advice guiding my search would be greatly appreciated!
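The version comparison sketched in Pearu's message above could look something like the following. This is only an illustrative sketch: the helper names (`version_tuple`, `needs_install`) are invented here, and a real setup.py would additionally have to locate the installed scipy_distutils and read its version string.

```python
def version_tuple(v):
    # '0.2.1' -> (0, 2, 1); tuple comparison then orders versions correctly,
    # including cases like '0.2.10' > '0.2.9' that string comparison gets wrong
    return tuple(int(part) for part in v.split('.'))

def needs_install(bundled, installed):
    """Re-install the bundled scipy_distutils only if the installed copy
    is missing or older than the bundled one."""
    if installed is None:       # nothing installed yet
        return True
    return version_tuple(bundled) > version_tuple(installed)

print(needs_install('0.2.2', '0.2.1'))  # True: bundled copy is newer
print(needs_install('0.2.1', '0.2.2'))  # False: keep the newer installed copy
```

With this check in both setup.py scripts, whichever of scipy or f2py is installed last can never silently downgrade the shared scipy_distutils.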
Regards, -Steve From pearu at scipy.org Sun Aug 25 14:32:27 2002 From: pearu at scipy.org (Pearu) Date: Sun, 25 Aug 2002 13:32:27 -0500 (CDT) Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: <20020825180033.GA1552@nyongwa.montreal.qc.ca> Message-ID: On Sun, 25 Aug 2002, Steve M. Robbins wrote: > Sadly, they did not. In fact, I now have one EXTRA failure :-( > > ====================================================================== > FAIL: check_normal (test_morestats.test_anderson) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/local/unstable/lib/python2.1/site-packages/scipy/stats/tests/test_morestats.py", line 46, in check_normal > assert(scipy.all(A < crit[-2:])) > AssertionError > ====================================================================== > > I'm going to have a closer look at these failures now. > Any advice guiding my search would be greatly appreciated! It seems that tests in the stats module sometimes just fail. When I repeatedly hit >>> scipy.stats.test(10) then the chance that some of the tests will fail is about 1/20. I am just ignoring these failures as they may be due to too strict conditions for estimating the success of the statistical methods in stats. Sometimes the random data for testing functions is not very favorable... Pearu From eric at enthought.com Sun Aug 25 23:09:41 2002 From: eric at enthought.com (eric jones) Date: Sun, 25 Aug 2002 22:09:41 -0500 Subject: [SciPy-dev] scipy / f2py / debian In-Reply-To: Message-ID: <000401c24cae$052dbfe0$777ba8c0@ericlaptop> > On Sun, 25 Aug 2002, Steve M. Robbins wrote: > > > On Sun, Aug 25, 2002 at 06:30:58AM -0500, Pearu wrote: > > > > > Yes, you are right. f2py and scipy_distutils are developed very > closely, > > > so usually there is no problem if either scipy or f2py installs > > > scipy_distutils, just make sure that you have the latest > scipy_distutils > > > from scipy CVS.
I also use a symlink from scipy to f2py2e directory > while > > > working with the codes. However, if you really want to stop f2py from > > > re-installing scipy_distutils, then you must edit f2py2e/setup.py. > > > > This is fine when working with both sources from CVS, but how about > > the binary packages of f2py and scipy? > > > > One of the other hats I wear is "Debian developer", and I'm interested > > in getting a package of scipy in Debian. [And f2py, since scipy > > requires it to build.] Joe Reinhardt has made some experimental > > packages, but they have not yet been uploaded to Debian proper. > > > > It is possible to have the f2py and scipy packages both install > > scipy_distutils, but this is a nuisance. I would be worried that at > > some point the packages' copies get out of sync, and that installing > > "f2py" would cause "scipy" to stop working, or vice-versa. > > > > I can think of two ways to uncouple the two packages. One is to > > have f2py install/use the distutils under a different name, e.g. > > f2py_distutils. > > That way makes little sense, since scipy_distutils would then hardly be used > in the scipy building process, and the required patch to f2py would be > non-trivial. > > > The other way is to split out scipy_distutils into > > a separately-installable package that would be a prerequisite for > > both scipy and f2py. > > That makes more sense. Both scipy and f2py setup.py scripts can be easily > patched to not install scipy_distutils, and scipy_distutils can be > installed as a separate package. Packaging issues are always difficult to resolve. The best solution is actually for scipy_distutils to be rolled into Python's distutils. That is what I would like to see happen, but I don't think its inclusion is extremely likely. We broached the subject once on the distutils sig, and the only response besides mine and Pearu's was from Paul Dubois saying it was a bad idea. I disagree but don't have the time to push this battle hard.
I think it would take quite a bit of work (convincing and coding) to get it into the core. If someone wants to champion this issue, I'm all for it. > > > I realise that either solution causes work for you, and that binary > > packaging isn't your primary concern at the moment. However, I thought > > I'd test the waters to see whether decoupling the packages is at all > > feasible. Perhaps there are other options that I'm overlooking. > > Note that the binary package of scipy does not use scipy_distutils. So, > once scipy is built, it does not need scipy_distutils anymore. > Therefore installing "f2py" with either older or newer scipy_distutils > cannot affect how "scipy" works. The reverse, however, is possible, > especially when one installs scipy with an older scipy_distutils. > > Actually, now I don't see why scipy should install scipy_distutils. > I would suggest removing scipy_distutils from the separate_packages list > in scipy/setup.py to avoid possible conflicts. > > There might also be a third way. Both scipy and f2py setup.py scripts > can compare the versions of scipy_distutils they carry against what is > already installed. Only if the installed scipy_distutils is older does > either package re-install scipy_distutils. > Hmm, again, if one already has f2py installed but scipy installs the > newer scipy_distutils then there is a possibility that the f2py installation > will be broken. But not vice versa. > So, my conclusion is that scipy should not install scipy_distutils. > > Other thoughts? I actually use scipy_distutils for quite a bit of my packaging needs outside of scipy, even if the package doesn't have fortran code in it. I don't want our users to have to install another package (other than scipy) to install these "secondary" packages. So, in most cases, I'd like scipy to continue to install scipy_distutils. On the Debian front, I am not well acquainted with Debian's packaging process other than knowing that it is very good.
If scipy_distutils is separated out, is it possible to have a package automatically install its dependencies from some central repository? If so, packaging scipy_distutils separately in the binaries for this platform would make sense. I think the packaging process will evolve for scipy, and it is a balance between keeping the installation simple and handling cases like this. I view scipy as a "standard library" for scientific computing analogous to python's standard library. As such, I tend toward the "install everything together" end of the spectrum, but am still forming opinions on the subject. For now, though, I want scipy_distutils to be installed by the scipy setup.py script. eric From steven.robbins at videotron.ca Sun Aug 25 23:19:37 2002 From: steven.robbins at videotron.ca (Steve M. Robbins) Date: Sun, 25 Aug 2002 23:19:37 -0400 Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: References: <20020825180033.GA1552@nyongwa.montreal.qc.ca> Message-ID: <20020826031937.GC1552@nyongwa.montreal.qc.ca> On Sun, Aug 25, 2002 at 01:32:27PM -0500, Pearu wrote: > I am just ignoring these failures as they may be due to too strict > conditions for estimating the success of the statistical methods in stats. > Sometimes the random data for testing functions is not very favorable... Interesting. Okay, so I'll ignore the stats failures and the two failures due to no "cblas" module. That leaves me with three failures to consider.
====================================================================== FAIL: check_asum (test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 59, in check_asum assert_almost_equal(f([3,-4,5]),12) File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: 12 ACTUAL: 0.0 The failure comes from sasum() -- the single precision version. The code using dasum() works OK. ====================================================================== FAIL: check_dot (test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 66, in check_dot assert_almost_equal(f([3,-4,5],[2,5,1]),-9) File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: -9 ACTUAL: 0.0 Same pattern: sdot() fails, but ddot() works. 
====================================================================== FAIL: check_nrm2 (test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_nrm2 assert_almost_equal(f([3,-4,5]),math.sqrt(50)) File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: 7.07106781187 ACTUAL: 3.73560645334e+270 Same again: snrm2() fails but dnrm2() works. I figure there is something systematically wrong with the single precision wrappers. But I don't know what it could be --- some of them appear to work: saxpy, sscal, isamax. I tried a small test case in the style of f2py's tests/f77 for a "sasum"-like function (see below). It works fine. However, in this test case, I just use the default f2py-generated signature file. It turns out to be quite different from the signature for sasum() in linalg/fblas.pyf. For one thing, the f2py-generated files create a C wrapper in which "f2py_func" returns void and has an extra "float*" parameter (presumably to return the value) while the fblas.pyf version returns float. But I'm way out of my depth now. Is there some strange linker problem with "extern float" functions? 
-Steve

__usage__ = """
Run: python test_asum.py []
Examples:
  python test_asum.py --fcompiler=Gnu --no-wrap-functions
  python test_asum.py --quiet
"""

import f2py2e
import sys
f2py_opts = ' '.join(sys.argv[1:])

try:
    import ext_test_asum
except ImportError:
    assert not f2py2e.compile('''\
      function t0(n,sx)
      integer n
      real sx(*),stemp
      stemp = 0e0
      do 10 i = 1,n,1
         stemp = stemp + abs(sx(i))
 10   continue
      t0 = stemp
      end
      function t4(n,sx)
      integer n
      real*4 sx(*),stemp
      stemp = 0e0
      do 20 i = 1,n,1
         stemp = stemp + abs(sx(i))
 20   continue
      t4 = stemp
      end
''','ext_test_asum',f2py_opts,source_fn='test_asum.f')

from ext_test_asum import t0,t4
from Numeric import array

def runtest(t):
    if t in [t0,t4]:
        err = 1e-5
    else:
        err = 0.0
    print t(3,[1,-2,3])
    assert t(3,[3,-4,5]) == 12

for t in [t0,t4]:
    runtest(t)
print 'ok'

From pearu at scipy.org Mon Aug 26 04:22:19 2002 From: pearu at scipy.org (Pearu) Date: Mon, 26 Aug 2002 03:22:19 -0500 (CDT) Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: <20020826031937.GC1552@nyongwa.montreal.qc.ca> Message-ID: On Sun, 25 Aug 2002, Steve M. Robbins wrote:
> > > ====================================================================== > FAIL: check_dot (test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 66, in check_dot > assert_almost_equal(f([3,-4,5],[2,5,1]),-9) > File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > DESIRED: -9 > ACTUAL: 0.0 > > Same pattern: sdot() fails, but ddot() works. > > > ====================================================================== > FAIL: check_nrm2 (test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/local/unstable/lib/python2.1/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_nrm2 > assert_almost_equal(f([3,-4,5]),math.sqrt(50)) > File "/usr/local/unstable/lib/python2.1/site-packages/scipy_base/testing.py", line 283, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > DESIRED: 7.07106781187 > ACTUAL: 3.73560645334e+270 > > Same again: snrm2() fails but dnrm2() works. > > > I figure there is something systematically wrong with the single > precision wrappers. But I don't know what it could be --- some > of them appear to work: saxpy, sscal, isamax. The wrong things happen only with functions that are supposed to return C float or Fortran real. > I tried a small test case in the style of f2py's tests/f77 for a > "sasum"-like function (see below). It works fine. However, in this > test case, I just use the default f2py-generated signature file. It > turns out to be quite different from the signature for sasum() in > linalg/fblas.pyf.
For one thing, the f2py-generated files create a C > wrapper in which "f2py_func" returns void and has an extra "float*" > parameter (presumably to return the value) while the fblas.pyf version > returns float. The difference is that we tried to avoid intermediate Fortran wrappers for blas interfaces. As you already found, by default f2py generates these wrappers for Fortran functions for maximum portability and compiler independence. These wrappers are crucial for Fortran functions returning complex or character types but not so crucial for those returning integer or real types. But now your case shows that these wrappers are also needed for real*4 types. Unless we figure out something else (see below), I'll fix this issue later by letting scipy generate these auxiliary wrappers. > But I'm way out of my depth now. Is there some strange linker > problem with "extern float" functions? I have successfully (without any errors except from weave) tested scipy on the following platform:

bash$ python -c 'import os,sys;print os.name,sys.platform'
posix irix646
bash$ uname -a
IRIX64 ape 6.5 07201608 IP30
bash$ python -c 'import sys;print sys.version'
2.1.1 (#1, Oct 21 2001, 16:10:39) [C]
bash$ python -c 'import Numeric;print Numeric.__version__'
20.2.1
bash$ cc -version
MIPSpro Compilers: Version 7.30
bash$ f77 -version
MIPSpro Compilers: Version 7.30
bash$

where both LAPACK and BLAS are built by scipy, along the lines of the following:

cd scipy
LAPACK_SRC=/path/to/src/lapack BLAS_SRC=/path/to/src/blas \
python setup.py build

Do you see any difference with your case? Did you build LAPACK/BLAS yourself (then how? compiler flags?) or did you let scipy build them as above? Btw, what is the output of python scipy_distutils/system_info.py in your case (with all environment variables like LAPACK_SRC, BLAS_SRC, etc. set)? Do you have any system blas/lapack installed?
I am asking for this information because it is possible that your Fortran compiler assumed that Fortran "real" is "real*8" but f2py assumes "real*4". See if you can find any relevant flags in your compiler manual. Pearu From steven.robbins at videotron.ca Mon Aug 26 08:45:36 2002 From: steven.robbins at videotron.ca (Steve M. Robbins) Date: Mon, 26 Aug 2002 08:45:36 -0400 Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: References: <20020826031937.GC1552@nyongwa.montreal.qc.ca> Message-ID: <20020826124535.GD1552@nyongwa.montreal.qc.ca> On Mon, Aug 26, 2002 at 03:22:19AM -0500, Pearu wrote:

> I have successfully (without any errors except from weave) tested scipy on
> the following platform:
>
> bash$ python -c 'import os,sys;print os.name,sys.platform'
> posix irix646

ditto

> bash$ uname -a
> IRIX64 ape 6.5 07201608 IP30

IRIX64 wart 6.5 10100655 IP27 mips

> bash$ python -c 'import sys;print sys.version'
> 2.1.1 (#1, Oct 21 2001, 16:10:39) [C]

2.1.3 (#3, Aug 24 2002, 13:12:16) [C]

> bash$ python -c 'import Numeric;print Numeric.__version__'
> 20.2.1

21.0

> bash$ cc -version
> MIPSpro Compilers: Version 7.30
> bash$ f77 -version
> MIPSpro Compilers: Version 7.30
> bash$

ditto, except scipy used GNU fortran:

swmgr at wart{f77}g77 --version
GNU Fortran (GCC 3.2) 3.2 20020814 (release)

> where both LAPACK and BLAS are built by scipy, along the lines of the following:
>
> cd scipy
> LAPACK_SRC=/path/to/src/lapack BLAS_SRC=/path/to/src/blas \
> python setup.py build
>
> Do you see any difference with your case?

I have a slightly newer python, slightly newer Numeric, and used the GNU Fortran compiler.

> Did you build LAPACK/BLAS
> yourself (then how? compiler flags?) or did you let scipy build them
> as above? Btw, what is the output of

I let scipy build LAPACK and BLAS by editing site.cfg.
> python scipy_distutils/system_info.py

atlas_info: NOT AVAILABLE
blas_info: NOT AVAILABLE
blas_src_info: FOUND:
sources = ['../../numerical/blas/caxpy.f', '../../numerical/blas/cssca ... ... /../numerical/blas/zsymm.f', '../../numerical/blas/ztrsm.f']
dfftw_info: NOT AVAILABLE
dfftw_threads_info: NOT AVAILABLE
djbfft_info: NOT AVAILABLE
fftw_info: NOT AVAILABLE
fftw_threads_info: NOT AVAILABLE
lapack_info: NOT AVAILABLE
lapack_src_info: FOUND:
sources = ['../../numerical/LAPACK/SRC/sbdsdc.f', '../../numerical/LAP ... ... LAPACK/SRC/izmax1.f', '../../numerical/LAPACK/SRC/dzsum1.f']
sfftw_info: NOT AVAILABLE
sfftw_threads_info: NOT AVAILABLE
x11_info: NOT AVAILABLE

> in your case (with all environment variables like LAPACK_SRC, BLAS_SRC, > etc. set)? Do you have any system blas/lapack installed? No lapack, but there is /usr/lib/libblas. Unfortunately, it is an o32 binary so I had to trick scipy into NOT using it with

[blas]
# dummy value to force building from blas_src
blas_libs = nonexistant

in site.cfg. > I am asking for this information because it is possible that your Fortran > compiler assumed that Fortran "real" is "real*8" but f2py assumes > "real*4". See if you can find any relevant flags in your compiler manual. As far as I can tell, GNU Fortran treats "real" as "real(kind=1)". The latter is defined to be the same size as a float (4 bytes). However, there is an interesting note in the section on interoperating with C/C++ http://gcc.gnu.org/onlinedocs/gcc-3.2/g77/Interoperating-with-C-and-C--.html A simple and foolproof way to write g77-callable C routines--e.g. to interface with an existing library--is to write a file (named, for example, fred.f) of dummy Fortran skeletons comprising just the declaration of the routine(s) and dummy arguments plus END statements.
(There are some errors otherwise commonly made in generating C interfaces with f2c conventions, such as not using doublereal as the return type of a REAL FUNCTION.) f2c also can help with calling Fortran from C, using its -P option to generate C prototypes appropriate for calling the Fortran. If the Fortran code containing any routines to be called from C is in file joe.f, use the command f2c -P joe.f to generate the file joe.P containing prototype information. #include this in the C which has to call the Fortran routines to make sure you get it right. Using the following input

      real function quibit(x)
      real x
      quibit = x
      end

I ran "f2c -P" to find the C prototype, and it came back as

extern E_f quibit_(real *x);

Maybe that's a clue? [I'm afraid, though, I'm pretty clueless when it comes to Fortran] Regards, -Steve From pearu at scipy.org Mon Aug 26 09:57:32 2002 From: pearu at scipy.org (Pearu) Date: Mon, 26 Aug 2002 08:57:32 -0500 (CDT) Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: <20020826124535.GD1552@nyongwa.montreal.qc.ca> Message-ID: On Mon, 26 Aug 2002, Steve M. Robbins wrote: > > bash$ cc -version > > MIPSpro Compilers: Version 7.30 > > bash$ f77 -version > > MIPSpro Compilers: Version 7.30 > > bash$ > > ditto, except scipy used GNU fortran: > > swmgr at wart{f77}g77 --version > GNU Fortran (GCC 3.2) 3.2 20020814 (release) Is there any reason why you want to use g77 and not the native f77? Mixing compilers is like walking on thin ice. > However, there is an interesting note in the section on interoperating > with C/C++ > > http://gcc.gnu.org/onlinedocs/gcc-3.2/g77/Interoperating-with-C-and-C--.html > > A simple and foolproof way to write g77-callable C > routines--e.g. to interface with an existing library--is to write > a file (named, for example, fred.f) of dummy Fortran skeletons > comprising just the declaration of the routine(s) and dummy > arguments plus END statements.
Then run f2c on file fred.f to produce fred.c into which you can edit useful code, confident the calling sequence is correct, at least. (There are some errors otherwise commonly made in generating C interfaces with f2c conventions, such as not using doublereal as the return type of a REAL FUNCTION.) I find the last note in parentheses very interesting; it may explain why I don't get these errors while using f77. > I ran "f2c -P" to find the C prototype, and it came back as > > extern E_f quibit_(real *x); > > Maybe that's a clue? I think it is a good clue, as E_f is defined in f2c.h as

typedef double doublereal;
typedef doublereal E_f;

Could you try out the following scripts:

#--------------- test.py
import f2py2e
import sys
f2py_opts = ' -Df_func=F_FUNC '+' '.join(sys.argv[1:])
f2py2e.compile('''
      function t0(value)
      real value
cf2py real*8 t0
      real t0
cf2py fortranname F_FUNC(t0,T0)
cf2py intent(c) t0
cf2py callstatement t0_return_value = (float)(*f2py_func)(&value)
cf2py callprotoargument float*
      t0 = value
      end
''','t1',f2py_opts,source_fn='t1.f')
import t1
print t1.t0(9)
#---------------

and then

#------------- test2.py
import f2py2e
import sys
f2py_opts = ' -Df_func=F_FUNC '+' '.join(sys.argv[1:])
f2py2e.compile('''
      function t0(value)
      real value
      real t0
cf2py fortranname F_FUNC(t0,T0)
cf2py intent(c) t0
cf2py callstatement t0_return_value = (*f2py_func)(&value)
cf2py callprotoargument float*
      t0 = value
      end
''','t2',f2py_opts,source_fn='t2.f')
import t2
print t2.t0(9)
#-------------

I hope that the first one prints `9' and the second one prints some nonsense.
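The nonsense Pearu expects can be illustrated without any Fortran compiler at all. The following sketch (pure illustration, not f2py code) packs a 4-byte REAL result and then re-reads the same bytes under the 8-byte doublereal return convention that the f2c note describes; the `leftover` bytes are made up and stand in for whatever garbage happens to sit past the 4-byte value:

```python
import struct
import math

value = math.sqrt(50.0)            # what snrm2([3,-4,5]) should return

# The Fortran callee stores a 4-byte IEEE single into its return slot.
single_bits = struct.pack('<f', value)

# A caller compiled with f2c conventions reads an 8-byte double from that
# slot; the upper 4 bytes are arbitrary memory contents (invented here).
leftover = b'\x12\x34\x56\x77'
misread, = struct.unpack('<d', single_bits + leftover)

print(misread)   # a huge, meaningless number instead of 7.0710678...
```

This is exactly the shape of the failure reported earlier in the thread, where snrm2 returned 3.73560645334e+270 instead of sqrt(50).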
Pearu From oliphant.travis at ieee.org Mon Aug 26 11:39:41 2002 From: oliphant.travis at ieee.org (Travis Oliphant) Date: 26 Aug 2002 09:39:41 -0600 Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: <20020825180033.GA1552@nyongwa.montreal.qc.ca> References: <20020825005834.GD20451@nyongwa.montreal.qc.ca> <20020825152648.GA1135@nyongwa.montreal.qc.ca> <20020825180033.GA1552@nyongwa.montreal.qc.ca> Message-ID: <1030376390.32337.1.camel@travis> On Sun, 2002-08-25 at 12:00, Steve M. Robbins wrote: > On Sun, Aug 25, 2002 at 11:26:48AM -0400, Steve M. Robbins wrote: > > Hello Pearu, > > > > Thanks so much for the f2py patches. All the tests/f77 are now "ok". > > I've installed the new f2py and am now rebuilding scipy to see if > > some of the latter's problems go away ... > > Sadly, they did not. In fact, I now have one EXTRA failure :-( > > ====================================================================== > FAIL: check_normal (test_morestats.test_anderson) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/local/unstable/lib/python2.1/site-packages/scipy/stats/tests/test_morestats.py", line 46, in check_normal > assert(scipy.all(A < crit[-2:])) > AssertionError > ====================================================================== > These "errors" (and a similar one in the shapiro test) come about because of real statistical tests which can sometimes fail even though the code is working correctly. I'm not sure what to do about this. Perhaps a warning could be printed showing the results if the test fails rather than raising an error. -Travis From steven.robbins at videotron.ca Mon Aug 26 13:10:55 2002 From: steven.robbins at videotron.ca (Steve M. 
Robbins) Date: Mon, 26 Aug 2002 13:10:55 -0400 Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: References: <20020826124535.GD1552@nyongwa.montreal.qc.ca> Message-ID: <20020826171054.GA24841@nyongwa.montreal.qc.ca> On Mon, Aug 26, 2002 at 08:57:32AM -0500, Pearu wrote: > > On Mon, 26 Aug 2002, Steve M. Robbins wrote: > > > > bash$ cc -version > > > MIPSpro Compilers: Version 7.30 > > > bash$ f77 -version > > > MIPSpro Compilers: Version 7.30 > > > bash$ > > > > ditto, except scipy used GNU fortran: > > > > swmgr at wart{f77}g77 --version > > GNU Fortran (GCC 3.2) 3.2 20020814 (release) > > Is there any reason why you want to use g77 and not the native f77? No, not really. I built python using SGI cc, but naively took the defaults when building scipy. > Mixing compilers is like walking on thin ice. You're right; it was silly of me. I'll try to rebuild scipy using SGI f77. I thought it would be enough to just remove these lines from scipy/setup.py:

# force g77 for now
-from scipy_distutils.command import build_flib
-build_flib.all_compilers = [build_flib.gnu_fortran_compiler]

After doing so, I was able to build scipy using f77. But it didn't link properly, I suppose: >>> import scipy /home/bic/stever/irix-6/lib/python2.1/site-packages/scipy/linalg/lapack.py:24: UserWarning: exceptions.ImportError: No module named clapack warnings.warn(clapack.__doc__) exceptions.ImportError: No module named cblas Traceback (most recent call last): File "", line 1, in ? File "/usr/local/unstable/lib/python2.1/site-packages/scipy/__init__.py", line 62, in ? import optimize, integrate, signal, special, interpolate, cow, ga, cluster, weave File "/usr/local/unstable/lib/python2.1/site-packages/scipy/integrate/__init__.py", line 31, in ? from quadpack import * File "/usr/local/unstable/lib/python2.1/site-packages/scipy/integrate/quadpack.py", line 4, in ?
import _quadpack ImportError: 60347:python: rld: Fatal Error: unresolvable symbol in /home/bic/stever/irix-6/lib/python2.1/site-packages/scipy/integrate/_quadpack.so: ddot_ Pearu: are there other tricks needed to build with SGI cc & SGI f77? > Could you try out the following scripts: [ ... ] > I hope that the first one prints `9' and the second one prints some > nonsense. Yes, they do when using g77. But it is the other way around (first is nonsense, second prints '9') when using f77. That sounds like a nightmare to support. For me, I'm happy to build everything with the SGI compilers, once I figure out how to do it ... Thanks for the tips, -Steve From pearu at scipy.org Mon Aug 26 14:25:08 2002 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 26 Aug 2002 13:25:08 -0500 (CDT) Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: <20020826171054.GA24841@nyongwa.montreal.qc.ca> Message-ID: On Mon, 26 Aug 2002, Steve M. Robbins wrote: > I'll try to rebuild scipy using SGI f77. I thought it would be enough > to just remove these lines from scipy/setup.py: > > # force g77 for now > -from scipy_distutils.command import build_flib > -build_flib.all_compilers = [build_flib.gnu_fortran_compiler] Yes. > After doing so, I was able to build scipy using f77. But it > didn't link properly, I suppose: > > >>> import scipy > /home/bic/stever/irix-6/lib/python2.1/site-packages/scipy/linalg/lapack.py:24: UserWarning: exceptions.ImportError: No module named clapack > warnings.warn(clapack.__doc__) > exceptions.ImportError: No module named cblas > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/local/unstable/lib/python2.1/site-packages/scipy/__init__.py", line 62, in ? > import optimize, integrate, signal, special, interpolate, cow, ga, cluster, weave > File "/usr/local/unstable/lib/python2.1/site-packages/scipy/integrate/__init__.py", line 31, in ? 
> from quadpack import * > File "/usr/local/unstable/lib/python2.1/site-packages/scipy/integrate/quadpack.py", line 4, in ? > import _quadpack > ImportError: 60347:python: rld: Fatal Error: unresolvable symbol in /home/bic/stever/irix-6/lib/python2.1/site-packages/scipy/integrate/_quadpack.so: ddot_ > > Pearu: are there other tricks needed to build with SGI cc & SGI f77? I didn't use any additional tricks. When rebuilding, make sure that you also do rm -rf build Now I see that the integrate module uses atlas and does not check whether atlas is actually available. It should then also try plain blas libraries... I'll fix it later. As a quick workaround, you can copy the required (e.g. all) blas sources to the integrate/linpack_lite directory and rebuild. > > Could you try out the following scripts: > [ ... ] > > I hope that the first one prints `9' and the second one prints some > nonsense. > > Yes, they do when using g77. But it is the other way around (first is > nonsense, second prints '9') when using f77. That sounds like a > nightmare to support. Indeed. I need to think about this more. Right now I don't see a way for f2py to know how external libraries were built (in particular, which compilers were used), even though that is the information from which the corresponding wrapper must be constructed. However, when using Fortran wrappers around Fortran functions, there should be no issue at all -- calling Fortran subroutines from C is not as compiler dependent as calling Fortran functions.
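The check_normal failures discussed earlier in this thread are a base-rate effect: a correct test that asserts a statistic stays below its 5% critical values will, by construction, still trip about one run in twenty on perfectly good data. Here is a stdlib-only sketch of that rate, using a simple z-test as a hypothetical stand-in for the Anderson-Darling check (the test and thresholds below are illustrative, not scipy's actual code):

```python
import math
import random

random.seed(12345)

def mean_test_passes(sample):
    # Two-sided z-test at the 5% level that the true mean is zero;
    # a hypothetical stand-in for the anderson() assertion.
    z = sum(sample) / math.sqrt(len(sample))  # ~ N(0, 1) under H0
    return abs(z) < 1.96

trials = 2000
failures = sum(
    not mean_test_passes([random.gauss(0.0, 1.0) for _ in range(50)])
    for _ in range(trials)
)
rate = failures / float(trials)
print('false-failure rate on correct data: %.3f' % rate)  # close to 0.05
```

The data here really are drawn from the hypothesized distribution, yet the check fails roughly 5% of the time, which is why a test suite that runs such assertions once per run will error out sporadically.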
Pearu From pearu at scipy.org Mon Aug 26 14:37:07 2002 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 26 Aug 2002 13:37:07 -0500 (CDT) Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: <1030376390.32337.1.camel@travis> Message-ID: On 26 Aug 2002, Travis Oliphant wrote: > > ====================================================================== > > FAIL: check_normal (test_morestats.test_anderson) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/usr/local/unstable/lib/python2.1/site-packages/scipy/stats/tests/test_morestats.py", line 46, in check_normal > > assert(scipy.all(A < crit[-2:])) > > AssertionError > > ====================================================================== > > > > These "errors" (and a similar one in the shapiro test) come about > because of real statistical tests which can sometimes fail even though > the code is working correctly. I'm not sure what to do about this. > Perhaps a warning could be printed showing the results if the test fails > rather than raising an error. When I ran

>>> for i in range(200): scipy.stats.test(10)

and counted the number of "errors", the result was 19. So, roughly every 10th run one of the corresponding tests will fail. I would suggest the following fix:

def check_normal(self):
    msg = ''
    for i in range(5):
        x1 = scipy.stats.expon(size=50)
        x2 = scipy.stats.norm(size=50)
        A,crit,sig = scipy.stats.anderson(x1)
        if not (scipy.all(A > crit[:-1])):
            msg = 'scipy.all(A > crit[:-1])'
            continue # try again
        A,crit,sig = scipy.stats.anderson(x2)
        if not (scipy.all(A < crit[-2:])):
            msg = 'scipy.all(A < crit[-2:])'
            continue
        else:
            return # success
    raise AssertionError, msg + ' failed'

Pearu From steven.robbins at videotron.ca Mon Aug 26 14:41:21 2002 From: steven.robbins at videotron.ca (Steve M.
Robbins) Date: Mon, 26 Aug 2002 14:41:21 -0400 Subject: [SciPy-dev] Re: scipy / SGI MIPSpro compiler (part 3) In-Reply-To: References: <20020826171054.GA24841@nyongwa.montreal.qc.ca> Message-ID: <20020826184121.GA25693@nyongwa.montreal.qc.ca> On Mon, Aug 26, 2002 at 01:25:08PM -0500, Pearu Peterson wrote: > Now I see that integrate module uses atlas and it does not check if there > is no atlas available. It should then try also blas libraries... I'll fix > it later. Yeah, that's what screwed me up. I managed to kludge around in integrate/setup_integrate.py and get a working scipy. All tests pass!! (except the statistics ones) Thanks so much, Pearu! Now I can get on with the _real_ work ... ;-) -Steve P.S. For reference, here's what I did to setup_integrate. Now I'm really curious how I managed to link this yesterday (when using g77) with NO atlas libs ... Index: integrate/setup_integrate.py =================================================================== RCS file: /home/cvsroot/world/scipy/integrate/setup_integrate.py,v retrieving revision 1.10 diff -u -b -B -r1.10 setup_integrate.py --- integrate/setup_integrate.py 15 Apr 2002 02:55:17 -0000 1.10 +++ integrate/setup_integrate.py 26 Aug 2002 18:41:14 -0000 @@ -14,19 +14,24 @@ atlas_library_dirs = atlas_info.get('library_dirs',[]) atlas_libraries = atlas_info.get('libraries',[]) - blas_libraries = atlas_libraries + + blas_info = get_info('blas') + blas_library_dirs = blas_info.get('library_dirs',[]) + blas_libraries = blas_info.get('libraries',[]) f_libs = [] quadpack = glob(os.path.join(local_path,'quadpack','*.f')) f_libs.append(fortran_library_item(\ - 'quadpack',quadpack,libraries = ['linpack_lite','mach'])) + 'quadpack',quadpack, + libraries = ['linpack_lite','mach']+blas_libraries, + library_dirs = blas_library_dirs)) odepack = glob(os.path.join(local_path,'odepack','*.f')) f_libs.append(fortran_library_item(\ 'odepack',odepack, libraries = ['linpack_lite']+blas_libraries, - library_dirs = 
atlas_library_dirs)) + library_dirs = blas_library_dirs)) # should we try to weed through files and replace with calls to # LAPACK routines? From skip at pobox.com Mon Aug 26 14:47:57 2002 From: skip at pobox.com (Skip Montanaro) Date: Mon, 26 Aug 2002 13:47:57 -0500 Subject: [SciPy-dev] scipy / f2py / debian In-Reply-To: <000401c24cae$052dbfe0$777ba8c0@ericlaptop> References: <000401c24cae$052dbfe0$777ba8c0@ericlaptop> Message-ID: <15722.30685.207168.13177@12-248-11-90.client.attbi.com> eric> The best solution is actually for scipy_distutils to be rolled eric> into Python's distutils. That is what I would like to see happen, eric> but I don't think its inclusion is extremely likely. We broached the eric> subject once on the distutils sig, and the only response besides eric> mine and Pearu's was from Paul Dubois saying it was a bad idea. I eric> disagree but don't have the time to push this battle hard. I eric> think it would take quite a bit of work (convincing and coding) to eric> get it into the core. If someone wants to champion this issue, eric> I'm all for it. Eric, Can you recap the discussion for me (off-list if that's more appropriate)? I am new to this list but may be able to convince the python-dev powers that be that merging your changes into distutils would be worthwhile. -- Skip Montanaro skip at pobox.com consulting: http://manatee.mojam.com/~skip/resume.html From eric at enthought.com Mon Aug 26 16:39:38 2002 From: eric at enthought.com (eric jones) Date: Mon, 26 Aug 2002 15:39:38 -0500 Subject: [SciPy-dev] scipy / f2py / debian In-Reply-To: <15722.30685.207168.13177@12-248-11-90.client.attbi.com> Message-ID: <002a01c24d40$b2815020$777ba8c0@ericlaptop> Hey Skip, Here is the short thread on the topic. http://aspn.activestate.com/ASPN/Mail/Message/751769 Paul's point is that distutils inspects the Python makefile to determine what C compiler/settings to use when building extension modules.
When it comes to building Fortran based extension modules, there isn't any place to look for what compiler and flags should be used to build compatible modules. Scipy_distutils has taken the approach of trying to detect the available Fortran compiler on the system. The flags for each specific compiler are hard coded to generate Python compatible code. Pearu had already figured out the settings for a lot of compilers in his f2py makefiles, and use with scipy has refined them further. The hard coded flags aren't perfect, but they seem to get the job done in most cases. I think they'll eventually work in 95% of the cases and maybe even 100%. On the coding side, work is required to re-factor distutils. There are multiple places in scipy_distutils where a long function (100 lines or so) is overridden so that we can change a few lines in the middle. The original distutils should get re-factored to solve these issues. We might need to add a few keywords also (do we have a keyword for defining fortran command line options now?) Also, dependency issues between Fortran files and the resulting .so file are not calculated correctly because the Fortran files are always archived to a static library and then included in the link step using a -l flag. -l specified libraries are checked for their date in the dependency check. There are probably a number of other issues like this that need to be fixed before it gets to Python standard library quality. I guess the final issue is that the only group that will use Fortran support is the scientific community. The maintainers (who are unlikely to be Fortran geeks) may not want to add a mass of code that they aren't familiar with. Our community, of course, is willing to help maintain it, but it is still an issue.
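The detect-then-hard-code scheme eric describes can be sketched as a lookup keyed on the compiler's version banner. The table below is an invented illustration, not scipy_distutils' real compiler list; the two banners are the ones quoted earlier in this thread, and the flags are plausible examples only:

```python
# Illustrative only: map a Fortran compiler's version banner to
# hand-tuned flags for building Python-compatible extensions.
FLAG_TABLE = [
    ('gnu fortran', ['-Wall', '-fno-second-underscore', '-fPIC']),
    ('mipspro',     ['-n32', '-KPIC']),
]

def fortran_flags(version_banner):
    """Pick flags by sniffing the output of e.g. `f77 -version`."""
    banner = version_banner.lower()
    for needle, flags in FLAG_TABLE:
        if needle in banner:
            return flags
    raise ValueError('unknown Fortran compiler: %r' % version_banner)

print(fortran_flags('GNU Fortran (GCC 3.2) 3.2 20020814 (release)'))
print(fortran_flags('MIPSpro Compilers: Version 7.30'))
```

The weakness eric notes follows directly from this shape: every new compiler, version, or platform needs a new hand-maintained table entry, which is why the flags "aren't perfect" but get the job done in most cases.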
Regards, eric > -----Original Message----- > From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On > Behalf Of Skip Montanaro > Sent: Monday, August 26, 2002 1:48 PM > To: scipy-dev at scipy.net > Subject: RE: [SciPy-dev] scipy / f2py / debian > > > eric> The best solution is actually for scipy_distutils to be rolled > eric> into Python's distutils. That is what I would like to see > happen, > eric> but I don't think its inclusion is extremely. We broached the > eric> subject once on the distutils sig, and the only response besides > eric> mine and Pearu's was from Paul Dubois saying it was a bad idea. > I > eric> disagree but don't have the time to push this battle hard. I > eric> think it would take quite a bit of work (convincing and coding) > to > eric> get it into the core. If someone wants to champion this issue, > eric> I'm all for it. > > Eric, > > Can you recap the discussion for me (off-list if that's more appropriate)? > I am new to this list but may be able to convince the python-dev powers > that > be that merging your changes into distutils would be worthwhile. > > -- > Skip Montanaro > skip at pobox.com > consulting: http://manatee.mojam.com/~skip/resume.html > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From pearu at scipy.org Mon Aug 26 17:05:05 2002 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 26 Aug 2002 16:05:05 -0500 (CDT) Subject: [SciPy-dev] scipy / f2py / debian In-Reply-To: <002a01c24d40$b2815020$777ba8c0@ericlaptop> Message-ID: For the record, here is another thread on the subject of getting Fortran support to distutils http://mail.python.org/pipermail/distutils-sig/2002-June/002898.html with Paul being more positive. 
Pearu From eric at enthought.com Mon Aug 26 17:20:19 2002 From: eric at enthought.com (eric jones) Date: Mon, 26 Aug 2002 16:20:19 -0500 Subject: [SciPy-dev] scipy / f2py / debian In-Reply-To: Message-ID: <002d01c24d46$618e4b40$777ba8c0@ericlaptop> Very good! I guess I wasn't aware of that thread, but it is a lot more positive, and it sounds like multiple people are willing for this to happen. So, forget all that I said about it being extremely unlikely and upgrade me to a "cautiously hopeful." :-) Pearu, you've put the most work into scipy_distutils lately. What's your estimate of the work required to get this into the core? eric > -----Original Message----- > From: scipy-dev-admin at scipy.net [mailto:scipy-dev-admin at scipy.net] On > Behalf Of Pearu Peterson > Sent: Monday, August 26, 2002 4:05 PM > To: scipy-dev at scipy.net > Subject: RE: [SciPy-dev] scipy / f2py / debian > > > For the record, here is another thread on the subject of getting Fortran > support to distutils > > http://mail.python.org/pipermail/distutils-sig/2002-June/002898.html > > with Paul being more positive. > > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev From pearu at scipy.org Mon Aug 26 18:25:44 2002 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 26 Aug 2002 17:25:44 -0500 (CDT) Subject: [SciPy-dev] scipy / f2py / debian In-Reply-To: <002d01c24d46$618e4b40$777ba8c0@ericlaptop> Message-ID: On Mon, 26 Aug 2002, eric jones wrote: > Pearu, you've put the most work into scipy_distutils lately. What's your > estimate of the work required to get this into the core? I think you summarized the issues with distutils and Fortran support well. Making a first patch should not require too much work (maybe a full week should be enough).
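The 100-line-override problem eric mentioned earlier has a standard remedy that such a patch could apply: split the monolithic build method into small hooks (the template-method pattern), so a subclass overrides only the lines it actually changes. A toy sketch with invented class names, not real distutils commands:

```python
class BuildLib:
    """Driver command in the style of distutils' build_clib."""
    def build(self, sources):
        # the long driver lives here once, never copied by subclasses
        objects = [self.compile(src) for src in sources]
        return self.link(objects)

    # The variable parts live in small hooks instead of mid-function.
    def compile(self, src):
        return src + '.o'

    def link(self, objects):
        return 'lib(' + ','.join(objects) + ')'

class BuildFortranLib(BuildLib):
    """A Fortran flavor overrides only the hook it cares about."""
    def compile(self, src):
        return src + '.f.o'

print(BuildFortranLib().build(['quadpack', 'odepack']))
# -> lib(quadpack.f.o,odepack.f.o)
```

With this shape, a Fortran-aware command can reuse the whole driver and touch only the compile step, instead of pasting a 100-line method to change a few lines in the middle.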
I only think that the current build_flib.py should be split into two parts: 1) command/build_flib.py - with similar functionality to build_clib.py, but building the library from Fortran sources 2) fcompiler.py - Fortran compiler definitions, similar to ccompiler.py During the development of scipy_distutils I think we have learned a lot about distutils internals, but there are still some parts where scipy_distutils could take more advantage of the code in distutils. Of course, in many places simple patches to distutils would simplify that task considerably. I could start working with distutils again, but currently I can do that only in the background: I want to finish fftpack (btw, I am getting performance boosts of up to 5 times with fftpack2 :-) and my daily work needs to be done. Pearu From eric at enthought.com Mon Aug 26 18:39:00 2002 From: eric at enthought.com (eric jones) Date: Mon, 26 Aug 2002 17:39:00 -0500 Subject: [SciPy-dev] scipy / f2py / debian In-Reply-To: Message-ID: <003f01c24d51$62a119d0$777ba8c0@ericlaptop> > > Pearu, you've put the most work into scipy_distutils lately. What's your > > estimation of the work required to get this in the core. > > I think you summarized well the issues with distutils and Fortran support. > Making a first patch should not require too much work (maybe a full > week should be enough). I only think that the current build_flib.py > should be split into two parts: > 1) command/build_flib.py - with similar functionality to build_clib.py > but building the library from Fortran sources > 2) fcompiler.py - Fortran compiler definitions, similar to ccompiler.py I like this change. > During the development of scipy_distutils I think we have learned a lot > about distutils internals but there are still some parts where > scipy_distutils could take more advantage of the code in distutils. > Of course, in many places simple patches to distutils would simplify that
> I could start working with distutils again but currently I can do that > only in background: Ok. I wonder if we should shoot for 2.3 or 2.4? I don't know the timeline for such things. 2.4 might be more feasible given other commitments on everyone's time -- it would be out the middle of next year I think. > I want to finish fftpack (btw, I am getting > performance boosts upto 5 times with fftpack2:-) Very cool. > and my daily work needs to be done. Yeah, this stuff always seems to get in the way... :-) See ya, eric From skip at pobox.com Tue Aug 27 10:03:36 2002 From: skip at pobox.com (Skip Montanaro) Date: Tue, 27 Aug 2002 09:03:36 -0500 Subject: [SciPy-dev] scipy / f2py / debian In-Reply-To: <002a01c24d40$b2815020$777ba8c0@ericlaptop> References: <15722.30685.207168.13177@12-248-11-90.client.attbi.com> <002a01c24d40$b2815020$777ba8c0@ericlaptop> Message-ID: <15723.34488.216765.63212@12-248-11-90.client.attbi.com> eric> Here is the short thread on the topic. eric> http://aspn.activestate.com/ASPN/Mail/Message/751769 eric> Paul's point is that distutils inspects the Python makefile to eric> determine what C compiler/settings to use when building extension eric> modules.... Eric, Thanks for refreshing me on the details of the conversation we had about this last week. I read through this thread and the two referenced threads. I think Paul's original worry that distutils discovers C compiler info from Makefiles is a bit of a red herring. That's where it happens to be, and I guess for Fortran that's not the case. In my opinion distutils should be flexible in this regard. (The reason to look in Makefiles for this information is probably because that's where the configure script puts it. If configure could sniff out some Fortran information and stuff it in Makefiles, you could do the same with Fortran. But who wants to mess with configure? In most peoples' minds I suspect that's worse than messing with Texas. 
;-) I suggest incorporation of distutils support for Fortran be tackled in two, perhaps three, steps:

1. Refactor the scipy_distutils code the way you want it to avoid the 100-line cut-n-paste problem. Submit a patch to SF for this. Identify Fortran and f2py support as your target use case and try to give some ballpark figures for the effect the refactoring will have on adding this support (e.g., reductions in modified or duplicated lines of code).

2. Submit a patch for Fortran support which relies on the refactoring patch.

3. [Optional?] Add support for user-specified compiler flags. This may simply be a set of environment variables similar to Make's gazillion compile and link flags. If users are going to want to fiddle with optimization flags, this seems like the most reasonable approach, though.

I don't know how in-sync scipy_distutils is with CVS distutils, but I'd be happy to try and help sync things up. Skip From wagner.nils at vdi.de Thu Aug 29 15:59:51 2002 From: wagner.nils at vdi.de (My VDI Freemail) Date: Thu, 29 Aug 2002 21:59:51 +0200 Subject: [SciPy-dev] Reliability of linalg.eig for complex matrices A,B In-Reply-To: <20020829195659.20247.51462.Mailman@shaft> Message-ID: <20020829200732.841E33EAD2@www.scipy.com> Hi, I tried to solve a transcendental eigenvalue problem with scipy. However, it works more or less for r e a l arguments x, but for complex arguments I cannot observe convergence to any eigenvalue. I guess it's a problem of the eigenvalue solver. Please find attached my small example. Any ideas or suggestions ? Nils -------------- next part -------------- A non-text attachment was scrubbed...
Name: ram.py Type: application/octet-stream Size: 436 bytes Desc: not available URL: From Chuck.Harris at sdl.usu.edu Thu Aug 29 16:57:27 2002 From: Chuck.Harris at sdl.usu.edu (Chuck Harris) Date: Thu, 29 Aug 2002 14:57:27 -0600 Subject: [SciPy-dev] Reliability of linalg.eig for complex matrices A,B Message-ID: from scipy import * # # Transcendental eigenvalue problem # # A(x) y = 0 # You are looking for nontrivial vectors in the null space? This isn't an eigenvalue problem, really, since you already know the eigenvalue --- 0. necessary & sufficient condition: determinant(A(x)) = 0 which is a zero finding problem. Solutions then follow by reduction to row echelon form and back substitution. # # B(x) = -d A / d x # You doing linearization of an ODE? Or what? > > > Hi, > > I tried to solve a transcendental eigenvalue problem with scipy. > However It works more or less for r e a l arguments x, but > for complex arguments I cannot observe convergency to any > eigenvalue. I guess its a problem of the eigenvalue solver. > Please find attached my small example. > > Any ideas or suggestions ? > > Nils > Chuck
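Chuck's reduction (find an x with determinant(A(x)) = 0 by scalar zero finding, then back-substitute for the null vector) can be sketched on a toy real 2x2 example. The matrix below is invented for illustration, since Nils's ram.py attachment is not available, and this sketch stays on the real line where the approach is known to behave:

```python
import math

def det_a(x):
    # Toy transcendental pencil A(x) = [[sin x, 1], [1, 2]],
    # so det A(x) = 2*sin(x) - 1, which vanishes at x = pi/6.
    return 2.0 * math.sin(x) - 1.0

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection; f(lo) and f(hi) must bracket a sign change."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

x = bisect(det_a, 0.0, 1.0)   # root of det A(x), i.e. pi/6
# A(x) is now singular; back-substitution on the first row
# (sin(x)*y0 + 1*y1 = 0, picking y0 = 1) gives a null vector:
y = (1.0, -math.sin(x))
print(x, y)
```

Both rows of A(x) annihilate y at the computed root, which is the "nontrivial vector in the null space" Chuck describes. For complex x, as in Nils's report, the same determinant function is no longer real-valued, so a bracketing method like bisection does not apply and a complex root finder (or a different formulation) is needed.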