From dsdale24 at gmail.com Fri Aug 1 09:50:31 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Fri, 1 Aug 2008 09:50:31 -0400
Subject: [SciPy-dev] physical quantities: udunits?
In-Reply-To: <48924321.7060109@llnl.gov>
References: <200807291143.55141.dsdale24@gmail.com> <200807311615.43016.dsdale24@gmail.com> <48924321.7060109@llnl.gov>
Message-ID: <200808010950.31720.dsdale24@gmail.com>

Hi Charles,

On Thursday 31 July 2008 06:56:33 pm you wrote:
> Ok looks like our security update was a bit too strong :) can you try
> again ?

Thanks, I am able to grab the sources now.

I did a search on the web a few days ago to see if udunits can be used on Windows. The answer appeared to be no, but I tried installing your unidata python package on windows anyway.

There is no windows binary for Numeric and python-2.5, so I tried a first order workaround by changing the Numeric include in udunits_wrap.c to:

#include "numpy/arrayobject.h"

This required importing numpy in your setup script and adding numpy.get_include() to the list of include_dirs in your extension constructor. I also had to modify the path where udunits is installed, in both setup.py and udunits.py. (Moving udunits.dat into Lib would allow the library to be installed using distutils package_data, I think.)

I am happy to say that I was able to build the package with mingw, install it, and run the test script. I didn't see any problems (but I didn't really know what to look for).

Would you mind posting a link to your udunits2 package?

Based on your nice work here, and the appearance of windows compatibility, it seems like it shouldn't be too difficult to build a physical_quantities object subclassed from numpy.ndarray. Have you considered this possibility?

Regards,
Darren

From doutriaux1 at llnl.gov Fri Aug 1 10:28:29 2008
From: doutriaux1 at llnl.gov (Charles Doutriaux)
Date: Fri, 01 Aug 2008 07:28:29 -0700
Subject: [SciPy-dev] physical quantities: udunits?
In-Reply-To: <200808010950.31720.dsdale24@gmail.com>
References: <200807291143.55141.dsdale24@gmail.com> <200807311615.43016.dsdale24@gmail.com> <48924321.7060109@llnl.gov> <200808010950.31720.dsdale24@gmail.com>
Message-ID: <48931D8D.1080504@llnl.gov>

Hi Darren,

Here are some examples:

>>> a = unidata.udunits(1,'m')
>>> a.to('cm')
udunits(100.0,"cm")

You can also use:
a.known_units()
or
a.available_units()

Come to think of it, I should change it so it only shows units compatible with "a".

Anyway, sorry I pointed to the "trunk" version, I forgot it is still Numeric based. Our devel version is numpy based.

You can access it at:
svn export http://www-pcmdi.llnl.gov/svn/repository/cdat/branches/devel/Packages/unidata
user: guest
psswd: cdatdevel

It is indeed based on udunits (not udunits2). I should upgrade.

You're right, we could probably subclass numpy. Do you want to do it?

Honestly I don't think I'll have time in the next month or so.

C.

Darren Dale wrote:
> Hi Charles,
>
> On Thursday 31 July 2008 06:56:33 pm you wrote:
>> Ok looks like our security update was a bit too strong :) can you try
>> again ?
>
> Thanks, I am able to grab the sources now.
>
> I did a search on the web a few days ago to see if udunits can be used on Windows. The answer appeared to be no, but I tried installing your unidata python package on windows anyway.
>
> There is no windows binary for Numeric and python-2.5, so I tried a first order workaround by changing the Numeric include in udunits_wrap.c to:
>
> #include "numpy/arrayobject.h"
>
> This required importing numpy in your setup script and adding numpy.get_include() to the list of include_dirs in your extension constructor. I also had to modify the path where udunits is installed, in both setup.py and udunits.py. (Moving udunits.dat into Lib would allow the library to be installed using distutils package_data, I think.)
>
> I am happy to say that I was able to build the package with mingw, install it, and run the test script. I didn't see any problems (but I didn't really know what to look for).
>
> Would you mind posting a link to your udunits2 package?
>
> Based on your nice work here, and the appearance of windows compatibility, it seems like it shouldn't be too difficult to build a physical_quantities object subclassed from numpy.ndarray. Have you considered this possibility?
>
> Regards,
> Darren

From dsdale24 at gmail.com Fri Aug 1 10:53:36 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Fri, 1 Aug 2008 10:53:36 -0400
Subject: [SciPy-dev] physical quantities: udunits?
In-Reply-To: <48931D8D.1080504@llnl.gov>
References: <200807291143.55141.dsdale24@gmail.com> <200808010950.31720.dsdale24@gmail.com> <48931D8D.1080504@llnl.gov>
Message-ID: <200808011053.36472.dsdale24@gmail.com>

Hi Charles,

Yes, I think I would like to take a shot at this. I have been meaning to get back to the matplotlib documentation effort, but I really need this for work. I'll look into it this weekend.

Thanks,
Darren

On Friday 01 August 2008 10:28:29 am Charles Doutriaux wrote:
> Hi Darren,
>
> Here are some examples:
> >>> a = unidata.udunits(1,'m')
> >>> a.to('cm')
> udunits(100.0,"cm")
>
> You can also use:
> a.known_units()
> or
> a.available_units()
>
> Come to think of it, I should change it so it only shows units compatible with "a".
>
> Anyway, sorry I pointed to the "trunk" version, I forgot it is still Numeric based. Our devel version is numpy based.
>
> You can access it at:
> svn export http://www-pcmdi.llnl.gov/svn/repository/cdat/branches/devel/Packages/unidata
> user: guest
> psswd: cdatdevel
>
> It is indeed based on udunits (not udunits2). I should upgrade.
>
> You're right, we could probably subclass numpy. Do you want to do it?
>
> Honestly I don't think I'll have time in the next month or so.
>
> C.
>
> Darren Dale wrote:
> > Hi Charles,
> >
> > On Thursday 31 July 2008 06:56:33 pm you wrote:
> >> Ok looks like our security update was a bit too strong :) can you try
> >> again ?
> >
> > Thanks, I am able to grab the sources now.
> >
> > I did a search on the web a few days ago to see if udunits can be used on Windows. The answer appeared to be no, but I tried installing your unidata python package on windows anyway.
> >
> > There is no windows binary for Numeric and python-2.5, so I tried a first order workaround by changing the Numeric include in udunits_wrap.c to:
> >
> > #include "numpy/arrayobject.h"
> >
> > This required importing numpy in your setup script and adding numpy.get_include() to the list of include_dirs in your extension constructor. I also had to modify the path where udunits is installed, in both setup.py and udunits.py. (Moving udunits.dat into Lib would allow the library to be installed using distutils package_data, I think.)
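A rough sketch of the setup script change being described — the source file layout and module names here are assumptions, not the actual unidata package:

import numpy
from distutils.core import setup, Extension

# Build the wrapper against numpy's headers instead of Numeric's,
# and ship udunits.dat with the package instead of hard-coding a path.
setup(name='unidata',
      packages=['unidata'],
      package_data={'unidata': ['udunits.dat']},
      ext_modules=[Extension('unidata.udunits_wrap',
                             sources=['Src/udunits_wrap.c'],
                             include_dirs=[numpy.get_include()],
                             libraries=['udunits'])])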
> >
> > I am happy to say that I was able to build the package with mingw, install it, and run the test script. I didn't see any problems (but I didn't really know what to look for).
> >
> > Would you mind posting a link to your udunits2 package?
> >
> > Based on your nice work here, and the appearance of windows compatibility, it seems like it shouldn't be too difficult to build a physical_quantities object subclassed from numpy.ndarray. Have you considered this possibility?
> >
> > Regards,
> > Darren

From nwagner at iam.uni-stuttgart.de Sat Aug 2 05:56:17 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sat, 02 Aug 2008 11:56:17 +0200
Subject: [SciPy-dev] SciPy 0.7 release - pending tickets
Message-ID:

Hi all,

Ticket 704 can be closed.
http://scipy.org/scipy/scipy/ticket/704

The following tickets can be easily closed after an update of the docstring.
http://scipy.org/scipy/scipy/ticket/677
http://scipy.org/scipy/scipy/ticket/666

The functions read_array and write_array are deprecated. Is it reasonable to close ticket
http://scipy.org/scipy/scipy/ticket/568
in this context?

Ticket 626 can be closed. Works for me.
http://scipy.org/scipy/scipy/ticket/626

Nils

From nwagner at iam.uni-stuttgart.de Sat Aug 2 06:08:19 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sat, 02 Aug 2008 12:08:19 +0200
Subject: [SciPy-dev] SciPy 0.7 release - pending tickets
In-Reply-To:
References:
Message-ID:

On Sat, 02 Aug 2008 11:56:17 +0200 "Nils Wagner" wrote:
> Hi all,
>
> Ticket 704 can be closed.
> http://scipy.org/scipy/scipy/ticket/704
>
> The following tickets can be easily closed after an update of the docstring.
>
> http://scipy.org/scipy/scipy/ticket/677
> http://scipy.org/scipy/scipy/ticket/666
>
> The functions read_array and write_array are deprecated. Is it reasonable to close ticket
>
> http://scipy.org/scipy/scipy/ticket/568
>
> in this context?
>
> Ticket 626 can be closed. Works for me.
> http://scipy.org/scipy/scipy/ticket/626
>
> Nils
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

IMHO ticket 567 can be closed as well.
http://scipy.org/scipy/scipy/ticket/567

From dsdale24 at gmail.com Sat Aug 2 13:25:42 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Sat, 2 Aug 2008 13:25:42 -0400
Subject: [SciPy-dev] design of a physical quantities package: seeking comments
Message-ID: <200808021325.42976.dsdale24@gmail.com>

I have been thinking about how to handle physical quantities by subclassing ndarray and building on some of the ideas and code from Charles Doutriaux's python wrappers of udunits and Enthought's units package. I would like to share my current thinking, and would appreciate some feedback at these early stages so I can get the basic design right.

The proposed Quantity object would be an ndarray subclass with an additional attribute and property:

* Quantity has a private attribute, a Dimensions container, which contains zero or more Dimension objects, each having associated units (a string) and a power (a number). The container object would be equipped with the various __add__, __sub__, __mul__, etc.
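For concreteness, a minimal sketch of the bookkeeping such a container might do — the class and attribute names here are assumptions, not part of the proposal:

class Dimensions(object):
    """Maps unit symbols to powers, e.g. kg*m^2/s^2 -> {'kg': 1, 'm': 2, 's': -2}."""
    def __init__(self, powers):
        self.powers = dict(powers)
    def __mul__(self, other):
        # multiplication adds the powers of matching units
        combined = dict(self.powers)
        for unit, power in other.powers.items():
            combined[unit] = combined.get(unit, 0) + power
            if combined[unit] == 0:
                del combined[unit]  # the unit cancels out entirely
        return Dimensions(combined)
    def __add__(self, other):
        # addition requires commensurate dimensions
        if self.powers != other.powers:
            raise ValueError('dimensions are not commensurate')
        return Dimensions(self.powers)

In this sketch, Dimensions({'m': 1, 's': -1}) * Dimensions({'s': 1}) leaves just {'m': 1}, the seconds cancelling during the multiplication.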
Before performing the operations on the ndarray values, the operation would be performed with the Dimensions containers, either updating self's Dimensions and yielding the conversion factors required to scale the other quantity's values to perform the ndarray operation, or raising an exception because the two dimensionalities are not commensurate for the particular operation. I think this container approach is necessary in order to allow scaling of each Dimension's units individually, simplifying operations like 11 ft mA / ns * 1 hour = many ft mA.

* Quantity has a public units property, providing a view into the object's dimensions and the ability to change from one set of units to another. q.units would return the Dimensions instance, whose __repr__ would be dynamically constructed from each dimension's units and power attributes. The setter would have some limitations by design. For example if q has units of kg m / s^2 and you do q.units='ft', then q.units would return kg ft / s^2.

I think the Dimensions container may provide enough abstraction to handle more unusual operations if someone wanted to add them. Robert Kern suggested a few years back (see http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2538532) that a good physical quantities system should be able to handle operations like a long,lat position minus another one would yield a distance, but addition would not be supported. This functionality could be built into a subclass of the Dimensions container.

Comments and criticism welcome.

Darren

From peridot.faceted at gmail.com Sat Aug 2 16:23:43 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sat, 2 Aug 2008 16:23:43 -0400
Subject: [SciPy-dev] design of a physical quantities package: seeking comments
In-Reply-To: <200808021325.42976.dsdale24@gmail.com>
References: <200808021325.42976.dsdale24@gmail.com>
Message-ID:

2008/8/2 Darren Dale :
> I have been thinking about how to handle physical quantities by subclassing ndarray and building on some of the ideas and code from Charles Doutriaux's python wrappers of udunits and Enthought's units package.

This sounds like a very handy tool, but you want to be careful to keep it tractable.

> I would like to share my current thinking, and would appreciate some feedback at these early stages so I can get the basic design right.
>
> The proposed Quantity object would be an ndarray subclass with an additional attribute and property:
>
> * Quantity has a private attribute, a Dimensions container, which contains zero or more Dimension objects, each having associated units (a string) and a power (a number). The container object would be equipped with the various __add__, __sub__, __mul__, etc. Before performing the operations on the ndarray values, the operation would be performed with the Dimensions containers, either updating self's Dimensions and yielding the conversion factors required to scale the other quantity's values to perform the ndarray operation, or raising an exception because the two dimensionalities are not commensurate for the particular operation. I think this container approach is necessary in order to allow scaling of each Dimension's units individually, simplifying operations like 11 ft mA / ns * 1 hour = many ft mA.

I think it's a good idea to try to keep things in the units they are provided in; this should reduce the occurrences of unexpected overflows when (for example) cubing a distance in megaparsecs (put this in centimetres and use single precision and you might overflow).
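To make the overflow concrete, a quick check along these lines (an illustration with assumed values, not from the original message):

>>> import numpy
>>> d = numpy.float32(3.1e23)   # roughly 10 Mpc expressed in metres
>>> d**3                        # ~3e70 exceeds the float32 maximum (~3.4e38)
inf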
But this does mean you need to decide, when adding (say) a distance in feet to a distance in metres, which unit the result should be in.

Users will presumably also want some kind of unit normalization function that just converts its input to SI. You will also have to decide at what point simplifications occur - do ft/m immediately get converted as soon as they are produced? What about units like pc/cm^3 (actually in very common use in radio astronomy)? How do you preserve pc/m^3 without getting abominations like kg m^2 s^2/kg m s^2?

How are users going to specify units? Using the existing packages, I found it useful to make the units into variables:
kg = Unit("kg")
so that I could then do
wt = 10*kg
and the error checking would behave nicely.

> * Quantity has a public units property, providing a view into the object's dimensions and the ability to change from one set of units to another. q.units would return the Dimensions instance, whose __repr__ would be dynamically constructed from each dimension's units and power attributes. The setter would have some limitations by design. For example if q has units of kg m / s^2 and you do q.units='ft', then q.units would return kg ft / s^2.

Hmm. This kind of guessing is likely to trip people up. At least the default should be "convert to exactly the unit I specified". After all, the point of using units is that they catch many mathematical errors, and the sooner they are caught the better. There is something to be said for a "unit globbing" system ("convert all occurrences of metres to feet but leave everything else alone") and for conversion to predefined unit systems (MKS, CGS, "Imperial", metric with prefixes...) but I don't think it should be the default.

> I think the Dimensions container may provide enough abstraction to handle more unusual operations if someone wanted to add them. Robert Kern suggested a few years back (see http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2538532) that a good physical quantities system should be able to handle operations like a long,lat position minus another one would yield a distance, but addition would not be supported. This functionality could be built into a subclass of the Dimensions container.

I don't think lat/long, or even Fahrenheit/Celsius are a good idea. For one thing, it's a short step from there to general coordinate system conversion (what about UTM? ECEF? do you want great circle distances or direct line?), and then to conversion of tensor quantities between coordinate systems, and down that road lies madness. I think to be a tractable package this needs well-defined boundaries, and the place I'd put those boundaries is at multiplicative units. That's enough to be genuinely useful, and it's small enough to be doable.

> Comments and criticism welcome.

How do you plan to handle radians and degrees? Radians should really be no unit at all, but it would sometimes be nice to have them printed (and sometimes not).

On a related topic, how are you going to handle ufuncs? Addition and subtraction should require commensurable units, multiplication should multiply the units, and I think all other standard ufuncs should require something with no units. Well, except for mean, std, and var maybe. And "pow" is tricky. And, well, you see what I'm getting at.

What about user-defined functions? It's worth having some kind of decorator that enforces that a particular function acts on something with no units, and maybe enforces particular units.
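One hypothetical shape for such a decorator — the units attribute and function names are assumptions, not an existing API:

import numpy

def unitless(func):
    """Reject arguments that still carry units."""
    def wrapper(q, *args, **kwargs):
        if getattr(q, 'units', None):
            raise TypeError('%s requires a dimensionless argument' % func.__name__)
        return func(q, *args, **kwargs)
    return wrapper

@unitless
def safe_exp(x):
    return numpy.exp(x)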
How can users conveniently write something like a function to add in quadrature?

Are you going to support fractional exponents in your units? (Note that they probably need to be exact fractions to be sure they cancel when they're supposed to.)

How are you going to deal with CGS (in common use in astronomy for some strange reason) and other semi-"natural" units? In these systems some of the formulas actually look different because units have been chosen to make constants go away. This means that there are fewer basic units (for example in GR one sometimes sets G=c=1 and converts everything - kilograms and meters - to seconds); how do you handle conversion between one of these systems and SI?

As was mentioned in a previous discussion of this issue on this list, it's worth looking at how Frink handles this. I don't recommend necessarily following Frink's approach, since it has become quite complicated, but it's worth understanding the issues that pushed Frink to its current size. Keeping this package simple will definitely involve building in limitations.

Anne

From millman at berkeley.edu Sat Aug 2 16:28:29 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 2 Aug 2008 13:28:29 -0700
Subject: [SciPy-dev] ANN: NumPy 1.1.1
Message-ID:

I'm pleased to announce the release of NumPy 1.1.1.

NumPy is the fundamental package needed for scientific computing with Python. It contains:

* a powerful N-dimensional array object
* sophisticated (broadcasting) functions
* basic linear algebra functions
* basic Fourier transforms
* sophisticated random number capabilities
* tools for integrating Fortran code.

Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide-variety of databases.

Numpy 1.1.1 is a bug fix release featuring major improvements in Python 2.3.x compatibility and masked arrays. For information, please see the release notes:
http://sourceforge.net/project/shownotes.php?group_id=1369&release_id=617279

Thank you to everybody who contributed to this release.

Enjoy,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From dsdale24 at gmail.com Sat Aug 2 21:39:49 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Sat, 2 Aug 2008 21:39:49 -0400
Subject: [SciPy-dev] design of a physical quantities package: seeking comments
In-Reply-To:
References: <200808021325.42976.dsdale24@gmail.com>
Message-ID: <200808022139.50640.dsdale24@gmail.com>

Hi Anne,

On Saturday 02 August 2008 4:23:43 pm Anne Archibald wrote:
> 2008/8/2 Darren Dale :
> > I have been thinking about how to handle physical quantities by subclassing ndarray and building on some of the ideas and code from Charles Doutriaux's python wrappers of udunits and Enthought's units package.
>
> This sounds like a very handy tool, but you want to be careful to keep it tractable.

I don't plan on investing a big chunk of time on this. Hopefully I can get something together that will be useful and extendable, so if someone wants to add new features at a later time, there hopefully will be a simple but flexible-enough foundation to do so.

> > I would like to share my current thinking, and would appreciate some feedback
> > The proposed Quantity object would be an ndarray subclass with an additional attribute and property:
> >
> > * Quantity has a private attribute, a Dimensions container, which contains zero or more Dimension objects, each having associated units (a string) and a power (a number). The container object would be equipped with the various __add__, __sub__, __mul__, etc. Before performing the operations on the ndarray values, the operation would be performed with the Dimensions containers, either updating self's Dimensions and yielding the conversion factors required to scale the other quantity's values to perform the ndarray operation, or raising an exception because the two dimensionalities are not commensurate for the particular operation. I think this container approach is necessary in order to allow scaling of each Dimension's units individually, simplifying operations like 11 ft mA / ns * 1 hour = many ft mA.
>
> I think it's a good idea to try to keep things in the units they are provided in; this should reduce the occurrences of unexpected overflows when (for example) cubing a distance in megaparsecs (put this in centimetres and use single precision and you might overflow). But this does mean you need to decide, when adding (say) a distance in feet to a distance in metres, which unit the result should be in.

Yes, a decision will have to be made as to whether A*B+C will yield a result in units of A or C. I am not worried at this point about overflows or loss of precision when attempting to convert a quantity that is some integer dtype.

> Users will presumably also want some kind of unit normalization function that just converts its input to SI. You will also have to decide at what point simplifications occur - do ft/m immediately get converted as soon as they are produced?

I figured this would probably be the first requested feature. I don't plan on addressing it at this point, but I suppose some mechanism could be added to set a default system.

> What about units like pc/cm^3 (actually in very common use in radio astronomy)? How do you preserve pc/m^3 without getting abominations like kg m^2 s^2/kg m s^2?

I'm not familiar with the issue here (how do you go from length/length^3 to length, and where did the mass and time units come from?). To begin with, I plan on attacking this problem the same way one would do dimensional analysis, converting everything to the basic dimensions of mass, length, time, charge (or current) and temperature, which I think is the way enthought.units handles it as well. pc/m^3 would be converted to 1/m^2 or 1/pc^2. Hopefully it should be possible for someone to write a specialized Dimensions object that preserves a compound unit by performing a different dimensional analysis. Or perhaps some mechanism could be put in place to format the units string representation according to some predefined rules and user-defined context. These are probably issues to be addressed at a later time.

> How are users going to specify units? Using the existing packages, I found it useful to make the units into variables:
> kg = Unit("kg")
> so that I could then do
> wt = 10*kg
> and the error checking would behave nicely.

I am hoping that units can either be set in the constructor or provided after the fact using multiplication, similar to your example.

> > * Quantity has a public units property, providing a view into the object's dimensions and the ability to change from one set of units to another.
> > q.units would return the Dimensions instance, whose __repr__ would be dynamically constructed from each dimension's units and power attributes. The setter would have some limitations by design. For example if q has units of kg m / s^2 and you do q.units='ft', then q.units would return kg ft / s^2.
>
> Hmm. This kind of guessing is likely to trip people up. At least the default should be "convert to exactly the unit I specified".

I disagree. It is not physically possible to convert kg m / s^2 to ft. It should either convert the units of the appropriate dimension or raise an error. Personally, I think the former would be more useful. If one wants the latter, perhaps the set_units method could provide a pedantic kwarg that would attempt a complete conversion and raise on error.

> After all, the point of using units is that they catch many mathematical errors, and the sooner they are caught the better.

I agree. I don't see how the proposed behavior would be problematic, there is no guessing involved. If you specify a unit of length, the lengths will be expressed in that unit. Maybe you could provide an example showing how confusion would arise.

> There is something to be said for a "unit globbing" system ("convert all occurrences of metres to feet but leave everything else alone") and for conversion to predefined unit systems (MKS, CGS, "Imperial", metric with prefixes...) but I don't think it should be the default.
>
> > I think the Dimensions container may provide enough abstraction to handle more unusual operations if someone wanted to add them. Robert Kern suggested a few years back (see http://aspn.activestate.com/ASPN/Mail/Message/scipy-user/2538532) that a good physical quantities system should be able to handle operations like a long,lat position minus another one would yield a distance, but addition would not be supported. This functionality could be built into a subclass of the Dimensions container.
>
> I don't think lat/long, or even Fahrenheit/Celsius are a good idea. For one thing, it's a short step from there to general coordinate system conversion (what about UTM? ECEF? do you want great circle distances or direct line?), and then to conversion of tensor quantities between coordinate systems, and down that road lies madness.

I have a copy of Levi-Civita's "The Absolute Differential Calculus" sitting near the bottom of my stack of books to read and pretend I have understood.

> I think to be a tractable package this needs well-defined boundaries, and the place I'd put those boundaries is at multiplicative units. That's enough to be genuinely useful, and it's small enough to be doable.

I agree, I was just hoping that someone who had specific additional features in mind would speak up and comment on whether the proposed abstractions are sufficient and if not, offer their own suggestions.

> > Comments and criticism welcome.
>
> How do you plan to handle radians and degrees? Radians should really be no unit at all, but it would sometimes be nice to have them printed (and sometimes not).

I guess if you specify angles, you will get angles.

> On a related topic, how are you going to handle ufuncs? Addition and subtraction should require commensurable units, multiplication should multiply the units, and I think all other standard ufuncs should require something with no units. Well, except for mean, std, and var maybe. And "pow" is tricky. And, well, you see what I'm getting at.
Well, I already laid out a strategy for dealing with multiplication and addition, but I am really not that familiar with ufuncs and there are probably some problems lurking that I am not aware of. Maybe I will have to rely on object methods to wrap the incompatible ufuncs and return Quantities with the appropriate units.

> What about user-defined functions? It's worth having some kind of decorator that enforces that a particular function acts on something with no units, and maybe enforces particular units.

Could you give an example? I don't follow.

> How can users conveniently write something like a function to add in quadrature?

If multiplication, addition, and power are supported, shouldn't this be transparent?

> Are you going to support fractional exponents in your units? (Note that they probably need to be exact fractions to be sure they cancel when they're supposed to.)

Yes, I think this is necessary.

> How are you going to deal with CGS (in common use in astronomy for some strange reason) and other semi-"natural" units? In these systems some of the formulas actually look different because units have been chosen to make constants go away. This means that there are fewer basic units (for example in GR one sometimes sets G=c=1 and converts everything - kilograms and meters - to seconds); how do you handle conversion between one of these systems and SI?

I haven't considered it. Like you said, it is better to keep the problem tractable.

> As was mentioned in a previous discussion of this issue on this list, it's worth looking at how Frink handles this. I don't recommend necessarily following Frink's approach, since it has become quite complicated, but it's worth understanding the issues that pushed Frink to its current size. Keeping this package simple will definitely involve building in limitations.

It would be nice to do so, but I don't think Frink's sources are available.

I would like to make clear: my concern is to get the abstractions right so it will be flexible enough that others can build on it to provide their desired functionality. If anyone has ideas on how the abstractions need to be improved, I would like to hear them.

Thanks for the feedback,
Darren

From peridot.faceted at gmail.com Sun Aug 3 03:46:21 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sun, 3 Aug 2008 03:46:21 -0400
Subject: [SciPy-dev] design of a physical quantities package: seeking comments
In-Reply-To: <200808022139.50640.dsdale24@gmail.com>
References: <200808021325.42976.dsdale24@gmail.com> <200808022139.50640.dsdale24@gmail.com>
Message-ID:

2008/8/2 Darren Dale :
> On Saturday 02 August 2008 4:23:43 pm Anne Archibald wrote:
>> 2008/8/2 Darren Dale :
>> I think it's a good idea to try to keep things in the units they are provided in; this should reduce the occurrences of unexpected overflows when (for example) cubing a distance in megaparsecs (put this in centimetres and use single precision and you might overflow). But this does mean you need to decide, when adding (say) a distance in feet to a distance in metres, which unit the result should be in.
>
> Yes, a decision will have to be made as to whether A*B+C will yield a result in units of A or C. I am not worried at this point about overflows or loss of precision when attempting to convert a quantity that is some integer dtype.

I don't think it's worth worrying about integers. But if you take, say, 10 megaparsecs, and express it in metres, you get 3e23 m.
For the volume of a cube 10 Mpc on a side, you get about 3e70 - a number too big to fit in a single-precision float. So the cosmologists will be forced to use doubles if you internally represent everything using SI units. For the same reason, you may want to make the conversion rules very clear. ("Always convert to the left-hand unit" would work.)

>> What about units like pc/cm^3 (actually in very common use in radio astronomy)? How do you preserve pc/m^3 without getting abominations like kg m^2 s^2/kg m s^2?
>
> I'm not familiar with the issue here (how do you go from length/length^3 to length, and where did the mass and time units come from?). To begin with, I plan on attacking this problem the same way one would do dimensional analysis, converting everything to the basic dimensions of mass, length, time, charge (or current) and temperature, which I think is the way enthought.units handles it as well. pc/m^3 would be converted to 1/m^2 or 1/pc^2. Hopefully it should be possible for someone to write a specialized Dimensions object that preserves a compound unit by performing a different dimensional analysis. Or perhaps some mechanism could be put in place to format the units string representation according to some predefined rules and user-defined context. These are probably issues to be addressed at a later time.

Er. That was two separate examples. The important point is that I *don't* want 50 pc/cm^3 to be converted to 1.5e24 m^{-2}. The former is the unit universally used in the literature. If I have to I can circumvent the unit printing system by doing

print "%g pc/cm^3" % (DM/Unit("pc/cm^3"))

but this is a pain. (Overflow may also be an issue here.)

>> How are users going to specify units? Using the existing packages, I found it useful to make the units into variables:
>> kg = Unit("kg")
>> so that I could then do
>> wt = 10*kg
>> and the error checking would behave nicely.
>
> I am hoping that units can either be set in the constructor or provided after the fact using multiplication, similar to your example.

We use many constructors for arrays, so you may find it easier to provide only a constructor for an array scalar with given units, then make available variables in all the usual units.

>> > * Quantity has a public units property, providing a view into the object's dimensions and the ability to change from one set of units to another. q.units would return the Dimensions instance, whose __repr__ would be dynamically constructed from each dimension's units and power attributes. The setter would have some limitations by design. For example if q has units of kg m / s^2 and you do q.units='ft', then q.units would return kg ft / s^2.
>>
>> Hmm. This kind of guessing is likely to trip people up. At least the default should be "convert to exactly the unit I specified".
>
> I disagree. It is not physically possible to convert kg m / s^2 to ft. It should either convert the units of the appropriate dimension or raise an error. Personally, I think the former would be more useful. If one wants the latter, perhaps the set_units method could provide a pedantic kwarg that would attempt a complete conversion and raise on error.
>
>> After all, the point of using units is that they catch many mathematical errors, and the sooner they are caught the better.
>
> I agree. I don't see how the proposed behavior would be problematic, there is no guessing involved.
> If you specify a unit of length, the lengths will be expressed in that unit. Maybe you could provide an example showing how confusion would arise.

Here's a simple example: I have a quantity A I think is in metres. I write "A=convert(A,'ft')". My program continues, combining A with various other quantities. At the end I obtain a meaningless number with bizarre units and I have to trace back through my code to find out that these weird units were attached to A, which in fact was in units of energy density.

More, what happens when you have a quantity in Newton-metres (say) and you ask it to convert to feet? Do you get Newton-feet, or do all the occurrences of "feet" in the Newtons get converted?

What about conversions like ergs -> joules? Each of these is a composite unit, so the code has to have some kind of procedure to find the right number of powers of ergs to convert, leaving behind whatever's left. Combine this with fractional exponents and you have a real nightmare.

>> On a related topic, how are you going to handle ufuncs? Addition and subtraction should require commensurable units, multiplication should multiply the units, and I think all other standard ufuncs should require something with no units. Well, except for mean, std, and var maybe. And "pow" is tricky. And, well, you see what I'm getting at.
>
> Well, I already laid out a strategy for dealing with multiplication and addition, but I am really not that familiar with ufuncs and there are probably some problems lurking that I am not aware of. Maybe I will have to rely on object methods to wrap the incompatible ufuncs and return Quantities with the appropriate units.

I don't really have any idea how this is to be accomplished, but (for example) the function "exp" really needs to check that its argument has no units. It would also be nice to be able to use numpy.exp rather than scipy.units.exp. It might work to make unitless unit arrays be normal arrays, so that the stock ufuncs worked on them. This does force you to reduce units early, though.

>> What about user-defined functions? It's worth having some kind of decorator that enforces that a particular function acts on something with no units, and maybe enforces particular units.
>
> Could you give an example? I don't follow.

Well, suppose I have a function that, like "exp", only accepts unitless quantities. Suppose in fact it's already written. It would be nice to have a decorator so all I have to write is

@unitless
def myfunc(x):
    ...

Similarly, if I want to implement the Planck black-body function, it would be nice to be able to write

@units(K,Hz)
def B(T,nu):
    ...

> I would like to make clear: my concern is to get the abstractions right so it will be flexible enough that others can build on it to provide their desired functionality. If anyone has ideas on how the abstractions need to be improved, I would like to hear them.

I think the hard part will be getting the ufuncs to behave correctly. As you say, if you can get addition, multiplication, and fractional powers working, you're pretty much there.

The key question for me is what units quantities are kept in internally. Keep in mind that some arrays are large, so conversions of quantities between units can be expensive. If I'm doing many calculations with quantities expressed in pc/cm^3, I would hope that they are not constantly being converted to m^{-2} on input and back to pc/cm^3 on output. In fact I can imagine cases where both these conversions happen inside a loop.
That could get expensive.

Anne

From dsdale24 at gmail.com Sun Aug 3 09:22:33 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Sun, 3 Aug 2008 09:22:33 -0400
Subject: [SciPy-dev] design of a physical quantities package: seeking comments
In-Reply-To:
References: <200808021325.42976.dsdale24@gmail.com> <200808022139.50640.dsdale24@gmail.com>
Message-ID: <200808030922.33680.dsdale24@gmail.com>

Hi Anne,

On Sunday 03 August 2008 3:46:21 am Anne Archibald wrote:
> 2008/8/2 Darren Dale :
> > On Saturday 02 August 2008 4:23:43 pm Anne Archibald wrote:
> >> 2008/8/2 Darren Dale :
> >> I think it's a good idea to try to keep things in the units they are provided in; this should reduce the occurrences of unexpected overflows when (for example) cubing a distance in megaparsecs (put this in centimetres and use single precision and you might overflow). But this does mean you need to decide, when adding (say) a distance in feet to a distance in metres, which unit the result should be in.
> >
> > Yes, a decision will have to be made as to whether A*B+C will yield a result in units of A or C. I am not worried at this point about overflows or loss of precision when attempting to convert a quantity that is some integer dtype.
>
> I don't think it's worth worrying about integers. But if you take, say, 10 megaparsecs, and express it in metres, you get 3e23 m. For the volume of a cube 10 Mpc on a side, you get about 3e70 - a number too big to fit in a single-precision float. So the cosmologists will be forced to use doubles if you internally represent everything using SI units. For the same reason, you may want to make the conversion rules very clear. ("Always convert to the left-hand unit" would work.)

I guess I should have been more clear that I intend to internally represent each dimension in the unit specified. As you point out later, converting back and forth between some standard internal representation like SI would be costly.

[...]

> Here's a simple example: I have a quantity A I think is in metres. I write "A=convert(A,'ft')". My program continues, combining A with various other quantities. At the end I obtain a meaningless number with bizarre units and I have to trace back through my code to find out that these weird units were attached to A, which in fact was in units of energy density.

You thought A had different dimensions than it did, and when you get to the end of your calculation, you have weird units and you have to go back and see what went wrong. That would have happened whether or not you changed units from meters to feet.

> More, what happens when you have a quantity in Newton-metres (say) and you ask it to convert to feet? Do you get Newton-feet, or do all the occurrences of "feet" in the Newtons get converted?

They all get converted.

> What about conversions like ergs -> joules? Each of these is a composite unit, so the code has to have some kind of procedure to find the right number of powers of ergs to convert, leaving behind whatever's left. Combine this with fractional exponents and you have a real nightmare.

Compound units would be internally represented by their components. I think if you set your units to joules, all length dimensions would be expressed in m, mass in kg, time in s. In order to print a units representation in a compound unit like J, I will probably have to inspect the units and power of each dimension and reconstruct the compound unit in some semi-intelligent manner.
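For instance, the decomposition table for a few derived units might look like this (an illustrative structure, not the package's actual data; the factors are the standard SI conversions):

# (base-dimension powers, scale factor relative to SI base units)
DERIVED = {
    'J':   ({'kg': 1, 'm': 2, 's': -2}, 1.0),
    'BTU': ({'kg': 1, 'm': 2, 's': -2}, 1055.05585262),
    'dyn': ({'kg': 1, 'm': 1, 's': -2}, 1e-5),
}

def decompose(symbol):
    """Return the fundamental-dimension powers and scale of a derived unit."""
    return DERIVED[symbol]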
But I don't plan on dealing with representing compound units, for now.

I just downloaded the java webstart version of Frink to have a look at its behavior. It looks like my approach is very similar to Frink's. From the Frink website: "All units are standardized and normalized into combinations a small number of several "Fundamental Dimensions" that cannot be reduced any further." Here are a few examples:

b=32 J
32 m^2 s^-2 kg (energy)

c=10 s
10 s (time)

b*c
320 m^2 s^-1 kg (angular_momentum)

b/c
16/5 (exactly 3.2) m^2 s^-3 kg (power)

AA=12 pc/cm^3
3.7028130975668747443e+23 m^-2 (unknown unit type)

DD=b->ft
Conformance error
Left side is: 24384/125 (exactly 195.072) m^3 s^-2 kg (unknown unit type)
Right side is: 381/1250 (exactly 0.3048) m (length)
Suggestion: divide left side by energy
For help, type: units[energy] to list known units with these dimensions.

I had previously looked for Frink's source and when I didn't find it, I didn't give it more thought until now. Having seen the way Frink behaves, I feel more confident that I am on the right track in terms of abstraction.

> >> On a related topic, how are you going to handle ufuncs? Addition and subtraction should require commensurable units, multiplication should multiply the units, and I think all other standard ufuncs should require something with no units. Well, except for mean, std, and var maybe. And "pow" is tricky. And, well, you see what I'm getting at.
> >
> > Well, I already laid out a strategy for dealing with multiplication and addition, but I am really not that familiar with ufuncs and there are probably some problems lurking that I am not aware of. Maybe I will have to rely on object methods to wrap the incompatible ufuncs and return Quantities with the appropriate units.
>
> I don't really have any idea how this is to be accomplished, but (for example) the function "exp" really needs to check that its argument has no units. It would also be nice to be able to use numpy.exp rather than scipy.units.exp.

I agree it would be nice, but let me get a working implementation together and if the project eventually receives the community's blessing, let the numpy or scipy folks decide if this can and should be supported. Seems a ways off to me.

[...]

> > I would like to make clear: my concern is to get the abstractions right so it will be flexible enough that others can build on it to provide their desired functionality. If anyone has ideas on how the abstractions need to be improved, I would like to hear them.
>
> I think the hard part will be getting the ufuncs to behave correctly. As you say, if you can get addition, multiplication, and fractional powers working, you're pretty much there.
>
> The key question for me is what units quantities are kept in internally. Keep in mind that some arrays are large, so conversions of quantities between units can be expensive. If I'm doing many calculations with quantities expressed in pc/cm^3, I would hope that they are not constantly being converted to m^{-2} on input and back to pc/cm^3 on output. In fact I can imagine cases where both these conversions happen inside a loop. That could get expensive.

The values would not be stored in some standard internal representation like SI, but rather in the units specified, decomposed into fundamental dimensions. I do not know how a compound unit like pc/cm^3 would work, but Frink doesn't know how to do it either.
There is an example from my field too: How much surface-area can you pack into a given volume? This is often expressed in m^2/cm^3. What a headache. Maybe I could come up with a method that returns an alternate string representation of the quantity: q.in_units_of("pc/cm^3"). But it would not affect the internal representation.

Darren

From guyer at nist.gov Sun Aug 3 11:35:47 2008
From: guyer at nist.gov (Jonathan Guyer)
Date: Sun, 3 Aug 2008 11:35:47 -0400
Subject: [SciPy-dev] design of a physical quantities package: seeking comments
In-Reply-To: <200808030922.33680.dsdale24@gmail.com>
References: <200808021325.42976.dsdale24@gmail.com> <200808022139.50640.dsdale24@gmail.com> <200808030922.33680.dsdale24@gmail.com>
Message-ID:

On Aug 3, 2008, at 9:22 AM, Darren Dale wrote:

>> More, what happens when you have a quantity in Newton-metres (say) and you ask it to convert to feet? Do you get Newton-feet, or do all the occurrences of "feet" in the Newtons get converted?
>
> They all get converted.

I'm with Anne. I'm having a really hard time seeing when I would want this as a default behavior, whereas I frequently want to be able to automatically convert BTUs to Joules and atmospheres to Pascals, but I want an exception if somebody tries to give me a pressure when I ask for an energy. Having it possible to do what you suggest seems like it might be useful sometimes, but I don't think it should be the default behavior.

>> I think the hard part will be getting the ufuncs to behave correctly. As you say, if you can get addition, multiplication, and fractional powers working, you're pretty much there.
>>
>> The key question for me is what units quantities are kept in internally. Keep in mind that some arrays are large, so conversions of quantities between units can be expensive. If I'm doing many calculations with quantities expressed in pc/cm^3, I would hope that they are not constantly being converted to m^{-2} on input and back to pc/cm^3 on output. In fact I can imagine cases where both these conversions happen inside a loop. That could get expensive.
>
> The values would not be stored in some standard internal representation like SI, but rather in the units specified, decomposed into fundamental dimensions. I do not know how a compound unit like pc/cm^3 would work, but Frink doesn't know how to do it either. There is an example from my field too: How much surface-area can you pack into a given volume? This is often expressed in m^2/cm^3. What a headache. Maybe I could come up with a method that returns an alternate string representation of the quantity: q.in_units_of("pc/cm^3"). But it would not affect the internal representation.

Several years ago, we adapted Konrad Hinsen's PhysicalQuantity (from ScientificPython) to work with the large fields of numbers we needed for FiPy. It's largely Konrad's code; the primary difference is that a 1000x1000 array gets one unit, not a million of them.

Using this scheme:

>>> from fipy import PhysicalField
>>> PhysicalField("12 lyr/cm**3")
PhysicalField(12.0,'lyr/cm**3')
>>> PhysicalField("1 lyr") / "7.3 cm**2"
PhysicalField(0.13698630136986301,'lyr/cm**2')
>>> PhysicalField("12 lyr/cm**3").inUnitsOf("m**-2")
PhysicalField(1.1352876567096959e+23,'1/m**2')

so the compound unit that Anne wants is supported and conversion to canonical units happens only when requested.
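The deferred conversion being described could be captured in miniature like this (a sketch, not FiPy's actual implementation; conversion_factor is an assumed helper built on something like udunits):

class LazyQuantity(object):
    def __init__(self, value, unit):
        # store the value exactly as given; no canonicalization
        self.value, self.unit = value, unit
    def in_units_of(self, unit):
        # convert only when explicitly asked
        factor = conversion_factor(self.unit, unit)
        return LazyQuantity(self.value * factor, unit)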
Note: I had to use light years because parsecs aren't predefined, and adding new units & constants is one of the weaknesses of Konrad's design.

I urge a long look at Konrad's code (or our adaptation of it: http://matforge.org/fipy/browser/trunk/fipy/tools/dimensions), as it already does a lot of what you want. It was written in Numeric days, so I think a re-implementation as a subclass of ndarray with tighter integration with ufuncs [*] would be a big improvement, but the basic design is a good one, IMO.

[*] Actually, I don't know if the ufunc integration needs to be any tighter:

>>> from fipy import PhysicalField
>>> import numpy
>>> numpy.exp(PhysicalField("1 m**2"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/guyer/Documents/research/FiPy/trunk/fipy/tools/dimensions/physicalField.py", line 595, in __array__
    raise TypeError, 'Numeric array value must be dimensionless'
TypeError: Numeric array value must be dimensionless
>>> numpy.exp(-PhysicalField("1. eV") / "300 K * kB")
1.587664463548766e-17

We presently have a "numerix" abstraction layer to make sure that numerix.exp() calls the methods we want, because Numeric didn't do this, but with numpy I think a lot of that can go away.

From dsdale24 at gmail.com Sun Aug 3 12:54:12 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Sun, 3 Aug 2008 12:54:12 -0400
Subject: [SciPy-dev] design of a physical quantities package: seeking comments
In-Reply-To:
References: <200808021325.42976.dsdale24@gmail.com> <200808030922.33680.dsdale24@gmail.com>
Message-ID: <200808031254.12591.dsdale24@gmail.com>

On Sunday 03 August 2008 11:35:47 am Jonathan Guyer wrote:
> On Aug 3, 2008, at 9:22 AM, Darren Dale wrote:
> >> More, what happens when you have a quantity in Newton-metres (say) and you ask it to convert to feet? Do you get Newton-feet, or do all the occurrences of "feet" in the Newtons get converted?
> >
> > They all get converted.
>
> I'm with Anne. I'm having a really hard time seeing when I would want this as a default behavior, whereas I frequently want to be able to automatically convert BTUs to Joules and atmospheres to Pascals, but I want an exception if somebody tries to give me a pressure when I ask for an energy. Having it possible to do what you suggest seems like it might be useful sometimes, but I don't think it should be the default behavior.

[...]

> Using this scheme:
> >>> from fipy import PhysicalField
> >>> PhysicalField("12 lyr/cm**3")
> PhysicalField(12.0,'lyr/cm**3')
> >>> PhysicalField("1 lyr") / "7.3 cm**2"
> PhysicalField(0.13698630136986301,'lyr/cm**2')
> >>> PhysicalField("12 lyr/cm**3").inUnitsOf("m**-2")
> PhysicalField(1.1352876567096959e+23,'1/m**2')
>
> so the compound unit that Anne wants is supported and conversion to canonical units happens only when requested.

But in the meantime, you aggregate units like "dynes kg ft m^3 s / lbs Km ps^4 watts ohms". Quick, what units should I ask for that won't yield an error? It seems like it is more helpful to reduce as you go, and later when you want your value expressed with a compound unit, you ask for it. Let me try it my way and see what you think, and if people are not sold, it should be simple to reimplement so all units are aggregated, and reduced only on request.
Darren

From twentypoundtrout at yahoo.com Sun Aug 3 18:08:23 2008
From: twentypoundtrout at yahoo.com (Nate)
Date: Sun, 3 Aug 2008 22:08:23 +0000 (UTC)
Subject: [SciPy-dev] scipy compile problems
Message-ID:

I'm having trouble compiling scipy from the repo on Ubuntu (8.04). See the following output error from scipy.test(). Apparently there are problems with lapack. I've compiled ATLAS (first building and linking the full NA lib lapack) and still have the following errors. The build + site-package directories for numpy/scipy are removed each time I try to rebuild (both). Also, I am building everything with gfortran (following the steps here: http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1)

Any ideas?

Nate

Here is my site.cfg file:

[DEFAULT]
library_dirs = /usr/local/atlas/lib
include_dirs = /usr/local/atlas/include

[atlas]
atlas_libs = lapack, f77blas, cblas, atlas

[amd]
library_dirs = /usr/lib
include_dirs = /usr/include/suitesparse
amd_libs = amd

[umfpack]
library_dirs = /usr/lib
include_dirs = /usr/include/suitesparse
umfpack_libs = umfpack

======================================================================
ERROR: Failure: ImportError (cannot import name flapack)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/loader.py", line 364, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/usr/lib/python2.5/site-packages/scipy/integrate/tests/test_integrate.py", line 9, in <module>
    from scipy.linalg import norm
  File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
    from basic import *
  File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in <module>
    from lapack import get_lapack_funcs
  File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in <module>
    from scipy.linalg import flapack
ImportError: cannot import name flapack

======================================================================
ERROR: Failure: ImportError (/usr/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/loader.py", line 364, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/usr/lib/python2.5/site-packages/scipy/interpolate/__init__.py", line 7, in <module>
    from interpolate import *
  File "/usr/lib/python2.5/site-packages/scipy/interpolate/interpolate.py", line 13, in <module>
    import scipy.linalg as slin
  File "/usr/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
    from basic import *
  File "/usr/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in <module>
    from lapack import get_lapack_funcs
  File "/usr/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 18, in <module>
    from scipy.linalg import clapack
ImportError: /usr/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv
From dsdale24 at gmail.com Sun Aug 3 19:09:01 2008
From: dsdale24 at gmail.com (Darren Dale)
Date: Sun, 3 Aug 2008 19:09:01 -0400
Subject: [SciPy-dev] design of a physical quantities package: seeking comments
In-Reply-To: <200808031254.12591.dsdale24@gmail.com>
References: <200808021325.42976.dsdale24@gmail.com> <200808031254.12591.dsdale24@gmail.com>
Message-ID: <200808031909.01560.dsdale24@gmail.com>

On Sunday 03 August 2008 12:54:12 pm Darren Dale wrote:
> On Sunday 03 August 2008 11:35:47 am Jonathan Guyer wrote:
> > On Aug 3, 2008, at 9:22 AM, Darren Dale wrote:
> > >> More, what happens when you have a quantity in Newton-metres (say) and you ask it to convert to feet? Do you get Newton-feet, or do all the occurrences of "feet" in the Newtons get converted?
> > >
> > > They all get converted.
> >
> > I'm with Anne. I'm having a really hard time seeing when I would want this as a default behavior, whereas I frequently want to be able to automatically convert BTUs to Joules and atmospheres to Pascals, but I want an exception if somebody tries to give me a pressure when I ask for an energy. Having it possible to do what you suggest seems like it might be useful sometimes, but I don't think it should be the default behavior.
>
> [...]
>
> > Using this scheme:
> > >>> from fipy import PhysicalField
> > >>> PhysicalField("12 lyr/cm**3")
> > PhysicalField(12.0,'lyr/cm**3')
> > >>> PhysicalField("1 lyr") / "7.3 cm**2"
> > PhysicalField(0.13698630136986301,'lyr/cm**2')
> > >>> PhysicalField("12 lyr/cm**3").inUnitsOf("m**-2")
> > PhysicalField(1.1352876567096959e+23,'1/m**2')
> >
> > so the compound unit that Anne wants is supported and conversion to canonical units happens only when requested.
>
> But in the meantime, you aggregate units like "dynes kg ft m^3 s / lbs Km ps^4 watts ohms". Quick, what units should I ask for that won't yield an error? It seems like it is more helpful to reduce as you go, and later when you want your value expressed with a compound unit, you ask for it. Let me try it my way and see what you think, and if people are not sold, it should be simple to reimplement so all units are aggregated, and reduced only on request.

I think maybe we can have it both ways. I borrowed (and slightly modified) the parser Enthought put together to interpret unit strings. Perhaps it could be modified further such that 'kg*(pc/cm^3)*K/s^4' dictates that pc/cm^3 is a compound unit and should not be automatically decomposed. A method could be added to decompose compound units on request, in case that is desired.

I put a mercurial repo up, you can check it out with "hg clone \
http://dale.chess.cornell.edu/~darren/cgi-bin/hgwebdir.cgi/quantities \
quantities"

There is no documentation yet, and I'm a little unhappy with some of the organization, but I was really just blitzing through trying to get a demo together. It installs in the usual way.
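The parenthesized-compound-unit idea above could start from a tokenizer roughly like this (hypothetical, not the actual modified parser; sign handling for denominator units is omitted):

import re

def unit_tokens(expr):
    """'kg*(pc/cm^3)*K/s^4' -> ['kg', '(pc/cm^3)', 'K', 's^4']"""
    # parenthesized groups parse as single, atomic compound units
    return re.findall(r'\([^)]*\)|[A-Za-z]+(?:\^-?\d+)?', expr)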
There is no documentation, so here is a crash course:

In [1]: from quantities import NDQuantity
udunits(3): Already initialized from file "/usr/lib/python2.5/site-packages/quantities/quantities-data/udunits.dat"

In [2]: a=NDQuantity([1,2,3.0],'J')

In [3]: a
Out[3]: NDQuantity([ 1., 2., 3.]), kg * m^2 / s^2

In [4]: b=NDQuantity([1,2,3.0],'BTU')

In [5]: b
Out[5]: NDQuantity([ 1055.05585262, 2110.11170524, 3165.16755786]), kg * m^2 / s^2

In [6]: c=NDQuantity([1,2,3.0],'m**-2')

In [7]: c
Out[7]: NDQuantity([ 1., 2., 3.]), 1 / m^2

In [8]: c.inUnitsOf('parsec/cm^3')
Out[8]: NDQuantity([ 3.24077885e-23, 6.48155770e-23, 9.72233655e-23]), parsec/cm^3

In [9]: d=c.inUnitsOf('parsec/cm^3')

In [10]: c
Out[10]: NDQuantity([ 1., 2., 3.]), 1 / m^2

In [11]: d/a
Out[11]: NDQuantity([ 3.24077885e-23, 3.24077885e-23, 3.24077885e-23]), s^2 * parsec/cm^3 / kg m^2

In [12]: b/a
Out[12]: NDQuantity([ 1055.05585262, 1055.05585262, 1055.05585262]), (dimensionless)

There are some temporary limitations. Enthought's parser has certain expectations, which places some limitations on the units string passed to the constructor. udunits, which is used for all unit conversions, has a different set of expectations and limitations. This can be improved, I just haven't gotten around to it yet. The available units are defined in _units.py. Quantities cannot yet be created by multiplying a _unit times another number or numpy array. It would be better for the unit definitions to be first class quantities, then you could do 10*J and get a quantity, but there is a circular import issue that I need to work through.

I kept this in for now:

In [3]: a
Out[3]: NDQuantity([ 1., 2., 3.]), kg * m^2 / s^2

In [21]: a.units='ft'

In [22]: a
Out[22]: NDQuantity([ 10.76391042, 21.52782083, 32.29173125]), kg * ft^2 / s^2

I ran into the concatenate issue that came up a while back, it drops the units:

In [26]: numpy.concatenate([a,a])
Out[26]: array([ 10.76391042, 21.52782083, 32.29173125, 10.76391042, 21.52782083, 32.29173125])

sqrt doesn't seem to work:

In [28]: numpy.sqrt(a**2+a**2)
Out[28]: NDQuantity([ 15.2224681 , 30.44493619, 45.66740429]), kg^2 * ft^4 / s^4

but this does:

In [30]: (a**2+a**2)**0.5
Out[30]: NDQuantity([ 15.2224681 , 30.44493619, 45.66740429]), kg * ft^2 / s^2

Anyway, feel free to kick the tires and send feedback. Darren From stefan at sun.ac.za Sun Aug 3 19:11:43 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Mon, 4 Aug 2008 01:11:43 +0200 Subject: [SciPy-dev] Removing globals from minpack wrapper, v3 In-Reply-To: <488DCC72.9070508@netvision.net.il> References: <4881F9B4.7030804@netvision.net.il> <488AD95C.1080501@netvision.net.il> <488B1660.3010809@enthought.com> <488DCC72.9070508@netvision.net.il> Message-ID: <9457e7c80808031611w71015fbck6544582693af9147@mail.gmail.com> 2008/7/28 Yosef Meller : > Travis E. Oliphant wrote: >> Yosef Meller wrote: >>> Will this patch be considered for merging before 0.7? >>> >> I'd like it to happen. >> >> -Travis > > So, what is the process to merge it? File a ticket on the SciPy trac, and attach your patch to it. If you want more eyes on it, you can also upload to the code review board: http://codereview.appspot.com Regards Stéfan From robince at gmail.com Mon Aug 4 05:13:13 2008 From: robince at gmail.com (Robin) Date: Mon, 4 Aug 2008 10:13:13 +0100 Subject: [SciPy-dev] scipy compile problems In-Reply-To: References: Message-ID: On Sun, Aug 3, 2008 at 11:08 PM, Nate wrote: > I'm having trouble compiling scipy from the repo on Ubuntu (8.04).
See the > following output error from scipy.test(). Apparently there are problems with > lapack. I've compiled ATLAS (first building and linking the full NA lib lapack) > and still have the following errors. The build + site-package directories for > numpy/scipy are removed each time I try to rebuild (both). Also, I am building > everything with gfortran (following the steps here: > http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1) > > Any ideas? One thing is to check you are using the atlas and lapack you expect (and that you built with gfortran). I think the easiest way to do this is to make sure you don't have them installed from the packaging system. You could check with ldd which libraries the clapack.so is linked to. Robin From david at ar.media.kyoto-u.ac.jp Mon Aug 4 05:22:55 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 04 Aug 2008 18:22:55 +0900 Subject: [SciPy-dev] scipy compile problems In-Reply-To: References: Message-ID: <4896CA6F.5010402@ar.media.kyoto-u.ac.jp> Nate wrote: > I'm having trouble compiling scipy from the repo on Ubuntu (8.04). See the > following output error from scipy.test(). Apparently there are problems with > lapack. Please use the Ubuntu provided atlas if you don't have a strong reason not to, it will be much easier to deal with: sudo apt-get install libatlas-base-dev libatlas-sse2 # for gfortran sudo apt-get install atlas3-base-dev atlas3-sse2 # for g77 And once you selected one of them, clean the build directory and install directory for both numpy and scipy, and start from scratch: python setup.py build --fortran=gnu95 install # gfortran python setup.py build install # g77 There should be no need for site.cfg, numpy knows about the location under ubuntu/debian. cheers, David From david at ar.media.kyoto-u.ac.jp Mon Aug 4 07:05:55 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 04 Aug 2008 20:05:55 +0900 Subject: [SciPy-dev] Adding sound backends to scipy for basic sound playback ? Message-ID: <4896E293.8060705@ar.media.kyoto-u.ac.jp> Hi, I developed for my own research some (crude) audio backends to play sound on Linux. I thought it would be nice to have this capability built-in for scipy (something like sound/soundsc in matlab). Is this something which could be included in scipy ? For now, I have an alsa backend (for Linux), and a core audio backend (which would need some work to be published at all). I have been trying to port them on cython to learn a bit about cython (they were based on ctypes before, but we can't build ctypes extensions for inclusion in scipy). One problem for alsa is that you would need the (user-space) library headers to build it, so making it mandatory would mean one more dependency. Although ALSA is under the GPL/LGPL, I would guess it should not be a problem since it is the interface to linux audio kernel ? cheers, David From mellerf at netvision.net.il Mon Aug 4 08:52:28 2008 From: mellerf at netvision.net.il (Yosef Meller) Date: Mon, 04 Aug 2008 15:52:28 +0300 Subject: [SciPy-dev] Removing globals from minpack wrapper, v3 In-Reply-To: <9457e7c80808031611w71015fbck6544582693af9147@mail.gmail.com> References: <4881F9B4.7030804@netvision.net.il> <488AD95C.1080501@netvision.net.il> <488B1660.3010809@enthought.com> <488DCC72.9070508@netvision.net.il> <9457e7c80808031611w71015fbck6544582693af9147@mail.gmail.com> Message-ID: <4896FB8C.9030502@netvision.net.il> St?fan van der Walt wrote: > 2008/7/28 Yosef Meller : >> Travis E. 
Oliphant wrote: >>> Yosef Meller wrote: >>>> Will this patch be considered for merging before 0.7? >>>> >>> I'd like it to happen. >>> >>> -Travis >> So, what is the process to merge it? > > File a ticket on the SciPy trac, and attach your patch to it. If you > want more eyes on it, you can also upload to the code review board: > > http://codereview.appspot.com Here's the bug: http://www.scipy.org/scipy/scipy/ticket/713 -- http://yosefm.imagekind.com/Eclectic From cwebster at enthought.com Mon Aug 4 10:31:51 2008 From: cwebster at enthought.com (Corran Webster) Date: Mon, 4 Aug 2008 09:31:51 -0500 Subject: [SciPy-dev] design of a physical quantities package: seeking comments In-Reply-To: References: <200808021325.42976.dsdale24@gmail.com> <200808022139.50640.dsdale24@gmail.com> Message-ID: <1510FB53-54F2-470C-985A-0C77656462C1@enthought.com> Hi all, I'm new to the scipy-dev list, but we're just wrapping up a project here at Enthought where I've used the current Enthought units infrastructure. If I have time today I'll add my comments based on our experience here with using physical quantities in a real-world project. My main comment here is the following: On Aug 3, 2008, at 2:46 AM, Anne Archibald wrote: > 2008/8/2 Darren Dale : > >> On Saturday 02 August 2008 4:23:43 pm Anne Archibald wrote: >>> 2008/8/2 Darren Dale : >>> What about user-defined functions? It's worth having some kind of >>> decorator that enforces that a particular function acts on something >>> with no units, and maybe enforces particular units. >> >> Could you give an example? I don't follow. > > Well, suppose I have a function that, like "exp", only accepts > unitless quantities. Suppose in fact it's already written. It would be > nice to have a decorator so all I have to write is > > @unitless > def myfunc(x): > ... > > Similarly, if I want to implement the Planck black-body function, it > would be nice to be able to write > > @units(K,Hz) > def B(T,nu): > ... For an example of this sort of idea already implemented (although not with this particular interface), have a look at the @has_units decorator in enthought.numerical_modelling.units. It is designed to parse docstrings to extract unit information for the input and output variables. Other than that, I'd observe that variables with unit information were a significant source of bugs and additional effort in our development effort. To be fair, they probably helped identify a lot of bugs too. But we had to write a fair bit of code whose sole purpose was to add or remove units from variables. For example, we have a solver in the application which uses scipy.optimize.fsolve. To make it work correctly, we had to take our variables with user-supplied units, remove the units, but remember them in context, and then inside the function we were trying to solve for we would have to re-apply the units before calling the code which performed the computations (which then usually did its own unit conversions before actually computing anything meaningful). We also hit some of the limitations of a simple physical quantities system: this particular application involved fluid dynamics, and in particular power-law fluids, where one of the key physical quantities, consistency, has units of Pa*s**n where n is the shear index, another key physical quantity. In particular, n was variable in the situations we were dealing with, and so we had to special-case the consistency variables and ensure that they were always in a particular set of units everywhere in our code. 
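A decorator in the spirit of Anne's @units example can be quite small. The following sketch is illustrative only: the unit table and helper logic are invented for the example, and this is not the enthought @has_units implementation.

_UNITS = {  # unit -> (dimension, scale to a base unit); invented for the demo
    'K':   ('temperature', 1.0),
    'Hz':  ('1/time', 1.0),
    'kHz': ('1/time', 1e3),
}

def units(*expected):
    """Coerce positional (value, unit) arguments to the declared units."""
    def decorator(func):
        def wrapper(*args):
            coerced = []
            for (value, unit), want in zip(args, expected):
                dim, scale = _UNITS[unit]
                want_dim, want_scale = _UNITS[want]
                if dim != want_dim:
                    # incommensurable dimensions -> refuse, as Jonathan asks
                    raise TypeError('expected %s, got %s' % (want, unit))
                coerced.append(value * scale / want_scale)
            return func(*coerced)
        return wrapper
    return decorator

@units('K', 'Hz')
def B(T, nu):
    # a real version would evaluate the Planck function here
    return T, nu

print B((5000.0, 'K'), (1.4, 'kHz'))  # -> (5000.0, 1400.0)
# B((5000.0, 'K'), (1.4, 'K')) raises TypeError: expected Hz, got K

A @unitless decorator would then just be the degenerate case that rejects any argument still carrying a dimension.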
I haven't had time yet to sit back and look hard at how the unit libraries could have been improved to make our experience smoother. As I mentioned earlier, if I have time today I'll respond to this thread in detail and try and make some concrete constructive suggestions. Best Regards, Corran From robert.kern at gmail.com Mon Aug 4 11:31:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 4 Aug 2008 10:31:18 -0500 Subject: [SciPy-dev] Adding sound backends to scipy for basic sound playback ? In-Reply-To: <4896E293.8060705@ar.media.kyoto-u.ac.jp> References: <4896E293.8060705@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730808040831nd29a6e6h7522cab6f271cf3a@mail.gmail.com> On Mon, Aug 4, 2008 at 06:05, David Cournapeau wrote: > Hi, > > I developed for my own research some (crude) audio backends to play > sound on Linux. I thought it would be nice to have this capability > built-in for scipy (something like sound/soundsc in matlab). Is this > something which could be included in scipy ? I don't think so. It adds significant dependencies and build issues for something that is not really associated with the core subject of scipy. It belongs in a separate package. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Mon Aug 4 13:53:18 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 04 Aug 2008 19:53:18 +0200 Subject: [SciPy-dev] SciPy 0.7 release - pending tickets In-Reply-To: References: Message-ID: On Sat, 02 Aug 2008 12:08:19 +0200 "Nils Wagner" wrote: > On Sat, 02 Aug 2008 11:56:17 +0200 > "Nils Wagner" wrote: >> Hi all, >> >> Ticket 704 can be closed. >> http://scipy.org/scipy/scipy/ticket/704 >> >> The following tickets can be easily closed after an >>update >> of the docstring. >> >> http://scipy.org/scipy/scipy/ticket/677 >> http://scipy.org/scipy/scipy/ticket/666 >> >> The functions read_array and write_array are deprecated. >> Is it reasonable to close ticket >> >> http://scipy.org/scipy/scipy/ticket/568 >> >> in this context ? >> >> Ticket 626 can be closed. Works for me. >> http://scipy.org/scipy/scipy/ticket/626 >> >> Nils >> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev > > IMHO ticket 567 can be closed as well. > http://scipy.org/scipy/scipy/ticket/567 Last but not least http://projects.scipy.org/scipy/scipy/ticket/712 is a duplicate of http://projects.scipy.org/scipy/scipy/ticket/711 Nils From jh at physics.ucf.edu Mon Aug 4 16:42:12 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Mon, 04 Aug 2008 16:42:12 -0400 Subject: [SciPy-dev] WTFM! Message-ID: SciPy Documentation Marathon 2008 Status Report We are now nearing the end of the summer. We have a ton of great docstrings, a nice PDF and HTML reference guide, a new package with pages on general topics like slicing, and a glossary. We had hoped to have all the numpy docstrings in first-draft form in time for the pre-fall (1.2) release. The actual number of pages was more than double our quick goal-setting assessment, so we won't make it. 
As of this moment, we have:

status                     %  pages
Needs editing             52    430
Being written / Changed   27    226
Needs review              18    152
Needs review (revised)     0      1
Needs work (reviewed)      0      3
Reviewed (needs proof)     2     19
Proofed                    0      0
Unimportant                    1531

Our current status can always be seen at: http://sd-2116.dedibox.fr/pydocweb/stats/ Definitions of the categories are also on the wiki, but "being written" is our first-draft category. So, we're just shy of halfway there, and since the goal more than doubled, we can say we have not failed our expectations. But, we haven't succeeded, either, and we certainly haven't finished. So, this being a marathon, we're not going to stop! Please join us if you haven't already for the... PRE-CONFERENCE DOC BLITZ! We can, quite realistically, get up to 60% for numpy 1.2. We've had several 8% weeks this summer and we've got several weeks to go. Stefan will merge docstrings into the beta for 1.2 on 5 August, and will continue merging from the wiki for the release candidates and final cut. However, writing has slowed to a crawl in recent weeks. Please pitch in to help those who are still writing so we can get to 60% by release 1.2. Looking further ahead, I hope all the volunteers will continue writing for the rest of the summer and fall, so that we can put 100% decent drafts into 1.3, and a 100% reviewed set of docstrings into 1.4. Then we can turn our attention to scipy. Enough stick, here's some carrot: This is the design for this year's documentation prize, a T-shirt in Robert Kern black, designed by Teresa Jeffcott: http://physics.ucf.edu/~jh/scipyshirt-2008-2.png We'll hand these out at SciPy '08 to anyone who has written 1000 words or more (according to the stats page) or who has made an equivalent contribution in other ways (reviewing, wiki creation, etc.; Stefan and I will judge). So far, 11 contributors qualify, but several more could easily reach that goal in time. In fact, several of our volunteer writers have produced 1000 words in one week. The offer remains good through the first-draft phase, though you'll need to act quickly to get your docs into 1.2 and be recognized at the conference! If you won't be at SciPy '08 or if you qualify later, we'll mail one to you. As always, further discussion belongs on the scipy-dev mailing list, as does your request to be added to the doc wiki editors group, so head over to the doc wiki main page, establish an account there, and give us a shout! WTFM, --jh-- From jh at physics.ucf.edu Mon Aug 4 17:51:03 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Mon, 04 Aug 2008 17:51:03 -0400 Subject: [SciPy-dev] a doc challenge to all numpy/scipy developers Message-ID: In the early days of the current numpy, urgency to get a working code was so high that docs were largely set aside, save for Travis's book. The doc project is now producing the API and user documentation that has been missing from numpy, and will then turn its attention to scipy. However, new code is still being added to both projects, sometimes without real docs. I would like to ask all developers - even to challenge them - from this point forward to accept only fully documented software into numpy and scipy. We are no longer in an urgent condition, and writing docs is generally considered an integral part of development. In my heart I'd like to issue the same challenge for patches as well. If you know enough about a function to patch it, you can document it, if it isn't already. It doesn't take long.
Of course, I don't want to encourage the perpetuation of bugs by placing a barrier to fixing them, so I won't argue this point insistently. At the same time, I would argue that the lack of a decent docstring is a bug, and a serious one. So, please at least consider writing a docstring if you write a patch to an undocumented function, particularly if you are already writing docs and can knock off a good docstring in a short time. Thanks, --jh-- From nwagner at iam.uni-stuttgart.de Mon Aug 4 18:01:36 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 05 Aug 2008 00:01:36 +0200 Subject: [SciPy-dev] SuperLU 3.1 Message-ID: Hi all, SuperLU 3.1 is available since August 1, 2008. Cheers, Nils http://crd.lbl.gov/~xiaoye/SuperLU/#superlu From wnbell at gmail.com Mon Aug 4 18:47:12 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 4 Aug 2008 17:47:12 -0500 Subject: [SciPy-dev] SuperLU 3.1 In-Reply-To: References: Message-ID: On Mon, Aug 4, 2008 at 5:01 PM, Nils Wagner wrote: > Hi all, > > SuperLU 3.1 is available since August 1, 2008. > Good find. We should put this on the TODO list for SciPy 0.8. Hopefully issues like the following will be resolved: http://projects.scipy.org/scipy/scipy/ticket/553 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From doutriaux1 at llnl.gov Tue Aug 5 12:15:47 2008 From: doutriaux1 at llnl.gov (Charles Doutriaux) Date: Tue, 05 Aug 2008 09:15:47 -0700 Subject: [SciPy-dev] design of a physical quantities package: seeking comments In-Reply-To: <200808031909.01560.dsdale24@gmail.com> References: <200808021325.42976.dsdale24@gmail.com> <200808031254.12591.dsdale24@gmail.com> <200808031909.01560.dsdale24@gmail.com> Message-ID: <48987CB3.6060806@llnl.gov> Hi Darren, Obviously I didn't have time to look into the udunits2 pbm yet. But I see you far along already. Do you still want me to look into that or are you happy with udunits? C. Darren Dale wrote: > On Sunday 03 August 2008 12:54:12 pm Darren Dale wrote: > >> On Sunday 03 August 2008 11:35:47 am Jonathan Guyer wrote: >> >>> On Aug 3, 2008, at 9:22 AM, Darren Dale wrote: >>> >>>>> More, what happens when you have a quantity in Newton-metres (say) >>>>> and >>>>> you ask it to convert to feet? Do you get Newton-feet, or do all the >>>>> occurrences of "feet" in the Newtons get converted? >>>>> >>>> They all get converted. >>>> >>> I'm with Anne. I'm having a really hard time seeing when I would want >>> this as a default behavior, whereas I frequently want to be able >>> automatically convert BTUs to Joules and atmospheres to Pascals, but I >>> want an exception if somebody tries to give me a pressure when I ask >>> for an energy. Having it possible to do what you suggest seems like it >>> might be useful sometimes, but I don't think it should be the default >>> behavior. >>> >> [...] >> >> >>> Using this scheme: >>> >>> from fipy import PhysicalField >>> >>> PhysicalField("12 lyr/cm**3") >>> >>> PhysicalField(12.0,'lyr/cm**3') >>> >>> >>> PhysicalField("1 lyr") / "7.3 cm**2" >>> >>> PhysicalField(0.13698630136986301,'lyr/cm**2') >>> >>> >>> PhysicalField("12 lyr/cm**3").inUnitsOf("m**-2") >>> >>> PhysicalField(1.1352876567096959e+23,'1/m**2') >>> >>> so the compound unit that Anne wants is supported and conversion to >>> canonical units happens only when requested. >>> >> But in the meantime, you aggregate units like >> "dynes kg ft m^3 s / lbs Km ps^4 watts ohms". Quick, what units should I >> ask for that won't yield an error? 
It seems like it is more helpful to >> reduce as you go, and later when you want your value expressed with a >> compound unit, you ask for it. Let me try it my way and see what you think, >> and if people are not sold, it should be simple to reimplement so all units >> are aggregated, and reduced only on request. > > I think maybe we can have it both ways. [...] > > Anyway, feel free to kick the tires and send feedback.
> > > Darren > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http:// projects.scipy.org/mailman/listinfo/scipy-dev > > > From nwagner at iam.uni-stuttgart.de Tue Aug 5 12:32:14 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 05 Aug 2008 18:32:14 +0200 Subject: [SciPy-dev] scikits.umfpack Message-ID: Hi Robert C., I saw that you have edited __init__.py today. >>> from scikits import umfpack >>> umfpack.test() Running unit tests for scikits.umfpack NumPy version 1.2.0.dev5611 NumPy is installed in /usr/lib/python2.4/site-packages/numpy Python version 2.4 (#1, Oct 13 2006, 17:13:31) [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] nose version 0.10.3 E ====================================================================== ERROR: Failure: ImportError (No module named testing) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/loader.py", line 363, in loadTestsFromName module = self.importer.importFromPath( File "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/importer.py", line 84, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/usr/lib/python2.4/site-packages/scikits/umfpack/tests/test_umfpack.py", line 13, in ? from scipy.testing import * ImportError: No module named testing ---------------------------------------------------------------------- Ran 1 test in 0.088s FAILED (errors=1) Nils From cimrman3 at ntc.zcu.cz Tue Aug 5 12:33:10 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 05 Aug 2008 18:33:10 +0200 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: References: Message-ID: <489880C6.9020607@ntc.zcu.cz> Nils Wagner wrote: > Hi Robert C., > > I saw that you have edited __init__.py today. > >>>> from scikits import umfpack >>>> umfpack.test() > Running unit tests for scikits.umfpack > NumPy version 1.2.0.dev5611 > NumPy is installed in > /usr/lib/python2.4/site-packages/numpy > Python version 2.4 (#1, Oct 13 2006, 17:13:31) [GCC 3.3.5 > 20050117 (prerelease) (SUSE Linux)] > nose version 0.10.3 > E > ====================================================================== > ERROR: Failure: ImportError (No module named testing) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/loader.py", > line 363, in loadTestsFromName > module = self.importer.importFromPath( > File > "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/importer.py", > line 39, in importFromPath > return self.importFromDir(dir_path, fqname) > File > "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/importer.py", > line 84, in importFromDir > mod = load_module(part_fqname, fh, filename, desc) > File > "/usr/lib/python2.4/site-packages/scikits/umfpack/tests/test_umfpack.py", > line 13, in ? > from scipy.testing import * > ImportError: No module named testing > > ---------------------------------------------------------------------- > Ran 1 test in 0.088s > > FAILED (errors=1) > > > Nils Hi Nils, I have to do it so that it works with the latest SVN version (0.7.0.dev4600), but may have done something wrong. The prior version is ok for you? r. 
From nwagner at iam.uni-stuttgart.de Tue Aug 5 12:48:01 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 05 Aug 2008 18:48:01 +0200 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: <489880C6.9020607@ntc.zcu.cz> References: <489880C6.9020607@ntc.zcu.cz> Message-ID: On Tue, 05 Aug 2008 18:33:10 +0200 Robert Cimrman wrote: > Nils Wagner wrote: >> Hi Robert C., >> >> I saw that you have edited __init__.py today. >> [...] >> FAILED (errors=1) >> >> Nils > > Hi Nils, > > I have to do it so that it works with the latest SVN version > (0.7.0.dev4600), but may have done something wrong. The prior version is > ok for you? > > r. Robert, The problem is that IIRC scipy.testing is no longer used. Please correct me if I am missing something.

In [1]: import scipy

In [2]: scipy.te
scipy.tensordot scipy.test

In [2]: import numpy

In [3]: numpy.t
numpy.take      numpy.tensordot  numpy.tile       numpy.trapz       numpy.trim_zeros  numpy.typeDict   numpy.typename
numpy.tan       numpy.test       numpy.trace      numpy.tri         numpy.triu        numpy.typeNA
numpy.tanh      numpy.testing    numpy.transpose  numpy.tril        numpy.true_divide numpy.typecodes

Nils From swisher at enthought.com Tue Aug 5 12:28:52 2008 From: swisher at enthought.com (Janet Swisher) Date: Tue, 05 Aug 2008 11:28:52 -0500 Subject: [SciPy-dev] WTFM! In-Reply-To: References: Message-ID: <48987FC4.6060408@enthought.com> > From: Joe Harrington > SciPy Documentation Marathon 2008 Status Report > > We are now nearing the end of the summer. We have a ton of great > docstrings, a nice PDF and HTML reference guide, a new package with > pages on general topics like slicing, and a glossary. > This is the design for this year's documentation prize, a T-shirt in > Robert Kern black, designed by Teresa Jeffcott: > > http://physics.ucf.edu/~jh/scipyshirt-2008-2.png > > We'll hand these out at SciPy '08 to anyone who has written 1000 words > or more (according to the stats page) or who has made an equivalent > contribution in other ways (reviewing, wiki creation, etc.; Stefan and > I will judge).
> As always, further discussion belongs on the scipy-dev mailing list, > as does your request to be added to the doc wiki editors group, so > head over to the doc wiki main page, establish an account there, and > give us a shout! I'm delighted to hear about the progress on creating documentation. And I must get one of those t-shirts! I've created a 'swisher' login ID on the doc wiki. Please give me edit access. I think my efforts would be most usefully applied in reviewing and proofing. Should I just start picking things that are in the "Needs review" or "Reviewed" states? Are there particular areas where I should especially focus? -- Janet Swisher, Sr. Technical Writer Enthought, Inc., http://www.enthought.com From stefan at sun.ac.za Tue Aug 5 13:33:40 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Tue, 5 Aug 2008 19:33:40 +0200 Subject: [SciPy-dev] WTFM! In-Reply-To: <48987FC4.6060408@enthought.com> References: <48987FC4.6060408@enthought.com> Message-ID: <9457e7c80808051033p2c0c9634p210f557caebe2bc8@mail.gmail.com> 2008/8/5 Janet Swisher : > I'm delighted to hear about the progress on creating documentation. And > I must get one of those t-shirts! > > I've created a 'swisher' login ID on the doc wiki. Please give me edit > access. You're in! > I think my efforts would be most usefully applied in reviewing and > proofing. Should I just start picking things that are in the "Needs > review" or "Reviewed" states? Are there particular areas where I should > especially focus? "Needs review", please. We need as many eyes on those as possible. I merged all the docstrings into NumPy today, but I'll merge again before and during the conference. Thanks for helping! Stéfan From bsouthey at gmail.com Tue Aug 5 14:02:55 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 05 Aug 2008 13:02:55 -0500 Subject: [SciPy-dev] WTFM! In-Reply-To: <9457e7c80808051033p2c0c9634p210f557caebe2bc8@mail.gmail.com> References: <48987FC4.6060408@enthought.com> <9457e7c80808051033p2c0c9634p210f557caebe2bc8@mail.gmail.com> Message-ID: <489895CF.4000700@gmail.com> Hi, Can you also add me 'BruceSouthey'? Thanks Bruce Stéfan van der Walt wrote: > 2008/8/5 Janet Swisher : >> I'm delighted to hear about the progress on creating documentation. And >> I must get one of those t-shirts! >> >> I've created a 'swisher' login ID on the doc wiki. Please give me edit >> access. > > You're in! > > [...] > > Stéfan From stefan at sun.ac.za Tue Aug 5 14:46:54 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Tue, 5 Aug 2008 20:46:54 +0200 Subject: [SciPy-dev] WTFM! In-Reply-To: <489895CF.4000700@gmail.com> References: <48987FC4.6060408@enthought.com> <9457e7c80808051033p2c0c9634p210f557caebe2bc8@mail.gmail.com> <489895CF.4000700@gmail.com> Message-ID: <9457e7c80808051146q40e1df78md8a77ab13fe9ff41@mail.gmail.com> 2008/8/5 Bruce Southey : > Hi, > Can you also add me 'BruceSouthey'?
Done, and thanks! Cheers Stéfan From dsdale24 at gmail.com Tue Aug 5 19:20:19 2008 From: dsdale24 at gmail.com (Darren Dale) Date: Tue, 5 Aug 2008 19:20:19 -0400 Subject: [SciPy-dev] design of a physical quantities package: seeking comments In-Reply-To: <200808031909.01560.dsdale24@gmail.com> References: <200808021325.42976.dsdale24@gmail.com> <200808031254.12591.dsdale24@gmail.com> <200808031909.01560.dsdale24@gmail.com> Message-ID: <200808051920.20034.dsdale24@gmail.com> I worked through a bunch of issues today on the quantities package. One can now create a quantity by doing:

>>> import quantities, numpy
>>> q=quantities.Quantity([1,2,3.0], 'J')

or

>>> q=quantities.Quantity([1,2,3.0], quantities.J)

or

>>> q = numpy.array([1,2,3.0]) * quantities.J
>>> q
Quantity([ 1., 2., 3.]), kg * m^2 / s^2

I took the previous commenters' advice and made the following an error:

>>> q.units = 'ft'
IncompatibleUnits: Cannot convert between quanitites with units of 'kg m^2 s^-2' and 'ft'

Instead one can do:

>>> q.modify_units('ft')

or

>>> q.modify_units(quantities.ft)

The standard units will be decomposed to the fundamental dimensions, but there is a mechanism in place to preserve a compound unit, a function called compound, which can be used directly or referenced in a units string:

>>> quantities.Quantity(19,'compound("parsec/cm^3")*compound("J")')
Quantity(19), (parsec/cm**3) * (J)

or:

>>> q=quantities.Quantity(19,'compound("parsec/cm^3")*compound("m^3/m^2")')

or:

>>> q=19*quantities.compound("parsec/cm^3")*quantities.compound("m^3/m^2")
>>> q
Quantity(19.0), (m^3/m^2) * (parsec/cm^3)

and there is a mechanism to force compound units to be decomposed:

>>> q.reduce_units()
>>> q
Quantity(5.8627881999999992e+23), 1 / m

or they can be recomposed:

>>> q.units=quantities.compound("parsec/cm^3")*quantities.compound("m^3/m^2")
>>> q
Quantity(19.000000000000004), (m^3/m^2) * (parsec/cm^3)

I also started a unittest suite. A short description and directions to get the package are at http://dale.chess.cornell.edu/chess-wiki/Quantities. I'm pretty pleased with how everything is coming together. In fact, it's more capable than I had originally envisioned. I intend to continue writing unittests, working out bugs, writing some documentation, and cleaning up the code but I'll refrain from future posts unless interest picks up. Darren From dsdale24 at gmail.com Tue Aug 5 20:25:07 2008 From: dsdale24 at gmail.com (Darren Dale) Date: Tue, 5 Aug 2008 20:25:07 -0400 Subject: [SciPy-dev] design of a physical quantities package: seeking comments In-Reply-To: <48987CB3.6060806@llnl.gov> References: <200808021325.42976.dsdale24@gmail.com> <200808031909.01560.dsdale24@gmail.com> <48987CB3.6060806@llnl.gov> Message-ID: <200808052025.08213.dsdale24@gmail.com> Hi Charles, Since udunits2 is not officially released, and since udunits is working out so well, I think I am happy with things as they are now. Thanks, Darren On Tuesday 05 August 2008 12:15:47 Charles Doutriaux wrote: > Hi Darren, > > Obviously I didn't have time to look into the udunits2 pbm yet. But I > see you far along already. > > Do you still want me to look into that or are you happy with udunits? > > C.
> > Darren Dale wrote: > > [...] > > Anyway, feel free to kick the tires and send feedback. > > > > Darren From jh at physics.ucf.edu Tue Aug 5 15:52:00 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Tue, 05 Aug 2008 15:52:00 -0400 Subject: [SciPy-dev] WTFM! Message-ID: > I must get one of those t-shirts! Welcome aboard, Janet! One of the wiki crew will set you up. Check out the review criteria on the wiki, and go to it. We're distinguishing between review and proof because the content may change significantly in review, so mainly focus on 1) technical correctness/completeness and 2) readability to a non-expert when you review.
We want each doc to be complete for an expert but accessible to someone one notch below the minimum math needed to understand a topic. For example, FFTs are generally taught in the middle of college, so a math-savvy college freshman should be able to read the FFT page and the page should spend an early sentence or two (and no more) in the notes telling in general what an FFT is or is for. This also means that certain pages need to be accessible at the high-school level, such as the glossary and most of the np.doc package. We've tried to keep review and revision separate, so the reviewers generally don't make the changes they suggest. Someone who makes changes is a writer for that page and someone else needs to review those changes. Every word gets read by two heads! That said, you may find you make the best contribution cleaning up some of our writing rather than commenting on it. Given your expertise, if you disagree with any of the review criteria or procedures, do say so. --jh-- From nwagner at iam.uni-stuttgart.de Wed Aug 6 01:37:58 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 06 Aug 2008 07:37:58 +0200 Subject: [SciPy-dev] ImportError: No module named testing Message-ID:

======================================================================
ERROR: Failure: ImportError (No module named testing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/loader.py", line 363, in loadTestsFromName
    module = self.importer.importFromPath(
  File "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/usr/lib/python2.4/site-packages/scipy/cluster/tests/test_distance.py", line 39, in ?
    from scipy.testing import *
ImportError: No module named testing

From cimrman3 at ntc.zcu.cz Wed Aug 6 07:15:49 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 06 Aug 2008 13:15:49 +0200 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: References: <489880C6.9020607@ntc.zcu.cz> Message-ID: <489987E5.20308@ntc.zcu.cz> Nils Wagner wrote: > On Tue, 05 Aug 2008 18:33:10 +0200 > Robert Cimrman wrote: >> Nils Wagner wrote: >>> Hi Robert C., >>> >>> I saw that you have edited __init__.py today.
>>> [...] >> Robert, >> >> The problem is that IIRC scipy.testing is no longer used. >> Please correct me if I am missing something. >> [...] Hi Nils, it looks like you have numpy.testing too, so the fix might be ok. The error you see comes from test_umfpack.py not being updated to the new testing framework. So what is the current way of importing Tester? numpy.testing works for me. My installation behaves in the following way:

# original:
In [7]: from scipy.testing.pkgtester import Tester
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
/home/eldaran/
ImportError: No module named testing.pkgtester

# new:
In [8]: from numpy.testing import Tester
In [9]: import numpy
In [10]: numpy.__version__
Out[10]: '1.2.0.dev5615'
In [11]: import scipy
In [12]: scipy.__version__
Out[12]: '0.7.0.dev4603'

Related question: what versions of scipy/numpy should be scikits compatible with? I will try to update umfpack to work both with the last release and the svn. r.
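For what it's worth, the usual try/except import fallback would let a single __init__.py serve both layouts. This is only a sketch built from the two import paths shown above; whether both Testers expose the same interface is worth double-checking:

try:
    # layout of the older trees that still ship scipy.testing
    from scipy.testing.pkgtester import Tester
except ImportError:
    # current svn layout, where the Tester lives in numpy.testing
    from numpy.testing import Tester

# expose the conventional test entry point for the package
test = Tester().test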
From josef.pktd at gmail.com Wed Aug 6 12:47:37 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Aug 2008 12:47:37 -0400 Subject: [SciPy-dev] are nosetests in scipy.io.matlab dummies? Message-ID: <1cd32cbb0808060947x1512e5b3h22b114853b67c47@mail.gmail.com> Hi, I'm wondering whether the tests for scipy.io.matlab are actually executed when running nosetests, e.g. with ``nosetests scipy.io`` or with ``import scipy.io; scipy.io.test()``. After correcting some import errors (which may be due to my usage of scipy without a proper install) I get Ran 64 tests in 0.953s OK but my impression is that the tests in scipy\io\matlab\tests\test_mio.py don't test anything. I am messing up the tests so that they should fail, but they don't. When I unnest the _check functions, they do produce failures for the "screwed up" tests. Note: I'm on Windows XP and usually work from the binary release; when I am building scipy with setup.py, I am not sure everything in the build is ok, but it should not be related to the testing problem. ``nosetests-script.py scipy`` ends in a segfault. >>> scipy.test() Ran 1478 tests in 21.828s FAILED (errors=42) I was trying to write matlab .mat version 5 files, and since 0.6.0 only writes mat4 files, I tried out the trunk of scipy. When I try to savemat with format '5', then I get for some dicts "NameError: global name 'Mat5CellWriter' is not defined". As far as I can see mio5.py does not contain any Mat5CellWriter, which I found is already known: Ticket #653. Conclusion: I guess that in the transition to nose testing some tests did not get completely converted; the tests say ok, but don't test anything. Can somebody verify the actual coverage of the tests? If it is just my messy setup of the svn version of scipy and numpy, then I'm sorry for the noise and will wait for the next binary release. Josef
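That symptom is consistent with how nose collects tests: a helper nested inside a test function never runs unless it is called or yielded, so the enclosing test passes vacuously. A toy illustration (not the actual test_mio.py code):

def _check(x):
    assert x > 0

def test_good_cases():
    # a generator test: nose runs _check(x) once per yielded (callable, args) pair
    for x in [1, 2, 3]:
        yield _check, x

def test_vacuous():
    def _check_inner(x):
        assert x == 'this is never verified'
    # _check_inner is defined but never called or yielded,
    # so this "test" passes without checking anything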
>>>> from scipy.testing import * >>>> ImportError: No module named testing >>>> >>>> ---------------------------------------------------------------------- >>>> Ran 1 test in 0.088s >>>> >>>> FAILED (errors=1) >>>> >>>> >>>> Nils >>> Hi Nils, >>> >>> I have to do it so that it works with the latest SVN >>> version >>> (0.7.0.dev4600), but may have done something wrong. The >>> prior version is >>> ok for you? >>> >>> r. >> >> Robert, >> >> The problem is that IIRC scipy.testing is not longer >>used. >> Please correct me if I am missing something. >> >> In [1]: import scipy >> >> In [2]: scipy.te >> scipy.tensordot scipy.test >> >> In [2]: scipy.te >> scipy.tensordot scipy.test >> >> In [2]: import numpy >> >> In [3]: numpy.t >> numpy.take numpy.tensordot numpy.tile >> numpy.trapz numpy.trim_zeros >> numpy.typeDict numpy.typename >> numpy.tan numpy.test numpy.trace >> numpy.tri numpy.triu >> numpy.typeNA >> numpy.tanh numpy.testing numpy.transpose >> numpy.tril numpy.true_divide numpy.typecodes > > Hi Nils, > > it looks like you have numpy.testing too, so the fix >might be ok. > > The error you see comes from test_umfpack.py not being >updated to new > testing framework. So what is the current way of >importing Tester? > numpy.testing works for me. > > my installation behaves in the following way: > > # original: > In [7]: from scipy.testing.pkgtester import Tester > --------------------------------------------------------------------------- > ImportError Traceback >(most recent call last) > > /home/eldaran/ > > ImportError: No module named testing.pkgtester > > # new: > In [8]: from numpy.testing import Tester > > In [9]: import numpy > In [10]: numpy.__version__ > Out[10]: '1.2.0.dev5615' > In [11]: import scipy > In [12]: scipy.__version__ > Out[12]: '0.7.0.dev4603' > > > Related question: what versions of scipy/numpy should be >scikits > compatible with? I will try to update umfpack to work >both with the last > release and the svn. > > r. Robert, Will you update test_umfpack.py in the trunk ? BTW, will you have time to look at http://projects.scipy.org/scipy/scipy/ticket/452 ? I have attached a test case for a complex Hermitian standard eigenvalue problem. Cheers, Nils From josef.pktd at gmail.com Wed Aug 6 13:21:07 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 6 Aug 2008 13:21:07 -0400 Subject: [SciPy-dev] Ticket #552 - no errors with recent versions Message-ID: <1cd32cbb0808061021j6bcb1f7h4333f45739364f76@mail.gmail.com> I don't have any problem with newer versions of numpy with ticket 552 on WindowsXP Ticket #552 (new defect): linalg.svd fails for some matrices [Win32, SciPy 0.6.0, NumPy 1.0.4] Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> execfile(r'cpos_failure_mintestcase.py') H: [[ -8.19200144e+03 9.58422857e+00 -2.18886768e+00] [ 2.70299079e+00 2.32063587e+03 5.67734088e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00]] Now computing SVD of H Computed SVD of H. Result: (matrix([[ -1.00000000e+00, -1.39611407e-06, 0.00000000e+00], [ -1.39611407e-06, 1.00000000e+00, 0.00000000e+00], [ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]]), array([ 8192.0 0734196, 2320.64438555, 0. 
]), matrix([[ 9.99999279e-01, -1.170344 22e-03, 2.67194555e-04], [ 1.16968708e-03, 9.99996323e-01, 2.44645150e-03], [ -2.70056763e-04, -2.44613720e-03, 9.99996972e-01]])) >>> scipy.version.version '0.6.0' >>> import numpy >>> numpy.version.version '1.1.0' >>> Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> execfile(r'cpos_failure_mintestcase.py') H: [[ -8.192e+03 9.584e+00 -2.189e+00] [ 2.703e+00 2.321e+03 5.677e+00] [ 0.000e+00 0.000e+00 0.000e+00]] Now computing SVD of H Computed SVD of H. Result: (matrix([[ -1.000e+00, -1.396e-06, 0.000e+00], [ -1.396e-06, 1.000e+00, 0.000e+00], [ 0.000e+00, 0.000e+00, 1.000e+00]]), array([ 8192.007, 2320.644, 0. ]), matrix([[ 1.000e+00, -1.170e-03, 2.672e-04], [ 1.170e-03, 1.000e+00, 2.446e-03], [ -2.701e-04, -2.446e-03, 1.000e+00]])) >>> scipy.version.version '0.7.0.dev' >>> import numpy >>> numpy.version.version '1.2.0.dev5608' -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej at certik.cz Wed Aug 6 17:07:53 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Wed, 6 Aug 2008 23:07:53 +0200 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: <489987E5.20308@ntc.zcu.cz> References: <489880C6.9020607@ntc.zcu.cz> <489987E5.20308@ntc.zcu.cz> Message-ID: <85b5c3130808061407s7af6c51do9707fa3730057b34@mail.gmail.com> On Wed, Aug 6, 2008 at 1:15 PM, Robert Cimrman wrote: > Nils Wagner wrote: >> On Tue, 05 Aug 2008 18:33:10 +0200 >> Robert Cimrman wrote: >>> Nils Wagner wrote: >>>> Hi Robert C., >>>> >>>> I saw that you have edited __init__.py today. >>>> >>>>>>> from scikits import umfpack >>>>>>> umfpack.test() >>>> Running unit tests for scikits.umfpack >>>> NumPy version 1.2.0.dev5611 >>>> NumPy is installed in >>>> /usr/lib/python2.4/site-packages/numpy >>>> Python version 2.4 (#1, Oct 13 2006, 17:13:31) [GCC >>>> 3.3.5 >>>> 20050117 (prerelease) (SUSE Linux)] >>>> nose version 0.10.3 >>>> E >>>> ====================================================================== >>>> ERROR: Failure: ImportError (No module named testing) >>>> ---------------------------------------------------------------------- >>>> Traceback (most recent call last): >>>> File >>>> "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/loader.py", >>>> line 363, in loadTestsFromName >>>> module = self.importer.importFromPath( >>>> File >>>> "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/importer.py", >>>> line 39, in importFromPath >>>> return self.importFromDir(dir_path, fqname) >>>> File >>>> "/usr/lib/python2.4/site-packages/nose-0.10.3-py2.4.egg/nose/importer.py", >>>> line 84, in importFromDir >>>> mod = load_module(part_fqname, fh, filename, desc) >>>> File >>>> "/usr/lib/python2.4/site-packages/scikits/umfpack/tests/test_umfpack.py", >>>> line 13, in ? >>>> from scipy.testing import * >>>> ImportError: No module named testing >>>> >>>> ---------------------------------------------------------------------- >>>> Ran 1 test in 0.088s >>>> >>>> FAILED (errors=1) >>>> >>>> >>>> Nils >>> Hi Nils, >>> >>> I have to do it so that it works with the latest SVN >>> version >>> (0.7.0.dev4600), but may have done something wrong. The >>> prior version is >>> ok for you? >>> >>> r. >> >> Robert, >> >> The problem is that IIRC scipy.testing is not longer used. >> Please correct me if I am missing something. 
>> >> In [1]: import scipy >> >> In [2]: scipy.te >> scipy.tensordot scipy.test >> >> In [2]: scipy.te >> scipy.tensordot scipy.test >> >> In [2]: import numpy >> >> In [3]: numpy.t >> numpy.take numpy.tensordot numpy.tile >> numpy.trapz numpy.trim_zeros >> numpy.typeDict numpy.typename >> numpy.tan numpy.test numpy.trace >> numpy.tri numpy.triu numpy.typeNA >> numpy.tanh numpy.testing numpy.transpose >> numpy.tril numpy.true_divide numpy.typecodes > > Hi Nils, > > it looks like you have numpy.testing too, so the fix might be ok. > > The error you see comes from test_umfpack.py not being updated to new > testing framework. So what is the current way of importing Tester? > numpy.testing works for me. > > my installation behaves in the following way: > > # original: > In [7]: from scipy.testing.pkgtester import Tester > --------------------------------------------------------------------------- > ImportError Traceback (most recent call last) > > /home/eldaran/ > > ImportError: No module named testing.pkgtester > > # new: > In [8]: from numpy.testing import Tester > > In [9]: import numpy > In [10]: numpy.__version__ > Out[10]: '1.2.0.dev5615' > In [11]: import scipy > In [12]: scipy.__version__ > Out[12]: '0.7.0.dev4603' > > > Related question: what versions of scipy/numpy should be scikits > compatible with? I will try to update umfpack to work both with the last > release and the svn. Well Robert, maybe it's time you start sending patches for a review, no? :) I am sure Stefan would agree with me. :) Ondrej From millman at berkeley.edu Wed Aug 6 19:59:59 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 6 Aug 2008 16:59:59 -0700 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: <489987E5.20308@ntc.zcu.cz> References: <489880C6.9020607@ntc.zcu.cz> <489987E5.20308@ntc.zcu.cz> Message-ID: On Wed, Aug 6, 2008 at 4:15 AM, Robert Cimrman wrote: > Related question: what versions of scipy/numpy should be scikits > compatible with? I will try to update umfpack to work both with the last > release and the svn. Basically, the authors of the various scikits are free to make this determination themselves and should clearly state their dependencies. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From cimrman3 at ntc.zcu.cz Thu Aug 7 06:33:29 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 07 Aug 2008 12:33:29 +0200 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: References: <489880C6.9020607@ntc.zcu.cz> <489987E5.20308@ntc.zcu.cz> Message-ID: <489ACF79.2050907@ntc.zcu.cz> Nils Wagner wrote: > > Robert, > > Will you update test_umfpack.py in the trunk ? Try it now, please. It should work also with the previous release of SciPy. If you could verify it, it would be nice. > BTW, will you have time to look at > http://projects.scipy.org/scipy/scipy/ticket/452 ? > > I have attached a test case for a complex Hermitian > standard eigenvalue problem. Sorry, not until end of August (then I do not know yet :). But as we are just starting to solve complex problems with sfepy, I might need it myself, just cannot guess when. r. 
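A small compatibility shim makes the "work with both" goal concrete. A sketch, assuming (as the thread above suggests) that only the import location of Tester changed between the old scipy.testing and numpy.testing; this is an illustration, not the actual scikits.umfpack code:

try:
    # new location (numpy 1.2.0.dev / scipy 0.7.0.dev)
    from numpy.testing import Tester
except ImportError:
    # old location in earlier 0.7.0.dev snapshots
    from scipy.testing.pkgtester import Tester

test = Tester().test

A scikit's tests package could use something like this to run under both the last release and the svn versions.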
From cimrman3 at ntc.zcu.cz Thu Aug 7 06:40:04 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 07 Aug 2008 12:40:04 +0200 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: <85b5c3130808061407s7af6c51do9707fa3730057b34@mail.gmail.com> References: <489880C6.9020607@ntc.zcu.cz> <489987E5.20308@ntc.zcu.cz> <85b5c3130808061407s7af6c51do9707fa3730057b34@mail.gmail.com> Message-ID: <489AD104.6020305@ntc.zcu.cz> Ondrej Certik wrote: > On Wed, Aug 6, 2008 at 1:15 PM, Robert Cimrman wrote: >> Related question: what versions of scipy/numpy should be scikits >> compatible with? I will try to update umfpack to work both with the last >> release and the svn. > > Well Robert, maybe it's time you start sending patches for a review, no? :) > > I am sure Stefan would agree with me. :) Although I agree that reviews are nice, the patch was ok - before UMFPACK scikit did not work. I just missed the failing test, caused by test_umfpack.py not being updated too. Is there a standard procedure for this? I recall a discussion was here in May but missed its conclusions if there were any. Anyway this just shows how SVN sucks compared to e.g. Mercurial. I would have cloned my repo to freehg, let Nils verify it and then commit the changes - no patch review framework needed. ;) r. From cimrman3 at ntc.zcu.cz Thu Aug 7 06:42:34 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 07 Aug 2008 12:42:34 +0200 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: References: <489880C6.9020607@ntc.zcu.cz> <489987E5.20308@ntc.zcu.cz> Message-ID: <489AD19A.6050007@ntc.zcu.cz> Jarrod Millman wrote: > On Wed, Aug 6, 2008 at 4:15 AM, Robert Cimrman wrote: >> Related question: what versions of scipy/numpy should be scikits >> compatible with? I will try to update umfpack to work both with the last >> release and the svn. > > Basically, the authors of the various scikits are free to make this > determination themselves and should clearly state their dependencies. I see, thanks! r. From nwagner at iam.uni-stuttgart.de Thu Aug 7 06:55:44 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 07 Aug 2008 12:55:44 +0200 Subject: [SciPy-dev] scikits.umfpack In-Reply-To: <489AD104.6020305@ntc.zcu.cz> References: <489880C6.9020607@ntc.zcu.cz> <489987E5.20308@ntc.zcu.cz> <85b5c3130808061407s7af6c51do9707fa3730057b34@mail.gmail.com> <489AD104.6020305@ntc.zcu.cz> Message-ID: On Thu, 07 Aug 2008 12:40:04 +0200 Robert Cimrman wrote: > Ondrej Certik wrote: >> On Wed, Aug 6, 2008 at 1:15 PM, Robert Cimrman >> wrote: >>> Related question: what versions of scipy/numpy should be >>>scikits >>> compatible with? I will try to update umfpack to work >>>both with the last >>> release and the svn. >> >> Well Robert, maybe it's time you start sending patches >>for a review, no? :) >> >> I am sure Stefan would agree with me. :) > > Although I agree that reviews are nice, the patch was ok >- before > UMFPACK scikit did not work. I just missed the failing >test, caused by > test_umfpack.py not being updated too. > > Is there a standard procedure for this? I recall a >discussion was here > in May but missed its conclusions if there were any. > > Anyway this just shows how SVN sucks compared to e.g. >Mercurial. I would > have cloned my repo to freehg, let Nils verify it and >then commit the > changes - no patch review framework needed. ;) > > r. Works for me with r1205 | rc | 2008-08-07 12:29:20 +0200 (Thu, 07 Aug 2008) | 2 lines fixed test_umfpack.py for the latest SVN, added README Thank you very much ! 
>>> from scikits import umfpack
>>> umfpack.test()
...........
----------------------------------------------------------------------
Ran 11 tests in 0.610s

OK

Cheers, Nils

From uschmitt at mineway.de Thu Aug 7 10:00:36 2008 From: uschmitt at mineway.de (Uwe Schmitt) Date: Thu, 07 Aug 2008 16:00:36 +0200 Subject: [SciPy-dev] [mailinglist] Re: NNLS In-Reply-To: <3d375d730807241137s47df563av9ba9d19beba9ea65@mail.gmail.com> References: <48875105.7010708@mineway.de> <3d375d730807231546t72f468c4u1980650481ccb8a2@mail.gmail.com> <48883ECD.6000109@mineway.de> <3d375d730807241137s47df563av9ba9d19beba9ea65@mail.gmail.com> Message-ID: <489B0004.5070103@mineway.de>

Robert Kern schrieb: > On Thu, Jul 24, 2008 at 03:35, Uwe Schmitt wrote: > >> Robert Kern schrieb: >> >>> On Wed, Jul 23, 2008 at 17:47, Alan G Isaac wrote: >>> >>> Well, I'd prefer an f2py version rather than a ctypes version, but yes, please. >>> >>> >>> >> I had some problems because my local python.exe is from Enthought, which >> was compiled >> with MS Visual Studio. But I wanted g77 for compiling the Fortran code, >> which gives some problems when using f2py. >> > >

I was able to fix these problems, and I am starting to like f2py. The NNLS code is now available via SVN at http://public.procoders.net/nnls/nnls_with_f2py/

How can I contribute this code now? Is there, furthermore, any interest in code for

* ICA (independent component analysis)? I wrapped existing C code with f2py.
* NMF/NNMA (nonnegative matrix factorization / approximation)? This is pure Python/numpy code.

Greetings, Uwe

-- Dr. rer. nat. Uwe Schmitt F&E Mathematik mineway GmbH Science Park 2 D-66123 Saarbrücken Telefon: +49 (0)681 8390 5334 Telefax: +49 (0)681 830 4376 uschmitt at mineway.de www.mineway.de Geschäftsführung: Dr.-Ing. Mathias Bauer Amtsgericht Saarbrücken HRB 12339

From dpeterson at enthought.com Thu Aug 7 13:17:34 2008 From: dpeterson at enthought.com (Dave Peterson) Date: Thu, 07 Aug 2008 12:17:34 -0500 Subject: [SciPy-dev] [ANNOUNCE] EPD 2.5.2001 for OS X released! Message-ID: <489B2E2E.2040809@enthought.com>

I'm pleased to announce that Enthought has released the Enthought Python Distribution (EPD) 2.5.2001 for OS X! EPD is a distribution of the Python programming language (currently version 2.5.2) that includes over 60 additional libraries, including ETS 2.7.1. Please visit the EPD website (http://www.enthought.com/epd) to get the OS X release, or just to find out more information about EPD 2.5.2001.

So what's the big deal? In addition to making everyone's life easier with installation, EPD also represents a common suite of functionality deployed across platforms such as Windows XP, RedHat Linux, and now OS X 10.4 and above. The cross-platform promise of Python is better realized because it's trivial for everyone to get substantially the same set of libraries installed on their system with a single-click install.

What's the catch? You knew it was coming, huh? If you'd like to use EPD in a commercial or governmental entity, we do ask you to pay for an annual subscription to download and update EPD. For academics and non-profit, private-sector organizations, EPD is and will remain free.

Let's be clear, though. EPD is the bundle of software. People pay for the subscription to download the bundle. The included libraries are, of course, freely available separately under the terms of license for each individual package (this should sound familiar).
The terms for the bundle subscription are available at http://www.enthought.com/products/epdlicense.php. BTW, anyone can try it out for free for 30 days. If you have questions, check out the FAQ at http://www.enthought.com/products/epdfaq.php or drop us a line at epd-users at enthought.com. And just one more note: Enthought is deeply grateful to all those who have contributed to these libraries over the years. We?ve built a business around the things they allow us to do and we appreciate such a nice set of tools and the privilege of being part of the community that created them. From timcera at earthlink.net Thu Aug 7 19:53:13 2008 From: timcera at earthlink.net (Tim Cera) Date: Thu, 07 Aug 2008 19:53:13 -0400 Subject: [SciPy-dev] Can't login to documentation editor In-Reply-To: References: Message-ID: <489B8AE9.3080807@earthlink.net> Can't login to the documentation editor. Whether editing a page or trying to add a comment, it keeps asking for login/password. Kindest regards, Tim Cera From pav at iki.fi Thu Aug 7 20:32:44 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 8 Aug 2008 00:32:44 +0000 (UTC) Subject: [SciPy-dev] Documentation marathon engine code (pydocweb) Message-ID: Hi all, For those of you who are interested in this kind of a thing, source code for the "wiki" thing that we're using in the Numpy documentation marathon is currently available here, now that I managed to clean the tree up a bit: https://code.launchpad.net/~pauli-virtanen/scipy/pydocweb Documentation etc. is a bit sparse right now, but in principle if you've played with Django before, it's not very difficult to set up. Also, if you actually look at the code, some of it looks like it was written in a hurry. It was. Some refactoring is in my TODO queue, though. Enjoy, Pauli From pav at iki.fi Thu Aug 7 20:52:25 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 8 Aug 2008 00:52:25 +0000 (UTC) Subject: [SciPy-dev] Can't login to documentation editor References: <489B8AE9.3080807@earthlink.net> Message-ID: Hi, Thu, 07 Aug 2008 19:53:13 -0400, Tim Cera wrote: > Can't login to the documentation editor. Whether editing a page or > trying to add a comment, it keeps asking for login/password. Thanks, I reorganized the app a bit, and forgot to change the app name also in the database. Should work again now. -- Pauli Virtanen From millman at berkeley.edu Thu Aug 7 21:33:36 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 7 Aug 2008 18:33:36 -0700 Subject: [SciPy-dev] Documentation marathon engine code (pydocweb) In-Reply-To: References: Message-ID: On Thu, Aug 7, 2008 at 5:32 PM, Pauli Virtanen wrote: > For those of you who are interested in this kind of a thing, source code > for the "wiki" thing that we're using in the Numpy documentation marathon > is currently available here, now that I managed to clean the tree up a > bit: https://code.launchpad.net/~pauli-virtanen/scipy/pydocweb Excellent! Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From aisaac at american.edu Fri Aug 8 01:07:28 2008 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 08 Aug 2008 01:07:28 -0400 Subject: [SciPy-dev] numpy ``trapz`` and scipy ``cumtrapz`` Message-ID: <489BD490.1000405@american.edu> NumPy has ``trapz`` in the ``function_base`` module. SciPy has ``cumtrapz`` in the ``quadrature`` module. The code is essentially identical. Presumably this duplication should be eliminated by having SciPy import the NumPy function? 
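For reference, the claimed duplication is easy to check numerically; a minimal sketch, assuming cumtrapz is imported from its scipy.integrate location:

import numpy as np
from scipy.integrate import cumtrapz

x = np.linspace(0.0, np.pi, 101)
y = np.sin(x)
# The last entry of the cumulative trapezoid sums should equal the
# single number that numpy's trapz computes over the same data.
assert np.allclose(cumtrapz(y, x)[-1], np.trapz(y, x))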
Cheers, Alan Isaac From millman at berkeley.edu Sun Aug 10 22:16:21 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 10 Aug 2008 19:16:21 -0700 Subject: [SciPy-dev] ATTENTION: 0.7.0b1 tagged on Tuesday (8/12) Message-ID: Hey, I will be tagging 0.7.0b1 on Tuesday, Aug. 12th. There are a few test failures that I would like to have resolved before creating the tag. So if you see a test failure on the trunk that you think you can fix, please do so ASAP. I also wanted to mention that I plan to remove the stats.model package and the ndimage._registation and ndimage._segmenter modules from trunk before tagging. We were unable to get this code in a releasable condition. We plan to continue working on them and adding them back in time for the 0.8 release. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From nevion at gmail.com Mon Aug 11 18:03:01 2008 From: nevion at gmail.com (Jason Newton) Date: Mon, 11 Aug 2008 15:03:01 -0700 Subject: [SciPy-dev] patch for io/wavfile.py Message-ID: <48A0B715.3060604@gmail.com> Hi, I was unfortunate to discover a bug in io.wavfile, it assumes long to be 4 bytes on all architectures and on my amd64 box I was getting errors reading simple wavs until I made the changes contained in the patch. Basically just swapped out int for long in all the struct pack and unpack calls where it made sense to. -------------- next part -------------- A non-text attachment was scrubbed... Name: wavfile.diff Type: text/x-patch Size: 2062 bytes Desc: not available URL: From oliphant at enthought.com Mon Aug 11 19:13:01 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 11 Aug 2008 18:13:01 -0500 Subject: [SciPy-dev] patch for io/wavfile.py In-Reply-To: <48A0B715.3060604@gmail.com> References: <48A0B715.3060604@gmail.com> Message-ID: <48A0C77D.9050101@enthought.com> Jason Newton wrote: > Hi, > > I was unfortunate to discover a bug in io.wavfile, it assumes long to > be 4 bytes on all architectures and on my amd64 box I was getting > errors reading simple wavs until I made the changes contained in the > patch. Basically just swapped out int for long in all the struct pack > and unpack calls where it made sense to. Hi Jason, Try out the recent SVN. Thanks for the patch. -Travis From alan at ajackson.org Mon Aug 11 22:19:36 2008 From: alan at ajackson.org (Alan Jackson) Date: Mon, 11 Aug 2008 21:19:36 -0500 Subject: [SciPy-dev] Problem with F distribution, or with me? Message-ID: <20080811211936.7677846f@ajackson.org> I'm confused. I was working on the documentation for the F-distribution, and I'm getting results from the example that I don't understand. I set the numerator degrees of freedom to 1 ( 2 groups) and the denominator degrees of freedom to 48 (25 members in each group) If I look up in a table, F(p<.01) = 7.19. I checked on the website http://davidmlane.com/hyperstat/F_table.html with 1, 48 and F=7 and got P = 0.01099 But when I run f in numpy, I get no values larger than about 0.65 s = np.random.f(1, 48, 1000000) In [40]: max(s) Out[40]: 0.649036568048 I would have expected to see about 1% of the values > 7.19. Am I missing something stupid? -- ----------------------------------------------------------------------- | Alan K. Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. 
- Blake | -----------------------------------------------------------------------

From josef.pktd at gmail.com Tue Aug 12 15:43:54 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 12 Aug 2008 15:43:54 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? Message-ID: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com>

The problem is that the F distribution in distributions.c is missing a multiplication by the ratio of the degrees of freedom; compare rk_noncentral_f, which correctly applies the factor *dfden / dfnum:

http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/random/mtrand/distributions.c

226 double rk_f(rk_state *state, double dfnum, double dfden)
227 {
228     return rk_chisquare(state, dfnum) / rk_chisquare(state, dfden);
229 }
230
231 double rk_noncentral_f(rk_state *state, double dfnum, double dfden, double nonc)
232 {
233     return ((rk_noncentral_chisquare(state, dfnum, nonc)*dfden) /
234             (rk_chisquare(state, dfden)*dfnum));

Change line 228 to

    return (rk_chisquare(state, dfnum)*dfden) / (rk_chisquare(state, dfden)*dfnum);

Correctly distributed random variables require this normalization:

>>> np.sum(1/2.0*40.0*np.random.f(2, 40, 1000000)> 2.44037)
99891
>>> np.sum(1/1.0*48.0*np.random.f(1, 48, 1000000)> 7.19)
10118
>>> np.sum(1/1.0*48.0*np.random.f(1, 48, 1000000)> 7.19)
10174
>>> np.sum(1/1.0*48.0*np.random.f(1, 48, 1000000)> 7.19)
10043

Josef

From robert.kern at gmail.com Tue Aug 12 18:06:22 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Aug 2008 17:06:22 -0500 Subject: [SciPy-dev] Problem with F distribution, or with me? In-Reply-To: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com> References: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com> Message-ID: <3d375d730808121506y7e2dbd06kc4aacc017ce25184@mail.gmail.com>

On Tue, Aug 12, 2008 at 14:43, wrote: > > The problem is that the F distribution in distributions.c is missing > a multiplication by the ratio of the degrees of freedom; compare > rk_noncentral_f, which correctly applies the factor > > *dfden / dfnum

Well that's embarrassing. Thank you. Fixed in SVN.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From pav at iki.fi Tue Aug 12 18:25:56 2008 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 12 Aug 2008 22:25:56 +0000 (UTC) Subject: [SciPy-dev] TOMS 526 license? Message-ID:

I noticed that code from ACM TOMS 526 was added to a branch in the Scipy repository.

This may be a silly question, but I'll ask it just to be on the safe side: in case this code is going to eventually land in Scipy proper, did you check the ACM license conditions? It says here:

http://toms.acm.org/Authors.html#CopyrightandUseAgreement

that the TOMS codes fall by default under the ACM license, which is not really either BSD or GPL compatible. (Unfortunately, the netlib.org codes don't carry a license blurb...) Or has the situation for 526 been otherwise arranged, or was the situation different in 1978 when it was published?

-- Pauli Virtanen

From robert.kern at gmail.com Tue Aug 12 19:31:39 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Aug 2008 18:31:39 -0500 Subject: [SciPy-dev] TOMS 526 license?
In-Reply-To: References: Message-ID: <3d375d730808121631n3d796ee4m94cdbedaec27a3ee@mail.gmail.com> On Tue, Aug 12, 2008 at 17:25, Pauli Virtanen wrote: > > I noticed that code from ACM TOMS 526 was added to a branch in the Scipy > repository. > > This may be a silly question, but I'll ask it just to be on the safe > side: in case this code is going to eventually land in Scipy proper, did > you check the ACM license conditions? It says here: > > http://toms.acm.org/Authors.html#CopyrightandUseAgreement > > that the TOMS codes fall by default under the ACM license, which is not > really either BSD or GPL compatible. (Unfortunately, the netlib.org codes > don't carry a license blurb...) Or has the situation for 526 been > otherwise arranged, or was the situation different in 1978 when it was > published? Akima was an employee of the Federal Government (US Dept of Commerce). Near as I can tell, the code is public domain, and the ACM cannot claim copyright over it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue Aug 12 19:31:59 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 12 Aug 2008 18:31:59 -0500 Subject: [SciPy-dev] TOMS 526 license? In-Reply-To: <3d375d730808121631n3d796ee4m94cdbedaec27a3ee@mail.gmail.com> References: <3d375d730808121631n3d796ee4m94cdbedaec27a3ee@mail.gmail.com> Message-ID: <3d375d730808121631h38acc1a1p1b0a7ff1384fec7@mail.gmail.com> On Tue, Aug 12, 2008 at 18:31, Robert Kern wrote: > On Tue, Aug 12, 2008 at 17:25, Pauli Virtanen wrote: >> >> I noticed that code from ACM TOMS 526 was added to a branch in the Scipy >> repository. >> >> This may be a silly question, but I'll ask it just to be on the safe >> side: in case this code is going to eventually land in Scipy proper, did >> you check the ACM license conditions? It says here: >> >> http://toms.acm.org/Authors.html#CopyrightandUseAgreement >> >> that the TOMS codes fall by default under the ACM license, which is not >> really either BSD or GPL compatible. (Unfortunately, the netlib.org codes >> don't carry a license blurb...) Or has the situation for 526 been >> otherwise arranged, or was the situation different in 1978 when it was >> published? > > Akima was an employee of the Federal Government (US Dept of Commerce). > Near as I can tell, the code is public domain, and the ACM cannot > claim copyright over it. But it's worth asking, regardless. Alan? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Wed Aug 13 01:19:34 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Aug 2008 01:19:34 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? In-Reply-To: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com> References: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com> Message-ID: <1cd32cbb0808122219q3dc8f504w6218246f985e1676@mail.gmail.com> I wanted to compare the distributions in numpy.random with scipy.stats.distribution. When I found the kolmogorov_test in test_distributions.py, I was wondering why this test did not find the bug in the numpy random number generator. It seems that this test is much too weak, sample size = 30 and parameters between 1 and 2. 
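To make "stricter" concrete, here is a sketch of the kind of strengthened check meant; the distribution names and parameter range are illustrative, not the actual test code:

import numpy as np
from scipy import stats

np.random.seed(0)
for dist, nargs in [('f', 2), ('gamma', 1)]:
    # wider random parameter range (the original used roughly 1.0 + rand(nargs))
    args = tuple(1.0 + 10*np.random.rand(nargs))
    # much larger sample than the original N=30
    D, pval = stats.kstest(dist, '', args=args, N=10000)
    assert pval > 0.01, "%s: D=%g, pval=%g, args=%s" % (dist, D, pval, args)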
After I made the test stricter (increased its power), I get the rejection/test failure for the F distribution, but additionally I get 2 to 4 more failures: in fatiguelife and loggamma in all runs, and in genhalflogistic and genextreme only sometimes. Test results of an example run are below.

I did not see any obvious problem with my change to the test; the parameters used in the tests are not ruled out by anything I have seen in the doc strings or found with a quick google search. But I don't know these distributions well enough to tell whether there is anything wrong with the distributions themselves or with the tests.

Josef

I'm using
>>> numpy.version.version
'1.1.0'
>>> scipy.version.version
'0.6.0'

Failures with changed test_distributions.py
===============================

>>> execfile(r'C:\Programs\Python24\Lib\site-packages\scipy\stats\tests\test_distributions.py')
Found 73/73 tests for stats.tests.test_distributions
Found 10/10 tests for stats.tests.test_morestats
Found 107/107 tests for stats.tests.test_stats
...................FF......F.F...............F.............................Ties preclude use of exact statistic.
..Ties preclude use of exact statistic.
.................................................................................................................
======================================================================
FAIL: check_cdf (stats.tests.test_distributions.test_f)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "", line 9, in check_cdf
AssertionError: D = 0.493585929987; pval = 0.0; alpha = 0.01
args = (9.8771486774554127, 1.2819774801876884)
======================================================================
FAIL: check_cdf (stats.tests.test_distributions.test_fatiguelife)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "", line 9, in check_cdf
AssertionError: D = 0.101323526498; pval = 0.0; alpha = 0.001
args = (3.3139748541207283,)
======================================================================
FAIL: check_cdf (stats.tests.test_distributions.test_genextreme)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "", line 9, in check_cdf
AssertionError: D = 0.02902; pval = 0.0; alpha = 0.01
args = (10.616290590132825,)
======================================================================
FAIL: check_cdf (stats.tests.test_distributions.test_genhalflogistic)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "", line 9, in check_cdf
AssertionError: D = 0.02343; pval = 0.0; alpha = 0.01
args = (8.4724627096253382,)
======================================================================
FAIL: check_cdf (stats.tests.test_distributions.test_loggamma)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "", line 9, in check_cdf
AssertionError: D = 1.0; pval = 0.0; alpha = 0.01
args = (4.4259066194420793,)
----------------------------------------------------------------------
Ran 190 tests in 5.250s

FAILED (failures=5)
>>>

3 changes I made to scipy\stats\tests\test_distributions.py
===========================================

* increase the spread of the random parameters (*10)
* increase the sample size N

Note: this is from scipy 0.6.0, but the same parameters are used in the current trunk.

{{{
for dist in dists:
    distfunc = eval('stats.'+dist)
    nargs = distfunc.numargs
    alpha = 0.01
    if dist == 'fatiguelife':
        alpha = 0.001
    if dist == 'erlang':
        args = str((4,)+tuple(rand(2)))
    elif dist == 'frechet':
        args = str(tuple(2*rand(1))+(0,)+tuple(2*rand(2)))
    elif dist == 'triang':
        args = str(tuple(rand(nargs)))
    elif dist == 'reciprocal':
        vals = rand(nargs)
        vals[1] = vals[0] + 1.0
        args = str(tuple(vals))
    else:
        args = str(tuple(1.0+rand(nargs)*10))   # old was without *10
    exstr = r"""
class test_%s(NumpyTestCase):
    def check_cdf(self):
        D,pval = stats.kstest('%s','',args=%s,N=10000)   # old was N=30
        if (pval < %f):
            D,pval = stats.kstest('%s','',args=%s,N=100000)   # old was N=30
        #if (pval < %f):
        #    D,pval = stats.kstest('%s','',args=%s,N=30)
        assert (pval > %f), "D = " + str(D) + "; pval = " + str(pval) + "; alpha = " + str(alpha) + "\nargs = " + str(%s)
""" % (dist,dist,args,alpha,dist,args,alpha,dist,args,alpha,args)
    exec exstr
}}}

From josef.pktd at gmail.com Wed Aug 13 09:42:59 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Aug 2008 09:42:59 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? - error in stats.fatiguelife.rvs Message-ID: <1cd32cbb0808130642k7dcf5adfy63becc98669e16ab@mail.gmail.com>

It looks like there is an error in stats.fatiguelife.rvs.

The Kolmogorov test fails:
>>> stats.kstest('fatiguelife','',args=(5,),N=1000)
(0.093216666807115545, array(2.5853082230575808e-008))

Mean of sample:
>>> stats.fatiguelife.stats(5,moments='m')
array(13.5)
>>> np.mean(stats.fatiguelife.rvs(5,size=1000))
26.683858360164475
>>> np.mean(stats.fatiguelife.rvs(5,size=10000))
26.841525716395847
>>> np.mean(stats.fatiguelife.rvs(5,size=100000))
26.730694604009678
>>> np.mean(stats.fatiguelife.rvs(5,size=100000)/2)
13.469823793800416
>>> stats.fatiguelife.stats(3,moments='m')
array(5.5)
>>> np.mean(stats.fatiguelife.rvs(3,size=100000))
10.922712537094393
>>> np.mean(stats.fatiguelife.rvs(3,size=100000)/2)
5.5340854278553246

Variance of sample:
>>> stats.fatiguelife.stats(3,moments='v')
array(110.25)
>>> np.var(stats.fatiguelife.rvs(3,size=1000000))
440.1793356094052
>>> np.var(stats.fatiguelife.rvs(3,size=1000000)/2)
110.445022957997
>>> np.var(stats.fatiguelife.rvs(3,size=10000000)/2)
110.03364894832275
>>> stats.fatiguelife.stats(5,moments='v')
array(806.25)
>>> np.var(stats.fatiguelife.rvs(5,size=1000000))
3222.4271388000293
>>> np.var(stats.fatiguelife.rvs(5,size=1000000)/2)
809.29193071702855

The theoretical mean and cdf look correct according to http://www.itl.nist.gov/div898/handbook/eda/section3/eda366a.htm, but the random number generator is off by approximately a scale factor of 1/2.

Josef

From aisaac at american.edu Wed Aug 13 09:44:49 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 13 Aug 2008 09:44:49 -0400 Subject: [SciPy-dev] TOMS 526 license? In-Reply-To: References: Message-ID: <48A2E551.5040904@american.edu>

Pauli Virtanen wrote: > I noticed that code from ACM TOMS 526 was added to a branch in the Scipy > repository. > This may be a silly question, but I'll ask it just to be on the safe > side: in case this code is going to eventually land in Scipy proper, did > you check the ACM license conditions?
If this has not already been addressed, I recommend contacting: Deborah Cotton, Copyright & Permissions permissions AT acm.org ACM Publications 2 Penn Plaza, Suite 701** New York, NY 10121-0701 She has proved helfpul in releasing code to SciPy under a BSD license in the past. Cheers, Alan Isaac PS I'm happy to help with that process. From aisaac at american.edu Wed Aug 13 09:51:50 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 13 Aug 2008 09:51:50 -0400 Subject: [SciPy-dev] TOMS 526 license? In-Reply-To: <3d375d730808121631h38acc1a1p1b0a7ff1384fec7@mail.gmail.com> References: <3d375d730808121631n3d796ee4m94cdbedaec27a3ee@mail.gmail.com> <3d375d730808121631h38acc1a1p1b0a7ff1384fec7@mail.gmail.com> Message-ID: <48A2E6F6.80401@american.edu> Robert Kern wrote: > But it's worth asking, regardless. Alan? OK, I'll ask. Cheers, Alan From aisaac at american.edu Wed Aug 13 11:52:30 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 13 Aug 2008 11:52:30 -0400 Subject: [SciPy-dev] [Fwd: Re: contact info for Hiroshi Akima] Message-ID: <48A3033E.4000509@american.edu> Here is the view of the publications person at ITS. Alan Isaac -------------- next part -------------- An embedded message was scrubbed... From: Margaret Luebs Subject: RE: contact info for Hiroshi Akima Date: Wed, 13 Aug 2008 09:43:00 -0600 Size: 2872 URL: From guyer at nist.gov Wed Aug 13 11:22:35 2008 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 13 Aug 2008 11:22:35 -0400 Subject: [SciPy-dev] TOMS 526 license? In-Reply-To: <48A2E551.5040904@american.edu> References: <48A2E551.5040904@american.edu> Message-ID: On Aug 13, 2008, at 9:44 AM, Alan G Isaac wrote: > If this has not already been addressed, I recommend > contacting: > > Deborah Cotton, Copyright & Permissions > permissions AT acm.org > ACM Publications > 2 Penn Plaza, Suite 701** > New York, NY 10121-0701 > > She has proved helfpul in releasing code to SciPy under > a BSD license in the past. If for some reason that proves to be a problem, Robert is probably right about the implications of the code being developed by a DoC employee. The language we are told to use at NIST (also DoC) is: This software was developed at the National Institute of Standards and Technology by employees of the Federal Government in the course of their official duties. Pursuant to title 17 section 105 of the United States Code this software is not subject to copyright protection and is in the public domain. is an experimental system. NIST assumes no responsibility whatsoever for its use by other parties, and makes no guarantees, expressed or implied, about its quality, reliability, or any other characteristic. We would appreciate acknowledgement if the software is used. This software can be redistributed and/or modified freely provided that any derivative works bear some notice that they are derived from it, and any modified versions bear some notice that they have been modified. IANAL, but I don't believe that ACM could impose a more restrictive license on top of it even if they were inclined to do so. From josef.pktd at gmail.com Wed Aug 13 12:03:46 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Aug 2008 12:03:46 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? 
In-Reply-To: <1cd32cbb0808122219q3dc8f504w6218246f985e1676@mail.gmail.com> References: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com> <1cd32cbb0808122219q3dc8f504w6218246f985e1676@mail.gmail.com> Message-ID: <1cd32cbb0808130903k4505cf79w2b7e23678f46b580@mail.gmail.com>

I looked some more at the genextreme and loggamma distributions. Both seem to be ok, but only on a restricted domain. Maybe this could be mentioned in the doc strings.

genextreme: mean and variance of sample
   c     mean                  variance
-10.1    2.03183339039e+046    4.10875567714e+097
 -9.1    8.73524294338e+046    7.63037062101e+098
 -8.1    3.43173786542e+029    3.7041223192e+063
 -7.1    2.56715180769e+029    4.83202007123e+063
 -6.1    6.38255235148e+025    3.96122540324e+056
 -5.1    2.17899531771e+019    2.78413740065e+043
 -4.1    2.11131451031e+018    4.45696503066e+041
 -3.1    1023550562.61         2.58109364566e+022
 -2.1    173604.32446          1.15250794324e+015
 -1.1    17.5172037108         348742.573234
 -0.1    0.67925866308         2.2049366648
  0.9    0.0448402183177       0.929688939344
  1.9    -0.413704842079       3.82434734783
  2.9    -1.49614654365        58.0798269907
  3.9    -5.06385824194        1507.40696605
  4.9    -20.0955456516        63494.3016592
  5.9    -101.601673556        5611029.51369
  6.9    -668.732337727        464527444.003
  7.9    -3864.35379097        27915569490.2
  8.9    -44833.4387546        2.5781344271e+013
  9.9    -156511.731133        5.9148778182e+013
 10.9    -4208783.3068         4.1364771468e+017
 11.9    -12363752.6511        9.49784745096e+017
 12.9    -687931749.834        2.12662176368e+022
 13.9    -3564704408.19        1.49396435068e+023
 14.9    -55267493180.0        1.75088408867e+026
 15.9    -89255193706.2        1.68855387759e+026
 16.9    -1.44638018472e+013   1.95168337353e+031
 17.9    -2.82021450011e+013   5.28737543263e+031
 18.9    -2.70352320915e+014   2.76333229745e+033

genextreme: Kolmogorov test
   c     pval
-10.1    0.0743077659229 fail
 -9.1    0.42774246842
 -8.1    0.320666171255
 -7.1    0.140353007389
 -6.1    0.422004148643
 -5.1    0.235860485941
 -4.1    0.123131615734
 -3.1    0.0584578205023 fail
 -2.1    0.0806014275316 fail
 -1.1    0.551087520596
 -0.1    0.443666131702
  0.9    0.0897965413418 fail
  1.9    0.000687904138691 fail
  2.9    0.14533205744
  3.9    0.266177157829
  4.9    0.248863135101
  5.9    0.0525623128538 fail
  6.9    0.0189969790597 fail
  7.9    1.21972182576e-007 fail
  8.9    0.0 fail
  9.9    0.0 fail
 10.9    0.0 fail
 11.9    0.0 fail
 12.9    0.0 fail
 13.9    0.0 fail
 14.9    0.0 fail
 15.9    0.0 fail
 16.9    0.0 fail
 17.9    0.0 fail
 18.9    0.0 fail

loggamma: mean and variance of sample
   c     mean                variance
  0.1    1.#QNAN             1.#QNAN
  0.2    1.#QNAN             1.#QNAN
  0.3    1.#QNAN             1.#QNAN
  0.4    1.#QNAN             1.#QNAN
  0.5    1.#QNAN             1.#QNAN
  0.6    1.#QNAN             1.#QNAN
  0.7    1.#QNAN             1.#QNAN
  0.8    1.#QNAN             1.#QNAN
  0.9    1.#QNAN             1.#QNAN
  1.0    -0.576668705715     1.63525671992
  1.1    -0.525273410576     1.34235414411
  1.2    -0.428429138219     1.11702081261
  1.3    -0.341788789013     0.974156958424
  1.4    -0.241612900595     0.862346072551
  1.5    -0.136331803853     0.788179027654
  1.6    -0.0313490020812    0.725560746671
  1.7    0.0748869989073     0.678337971919
  1.8    0.185785501444      0.646038183747
  1.9    0.293612948064      0.632854109548
  2.0    0.422063781842      0.644406991859
  2.1    1.#QNAN             1.#QNAN
  2.2    1.#QNAN             1.#QNAN
  2.3    1.#QNAN             1.#QNAN
  2.4    1.#QNAN             1.#QNAN
  2.5    1.#QNAN             1.#QNAN
  2.6    1.#QNAN             1.#QNAN
  2.7    1.#QNAN             1.#QNAN
  2.8    1.#QNAN             1.#QNAN
  2.9    1.#QNAN             1.#QNAN
  3.0    1.#QNAN             1.#QNAN

loggamma: Kolmogorov test
   c     pval
  0.1    0.0 fail
  0.2    0.0 fail
  0.3    0.0 fail
  0.4    0.0 fail
  0.5    0.0 fail
  0.6    0.0 fail
  0.7    0.0 fail
  0.8    0.0 fail
  0.9    0.0 fail
  1.0    0.0944212682068 fail
  1.1    0.0721468077268 fail
  1.2    0.429110318521
  1.3    0.185750416387
  1.4    0.12670896322
  1.5    0.328821862924
  1.6    0.205984847372
  1.7    0.283064144052
  1.8    0.0290310978597 fail
  1.9    0.135995409763
  2.0    0.0864826709644 fail
  2.1    0.0 fail
  2.2    0.0 fail
  2.3    0.0 fail
  2.4    0.0 fail
  2.5    0.0 fail
  2.6    0.0 fail
  2.7    0.0 fail
  2.8    0.0 fail
  2.9    0.0 fail
  3.0    0.0 fail

This was produced with:

import numpy as np
import scipy.stats as stats

N=100000
#N=1000

print "\ngenextreme: mean and variance of sample"
print ' c   mean   variance'
for i in range(30):
    c = -10.1+i/1.0
    rn = stats.genextreme.rvs(c,size=N)
    print c, np.mean(rn), np.var(rn)

print "\ngenextreme: Kolmogorov test"
print ' c   pval'
for i in range(30):
    c = -10.1+i/1.0
    D, pval = stats.kstest('genextreme','',args=(c,),N=N)
    print c, pval, (pval<0.1 and 'fail' or '')

print "\nloggamma: mean and variance of sample"
print ' c   mean   variance'
for i in range(30):
    c = 0.100+i/10.0
    rn = stats.loggamma.rvs(c,size=N)
    print c, np.mean(rn), np.var(rn)

print "\nloggamma: Kolmogorov test"
print ' c   pval'
for i in range(30):
    c = 0.100+i/10.0
    D, pval = stats.kstest('loggamma','',args=(c,),N=N)
    print c, pval, (pval<0.1 and 'fail' or '')

From aisaac at american.edu Wed Aug 13 12:15:13 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 13 Aug 2008 12:15:13 -0400 Subject: [SciPy-dev] [mailinglist] Re: NNLS In-Reply-To: <489B0004.5070103@mineway.de> References: <48875105.7010708@mineway.de> <3d375d730807231546t72f468c4u1980650481ccb8a2@mail.gmail.com> <48883ECD.6000109@mineway.de> <3d375d730807241137s47df563av9ba9d19beba9ea65@mail.gmail.com> <489B0004.5070103@mineway.de> Message-ID: <48A30891.1000606@american.edu>

Uwe Schmitt wrote: > The NNLS code is now via SVN at > http://public.procoders.net/nnls/nnls_with_f2py/ > How can I contribute this code now ?

Did you get the needed information for this?

> Is there further any interest in code for > * ICA (independent component analysis) ? > I wrapped existing C code with f2py. > * NMF/NNMA (nonnegative matrix factorization / - approximation) ? > which is pure Python/numpy code.

I'd like to see the latter find its way into SciPy. (I'm not familiar with ICA; what's the application area?)

Cheers, Alan Isaac

From josef.pktd at gmail.com Wed Aug 13 14:44:14 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 13 Aug 2008 14:44:14 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? In-Reply-To: <1cd32cbb0808130903k4505cf79w2b7e23678f46b580@mail.gmail.com> References: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com> <1cd32cbb0808122219q3dc8f504w6218246f985e1676@mail.gmail.com> <1cd32cbb0808130903k4505cf79w2b7e23678f46b580@mail.gmail.com> Message-ID: <1cd32cbb0808131144o5b16f3e7g3d002e10a6b2a55a@mail.gmail.com>

I think stats.loggamma.rvs is wrong, or uses a definition that I cannot figure out.

It was still bugging me that I could not figure out what is going on with the loggamma random variables. When I compare them with the log transformation of a gamma random variable and with the loggamma distribution in R, it looks as if stats.loggamma.rvs were only correct for a parameter of 2. In "R", I get the same mean and variance as for np.log(stats.gamma.rvs(k,size=1000000)), and neither R nor np.log(stats.gamma.rvs(...)) has the same domain restriction as stats.loggamma.rvs. So, to me it seems that there is something fishy with stats.loggamma.rvs.
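The identity behind this check — if Y ~ Gamma(c), then log(Y) ~ LogGamma(c) — gives a one-line reference sampler to test against. A sketch (loggamma_rvs is a hypothetical helper here, not scipy's implementation):

import numpy as np
from scipy import stats

def loggamma_rvs(c, size=1):
    # log-transform of gamma draws; a correct stats.loggamma.rvs should
    # agree with this in distribution for every c, not just c = 2
    return np.log(stats.gamma.rvs(c, size=size))

# compare the reference sample against the distribution's own cdf
D, pval = stats.kstest(loggamma_rvs(5.0, size=10000), 'loggamma', args=(5.0,))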
used package in R: 'VGAM' Below are some comparison of the loggamma random variables generated by * stats.loggamma.rvs * np.log(stats.gamma.rvs(...)) * VGAM.rlgamma in "R" Josef Kolmogorov tests ------------------------ for parameter = 2 it looks ok >>> stats.kstest(np.log(stats.gamma.rvs(2,size=10000)),'loggamma',args=(2,)) (0.010148906122060208, array(0.12659359569633333)) >>> c=2;stats.kstest(np.log(stats.gamma.rvs(c,size=10000)),'loggamma',args=(c,)) (0.0058379222756740345, array(0.50383387183308759)) strange results with c different from 2 >>> c=2.5;stats.kstest(np.log(stats.gamma.rvs(c,size=10000)),'loggamma',args=(c,)) (0.24775311834187796, array(0.0)) >>> c=2.5;stats.kstest(stats.loggamma.rvs(c,size=10000),'loggamma',args=(c,)) (1.0, array(0.0)) >>> c=1.5;stats.kstest(stats.loggamma.rvs(c,size=10000),'loggamma',args=(c,)) (0.0069725721519633965, array(0.3764490603546865)) >>> c=1.5;stats.kstest(np.log(stats.gamma.rvs(c,size=10000)),'loggamma',args=(c,)) (0.12849906523207333, array(0.0)) Comparing mean and variance with "R" ------------------------------------------------------- for k=2: is ok >>> np.mean(stats.loggamma.rvs(2,size=10000)) 0.42482669940021239 >>> np.mean(np.log(stats.gamma.rvs(2,size=100000))) 0.42284709204863447 in R > mean(rlgamma(100000, location=0, scale=1, k=2)) [1] 0.4224025 >>> np.var(np.log(stats.gamma.rvs(2,size=10000))) 0.6449444777702128 >>> np.var(stats.loggamma.rvs(2,size=100000)) 0.64758634320780184 in R: > var(rlgamma(100000, location=0, scale=1, k=2)) [1] 0.6360736 for k=5: >>> np.mean(stats.loggamma.rvs(5,size=1000000)) 1.#QNAN >>> np.mean(np.log(stats.gamma.rvs(5,size=1000000))) 1.506037882165763 in R: > mean(rlgamma(1000000, location=0, scale=1, k=5)) [1] 1.506325 >>> np.var(stats.loggamma.rvs(5,size=1000000)) 1.#QNAN >>> np.var(np.log(stats.gamma.rvs(5,size=1000000))) 0.22151419947589021 > var(rlgamma(1000000, location=0, scale=1, k=5)) [1] 0.2212415 for other k: >>> np.var(stats.loggamma.rvs(1.5,size=1000000)) 0.78615045130829264 >>> np.var(np.log(stats.gamma.rvs(1.5,size=1000000))) 0.93399967614023915 in R: > var(rlgamma(1000000, location=0, scale=1, k=1.5)) [1] 0.9332276 >>> np.var(np.log(stats.gamma.rvs(10,size=1000000))) 0.10528311964943039 in R: > var(rlgamma(1000000, location=0, scale=1, k=10)) [1] 0.1052298 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Wed Aug 13 23:01:33 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 13 Aug 2008 23:01:33 -0400 Subject: [SciPy-dev] PyArray_FromDims and friends Message-ID: Hi all, In testing out svn scipy and numpy, I noticed some run-time errors from scipy.interpolate because the _fitpack module's c sources use PyArray_FromDims and PyArray_FromDimsAndData, which are now deprecated in numpy svn. I opened a ticket and made a patch for this particular case, but I'm not sure if there's an overall strategy (someone writes a very good regexp, say, and lets it loose), or if these will be fixed piecemeal. 
If the latter, here's the patch: http://scipy.org/scipy/scipy/ticket/723

Zach

From Per.Brodtkorb at ffi.no Thu Aug 14 07:12:00 2008 From: Per.Brodtkorb at ffi.no (Per.Brodtkorb at ffi.no) Date: Thu, 14 Aug 2008 13:12:00 +0200 Subject: [SciPy-dev] Enhancement proposal for generic.rvs method in scipy.stats.distributions Message-ID: <1ED225FF18AA8B48AC192F7E1D032C6E0C5C12@hbu-posten.ffi.no>

I would propose that the rvs method compute the common shape of the inputs (i.e., the shape, location, and scale parameters and the size information provided) according to numpy broadcasting rules. I find this feature practical, and it is similar to what Matlab does. Currently, the random number generators of the 1D distributions only allow scalar shape, location and scale parameters as input; the size of the output is determined only by the 'size' input variable.

pab

PS: One solution could be to redefine the rvs method in rv_continuous as follows:

def rvs(self,*args,**kwds):
    loc,scale,size=map(kwds.get,['loc','scale','size'],[None,None,1])
    args, loc, scale = self.__fix_loc_scale(args, loc, scale)
    cond = logical_and(self._argcheck(*args),(scale >= 0))
    if not all(cond):
        raise ValueError, "Domain error in arguments."
    cshape = common_shape(zeros(size),loc,scale,*args)
    #self._size = product(cshape)
    self._size = cshape
    vals = self._rvs(*args)
    return vals * scale + loc

where

def common_shape(*varargin):
    ''' Return the common shape of a sequence of arrays

    An error is raised if some of the arrays do not conform to the common
    shape according to the broadcasting rules in numpy.

    Example:
    >>> import pylab
    >>> A = pylab.rand(4,1)
    >>> B = 2
    >>> C = pylab.rand(1,5)
    >>> common_shape(A,B,C)
    (4, 5)
    '''
    varargout = atleast_1d(*varargin)
    if len(varargin) < 2:
        return tuple(varargout.shape)
    args_shape = [arg.shape for arg in varargout]  # map(shape, varargout)
    ndims = map(len, args_shape)
    ndim = max(ndims)
    Np = len(varargin)
    all_shapes = ones((Np, ndim), dtype=int)
    for ix, Nt in enumerate(ndims):
        all_shapes[ix, 0:Nt] = args_shape[ix]
    ndims = atleast_1d(ndims)
    if any(ndims == 0):
        all_shapes[ndims == 0, :] = 0
    comn_shape = numpy.max(all_shapes, axis=0)
    arrays_do_not_conform2common_shape = any(logical_and(all_shapes != comn_shape[newaxis,...], all_shapes != 1), axis=1)
    if any(arrays_do_not_conform2common_shape):
        raise ValueError('Non-scalar input arguments do not match in shape according to numpy broadcasting rules')
    return tuple(comn_shape)

From mellerf at netvision.net.il Thu Aug 14 09:29:44 2008 From: mellerf at netvision.net.il (Yosef Meller) Date: Thu, 14 Aug 2008 16:29:44 +0300 Subject: [SciPy-dev] PyArray_FromDims and friends In-Reply-To: References: Message-ID: <48A43348.2030006@netvision.net.il>

Zachary Pincus wrote: > In testing out svn scipy and numpy, I noticed some run-time errors > from scipy.interpolate because the _fitpack module's c sources use > PyArray_FromDims and PyArray_FromDimsAndData, which are now deprecated > in numpy svn.

What is the preferred way to do it now? _minpack uses them everywhere too.
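As an aside on Per's proposal above: numpy can already compute the broadcast shape that common_shape() derives by hand, so a future rvs might lean on np.broadcast instead. A sketch (assuming np.broadcast accepts the same mix of scalars and arrays):

import numpy as np

# np.broadcast applies the same rules common_shape() reimplements
b = np.broadcast(np.zeros((4, 1)), 2, np.zeros((1, 5)))
print b.shape   # (4, 5), matching the docstring example above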
From zachary.pincus at yale.edu Thu Aug 14 10:55:48 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 14 Aug 2008 10:55:48 -0400 Subject: [SciPy-dev] PyArray_FromDims and friends In-Reply-To: <48A43348.2030006@netvision.net.il> References: <48A43348.2030006@netvision.net.il> Message-ID: <5C95B98C-5922-4D34-B3D3-B23C49E29300@yale.edu>

> Zachary Pincus wrote: >> In testing out svn scipy and numpy, I noticed some run-time errors >> from scipy.interpolate because the _fitpack module's c sources use >> PyArray_FromDims and PyArray_FromDimsAndData, which are now >> deprecated >> in numpy svn. > > What is the preferred way to do it now? _minpack uses them > everywhere too.

As far as I can tell, PyArray_FromDims can be replaced with PyArray_SimpleNew -- they have the same function signature. If you want to be correct/avoid compiler warnings, you'd probably need to make sure to cast the second argument to (npy_intp*).

Likewise, PyArray_FromDimsAndData can be replaced with PyArray_SimpleNewFromData, with the same caveat about the cast.

So: it's a simple regexp to fix these two if you don't care about the casting, and a slightly more-involved one if you do. I'm not sure what's best here.

Zach

From doutriaux1 at llnl.gov Thu Aug 14 12:28:32 2008 From: doutriaux1 at llnl.gov (Charles Doutriaux) Date: Thu, 14 Aug 2008 09:28:32 -0700 Subject: [SciPy-dev] PyArray_FromDims and friends In-Reply-To: <5C95B98C-5922-4D34-B3D3-B23C49E29300@yale.edu> References: <48A43348.2030006@netvision.net.il> <5C95B98C-5922-4D34-B3D3-B23C49E29300@yale.edu> Message-ID: <48A45D30.8040508@llnl.gov>

You definitely want to cast to npy_intp! It did bite us when we went to 64bit!

C.

Zachary Pincus wrote: >> Zachary Pincus wrote: >> >>> In testing out svn scipy and numpy, I noticed some run-time errors >>> from scipy.interpolate because the _fitpack module's c sources use >>> PyArray_FromDims and PyArray_FromDimsAndData, which are now >>> deprecated >>> in numpy svn. >>> >> What is the preferred way to do it now? _minpack uses them >> everywhere too. >> > > As far as I can tell, PyArray_FromDims can be replaced with > PyArray_SimpleNew -- they have the same function signature. If you > want to be correct/avoid compiler warnings, you'd probably need to > make sure to cast the second argument to (npy_intp*). > > Likewise, PyArray_FromDimsAndData can be replaced with > PyArray_SimpleNewFromData, with the same caveat about the cast. > > So: it's a simple regexp to fix these two if you don't care about the > casting, and a slightly more-involved one if you do. I'm not sure > what's best here. > > Zach > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > >

From millman at berkeley.edu Thu Aug 14 16:22:29 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 14 Aug 2008 13:22:29 -0700 Subject: [SciPy-dev] PyArray_FromDims and friends In-Reply-To: <48A45D30.8040508@llnl.gov> References: <48A43348.2030006@netvision.net.il> <5C95B98C-5922-4D34-B3D3-B23C49E29300@yale.edu> <48A45D30.8040508@llnl.gov> Message-ID:

With the SciPy 0.7.0 release quickly approaching, I would really like to get rid of all the deprecated NumPy calls in SciPy. While I don't think they should hold up the beta releases, I would like them all removed before the first release candidate. All patches welcome.
There is no plan to systematically fix these, so please don't hesitate
to fix any area of the code that you feel comfortable working on.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


From millman at berkeley.edu  Thu Aug 14 16:26:28 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 14 Aug 2008 13:26:28 -0700
Subject: [SciPy-dev] test_arpack errors on 64bit Linux
Message-ID:

I am getting test_arpack errors on 64bit Linux, but not 32bit Linux.
Is anyone else seeing this?  I would like to get these fixed before
releasing the 0.7.0b1.

>>> scipy.test()
Running unit tests for scipy
NumPy version 1.2.0.dev5629
NumPy is installed in /usr/lib64/python2.5/site-packages/numpy
SciPy version 0.7.0.dev4637
SciPy is installed in /usr/lib64/python2.5/site-packages/scipy
Python version 2.5.1 (r251:54863, Jul 10 2008, 17:25:56) [GCC 4.1.2 20070925 (Red Hat 4.1.2-33)]
nose version 0.10.3

======================================================================
ERROR: test_nonsymmetric_modes (test_arpack.TestEigenNonSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 204, in test_nonsymmetric_modes
    self.eval_evec(m,typ,k,which)
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 186, in eval_evec
    eval,evec=eigen(a,k,which=which,**kwds)
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 220, in eigen
    raise RuntimeError("Error info=%d in arpack"%info)
RuntimeError: Error info=-9 in arpack

======================================================================
ERROR: test_starting_vector (test_arpack.TestEigenNonSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 214, in test_starting_vector
    self.eval_evec(self.symmetric[0],typ,k,which='LM',v0=v0)
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 186, in eval_evec
    eval,evec=eigen(a,k,which=which,**kwds)
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 220, in eigen
    raise RuntimeError("Error info=%d in arpack"%info)
RuntimeError: Error info=-9 in arpack

======================================================================
ERROR: test_starting_vector (test_arpack.TestEigenSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 116, in test_starting_vector
    self.eval_evec(self.symmetric[0],typ,k,which='LM',v0=v0)
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 95, in eval_evec
    eval,evec=eigen_symmetric(a,k,which=which,**kwds)
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 464, in eigen_symmetric
    raise RuntimeError("Error info=%d in arpack" % info)
RuntimeError: Error info=-9 in arpack

======================================================================
ERROR: test_symmetric_modes (test_arpack.TestEigenSymmetric)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 108, in test_symmetric_modes
    self.eval_evec(self.symmetric[0],typ,k,which)
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py", line 95, in eval_evec
    eval,evec=eigen_symmetric(a,k,which=which,**kwds)
  File "/usr/lib64/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py", line 464, in eigen_symmetric
    raise RuntimeError("Error info=%d in arpack" % info)
RuntimeError: Error info=-9 in arpack


From wnbell at gmail.com  Thu Aug 14 16:45:45 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 14 Aug 2008 16:45:45 -0400
Subject: [SciPy-dev] PyArray_FromDims and friends
In-Reply-To:
References: <48A43348.2030006@netvision.net.il>
	<5C95B98C-5922-4D34-B3D3-B23C49E29300@yale.edu>
	<48A45D30.8040508@llnl.gov>
Message-ID:

On Thu, Aug 14, 2008 at 4:22 PM, Jarrod Millman wrote:
>
> There is no plan to systematically fix these, so please don't hesitate
> to fix any area of the code that you feel comfortable working on.
>

scipy.sparse should be fixed as of r4645

-- 
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/


From millman at berkeley.edu  Thu Aug 14 18:06:12 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 14 Aug 2008 15:06:12 -0700
Subject: [SciPy-dev] test_blas failures
Message-ID:

I am getting test_blas failures on 64bit Linux, but not 32bit Linux.
Is anyone else seeing this?  I would like to get these fixed before
releasing the 0.7.0b1.

>>> scipy.test()
Running unit tests for scipy
NumPy version 1.2.0.dev5629
NumPy is installed in /usr/lib64/python2.5/site-packages/numpy
SciPy version 0.7.0.dev4637
SciPy is installed in /usr/lib64/python2.5/site-packages/scipy
Python version 2.5.1 (r251:54863, Jul 10 2008, 17:25:56) [GCC 4.1.2 20070925 (Red Hat 4.1.2-33)]
nose version 0.10.3

======================================================================
FAIL: test_asum (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 60, in test_asum
    assert_almost_equal(f([3,-4,5]),12)
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 0.0
 DESIRED: 12

======================================================================
FAIL: test_dot (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 69, in test_dot
    assert_almost_equal(f([3,-4,5],[2,5,1]),-9)
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 0.0
 DESIRED: -9

======================================================================
FAIL: test_nrm2 (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 80, in test_nrm2
    assert_almost_equal(f([3,-4,5]),math.sqrt(50))
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 0.0
 DESIRED: 7.0710678118654755

======================================================================
FAIL: test_lapack.test_all_lapack
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/case.py", line 182, in runTest
    self.test(*self.arg)
  File "/usr/lib64/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr
    assert_array_almost_equal(w,exact_w)
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 304, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 289, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 33.3333333333%)
 x: array([-0.66992444, 0.48769462, 9.18222713], dtype=float32)
 y: array([-0.66992434, 0.48769389, 9.18223045])

======================================================================
FAIL: test_lapack.test_all_lapack
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.5/site-packages/nose-0.10.3-py2.5.egg/nose/case.py", line 182, in runTest
    self.test(*self.arg)
  File "/usr/lib64/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange
    assert_array_almost_equal(w,exact_w[rslice])
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 304, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 289, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 33.3333333333%)
 x: array([-0.66992444, 0.48769462, 9.18222713], dtype=float32)
 y: array([-0.66992434, 0.48769389, 9.18223045])

======================================================================
FAIL: test_asum (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 61, in test_asum
    assert_almost_equal(f([3,-4,5]),12)
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 0.0
 DESIRED: 12

======================================================================
FAIL: test_complex_dotc (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 81, in test_complex_dotc
    assert_almost_equal(f([3j,-4,3-4j],[2,3j,1]),3-14j)
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: (-1.7314232451453736e-32+8.8281803252463475e-44j)
 DESIRED: (3-14j)

======================================================================
FAIL: test_complex_dotu (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 75, in test_complex_dotu
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: (-1.7314232451453736e-32+8.8281803252463475e-44j)
 DESIRED: (-9+2j)

======================================================================
FAIL: test_dot (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 70, in test_dot
    assert_almost_equal(f([3,-4,5],[2,5,1]),-9)
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 0.0
 DESIRED: -9

======================================================================
FAIL: test_nrm2 (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 87, in test_nrm2
    assert_almost_equal(f([3,-4,5]),math.sqrt(50))
  File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 0.0
 DESIRED: 7.0710678118654755

----------------------------------------------------------------------


From millman at berkeley.edu  Thu Aug 14 18:08:41 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 14 Aug 2008 15:08:41 -0700
Subject: [SciPy-dev] Fwd: scipy 32 bit tests
In-Reply-To: <96de71860808141448y2993b65eg6b07cdf898947143@mail.gmail.com>
References: <96de71860808141448y2993b65eg6b07cdf898947143@mail.gmail.com>
Message-ID:

I also have a report from Fernando about failures on 32bit Linux.

---------- Forwarded message ----------
From: Fernando Perez
Date: Thu, Aug 14, 2008 at 2:48 PM
Subject: scipy 32 bit tests
To: Jarrod Millman

Hey,

on 32 bit ubuntu, we get for scipy a bunch of these:

======================================================================
ERROR: test_var_in (test_wx_spec.TestWxConverter)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/weave/tests/test_wx_spec.py", line 31, in setUp
    self.s = wx_spec.wx_converter()
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/weave/wx_spec.py", line 72, in __init__
    common_base_converter.__init__(self)
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/weave/c_spec.py", line 74, in __init__
    self.init_info()
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/weave/wx_spec.py", line 127, in init_info
    cxxflags = get_wxconfig('cxxflags')
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/weave/wx_spec.py", line 32, in get_wxconfig
    raise RuntimeError, msg
RuntimeError: wx-config failed. Impossible to learn wxPython settings

it would be good to skip them gracefully rather than erroring out, so
we get less noise.
Some of:

======================================================================
FAIL: test whether all methods converge
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/sparse/linalg/isolve/tests/test_iterative.py", line 83, in test_convergence
    assert_equal(info,0)
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 180, in assert_equal
    assert desired == actual, msg
AssertionError:
Items are not equal:
 ACTUAL: -10
 DESIRED: 0

======================================================================
FAIL: test whether all methods accept a preconditioner
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/sparse/linalg/isolve/tests/test_iterative.py", line 111, in test_precond
    assert_equal(info,0)
  File "/home/fperez/usr/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 180, in assert_equal
    assert desired == actual, msg
AssertionError:
Items are not equal:
 ACTUAL: -10
 DESIRED: 0

These two appear like real failures.

So I think on 32 bit we just need to worry about the two failures
above, and hopefully handle the wx stuff more gracefully.

cheers,

f


From zachary.pincus at yale.edu  Thu Aug 14 18:12:33 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Thu, 14 Aug 2008 18:12:33 -0400
Subject: [SciPy-dev] PyArray_FromDims and friends
In-Reply-To:
References: <48A43348.2030006@netvision.net.il>
	<5C95B98C-5922-4D34-B3D3-B23C49E29300@yale.edu>
	<48A45D30.8040508@llnl.gov>
Message-ID: <3FA4ADBE-DD0B-4143-9D32-AA2971263FE2@yale.edu>

> With the SciPy 0.7.0 release quickly approaching, I would really like
> to get rid of all the deprecated NumPy calls in SciPy. While I
> don't think they should hold up the beta releases, I would like them
> all removed before the first release candidate. All patches welcome.
>
> There is no plan to systematically fix these, so please don't hesitate
> to fix any area of the code that you feel comfortable working on.

OK, then the patch in:
http://scipy.org/scipy/scipy/ticket/723
should do it for scipy.interpolate. It includes the proper
casting-to-(*npy_intp) for 64-bit platforms.

Zach


From millman at berkeley.edu  Thu Aug 14 19:37:17 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 14 Aug 2008 16:37:17 -0700
Subject: [SciPy-dev] scipy sandbox going, going, gone....
Message-ID:

Hey,

In preparation for the 0.7.0 beta release, I am going to remove
scipy.sandbox.  We are approaching 8 months since it was decided to
remove it.  If you are interested in why the sandbox was created and
why it is being removed, please read this:
http://jarrodmillman.blogspot.com/2007/12/end-of-scipy-sandbox.html

Most of the sandbox code has already been moved somewhere else, but
there is still some code that remains.  So my plan is to create a
branch called sandbox from the trunk on Saturday.  I will then remove
the sandbox from the trunk.
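For the record, the branch-then-delete dance is only two commands (a
sketch; $REPO stands in for the scipy svn root, and the log messages
are made up):

    svn copy $REPO/trunk $REPO/branches/sandbox -m "preserve the sandbox"
    svn rm $REPO/trunk/scipy/sandbox -m "remove sandbox from trunk"

so nothing is actually lost, just moved out of the way.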
Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


From hagberg at lanl.gov  Thu Aug 14 20:41:37 2008
From: hagberg at lanl.gov (Aric Hagberg)
Date: Thu, 14 Aug 2008 18:41:37 -0600
Subject: [SciPy-dev] test_arpack errors on 64bit Linux
In-Reply-To:
References:
Message-ID: <20080815004137.GC8070@frappa.lanl.gov>

On Thu, Aug 14, 2008 at 01:26:28PM -0700, Jarrod Millman wrote:
> I am getting test_arpack errors on 64bit Linux, but not 32bit Linux.
> Is anyone else seeing this?  I would like to get these fixed before
> releasing the 0.7.0b1.

I can verify that I get errors in the same tests - but slightly
different errors.  I see an info=-8 and info=-9999 return from ARPACK
in addition to info=-9:

c          = -8: Error return from LAPACK eigenvalue calculation;
c          = -9: Starting vector is zero.
c          = -10: IPARAM(7) must be 1,2,3,4.
c          = -11: IPARAM(7) = 1 and BMAT = 'G' are incompatable.
c          = -12: IPARAM(1) must be equal to 0 or 1.
c          = -9999: Could not build an Arnoldi factorization.
c                   IPARAM(5) returns the size of the current Arnoldi
c                   factorization.

I'm also seeing some other errors, e.g.:

======================================================================
FAIL: test_dot (test_blas.TestFBLAS1Simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/u/aric/lib/python/scipy/lib/blas/tests/test_blas.py", line 69, in test_dot
    assert_almost_equal(f([3,-4,5],[2,5,1]),-9)
  File "/nh/u/aric/lib/python/numpy/testing/utils.py", line 207, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 0.0
 DESIRED: -9

Is there some LAPACK/BLAS issue?

Aric


From oliphant at enthought.com  Fri Aug 15 02:37:50 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Fri, 15 Aug 2008 01:37:50 -0500
Subject: [SciPy-dev] scipy sandbox going, going, gone....
In-Reply-To:
References:
Message-ID: <48A5243E.4060603@enthought.com>

Jarrod Millman wrote:
> Hey,
>
> In preparation for the 0.7.0 beta release, I am going to remove
> scipy.sandbox.  We are approaching 8 months since it was decided to
> remove it.  If you are interested in why the sandbox was created and
> why it is being removed, please read this:
> http://jarrodmillman.blogspot.com/2007/12/end-of-scipy-sandbox.html
>
> Most of the sandbox code has already been moved somewhere else, but
> there is still some code that remains.  So my plan is to create a
> branch called sandbox from the trunk on Saturday.  I will then remove
> the sandbox from the trunk.
>
Great idea.  Thanks for doing this.

-Travis


From charlesr.harris at gmail.com  Fri Aug 15 11:55:26 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 15 Aug 2008 09:55:26 -0600
Subject: [SciPy-dev] PyArray_FromDims and friends
In-Reply-To: <3FA4ADBE-DD0B-4143-9D32-AA2971263FE2@yale.edu>
References: <48A43348.2030006@netvision.net.il>
	<5C95B98C-5922-4D34-B3D3-B23C49E29300@yale.edu>
	<48A45D30.8040508@llnl.gov>
	<3FA4ADBE-DD0B-4143-9D32-AA2971263FE2@yale.edu>
Message-ID:

On Thu, Aug 14, 2008 at 4:12 PM, Zachary Pincus wrote:

> > With the SciPy 0.7.0 release quickly approaching, I would really like
> > to get rid of all the deprecated NumPy calls in SciPy. While I
> > don't think they should hold up the beta releases, I would like them
> > all removed before the first release candidate. All patches welcome.
> >
> > There is no plan to systematically fix these, so please don't hesitate
> > to fix any area of the code that you feel comfortable working on.
>
> OK, then the patch in:
> http://scipy.org/scipy/scipy/ticket/723
> should do it for scipy.interpolate. It includes the proper
> casting-to-(*npy_intp) for 64-bit platforms.
>

No, no, you can't just cast the pointer to a different type, it needs
to point to npy_intp instead of int to start with, i.e., n has to be
an array of npy_intp.

Chuck


From zachary.pincus at yale.edu  Fri Aug 15 13:26:40 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Fri, 15 Aug 2008 13:26:40 -0400
Subject: [SciPy-dev] PyArray_FromDims and friends
In-Reply-To:
References: <48A43348.2030006@netvision.net.il>
	<5C95B98C-5922-4D34-B3D3-B23C49E29300@yale.edu>
	<48A45D30.8040508@llnl.gov>
	<3FA4ADBE-DD0B-4143-9D32-AA2971263FE2@yale.edu>
Message-ID: <2CC40A34-8316-4F21-B73E-E3A7E1148AB7@yale.edu>

> > OK, then the patch in:
> > http://scipy.org/scipy/scipy/ticket/723
> > should do it for scipy.interpolate. It includes the proper
> > casting-to-(*npy_intp) for 64-bit platforms.
>
> No, no, you can't just cast the pointer to a different type, it
> needs to point to npy_intp instead of int to start with, i.e., n has
> to be an array of npy_intp.

Thanks for looking that over. This is of course what happens when I
try to deal with C too late at night.

For cases where the dimension was 1, I had originally had:

[whatever] = (PyArrayObject *)PyArray_SimpleNew(1,&(npy_intp)n,PyArray_[whatever]);

But I got compiler warnings to the tune of "The argument to & is not
strictly an lvalue. This will be a hard error in the future." Which
makes sense, (npy_intp)n isn't a proper lvalue... So I changed it to:

[whatever] = (PyArrayObject *)PyArray_SimpleNew(1, (npy_intp*)&n,PyArray_[whatever]);

which, as you point out, is pretty daft.

Is there a correct, portable, and warning-free way to do this in one
line? I'm guessing not. Anyhow, I'll fix the patch with a multi-line
solution:

npy_intp intp_n;
...
intp_n = (npy_intp) n;
[whatever] = (PyArrayObject *)PyArray_SimpleNew(1,&intp_n,PyArray_[whatever]);

Sorry for the noise, and thanks Chuck for the catch.

Zach


From zachary.pincus at yale.edu  Fri Aug 15 14:35:56 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Fri, 15 Aug 2008 14:35:56 -0400
Subject: [SciPy-dev] semi-duplicated code in minpack/multipack.h files
Message-ID:

Hi all,

In looking into (properly) fixing the uses of PyArray_FromDims in
scipy.interpolate, I noticed that the following header files are
nearly identical (I assume the desire to have multiple copies of the
same file is to keep each of the scipy sub-packages completely
independent):

scipy/integrate/multipack.h
scipy/interpolate/multipack.h
scipy/optimize/minpack.h

Anyhow, various of these have what look like bug fixes which the
others don't. Also, many of the macros defined in this header are
rather problematic in that they rely on certain variables being
defined in the code around the macro (despite the fact that the
macros have arguments). See, e.g.:

#define SET_DIAG(ap_diag,o_diag,mode) {

which depends on a variable named 'n' being available (a toy
illustration follows below).

Any suggestions as to what to do here? Should I try to synch up the
files and fix the macros? This is kind of delicate stuff, in terms of
retaining portability, etc.
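To make the problem concrete, the pattern is something like this (a
toy reduction, not the actual header text; it assumes
numpy/arrayobject.h is included, as in the real headers):

    /* Takes arguments, but also quietly reads a variable named 'n'
       from whatever scope the macro is expanded in. */
    #define SET_DIAG(ap_diag, o_diag, mode) { \
        npy_intp dims[1]; \
        dims[0] = n;  /* compiles only if the caller defines 'n' */ \
        ap_diag = (PyArrayObject *)PyArray_SimpleNew(1, dims, NPY_DOUBLE); \
    }

If a caller renames its local 'n', the macro silently picks up the
wrong value or fails to compile.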
Zach


From neilcrighton at gmail.com  Sat Aug 16 10:38:54 2008
From: neilcrighton at gmail.com (Neil Crighton)
Date: Sat, 16 Aug 2008 15:38:54 +0100
Subject: [SciPy-dev] Guidelines for documenting parameter types
Message-ID: <63751c30808160738j65e278bfqc12ca5ab93ee741c@mail.gmail.com>

A few of us participating in the doc marathon
(http://sd-2116.dedibox.fr/pydocweb/wiki/Front%20Page/) have some
questions about documenting parameter types, and I thought it would be
good to get others' opinions.  If we can agree on some guidelines,
perhaps they could be incorporated into the docstring standard
(http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#docstring-standard)?

I don't mind what we end up deciding on, but I think it's a good idea
to address these situations in the guidelines so new people know what
to do, and can feel comfortable about cleaning up someone else's
docstring to match the guidelines (if necessary).  Maybe some of these
are pedantic, but I think they'll help to give the docs a more unified
feel and make sure it's always clear what parameter types are meant.

(1) When we mention types in the parameters, we are mostly using the
following abbreviations:

integer : int
float : float
boolean : bool
complex : complex
list : list
tuple : tuple

i.e. the same as the python function names for each type.  It would be
nice to say in the guidelines that these should be followed where
possible.

(2) Often it's useful to state the type of an input or returned array.
If we want to say the array returned by np.all is of type bool, what
should we say?  Possibilities used so far are

int array
array of int
array of ints

I prefer 'array of ints', because it is also suitable for tuples and
lists ('tuple of ints', or 'list of dtypes').  'int tuple' is just bad
:) .

(3) Many functions accept either sequences or scalars as input, and
then return arrays if the input was a sequence, or an array scalar if
the input was a scalar.  For example:

>>> a = np.sin(np.pi/2)
>>> type(a)
<type 'numpy.float64'>
>>> a = np.sin([np.pi/2,-np.pi/2])
>>> type(a)
<type 'numpy.ndarray'>

There was some discussion about the best way to handle this:

http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.arcsin/#discussion-sec
http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.arctan/#discussion-sec
http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.greater_equal/#discussion-sec

Stefan proposed that for these functions we just refer to the input
parameter type as array_like, and the return type as ndarray, since
these are both described as including scalars in the glossary,
http://sd-2116.dedibox.fr/pydocweb/doc/numpy.doc.reference.glossary/.
I think this is a good rule.  (Note that there is at least one proofed
docstring that breaks this rule
http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.greater/)

(4) Sometimes we need to specify more than one kind of type.  For
example, the shape parameter of zeros can be either an int or a
sequence of ints (but is not array_like, since it doesn't accept
nested sequences).  How should we write this?  Some possibilities are:

int or sequence of ints
{int, sequence of ints}

I much prefer 'int or sequence of ints' as to me it's clearer and
looks nicer.  Also the curly brackets are used when a parameter can
assume one of a set of fixed values (e.g. the kind keyword of argsort,
which can be one of {'quicksort','mergesort','heapsort'}), so I think
it is confusing to also use them in this case.

(5) For keyword arguments, the default value is often None.  In this
case we've been omitting None from the parameter types.
However, sometimes None is a valid input type but is not the default
(e.g. the axis keyword for argsort).  In this case I think it's a good
idea to include None as an explicit parameter.

I've posted to both the scipy-dev and numpy lists - I wasn't sure
which was best for this.

Neil


From hagberg at lanl.gov  Sat Aug 16 10:51:26 2008
From: hagberg at lanl.gov (Aric Hagberg)
Date: Sat, 16 Aug 2008 08:51:26 -0600
Subject: [SciPy-dev] test_arpack errors on 64bit Linux
In-Reply-To: <20080815004137.GC8070@frappa.lanl.gov>
References: <20080815004137.GC8070@frappa.lanl.gov>
Message-ID: <20080816145126.GA11927@bigjim1.lanl.gov>

On Thu, Aug 14, 2008 at 06:41:37PM -0600, Aric Hagberg wrote:
> On Thu, Aug 14, 2008 at 01:26:28PM -0700, Jarrod Millman wrote:
> > I am getting test_arpack errors on 64bit Linux, but not 32bit Linux.
> > Is anyone else seeing this?  I would like to get these fixed before
> > releasing the 0.7.0b1.
>
> I can verify that I get errors in the same tests - but slightly
> different errors.  I see an info=-8 and info=-9999 return from ARPACK
> in addition to info=-9
>
> Is there some LAPACK/BLAS issue?

It appears that the problem is with the ATLAS package on 64bit Linux.
I rebuilt ATLAS from source (atlas-3.8.2) and all of the tests pass.

This is the configuration that *doesn't* work (Ubuntu 64 bit):

>>> scipy.test()
Running unit tests for scipy
NumPy version 1.2.0.dev5654
NumPy is installed in /u/aric/lib/python/numpy
SciPy version 0.7.0.dev4645
SciPy is installed in /u/aric/lib/python/scipy
Python version 2.5.2 (r252:60911, Jul 31 2008, 17:31:22) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)]
nose version 0.10.0

... fail on arpack tests (and others)

$ dpkg -l |grep atlas
ii  atlas3-base 3.6.0-20.6
ii  atlas3-base-dev 3.6.0-20.6
ii  atlas3-headers 3.6.0-20.6

$ gfortran -v
Using built-in specs.
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.2.3 (Ubuntu 4.2.3-2ubuntu7)

Aric


From hagberg at lanl.gov  Sat Aug 16 10:55:19 2008
From: hagberg at lanl.gov (Aric Hagberg)
Date: Sat, 16 Aug 2008 08:55:19 -0600
Subject: [SciPy-dev] test_blas failures
In-Reply-To:
References:
Message-ID: <20080816145519.GB11927@bigjim1.lanl.gov>

On Thu, Aug 14, 2008 at 03:06:12PM -0700, Jarrod Millman wrote:
> I am getting test_blas failures on 64bit Linux, but not 32bit Linux.
> Is anyone else seeing this?  I would like to get these fixed before
> releasing the 0.7.0b1.

These errors go away after building the ATLAS library from source and
linking to that (see previous message on test_arpack errors).

Aric


From hagberg at lanl.gov  Sat Aug 16 11:09:15 2008
From: hagberg at lanl.gov (Aric Hagberg)
Date: Sat, 16 Aug 2008 09:09:15 -0600
Subject: [SciPy-dev] more arpack errors: OSX vecLib issue?
Message-ID: <20080816150914.GC11927@bigjim1.lanl.gov>

The ARPACK wrapper tests for complex and double complex matrices have
been commented out since they fail (Bus Error) on OSX/gfortran with
the standard vecLib framework.  They do work correctly with custom
ATLAS libraries and on other architectures and operating systems.

Perhaps this could be related to gfortran ABI issues (e.g.
http://scipy.org/scipy/scipy/ticket/238) or some other problem with
the vecLib library?  Can anyone help here?

See test_complex_symmetric_modes() and test_complex_nonsymmetric_modes()
in test_arpack.py.  I can produce a small failing example if that helps.

Aric


From dpeterson at enthought.com  Sat Aug 16 17:53:44 2008
From: dpeterson at enthought.com (Dave Peterson)
Date: Sat, 16 Aug 2008 16:53:44 -0500
Subject: [SciPy-dev] ANNOUNCE: ETS 3.0.0 released!
Message-ID: <48A74C68.80000@enthought.com>

Hello,

I'm pleased to announce that ETS 3.0.0 has just been tagged and
released!  Source distributions have been pushed to PyPi, and over the
next couple of hours Win32 and OSX binaries will also be uploaded to
PyPi.  This means you can install ETS, assuming you have the
prerequisite software installed, via the simple command:

easy_install ETS[nonets]

Please see the Install page on our wiki for more detailed installation
instructions: https://svn.enthought.com/enthought/wiki/Install

Developers of ETS will find that the projects' trunks have already been
bumped up to the next version numbers, so a simple "ets up" (or svn up)
should bring you up to date.  Others may wish to grab a complete new
checkout via "ets co ETS".  The release branches that had been created
are now removed.  The next release is currently expected to be ETS
3.0.1.

-- Dave

Enthought Tool Suite
---------------------------
The Enthought Tool Suite (ETS) is a collection of components developed
by Enthought and open source participants, which we use every day to
construct custom scientific applications.  It includes a wide variety
of components, including:

* an extensible application framework
* application building blocks
* 2-D and 3-D graphics libraries
* scientific and math libraries
* developer tools

The cornerstone on which these tools rest is the Traits package, which
provides explicit type declarations in Python; its features include
initialization, validation, delegation, notification, and visualization
of typed attributes.

More information is available for all the packages within ETS from the
Enthought Tool Suite development home page at
http://code.enthought.com/projects/tool-suite.php.

Testimonials
----------------
"I set out to rebuild an application in one week that had been
developed over the last seven years (in C by generations of post-docs).
Pyface and Traits were my cornerstones and I knew nothing about Pyface
or Wx.  It has been a hectic week.  But here ... sits in front of me a
nice application that does most of what it should.  I think this has
been a huge success. ... Thanks to the tools Enthought built, and
thanks to the friendly support from people on the [enthought-dev] list,
I have been able to build what I think is the best application so far.
I have built similar applications (controlling cameras for imaging
Bose-Einstein condensate) in C+MFC, Matlab, and C+labWindows, each time
it has taken me at least four times longer to get to a result I regard
as inferior.  So I just wanted to say a big "thank you".  Thank you to
Enthought for providing this great software open-source.  Thank you to
everybody on the list for your replies."
-- Gaël Varoquaux, Laboratoire Charles Fabry, Institut d'Optique,
Palaiseau, France

"I'm currently writing a realtime data acquisition/display application
-- I'm using Enthought Tool Suite and Traits, and Chaco for display.
IMHO, I think that in five years ETS/Traits will be the most commonly
used framework for scientific applications."
-- Gary Pajer, Department of Chemistry, Biochemistry and Physics,
Rider University, Lawrenceville NJ


From millman at berkeley.edu  Sat Aug 16 20:49:33 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 16 Aug 2008 17:49:33 -0700
Subject: [SciPy-dev] scipy sandbox going, going, gone....
In-Reply-To:
References:
Message-ID:

On Thu, Aug 14, 2008 at 4:37 PM, Jarrod Millman wrote:
> In preparation for the 0.7.0 beta release, I am going to remove
> scipy.sandbox.  We are approaching 8 months since it was decided to
> remove it.  If you are interested in why the sandbox was created and
> why it is being removed, please read this:
> http://jarrodmillman.blogspot.com/2007/12/end-of-scipy-sandbox.html
>
> Most of the sandbox code has already been moved somewhere else, but
> there is still some code that remains.  So my plan is to create a
> branch called sandbox from the trunk on Saturday.  I will then remove
> the sandbox from the trunk.

The sandbox code has been removed:
http://projects.scipy.org/scipy/scipy/changeset/4647
http://projects.scipy.org/scipy/scipy/changeset/4648

I wasn't sure what to do with a few things:

** A couple of docstrings in scipy/maxentropy/maxentutils.py:

def sparsefeatures(f, x, format='csc_matrix'):
    """ Returns an Mx1 sparse matrix of non-zero evaluations of the
    scalar functions f_1,...,f_m in the list f at the point x.

    If format='ll_mat', the PySparse module (or a symlink to it) must be
    available in the Python site-packages/ directory.  A trimmed-down
    version, patched for NumPy compatibility, is available in the SciPy
    sandbox/pysparse directory.
    """

def sparsefeaturematrix(f, sample, format='csc_matrix'):
    """Returns an (m x n) sparse matrix of non-zero evaluations of the scalar
    or vector functions f_1,...,f_m in the list f at the points
    x_1,...,x_n in the sequence 'sample'.

    If format='ll_mat', the PySparse module (or a symlink to it) must be
    available in the Python site-packages/ directory.  A trimmed-down
    version, patched for NumPy compatibility, is available in the SciPy
    sandbox/pysparse directory.
    """

And this example scipy/maxentropy/examples/bergerexamplesimulated.py
imports from the sandbox:

from scipy.sandbox import montecarlo

Any ideas about how we should handle these?

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


From stefan at sun.ac.za  Sat Aug 16 23:49:59 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sat, 16 Aug 2008 22:49:59 -0500
Subject: [SciPy-dev] Guidelines for documenting parameter types
In-Reply-To: <63751c30808160738j65e278bfqc12ca5ab93ee741c@mail.gmail.com>
References: <63751c30808160738j65e278bfqc12ca5ab93ee741c@mail.gmail.com>
Message-ID: <9457e7c80808162049p757c38f6kdcce5bc51149e2e@mail.gmail.com>

Hi Neil

2008/8/16 Neil Crighton :
> A few of us participating in the doc marathon
> (http://sd-2116.dedibox.fr/pydocweb/wiki/Front%20Page/) have some
> questions about documenting parameter types, and I thought it would be
> good to get others' opinions. If we can agree on some guidelines,
> perhaps they could be incorporated into the docstring standard
> (http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#docstring-standard)?

Thank you for bringing this conversation to the table.  I'm replying
only to scipy-dev, since I don't want to flood both mailing lists.

> (2) Often it's useful to state the type of an input or returned array.
> If we want to say the array returned by np.all is of type bool, what
> should we say? Possibilities used so far are
>
> int array
> array of int
> array of ints
>
> I prefer 'array of ints', because it is also suitable for tuples and
> lists ('tuple of ints', or 'list of dtypes'). 'int tuple' is just bad
> :) .

I like 'array of ints' too.  Unless there are objections, let's stick
to that.

> (4) Sometimes we need to specify more than one kind of type. For
> example, the shape parameter of zeros can be either an int or a
> sequence of ints (but is not array_like, since it doesn't accept
> nested sequences). How should we write this? Some possibilities are:
>
> int or sequence of ints
> {int, sequence of ints}

I like the first option, 'int or sequence of ints'.

> (5) For keyword arguments, the default value is often None. In this
> case we've been omitting None from the parameter types. However,
> sometimes None is a valid input type but is not the default (e.g. axis
> keyword for argsort). In this case I think it's a good idea to include
> None as an explicit parameter.

Good point.

Thank you for considering all of these cases.  I'll wait a few days
for more comments on the thread, but I like your suggestions above and
will incorporate them into the standard unless anyone objects.

Cheers
Stéfan


From oliphant at enthought.com  Sun Aug 17 00:16:43 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Sat, 16 Aug 2008 23:16:43 -0500
Subject: [SciPy-dev] scipy sandbox going, going, gone....
In-Reply-To:
References:
Message-ID: <48A7A62B.9040107@enthought.com>

Jarrod Millman wrote:
> On Thu, Aug 14, 2008 at 4:37 PM, Jarrod Millman wrote:
>
>> In preparation for the 0.7.0 beta release, I am going to remove
>> scipy.sandbox.  We are approaching 8 months since it was decided to
>> remove it.  If you are interested in why the sandbox was created and
>> why it is being removed, please read this:
>> http://jarrodmillman.blogspot.com/2007/12/end-of-scipy-sandbox.html
>>
>> Most of the sandbox code has already been moved somewhere else, but
>> there is still some code that remains.  So my plan is to create a
>> branch called sandbox from the trunk on Saturday.  I will then remove
>> the sandbox from the trunk.
>>
>
> The sandbox code has been removed:
> http://projects.scipy.org/scipy/scipy/changeset/4647
> http://projects.scipy.org/scipy/scipy/changeset/4648
>
> I wasn't sure what to do with a few things:
>
> ** A couple of docstrings in scipy/maxentropy/maxentutils.py:
>
> def sparsefeatures(f, x, format='csc_matrix'):
>     """ Returns an Mx1 sparse matrix of non-zero evaluations of the
>     scalar functions f_1,...,f_m in the list f at the point x.
>
>     If format='ll_mat', the PySparse module (or a symlink to it) must be
>     available in the Python site-packages/ directory.  A trimmed-down
>     version, patched for NumPy compatibility, is available in the SciPy
>     sandbox/pysparse directory.
>     """
>
> def sparsefeaturematrix(f, sample, format='csc_matrix'):
>     """Returns an (m x n) sparse matrix of non-zero evaluations of the scalar
>     or vector functions f_1,...,f_m in the list f at the points
>     x_1,...,x_n in the sequence 'sample'.
>
>     If format='ll_mat', the PySparse module (or a symlink to it) must be
>     available in the Python site-packages/ directory.  A trimmed-down
>     version, patched for NumPy compatibility, is available in the SciPy
>     sandbox/pysparse directory.
> """ > > > And this example scipy/maxentropy/examples/bergerexamplesimulated.py > imports from the sandbox: > from scipy.sandbox import montecarlo > > > Any ideas about how we should handle these? > Probably move the dependencies to scikits or remove them. I'd prefer seeing them moved to scikits. -Travis From fperez.net at gmail.com Sun Aug 17 01:03:57 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 16 Aug 2008 22:03:57 -0700 Subject: [SciPy-dev] Possible new multiplication operators for Python Message-ID: Hi all, [ please keep all replies to this only on the numpy list. I'm cc'ing the scipy ones to make others aware of the topic, but do NOT reply on those lists so we can have an organized thread for future reference] In the Python-dev mailing lists, there were recently two threads regarding the possibility of adding to the language new multiplication operators (amongst others). This would allow one to define things like an element-wise and a matrix product for numpy arrays, for example: http://mail.python.org/pipermail/python-dev/2008-July/081508.html http://mail.python.org/pipermail/python-dev/2008-July/081551.html It turns out that there's an old pep on this issue: http://www.python.org/dev/peps/pep-0225/ which hasn't been ruled out, simply postponed. At this point it seems that there is room for some discussion, and obviously the input of the numpy/scipy crowd would be very welcome. I volunteered to host a BOF next week at scipy so we could collect feedback from those present, but it's important that those NOT present at the conference can equally voice their ideas/opinions. So I wanted to open this thread here to collect feedback. We'll then try to have the bof next week at the conference, and I'll summarize everything for python-dev. Obviously this doesn't mean that we'll get any changes in, but at least there's interest in discussing a topic that has been dear to everyone here. Cheers, f From dpeterson at enthought.com Sun Aug 17 21:05:08 2008 From: dpeterson at enthought.com (Dave Peterson) Date: Sun, 17 Aug 2008 20:05:08 -0500 Subject: [SciPy-dev] [ANNOUNCE] EPD with Py2.5 v4.0.3001 Beta1 now available Message-ID: <48A8CAC4.7070409@enthought.com> Hello, Thanks to heroic efforts by Chris Galvan this weekend, and significant efforts by the team that finalized ETS 3.0.0 this week, we've been able to publish public beta releases of EPD with Py2.5 v4.0.30001 Beta1 for Windows and Mac OS X today. I've uploaded them to the downloads website and updated the EPD product pages to provide download links for the public. You can find the link to the betas here: http://www.enthought.com/products/epddownload.php Please give them a try and report any bugs to the EPD Trac site at https://svn.enthought.com/epd. In this release, EPD has been updated to include ETS 3.0.0, NumPy 1.1.1, IPython 0.9.beta, Matplotlib 0.98.1, Sphinx 0.4.2, pyhdf 0.8, VTK 5.0.4, wxPython 2.8.8.1, and many more updated projects. There are a few issues known at this time, but remember these are our first beta release of this version: * The included documentation hasn't been updated to the current versions of the third-party libraries. * Some of the product branding is not up-to-date with regard to the product name change to "EPD with Py2.5", nor with the version number of 4.0.30001 Beta 1. 
-- Dave


From bsouthey at gmail.com  Mon Aug 18 10:56:37 2008
From: bsouthey at gmail.com (Bruce Southey)
Date: Mon, 18 Aug 2008 09:56:37 -0500
Subject: [SciPy-dev] Guidelines for documenting parameter types
In-Reply-To: <63751c30808160738j65e278bfqc12ca5ab93ee741c@mail.gmail.com>
References: <63751c30808160738j65e278bfqc12ca5ab93ee741c@mail.gmail.com>
Message-ID: <48A98DA5.4020209@gmail.com>

Neil Crighton wrote:
> (1) When we mention types in the parameters, we are mostly using the
> following abbreviations:
>
> integer : int
> float : float
> boolean : bool
> complex : complex
> list : list
> tuple : tuple
>
> i.e. the same as the python function names for each type. It would be
> nice to say in the guidelines that these should be followed where
> possible.

I agree, with the addition of the default precision, because NumPy
supports multiple numerical precisions.  At least the output text or
notes section must indicate when NumPy changes the numerical precision:

>>> a=np.array([1,2,3], dtype=np.int8)
>>> type(np.mean(a))
<type 'numpy.float64'>
>>> a=np.array([1,2,3], dtype=np.float32)
>>> type(np.mean(a))
<type 'numpy.float32'>
>>> a=np.array([1,2,3], dtype=np.float128)
>>> type(np.mean(a))
<type 'numpy.float128'>

> (2) Often it's useful to state the type of an input or returned array.
> If we want to say the array returned by np.all is of type bool, what
> should we say? Possibilities used so far are
>
> int array
> array of int
> array of ints
>
> I prefer 'array of ints', because it is also suitable for tuples and
> lists ('tuple of ints', or 'list of dtypes'). 'int tuple' is just bad
> :) .

As you indicate in the next point, most functions accept multiple input
types, so this really applies to the output of a function.  Depending
on the function, the shape does not change (logical) or changes in a
specified way (overall or over a given axis), which the user needs to
know.  Most functions I know of tend to maintain the dtype (such as
sum) or make logical changes (mean may change the input type to
float64, logical functions change to boolean).  So while I probably
have not been consistent, I prefer using something like 'scalar' or
'array' of the (input) shape and dtype.

> (3) Many functions accept either sequences or scalars as input, and
> then return arrays if the input was a sequence, or an array scalar if
> the input was a scalar. For example:
>
>>>> a = np.sin(np.pi/2)
>>>> type(a)
> <type 'numpy.float64'>
>
>>>> a = np.sin([np.pi/2,-np.pi/2])
>>>> type(a)
> <type 'numpy.ndarray'>
>
> There was some discussion about the best way to handle this:
>
> http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.arcsin/#discussion-sec
> http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.arctan/#discussion-sec
> http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.greater_equal/#discussion-sec
>
> Stefan proposed that for these functions we just refer to the input
> parameter type as array_like, and the return type as ndarray, since
> these are both described as including scalars in the glossary,
> http://sd-2116.dedibox.fr/pydocweb/doc/numpy.doc.reference.glossary/.
> I think this is a good rule. (Note that there is at least one proofed
> docstring that breaks this rule
> http://sd-2116.dedibox.fr/pydocweb/doc/numpy.core.umath.greater/)

I think that the input must be treated differently than the output.
Although I have used 'array_like', it is not really correct, because
the input must be compatible with NumPy array creation (ndarray
compatible?).  Dictionaries don't work, and sparse matrix
representations don't work (as expected, but both are array-like).
It is not sufficient to say that the output is an ndarray, because that
does not describe the shape.  It is essential to know if you get back
the same shape as the input, or a scalar, 0-d array, 1-d array, etc.
Also whether the dtype changes: for example, logical functions return
boolean, and mean returns float64 even if the input was integer.
Consequently, I tried to be consistent by splitting the output
description into scalar (probably 0-d array) and array.

> (4) Sometimes we need to specify more than one kind of type. For
> example, the shape parameter of zeros can be either an int or a
> sequence of ints (but is not array_like, since it doesn't accept
> nested sequences). How should we write this? Some possibilities are:
>
> int or sequence of ints
> {int, sequence of ints}
>
> I much prefer 'int or sequence of ints' as to me it's clearer and
> looks nicer. Also the curly brackets are used when a parameter can
> assume one of a set of fixed values (e.g. the kind keyword of argsort,
> which can be one of {'quicksort','mergesort','heapsort'}), so I think
> it is confusing to also use them in this case.

I do not like using {}'s, because I start to read "dictionary", and the
second usage is an element of a list or tuple.  In the case of shape,
'np.zeros(3)' is equivalent to 'np.zeros((3))' but different from
'np.zeros((3,3))'.  For consistency, it should be clear that the shape
is a tuple and behaves like Python tuples: type((1)) is an int, so
NumPy automatically treats an int argument as a 1-d shape, i.e. as the
tuple (int).

> (5) For keyword arguments, the default value is often None. In this
> case we've been omitting None from the parameter types.

The Zen of Python (http://www.python.org/dev/peps/pep-0020/ or at the
Python prompt type: import this): "Explicit is better than implicit."

> I've posted to both the scipy-dev and numpy lists - I wasn't sure
> which was best for this.
>
> Neil
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

Regards
Bruce


From rmay31 at gmail.com  Mon Aug 18 20:58:27 2008
From: rmay31 at gmail.com (Ryan May)
Date: Mon, 18 Aug 2008 20:58:27 -0400
Subject: [SciPy-dev] SciPy 0.7 release - pending tickets
In-Reply-To:
References:
Message-ID: <48AA1AB3.9020700@gmail.com>

Nils Wagner wrote:
> Hi all,
>
> Ticket 704 can be closed.
> http://scipy.org/scipy/scipy/ticket/704
>
> The following tickets can be easily closed after an update
> of the docstring.
>
> http://scipy.org/scipy/scipy/ticket/677
> http://scipy.org/scipy/scipy/ticket/666
>
> The functions read_array and write_array are deprecated.
> Is it reasonable to close ticket
>
> http://scipy.org/scipy/scipy/ticket/568
>
> in this context ?
>
> Ticket 626 can be closed. Works for me.
> http://scipy.org/scipy/scipy/ticket/626

I'll put in another call to fix Ticket 581.
http://scipy.org/scipy/scipy/ticket/581

It's got a patch sitting there, which fixes scipy.signal.chebwin.  In
its current state, chebwin is pretty much useless to anyone doing
signal processing with more than a handful of points.
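For anyone who wants to see the symptom without digging into the
ticket, something like this makes it obvious (the sizes are just
examples; with the unpatched code the larger windows reportedly come
out degenerate rather than as sensible finite window coefficients):

    import numpy as np
    from scipy.signal import chebwin

    for n in (15, 53, 128):
        w = chebwin(n, at=100)  # Dolph-Chebyshev window, 100 dB attenuation
        print n, np.all(np.isfinite(w)), w.min(), w.max()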
Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


From stefan at sun.ac.za  Mon Aug 18 21:58:54 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 18 Aug 2008 18:58:54 -0700
Subject: [SciPy-dev] SciPy 0.7 release - pending tickets
In-Reply-To: <48AA1AB3.9020700@gmail.com>
References: <48AA1AB3.9020700@gmail.com>
Message-ID: <9457e7c80808181858n41e8cb0at585e2af22cd156c7@mail.gmail.com>

Hey Ryan

2008/8/18 Ryan May :
> I'll put in another call to fix Ticket 581.
> http://scipy.org/scipy/scipy/ticket/581
>
> It's got a patch sitting there, which fixes scipy.signal.chebwin.  In
> its current state, chebwin is pretty much useless to anyone doing signal
> processing with more than a handful of points.

I'd like to apply your patch.  Do you have a test-case for me so that
I can verify it is working correctly?

Thanks
Stéfan


From rmay31 at gmail.com  Tue Aug 19 22:17:35 2008
From: rmay31 at gmail.com (Ryan May)
Date: Tue, 19 Aug 2008 19:17:35 -0700
Subject: [SciPy-dev] SciPy 0.7 release - pending tickets
In-Reply-To: <9457e7c80808181858n41e8cb0at585e2af22cd156c7@mail.gmail.com>
References: <48AA1AB3.9020700@gmail.com>
	<9457e7c80808181858n41e8cb0at585e2af22cd156c7@mail.gmail.com>
Message-ID:

On Mon, Aug 18, 2008 at 6:58 PM, Stéfan van der Walt wrote:

> Hey Ryan
>
> 2008/8/18 Ryan May :
> > I'll put in another call to fix Ticket 581.
> > http://scipy.org/scipy/scipy/ticket/581
> >
> > It's got a patch sitting there, which fixes scipy.signal.chebwin.  In
> > its current state, chebwin is pretty much useless to anyone doing signal
> > processing with more than a handful of points.
>
> I'd like to apply your patch.  Do you have a test-case for me so that
> I can verify it is working correctly?

Ok, the numbers in this case look great when plotted out and give
sensible frequency-domain results.  The test cases are attached.
There are two cases, one for an even number of points and one for odd
(since this hits different branches).

Hope this works,

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_chebwin.py
Type: application/octet-stream
Size: 1538 bytes
Desc: not available


From pgmdevlist at gmail.com  Thu Aug 21 14:33:58 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 21 Aug 2008 14:33:58 -0400
Subject: [SciPy-dev] Adding a configuration file to a scikit
Message-ID: <200808211433.59219.pgmdevlist@gmail.com>

All,

[short] What is the best way to define a configuration file for a
scikit?

[long]
I'm writing a library where some modules need to access some default
variables.  The easiest is to define a configuration file (say,
.configrc) and use ConfigParser to read the needed variables.  How
must I modify the setup.py of the package so that .configrc gets
written system-wide (installed along with the package) AND locally (to
the user's $HOME directory)?

[bonus question] The setup may modify some of the options of
.configrc, so I may have to use a configrc.template and write
.configrc during the build, from the template.  What's the best way to
do it in a scikits-friendly setup?

Thanks a lot in advance for any pointer.
P.
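P.S. For concreteness, the read side I have in mind is nothing fancy;
a minimal sketch (the section and option names are made up):

    import os
    from ConfigParser import SafeConfigParser

    parser = SafeConfigParser()
    parser.add_section('main')
    parser.set('main', 'verbose', 'False')  # hard-coded fallback
    # system-wide copy shipped with the package, then the user's override
    parser.read([os.path.join(os.path.dirname(__file__), '.configrc'),
                 os.path.expanduser('~/.configrc')])
    verbose = parser.getboolean('main', 'verbose')

The open question is really only how setup.py should put those two
.configrc copies in place.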
From robert.kern at gmail.com  Thu Aug 21 14:58:07 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 21 Aug 2008 11:58:07 -0700
Subject: [SciPy-dev] Adding a configuration file to a scikit
In-Reply-To: <200808211433.59219.pgmdevlist@gmail.com>
References: <200808211433.59219.pgmdevlist@gmail.com>
Message-ID: <3d375d730808211158m620d9570t89183f6bde4c4326@mail.gmail.com>

On Thu, Aug 21, 2008 at 11:33, Pierre GM wrote:
> All,
>
> [short] What is the best way to define a configuration file for a scikit?
>
> [long]
> I'm writing a library where some modules need to access some default
> variables. The easiest is to define a configuration file (say, .configrc)
> and use ConfigParser to read the needed variables. How must I modify the
> setup.py of the package so that .configrc gets written system-wide
> (installed along with the package) AND locally (to the user's $HOME
> directory)?

distutils does not really support this well.  Instead, take a look at
how matplotlib does this.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


From pgmdevlist at gmail.com  Thu Aug 21 15:52:37 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 21 Aug 2008 15:52:37 -0400
Subject: [SciPy-dev] Adding a configuration file to a scikit
In-Reply-To: <3d375d730808211158m620d9570t89183f6bde4c4326@mail.gmail.com>
References: <200808211433.59219.pgmdevlist@gmail.com>
	<3d375d730808211158m620d9570t89183f6bde4c4326@mail.gmail.com>
Message-ID: <200808211552.38245.pgmdevlist@gmail.com>

On Thursday 21 August 2008 14:58:07 Robert Kern wrote:
> On Thu, Aug 21, 2008 at 11:33, Pierre GM wrote:
> > All,
> >
> > [short] What is the best way to define a configuration file for a scikit?
>
> distutils does not really support this well.  Instead, take a look at
> how matplotlib does this.

And when using setuptools?


From robert.kern at gmail.com  Thu Aug 21 17:23:26 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 21 Aug 2008 14:23:26 -0700
Subject: [SciPy-dev] Adding a configuration file to a scikit
In-Reply-To: <200808211552.38245.pgmdevlist@gmail.com>
References: <200808211433.59219.pgmdevlist@gmail.com>
	<3d375d730808211158m620d9570t89183f6bde4c4326@mail.gmail.com>
	<200808211552.38245.pgmdevlist@gmail.com>
Message-ID: <3d375d730808211423p6edaa0efw10d83fcbb556f60c@mail.gmail.com>

On Thu, Aug 21, 2008 at 12:52, Pierre GM wrote:
> On Thursday 21 August 2008 14:58:07 Robert Kern wrote:
>> On Thu, Aug 21, 2008 at 11:33, Pierre GM wrote:
>> > All,
>> >
>> > [short] What is the best way to define a configuration file for a scikit?
>
>> distutils does not really support this well.  Instead, take a look at
>> how matplotlib does this.
>
> And when using setuptools?

No help there.  Look at matplotlib.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


From benny.malengier at gmail.com  Fri Aug 22 06:23:43 2008
From: benny.malengier at gmail.com (Benny Malengier)
Date: Fri, 22 Aug 2008 12:23:43 +0200
Subject: [SciPy-dev] dae solvers
Message-ID:

Hello,

I need a differential algebraic equation solver, and did not see this
already in scipy.
I started a new module, dae.py, based on ode.py, to interface with
ddaspk.f and possibly also lsodi.f from netlib.org (less good, but
there is a patch in the tracker).

I think this approach is better than the patch given in
http://projects.scipy.org/scipy/scipy/ticket/615 for lsodi, as DAEs
essentially mean solving G(y', y, t) = 0, which does not fit in the
y' = f(y, t) scheme needed for ode.py.

Is there broader interest for this?  That is, should I take the extra
effort to create a nice patch for scipy?  Time is money and all that.

Benny Malengier


From rob.clewley at gmail.com  Fri Aug 22 11:32:32 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Fri, 22 Aug 2008 11:32:32 -0400
Subject: [SciPy-dev] dae solvers
In-Reply-To:
References:
Message-ID:

> I need a differential algebraic equation solver, and did not see this
> already in scipy.

This was discussed very recently on the scipy-user list, and if you
search for

scipy "differential algebraic"

on the googles you'll find a few pointers to the existing attempts to
solve DAEs using python.

-Rob


From eads at soe.ucsc.edu  Sat Aug 23 11:42:45 2008
From: eads at soe.ucsc.edu (Damian Eads)
Date: Sat, 23 Aug 2008 09:42:45 -0600
Subject: [SciPy-dev] sprint: IRC chat?
Message-ID: <48B02FF5.5020109@soe.ucsc.edu>

Friends,

Are we meeting over IRC chat?  I'd like to help with the sprint, but
remotely.  I have to leave LA today, unfortunately.

Thanks!

Damian


From robert.kern at gmail.com  Sun Aug 24 03:54:06 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 24 Aug 2008 00:54:06 -0700
Subject: [SciPy-dev] Problem with F distribution, or with me?
In-Reply-To: <1cd32cbb0808131144o5b16f3e7g3d002e10a6b2a55a@mail.gmail.com>
References: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com>
	<1cd32cbb0808122219q3dc8f504w6218246f985e1676@mail.gmail.com>
	<1cd32cbb0808130903k4505cf79w2b7e23678f46b580@mail.gmail.com>
	<1cd32cbb0808131144o5b16f3e7g3d002e10a6b2a55a@mail.gmail.com>
Message-ID: <3d375d730808240054he1e55c4i1eae2523e4d31a8@mail.gmail.com>

On Wed, Aug 13, 2008 at 11:44, wrote:
> I think, stats.loggamma.rvs is wrong or uses a definition that I cannot
> figure out.

It isn't related to log(gamma.rvs()).  It is the same distribution as
the "standard" version of lgammaff in VGAM:

  http://rss.acs.unt.edu/Rdoc/library/VGAM/html/lgammaff.html

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


From robert.kern at gmail.com  Sun Aug 24 04:35:28 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sun, 24 Aug 2008 01:35:28 -0700
Subject: [SciPy-dev] Problem with F distribution, or with me? - error in
	stats.fatiguelife.rvs
In-Reply-To: <1cd32cbb0808130642k7dcf5adfy63becc98669e16ab@mail.gmail.com>
References: <1cd32cbb0808130642k7dcf5adfy63becc98669e16ab@mail.gmail.com>
Message-ID: <3d375d730808240135p382ce7ch105d51b7228b73be@mail.gmail.com>

On Wed, Aug 13, 2008 at 06:42, wrote:
> It looks like that there is an error in stats.fatiguelife.rvs

Correct.  I have a fix.  When I run the scipy test suite tomorrow, I'll
check it in.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco

From benny.malengier at gmail.com Sun Aug 24 08:18:01 2008
From: benny.malengier at gmail.com (Benny Malengier)
Date: Sun, 24 Aug 2008 14:18:01 +0200
Subject: [SciPy-dev] dae solvers
In-Reply-To:
References:
Message-ID:

Thanks for the pointer Rob.

I knew of Sundials, but interfacing the C code API from scipy looked like
more work than just using the old fortran programs with f2py (there is
some code in the ascend package that interfaces sundials,
http://ascendwiki.cheme.cmu.edu/Category:Solvers , but that would mean a
round trip python -> ascend api -> sundials). Guess I should have googled
a bit longer.

As pysundials is there, I do think it should be integrated somehow in
scipy. What is the use of the ode class and odepack in scipy, which
interface the old vode/lsode fortran programs, when pysundials exists
and, through it, the new cvode and ida solvers? So perhaps ode.py should
obtain a backend to that? Or should scipy just no longer offer ode
solvers...

Well, I'm not a scipy dev, just my 2 cents. It always amazes me how much
duplication is going on. I'll investigate my options further and make up
my mind next week on how to proceed.

Benny

2008/8/22 Rob Clewley
> > I need a differential algebraic equation solver, and did not see this
> > already in scipy.
>
> This was discussed very recently on the scipy-user list, and if you search for
>
> scipy "differential algebraic"
>
> on the googles you'll find a few pointers to the existing attempts to
> solve DAEs using python.
>
> -Rob
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

From josef.pktd at gmail.com Sun Aug 24 15:49:09 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 24 Aug 2008 15:49:09 -0400
Subject: [SciPy-dev] Problem with F distribution, or with me?
In-Reply-To: <1cd32cbb0808131144o5b16f3e7g3d002e10a6b2a55a@mail.gmail.com>
References: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com> <1cd32cbb0808122219q3dc8f504w6218246f985e1676@mail.gmail.com> <1cd32cbb0808130903k4505cf79w2b7e23678f46b580@mail.gmail.com> <1cd32cbb0808131144o5b16f3e7g3d002e10a6b2a55a@mail.gmail.com>
Message-ID: <1cd32cbb0808241249i2a36ec49nb6bcf5f3370b4e5c@mail.gmail.com>

>On Wed, Aug 13, 2008 at 11:44, wrote:
>> I think, stats.loggamma.rvs is wrong or uses a definition that I cannot
>> figure out.
>
>It isn't related to log(gamma.rvs()). It is the same distribution as the
>"standard" version of lgammaff in VGAM:
>
> http://rss.acs.unt.edu/Rdoc/library/VGAM/html/lgammaff.html
>
>--
>Robert Kern

Hi,

I still think there is a problem with the loggamma distribution. I am
attaching a script that compares the random variables generated with
scipy.stats.loggamma.rvs against the theoretical distribution from
scipy.stats.loggamma.pdf and against the explicit formula, which is the
same in scipy.stats.loggamma.pdf as in
http://rss.acs.unt.edu/Rdoc/library/VGAM/html/lgammaff.html. The script
produces many graphs for the range of parameters that seem reasonable to
me.

From the histograms you can see that the fit of the sample to the
correct pdf is very weak and seems to hold only for some parameter
values, e.g. c=1, c=2. For c=1.5 or 1.6, which is in the range of the
kstest in the scipy tests, the fit does not look very good.
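In condensed form, the comparison can also be run directly with kstest
instead of graphically; a minimal sketch (the sample size is arbitrary,
and I use one of the parameter values from above):

import numpy as np
from scipy import stats

c = 1.5
# sample produced by the distribution under suspicion ...
rvs_direct = stats.loggamma.rvs(c, size=10000)
# ... and the log of a gamma sample for comparison
rvs_log = np.log(stats.gamma.rvs(c, size=10000))
print stats.kstest(rvs_direct, 'loggamma', (c,))
print stats.kstest(rvs_log, 'loggamma', (c,))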
On the other hand, the log of a gamma random variable has a good fit to the theoretical distribution in scipy.stats.loggamma.pdf. I have not found any good statistics reference for the loggamma distribution and its relationship to the log of a gamma random variable, but from my interpretation of the results, they seem to have the same distribution. But while googling, I saw neither a positive nor a negative statement for this. In my previous use of the gamma distribution in R, I think, I used the correct random number generator VGAM.rlgamma, which is linked to in your reference. Note: I'm still using matplotlib-0.90.1 and for compatibility I had to downgrade numpy to 1.0.4, my scipy version is 0.6.0. But I did not see any relevant changes to scipy.stats in a quick look at the changelogs in subversion/trac. Can you run the attached file, and it should give you a quick overview of whether my suspicious results are real, or whether something has changed in newer versions, (or if I am really misinterpreting what is supposed to be going on here). Josef -------------- next part -------------- A non-text attachment was scrubbed... Name: stats_distributions_loggamma_sh.py Type: text/x-python Size: 1886 bytes Desc: not available URL: From josef.pktd at gmail.com Sun Aug 24 16:19:25 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 24 Aug 2008 16:19:25 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? - error in stats.fatiguelife.rvs In-Reply-To: <1cd32cbb0808130642k7dcf5adfy63becc98669e16ab@mail.gmail.com> References: <1cd32cbb0808130642k7dcf5adfy63becc98669e16ab@mail.gmail.com> Message-ID: <1cd32cbb0808241319y71767634o740fe19ff236a701@mail.gmail.com> >On Wed, Aug 13, 2008 at 06:42, wrote: >> It looks like that there is an error in stats.fatiguelife.rvs > >Correct. I have a fix. When I run the scipy test suite tomorrow, I'll check it in. > >-- >Robert Kern Hi, Running the scipy test suite looks pretty useless for verifying the actual distribution except for serious mistakes. It didn't detect before anything wrong with fatiguelife or loggamma (which I think also gives incorrect random numbers) The Kolmogorov test in http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/stats/tests/test_distributions.py is pretty powerless to detect mistakes in the actual distribution. N=30 is too small and the fail threshold for the pval for fatiguelife is set to alpha = 0.01, while for the other distributions it is at alpha = 0.1. The pvalue for N=100 or N=1000 should be a much better indicator whether the random variable really follows the theoretical distribution. Josef From robert.kern at gmail.com Sun Aug 24 16:39:50 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 24 Aug 2008 13:39:50 -0700 Subject: [SciPy-dev] Problem with F distribution, or with me? In-Reply-To: <1cd32cbb0808241249i2a36ec49nb6bcf5f3370b4e5c@mail.gmail.com> References: <1cd32cbb0808121243k4b37c2c9o7d4b6e393dc3c9a5@mail.gmail.com> <1cd32cbb0808122219q3dc8f504w6218246f985e1676@mail.gmail.com> <1cd32cbb0808130903k4505cf79w2b7e23678f46b580@mail.gmail.com> <1cd32cbb0808131144o5b16f3e7g3d002e10a6b2a55a@mail.gmail.com> <1cd32cbb0808241249i2a36ec49nb6bcf5f3370b4e5c@mail.gmail.com> Message-ID: <3d375d730808241339q6841f8a2re14c8b7cebf4ddf9@mail.gmail.com> On Sun, Aug 24, 2008 at 12:49, wrote: >>On Wed, Aug 13, 2008 at 11:44, wrote: >>> I think, stats.loggamma.rvs is wrong or uses a definition that I cannot figure out. >> >>It isn't related to log(gamma.rvs()). 
It is the same distribution as the "standard" version of lgammaff in VGAM: >> >> http://rss.acs.unt.edu/Rdoc/library/VGAM/html/lgammaff.html >> >>-- >>Robert Kern > > Hi, > > I still think there is a problem with the loggamma distribution. I am > attaching a script that compares the random variables generated with > scipy.stats.loggamma.rvs with the theoretical distribution from > scipy.stats.loggamma.pdf and the explicit formula, which is the same > in scipy.stats.loggamma.pdf as in the > http://rss.acs.unt.edu/Rdoc/library/VGAM/html/lgammaff.html. The > script produces many graphs for the range of parameters that seem > reasonable to me. > > >From the histograms you can see that the fit of the sample to the > correct pdf is very weak and seems to hold only for some parameter > values, e.g. c=1, c=2. For c=1.5 or 1.6 which is in the range of the > kstest in the scipy tests, the fit does not look very good. > > On the other hand, the log of a gamma random variable has a good fit > to the theoretical distribution in scipy.stats.loggamma.pdf. I believe you are correct. The implementation of the CDF and PPF for this distribution appears to have numerical problems. The default implementation of the RVS, uses the PPF to invert U(0,1) random numbers and messes up significantly. Putting in log(mtrand.gamma(c)) for the RVS appears to match the PDF, but not the CDF or PPF. Ah. All of the references refer to the CDF as the ratio of the incomplete gamma function gammainc(c,exp(x)) divided by gamma(c). This was translated to special.gammainc(c,exp(x))/special.gamma(c). *However*, it appears that Cephes' gammainc() function does this whole ratio, not just the numerator. Removing the extraneous gamma()s gives us stable results across a wide range of shape parameters with or without log(mtrand.gamma(c)) (which we will use). Thank you for your attention to these issues. It's greatly appreciated. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Aug 24 17:06:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 24 Aug 2008 14:06:18 -0700 Subject: [SciPy-dev] Problem with F distribution, or with me? - error in stats.fatiguelife.rvs In-Reply-To: <1cd32cbb0808241319y71767634o740fe19ff236a701@mail.gmail.com> References: <1cd32cbb0808130642k7dcf5adfy63becc98669e16ab@mail.gmail.com> <1cd32cbb0808241319y71767634o740fe19ff236a701@mail.gmail.com> Message-ID: <3d375d730808241406k70b147dfqa4abfeba0bdae680@mail.gmail.com> On Sun, Aug 24, 2008 at 13:19, wrote: >>On Wed, Aug 13, 2008 at 06:42, wrote: >>> It looks like that there is an error in stats.fatiguelife.rvs >> >>Correct. I have a fix. When I run the scipy test suite tomorrow, I'll check it in. >> >>-- >>Robert Kern > > Hi, > Running the scipy test suite looks pretty useless for verifying the > actual distribution except for serious mistakes. True. I just don't want to break anything else. I don't really trust the automated K-S tests anyways. They don't have good parameter coverage. At the sprint yesterday, I wrote a little GUI to help me go through all of the distributions with Q-Q plots and a comparison of the histogram to the theoretical PDF with interactive control over the parameters. The remaining problems are mostly failures of the machinery rather than problems with the formulae of the distributions themselves. 
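Stripped of the GUI, the per-distribution check is only a few lines (a
sketch, assuming matplotlib's pylab is available; loggamma is used as
the example here):

import numpy as np
import pylab
from scipy import stats

c, n = 1.5, 1000
sample = np.sort(stats.loggamma.rvs(c, size=n))
# theoretical quantiles at the plotting positions (i + 0.5)/n
quantiles = stats.loggamma.ppf((np.arange(n) + 0.5) / n, c)
pylab.plot(quantiles, sample, '.')
pylab.plot(quantiles, quantiles, '-')   # agreement puts points on this line
pylab.show()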
> It didn't detect > before anything wrong with fatiguelife or loggamma (which I think also > gives incorrect random numbers) > > The Kolmogorov test in > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/stats/tests/test_distributions.py > is pretty powerless to detect mistakes in the actual distribution. > N=30 is too small and the fail threshold for the pval for fatiguelife > is set to alpha = 0.01, while for the other distributions it is at > alpha = 0.1. No, they are 0.001 and 0.01, but your point is taken. > The pvalue for N=100 or N=1000 should be a much better indicator > whether the random variable really follows the theoretical > distribution. True. After bumping those up to 1000, the tests still pass with after my fixes. They will remain bumped up to 1000. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dsdale24 at gmail.com Sun Aug 24 18:43:11 2008 From: dsdale24 at gmail.com (Darren Dale) Date: Sun, 24 Aug 2008 18:43:11 -0400 Subject: [SciPy-dev] design of a physical quantities package: seeking comments In-Reply-To: <200808052025.08213.dsdale24@gmail.com> References: <200808021325.42976.dsdale24@gmail.com> <48987CB3.6060806@llnl.gov> <200808052025.08213.dsdale24@gmail.com> Message-ID: <200808241843.12314.dsdale24@gmail.com> I just wanted to post an update. I added a constants subpackage in quantities which supports the database of physical constants compiled by NIST. It is similar to the structure of scipy.constants. I am seriously considering adding an additional property to Quantity: uncertainty, which would be used to account for and propagate errors or precision. I'm also planning on starting unit testing in earnest, as soon as numpy-1.2 is released. If you are interested, quantities is posted at http://dale.chess.cornell.edu/chess-wiki/Quantities . The package is still in heavy development and should not be relied on for accuracy or stability of API. For example, currently units of joules get simplified into kg m^2/s^2, and joules_ is treated as a compound unit and will not be simplified until explicitly requested. I am considering swapping the definitions. Any comments? Darren From alan at ajackson.org Sun Aug 24 18:58:19 2008 From: alan at ajackson.org (Alan Jackson) Date: Sun, 24 Aug 2008 17:58:19 -0500 Subject: [SciPy-dev] Instability in numpy.random.logseries Message-ID: <20080824175819.35108390@ajackson.org> For parameter values close to one, the returned valued can be bad. In [242]: np.random.logseries(.999999999,10) Out[242]: array([ 910, -2147483648, 13814, 17661, 14, 3783, 335, 180167317, 38, 256949708]) For that matter, the function accepts an input of 1.0, but the output is not useful. It should probably throw an error. In [225]: np.random.logseries(1.,10) Out[225]: array([-2147483648, -2147483648, -2147483648, -2147483648, -2147483648, -2147483648, -2147483648, -2147483648, -2147483648, -2147483648]) -- ----------------------------------------------------------------------- | Alan K. Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. 
- Blake | ----------------------------------------------------------------------- From robert.kern at gmail.com Sun Aug 24 19:09:35 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 24 Aug 2008 16:09:35 -0700 Subject: [SciPy-dev] Instability in numpy.random.logseries In-Reply-To: <20080824175819.35108390@ajackson.org> References: <20080824175819.35108390@ajackson.org> Message-ID: <3d375d730808241609gfcc880fif9d8041344240786@mail.gmail.com> On Sun, Aug 24, 2008 at 15:58, Alan Jackson wrote: > For parameter values close to one, the returned valued can be bad. > > In [242]: np.random.logseries(.999999999,10) > Out[242]: > array([ 910, -2147483648, 13814, 17661, 14, > 3783, 335, 180167317, 38, 256949708]) I'll take a look at it. This is one of those conversion to longs, again. I have to get on a plane now, so we'll see if I can fix it before I land in Austin. > For that matter, the function accepts an input of 1.0, but the output > is not useful. It should probably throw an error. > > In [225]: np.random.logseries(1.,10) > Out[225]: > array([-2147483648, -2147483648, -2147483648, -2147483648, -2147483648, > -2147483648, -2147483648, -2147483648, -2147483648, -2147483648]) We should raise an exception for exactly 0 and 1, yes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Aug 24 19:22:36 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 24 Aug 2008 16:22:36 -0700 Subject: [SciPy-dev] Instability in numpy.random.logseries In-Reply-To: <3d375d730808241609gfcc880fif9d8041344240786@mail.gmail.com> References: <20080824175819.35108390@ajackson.org> <3d375d730808241609gfcc880fif9d8041344240786@mail.gmail.com> Message-ID: <3d375d730808241622v81da85bqf18d5a372b53a176@mail.gmail.com> On Sun, Aug 24, 2008 at 16:09, Robert Kern wrote: > On Sun, Aug 24, 2008 at 15:58, Alan Jackson wrote: >> For parameter values close to one, the returned valued can be bad. >> >> In [242]: np.random.logseries(.999999999,10) >> Out[242]: >> array([ 910, -2147483648, 13814, 17661, 14, >> 3783, 335, 180167317, 38, 256949708]) > > I'll take a look at it. This is one of those conversion to longs, > again. I have to get on a plane now, so we'll see if I can fix it > before I land in Austin. Or I could just do it now. r5694. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rob.clewley at gmail.com Sun Aug 24 21:15:53 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Sun, 24 Aug 2008 21:15:53 -0400 Subject: [SciPy-dev] dae solvers In-Reply-To: References: Message-ID: On Sun, Aug 24, 2008 at 8:18 AM, Benny Malengier wrote: > Guess I should have googled a bit longer. Well, maybe a bit longer still :) There have been a couple of other attempts, too. For instance, you might get some joy from the wrapped version of Radau, which has a lighter interface and is part of a more python-oriented environment. > As this pysundials is there, I do think it should be integrated somehow in > scipy. What use of ode class and odepack in scipy which interface old > vode/lsode fortran progs when pysundials exists that interfaces sundials and > like that the new cvode and ida solvers? > So perhaps ode.py should obtain a backend to that? 
Or should scipy just no > longer offer ode solvers... There has been some discussion about ode solvers in scipy recently, but it didn't come to much. People are still busy arguing over how to represent matrices properly, let alone getting in to how dynamical systems should be supported. Someone tried starting a new interface to the existing ode solvers in scipy recently, but I don't know how that's getting along. I think it's called pyode and on google code. > Well, I'm not a scipy dev, just my 2 cents. It always amazes me how many > duplication is going on. I'll investigate my options furter and make up my > mind next week on how to proceed. I mostly agree, but it's tricky to talk about duplication. Users in different fields want different things from their packages, and have different expectations about how many dependencies they're willing to tolerate, how much of a GUI is supported, what kinds of application or data formats are supported, etc. So there end up being some conscious attempts to do similar things somewhat differently. And so that's not necessarily a bad thing. -Rob From josef.pktd at gmail.com Mon Aug 25 00:39:40 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 25 Aug 2008 00:39:40 -0400 Subject: [SciPy-dev] Problem with a jump in scipy.stats.betaprime.cdf Message-ID: <1cd32cbb0808242139t2faebce3i31f64a574bf47e6c@mail.gmail.com> Hi, I was running the kstests for those distributions that are not covered by the tests in scipy.stats. I was testing with random arguments between 0.1 and 5. the kstest for betaprime failed for the following parameters: >>> stats.kstest('betaprime','',[4.65283053384, 0.394739656544],N=1000) (0.176, array(0.0)) Looking a bit more at what is going on I found a jump for this case in the theoretical cdf in scipy.stats.betaprime.cdf at x=500 >>> stats.betaprime.cdf(490,4.65283053384, 0.394739656544) array(0.82583087380901754) >>> stats.betaprime.cdf(499,4.65283053384, 0.394739656544) array(0.82706865985439781) >>> stats.betaprime.cdf(499.99,4.65283053384, 0.394739656544) array(0.82720292867835143) >>> stats.betaprime.cdf(500.0,4.65283053384, 0.394739656544) array(1.0) >>> stats.betaprime.cdf(500.99,4.65283053384, 0.394739656544) array(1.0) The cumulative frequency count of a random sample follows quite closely the theoretical cdf up to x<500 and does not have a mass point at x=500. >>> bprv=stats.betaprime.rvs(4.65283053384, 0.394739656544,size=10000) >>> sum(bprv<499)/10000.0 0.8259 >>> sum(bprv<501)/10000.0 0.8265 >>> So the error must be in the cdf calculation, but staring at the function I didn't see any obvious numerical problems. Note: from what I infer from the description and the calculations, the mean, expected value is infinite if, as in this case, the second shape parameter is smaller than 1. However, this case is not ruled out in the description. Also the description for the vgam package for R has the same formulas and description. 
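The cdf can also be cross-checked independently of scipy's
implementation (a sketch; it relies on the standard relation that
X/(1+X) follows a beta(a, b) distribution when X follows
betaprime(a, b)):

from scipy import stats, special

a, b = 4.65283053384, 0.394739656544
for x in [490., 499.99, 500., 501.]:
    # the regularized incomplete beta should agree with betaprime.cdf
    print x, stats.betaprime.cdf(x, a, b), special.betainc(a, b, x/(1.+x))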
I did not see any problems if the second shape parameter is larger than
1, even if the first shape parameter is smaller than 1

>>> stats.kstest('betaprime','',[4.65283053384, 3.94739656544],N=1000)
(0.0295964079942, array(0.17006401162643214))
>>> stats.kstest('betaprime','',[0.394739656544,4.65283053384],N=1000)
(0.0345741586367, array(0.08946533058440953))

Also, inverting the random sample and interchanging the parameters
yields a passing kstest:

bprv = stats.betaprime.rvs(4.65283053384, 0.394739656544, size=10000)
>>> stats.kstest(1/bprv, 'betaprime', [0.394739656544,4.65283053384], N=1000)
(0.00847538770233, array(0.23638693612403028))

I haven't verified the pdf (it looks easy enough), but I think the cdf
has some problems for large x and a shape2 parameter < 1.

Josef

From uschmitt at mineway.de Mon Aug 25 05:14:36 2008
From: uschmitt at mineway.de (Uwe Schmitt)
Date: Mon, 25 Aug 2008 11:14:36 +0200
Subject: [SciPy-dev] [mailinglist] Re: NNLS
In-Reply-To: <489B0004.5070103@mineway.de>
References: <48875105.7010708@mineway.de> <3d375d730807231546t72f468c4u1980650481ccb8a2@mail.gmail.com> <48883ECD.6000109@mineway.de> <3d375d730807241137s47df563av9ba9d19beba9ea65@mail.gmail.com> <489B0004.5070103@mineway.de>
Message-ID: <48B277FC.9020004@mineway.de>

Hi,

I mailed the questions below some weeks ago, but got no answer. Is there
a reason for this?

Greetings, Uwe

Uwe Schmitt schrieb:
> I was able to fix these problems and I am starting to like f2py.
>
> The NNLS code is now available via SVN at
>
> http://public.procoders.net/nnls/nnls_with_f2py/
>
> How can I contribute this code now?
>
> Is there further any interest in code for
>
> * ICA (independent component analysis)?
> I wrapped existing C code with f2py.
>
> * NMF/NNMA (nonnegative matrix factorization / approximation)?
> which is pure Python/numpy code.
>
> Greetings, Uwe
>
> --
> Dr. rer. nat. Uwe Schmitt
> F&E Mathematik
>
> mineway GmbH
> Science Park 2
> D-66123 Saarbrücken
>
> Telefon: +49 (0)681 8390 5334
> Telefax: +49 (0)681 830 4376
>
> uschmitt at mineway.de
> www.mineway.de
>
> Geschäftsführung: Dr.-Ing. Mathias Bauer
> Amtsgericht Saarbrücken HRB 12339
>

--
Dr. rer. nat. Uwe Schmitt
F&E Mathematik

mineway GmbH
Science Park 2
D-66123 Saarbrücken

Telefon: +49 (0)681 8390 5334
Telefax: +49 (0)681 830 4376

uschmitt at mineway.de
www.mineway.de

Geschäftsführung: Dr.-Ing. Mathias Bauer
Amtsgericht Saarbrücken HRB 12339

From eads at soe.ucsc.edu Mon Aug 25 08:42:21 2008
From: eads at soe.ucsc.edu (Damian Eads)
Date: Mon, 25 Aug 2008 06:42:21 -0600
Subject: [SciPy-dev] Ticket 711 closed
Message-ID: <48B2A8AD.4050309@soe.ucsc.edu>

Two tests have been written to test my fix, so I've closed Ticket 711.

http://scipy.org/scipy/scipy/ticket/711

From josef.pktd at gmail.com Mon Aug 25 10:54:57 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 25 Aug 2008 10:54:57 -0400
Subject: [SciPy-dev] Problem with a jump in scipy.stats.betaprime.cdf
In-Reply-To: <1cd32cbb0808242139t2faebce3i31f64a574bf47e6c@mail.gmail.com>
References: <1cd32cbb0808242139t2faebce3i31f64a574bf47e6c@mail.gmail.com>
Message-ID: <1cd32cbb0808250754m35ade8f0w2df50102aa157fde@mail.gmail.com>

Hi,

I am attaching the script that I used for testing the
stats.distributions with random parameters, which is a variation on the
tests in scipy.stats. I get several additional test failures, but I
don't have time right now to look at them more closely.
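The core of the script is a loop of roughly this shape (a sketch with a
hypothetical trial count; the attached file covers more distributions
and records the failures):

import numpy as np
from scipy import stats

def fuzz_kstest(name, ntrials=10, n=1000):
    dist = getattr(stats, name)
    for i in range(ntrials):
        # random shape parameters in [0.1, 5), as in my earlier posts
        args = tuple(0.1 + 4.9 * np.random.rand(dist.numargs))
        if not np.all(dist._argcheck(*args)):
            continue
        D, pval = stats.kstest(name, '', args, N=n)
        if pval < 0.01:
            print name, args, D, pval

fuzz_kstest('betaprime')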
I am checking the admissibility of the arguments with
``numpy.all(distfunc._argcheck(*args))``, but I saw that for many
distributions this method is not explicitly defined. So the script will
try out parameters for which the distribution might not be well defined.
For some distributions Robert Kern already added a FIXME to the source.

I get many domain errors which, I think, should already be fixed in
trunk, since in the scipy 0.6.0 version some automatic generation of
some properties of the distributions didn't work correctly. I read the
changelog a while ago and haven't checked again.

With this automatic fuzz testing, I found several problems in
scipy.stats.distributions, but adding the missing ``_argcheck`` methods
would reduce the false failures. Many of the problems seem to be in
parameter ranges that are not ruled out by the definition or by
``_argcheck``, but are not really the parameters that are commonly used.
In this sense, my test gives an upper bound on the set of "suspicious"
distributions.

Also, I haven't looked in more detail at discrete distributions such as
Boltzmann and Planck, which show test failures, but I'm not sure whether
using an automatic kstest for these is ok.

I hope that helps, making stats.distributions and the random number
generation a bit more robust.

Josef

-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_distributions_all.py
Type: text/x-python
Size: 4759 bytes
Desc: not available
URL:

From nwagner at iam.uni-stuttgart.de Mon Aug 25 12:46:24 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 25 Aug 2008 18:46:24 +0200
Subject: [SciPy-dev] NameError: global name 'array' is not defined
Message-ID:

Damian,

you have introduced a new error.

======================================================================
ERROR: Tests pdist(X, 'canberra') to see if Canberra gives
the right result as reported in Scipy bug report 711.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/scipy/cluster/tests/test_distance.py", line 961, in test_pdist_canberra_ticket_711
    right_y = array([ 0.01492537])
NameError: global name 'array' is not defined

Nils

From eads at soe.ucsc.edu Mon Aug 25 12:51:59 2008
From: eads at soe.ucsc.edu (Damian Eads)
Date: Mon, 25 Aug 2008 10:51:59 -0600
Subject: [SciPy-dev] NameError: global name 'array' is not defined
In-Reply-To:
References:
Message-ID: <48B2E32F.2050100@soe.ucsc.edu>

Hi,

This is due to a typo and I just fixed it. Do an SVN update.

Damian

Nils Wagner wrote:
> Damian,
>
> you have introduced a new error.
>
> ======================================================================
> ERROR: Tests pdist(X, 'canberra') to see if Canberra gives
> the right result as reported in Scipy bug report 711.
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib64/python2.5/site-packages/scipy/cluster/tests/test_distance.py", > line 961, in test_pdist_canberra_ticket_711 > right_y = array([ 0.01492537]) > NameError: global name 'array' is not defined > > Nils From nwagner at iam.uni-stuttgart.de Mon Aug 25 13:06:13 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 25 Aug 2008 19:06:13 +0200 Subject: [SciPy-dev] NameError: global name 'array' is not defined In-Reply-To: <48B2E32F.2050100@soe.ucsc.edu> References: <48B2E32F.2050100@soe.ucsc.edu> Message-ID: On Mon, 25 Aug 2008 10:51:59 -0600 Damian Eads wrote: > Hi, > > This is due to a typo and I just fixed it. Do a SVN >update. > > Damian > > Nils Wagner wrote: >> Damian, >> >> you have introduced a new error. >> >> ====================================================================== >> ERROR: Tests pdist(X, 'canberra') to see if Canberra >>gives >> the right result as reported in Scipy bug report 711. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/local/lib64/python2.5/site-packages/scipy/cluster/tests/test_distance.py", >> line 961, in test_pdist_canberra_ticket_711 >> right_y = array([ 0.01492537]) >> NameError: global name 'array' is not defined >> >> Nils > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev Hi Damian, Thank you for your fast response. Works for me. Just one failure persists ====================================================================== FAIL: test_imresize (test_pilutil.TestPILUtil) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/numpy/testing/decorators.py", line 82, in skipper return f(*args, **kwargs) File "/usr/local/lib64/python2.5/site-packages/scipy/misc/tests/test_pilutil.py", line 25, in test_imresize assert_equal(im1.shape,(11,22)) File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 172, in assert_equal assert_equal(len(actual),len(desired),err_msg,verbose) File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 180, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 0 DESIRED: 2 ---------------------------------------------------------------------- Ran 2303 tests in 31.565s FAILED (failures=1) Cheers, Nils From peridot.faceted at gmail.com Tue Aug 26 16:37:00 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 26 Aug 2008 16:37:00 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? - error in stats.fatiguelife.rvs In-Reply-To: <1cd32cbb0808241319y71767634o740fe19ff236a701@mail.gmail.com> References: <1cd32cbb0808130642k7dcf5adfy63becc98669e16ab@mail.gmail.com> <1cd32cbb0808241319y71767634o740fe19ff236a701@mail.gmail.com> Message-ID: 2008/8/24 : > Running the scipy test suite looks pretty useless for verifying the > actual distribution except for serious mistakes. It didn't detect > before anything wrong with fatiguelife or loggamma (which I think also > gives incorrect random numbers) > > The Kolmogorov test in > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/stats/tests/test_distributions.py > is pretty powerless to detect mistakes in the actual distribution. 
> N=30 is too small and the fail threshold for the pval for fatiguelife
> is set to alpha = 0.01, while for the other distributions it is at
> alpha = 0.1.
>
> The pvalue for N=100 or N=1000 should be a much better indicator of
> whether the random variable really follows the theoretical
> distribution.

Is there any reason not to include your fuzz tests as a part of scipy?
If they take too long, it may make sense to run them only when
exhaustive testing is requested, but they find a lot of bugs for such a
little bit of code.

Also, for the many tests of scipy that use random numbers, does it make
sense to seed the random number generator ahead of time? That way
debuggers can replicate the test failures... Perhaps nose provides a
way to seed the random number generator before every test?

Anne

From josef.pktd at gmail.com Tue Aug 26 20:18:10 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 26 Aug 2008 20:18:10 -0400
Subject: [SciPy-dev] Problem with F distribution, or with me? - error in stats.fatiguelife.rvs
In-Reply-To:
References: <1cd32cbb0808130642k7dcf5adfy63becc98669e16ab@mail.gmail.com> <1cd32cbb0808241319y71767634o740fe19ff236a701@mail.gmail.com>
Message-ID: <1cd32cbb0808261718i1bae1303o402e6fd9aead85ab@mail.gmail.com>

On Tue, Aug 26, 2008 at 4:37 PM, Anne Archibald wrote:
>
> Is there any reason not to include your fuzz tests as a part of scipy?
> If they take too long, it may make sense to run them only when
> exhaustive testing is requested, but they find a lot of bugs for such
> a little bit of code.
>
> Also, for the many tests of scipy that use random numbers, does it
> make sense to seed the random number generator ahead of time? That way
> debuggers can replicate the test failures... Perhaps nose provides a
> way to seed the random number generator before every test?
>
> Anne
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

I see two problems with including these tests when they are intended to
be powerful. The main problem is false positives, i.e. test failures
that occur because the random sample just gives, by chance, a failing
test statistic. If you seed all random variables, it takes the
fuzziness out of these tests. Currently the parameters of the
distribution are randomly drawn and then the kstests draw, I guess, a
random sample for the actual test. Only seeding one of them would not
remove the possibility of accidental test failures.

My tests are just a variation of the current tests in scipy.stats; I
just included more distributions and increased the power of the tests,
which is now also changed in trunk. The pvalue thresholds are set
pretty low, and when a test fails, a second test is run, I assume in
order to reduce the number of false failures. If the tests are too
weak, then they won't detect anything except very severe errors. But
when running the test suite you don't want to get lots of spurious test
failures.

What I did with these tests was make them very strict and then check
individually whether a test failure is actually caused by an error or
just a weakness of the kstest. For some kstests with low pvalues, I
later did not find anything (obviously) wrong with the distribution in
scipy.stats.

For now, my strict fuzz tests are too coarse; all I know in general is
that some tests don't work for some parameters that are not ruled out.
Maybe these parameters should be ruled out for the current
implementation.
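Seeding would have to cover both draws to make a failing case
replayable, roughly like this (a sketch; the seed value is arbitrary):

import numpy as np
from scipy import stats

np.random.seed(1234)   # fixes the parameter draw and the sample
                       # that kstest generates internally
args = tuple(0.1 + 4.9 * np.random.rand(stats.betaprime.numargs))
print args, stats.kstest('betaprime', '', args, N=1000)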
But my tests indicate that there might still be other problems for scipy.stats to run according to it's own specification, and there are no tests for many other methods or properties e.g. pdf or pmf, statistics such as moments, at least not from what I have seen. Can somebody run: stats.zipf.pmf(range(10),1.5) stats.zipf.cdf(10,1.5) stats.zipf.cdf(range(10),1.5) I get >>> stats.zipf.cdf(10,1.5) Traceback (most recent call last): File "", line 1, in ? File "C:\Josef\_progs\Subversion\scipy_trunk\dist\Programs\Python24\Lib\site-p ackages\scipy\stats\distributions.py", line 3549, in cdf place(output,cond,self._cdf(*goodargs)) File "C:\Josef\_progs\Subversion\scipy_trunk\dist\Programs\Python24\Lib\site-p ackages\scipy\stats\distributions.py", line 3458, in _cdf return self._cdfvec(k,*args) File "C:\Programs\Python24\Lib\site-packages\numpy\lib\function_base.py", line 1092, in __call__ raise ValueError, "mismatch between python function inputs"\ ValueError: mismatch between python function inputs and received arguments but this might be, because I am running either an older version of scipy or from my own faulty (?) build from subversion. If the current version still has this error, then I think it is related to the generic calculation of the cdf, similar to ticket 422, changeset 3797 for the moment calculation. In my tests I get many value errors, but again I don't know whether it is my version/setup, whether the parameters are not ruled out but don't make sense, or whether there is actually a bug somewhere. Josef From josef.pktd at gmail.com Tue Aug 26 21:31:49 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 26 Aug 2008 21:31:49 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? - ticket 422 might apply for discrete distributions Message-ID: <1cd32cbb0808261831u75eb28a7le127325dd9e24020@mail.gmail.com> On Tue, Aug 26, 2008 at 8:18 PM, wrote: > Can somebody run: > > stats.zipf.pmf(range(10),1.5) > stats.zipf.cdf(10,1.5) > stats.zipf.cdf(range(10),1.5) > > I get >>>> stats.zipf.cdf(10,1.5) > Traceback (most recent call last): > File "", line 1, in ? > File "C:\Josef\_progs\Subversion\scipy_trunk\dist\Programs\Python24\Lib\site-p > ackages\scipy\stats\distributions.py", line 3549, in cdf > place(output,cond,self._cdf(*goodargs)) > File "C:\Josef\_progs\Subversion\scipy_trunk\dist\Programs\Python24\Lib\site-p > ackages\scipy\stats\distributions.py", line 3458, in _cdf > return self._cdfvec(k,*args) > File "C:\Programs\Python24\Lib\site-packages\numpy\lib\function_base.py", line > 1092, in __call__ > raise ValueError, "mismatch between python function inputs"\ > ValueError: mismatch between python function inputs and received arguments > > but this might be, because I am running either an older version of > scipy or from my own faulty (?) build from subversion. > If the current version still has this error, then I think it is > related to the generic calculation of the cdf, > similar to ticket 422, changeset 3797 for the moment calculation. > > In my tests I get many value errors, but again I don't know whether it > is my version/setup, whether the parameters are not ruled out but > don't make sense, or whether there is actually a bug somewhere. > > Josef > in changeset 3797 this was added to the continuous random variable: 315 self.generic_moment.nin = self.numargs+1 # Because of the *args argument 316 # of _mom0_sc, vectorize cannot count the number of arguments correctly. 
in current trunk these are lines 318 and 319 I don't understand the details of numpy and vectorize, but I think that the same problem with the number of arguments also applies to the generic calculation for the discrete distribution, i.e. lines 3401 self._ppf = new.instancemethod(sgf(_drv_ppf,otypes='d'), 3402 self, rv_discrete) 3403 self._pmf = new.instancemethod(sgf(_drv_pmf,otypes='d'), 3404 self, rv_discrete) 3405 self._cdf = new.instancemethod(sgf(_drv_cdf,otypes='d'), 3406 self, rv_discrete) since all _drv_??? methods also use the *args argument and the exception message I get looks the same as in ticket 422. Josef From uschmitt at mineway.de Thu Aug 28 09:23:06 2008 From: uschmitt at mineway.de (Uwe Schmitt) Date: Thu, 28 Aug 2008 15:23:06 +0200 Subject: [SciPy-dev] [mailinglist] Re: NNLS Message-ID: <48B6A6BA.6080206@mineway.de> Alan Isaac worte: >Uwe Schmitt wrote: >>/ The NNLS code is now via SVN at />>/ http://public.procoders.net/nnls/nnls_with_f2py/ />>/ How can I contribute this code now ? / >Did you get the needed information for this? No. But I had some problems receiving mails from this list, maybe my spamfilter ate them. >>/ Is there further any interest in code for />>/ * ICA (Independent componenent ananlysis) ? />>/ I wrapped existing C-Code with f2py. />>/ * NMF/NNMA (nonnegative matrix factorization / - approximation) ? />>/ which is pure Python/numpy code. / >I'd like to see the latter find its way into SciPy. Nice. I'm just playing with it and try to implement some variations as Sparse NNMA... >(I'm not familiar with ICA; what's the application >area?) ICA means "Independent Component Analysis" which can be used for solving the cocktailparty problem or for blind deconvolution. Look at http://www.procoders.net/?p=30 for further information. Greetings, Uwe >Cheers, >Alan Isaac -- Dr. rer. nat. Uwe Schmitt F&E Mathematik mineway GmbH Science Park 2 D-66123 Saarbr?cken Telefon: +49 (0)681 8390 5334 Telefax: +49 (0)681 830 4376 uschmitt at mineway.de www.mineway.de Gesch?ftsf?hrung: Dr.-Ing. Mathias Bauer Amtsgericht Saarbr?cken HRB 12339 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Aug 28 14:22:40 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 28 Aug 2008 20:22:40 +0200 Subject: [SciPy-dev] Roadmap scipy 0.7 Message-ID: Hi all, I had a short look at http://projects.scipy.org/scipy/scipy/roadmap Is there a new deadline for scipy 0.7 and numpy 1.2 ? Cheers, Nils From millman at berkeley.edu Thu Aug 28 14:44:44 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 28 Aug 2008 11:44:44 -0700 Subject: [SciPy-dev] Roadmap scipy 0.7 In-Reply-To: References: Message-ID: On Thu, Aug 28, 2008 at 11:22 AM, Nils Wagner wrote: > I had a short look at > > http://projects.scipy.org/scipy/scipy/roadmap > > Is there a new deadline for scipy 0.7 and numpy 1.2 ? I am super busy this week, but I plan to focus on release issues this weekend. Basically, I will be tagging a 1.2.0rc1 this weekend and a scipy 0.7.0a1. NumPy is basically ready to release, but SciPy needs more testing. We will need to make sure that we have a binary release of scipy that works with numpy 1.2, but I haven't had a chance to look into that. If you wanted to verify whether the scipy 0.6 release works with numpy 1.2 that would be useful information to have. 
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From alan.mcintyre at gmail.com Thu Aug 28 15:50:23 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Thu, 28 Aug 2008 12:50:23 -0700 Subject: [SciPy-dev] Roadmap scipy 0.7 In-Reply-To: References: Message-ID: <1d36917a0808281250p62dbfdd3u613f66e9120abaa1@mail.gmail.com> On Thu, Aug 28, 2008 at 11:44 AM, Jarrod Millman wrote: > I am super busy this week, but I plan to focus on release issues this > weekend. Basically, I will be tagging a 1.2.0rc1 this weekend and a > scipy 0.7.0a1. NumPy is basically ready to release, but SciPy needs > more testing. We will need to make sure that we have a binary release > of scipy that works with numpy 1.2, but I haven't had a chance to look > into that. If you wanted to verify whether the scipy 0.6 release > works with numpy 1.2 that would be useful information to have. For what it's worth, the SciPy-0.6.0 tarball seems to work with the current NumPy trunk without much trouble on my Linux machine. The tests emit a lot of DeprecationWarnings, and 5 tests fail (but 3 of those fail with the SciPy trunk as well). The two tests that fail under 0.6.0 but not the trunk are: test_fromimage (scipy.tests.test_pilutil.test_pilutil) check_rvs (scipy.stats.tests.test_distributions.test_rv_discrete) From josef.pktd at gmail.com Thu Aug 28 16:35:26 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 28 Aug 2008 16:35:26 -0400 Subject: [SciPy-dev] Problem with F distribution, or with me? - ticket 422 might apply for discrete distributions In-Reply-To: <1cd32cbb0808261831u75eb28a7le127325dd9e24020@mail.gmail.com> References: <1cd32cbb0808261831u75eb28a7le127325dd9e24020@mail.gmail.com> Message-ID: <1cd32cbb0808281335t16201f38q5619ae041f1a55ff@mail.gmail.com> I just saw the report by Alan McIntyre that this fails check_rvs (scipy.stats.tests.test_distributions.test_rv_discrete) Looking briefly at the current trunk source, I think now that the problem with the generic cdf for defined distributions such as zipf is in 3383 self._cdfvec = sgf(self._cdfsingle,otypes='d') which calls 3469 def _cdfsingle(self, k, *args): If the explanation in ticket 422 applies more generally, then all calls to vectorize, i.e. to sgf, would need to be checked in scipy.stats.distribution. I'm sorry if I'm barking up the wrong tree, I am just looking at the pattern without knowing the internals. Josef On 8/26/08, josef.pktd at gmail.com wrote: > On Tue, Aug 26, 2008 at 8:18 PM, wrote: > >> Can somebody run: >> >> stats.zipf.pmf(range(10),1.5) >> stats.zipf.cdf(10,1.5) >> stats.zipf.cdf(range(10),1.5) >> >> I get >>>>> stats.zipf.cdf(10,1.5) >> Traceback (most recent call last): >> File "", line 1, in ? >> File >> "C:\Josef\_progs\Subversion\scipy_trunk\dist\Programs\Python24\Lib\site-p >> ackages\scipy\stats\distributions.py", line 3549, in cdf >> place(output,cond,self._cdf(*goodargs)) >> File >> "C:\Josef\_progs\Subversion\scipy_trunk\dist\Programs\Python24\Lib\site-p >> ackages\scipy\stats\distributions.py", line 3458, in _cdf >> return self._cdfvec(k,*args) >> File "C:\Programs\Python24\Lib\site-packages\numpy\lib\function_base.py", >> line >> 1092, in __call__ >> raise ValueError, "mismatch between python function inputs"\ >> ValueError: mismatch between python function inputs and received arguments >> >> but this might be, because I am running either an older version of >> scipy or from my own faulty (?) 
build from subversion.
>> If the current version still has this error, then I think it is
>> related to the generic calculation of the cdf,
>> similar to ticket 422, changeset 3797 for the moment calculation.
>>
>> In my tests I get many value errors, but again I don't know whether it
>> is my version/setup, whether the parameters are not ruled out but
>> don't make sense, or whether there is actually a bug somewhere.
>>
>> Josef
>>
>
> in changeset 3797 this was added to the continuous random variable:
> 315 self.generic_moment.nin = self.numargs+1 # Because of the
> *args argument
> 316 # of _mom0_sc, vectorize cannot count the number of
> arguments correctly.
> in current trunk these are lines 318 and 319
>
> I don't understand the details of numpy and vectorize, but I think
> that the same problem with the number
> of arguments also applies to the generic calculation for the discrete
> distribution, i.e. lines
>
> 3401 self._ppf = new.instancemethod(sgf(_drv_ppf,otypes='d'),
> 3402 self, rv_discrete)
> 3403 self._pmf = new.instancemethod(sgf(_drv_pmf,otypes='d'),
> 3404 self, rv_discrete)
> 3405 self._cdf = new.instancemethod(sgf(_drv_cdf,otypes='d'),
> 3406 self, rv_discrete)
>
> since all _drv_??? methods also use the *args argument and the
> exception message I get looks the same as in ticket 422.
>
> Josef
>

From josef.pktd at gmail.com Thu Aug 28 17:17:16 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Thu, 28 Aug 2008 17:17:16 -0400
Subject: [SciPy-dev] Problem with F distribution, or with me? - ticket 422 might apply for discrete distributions
In-Reply-To: <1cd32cbb0808281335t16201f38q5619ae041f1a55ff@mail.gmail.com>
References: <1cd32cbb0808261831u75eb28a7le127325dd9e24020@mail.gmail.com> <1cd32cbb0808281335t16201f38q5619ae041f1a55ff@mail.gmail.com>
Message-ID: <1cd32cbb0808281417x6faec837hdcb46ac0416d1941@mail.gmail.com>

Here is a patch that works for me with scipy 0.6.0, following the
pattern of changeset 3797. In class rv_discrete, in _cdf (line 3456 in
scipy 0.6.0, line 3473 in trunk), add a line adapted from changeset
3797:

    def _cdf(self, x, *args):
        k = floor(x)
+       self._cdfvec.nin = self.numargs+1  #JP
        return self._cdfvec(k,*args)

I could do this without a recompile; now zipf works:

>>> stats.zipf._cdf(10,1.5)
array(0.7638016085053122)
>>> stats.zipf.cdf(10,1.5)
array(0.7638016085053122)

All tests in scipy 0.6.0 stats still pass (but that's not very
reliable):

stats.test()
Found 73/73 tests for scipy.stats.tests.test_distributions
Found 10/10 tests for scipy.stats.tests.test_morestats
Found 107/107 tests for scipy.stats.tests.test_stats
...........................................................................Ties preclude use of exact statistic.
..Ties preclude use of exact statistic.
.................................................................................................................
----------------------------------------------------------------------
Ran 190 tests in 0.469s

OK

Josef

On 8/28/08, josef.pktd at gmail.com wrote:
> I just saw the report by Alan McIntyre that this fails
> check_rvs (scipy.stats.tests.test_distributions.test_rv_discrete)
>
> Looking briefly at the current trunk source, I think now that the
> problem with the generic cdf for defined distributions such as zipf is
> in
>
> 3383 self._cdfvec = sgf(self._cdfsingle,otypes='d')
> which calls
> 3469 def _cdfsingle(self, k, *args):
>
> If the explanation in ticket 422 applies more generally, then all
> calls to vectorize, i.e. to sgf, would need to be checked in
> scipy.stats.distribution.
> > I'm sorry if I'm barking up the wrong tree, I am just looking at the > pattern without knowing the internals. > > Josef > > On 8/26/08, josef.pktd at gmail.com wrote: >> On Tue, Aug 26, 2008 at 8:18 PM, wrote: >> >>> Can somebody run: >>> >>> stats.zipf.pmf(range(10),1.5) >>> stats.zipf.cdf(10,1.5) >>> stats.zipf.cdf(range(10),1.5) >>> >>> I get >>>>>> stats.zipf.cdf(10,1.5) >>> Traceback (most recent call last): >>> File "", line 1, in ? >>> File >>> "C:\Josef\_progs\Subversion\scipy_trunk\dist\Programs\Python24\Lib\site-p >>> ackages\scipy\stats\distributions.py", line 3549, in cdf >>> place(output,cond,self._cdf(*goodargs)) >>> File >>> "C:\Josef\_progs\Subversion\scipy_trunk\dist\Programs\Python24\Lib\site-p >>> ackages\scipy\stats\distributions.py", line 3458, in _cdf >>> return self._cdfvec(k,*args) >>> File >>> "C:\Programs\Python24\Lib\site-packages\numpy\lib\function_base.py", >>> line >>> 1092, in __call__ >>> raise ValueError, "mismatch between python function inputs"\ >>> ValueError: mismatch between python function inputs and received >>> arguments >>> >>> but this might be, because I am running either an older version of >>> scipy or from my own faulty (?) build from subversion. >>> If the current version still has this error, then I think it is >>> related to the generic calculation of the cdf, >>> similar to ticket 422, changeset 3797 for the moment calculation. >>> >>> In my tests I get many value errors, but again I don't know whether it >>> is my version/setup, whether the parameters are not ruled out but >>> don't make sense, or whether there is actually a bug somewhere. >>> >>> Josef >>> >> >> in changeset 3797 this was added to the continuous random variable: >> 315 self.generic_moment.nin = self.numargs+1 # Because of the >> *args argument >> 316 # of _mom0_sc, vectorize cannot count the number of >> arguments correctly. >> in current trunk these are lines 318 and 319 >> >> I don't understand the details of numpy and vectorize, but I think >> that the same problem with the number >> of arguments also applies to the generic calculation for the discrete >> distribution, i.e. lines >> >> 3401 self._ppf = new.instancemethod(sgf(_drv_ppf,otypes='d'), >> 3402 self, rv_discrete) >> 3403 self._pmf = new.instancemethod(sgf(_drv_pmf,otypes='d'), >> 3404 self, rv_discrete) >> 3405 self._cdf = new.instancemethod(sgf(_drv_cdf,otypes='d'), >> 3406 self, rv_discrete) >> >> since all _drv_??? methods also use the *args argument and the >> exception message I get looks the same as in ticket 422. 
>> >> Josef >> > From josef.pktd at gmail.com Thu Aug 28 18:47:54 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 28 Aug 2008 18:47:54 -0400 Subject: [SciPy-dev] ticket 422 applies for other methods: vectorize and *args bugs Message-ID: <1cd32cbb0808281547sf4db7a1l53192b9af330863@mail.gmail.com> I think I found more errors with vectorize (sdf) and *args in scipy.stats.distributions: When I try with scipy 0.6.0 print 'rice.cdf', stats.rice.cdf(5,1.5) raise ValueError, "mismatch between python function inputs"\ ValueError: mismatch between python function inputs and received arguments if I add in class rv_continuous def _cdf(self, x, *args): + self.veccdf.nin = self.numargs+1 #JP return self.veccdf(x,*args) I also needed to add def _cdf_single_call(self, x, *args): + import scipy.integrate #JP return scipy.integrate.quad(self._pdf, self.a, x, args=args)[0] then the error is gone: >>> print 'rice.cdf', stats.rice.cdf(5,1.5) rice.cdf 0.999557350428 in order to get rice.rvs(1.5) to work, I did also def _ppf(self, q, *args): + self.vecfunc.nin = self.numargs+1 #JP return self.vecfunc(q,*args) def _isf(self, q, *args): + self.vecfunc.nin = self.numargs+1 #JP return self.vecfunc(1.0-q,*args) now, print 'rice.cdf', stats.rice.cdf(5,1.5) print 'rice.rvs',stats.rice.rvs([1.5]) works but this does not print 'rice.rvs',stats.rice.rvs([1.5]) print 'rice.cdf', stats.rice.cdf(5,1.5) cdf needs to be called before rvs. So my changes are not in the right position, the cdf needs to be initialized before, I can call rvs. But somewhere along these lines, it should be possible to make the generic methods function correctly. This was with scipy 0.60, but I haven't seen any relevant changes in the current trunk Josef for discrete distribution such as zipf, see my other email From charlesr.harris at gmail.com Thu Aug 28 22:11:46 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 28 Aug 2008 20:11:46 -0600 Subject: [SciPy-dev] Roadmap scipy 0.7 In-Reply-To: <1d36917a0808281250p62dbfdd3u613f66e9120abaa1@mail.gmail.com> References: <1d36917a0808281250p62dbfdd3u613f66e9120abaa1@mail.gmail.com> Message-ID: On Thu, Aug 28, 2008 at 1:50 PM, Alan McIntyre wrote: > On Thu, Aug 28, 2008 at 11:44 AM, Jarrod Millman > wrote: > > I am super busy this week, but I plan to focus on release issues this > > weekend. Basically, I will be tagging a 1.2.0rc1 this weekend and a > > scipy 0.7.0a1. NumPy is basically ready to release, but SciPy needs > > more testing. We will need to make sure that we have a binary release > > of scipy that works with numpy 1.2, but I haven't had a chance to look > > into that. If you wanted to verify whether the scipy 0.6 release > > works with numpy 1.2 that would be useful information to have. > > For what it's worth, the SciPy-0.6.0 tarball seems to work with the > current NumPy trunk without much trouble on my Linux machine. The > tests emit a lot of DeprecationWarnings, and 5 tests fail (but 3 of > those fail with the SciPy trunk as well). The two tests that fail > under 0.6.0 but not the trunk are: > I wonder if some of those deprecation warnings come from code generated by f2py? That needs to be checked. Could you also try running the tests using the python -OO startup flag? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alan.mcintyre at gmail.com Thu Aug 28 22:22:10 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Thu, 28 Aug 2008 19:22:10 -0700 Subject: [SciPy-dev] Roadmap scipy 0.7 In-Reply-To: References: <1d36917a0808281250p62dbfdd3u613f66e9120abaa1@mail.gmail.com> Message-ID: <1d36917a0808281922ufebce46l9d8ba0a61b4a67ee@mail.gmail.com> On Thu, Aug 28, 2008 at 7:11 PM, Charles R Harris wrote: > I wonder if some of those deprecation warnings come from code generated by > f2py? That needs to be checked. Could you also try running the tests using > the python -OO startup flag? With -OO I get immediate failure because of __doc__ manipulations that assume __doc__ is a string. Is it worth spending the time trying to fix those in 0.6, or should I just do it for SciPy 0.7? From charlesr.harris at gmail.com Thu Aug 28 23:23:58 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 28 Aug 2008 21:23:58 -0600 Subject: [SciPy-dev] Roadmap scipy 0.7 In-Reply-To: <1d36917a0808281922ufebce46l9d8ba0a61b4a67ee@mail.gmail.com> References: <1d36917a0808281250p62dbfdd3u613f66e9120abaa1@mail.gmail.com> <1d36917a0808281922ufebce46l9d8ba0a61b4a67ee@mail.gmail.com> Message-ID: On Thu, Aug 28, 2008 at 8:22 PM, Alan McIntyre wrote: > On Thu, Aug 28, 2008 at 7:11 PM, Charles R Harris > wrote: > > I wonder if some of those deprecation warnings come from code generated > by > > f2py? That needs to be checked. Could you also try running the tests > using > > the python -OO startup flag? > > With -OO I get immediate failure because of __doc__ manipulations that > assume __doc__ is a string. Is it worth spending the time trying to > fix those in 0.6, or should I just do it for SciPy 0.7? Depends on the schedule, I don't think these are high priority bugs. You should probably open a ticket in any case. Probably for the deprecation warnings also. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.mcintyre at gmail.com Fri Aug 29 01:30:58 2008 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Thu, 28 Aug 2008 22:30:58 -0700 Subject: [SciPy-dev] Roadmap scipy 0.7 In-Reply-To: References: <1d36917a0808281250p62dbfdd3u613f66e9120abaa1@mail.gmail.com> <1d36917a0808281922ufebce46l9d8ba0a61b4a67ee@mail.gmail.com> Message-ID: <1d36917a0808282230m5b6eb7ffr8323446a12ef7a70@mail.gmail.com> On Thu, Aug 28, 2008 at 8:23 PM, Charles R Harris wrote: > Depends on the schedule, I don't think these are high priority bugs. You > should probably open a ticket in any case. Probably for the deprecation > warnings also. It looks like about half of the deprecation warnings (which appear whether or not -OO is used) are due to the use of items from numpy.testing that are scheduled for removal in NumPy 1.3. The other half are from the PyArray* functions that got deprecated in NumPy 1.2. There's a couple of UserWarnings as well. I don't know if they're expected or not--it seems like there's a lot of SciPy tests that liberally emit warnings (due to testing corner cases and such, I presume). For this particular issue, it's probably less work to make SciPy 0.7 work against NumPy 1.2 with a minimum of errors/warnings. I just opened a ticket for 0.7; should I open one for 0.6 as well? 
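The failure pattern is simple to illustrate (a minimal example; the
guard shown is one common fix):

def f(x):
    """Original docstring."""
    return x

# Under "python -OO" docstrings are stripped, so f.__doc__ is None and
# unguarded manipulation such as
#     f.__doc__ += " More details."
# raises a TypeError. A guard avoids it:
if f.__doc__ is not None:
    f.__doc__ += " More details."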
From benny.malengier at gmail.com Fri Aug 29 05:07:41 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Fri, 29 Aug 2008 11:07:41 +0200 Subject: [SciPy-dev] dae solvers In-Reply-To: References: Message-ID: I created a patch to scipy to add support for dae solvers: http://www.scipy.org/scipy/scipy/ticket/730 The backend I included is the netlib ddaspk.f solver. I chose for this as I need to solve large systems and I want to keep the speed of fortran with a minimum of overhead in python. It should be possible to add quickly the backend lsodi based on patch http://www.scipy.org/scipy/scipy/ticket/615 . Adding a backend to the ida solver in sundials should also be possible based on the pysundials package (advantage over pysundials would only be a single api definition ) Extension to add support of Krylov within ddaspk should also be easy with some overloading. Well, I can improve this (extra backend, ...) if it is considered for inclusion in scipy, otherwise it is just a way of sharing it in the hope the next person doesn't have to reinvent it all again. Greetings, Benny 2008/8/25 Rob Clewley > On Sun, Aug 24, 2008 at 8:18 AM, Benny Malengier > wrote: > > > Guess I should have googled a bit longer. > > Well, maybe a bit longer still :) There have been a couple of other > attempts, too. For instance, you might get some joy from the wrapped > version of Radau, which has a lighter interface and is part of a more > python-oriented environment. > > > As this pysundials is there, I do think it should be integrated somehow > in > > scipy. What use of ode class and odepack in scipy which interface old > > vode/lsode fortran progs when pysundials exists that interfaces sundials > and > > like that the new cvode and ida solvers? > > So perhaps ode.py should obtain a backend to that? Or should scipy just > no > > longer offer ode solvers... > > There has been some discussion about ode solvers in scipy recently, > but it didn't come to much. People are still busy arguing over how to > represent matrices properly, let alone getting in to how dynamical > systems should be supported. Someone tried starting a new interface to > the existing ode solvers in scipy recently, but I don't know how > that's getting along. I think it's called pyode and on google code. > > > Well, I'm not a scipy dev, just my 2 cents. It always amazes me how many > > duplication is going on. I'll investigate my options furter and make up > my > > mind next week on how to proceed. > > I mostly agree, but it's tricky to talk about duplication. Users in > different fields want different things from their packages, and have > different expectations about how many dependencies they're willing to > tolerate, how much of a GUI is supported, what kinds of application or > data formats are supported, etc. So there end up being some conscious > attempts to do similar things somewhat differently. And so that's not > necessarily a bad thing. > > -Rob > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
From nwagner at iam.uni-stuttgart.de  Fri Aug 29 05:14:11 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 29 Aug 2008 11:14:11 +0200
Subject: [SciPy-dev] dae solvers
In-Reply-To:
References:
Message-ID:

On Fri, 29 Aug 2008 11:07:41 +0200, "Benny Malengier" wrote:
> I created a patch to scipy to add support for dae solvers:
> http://www.scipy.org/scipy/scipy/ticket/730
>
> The backend I included is the netlib ddaspk.f solver.

You might be interested in

http://www.ma.ic.ac.uk/~jcash/IVP_software/readme.html

Cheers,
Nils

From benny.malengier at gmail.com  Fri Aug 29 05:29:05 2008
From: benny.malengier at gmail.com (Benny Malengier)
Date: Fri, 29 Aug 2008 11:29:05 +0200
Subject: [SciPy-dev] dae solvers
In-Reply-To:
References:
Message-ID:

2008/8/29 Nils Wagner

> You might be interested in
>
> http://www.ma.ic.ac.uk/~jcash/IVP_software/readme.html

Thanks for the pointer. Those would be nice backends to add (they already
have MATLAB implementations).

Benny

From Per.Brodtkorb at ffi.no  Fri Aug 29 05:46:45 2008
From: Per.Brodtkorb at ffi.no (Per.Brodtkorb at ffi.no)
Date: Fri, 29 Aug 2008 11:46:45 +0200
Subject: [SciPy-dev] ticket 422 applies for other methods: vectorize and *args bugs
In-Reply-To: <1cd32cbb0808281547sf4db7a1l53192b9af330863@mail.gmail.com>
References: <1cd32cbb0808281547sf4db7a1l53192b9af330863@mail.gmail.com>
Message-ID: <1ED225FF18AA8B48AC192F7E1D032C6E0C5C13@hbu-posten.ffi.no>

I think it is better to put the statements

    self.veccdf.nin = self.numargs+1
    self.vecfunc.nin = self.numargs+1

into the __init__ method of the rv_continuous class.

Per A.

-----Original message-----
From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org]
On behalf of josef.pktd at gmail.com
Sent: 29 August 2008 00:48
To: SciPy Developers List
Subject: [SciPy-dev] ticket 422 applies for other methods: vectorize and
*args bugs

I think I found more errors with vectorize (cdf) and *args in
scipy.stats.distributions.

When I try the following with scipy 0.6.0:

    print 'rice.cdf', stats.rice.cdf(5,1.5)

it fails with:

        raise ValueError, "mismatch between python function inputs"\
    ValueError: mismatch between python function inputs and received arguments

If I add, in class rv_continuous:

     def _cdf(self, x, *args):
+        self.veccdf.nin = self.numargs+1  #JP
         return self.veccdf(x,*args)

(I also needed to add)

     def _cdf_single_call(self, x, *args):
+        import scipy.integrate  #JP
         return scipy.integrate.quad(self._pdf, self.a, x, args=args)[0]

then the error is gone:

>>> print 'rice.cdf', stats.rice.cdf(5,1.5)
rice.cdf 0.999557350428

In order to get rice.rvs(1.5) to work, I also did:

     def _ppf(self, q, *args):
+        self.vecfunc.nin = self.numargs+1  #JP
         return self.vecfunc(q,*args)

     def _isf(self, q, *args):
+        self.vecfunc.nin = self.numargs+1  #JP
         return self.vecfunc(1.0-q,*args)

Now

    print 'rice.cdf', stats.rice.cdf(5,1.5)
    print 'rice.rvs', stats.rice.rvs([1.5])

works, but this does not:

    print 'rice.rvs', stats.rice.rvs([1.5])
    print 'rice.cdf', stats.rice.cdf(5,1.5)

cdf needs to be called before rvs. So my changes are not in the right
place: the cdf machinery needs to be initialized before I can call rvs.
But somewhere along these lines, it should be possible to make the generic
methods work correctly.
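[An aside on the mechanism behind josef's nin workaround: the
numpy.vectorize of that era inferred the wrapped function's input count
once, from the code object's argument count, which does not include *args;
the distribution methods therefore advertised too few inputs, and calling
them with shape parameters raised the mismatch error above. A small
illustration on current Python/NumPy, where vectorize counts the arguments
of each call instead, so the crash itself no longer reproduces; the names
here are stand-ins, not scipy internals:

    import numpy as np

    def _cdf(x, *args):
        # stand-in for a distribution method: x plus any shape parameters
        return x + sum(args)

    # *args does not contribute to co_argcount, which is (roughly) what
    # the 2008-era numpy.vectorize inspected to fix the input count nin:
    print(_cdf.__code__.co_argcount)    # 1, although calls pass two arguments

    # Current numpy.vectorize counts the arguments per call instead:
    veccdf = np.vectorize(_cdf)
    print(veccdf(5.0, 1.5))             # 6.5

This is why overriding .nin with numargs+1 (the shape parameters plus x)
fixed the old behavior.]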
This was with scipy 0.6.0, but I haven't seen any relevant changes in the
current trunk.

Josef

(For discrete distributions such as zipf, see my other email.)

From rob.clewley at gmail.com  Fri Aug 29 11:08:20 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Fri, 29 Aug 2008 11:08:20 -0400
Subject: [SciPy-dev] dae solvers
In-Reply-To:
References:
Message-ID:

On Fri, Aug 29, 2008 at 5:07 AM, Benny Malengier wrote:
> I created a patch to scipy to add support for dae solvers:
> http://www.scipy.org/scipy/scipy/ticket/730

This is cool. We have long-term plans to interface the Hairer and Wanner
DAE codes to PyDSTool so that they can be used without slow Python
callback functions, but this will be very helpful in the meantime.

From aisaac at american.edu  Fri Aug 29 11:54:03 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Fri, 29 Aug 2008 11:54:03 -0400
Subject: [SciPy-dev] NNLS, and contribution procedure question
In-Reply-To: <48B6A6BA.6080206@mineway.de>
References: <48B6A6BA.6080206@mineway.de>
Message-ID: <48B81B9B.9090301@american.edu>

>>> Uwe Schmitt wrote:
>>> The NNLS code is now available via SVN at
>>> http://public.procoders.net/nnls/nnls_with_f2py/
>>> How can I contribute this code now?

>> Alan Isaac wrote:
>> Did you get the needed information for this?

> Uwe Schmitt wrote:
> No.

Can a developer help Uwe out here? I believe Robert agreed this should be
in SciPy, so Uwe just needs access, or at least assistance. In the
meantime, Uwe, if the code can live where it is for a bit, can you open a
ticket with the URL for the code?

Thanks,
Alan

PS It would be good to have a "contribution procedures" page for people
like Uwe. I remain unclear whether there is an established procedure.

From charlesr.harris at gmail.com  Fri Aug 29 13:55:03 2008
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 29 Aug 2008 11:55:03 -0600
Subject: [SciPy-dev] Roadmap scipy 0.7
In-Reply-To: <1d36917a0808282230m5b6eb7ffr8323446a12ef7a70@mail.gmail.com>
References: <1d36917a0808281250p62dbfdd3u613f66e9120abaa1@mail.gmail.com>
	<1d36917a0808281922ufebce46l9d8ba0a61b4a67ee@mail.gmail.com>
	<1d36917a0808282230m5b6eb7ffr8323446a12ef7a70@mail.gmail.com>
Message-ID:

On Thu, Aug 28, 2008 at 11:30 PM, Alan McIntyre wrote:
> It looks like about half of the deprecation warnings (which appear
> whether or not -OO is used) are due to the use of items from
> numpy.testing that are scheduled for removal in NumPy 1.3. The other
> half are from the PyArray* functions that were deprecated in NumPy 1.2.
> There are a couple of UserWarnings as well. I don't know whether
> they're expected or not; it seems like a lot of SciPy tests liberally
> emit warnings (due to testing corner cases and such, I presume).

Because there is so little code involved, I'm thinking that we should just
leave the deprecated functions in numpy 1.3 and keep the warnings. I expect
you have a better idea of what to do about the testing functions than I do.

> For this particular issue, it's probably less work to make SciPy 0.7
> work against NumPy 1.2 with a minimum of errors/warnings. I just opened
> a ticket for 0.7; should I open one for 0.6 as well?

I'm a bit confused; isn't 0.6 the current release? If so, I think 0.7 is
the appropriate version for the fixes.

Chuck

From josef.pktd at gmail.com  Fri Aug 29 14:26:30 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 29 Aug 2008 14:26:30 -0400
Subject: [SciPy-dev] hypergeom or me?
Message-ID: <1cd32cbb0808291126q5972b9f7k5c7fc8ede8fc1ee5@mail.gmail.com>

Is this correct? With scipy 0.6.0:

>>> stats.hypergeom.rvs(33, 18, 6, size=10)
array([3, 4, 4, 4, 3, 3, 2, 2, 4, 3])
>>> stats.hypergeom.rvs(33, 6, 18, size=10)
array([24, 24, 25, 22, 24, 23, 24, 25, 25, 23])
>>> stats.hypergeom.pmf(range(20), 33, 18, 6)
array([ 0.00451891,  0.04880423,  0.18856179,  0.33522095,  0.29009506,
        0.11603802,  0.01676105,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ])
>>> stats.hypergeom.pmf(range(20), 33, 6, 18)
array([ 0.00451891,  0.04880423,  0.18856179,  0.33522095,  0.29009506,
        0.11603802,  0.01676105,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ])
>>> stats.hypergeom.cdf(range(20), 33, 18, 6)
array([ 0.00451891,  0.05332314,  0.24188492,  0.57710588,  0.86720093,
        0.98323895,  1.        ,  1.        ,  1.        ,  1.        ,
        1.        ,  1.        ,  1.        ,  1.        ,  1.        ,
        1.        ,  1.        ,  1.        ,  1.        ,  1.        ])
>>> stats.hypergeom.cdf(range(20), 33, 6, 18)
array([ 0.00451891,  0.05332314,  0.24188492,  0.57710588,  0.86720093,
        0.98323895,  1.        ,  1.        ,  1.        ,  1.        ,
        1.        ,  1.        ,  1.        ,  1.        ,  1.        ,
        1.        ,  1.        ,  1.        ,  1.        ,  1.        ])

Josef

From josef.pktd at gmail.com  Fri Aug 29 14:49:27 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Fri, 29 Aug 2008 14:49:27 -0400
Subject: [SciPy-dev] hypergeom or me?
In-Reply-To: <1cd32cbb0808291126q5972b9f7k5c7fc8ede8fc1ee5@mail.gmail.com>
References: <1cd32cbb0808291126q5972b9f7k5c7fc8ede8fc1ee5@mail.gmail.com>
Message-ID: <1cd32cbb0808291149o7822b90ftf0ca2db06efaeb3f@mail.gmail.com>

Almost done: there is no upper bound check in boltzmann, i.e. k <= N.
According to the docstring it should be

    pmf = 0 for k > N
    cdf = 1 for k > N

but instead cdf > 1:

>>> stats.boltzmann.rvs(0.5, 5, size=10)
array([4, 1, 4, 0, 0, 1, 0, 0, 1, 1])
>>> stats.boltzmann.pmf(range(20), 0.5, 5)
array([  4.28655529e-01,   2.59992721e-01,   1.57693556e-01,
         9.56459768e-02,   5.80122174e-02,   3.51861885e-02,
         2.13415021e-02,   1.29442754e-02,   7.85109987e-03,
         4.76193279e-03,   2.88825823e-03,   1.75181717e-03,
         1.06253082e-03,   6.44457522e-04,   3.90883246e-04,
         2.37082673e-04,   1.43797910e-04,   8.72178413e-05,
         5.29002948e-05,   3.20856507e-05])
>>> stats.boltzmann.cdf(range(20), 0.5, 5)
array([ 0.42865553,  0.68864825,  0.84634181,  0.94198778,  1.        ,
        1.03518619,  1.05652769,  1.06947197,  1.07732307,  1.082085  ,
        1.08497326,  1.08672507,  1.0877876 ,  1.08843206,  1.08882295,
        1.08906003,  1.08920383,  1.08929104,  1.08934394,  1.08937603])

From alan.mcintyre at gmail.com  Sat Aug 30 14:04:37 2008
From: alan.mcintyre at gmail.com (Alan McIntyre)
Date: Sat, 30 Aug 2008 11:04:37 -0700
Subject: [SciPy-dev] Roadmap scipy 0.7
In-Reply-To:
References: <1d36917a0808281250p62dbfdd3u613f66e9120abaa1@mail.gmail.com>
	<1d36917a0808281922ufebce46l9d8ba0a61b4a67ee@mail.gmail.com>
	<1d36917a0808282230m5b6eb7ffr8323446a12ef7a70@mail.gmail.com>
Message-ID: <1d36917a0808301104v441a64ebpc023eccadcc2a8a9@mail.gmail.com>

On Fri, Aug 29, 2008 at 10:55 AM, Charles R Harris wrote:
>> For this particular issue, it's probably less work to make SciPy 0.7
>> work against NumPy 1.2 with a minimum of errors/warnings. I just
>> opened a ticket for 0.7; should I open one for 0.6 as well?
>
> I'm a bit confused; isn't 0.6 the current release? If so, I think 0.7
> is the appropriate version for the fixes.

The initial email I was responding to was Jarrod wondering whether SciPy
0.6 worked with NumPy trunk; sorry for any confusion. I'll make the -OO
__doc__ fixes in SciPy 0.7 over the weekend.
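[A closing note on the hypergeom thread above: the pmf/cdf agreement josef
observes is mathematically expected, because the hypergeometric pmf
P(k) = C(n,k) C(M-n,N-k) / C(M,N) is symmetric under swapping n (tagged
items) and N (draws). An exact check of that identity, written in modern
Python (math.comb needs Python 3.8+); by this reading, the suspicious part
of the session is rvs, whose samples for the swapped arguments fall far
outside the support 0..min(n, N):

    from math import comb          # exact integer binomial coefficients
    from fractions import Fraction

    def hypergeom_pmf(k, M, n, N):
        # population M, n tagged items, N draws, k tagged items drawn
        return Fraction(comb(n, k) * comb(M - n, N - k), comb(M, N))

    M = 33
    for k in range(7):
        # exact equality, no floating-point tolerance needed
        assert hypergeom_pmf(k, M, 18, 6) == hypergeom_pmf(k, M, 6, 18)

    print(float(hypergeom_pmf(2, M, 18, 6)))   # 0.18856..., as in the session

So pmf and cdf were honoring the symmetry, while rvs apparently was not.]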