From jdmc80 at hotmail.com Tue Sep 1 10:23:09 2015
From: jdmc80 at hotmail.com (Joseph Codadeen)
Date: Tue, 1 Sep 2015 14:23:09 +0000
Subject: [SciPy-User] RIFF header vs Scipy for odd length payloads
In-Reply-To:
References:
Message-ID:

Hi,

(tried posting this before with no luck, retrying)

I am a scipy newbie.

The RIFF specification states;

http://www.kk.iij4u.or.jp/~kondo/wave/mpidata.txt (definitive guide?)

ckSize    A 32-bit unsigned value identifying the size of ckData. This
          size value does not include the size of the ckID or ckSize
          fields or the pad byte at the end of ckData.

ckData    Binary data of fixed or variable size. The start of ckData is
          word-aligned with respect to the start of the RIFF file. If
          the chunk size is an odd number of bytes, a pad byte with
          value zero is written after ckData. Word aligning improves
          access speed (for chunks resident in memory) and maintains
          compatibility with EA IFF. The ckSize value does not include
          the pad byte.

16-bit unsigned unsigned int quantity in Intel format

However, if I do this and read my HFP wav file via scipy,
framerate, data = scipy.io.wavfile.read(filepath)
it complains with;
string size must be a multiple of element size
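For reference, a stdlib-only sketch of the file shape being described here (the helper name and values are illustrative; the odd 48683-byte payload is chosen to match the slen in the debugging output below). It builds a minimal WAV laid out exactly as the spec quoted above says: odd ckSize, pad byte written after ckData but not counted in ckSize, file length that does include the pad:

```python
import struct

def make_odd_wav(payload_len=48683, rate=8000):
    """Build a minimal spec-compliant 16-bit mono PCM WAV whose data
    chunk has an odd ckSize followed by a pad byte."""
    payload = bytes(payload_len)  # silence; deliberately odd (torn last sample)
    fmt = struct.pack('<HHIIHH', 1, 1, rate, rate * 2, 2, 16)  # PCM, mono, 16-bit
    data = b'data' + struct.pack('<I', len(payload)) + payload
    if len(payload) % 2:
        data += b'\x00'  # pad byte after ckData, NOT counted in ckSize
    body = b'WAVE' + b'fmt ' + struct.pack('<I', len(fmt)) + fmt + data
    return b'RIFF' + struct.pack('<I', len(body)) + body  # RIFF size counts the pad

wav = make_odd_wav()
i = wav.index(b'data')
(ck_size,) = struct.unpack_from('<I', wav, i + 4)
# ckSize stays odd while the file itself is an even number of bytes:
print(ck_size % 2, len(wav) % 2)  # 1 0
```

A file written this way is exactly what scipy 0.16's wavfile reader rejects with the error above.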
A bit more debugging added to my test code and numpy (multiarray/ctors.c) gives:

Sample file is 16 bits, note that 24 bit samples do not work in scipy
Got error type "ValueError"
Analysis of the wav file encountered a problem: "slen: 48683, itemsize: 2 - string size must be a multiple of element size"

i.e. my payload length is odd, reflecting the actual payload as per the spec. The length of the file reflects the additional pad byte.

So for odd length payloads;
* we have the spec saying do not add the pad byte to the payload length, but only to the file length,
* scipy likes the payload length to be even.
* If I add the pad byte to the payload length and the file length, scipy is happy.
* If I want to follow the spec then no one can load my files into scipy.

Am I misunderstanding something?

What is the correct thing to do in this case?
* Follow the spec
* Follow scipy
* Fix scipy

I believe it should be to fix scipy unless I am looking at the wrong spec. The spec came from
http://www.digitalpreservation.gov/formats/fdd/fdd000001.shtml

I have tried this on scipy version 0.16.0 on Ubuntu 14.04 LTS

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From warren.weckesser at gmail.com Tue Sep 1 14:20:00 2015
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Tue, 1 Sep 2015 14:20:00 -0400
Subject: [SciPy-User] RIFF header vs Scipy for odd length payloads
In-Reply-To:
References:
Message-ID:

On Tue, Sep 1, 2015 at 10:23 AM, Joseph Codadeen wrote:
> Hi,
>
> (tried posting this before with no luck, retrying)
>
> I am a scipy newbie.
>
> The RIFF specification states;
>
> http://www.kk.iij4u.or.jp/~kondo/wave/mpidata.txt (definitive guide?)
>
> ckSize A 32-bit unsigned value identifying the
> size of ckData. This size value does not
> include the size of the ckID or ckSize
> fields or the pad byte at the end of
> ckData.
> ckData Binary data of fixed or variable size.
The > start of ckData is word-aligned with > respect to the start of the RIFF file. If > the chunk size is an odd number of bytes, a > pad byte with value zero is written after > ckData. Word aligning improves access speed > (for chunks resident in memory) and > maintains compatibility with EA IFF. The > ckSize value does not include the pad byte. > > 16-bit unsigned unsigned int > quantity in Intel > format > > However, if I do this and read my HFP wav file via scipy, >
framerate, data = scipy.io.wavfile.read(filepath)
> > it complains with; >
string size must be a multiple of element size
> > A bit more debugging added to my test code and numpy (multiarray/ctors.c) > gives: > > Sample file is 16 bits, note that 24 bit samples do not work in scipy > Got error type "ValueError" > Analysis of the wav file encountered a problem: "slen: 48683, itemsize: 2 > - string size must be a multiple of element size" > > i.e. my payload length is odd, reflecting the actual payload as per the > the spec. The length of the file reflects the additional pad byte. > > So for odd length payloads; > * we have the spec saying do not add the pad byte to the payload length, > but only to the file length, > * scipy likes the payload length to be even. > * If I add the pad byte to to the payload length and the file length, > scipy is happy. > * If I want to follow the spec then no one can load my files into scipy. > > Am I misunderstanding something? > > What is the correct thing to do in this case? > * Follow the spec > * Follow scipy > * Fix scipy > > I believe it should be to fix scipy unless I am looking at the wrong spec. > The spec came from > http://www.digitalpreservation.gov/formats/fdd/fdd000001.shtml > > I have tried this on scipy version 0.16.0 on Ubuntu 14.04 LTS > > Thanks. > Could you provide a link to a wav file that demonstrates the problem? How many bits per sample is your file? (Sorry, the answer is not clear to me from your email.) Scipy's wav reader does not support 24 bit files. If your file is 24 bit, you can try wavio, a small module I wrote specifically to read 24 bit wav files into a numpy array: https://github.com/WarrenWeckesser/wavio Warren P.S. For anyone reading this, there is also an issue on github: https://github.com/scipy/scipy/issues/5175 > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
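In the meantime, one stdlib-only workaround sketch for reading such a file (the helper name and sample values here are made up): pad the data payload to a multiple of the element size before unpacking, which is the accommodation being asked of scipy's reader:

```python
import struct

def read_pcm16_data(wav_bytes):
    """Pull 16-bit samples out of the 'data' chunk, tolerating a
    spec-compliant odd ckSize by zero-padding the torn last sample."""
    i = wav_bytes.index(b'data')  # naive chunk scan; enough for a sketch
    (ck_size,) = struct.unpack_from('<I', wav_bytes, i + 4)
    payload = wav_bytes[i + 8:i + 8 + ck_size]
    if len(payload) % 2:          # odd ckSize: element size is 2 bytes
        payload += b'\x00'
    return [s for (s,) in struct.iter_unpack('<h', payload)]

# A 5-byte (odd) payload: 1000, -1000, and half of a third sample.
payload = struct.pack('<3h', 1000, -1000, 32767)[:5]
# Toy container: RIFF size and fmt chunk elided for brevity.
wav = b'RIFF\x00\x00\x00\x00WAVE' + b'data' + struct.pack('<I', 5) + payload + b'\x00'
print(read_pcm16_data(wav))  # [1000, -1000, 255]
```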
URL:

From jdmc80 at hotmail.com Tue Sep 1 17:58:13 2015
From: jdmc80 at hotmail.com (Joseph Codadeen)
Date: Tue, 1 Sep 2015 21:58:13 +0000
Subject: [SciPy-User] RIFF header vs Scipy for odd length payloads
In-Reply-To:
References: , ,
Message-ID:

Hi,

Sample file is 16 bits

As for a sample, not with me no. But you may create one simply by taking any HFP sample and playing with the RIFF header. Make the payload length an odd number, adjusting the data, add a pad byte and adjust only the file length to represent this now even numbered file length, i.e. 36 byte header offset + odd payload length + 1 pad byte.

I amend mine easily in notepad++.

Read in the file;
scipy.io.wavfile.read(filepath)
and scipy should complain as numpy doesn't like odd length files.

I will try to share a sample tomorrow but it is simply an audio tone being played.

As for the link on github, that was my original posting.

Thanks.
Joseph

Date: Tue, 1 Sep 2015 14:20:00 -0400
From: warren.weckesser at gmail.com
To: scipy-user at scipy.org
Subject: Re: [SciPy-User] RIFF header vs Scipy for odd length payloads

On Tue, Sep 1, 2015 at 10:23 AM, Joseph Codadeen wrote:

Hi,

(tried posting this before with no luck, retrying)

I am a scipy newbie.

The RIFF specification states;

http://www.kk.iij4u.or.jp/~kondo/wave/mpidata.txt (definitive guide?)

ckSize A 32-bit unsigned value identifying the size of ckData. This size value does not include the size of the ckID or ckSize fields or the pad byte at the end of ckData. ckData Binary data of fixed or variable size. The start of ckData is word-aligned with respect to the start of the RIFF file. If the chunk size is an odd number of bytes, a pad byte with value zero is written after ckData. Word aligning improves access speed (for chunks resident in memory) and maintains compatibility with EA IFF. The ckSize value does not include the pad byte. 16-bit unsigned unsigned int quantity in Intel format

However, if I do this and read my HFP wav file via scipy,
framerate, data = scipy.io.wavfile.read(filepath)
it complains with;
string size must be a multiple of element size
A bit more debugging added to my test code and numpy (multiarray/ctors.c) gives: Sample file is 16 bits, note that 24 bit samples do not work in scipy Got error type "ValueError" Analysis of the wav file encountered a problem: "slen: 48683, itemsize: 2 - string size must be a multiple of element size" i.e. my payload length is odd, reflecting the actual payload as per the the spec. The length of the file reflects the additional pad byte. So for odd length payloads; * we have the spec saying do not add the pad byte to the payload length, but only to the file length, * scipy likes the payload length to be even.* If I add the pad byte to to the payload length and the file length, scipy is happy.* If I want to follow the spec then no one can load my files into scipy. Am I misunderstanding something? What is the correct thing to do in this case?* Follow the spec* Follow scipy* Fix scipy I believe it should be to fix scipy unless I am looking at the wrong spec. The spec came from http://www.digitalpreservation.gov/formats/fdd/fdd000001.shtml I have tried this on scipy version 0.16.0 on Ubuntu 14.04 LTS Thanks. Could you provide a link to a wav file that demonstrates the problem? How many bits per sample is your file? (Sorry, the answer is not clear to me from your email.) Scipy's wav reader does not support 24 bit files. If your file is 24 bit, you can try wavio, a small module I wrote specifically to read 24 bit wav files into a numpy array: https://github.com/WarrenWeckesser/wavio Warren P.S. For anyone reading this, there is also an issue on github: https://github.com/scipy/scipy/issues/5175 _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... 
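The hand edit described in the message above can also be scripted. A sketch, assuming the plain 44-byte PCM header that Python's wave module writes (RIFF size at offset 4, data ckSize at offset 40):

```python
import io
import struct
import wave

def make_spec_compliant_odd(wav_bytes):
    """Rewrite a canonical WAV into the layout the RIFF spec describes:
    odd ckSize, pad byte appended after ckData, pad byte counted in the
    file length but not in ckSize."""
    (ck_size,) = struct.unpack_from('<I', wav_bytes, 40)
    payload = wav_bytes[44:44 + ck_size - 1]       # drop one byte -> odd length
    out = bytearray(wav_bytes[:44])
    struct.pack_into('<I', out, 40, len(payload))  # odd ckSize, per the spec
    out += payload + b'\x00'                       # pad byte after ckData
    struct.pack_into('<I', out, 4, len(out) - 8)   # file length includes the pad
    return bytes(out)

# Build an ordinary little test clip with the stdlib, then mangle it:
buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(struct.pack('<4h', 0, 100, -100, 0))
odd = make_spec_compliant_odd(buf.getvalue())
# scipy 0.16's wavfile.read should now fail on `odd` with the
# "string size must be a multiple of element size" ValueError.
```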
URL: From warren.weckesser at gmail.com Tue Sep 1 20:52:28 2015 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Tue, 1 Sep 2015 20:52:28 -0400 Subject: [SciPy-User] RIFF header vs Scipy for odd length payloads In-Reply-To: References: Message-ID: On Tue, Sep 1, 2015 at 5:58 PM, Joseph Codadeen wrote: > Hi, > > Sample file is 16 bits > > As for a sample, not with me no. But you may create one simply by taking > any HFP sample and playing with the RIFF > What is an "HFP sample"? > header. Make the payload length an odd number adjusting the data, add a > pad byte and adjust only the file length to represent this now even > numbered file length, i.e. 36 byte header offset + odd payload length + 1 > pad byte. > > I amend mine easily in notepad++ . > > Read in the file; > scipy.io.wavfile.read(filepath) > > and scipy should complain as numpy doesn't like odd length files. > > I will try to share a sample tomorrow but it is simply an audio tone being > played. > > As for the link on github, that was my original posting. > > Thanks. > Joseph > > > ------------------------------ > Date: Tue, 1 Sep 2015 14:20:00 -0400 > From: warren.weckesser at gmail.com > To: scipy-user at scipy.org > Subject: Re: [SciPy-User] RIFF header vs Scipy for odd length payloads > > > > > On Tue, Sep 1, 2015 at 10:23 AM, Joseph Codadeen > wrote: > > Hi, > > (tried posting this before with no luck, retrying) > > I am a scipy newbie. > > The RIFF specification states; > > http://www.kk.iij4u.or.jp/~kondo/wave/mpidata.txt (definitive guide?) > > ckSize A 32-bit unsigned value identifying the > size of ckData. This size value does not > include the size of the ckID or ckSize > fields or the pad byte at the end of > ckData. > ckData Binary data of fixed or variable size. The > start of ckData is word-aligned with > respect to the start of the RIFF file. If > the chunk size is an odd number of bytes, a > pad byte with value zero is written after > ckData. 
Word aligning improves access speed > (for chunks resident in memory) and > maintains compatibility with EA IFF. The > ckSize value does not include the pad byte. > > 16-bit unsigned unsigned int > quantity in Intel > format > > However, if I do this and read my HFP wav file via scipy, >
framerate, data = scipy.io.wavfile.read(filepath)
> > it complains with; >
string size must be a multiple of element size
> > A bit more debugging added to my test code and numpy (multiarray/ctors.c) > gives: > > Sample file is 16 bits, note that 24 bit samples do not work in scipy > Got error type "ValueError" > Analysis of the wav file encountered a problem: "slen: 48683, itemsize: 2 > - string size must be a multiple of element size" > > i.e. my payload length is odd, reflecting the actual payload as per the > the spec. The length of the file reflects the additional pad byte. > > So for odd length payloads; > * we have the spec saying do not add the pad byte to the payload length, > but only to the file length, > * scipy likes the payload length to be even. > * If I add the pad byte to to the payload length and the file length, > scipy is happy. > * If I want to follow the spec then no one can load my files into scipy. > > Am I misunderstanding something? > > What is the correct thing to do in this case? > * Follow the spec > * Follow scipy > * Fix scipy > > I believe it should be to fix scipy unless I am looking at the wrong spec. > The spec came from > http://www.digitalpreservation.gov/formats/fdd/fdd000001.shtml > > I have tried this on scipy version 0.16.0 on Ubuntu 14.04 LTS > > Thanks. > > > > Could you provide a link to a wav file that demonstrates the problem? > > How many bits per sample is your file? (Sorry, the answer is not clear to > me from your email.) Scipy's wav reader does not support 24 bit files. > If your file is 24 bit, you can try wavio, a small module I wrote > specifically to read 24 bit wav files into a numpy array: > https://github.com/WarrenWeckesser/wavio > > > Warren > > P.S. 
For anyone reading this, there is also an issue on github:
> https://github.com/scipy/scipy/issues/5175
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________ SciPy-User mailing list
> SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From johann.cohentanugi at gmail.com Thu Sep 10 12:52:12 2015
From: johann.cohentanugi at gmail.com (Johann Cohen-Tanugi)
Date: Thu, 10 Sep 2015 18:52:12 +0200
Subject: [SciPy-User] issue pickling an interp1d object
Message-ID: <55F1B53C.2080804@gmail.com>

Dear Scipy-ers,
I am using scipy (0.15.1) to interpolate a fairly complicated double integral for several parameters, for later use in yet a third integral. The pickling is thus of a dict of interpolators. When I am using InterpolatedUnivariateSpline my code runs smoothly and dumps a pickled file.
But when I use interp1d (with default protocol 0), I crash :

Traceback (most recent call last):
  pickle.dump( interpolators, f )
  File "/usr/lib/python2.7/pickle.py", line 1370, in dump
    Pickler(file, protocol).dump(obj)
  File "/usr/lib/python2.7/pickle.py", line 224, in dump
    self.save(obj)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib/python2.7/pickle.py", line 663, in _batch_setitems
    save(v)
  File "/usr/lib/python2.7/pickle.py", line 306, in save
    rv = reduce(self.proto)
  File "/usr/lib/python2.7/copy_reg.py", line 77, in _reduce_ex
    raise TypeError("a class that defines __slots__ without "
TypeError: a class that defines __slots__ without defining __getstate__ cannot be pickled

When I set the protocol to -1, I get a different crash :

  pickle.dump( interpolators, f, protocol=-1 )
  File "/usr/lib/python2.7/pickle.py", line 1370, in dump
    Pickler(file, protocol).dump(obj)
  File "/usr/lib/python2.7/pickle.py", line 224, in dump
    self.save(obj)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib/python2.7/pickle.py", line 681, in _batch_setitems
    save(v)
  File "/usr/lib/python2.7/pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "/usr/lib/python2.7/pickle.py", line 419, in save_reduce
    save(state)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 548, in save_tuple
    save(element)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/usr/lib/python2.7/pickle.py", line 681, in _batch_setitems
    save(v)
  File "/usr/lib/python2.7/pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "/usr/lib/python2.7/pickle.py", line 396, in save_reduce
    save(cls)
  File "/usr/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/usr/lib/python2.7/pickle.py", line 748, in save_global
    (obj, module, name))
pickle.PicklingError: Can't pickle <type 'instancemethod'>: it's not found as __builtin__.instancemethod

Does that ring a bell to anyone, before I start simplifying my code to provide this list with a test case?
Thanks a lot in advance,
Johann

From max at shron.net Thu Sep 10 15:15:37 2015
From: max at shron.net (Max Shron)
Date: Thu, 10 Sep 2015 15:15:37 -0400
Subject: [SciPy-User] issue pickling an interp1d object
In-Reply-To: <55F1B53C.2080804@gmail.com>
References: <55F1B53C.2080804@gmail.com>
Message-ID:

This isn't exactly an answer, but: interp1d itself is stateless, given its inputs. Why not pickle a dictionary of inputs, pass around the pickled dictionary, then give it as **kwargs on the other side? Something like:

pickle.dump({'x': [1,2,3], 'y': [4,5,6], 'kind': 'linear'}, file_obj)
params = pickle.load(file_obj)
ip = interp1d(**params)

On Thu, Sep 10, 2015 at 12:52 PM, Johann Cohen-Tanugi <johann.cohentanugi at gmail.com> wrote:
> Dear Scipy-ers,
> I am using scipy (0.15.1) to interpolate a fairly complicated double
> integral for several parameters, for later use in yet a third integral.
> The pickling is thus of a dict of interpolators. When I am using
> InterpolatedUnivariateSpline my code runs smoothly and dumps a pickled
> file.
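A runnable sketch of the dict-of-inputs suggestion above (the values are placeholders): pickle only interp1d's constructor arguments, which are plain lists and strings, and rebuild the interpolator after loading. The scipy call is left as a comment so the sketch stays stdlib-only:

```python
import io
import pickle

# interp1d's own arguments are picklable even when the object is not.
params = {'x': [1.0, 2.0, 3.0], 'y': [4.0, 5.0, 6.0], 'kind': 'linear'}

buf = io.BytesIO()
pickle.dump(params, buf)   # ship/store this instead of the interpolator
buf.seek(0)
restored = pickle.load(buf)

# On the consuming side, rebuild the interpolator:
#   from scipy.interpolate import interp1d
#   ip = interp1d(**restored)
print(restored == params)  # True
```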
But when I use interp1d (with default protocol 0), I crash : > Traceback (most recent call last): > pickle.dump( interpolators, f ) > File "/usr/lib/python2.7/pickle.py", line 1370, in dump > Pickler(file, protocol).dump(obj) > File "/usr/lib/python2.7/pickle.py", line 224, in dump > self.save(obj) > File "/usr/lib/python2.7/pickle.py", line 286, in save > f(self, obj) # Call unbound method with explicit self > File "/usr/lib/python2.7/pickle.py", line 649, in save_dict > self._batch_setitems(obj.iteritems()) > File "/usr/lib/python2.7/pickle.py", line 663, in _batch_setitems > save(v) > File "/usr/lib/python2.7/pickle.py", line 306, in save > rv = reduce(self.proto) > File "/usr/lib/python2.7/copy_reg.py", line 77, in _reduce_ex > raise TypeError("a class that defines __slots__ without " > TypeError: a class that defines __slots__ without defining __getstate__ > cannot be pickled > > When I set the protocol to -1, I get a different crash : > pickle.dump( interpolators, f, protocol=-1 ) > File "/usr/lib/python2.7/pickle.py", line 1370, in dump > Pickler(file, protocol).dump(obj) > File "/usr/lib/python2.7/pickle.py", line 224, in dump > self.save(obj) > File "/usr/lib/python2.7/pickle.py", line 286, in save > f(self, obj) # Call unbound method with explicit self > File "/usr/lib/python2.7/pickle.py", line 649, in save_dict > self._batch_setitems(obj.iteritems()) > File "/usr/lib/python2.7/pickle.py", line 681, in _batch_setitems > save(v) > File "/usr/lib/python2.7/pickle.py", line 331, in save > self.save_reduce(obj=obj, *rv) > File "/usr/lib/python2.7/pickle.py", line 419, in save_reduce > save(state) > File "/usr/lib/python2.7/pickle.py", line 286, in save > f(self, obj) # Call unbound method with explicit self > File "/usr/lib/python2.7/pickle.py", line 548, in save_tuple > save(element) > File "/usr/lib/python2.7/pickle.py", line 286, in save > f(self, obj) # Call unbound method with explicit self > File "/usr/lib/python2.7/pickle.py", line 649, in save_dict > 
self._batch_setitems(obj.iteritems()) > File "/usr/lib/python2.7/pickle.py", line 681, in _batch_setitems > save(v) > File "/usr/lib/python2.7/pickle.py", line 331, in save > self.save_reduce(obj=obj, *rv) > File "/usr/lib/python2.7/pickle.py", line 396, in save_reduce > save(cls) > File "/usr/lib/python2.7/pickle.py", line 286, in save > f(self, obj) # Call unbound method with explicit self > File "/usr/lib/python2.7/pickle.py", line 748, in save_global > (obj, module, name)) > pickle.PicklingError: Can't pickle : it's not > found as __builtin__.instancemethod > > Does that ring a bell to anyone, before I start simplifying my code to > provide this list with a test case? > Thanks a lot in advance, > Johann > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeffreback at gmail.com Fri Sep 11 13:49:35 2015 From: jeffreback at gmail.com (Jeff Reback) Date: Fri, 11 Sep 2015 13:49:35 -0400 Subject: [SciPy-User] ANN: pandas v0.17.0rc1 - RELEASE CANDIDATE Message-ID: Hi, I'm pleased to announce the availability of the first release candidate of Pandas 0.17.0. Please try this RC and report any issues here: Pandas Issues We will be releasing officially in 1-2 weeks or so. **RELEASE CANDIDATE 1** This is a major release from 0.16.2 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version. 
Highlights include:

- Release the Global Interpreter Lock (GIL) on some cython operations, see here
- Plotting methods are now available as attributes of the .plot accessor, see here
- The sorting API has been revamped to remove some long-time inconsistencies, see here
- Support for a datetime64[ns] with timezones as a first-class dtype, see here
- The default for to_datetime will now be to raise when presented with unparseable formats, previously this would return the original input, see here
- The default for dropna in HDFStore has changed to False, to store by default all rows even if they are all NaN, see here
- Support for Series.dt.strftime to generate formatted strings for datetime-likes, see here
- Development installed versions of pandas will now have PEP440 compliant version strings GH9518
- Development support for benchmarking with the Air Speed Velocity library GH8316
- Support for reading SAS xport files, see here
- Removal of the automatic TimeSeries broadcasting, deprecated since 0.8.0, see here

See the Whatsnew for much more information.

Best way to get this is to install via conda from our development channel. Builds for osx-64, linux-64, win-64 for Python 2.7 and Python 3.4 are all available.

conda install pandas -c pandas

Thanks to all who made this release happen. It is a very large release!

Jeff
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ndbecker2 at gmail.com Fri Sep 25 08:50:48 2015
From: ndbecker2 at gmail.com (Neal Becker)
Date: Fri, 25 Sep 2015 08:50:48 -0400
Subject: [SciPy-User] meaning of parameters for scipy.special.pro_ang1
Message-ID:

I'm trying to compare:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.pro_ang1.html#scipy.special.pro_ang1
which lists 4 parameters: (m, n, c, x)

with this reference:
https://www.researchgate.net/publication/223606627_Prolate_spheroidal_wave_functions_an_introduction_to_the_Slepian_series_and_its_properties
which refers to 4 parameters (see section 2): The continuous time parameter t, the order, n, of the function, the interval on which the function is known, t0, and the bandwidth parameter c. The bandwidth parameter is given by

    c = t0 Ω,    (3)

where Ω is the finite bandwidth or cutoff frequency of ψn(t) of a given order n.

Unfortunately the scipy doc doesn't give a description of its parameters or any reference.

From charlesr.harris at gmail.com Sun Sep 27 14:24:33 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 27 Sep 2015 12:24:33 -0600
Subject: [SciPy-User] Numpy 1.10.0rc2 coming Monday, Sep 28.
Message-ID:

Hi All,

Just a heads up. If you have encountered any unreported errors with rc1, please let us know.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gael.varoquaux at normalesup.org Mon Sep 28 17:15:36 2015
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 28 Sep 2015 23:15:36 +0200
Subject: [SciPy-User] New version of "scipy lecture notes"
Message-ID: <20150928211536.GI2445658@phare.normalesup.org>

Dear Pythonistas,

We have just released a new version of the "scipy lecture notes":
http://www.scipy-lectures.org/

These are a consistent set of materials to learn the core aspects of the scientific Python ecosystem, from beginner to expert. They are written and maintained by a set of volunteers and published under a CC-BY license.
Highlights of the latest version include:

* a chapter giving an introduction to statistics in Python
* a new layout with emphasis on readability, including on small devices
* fully doctested for Python 2 and 3 compatibility

We hope that you will find these notes useful, for you, your colleagues, or your students.

Gaël

From charlesr.harris at gmail.com Tue Sep 29 00:38:42 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 28 Sep 2015 22:38:42 -0600
Subject: [SciPy-User] Numpy 1.10.0rc2 released
Message-ID:

Hi all,

I'm pleased to announce the availability of Numpy 1.10.0rc2. Sources and 32 bit binary packages for Windows may be found at Sourceforge. There have been a few fixes since rc1. If there are no more problems I hope to release the final in a week or so.

Cheers
Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From t.howells42 at gmail.com Tue Sep 29 08:43:07 2015
From: t.howells42 at gmail.com (Thomas Howells)
Date: Tue, 29 Sep 2015 12:43:07 +0000
Subject: [SciPy-User] ODR multiresponse multidimensional
Message-ID:

Dear Pythonistas,

I'm new to the mailing list, my question here is related to scipy's Orthogonal Distance Regression (ODR) wrapper module. My apologies if you've received this before, I had some trouble with the mail delivery system.

From perusal of the documentation for the wrapper and the underlying Fortran routines, it seems the Fortran code can handle a dataset that is both multiresponse and multidimensional. I have such a dataset that I would like to try and use the algorithm for; however as far as I can tell the ODR wrapper doesn't have the machinery to support this usage.

It's possible I've missed something, so I'm interested if anyone has any experience with this sort of problem. If not I may dig into the wrapper a bit more and see if the functionality I want could be added.

Thanks,
Thomas Howells
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Tue Sep 29 08:59:43 2015
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 29 Sep 2015 13:59:43 +0100
Subject: [SciPy-User] ODR multiresponse multidimensional
In-Reply-To:
References:
Message-ID:

On Tue, Sep 29, 2015 at 1:43 PM, Thomas Howells wrote:
> > Dear Pythonistas,
> >
> > I'm new to the mailing list, my question here is related to scipy's
> > Orthogonal Distance Regression (ODR) wrapper module. My apologies if you've
> > received this before, I had some trouble with the mail delivery system.
> >
> > From perusal of the documentation for the wrapper and the underlying
> > Fortran routines, it seems the Fortran code can handle a dataset that is
> > both multiresponse and multidimensional. I have such a dataset that I would
> > like to try and use the algorithm for; however as far as I can tell the ODR
> > wrapper doesn't have the machinery to support this usage.

What do you mean by "both multiresponse and multidimensional"? That the model is a function `f(x; beta) -> y` such that x and y are each vectors? Yes, it certainly supports this, and I think the docstrings are pretty clear about it. What did you read that makes you think otherwise?

http://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.Model.html

--
Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From thomas.robitaille at gmail.com Tue Sep 29 09:59:30 2015
From: thomas.robitaille at gmail.com (Thomas Robitaille)
Date: Tue, 29 Sep 2015 15:59:30 +0200
Subject: [SciPy-User] ANN: numtraits v0.2
Message-ID:

Hi everyone,

(apologies if you already saw this on the numpy-discussion list. I am resending it to the scipy-user and ipython-user mailing lists since these were down last week)

We have released a small experimental package called numtraits that builds on top of the traitlets package and provides a NumericalTrait class that can be used to validate properties such as:

* number of dimensions (for arrays)
* shape (for arrays)
* domain (e.g.
positive, negative, range of values)
* units (with support for astropy.units, pint, and quantities)

The idea is to be able to write a class like:

class Sphere(HasTraits):
    radius = NumericalTrait(domain='strictly-positive', ndim=0)
    position = NumericalTrait(shape=(3,))

and all the validation will then be done automatically when the user sets 'radius' or 'position'. In addition, tuples and lists can get automatically converted to arrays, and default values can be specified.

You can read more about the package and see examples of it in use here:

https://github.com/astrofrog/numtraits

and it can be easily installed with

pip install numtraits

The package supports both Python 3.3+ and Legacy Python (2.7) :)

At this point, we would be very interested in feedback - the package is still very young and we can still change the API if needed. Please open issues with suggestions!

Cheers,
Tom and Francesco

From t.howells42 at gmail.com Tue Sep 29 10:16:09 2015
From: t.howells42 at gmail.com (Thomas Howells)
Date: Tue, 29 Sep 2015 14:16:09 +0000
Subject: [SciPy-User] ODR multiresponse multidimensional
In-Reply-To:
References:
Message-ID:

On Tue, Sep 29, 2015 at 2:00 PM Robert Kern wrote:
> On Tue, Sep 29, 2015 at 1:43 PM, Thomas Howells wrote:
> >
> > Dear Pythonistas,
> >
> > I'm new to the mailing list, my question here is related to scipy's
> > Orthogonal Distance Regression (ODR) wrapper module. My apologies if you've
> > received this before, I had some trouble with the mail delivery system.
> >
> > From perusal of the documentation for the wrapper and the underlying
> > Fortran routines, it seems the Fortran code can handle a dataset that is
> > both multiresponse and multidimensional. I have such a dataset that I would
> > like to try and use the algorithm for; however as far as I can tell the ODR
> > wrapper doesn't have the machinery to support this usage.
>
> What do you mean by "both multiresponse and multidimensional"?
That the > model is a function `f(x; beta) -> y` such that x and y are each vectors? > Yes, it certainly supports this, and I think the docstrings are pretty > clear about it. What did you read that makes you think otherwise? > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.Model.html > > -- > Robert Kern > > I'll elucidate a little more: I have data with two control variables, theta and E, for angle and energy. Each energy is measured at each angle; this makes the data multidimensional (after checking the ODR reference guide, http://docs.scipy.org/doc/external/odrpack_guide.pdf, this is also referred to as multivariate; sorry if this caused confusion). For each combination of theta and E, I get two linked readings (or responses), alpha and beta, that I want to fit simultaneously. This makes it multi-response, as well as multivariate, a situation described on page 6 of the ODR reference guide. Ideally I need to find the best fit to all angles & both responses simultaneously to reduce correlation between parameters. The odr.Model object has instructions to handle multidimensional input x, and corresponding multidimensional response y, but not what to do if you have both a multidimensional input and multiresponse. My input array x is [m,n] where m is the dimensionality of the input and n is the number of observations. (In my case [3,56]) My response array y is then in fact [2,3,56] as I have two responses for each x. I arranged it this way after inspecting the test_odr.py function, in which a single-dimensional array x is matched with a two-dimensional, or multi-response, return array y in test_multi. Unfortunately attempting to generalise in this way results in an error when the odr module analyses my array shapes. I could not find any way to tell the code that my y array is multiresponse, even having inspected the source code. I hope this explanation makes things clearer! 
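For reference, a small sketch (made-up linear model and data) of the layout scipy.odr does accept: a multivariate x of shape (m, n) together with a multiresponse y of shape (q, n), with the model function returning a (q, n) array rather than the (q, m, n) shape described above:

```python
import numpy as np
from scipy.odr import ODR, Data, Model

n = 40
rng = np.random.default_rng(42)
theta = rng.uniform(0.0, 1.0, n)   # first control variable
E = rng.uniform(1.0, 2.0, n)       # second control variable
x = np.vstack([theta, E])          # multivariate input: shape (m, n) = (2, 40)

def fcn(beta, x):
    th, en = x
    alpha = beta[0] + beta[1] * th + beta[2] * en
    gamma = beta[3] + beta[1] * th * en   # shares beta[1] with alpha
    return np.vstack([alpha, gamma])      # multiresponse: shape (q, n) = (2, 40)

true_beta = [1.0, 2.0, -0.5, 0.3]
y = fcn(true_beta, x)                     # noiseless, so the fit recovers beta
out = ODR(Data(x, y), Model(fcn), beta0=[0.9, 1.9, -0.4, 0.2]).run()
print(np.round(out.beta, 4))              # should be close to true_beta
```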
Thanks, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Sep 29 10:26:54 2015 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Sep 2015 15:26:54 +0100 Subject: [SciPy-User] ODR multiresponse multidimensional In-Reply-To: References: Message-ID: On Tue, Sep 29, 2015 at 3:16 PM, Thomas Howells wrote: > I'll elucidate a little more: I have data with two control variables, theta and E, for angle and energy. Each energy is measured at each angle; this makes the data multidimensional (after checking the ODR reference guide, http://docs.scipy.org/doc/external/odrpack_guide.pdf, this is also referred to as multivariate; sorry if this caused confusion). > > For each combination of theta and E, I get two linked readings (or responses), alpha and beta, that I want to fit simultaneously. This makes it multi-response, as well as multivariate, a situation described on page 6 of the ODR reference guide. > > Ideally I need to find the best fit to all angles & both responses simultaneously to reduce correlation between parameters. The odr.Model object has instructions to handle multidimensional input x, and corresponding multidimensional response y, but not what to do if you have both a multidimensional input and multiresponse. > > My input array x is [m,n] where m is the dimensionality of the input and n is the number of observations. (In my case [3,56]) What's the third one? You mentioned only two: the angle and the energy. -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From newville at cars.uchicago.edu Tue Sep 29 10:50:58 2015 From: newville at cars.uchicago.edu (Matt Newville) Date: Tue, 29 Sep 2015 09:50:58 -0500 Subject: [SciPy-User] ODR multiresponse multidimensional In-Reply-To: References: Message-ID: On Tue, Sep 29, 2015 at 9:16 AM, Thomas Howells wrote: > On Tue, Sep 29, 2015 at 2:00 PM Robert Kern wrote: > >> On Tue, Sep 29, 2015 at 1:43 PM, Thomas Howells >> wrote: >> > >> > Dear Pythonistas, >> > >> > I'm new to the mailing list, my question here is related to scipy's >> Orthogonal Distance Regression (ODR) wrapper module. My apologies if you've >> received this before, I had some trouble with the mail delivery system. >> > >> > From perusal of the documentation for the wrapper and the underlying >> Fortran routines, it seems the Fortran code can handle a dataset that is >> both multiresponse and multidimensional. I have such a dataset that I would >> like to try and use the algorithm for; however as far as I can tell the ODR >> wrapper doesn't have the machinery to support this usage. >> >> What do you mean by "both multiresponse and multidimensional"? That the >> model is a function `f(x; beta) -> y` such that x and y are each vectors? >> Yes, it certainly supports this, and I think the docstrings are pretty >> clear about it. What did you read that makes you think otherwise? >> >> http://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.Model.html >> >> -- >> Robert Kern >> >> > I'll elucidate a little more: I have data with two control variables, > theta and E, for angle and energy. Each energy is measured at each angle; > this makes the data multidimensional (after checking the ODR reference > guide, http://docs.scipy.org/doc/external/odrpack_guide.pdf, this is also > referred to as multivariate; sorry if this caused confusion). > > For each combination of theta and E, I get two linked readings (or > responses), alpha and beta, that I want to fit simultaneously. 
This makes > it multi-response, as well as multivariate, a situation described on page 6 > of the ODR reference guide. > > Ideally I need to find the best fit to all angles & both responses > simultaneously to reduce correlation between parameters. The odr.Model > object has instructions to handle multidimensional input x, and > corresponding multidimensional response y, but not what to do if you have > both a multidimensional input and multiresponse. > ODRPACK is a little strange in its support for multi-dimensional data. The simplest thing to do (and with the added benefit that it will allow you to also use other optimization methods) is to always change the problem to a single dimension. Actually, this is not at all hard, just a slight change in perspective. To be clear, the term multivariate means "more than one variable parameter", not the number or shape of the observations. In fact, for (nearly?) all optimization problems, the algorithms seek a set of values for parameters that make the model most closely match the data. What makes ODRPACK special is its definition of "most closely match", not really that it is multi-dimensional. That's sort of a distraction. The fact that you have two signals (alpha, beta) at each value of (angle, energy) is completely unimportant to the fitting algorithm. It doesn't care what the independent variables are, or even that there *are* independent variables. It has (only) parameters and the result of the objective function. Within your objective function, you can do anything you want. You can concatenate multiple arrays of data, and/or reduce your multi-dimensional arrays of data to one dimension with flatten() or whatever else you need to do.
Of course, if you are modelling data, your *model* might care about the independent variables, and you'll need to make sure the data and model are the same shape and align the observations, so you might have something like [alpha_0, beta_0, alpha_1, beta_1, ....], but (of course) the algorithm doesn't care about that order. Hope that helps, --Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.howells42 at gmail.com Tue Sep 29 11:24:34 2015 From: t.howells42 at gmail.com (Thomas Howells) Date: Tue, 29 Sep 2015 15:24:34 +0000 Subject: [SciPy-User] ODR multiresponse multidimensional In-Reply-To: References: Message-ID: Thanks Matt, I guess I can start by reformatting this into a single dimensional problem. I admit that part of my motivation for trying to treat this in full comes from this quote in the user reference. Here q is the number of responses: "Note that when q > 1, the responses of a multiresponse orthogonal distance regression problem cannot simply be treated as q separate observations as can be done for ordinary least squares when the q responses are uncorrelated. This is because ODRPACK would then treat the variables associated with these q observations as unrelated, and thus not constrain the errors δi in xi to be the same for each of the q occurrences of the ith observation. The user must therefore indicate to ODRPACK when the observations are multiresponse, so that ODRPACK can make the appropriate adjustments to the estimation procedure. (See §2.B.ii, subroutine argument NQ.)" (Page 7, ODR reference manual) I can flatten the input variables though, and see if I can get it to work as a multiresponse problem from that; then it would match the form of the multiresponse test of the bindings while maintaining the association between the two response observations... I think. I should be able to try it tonight or tomorrow. P.S. [3,56] was a mistake, it should of course have been [2,56].
Sorry about that, but it actually doesn't matter much for the posing of the problem whether there are two or three control variables. On Tue, Sep 29, 2015 at 3:51 PM Matt Newville wrote: > On Tue, Sep 29, 2015 at 9:16 AM, Thomas Howells > wrote: > >> On Tue, Sep 29, 2015 at 2:00 PM Robert Kern >> wrote: >> >>> On Tue, Sep 29, 2015 at 1:43 PM, Thomas Howells >>> wrote: >>> > >>> > Dear Pythonistas, >>> > >>> > I'm new to the mailing list, my question here is related to scipy's >>> Orthogonal Distance Regression (ODR) wrapper module. My apologies if you've >>> received this before, I had some trouble with the mail delivery system. >>> > >>> > From perusal of the documentation for the wrapper and the underlying >>> Fortran routines, it seems the Fortran code can handle a dataset that is >>> both multiresponse and multidimensional. I have such a dataset that I would >>> like to try and use the algorithm for; however as far as I can tell the ODR >>> wrapper doesn't have the machinery to support this usage. >>> >>> What do you mean by "both multiresponse and multidimensional"? That the >>> model is a function `f(x; beta) -> y` such that x and y are each vectors? >>> Yes, it certainly supports this, and I think the docstrings are pretty >>> clear about it. What did you read that makes you think otherwise? >>> >>> http://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.Model.html >>> >>> -- >>> Robert Kern >>> >>> >> I'll elucidate a little more: I have data with two control variables, >> theta and E, for angle and energy. Each energy is measured at each angle; >> this makes the data multidimensional (after checking the ODR reference >> guide, http://docs.scipy.org/doc/external/odrpack_guide.pdf, this is >> also referred to as multivariate; sorry if this caused confusion). >> >> For each combination of theta and E, I get two linked readings (or >> responses), alpha and beta, that I want to fit simultaneously. 
This makes >> it multi-response, as well as multivariate, a situation described on page 6 >> of the ODR reference guide. >> >> Ideally I need to find the best fit to all angles & both responses >> simultaneously to reduce correlation between parameters. The odr.Model >> object has instructions to handle multidimensional input x, and >> corresponding multidimensional response y, but not what to do if you have >> both a multidimensional input and multiresponse. >> > > ODRPACK is a little strange in its support for multi-dimensional data. > The simplest thing to do (and with the added benefit that it will allow you > to also use other optimization methods) is to always change the problem to > a single dimension. Actually, this is not at all hard, just a slight > change in perspective. > > To be clear, the term multivariate means "more than one variable > parameter", not the number or shape of the observations. > > In fact, for (nearly?) all optimization problems, the algorithms seek a > set of values for parameters that make the model most closely match the > data. What makes ODRPACK special is its definition of "most closely > match", not really that it is multi-dimensional. That's sort of a > distraction. > > The fact that you have two signals (alpha, beta) at each value of (angle, > energy) is completely unimportant to the fitting algorithm. It doesn't > care what the independent variables are, or even that there *are* > independent variables. It has (only) parameters and the result of the > objective function. > > Within your objective function, you can do anything you want. You can > concatenate multiple arrays of data, and/or reduce your multi-dimensional > arrays of data to one dimension with flatten() or whatever else you need to > do.
Of course, if you are modelling data, your *model* might care about > the independent variables, and you'll need to make sure the data and model > are the same shape and align the observations, so you might have something > like [alpha_0, beta_0, alpha_1, beta_1, ....], but (of course) the > algorithm doesn't care about that order. > > Hope that helps, > > --Matt > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From newville at cars.uchicago.edu Tue Sep 29 11:55:23 2015 From: newville at cars.uchicago.edu (Matt Newville) Date: Tue, 29 Sep 2015 10:55:23 -0500 Subject: [SciPy-User] ODR multiresponse multidimensional In-Reply-To: References: Message-ID: On Tue, Sep 29, 2015 at 10:24 AM, Thomas Howells wrote: > Thanks Matt, > > I guess I can start by reformatting this into a single dimensional > problem. I admit that part of my motivation for trying to treat this in > full comes from this quote in the user reference. Here q is the number of > responses: > "Note that when q > 1, the responses of a multiresponse orthogonal > distance regression problem cannot simply be treated as q separate > observations as can be done for ordinary least squares when the q responses > are uncorrelated. This is because ODRPACK would then treat the variables > associated with these q observations as unrelated, and thus not constrain > the errors δi in xi to be the same for each of the q occurrences of the ith > observation. The user must therefore indicate to ODRPACK when the > observations are multiresponse, so that ODRPACK can make the appropriate > adjustments to the estimation procedure.
(See §2.B.ii, subroutine argument > NQ.)" (Page 7, ODR reference manual) > > Yeah, OK it's true that ODRPACK does actually use the multi-dimensionality of the data, and what I was suggesting would remove such information that ODRPACK can use. But I also guess you might be hitting the limitations of what "orthogonal distance" means. > I can flatten the input variables though, and see if I can get it to work > as a multiresponse problem from that; then it would match the form of the > multiresponse test of the bindings while maintaining the association > between the two response observations... I think. I should be able to try it > tonight or tomorrow. > > P.S. [3,56] was a mistake, it should of course have been [2,56]. Sorry > about that, but it actually doesn't matter much for the posing of the > problem whether there are two or three control variables. > > On Tue, Sep 29, 2015 at 3:51 PM Matt Newville > wrote: > >> On Tue, Sep 29, 2015 at 9:16 AM, Thomas Howells >> wrote: >> >>> On Tue, Sep 29, 2015 at 2:00 PM Robert Kern >>> wrote: >>> >>>> On Tue, Sep 29, 2015 at 1:43 PM, Thomas Howells >>>> wrote: >>>> > >>>> > Dear Pythonistas, >>>> > >>>> > I'm new to the mailing list, my question here is related to scipy's >>>> Orthogonal Distance Regression (ODR) wrapper module. My apologies if you've >>>> received this before, I had some trouble with the mail delivery system. >>>> > >>>> > From perusal of the documentation for the wrapper and the underlying >>>> Fortran routines, it seems the Fortran code can handle a dataset that is >>>> both multiresponse and multidimensional. I have such a dataset that I would >>>> like to try and use the algorithm for; however as far as I can tell the ODR >>>> wrapper doesn't have the machinery to support this usage. >>>> >>>> What do you mean by "both multiresponse and multidimensional"? That the >>>> model is a function `f(x; beta) -> y` such that x and y are each vectors?
>>>> Yes, it certainly supports this, and I think the docstrings are pretty >>>> clear about it. What did you read that makes you think otherwise? >>>> >>>> http://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.Model.html >>>> >>>> -- >>>> Robert Kern >>>> >>>> >>> I'll elucidate a little more: I have data with two control variables, >>> theta and E, for angle and energy. Each energy is measured at each angle; >>> this makes the data multidimensional (after checking the ODR reference >>> guide, http://docs.scipy.org/doc/external/odrpack_guide.pdf, this is >>> also referred to as multivariate; sorry if this caused confusion). >>> >>> For each combination of theta and E, I get two linked readings (or >>> responses), alpha and beta, that I want to fit simultaneously. This makes >>> it multi-response, as well as multivariate, a situation described on page 6 >>> of the ODR reference guide. >>> >>> Ideally I need to find the best fit to all angles & both responses >>> simultaneously to reduce correlation between parameters. The odr.Model >>> object has instructions to handle multidimensional input x, and >>> corresponding multidimensional response y, but not what to do if you have >>> both a multidimensional input and multiresponse. >>> >> >> ODRPACK is a little strange in its support for multi-dimensional data. >> The simplest thing to do (and with the added benefit that it will allow you >> to also use other optimization methods) is to always change the problem to >> a single dimension. Actually, this is not at all hard, just a slight >> change in perspective. >> >> To be clear, the term multivariate means "more than one variable >> parameter", not the number or shape of the observations. >> >> In fact, for (nearly?) all optimization problems, the algorithms seek a >> set of values for parameters that make the model most closely match the >> data. What makes ODRPACK special is its definition of "most closely >> match", not really that it is multi-dimensional.
That's sort of a >> distraction. >> >> The fact that you have two signals (alpha, beta) at each value of (angle, >> energy) is completely unimportant to the fitting algorithm. It doesn't >> care what the independent variables are, or even that there *are* >> independent variables. It has (only) parameters and the result of the >> objective function. >> >> Within your objective function, you can do anything you want. You can >> concatenate multiple arrays of data, and/or reduce your multi-dimensional >> arrays of data to one dimension with flatten() or whatever else you need to >> do. Of course, if you are modelling data, your *model* might care about >> the independent variables, and you'll need to make sure the data and model >> are the same shape and align the observations, so you might have something >> like [alpha_0, beta_0, alpha_1, beta_1, ....], but (of course) the >> algorithm doesn't care about that order. >> >> Hope that helps, >> >> --Matt >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> https://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > > -- --Matt Newville 630-252-0431 -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Tue Sep 29 12:17:41 2015 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 29 Sep 2015 12:17:41 -0400 Subject: [SciPy-User] ODR multiresponse multidimensional In-Reply-To: References: Message-ID: On Tue, Sep 29, 2015 at 10:16 AM, Thomas Howells wrote: > On Tue, Sep 29, 2015 at 2:00 PM Robert Kern wrote: > >> On Tue, Sep 29, 2015 at 1:43 PM, Thomas Howells >> wrote: >> > >> > Dear Pythonistas, >> > >> > I'm new to the mailing list, my question here is related to scipy's >> Orthogonal Distance Regression (ODR) wrapper module. 
My apologies if you've >> received this before, I had some trouble with the mail delivery system. >> > >> > From perusal of the documentation for the wrapper and the underlying >> Fortran routines, it seems the Fortran code can handle a dataset that is >> both multiresponse and multidimensional. I have such a dataset that I would >> like to try and use the algorithm for; however as far as I can tell the ODR >> wrapper doesn't have the machinery to support this usage. >> >> What do you mean by "both multiresponse and multidimensional"? That the >> model is a function `f(x; beta) -> y` such that x and y are each vectors? >> Yes, it certainly supports this, and I think the docstrings are pretty >> clear about it. What did you read that makes you think otherwise? >> >> http://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.Model.html >> >> -- >> Robert Kern >> >> > I'll elucidate a little more: I have data with two control variables, > theta and E, for angle and energy. Each energy is measured at each angle; > this makes the data multidimensional (after checking the ODR reference > guide, http://docs.scipy.org/doc/external/odrpack_guide.pdf, this is also > referred to as multivariate; sorry if this caused confusion). > > For each combination of theta and E, I get two linked readings (or > responses), alpha and beta, that I want to fit simultaneously. This makes > it multi-response, as well as multivariate, a situation described on page 6 > of the ODR reference guide. > > Ideally I need to find the best fit to all angles & both responses > simultaneously to reduce correlation between parameters. The odr.Model > object has instructions to handle multidimensional input x, and > corresponding multidimensional response y, but not what to do if you have > both a multidimensional input and multiresponse. > > My input array x is [m,n] where m is the dimensionality of the input and n > is the number of observations. 
(In my case [3,56]) > My response array y is then in fact [2,3,56] as I have two responses for > each x. I arranged it this way after inspecting the test_odr.py function, > in which a single-dimensional array x is matched with a two-dimensional, or > multi-response, return array y in test_multi. > I would expect multivariate y to mean 2-dimensional, not 3-dimensional. I don't see how covariance matrices would work with a 3-D response, it might be possible but I have never seen it. My guess would be that it needs a reshape to [6, 56], but I don't really understand the problem nor odr. Josef > > Unfortunately attempting to generalise in this way results in an error > when the odr module analyses my array shapes. I could not find any way to > tell the code that my y array is multiresponse, even having inspected the > source code. I hope this explanation makes things clearer! > > Thanks, Tom > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lev.konst at gmail.com Wed Sep 30 02:52:01 2015 From: lev.konst at gmail.com (Lev Konstantinovskiy) Date: Wed, 30 Sep 2015 06:52:01 +0000 (UTC) Subject: [SciPy-User] Alternative for scipy.sparse.sparsetools for use from outside of scipy Message-ID: Hi, Getting deprecation warning for sparsetools. Is there an alternative to switch to? The use is sparsetools.csc_matvecs in gensim https://github.com/piskvorky/gensim/blob/9a1c2c954e2f72213023fc01f0e33306001e303f/gensim/models/lsimodel.py warning on import: gensim/home/ubuntu/.vew/ds26/lib/python2.6/site-packages/numpy/lib/utils.py:95: DeprecationWarning: `scipy.sparse.sparsetools` is deprecated! scipy.sparse.sparsetools is a private module for scipy.sparse, and should not be used.
warnings.warn(depdoc, DeprecationWarning) Thanks From cimrman3 at ntc.zcu.cz Wed Sep 30 03:32:26 2015 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 30 Sep 2015 09:32:26 +0200 Subject: [SciPy-User] Fwd: ANN: SfePy 2015.3 In-Reply-To: <560288BA.1040305@ntc.zcu.cz> References: <560288BA.1040305@ntc.zcu.cz> Message-ID: <560B900A.9080905@ntc.zcu.cz> FYI: resending due to mailing list problems last week, apologies if you already got this. -------- Forwarded Message -------- I am pleased to announce release 2015.3 of SfePy. Description ----------- SfePy (simple finite elements in Python) is a software for solving systems of coupled partial differential equations by the finite element method or by the isogeometric analysis (preliminary support). It is distributed under the new BSD license. Home page: http://sfepy.org Mailing list: http://groups.google.com/group/sfepy-devel Git (source) repository, issue tracker, wiki: http://github.com/sfepy Highlights of this release -------------------------- - preliminary support for parallel computing - unified evaluation of basis functions (= isogeometric analysis fields can be evaluated in arbitrary points) - (mostly) fixed finding of reference element coordinates of physical points - several new or improved examples For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical). Best regards, Robert Cimrman on behalf of the SfePy development team --- Contributors to this release in alphabetical order: Robert Cimrman Vladimir Lukes From newville at cars.uchicago.edu Wed Sep 30 23:16:42 2015 From: newville at cars.uchicago.edu (Matt Newville) Date: Wed, 30 Sep 2015 22:16:42 -0500 Subject: [SciPy-User] ANN: lmfit 0.9.0 Message-ID: Hi, Lmfit Version 0.9.0 has been released. Lmfit is a python package for non-linear least-squares fitting, data modeling, and curve-fitting problems. 
It provides high-level functionality on top of the basic optimization routines from scipy.optimize by providing Parameter objects, an easy-to-use Model class for modeling data, and methods for better exploring uncertainties and confidence levels for Parameter values. It is distributed with an MIT license, and is available from PyPI or http://lmfit.github.io/lmfit-py/ The release is comprised of more than 25 pull requests over the past 8 months from 10 contributors: yoavram, andyfaff, tespilla, tritemio, Tillsten, MerlinSmiles, rawlik, stonebig, danielballen, and newville. There are several enhancements and bug fixes. The most significant change is that the Minimizer.minimize() method to perform an optimization now returns a MinimizerResult object (not unlike scipy.optimize.OptimizeResult) which contains the Parameters altered by the fit as well as fit statistics and run information. This change means that programs using the Minimizer() object or minimize() function from earlier versions of lmfit need to be changed to see the Parameters updated in the fit. The change is easy to make, but must be done. See http://lmfit.github.io/lmfit-py/whatsnew.html#whatsnew-090-label for more details. Cheers, --Matt Newville -------------- next part -------------- An HTML attachment was scrubbed... URL: