From thomas.pohl at gmail.com Wed Jul 1 07:16:16 2015
From: thomas.pohl at gmail.com (Tom Pohl)
Date: Wed, 1 Jul 2015 13:16:16 +0200
Subject: [SciPy-User] [Ann] Early-bird registration extended for EuroSciPy 2015
Message-ID:

Head over to https://www.euroscipy.org/2015/ for more information.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From almar.klein at gmail.com Fri Jul 3 09:06:48 2015
From: almar.klein at gmail.com (Almar Klein)
Date: Fri, 03 Jul 2015 15:06:48 +0200
Subject: [SciPy-User] ANN: imageio v1.3
Message-ID: <559688E8.2000804@gmail.com>

Imageio is a Python library that provides an easy interface to read and write a wide range of image data, including animated images, video, volumetric data, and scientific formats. It is cross-platform, runs on Python 2.x and 3.x, and is easy to install.

Many packages that need to read/write image data roll some sort of internal image library. This sucks, because other projects tend to do the same, since they don't want to add a dependency that brings so much extra stuff and/or is a pain to install. Imageio fulfills the role of a library that reads and writes image data. Period.

More information: http://imageio.github.io/
Release notes: http://imageio.readthedocs.org/en/latest/releasenotes.html

Regards,
Almar

From jbednar at inf.ed.ac.uk Fri Jul 3 16:10:26 2015
From: jbednar at inf.ed.ac.uk (James A. Bednar)
Date: Fri, 3 Jul 2015 21:10:26 +0100
Subject: [SciPy-User] ANN: HoloViews 1.3 released
Message-ID: <21910.60466.945047.219063@hebb.inf.ed.ac.uk>

We are pleased to announce the fourth public release of HoloViews, a Python package for simplifying the exploration of scientific data:

http://holoviews.org

HoloViews provides composable, sliceable, declarative data structures for building even complex visualizations easily. The goal of HoloViews is to let your data just visualize itself, allowing you to work with large datasets as easily as you work with simple datatypes at the Python prompt.
You can obtain the new version using conda or pip: conda install holoviews pip install --upgrade 'holoviews[recommended]' This release includes a substantial number of new features and API improvements, most of which have been suggested by our growing userbase: - Major optimizations throughout, both for working with HoloViews data structures and for visualization. - Improved widget appearance and greatly reduced flickering issues when interactively exploring data in the browser. - Improved handling of unicode and LaTeX text throughout, using Python 3's better unicode support (when available). - New Polygons, ErrorBars, and Spread Element types. - Support for multiple matplotlib backends (vanilla matplotlib, mpld3 and nbagg) with support for other plotting systems (such as Bokeh) in development. Easily switching between backends allows you to take advantage of the unique features of each one, such as good SVG/PDF output, interactive zooming and panning, or 3D viewpoint control. - Streamlined the API based on user feedback; now even more things "just work". This includes new, easy to use constructors for common Element types as well as easy conversion between them. - More customizability of plot and style options, including easier control over font sizes, legend positions, background color, and multiple color bars. Polar projections now supported throughout. - More flexible and customizable Layouts, allowing the user to define blank spaces (using the Empty object) as well as more control over positioning and aspect ratios. - Support for a holoviews.rc file, integration with IPython Notebook interact widgets, improvements to the Pandas interface, easy saving and loading of data via pickling, and much more. And of course we have fixed a number of bugs found by our very dedicated users; please keep filing Github issues if you find any! For the full list of changes, see: https://github.com/ioam/holoviews/releases HoloViews remains freely available under a BSD license, is Python 2 and 3 compatible, and has minimal external dependencies, making it easy to integrate into your workflow. Try out the extensive tutorials at holoviews.org today, and check out our upcoming SciPy and EuroSciPy talks in Austin and Cambridge (or read the paper at http://goo.gl/NH9FTB)! Philipp Rudiger Jean-Luc R. Stevens James A. Bednar The University of Edinburgh School of Informatics -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From bhmerchant at gmail.com Sat Jul 4 07:55:27 2015 From: bhmerchant at gmail.com (Brian Merchant) Date: Sat, 4 Jul 2015 04:55:27 -0700 Subject: [SciPy-User] odeint: evolving ODE system with non-penetration constraints causes issues? Message-ID: Hi all, I am modelling a physical system using a system of ODEs, and am imposing a non-penetration constraint between distinct objects. In Python, I implement the non-penetration rules in the "right hand side function", simply denoted `f` in the `odeint` documentation. The integration problem becomes stiff, because there can be sharp change in the forces applied based on the activation/de-activation of non-penetration penalties. There are two issues I would like feedback on: * when I supply `ixpr = True` as an argument to `odeint`, I don't get any messages printing out that `odeint` has changed methods once the problem is stiff -- how can I be sure that `odeint` is switching to a stiff method? 
* I often get "excess work done" messages if I don't manually specify a high `mxstep` number -- if I do specify a high number, my program tends to crash due to running out of memory. How can I prevent this? Kind regards, Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Jul 4 17:40:07 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 4 Jul 2015 23:40:07 +0200 Subject: [SciPy-User] ANN: Scipy 0.16.0 release candidate 1 Message-ID: Hi, I'm pleased to announce the availability of the first release candidate of Scipy 0.16.0. Please try it out and report any issues on the Github issue tracker or on the scipy-dev mailing list. This first RC is a source-only release. Sources and release notes can be found at https://github.com/scipy/scipy/releases/tag/v0.16.0rc1. Note that this is a bit of an experiment - it's the first time we use the Github Releases feature. Feedback welcome. Thanks to everyone who contributed to this release! Ralf ========================== SciPy 0.16.0 Release Notes ========================== .. note:: Scipy 0.16.0 is not released yet! .. contents:: SciPy 0.16.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.15.x branch, and on adding new features on the master branch. This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.6.2 or greater. Highlights of this release include: - A Cython API for BLAS/LAPACK in `scipy.linalg` - A new benchmark suite. It's now straightforward to add new benchmarks, and they're routinely included with performance enhancement PRs. - Support for the second order sections (SOS) format in `scipy.signal`. New features ============ Benchmark suite --------------- The benchmark suite has switched to using `Airspeed Velocity `__ for benchmarking. You can run the suite locally via ``python runtests.py --bench``. For more details, see ``benchmarks/README.rst``. `scipy.linalg` improvements --------------------------- A full set of Cython wrappers for BLAS and LAPACK has been added in the modules `scipy.linalg.cython_blas` and `scipy.linalg.cython_lapack`. In Cython, these wrappers can now be cimported from their corresponding modules and used without linking directly against BLAS or LAPACK. The functions `scipy.linalg.qr_delete`, `scipy.linalg.qr_insert` and `scipy.linalg.qr_update` for updating QR decompositions were added. The function `scipy.linalg.solve_circulant` solves a linear system with a circulant coefficient matrix. The function `scipy.linalg.invpascal` computes the inverse of a Pascal matrix. The function `scipy.linalg.solve_toeplitz`, a Levinson-Durbin Toeplitz solver, was added. Added wrapper for potentially useful LAPACK function ``*lasd4``. It computes the square root of the i-th updated eigenvalue of a positive symmetric rank-one modification to a positive diagonal matrix. See its LAPACK documentation and unit tests for it to get more info. Added two extra wrappers for LAPACK least-square solvers. Namely, they are ``*gelsd`` and ``*gelsy``. Wrappers for the LAPACK ``*lange`` functions, which calculate various matrix norms, were added. 
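For example, a small sketch of the new circulant solver mentioned above (the input vectors are made up for illustration)::

    import numpy as np
    from scipy import linalg

    c = np.array([3.0, -1.0, 0.0, -1.0])   # first column of the circulant matrix
    b = np.array([1.0, 2.0, 3.0, 4.0])     # right-hand side

    x = linalg.solve_circulant(c, b)               # exploits the circulant structure
    x_ref = linalg.solve(linalg.circulant(c), b)   # dense reference solution
    np.testing.assert_allclose(x, x_ref)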
Wrappers for ``*gtsv`` and ``*ptsv``, which solve ``A*X = B`` for tri-diagonal matrix ``A``, were added. `scipy.signal` improvements --------------------------- Support for second order sections (SOS) as a format for IIR filters was added. The new functions are: * `scipy.signal.sosfilt` * `scipy.signal.sosfilt_zi`, * `scipy.signal.sos2tf` * `scipy.signal.sos2zpk` * `scipy.signal.tf2sos` * `scipy.signal.zpk2sos`. Additionally, the filter design functions `iirdesign`, `iirfilter`, `butter`, `cheby1`, `cheby2`, `ellip`, and `bessel` can return the filter in the SOS format. The function `scipy.signal.place_poles`, which provides two methods to place poles for linear systems, was added. The option to use Gustafsson's method for choosing the initial conditions of the forward and backward passes was added to `scipy.signal.filtfilt`. New classes ``TransferFunction``, ``StateSpace`` and ``ZerosPolesGain`` were added. These classes are now returned when instantiating `scipy.signal.lti`. Conversion between those classes can be done explicitly now. An exponential (Poisson) window was added as `scipy.signal.exponential`, and a Tukey window was added as `scipy.signal.tukey`. The function for computing digital filter group delay was added as `scipy.signal.group_delay`. The functionality for spectral analysis and spectral density estimation has been significantly improved: `scipy.signal.welch` became ~8x faster and the functions `scipy.signal.spectrogram`, `scipy.signal.coherence` and `scipy.signal.csd` (cross-spectral density) were added. `scipy.signal.lsim` was rewritten - all known issues are fixed, so this function can now be used instead of ``lsim2``; ``lsim`` is orders of magnitude faster than ``lsim2`` in most cases. `scipy.sparse` improvements --------------------------- The function `scipy.sparse.norm`, which computes sparse matrix norms, was added. The function `scipy.sparse.random`, which allows to draw random variates from an arbitrary distribution, was added. `scipy.spatial` improvements ---------------------------- `scipy.spatial.cKDTree` has seen a major rewrite, which improved the performance of the ``query`` method significantly, added support for parallel queries, pickling, and options that affect the tree layout. See pull request 4374 for more details. The function `scipy.spatial.procrustes` for Procrustes analysis (statistical shape analysis) was added. `scipy.stats` improvements -------------------------- The Wishart distribution and its inverse have been added, as `scipy.stats.wishart` and `scipy.stats.invwishart`. The Exponentially Modified Normal distribution has been added as `scipy.stats.exponnorm`. The Generalized Normal distribution has been added as `scipy.stats.gennorm`. All distributions now contain a ``random_state`` property and allow specifying a specific ``numpy.random.RandomState`` random number generator when generating random variates. Many statistical tests and other `scipy.stats` functions that have multiple return values now return ``namedtuples``. See pull request 4709 for details. `scipy.optimize` improvements ----------------------------- A new derivative-free method DF-SANE has been added to the nonlinear equation system solving function `scipy.optimize.root`. Deprecated features =================== ``scipy.stats.pdf_fromgamma`` is deprecated. This function was undocumented, untested and rarely used. Statsmodels provides equivalent functionality with ``statsmodels.distributions.ExpandedNormal``. ``scipy.stats.fastsort`` is deprecated. 
This function is unnecessary, ``numpy.argsort`` can be used instead.

``scipy.stats.signaltonoise`` and ``scipy.stats.mstats.signaltonoise`` are deprecated. These functions did not belong in ``scipy.stats`` and are rarely used. See issue #609 for details.

``scipy.stats.histogram2`` is deprecated. This function is unnecessary, ``numpy.histogram2d`` can be used instead.

Backwards incompatible changes
==============================

The deprecated global optimizer ``scipy.optimize.anneal`` was removed.

The following deprecated modules have been removed: ``scipy.lib.blas``, ``scipy.lib.lapack``, ``scipy.linalg.cblas``, ``scipy.linalg.fblas``, ``scipy.linalg.clapack``, ``scipy.linalg.flapack``. They had been deprecated since Scipy 0.12.0, the functionality should be accessed as `scipy.linalg.blas` and `scipy.linalg.lapack`.

The deprecated function ``scipy.special.all_mat`` has been removed.

The deprecated functions ``fprob``, ``ksprob``, ``zprob``, ``randwcdf`` and ``randwppf`` have been removed from `scipy.stats`.

Other changes
=============

The version numbering for development builds has been updated to comply with PEP 440.

Building with ``python setup.py develop`` is now supported.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gb.gabrielebrambilla at gmail.com Mon Jul 13 17:10:11 2015
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Mon, 13 Jul 2015 17:10:11 -0400
Subject: [SciPy-User] using numpy.fromfile without knowing the number of elements in the file
Message-ID:

Hi,

I'm trying to read an unformatted file written in Fortran.
Fortran has the problem that for each write() statement it inserts 4bytes of "trash" at the beginning and at the end of the file.

I tried to use numpy.fromfile in this way:

import numpy as np
from math import *

f = open('../De0/SN01.dat', 'rb')

for iw in range(5):
    a = np.fromfile(f , dtype = np.int8 , count = 4 )
    x = np.fromfile(f , dtype = np.float64 , count = 21)
    a = np.fromfile(f , dtype = np.int8 , count = 4 )
    print(iw, x)

It works, but here I'm bounded to set the total number of iteration in the FOR cycle. How can I repeat this cycle until the eof()?

Thanks

Gabriele
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.barker at noaa.gov Mon Jul 13 17:48:17 2015
From: chris.barker at noaa.gov (Chris Barker)
Date: Mon, 13 Jul 2015 14:48:17 -0700
Subject: [SciPy-User] using numpy.fromfile without knowing the number of elements in the file
In-Reply-To: References: Message-ID:

On Mon, Jul 13, 2015 at 2:10 PM, Gabriele Brambilla <gb.gabrielebrambilla at gmail.com> wrote:

> I'm trying to read an unformatted file written in Fortran.
> Fortran has the problem that for each write() statement it inserts 4bytes
> of "trash" at the beginning and at the end of the file.

yes, isn't that fun!

I can't recall -- do you only get the extra 4 bytes at the beginning and end of the entire file? That shouldn't be too hard to deal with. On the other hand, if there are 4-byte gaps scattered through, then you're a bit stuck.

I recall I've given up on fromfile, and used the stdlib struct module to deal with this in the past.

But:

> I tried to use numpy.fromfile in this way:
>
> import numpy as np
> from math import *

why do you need math when you have numpy? -- but nothing to do with the problem at hand...

> f = open('../De0/SN01.dat', 'rb')
>
> for iw in range(5):
>     a = np.fromfile(f , dtype = np.int8 , count = 4 )
>     x = np.fromfile(f , dtype = np.float64 , count = 21)
>     a = np.fromfile(f , dtype = np.int8 , count = 4 )
>     print(iw, x)
>
> It works, but here I'm bounded to set the total number of iteration in the
> FOR cycle. How can I repeat this cycle until the eof()?

OK, so that's extra bytes around each record -- kind of ugly, but this works, yes? What happens if you keep running this loop until you get an Exception at the end of the file? That should work.

But another option is to read the whole thing in as a single byte type with numpy, then slice it up in memory. Something like:

f = open('../De0/SN01.dat', 'rb')
all_data = np.fromfile(f, dtype=np.uint8)
# now you have the whole thing, and can slice it up:
all_data.shape = (-1, 4+21*8+4)  # now each "row" is a record
# get the "real" data:
data = all_data[:, 4:-4].copy()
# reinterpret the raw bytes as float64 (21 values per record):
data = data.view(np.float64)

totally untested, of course.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Mon Jul 13 17:53:28 2015
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 14 Jul 2015 00:53:28 +0300
Subject: [SciPy-User] using numpy.fromfile without knowing the number of elements in the file
In-Reply-To: References: Message-ID:

14.07.2015, 00:10, Gabriele Brambilla kirjoitti:
> I'm trying to read an unformatted file written in Fortran.
> Fortran has the problem that for each write() statement it inserts 4bytes
> of "trash" at the beginning and at the end of the file.

You can try this:
http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.io.FortranFile.html

From ndbecker2 at gmail.com Tue Jul 14 14:39:16 2015
From: ndbecker2 at gmail.com (Neal Becker)
Date: Tue, 14 Jul 2015 14:39:16 -0400
Subject: [SciPy-User] what do scale, loc parameters actually mean?
Message-ID:

I'm using gamma.fit, and I get returned params:

In [24]: params
Out[24]: (1.1803836075007037, 276.9598220688149, 315.11452134103547)

These parameters are not clearly documented; what do they (mathematically) mean?
From robert.kern at gmail.com Tue Jul 14 14:47:51 2015 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 14 Jul 2015 19:47:51 +0100 Subject: [SciPy-User] what do scale, loc parameters actually mean? In-Reply-To: References: Message-ID: On Tue, Jul 14, 2015 at 7:39 PM, Neal Becker wrote: > > I'm using gamma.fit, and I get returned params: > In [24]: params > Out[24]: (1.1803836075007037, 276.9598220688149, 315.11452134103547) > > These parameters are not clearly documented, what do they (mathematically) > mean? http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#shifting-and-scaling -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From evgeny.burovskiy at gmail.com Tue Jul 14 15:02:07 2015 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Tue, 14 Jul 2015 20:02:07 +0100 Subject: [SciPy-User] what do scale, loc parameters actually mean? In-Reply-To: References: Message-ID: On Jul 14, 2015 10:00 PM, "Robert Kern" wrote: > > On Tue, Jul 14, 2015 at 7:39 PM, Neal Becker wrote: > > > > I'm using gamma.fit, and I get returned params: > > In [24]: params > > Out[24]: (1.1803836075007037, 276.9598220688149, 315.11452134103547) > > > > These parameters are not clearly documented, what do they (mathematically) > > mean? > > http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#shifting-and-scaling > > -- > Robert Kern > Or the Examples section of http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.stats.rv_continuous.html Evgeni -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmhobson at gmail.com Tue Jul 14 19:20:25 2015 From: pmhobson at gmail.com (Paul Hobson) Date: Tue, 14 Jul 2015 16:20:25 -0700 Subject: [SciPy-User] what do scale, loc parameters actually mean? In-Reply-To: References: Message-ID: On Tue, Jul 14, 2015 at 12:02 PM, Evgeni Burovski < evgeny.burovskiy at gmail.com> wrote: > > On Jul 14, 2015 10:00 PM, "Robert Kern" wrote: > > > > On Tue, Jul 14, 2015 at 7:39 PM, Neal Becker > wrote: > > > > > > I'm using gamma.fit, and I get returned params: > > > In [24]: params > > > Out[24]: (1.1803836075007037, 276.9598220688149, 315.11452134103547) > > > > > > These parameters are not clearly documented, what do they > (mathematically) > > > mean? > > > > > http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html#shifting-and-scaling > > > > -- > > Robert Kern > > > > Or the Examples section of > http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.stats.rv_continuous.html > > Evgeni > I made a guide specifically for lognormal distributions: http://nbviewer.ipython.org/gist/phobson/82f223f4aae24787bc18 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla.molden at gmail.com Tue Jul 14 22:00:22 2015 From: sturla.molden at gmail.com (Sturla Molden) Date: Wed, 15 Jul 2015 02:00:22 +0000 (UTC) Subject: [SciPy-User] using numpy.fromfile without knowing the number of elements in the file References: Message-ID: <1565451841458617898.086520sturla.molden-gmail.com@news.gmane.org> Gabriele Brambilla wrote: > I'm trying to read an unformatted file written in Fortran. > Fortran has the problem that for each write() statement it inserts 4bytes > of "trash" at the beginning and at the end of the file. Fortran does not specify a binary format for unformatted files. The best approach IMHO is to read them with Fortran, and use f2py or Cython to call this Fortran code. 
But be sure to use the same Fortran compiler as the one used to compile the code that wrote the files, because otherwise this might fail too.

An even better solution is to avoid Fortran unformatted files! You can use APIs like POSIX from Fortran as well, or you can write a tiny piece of C to handle the i/o.

Sturla

From ndbecker2 at gmail.com Wed Jul 15 09:34:45 2015
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 15 Jul 2015 09:34:45 -0400
Subject: [SciPy-User] fit to discrete distr?
Message-ID:

http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html
says 'no fit' method is available on discrete distr, while:

http://docs.scipy.org/doc/scipy/reference/tutorial/stats/discrete.html
has a section on 'fitting data'.

Can I use scipy to fit my data to a discrete distr (nbinom, in this case)?

From robert.kern at gmail.com Wed Jul 15 09:43:46 2015
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 15 Jul 2015 14:43:46 +0100
Subject: [SciPy-User] fit to discrete distr?
In-Reply-To: References: Message-ID:

Construct a function that computes the negative log-likelihood:

def f(params, dist=dist, data=data):
    return -dist.logpmf(data, *params).sum()

Use optimize.minimize() on that with appropriate configuration.

On Wed, Jul 15, 2015 at 2:34 PM, Neal Becker wrote:
> http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html
> says 'no fit' method is available on discrete distr, while:
>
> http://docs.scipy.org/doc/scipy/reference/tutorial/stats/discrete.html
> has a section on 'fitting data'.
>
> Can I use scipy to fit my data to a discrete distr (nbinom, in this case)?
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user

--
Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gb.gabrielebrambilla at gmail.com Wed Jul 15 14:13:20 2015
From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla)
Date: Wed, 15 Jul 2015 14:13:20 -0400
Subject: [SciPy-User] numpy.where() issue
Message-ID:

I have problems with numpy.where.

Why, if I write

deathnote0 = np.where(Yy > 0.02 or Yy < -0.02)

do I get:

Traceback (most recent call last):
  File "readingK.py", line 63, in
    deathnote0 = np.where(Yy > 0.02 or Yy < -0.02)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

while if I do

deathnote0 = np.where(Yy > 0.02)
deathnote1 = np.where(Yy < -0.02)
deathnote = deathnote0 + deathnote1

it works?

thanks

Gabriele
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sebix at sebix.at Wed Jul 15 14:22:25 2015
From: sebix at sebix.at (Sebastian)
Date: Wed, 15 Jul 2015 20:22:25 +0200
Subject: [SciPy-User] numpy.where() issue
In-Reply-To: References: Message-ID: <55A6A4E1.4090403@sebix.at>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi,

What is this doing? np.where(Yy > 0.02 or Yy < -0.02)

Let's split it up:

(Yy > 0.02) or (Yy < -0.02)

The two comparisons give boolean numpy arrays, say a and b. The Python or-operator works on single boolean values, so `a or b` effectively does bool(a) or bool(b). And what is "the truth value of an array with more than one element"? It "is ambiguous". Voila.

You can instead combine the two boolean masks element-wise and pass the result to np.where:

np.where(np.logical_or(Yy > 0.02, Yy < -0.02))

or, shorter,

np.where((Yy > 0.02) | (Yy < -0.02))
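For instance, a minimal runnable sketch (the array values here are made up for illustration):

import numpy as np

Yy = np.array([0.05, -0.01, -0.3, 0.0, 0.021])   # made-up example data
mask = (Yy > 0.02) | (Yy < -0.02)                # element-wise OR of two boolean arrays
indices = np.where(mask)                         # same as np.nonzero(mask)
print(indices)                                   # (array([0, 2, 4]),)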
Sebastian On 07/15/2015 08:13 PM, Gabriele Brambilla wrote: > I have problems with numpy.where > > why if I write > deathnote0 = np.where(Yy > 0.02 or Yy < -0.02) > I get: > > Traceback (most recent call last): > > File "readingK.py", line 63, in > > deathnote0 = np.where(Yy > 0.02 or Yy < -0.02) > > ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() > > while if I do > > deathnote0 = np.where(Yy > 0.02) > > deathnote1 = np.where(Yy < -0.02) > > deathnote = deathnote0 + deathnote1 > > it works? > > > thanks > > > Gabriele > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- > python programming - mail server - photo - video - https://sebix.at > To verify my cryptographic signature or send me encrypted mails, get my > key at https://sebix.at/DC9B463B.asc and on public keyservers. -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (GNU/Linux) iQIcBAEBCAAGBQJVpqThAAoJEBn0X+vcm0Y7x00P/ioF2JTIXCrwW0IGHWqbEIah i+LAoBFle0pJItuSY3VCOA9Ut57YG8BKqs37hA/9dK3yQqGGwPGLx9ETGk6V8du8 qeoY4vCx7yYvoLFyysDNJZtyztpW2hCXjhy9ZhrlG75MzO/8W2VJ1k5b/moyUXfn sIcscCFI36Jt3lo4kySRDk7NqlzEg2opkXp1o053dIrSLQbQWLaihc2gio2MgtvB aYIDwPZ0fGaWNB/5o+nYkGOiT+fYez6pIqNbk7fUlQ4/i6U7z8/BddZbig5tlGw6 taho+44xcML8cndMWYiL4NlYTp4GOy0x4YcAeG6osyuJPc36flaRhxEACTV7JjO/ raHckH93l4hN/8l+m6hBh9vqkSCLtMIeJ3pXUKCqptWv8mRC3lB7u+pWNVU3iysI iRuv5DoCIQ15IgNDK42n7nsuM5JLEchrTBQn5twG2A5gjM7Q/IZvOhMlqycBN8Is xr4vZBwwgV+0tsxGdKnkeateIxo/lhtZjx4yjHgpMiDTxIB5gFVfDs1eowj1Glcm AE085May/sJHhAY5rGEBFO+yjtApKHKoKWSLFAzV7nO14kXuPxF73OP8rJdb+FAK czhqbShX74tICugQTsfwg0+032gyx8eMj4mjnIjBGUMIPTYBuAzh+SFNZBHQNguO 15Rahq3Uf0zFCofVMYZO =3gGp -----END PGP SIGNATURE----- From guziy.sasha at gmail.com Wed Jul 15 14:27:19 2015 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Wed, 15 Jul 2015 14:27:19 -0400 Subject: [SciPy-User] numpy.where() issue In-Reply-To: <55A6A4E1.4090403@sebix.at> References: <55A6A4E1.4090403@sebix.at> Message-ID: Maybe this way would be clearer (remember parentheses in the condition are crucial): """ indices = np.where((Yy > 0.02) | (Yy < -0.02)) """ Cheers 2015-07-15 14:22 GMT-04:00 Sebastian : > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hi, > > What is this doing? np.where(Yy > 0.02 or Yy < -0.02) > Let's split it up > (Yy > 0.02) or (Yy < -0.02) > The both < > comparisons give boolean numpy arrays, say a and b. > a or b > The python or-Operator works on boolean values, so it does > bool(a) or bool(b) > What is "The truth value of an array with more than one element"? It "is > ambiguous". > Voila > > You can also use > np.logical_or(np.where(...), np.where(...)) > or shorter > np.where(...) | np.where(...) > > Sebastian > > On 07/15/2015 08:13 PM, Gabriele Brambilla wrote: > > I have problems with numpy.where > > > > why if I write > > deathnote0 = np.where(Yy > 0.02 or Yy < -0.02) > > I get: > > > > Traceback (most recent call last): > > > > File "readingK.py", line 63, in > > > > deathnote0 = np.where(Yy > 0.02 or Yy < -0.02) > > > > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > > > while if I do > > > > deathnote0 = np.where(Yy > 0.02) > > > > deathnote1 = np.where(Yy < -0.02) > > > > deathnote = deathnote0 + deathnote1 > > > > it works? 
> > > > > > thanks > > > > > > Gabriele > > > > > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > -- > > python programming - mail server - photo - video - https://sebix.at > > To verify my cryptographic signature or send me encrypted mails, get my > > key at https://sebix.at/DC9B463B.asc and on public keyservers. > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2.0.22 (GNU/Linux) > > iQIcBAEBCAAGBQJVpqThAAoJEBn0X+vcm0Y7x00P/ioF2JTIXCrwW0IGHWqbEIah > i+LAoBFle0pJItuSY3VCOA9Ut57YG8BKqs37hA/9dK3yQqGGwPGLx9ETGk6V8du8 > qeoY4vCx7yYvoLFyysDNJZtyztpW2hCXjhy9ZhrlG75MzO/8W2VJ1k5b/moyUXfn > sIcscCFI36Jt3lo4kySRDk7NqlzEg2opkXp1o053dIrSLQbQWLaihc2gio2MgtvB > aYIDwPZ0fGaWNB/5o+nYkGOiT+fYez6pIqNbk7fUlQ4/i6U7z8/BddZbig5tlGw6 > taho+44xcML8cndMWYiL4NlYTp4GOy0x4YcAeG6osyuJPc36flaRhxEACTV7JjO/ > raHckH93l4hN/8l+m6hBh9vqkSCLtMIeJ3pXUKCqptWv8mRC3lB7u+pWNVU3iysI > iRuv5DoCIQ15IgNDK42n7nsuM5JLEchrTBQn5twG2A5gjM7Q/IZvOhMlqycBN8Is > xr4vZBwwgV+0tsxGdKnkeateIxo/lhtZjx4yjHgpMiDTxIB5gFVfDs1eowj1Glcm > AE085May/sJHhAY5rGEBFO+yjtApKHKoKWSLFAzV7nO14kXuPxF73OP8rJdb+FAK > czhqbShX74tICugQTsfwg0+032gyx8eMj4mjnIjBGUMIPTYBuAzh+SFNZBHQNguO > 15Rahq3Uf0zFCofVMYZO > =3gGp > -----END PGP SIGNATURE----- > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Sasha -------------- next part -------------- An HTML attachment was scrubbed... URL: From guziy.sasha at gmail.com Wed Jul 15 14:29:34 2015 From: guziy.sasha at gmail.com (Oleksandr Huziy) Date: Wed, 15 Jul 2015 14:29:34 -0400 Subject: [SciPy-User] numpy.where() issue In-Reply-To: References: <55A6A4E1.4090403@sebix.at> Message-ID: Just to save you possible frustration later: '|' is used instead of 'or' '&' is used instead of 'and' '~' is used instead of 'not' Cheers 2015-07-15 14:27 GMT-04:00 Oleksandr Huziy : > Maybe this way would be clearer (remember parentheses in the condition are > crucial): > > """ > indices = np.where((Yy > 0.02) | (Yy < -0.02)) > """ > > Cheers > > > 2015-07-15 14:22 GMT-04:00 Sebastian : > >> >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA256 >> >> Hi, >> >> What is this doing? np.where(Yy > 0.02 or Yy < -0.02) >> Let's split it up >> (Yy > 0.02) or (Yy < -0.02) >> The both < > comparisons give boolean numpy arrays, say a and b. >> a or b >> The python or-Operator works on boolean values, so it does >> bool(a) or bool(b) >> What is "The truth value of an array with more than one element"? It "is >> ambiguous". >> Voila >> >> You can also use >> np.logical_or(np.where(...), np.where(...)) >> or shorter >> np.where(...) | np.where(...) >> >> Sebastian >> >> On 07/15/2015 08:13 PM, Gabriele Brambilla wrote: >> > I have problems with numpy.where >> > >> > why if I write >> > deathnote0 = np.where(Yy > 0.02 or Yy < -0.02) >> > I get: >> > >> > Traceback (most recent call last): >> > >> > File "readingK.py", line 63, in >> > >> > deathnote0 = np.where(Yy > 0.02 or Yy < -0.02) >> > >> > ValueError: The truth value of an array with more than one element is >> ambiguous. Use a.any() or a.all() >> > >> > while if I do >> > >> > deathnote0 = np.where(Yy > 0.02) >> > >> > deathnote1 = np.where(Yy < -0.02) >> > >> > deathnote = deathnote0 + deathnote1 >> > >> > it works? 
>> > >> > >> > thanks >> > >> > >> > Gabriele >> > >> > >> > >> > >> > _______________________________________________ >> > SciPy-User mailing list >> > SciPy-User at scipy.org >> > http://mail.scipy.org/mailman/listinfo/scipy-user >> > >> > -- >> > python programming - mail server - photo - video - https://sebix.at >> > To verify my cryptographic signature or send me encrypted mails, get my >> > key at https://sebix.at/DC9B463B.asc and on public keyservers. >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG v2.0.22 (GNU/Linux) >> >> iQIcBAEBCAAGBQJVpqThAAoJEBn0X+vcm0Y7x00P/ioF2JTIXCrwW0IGHWqbEIah >> i+LAoBFle0pJItuSY3VCOA9Ut57YG8BKqs37hA/9dK3yQqGGwPGLx9ETGk6V8du8 >> qeoY4vCx7yYvoLFyysDNJZtyztpW2hCXjhy9ZhrlG75MzO/8W2VJ1k5b/moyUXfn >> sIcscCFI36Jt3lo4kySRDk7NqlzEg2opkXp1o053dIrSLQbQWLaihc2gio2MgtvB >> aYIDwPZ0fGaWNB/5o+nYkGOiT+fYez6pIqNbk7fUlQ4/i6U7z8/BddZbig5tlGw6 >> taho+44xcML8cndMWYiL4NlYTp4GOy0x4YcAeG6osyuJPc36flaRhxEACTV7JjO/ >> raHckH93l4hN/8l+m6hBh9vqkSCLtMIeJ3pXUKCqptWv8mRC3lB7u+pWNVU3iysI >> iRuv5DoCIQ15IgNDK42n7nsuM5JLEchrTBQn5twG2A5gjM7Q/IZvOhMlqycBN8Is >> xr4vZBwwgV+0tsxGdKnkeateIxo/lhtZjx4yjHgpMiDTxIB5gFVfDs1eowj1Glcm >> AE085May/sJHhAY5rGEBFO+yjtApKHKoKWSLFAzV7nO14kXuPxF73OP8rJdb+FAK >> czhqbShX74tICugQTsfwg0+032gyx8eMj4mjnIjBGUMIPTYBuAzh+SFNZBHQNguO >> 15Rahq3Uf0zFCofVMYZO >> =3gGp >> -----END PGP SIGNATURE----- >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Sasha > -- Sasha -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Jul 15 14:32:15 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 15 Jul 2015 13:32:15 -0500 Subject: [SciPy-User] numpy.where() issue In-Reply-To: References: Message-ID: On Jul 15, 2015 13:16, "Gabriele Brambilla" wrote: > > I have problems with numpy.where > > why if I write > deathnote0 = np.where(Yy > 0.02 or Yy < -0.02) The problem is the 'or'. It's an unfortunate limitation of python-the-language that 'or' only works on scalar values, and there's no way for numpy to fix this. (The reason python works this way is that 'or' is actually a flow-control construct, like 'if' -- it basically does 'if Yy > 0.02: return True; elif Yy < -0.02: return True; else: return False'. Notice that to decide whether to take the second branch and evaluate the second comparison, it has to reduce the first comparison to a single true/false value: it can't both take the branch and not take the branch at different points in the array. So it makes sense that python works this way, it's just annoying :-(.) Instead, use the bitwise or operator, which is allowed to be overridden, and for Boolean arrays will give you an elementwise or: (Yy > 0.02) | (Yy < -0.02) Unfortunately the parentheses are necessary, because | has higher precedence than < and >. As a matter of style, I also strongly suggest using np.nonzero instead of np.where: np.where is just a confusing wrapper function that calls np.nonzero on single argument inputs, and in other cases does other things. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at damcb.com Fri Jul 17 05:00:39 2015 From: guillaume at damcb.com (Guillaume Gay) Date: Fri, 17 Jul 2015 11:00:39 +0200 Subject: [SciPy-User] CGAL wrapping: cython vs boost vs SWIG Message-ID: <55A8C437.3070708@damcb.com> Hi all, I'm starting a modeling project emanating from a previous one . 
While the original project was based on graph-tool , I now want to have more hands on the C++ side of things. Particularly, CGAL LinearCellComplex library is well suited for my modeling needs. The basic needs I have are: * Access from python to custom iterators defined in C++ (over the cells of the LCC, one cell's edges, and so on) * The ability to manipulate attributes of the LinearCellComplex (associated with edges, vertices or faces) from python as |ndarrays| or even better pandas |DataFrames|, ideally without the need to copy data, such that modifications of these attributes are synchronized between the two languages For now, the project is using Boost.Python for the wrapping, but I got a bit confused with the ndarray side of things, as there are two projects claiming to provide interfaces between numpy and boost: * https://github.com/ndarray/Boost.NumPy and: * https://github.com/mdboom/numpy-boost They don't seem very active, nor very documented? I used SWIG before, but keep a very bad memory of the experience ;) So, would Cython be more adapted? My guess is, as the numpy interface is ingrained in Cython, that would be easier, but what about iterators? Any advice or previous experience on similar issues welcome. Best, Guillaume -- -- Guillaume Gay, PhD http://damcb.com 43 rue Horace Bertin 13005 Marseille +33 953 55 98 89 +33 651 95 94 00 n?SIRET 751 175 233 00020 -------------- next part -------------- An HTML attachment was scrubbed... URL: From moorepants at gmail.com Fri Jul 17 11:02:25 2015 From: moorepants at gmail.com (Jason Moore) Date: Fri, 17 Jul 2015 08:02:25 -0700 Subject: [SciPy-User] CGAL wrapping: cython vs boost vs SWIG In-Reply-To: <55A8C437.3070708@damcb.com> References: <55A8C437.3070708@damcb.com> Message-ID: Cython has great numpy support but it is not an automatic wrapping tool. You will have to manually wrap everything you want available in Python. This is a Cython based automatic wrapping tool that could be useful too: https://github.com/xdress/xdress. Jason moorepants.info +01 530-601-9791 On Fri, Jul 17, 2015 at 2:00 AM, Guillaume Gay wrote: > Hi all, > > I'm starting a modeling project > emanating from a previous one . While > the original project was based on graph-tool > , I now want to have more hands on the C++ > side of things. > > Particularly, CGAL LinearCellComplex > library is > well suited for my modeling needs. > > The basic needs I have are: > > - > > Access from python to custom iterators defined in C++ (over the cells > of the LCC, one cell's edges, and so on) > - > > The ability to manipulate attributes of the LinearCellComplex > (associated with edges, vertices or faces) from python as ndarrays or > even better pandas DataFrames, ideally without the need to copy data, > such that modifications of these attributes are synchronized between the > two languages > > For now, the project is using Boost.Python for the wrapping, but I got a > bit confused with the ndarray side of things, as there are two projects > claiming to provide interfaces between numpy and boost: > > - https://github.com/ndarray/Boost.NumPy > and: > - https://github.com/mdboom/numpy-boost > > They don't seem very active, nor very documented? > > I used SWIG before, but keep a very bad memory of the experience ;) > > So, would Cython be more adapted? My guess is, as the numpy interface is > ingrained in Cython, that would be easier, but what about iterators? > > Any advice or previous experience on similar issues welcome. 
> > Best, > > Guillaume > > -- > > -- > Guillaume Gay, PhD > http://damcb.com > > 43 rue Horace Bertin > 13005 Marseille > > +33 953 55 98 89 > +33 651 95 94 00 > > n?SIRET 751 175 233 00020 > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at damcb.com Fri Jul 17 11:13:53 2015 From: guillaume at damcb.com (Guillaume Gay) Date: Fri, 17 Jul 2015 17:13:53 +0200 Subject: [SciPy-User] CGAL wrapping: cython vs boost vs SWIG In-Reply-To: References: <55A8C437.3070708@damcb.com> Message-ID: <55A91BB1.1000802@damcb.com> Thanks Jason, As far as I understand it, boost.python isn't automatic either, you have to export all you need explicitly. I'll have a look at xdress, but their doc hosting website (xdress.org) is down, more like squatted by advertising links , actually :( G. Le 17/07/2015 17:02, Jason Moore a ?crit : > Cython has great numpy support but it is not an automatic wrapping > tool. You will have to manually wrap everything you want available in > Python. This is a Cython based automatic wrapping tool that could be > useful too: https://github.com/xdress/xdress. > > > Jason > moorepants.info > +01 530-601-9791 > > On Fri, Jul 17, 2015 at 2:00 AM, Guillaume Gay > wrote: > > Hi all, > > I'm starting a modeling project > emanating from a previous > one . While the original > project was based on graph-tool , I > now want to have more hands on the C++ side of things. > > Particularly, CGAL LinearCellComplex > > library is well suited for my modeling needs. > > The basic needs I have are: > > * > > Access from python to custom iterators defined in C++ (over > the cells of the LCC, one cell's edges, and so on) > > * > > The ability to manipulate attributes of the LinearCellComplex > (associated with edges, vertices or faces) from python as > |ndarrays| or even better pandas |DataFrames|, ideally without > the need to copy data, such that modifications of these > attributes are synchronized between the two languages > > For now, the project is using Boost.Python for the wrapping, but I > got a bit confused with the ndarray side of things, as there are > two projects claiming to provide interfaces between numpy and boost: > > * https://github.com/ndarray/Boost.NumPy > and: > * https://github.com/mdboom/numpy-boost > > They don't seem very active, nor very documented? > > I used SWIG before, but keep a very bad memory of the experience ;) > > So, would Cython be more adapted? My guess is, as the numpy > interface is ingrained in Cython, that would be easier, but what > about iterators? > > Any advice or previous experience on similar issues welcome. > > Best, > > Guillaume > > -- > > -- > Guillaume Gay, PhD > > http://damcb.com > > 43 rue Horace Bertin > 13005 Marseille > > +33 953 55 98 89 > +33 651 95 94 00 > > n?SIRET 751 175 233 00020 > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- -- Guillaume Gay, PhD http://damcb.com 43 rue Horace Bertin 13005 Marseille +33 953 55 98 89 +33 651 95 94 00 n?SIRET 751 175 233 00020 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Fri Jul 17 12:21:47 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 17 Jul 2015 09:21:47 -0700 Subject: [SciPy-User] CGAL wrapping: cython vs boost vs SWIG In-Reply-To: <55A91BB1.1000802@damcb.com> References: <55A8C437.3070708@damcb.com> <55A91BB1.1000802@damcb.com> Message-ID: Cyton is a great option. It is hand wrapping, so you'll be writing a fair bit of boilerplate, but it does let you create a nice customized python wrapper -- with the auto-wrapping tools, you need to hand-write a bunch of stuff to do that anyway, and SWIG, at least, makes that quite painful. I'd at least give XDress a try -- if it works, it could be great, but it's not a terribly mature project. -Chris On Fri, Jul 17, 2015 at 8:13 AM, Guillaume Gay wrote: > Thanks Jason, > > As far as I understand it, boost.python isn't automatic either, you have > to export all you need explicitly. > > I'll have a look at xdress, but their doc hosting website (xdress.org) is > down, more like squatted by advertising links , actually :( > > G. > > > Le 17/07/2015 17:02, Jason Moore a ?crit : > > Cython has great numpy support but it is not an automatic wrapping tool. > You will have to manually wrap everything you want available in Python. > This is a Cython based automatic wrapping tool that could be useful too: > https://github.com/xdress/xdress. > > > Jason > moorepants.info > +01 530-601-9791 > > On Fri, Jul 17, 2015 at 2:00 AM, Guillaume Gay > wrote: > >> Hi all, >> >> I'm starting a modeling project >> emanating from a previous one . While >> the original project was based on graph-tool >> , I now want to have more hands on the C++ >> side of things. >> >> Particularly, CGAL LinearCellComplex >> library is >> well suited for my modeling needs. >> >> The basic needs I have are: >> >> - >> >> Access from python to custom iterators defined in C++ (over the cells >> of the LCC, one cell's edges, and so on) >> - >> >> The ability to manipulate attributes of the LinearCellComplex >> (associated with edges, vertices or faces) from python as ndarrays or >> even better pandas DataFrames, ideally without the need to copy data, >> such that modifications of these attributes are synchronized between the >> two languages >> >> For now, the project is using Boost.Python for the wrapping, but I got a >> bit confused with the ndarray side of things, as there are two projects >> claiming to provide interfaces between numpy and boost: >> >> - https://github.com/ndarray/Boost.NumPy >> and: >> - https://github.com/mdboom/numpy-boost >> >> They don't seem very active, nor very documented? >> >> I used SWIG before, but keep a very bad memory of the experience ;) >> >> So, would Cython be more adapted? My guess is, as the numpy interface is >> ingrained in Cython, that would be easier, but what about iterators? >> >> Any advice or previous experience on similar issues welcome. 
>> >> Best, >> >> Guillaume >> >> -- >> >> -- >> Guillaume Gay, PhD >> http://damcb.com >> >> 43 rue Horace Bertin >> 13005 Marseille >> >> +33 953 55 98 89 >> +33 651 95 94 00 >> >> n?SIRET 751 175 233 00020 >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > _______________________________________________ > SciPy-User mailing listSciPy-User at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user > > > -- > > -- > Guillaume Gay, PhD > http://damcb.com > > 43 rue Horace Bertin > 13005 Marseille > > +33 953 55 98 89 > +33 651 95 94 00 > > n?SIRET 751 175 233 00020 > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From gb.gabrielebrambilla at gmail.com Fri Jul 17 15:24:54 2015 From: gb.gabrielebrambilla at gmail.com (Gabriele Brambilla) Date: Fri, 17 Jul 2015 15:24:54 -0400 Subject: [SciPy-User] order data in a grid with numpy Message-ID: Hi, I have 3 vectors containing x, y and f(x,y) values. They are ordered one with respect to the others but completely disordered in itself. Let's say, for sake of simplicity, that f=x+y they are x | y | f 1 | 2 | 3 5 | 1 | 6 .... And let's suppose that I know I have all the points to fill a grid. I would like to obtain a mesh grid or a 2d python numpy array from it. In c I know how to do it, but I know in python for cycles with element substitutions are avoidable... Do you have any solution? Thanks Gabriele -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Fri Jul 17 15:49:11 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Fri, 17 Jul 2015 15:49:11 -0400 Subject: [SciPy-User] order data in a grid with numpy In-Reply-To: References: Message-ID: Hi Gabriele, It's pretty simple, even if you don't have all the points! nrows = np.max(x) + 1 ncols = np.max(y) + 1 f_arr = np.zeros((nrows, ncols)) f_arr[x, y] = f Points to remember: - numpy indexing starts at zero - "x" and "y" are loaded terms; numpy coordinates are more like those of a matrix, with (0, 0) at the top-left and the first coordinate going downwards (vertically), the second going rightwards (horizontally). I prefer therefore to use "r" and "c" as coordinates, to avoid confusion with standard Cartesian coordinates. Juan. On Fri, Jul 17, 2015 at 3:24 PM, Gabriele Brambilla < gb.gabrielebrambilla at gmail.com> wrote: > Hi, > > I have 3 vectors containing x, y and f(x,y) values. They are ordered one > with respect to the others but completely disordered in itself. > Let's say, for sake of simplicity, that f=x+y > they are > x | y | f > 1 | 2 | 3 > 5 | 1 | 6 > .... > And let's suppose that I know I have all the points to fill a grid. > > I would like to obtain a mesh grid or a 2d python numpy array from it. > > In c I know how to do it, but I know in python for cycles with element > substitutions are avoidable... > > Do you have any solution? 
> > Thanks > > Gabriele > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Wed Jul 22 15:06:18 2015 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 22 Jul 2015 20:06:18 +0100 Subject: [SciPy-User] Low pass to band shelf transform Message-ID: Hi, I saw and used several functions to transform a LP filter to different kinds of filters, but I couldn't find a reference. I was wondering about low shelf, high shelf or band shelf filters on top of the others, but I don't know their transforms :/ Matthieu -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher Music band: http://liliejay.com/ From njs at pobox.com Wed Jul 22 02:53:16 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 21 Jul 2015 23:53:16 -0700 Subject: [SciPy-User] [ANN] metamodule v1.0 released Message-ID: Hi all, I'm pleased to announce the first release of 'metamodule', a new package that allows you to safely and easily hook attribute access on your package's module object (among other things). So for example, you can easily set it up so that a submodule in your package is lazily loaded the first time it is used, or so that a DeprecationWarning is issued every time a global constant is accessed. Downloads: https://pypi.python.org/pypi/metamodule Source/issues: https://github.com/njsmith/metamodule Share and enjoy, -n -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Wed Jul 22 02:15:47 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 21 Jul 2015 23:15:47 -0700 Subject: [SciPy-User] [ANN] patsy v0.4.0 released Message-ID: Hi all, I'm pleased to announce the v0.4.0 release of patsy. The main highlights of this release are that we now correctly handle data coded using pandas's new (since v0.15.0) Categorical support, and that we now publically expose much richer metadata about how a model is being coded (see http://patsy.readthedocs.org/en/latest/API-reference.html#patsy.DesignInfo for details). Aside from being useful in general, this sets us up to hopefully provide full pickling support for patsy models in a future release. Patsy is a Python library for describing statistical models (especially linear models, or models that have a linear component) and building design matrices. Patsy brings the convenience of R "formulas" to Python. Changes: https://patsy.readthedocs.org/en/latest/changes.html#v0-4-0 General information: https://github.com/pydata/patsy/blob/master/README Share and enjoy, -n -- Nathaniel J. Smith -- http://vorpus.org From sergio_r at mail.com Tue Jul 21 15:39:27 2015 From: sergio_r at mail.com (Sergio Rojas) Date: Tue, 21 Jul 2015 21:39:27 +0200 Subject: [SciPy-User] How to fix Aborted SciPy version 0.16.0b2 test ? Message-ID: An HTML attachment was scrubbed... URL: From yylee at altair.co.kr Tue Jul 21 22:18:55 2015 From: yylee at altair.co.kr (yoonyoung) Date: Wed, 22 Jul 2015 02:18:55 +0000 (UTC) Subject: [SciPy-User] line search with boundary condition Message-ID: Hello Expert, I have a difficult problem to solve and hope you guys can help me. Please refer to the attached link image before reading below. https://goo.gl/photos/kRi94KZYBt1CxBuA7 (I cannot attach the file in this mailing-list. Is it not possible?) 
My goal is:
- To find lines which start from the left edge to the right edge, as in the attached picture.
- There is a permitted area, shown in red in the picture.
- Each found line will be rotated about the Y-axis and its volume will be calculated: volume = 2*pi*integral(x*f(x)).
- The volume should be the same constant.
- The blue, sky-blue and green lines are acceptable in the picture.
- The orange line is not acceptable because it violates the red lines.

What modules or functions can be helpful for this problem? Is there anybody who can give me a hint?

Thanks a ton in advance!!

Regards

From francesco.delcitto at sauber-motorsport.com Fri Jul 24 03:47:37 2015
From: francesco.delcitto at sauber-motorsport.com (Del Citto Francesco)
Date: Fri, 24 Jul 2015 09:47:37 +0200
Subject: [SciPy-User] splines with imposed tangent at start and end point
Message-ID: <589CEB614006334D93C1A48C1B1964C9027CEAAE230B@srvmes03.spe-ch-md9.net>

Hi all,

I'm sure I'm not the first one asking a similar question, but I was not able to find an answer yet.

I need to define a 1D spline curve in the 3D space where both the coordinates and the tangent at first and last points are assigned.

scipy.interpolate.InterpolatedUnivariateSpline can build a spline passing through many points, but no constraint on the tangent can be given.

I've found other spline formulations where this is possible, but I was wondering if there is anything already available in SciPy.

Thanks in advance,
Francesco

Freundliche Grüsse,
Best regards,

Francesco Del Citto
CFD Development Group

www.sauberf1team.com

Sauber Motorsport AG
Wildbachstrasse 9
8340 Hinwil
Switzerland
Phone +41 44 937 95 10
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From evgeny.burovskiy at gmail.com Fri Jul 24 05:49:22 2015
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Fri, 24 Jul 2015 10:49:22 +0100
Subject: Re: [SciPy-User] splines with imposed tangent at start and end point
In-Reply-To: <589CEB614006334D93C1A48C1B1964C9027CEAAE230B@srvmes03.spe-ch-md9.net>
References: <589CEB614006334D93C1A48C1B1964C9027CEAAE230B@srvmes03.spe-ch-md9.net>

[Shameless plug] https://github.com/scipy/scipy/pull/3174

This is not merged yet, and there is some discussion on the interface (comments welcome). I guess it's easy to just grab the spline construction code (make_interp_spline and its cython helpers) and use splev for evaluations.

On Jul 24, 2015 10:48 AM, "Del Citto Francesco" <francesco.delcitto at sauber-motorsport.com> wrote:
> Hi all,
> I'm sure I'm not the first one asking a similar question, but I was not able to find an answer yet.
> I need to define a 1D spline curve in 3D space where both the coordinates and the tangents at the first and last points are prescribed. scipy.interpolate.InterpolatedUnivariateSpline can build a spline passing through many points, but no constraint on the tangent can be imposed.
> I've found other spline formulations where this is possible, but I was wondering whether anything is already available in SciPy.
> Thanks in advance,
> Francesco

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
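A minimal sketch (not part of the original exchange) of how end-point tangents can be imposed with the make_interp_spline interface from the pull request above, which later landed as scipy.interpolate.make_interp_spline; the parameter values, points and tangent vectors below are invented for illustration.

# Sketch only: assumes the make_interp_spline interface available in later SciPy releases.
import numpy as np
from scipy.interpolate import make_interp_spline

t = np.linspace(0.0, 1.0, 6)                  # curve parameter at the data points
pts = np.c_[np.cos(t), np.sin(t), t]          # (n, 3) coordinates of the 3D curve
tan_start = np.array([0.0, 1.0, 1.0])         # prescribed tangent d(pts)/dt at t[0]
tan_end = np.array([-np.sin(1.0), np.cos(1.0), 1.0])  # prescribed tangent at t[-1]

# Cubic spline through all points; bc_type fixes the first derivative at each end.
spl = make_interp_spline(t, pts, k=3, bc_type=([(1, tan_start)], [(1, tan_end)]))
curve = spl(np.linspace(0.0, 1.0, 200))       # evaluate the interpolated 3D curve

The same constraints can also be expressed per coordinate with three separate 1D splines if only an older interface is available.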
From francesco.delcitto at sauber-motorsport.com Fri Jul 24 06:00:28 2015
From: francesco.delcitto at sauber-motorsport.com (Del Citto Francesco)
Date: Fri, 24 Jul 2015 12:00:28 +0200
Subject: Re: [SciPy-User] splines with imposed tangent at start and end point
Message-ID: <589CEB614006334D93C1A48C1B1964C9027CEAAE2320@srvmes03.spe-ch-md9.net>

Thanks a lot, we'll have a look at it!

Obviously, I would have preferred this to be part of standard SciPy already, to avoid compiling additional custom modules and to rely only on "official" packages, but it is a good starting point.

Thanks again,
Francesco

From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Evgeni Burovski
Sent: Freitag, 24. Juli 2015 11:49
To: SciPy Users List
Subject: Re: [SciPy-User] splines with imposed tangent at start and end point

[Shameless plug] https://github.com/scipy/scipy/pull/3174
This is not merged yet, and there is some discussion on the interface (comments welcome). I guess it's easy to just grab the spline construction code (make_interp_spline and its cython helpers) and use splev for evaluations.
From evgeny.burovskiy at gmail.com Fri Jul 24 06:19:11 2015
From: evgeny.burovskiy at gmail.com (Evgeni Burovski)
Date: Fri, 24 Jul 2015 11:19:11 +0100
Subject: Re: [SciPy-User] splines with imposed tangent at start and end point

While this is OT for this thread, I'll still say it out loud: we're seriously short on manpower, and we appreciate help. This includes reviewing PRs in your domain expertise (if we don't know you yet, it'd help to introduce yourself), finishing up and helping move stalled PRs, triaging bug reports, and so on.

Evgeni

On Jul 24, 2015 1:00 PM, "Del Citto Francesco" <francesco.delcitto at sauber-motorsport.com> wrote:
> Thanks a lot, we'll have a look at it!
> Obviously, I would have preferred this to be part of standard SciPy already, to avoid compiling additional custom modules and to rely only on "official" packages, but it is a good starting point.
> Thanks again,
> Francesco

From ralf.gommers at gmail.com Fri Jul 24 15:39:54 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 24 Jul 2015 21:39:54 +0200
Subject: [SciPy-User] ANN: Scipy 0.16.0 release

Hi all,

On behalf of the Scipy development team I'm pleased to announce the availability of Scipy 0.16.0. This release contains some exciting new features (see release notes below) and more than half a year's worth of maintenance work. 93 people contributed to this release.

This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.6.2 or greater. Sources, binaries and release notes can be found at https://github.com/scipy/scipy/releases/tag/v0.16.0

Enjoy,
Ralf

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

==========================
SciPy 0.16.0 Release Notes
==========================

SciPy 0.16.0 is the culmination of 7 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation.
There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.16.x branch, and on adding new features on the master branch. This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.6.2 or greater. Highlights of this release include: - - A Cython API for BLAS/LAPACK in `scipy.linalg` - - A new benchmark suite. It's now straightforward to add new benchmarks, and they're routinely included with performance enhancement PRs. - - Support for the second order sections (SOS) format in `scipy.signal`. New features ============ Benchmark suite - --------------- The benchmark suite has switched to using `Airspeed Velocity `__ for benchmarking. You can run the suite locally via ``python runtests.py --bench``. For more details, see ``benchmarks/README.rst``. `scipy.linalg` improvements - --------------------------- A full set of Cython wrappers for BLAS and LAPACK has been added in the modules `scipy.linalg.cython_blas` and `scipy.linalg.cython_lapack`. In Cython, these wrappers can now be cimported from their corresponding modules and used without linking directly against BLAS or LAPACK. The functions `scipy.linalg.qr_delete`, `scipy.linalg.qr_insert` and `scipy.linalg.qr_update` for updating QR decompositions were added. The function `scipy.linalg.solve_circulant` solves a linear system with a circulant coefficient matrix. The function `scipy.linalg.invpascal` computes the inverse of a Pascal matrix. The function `scipy.linalg.solve_toeplitz`, a Levinson-Durbin Toeplitz solver, was added. Added wrapper for potentially useful LAPACK function ``*lasd4``. It computes the square root of the i-th updated eigenvalue of a positive symmetric rank-one modification to a positive diagonal matrix. See its LAPACK documentation and unit tests for it to get more info. Added two extra wrappers for LAPACK least-square solvers. Namely, they are ``*gelsd`` and ``*gelsy``. Wrappers for the LAPACK ``*lange`` functions, which calculate various matrix norms, were added. Wrappers for ``*gtsv`` and ``*ptsv``, which solve ``A*X = B`` for tri-diagonal matrix ``A``, were added. `scipy.signal` improvements - --------------------------- Support for second order sections (SOS) as a format for IIR filters was added. The new functions are: * `scipy.signal.sosfilt` * `scipy.signal.sosfilt_zi`, * `scipy.signal.sos2tf` * `scipy.signal.sos2zpk` * `scipy.signal.tf2sos` * `scipy.signal.zpk2sos`. Additionally, the filter design functions `iirdesign`, `iirfilter`, `butter`, `cheby1`, `cheby2`, `ellip`, and `bessel` can return the filter in the SOS format. The function `scipy.signal.place_poles`, which provides two methods to place poles for linear systems, was added. The option to use Gustafsson's method for choosing the initial conditions of the forward and backward passes was added to `scipy.signal.filtfilt`. New classes ``TransferFunction``, ``StateSpace`` and ``ZerosPolesGain`` were added. These classes are now returned when instantiating `scipy.signal.lti`. Conversion between those classes can be done explicitly now. An exponential (Poisson) window was added as `scipy.signal.exponential`, and a Tukey window was added as `scipy.signal.tukey`. The function for computing digital filter group delay was added as `scipy.signal.group_delay`. 
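For illustration (this snippet is not part of the original release notes), a high-order IIR filter can now be designed directly in SOS form and applied without ever forming an ill-conditioned transfer function::

    # Illustrative sketch: 10th-order Butterworth lowpass designed and applied as SOS.
    import numpy as np
    from scipy import signal

    sos = signal.butter(10, 0.2, btype='low', output='sos')  # design in SOS form
    x = np.random.randn(1000)                                # example input signal
    y = signal.sosfilt(sos, x)                               # filter using the sections
    b, a = signal.sos2tf(sos)                                # convert to (b, a) if needed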
The functionality for spectral analysis and spectral density estimation has been significantly improved: `scipy.signal.welch` became ~8x faster and the functions `scipy.signal.spectrogram`, `scipy.signal.coherence` and `scipy.signal.csd` (cross-spectral density) were added. `scipy.signal.lsim` was rewritten - all known issues are fixed, so this function can now be used instead of ``lsim2``; ``lsim`` is orders of magnitude faster than ``lsim2`` in most cases. `scipy.sparse` improvements - --------------------------- The function `scipy.sparse.norm`, which computes sparse matrix norms, was added. The function `scipy.sparse.random`, which allows to draw random variates from an arbitrary distribution, was added. `scipy.spatial` improvements - ---------------------------- `scipy.spatial.cKDTree` has seen a major rewrite, which improved the performance of the ``query`` method significantly, added support for parallel queries, pickling, and options that affect the tree layout. See pull request 4374 for more details. The function `scipy.spatial.procrustes` for Procrustes analysis (statistical shape analysis) was added. `scipy.stats` improvements - -------------------------- The Wishart distribution and its inverse have been added, as `scipy.stats.wishart` and `scipy.stats.invwishart`. The Exponentially Modified Normal distribution has been added as `scipy.stats.exponnorm`. The Generalized Normal distribution has been added as `scipy.stats.gennorm`. All distributions now contain a ``random_state`` property and allow specifying a specific ``numpy.random.RandomState`` random number generator when generating random variates. Many statistical tests and other `scipy.stats` functions that have multiple return values now return ``namedtuples``. See pull request 4709 for details. `scipy.optimize` improvements - ----------------------------- A new derivative-free method DF-SANE has been added to the nonlinear equation system solving function `scipy.optimize.root`. Deprecated features =================== ``scipy.stats.pdf_fromgamma`` is deprecated. This function was undocumented, untested and rarely used. Statsmodels provides equivalent functionality with ``statsmodels.distributions.ExpandedNormal``. ``scipy.stats.fastsort`` is deprecated. This function is unnecessary, ``numpy.argsort`` can be used instead. ``scipy.stats.signaltonoise`` and ``scipy.stats.mstats.signaltonoise`` are deprecated. These functions did not belong in ``scipy.stats`` and are rarely used. See issue #609 for details. ``scipy.stats.histogram2`` is deprecated. This function is unnecessary, ``numpy.histogram2d`` can be used instead. Backwards incompatible changes ============================== The deprecated global optimizer ``scipy.optimize.anneal`` was removed. The following deprecated modules have been removed: ``scipy.lib.blas``, ``scipy.lib.lapack``, ``scipy.linalg.cblas``, ``scipy.linalg.fblas``, ``scipy.linalg.clapack``, ``scipy.linalg.flapack``. They had been deprecated since Scipy 0.12.0, the functionality should be accessed as `scipy.linalg.blas` and `scipy.linalg.lapack`. The deprecated function ``scipy.special.all_mat`` has been removed. The deprecated functions ``fprob``, ``ksprob``, ``zprob``, ``randwcdf`` and ``randwppf`` have been removed from `scipy.stats`. Other changes ============= The version numbering for development builds has been updated to comply with PEP 440. Building with ``python setup.py develop`` is now supported. 
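As an aside on the namedtuple change in `scipy.stats` mentioned above (this snippet is likewise not part of the original release notes), test results can now be accessed by field name::

    # Illustration only: statistical tests return namedtuples with named fields.
    import numpy as np
    from scipy import stats

    rng = np.random.RandomState(0)             # distributions also accept random_state now
    a, b = rng.randn(100), rng.randn(100) + 0.5
    result = stats.ttest_ind(a, b)
    print(result.statistic, result.pvalue)     # equivalent to result[0], result[1]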
Authors ======= * @axiru + * @endolith * Elliott Sales de Andrade + * Anne Archibald * Yoshiki V?zquez Baeza + * Sylvain Bellemare * Felix Berkenkamp + * Raoul Bourquin + * Matthew Brett * Per Brodtkorb * Christian Brueffer * Lars Buitinck * Evgeni Burovski * Steven Byrnes * CJ Carey * George Castillo + * Alex Conley + * Liam Damewood + * Rupak Das + * Abraham Escalante + * Matthias Feurer + * Eric Firing + * Clark Fitzgerald * Chad Fulton * Andr? Gaul * Andreea Georgescu + * Christoph Gohlke * Andrey Golovizin + * Ralf Gommers * J.J. Green + * Alex Griffing * Alexander Grigorievskiy + * Hans Moritz Gunther + * Jonas Hahnfeld + * Charles Harris * Ian Henriksen * Andreas Hilboll * ?smund Hjulstad + * Jan Schl?ter + * Janko Slavi? + * Daniel Jensen + * Johannes Ball? + * Terry Jones + * Amato Kasahara + * Eric Larson * Denis Laxalde * Antony Lee * Gregory R. Lee * Perry Lee + * Lo?c Est?ve * Martin Manns + * Eric Martin + * Mat?j Koci?n + * Andreas Mayer + * Nikolay Mayorov + * Robert McGibbon + * Sturla Molden * Nicola Montecchio + * Eric Moore * Jamie Morton + * Nikolas Moya + * Maniteja Nandana + * Andrew Nelson * Joel Nothman * Aldrian Obaja * Regina Ongowarsito + * Paul Ortyl + * Pedro L?pez-Adeva Fern?ndez-Layos + * Stefan Peterson + * Irvin Probst + * Eric Quintero + * John David Reaver + * Juha Remes + * Thomas Robitaille * Clancy Rowley + * Tobias Schmidt + * Skipper Seabold * Aman Singh + * Eric Soroos * Valentine Svensson + * Julian Taylor * Aman Thakral + * Helmut Toplitzer + * Fukumu Tsutsumi + * Anastasiia Tsyplia + * Jacob Vanderplas * Pauli Virtanen * Matteo Visconti + * Warren Weckesser * Florian Wilhelm + * Nathan Woods * Haochen Wu + * Daan Wynen + A total of 93 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete. Issues closed for 0.16.0 - ------------------------ - - `#1063 `__: Implement a whishart distribution (Trac #536) - - `#1885 `__: Rbf: floating point warnings - possible bug (Trac #1360) - - `#2020 `__: Rbf default epsilon too large (Trac #1495) - - `#2325 `__: extending distributions, hypergeom, to degenerate cases (Trac... - - `#3502 `__: [ENH] linalg.hessenberg should use ORGHR for calc_q=True - - `#3603 `__: Passing array as window into signal.resample() fails - - `#3675 `__: Intermittent failures for signal.slepian on Windows - - `#3742 `__: Pchipinterpolator inconvenient as ppoly - - `#3786 `__: add procrustes? - - `#3798 `__: scipy.io.savemat fails for empty dicts - - `#3975 `__: Use RandomState in scipy.stats - - `#4022 `__: savemat incorrectly saves logical arrays - - `#4028 `__: scipy.stats.geom.logpmf(1,1) returns nan. The correct value is... - - `#4030 `__: simplify scipy.stats.betaprime.cdf - - `#4031 `__: improve accuracy of scipy.stats.gompertz distribution for small... - - `#4033 `__: improve accuracy of scipy.stats.lomax distribution for small... - - `#4034 `__: improve accuracy of scipy.stats.rayleigh distribution for large... - - `#4035 `__: improve accuracy of scipy.stats.truncexpon distribution for small... - - `#4081 `__: Error when reading matlab file: buffer is too small for requested... - - `#4100 `__: Why does qr(a, lwork=0) not fail? 
- - `#4134 `__: scipy.stats: rv_frozen has no expect() method - - `#4204 `__: Please add docstring to scipy.optimize.RootResults - - `#4206 `__: Wrap LAPACK tridiagonal solve routine `gtsv` - - `#4208 `__: Empty sparse matrices written to MAT file cannot be read by MATLAB - - `#4217 `__: use a TravisCI configuration with numpy built with NPY_RELAXED_STRIDES_CHECKING=1 - - `#4282 `__: integrate.odeint raises an exception when full_output=1 and the... - - `#4301 `__: scipy and numpy version names do not follow pep 440 - - `#4355 `__: PPoly.antiderivative() produces incorrect output - - `#4391 `__: spsolve becomes extremely slow with large b matrix - - `#4393 `__: Documentation glitsch in sparse.linalg.spilu - - `#4408 `__: Vector-valued constraints in minimize() et al - - `#4412 `__: Documentation of scipy.signal.cwt error - - `#4428 `__: dok.__setitem__ problem with negative indices - - `#4434 `__: Incomplete documentation for sparse.linalg.spsolve - - `#4438 `__: linprog() documentation example wrong - - `#4445 `__: Typo in scipy.special.expit doc - - `#4467 `__: Documentation Error in scipy.optimize options for TNC - - `#4492 `__: solve_toeplitz benchmark is bitrotting already - - `#4506 `__: lobpcg/sparse performance regression Jun 2014? - - `#4520 `__: g77_abi_wrappers needed on Linux for MKL as well - - `#4521 `__: Broken check in uses_mkl for newer versions of the library - - `#4523 `__: rbf with gaussian kernel seems to produce more noise than original... - - `#4526 `__: error in site documentation for poisson.pmf() method - - `#4527 `__: KDTree example doesn't work in Python 3 - - `#4550 `__: `scipy.stats.mode` - UnboundLocalError on empty sequence - - `#4554 `__: filter out convergence warnings in optimization tests - - `#4565 `__: odeint messages - - `#4569 `__: remez: "ValueError: Failure to converge after 25 iterations.... - - `#4582 `__: DOC: optimize: _minimize_scalar_brent does not have a disp option - - `#4585 `__: DOC: Erroneous latex-related characters in tutorial. - - `#4590 `__: sparse.linalg.svds should throw an exception if which not in... - - `#4594 `__: scipy.optimize.linprog IndexError when a callback is providen - - `#4596 `__: scipy.linalg.block_diag misbehavior with empty array inputs (v0.13.3) - - `#4599 `__: scipy.integrate.nquad should call _OptFunc when called with only... - - `#4612 `__: Crash in signal.lfilter on nd input with wrong shaped zi - - `#4613 `__: scipy.io.readsav error on reading sav file - - `#4673 `__: scipy.interpolate.RectBivariateSpline construction locks PyQt... - - `#4681 `__: Broadcasting in signal.lfilter still not quite right. - - `#4705 `__: kmeans k_or_guess parameter error if guess is not square array - - `#4719 `__: Build failure on 14.04.2 - - `#4724 `__: GenGamma _munp function fails due to overflow - - `#4726 `__: FAIL: test_cobyla.test_vector_constraints - - `#4734 `__: Failing tests in stats with numpy master. - - `#4736 `__: qr_update bug or incompatibility with numpy 1.10? - - `#4746 `__: linprog returns solution violating equality constraint - - `#4757 `__: optimize.leastsq docstring mismatch - - `#4774 `__: Update contributor list for v0.16 - - `#4779 `__: circmean and others do not appear in the documentation - - `#4788 `__: problems with scipy sparse linalg isolve iterative.py when complex - - `#4791 `__: BUG: scipy.spatial: incremental Voronoi doesn't increase size... 
Pull requests for 0.16.0 - ------------------------ - - `#3116 `__: sparse: enhancements for DIA format - - `#3157 `__: ENH: linalg: add the function 'solve_circulant' for solving a... - - `#3442 `__: ENH: signal: Add Gustafsson's method as an option for the filtfilt... - - `#3679 `__: WIP: fix sporadic slepian failures - - `#3680 `__: Some cleanups in stats - - `#3717 `__: ENH: Add second-order sections filtering - - `#3741 `__: Dltisys changes - - `#3956 `__: add note to scipy.signal.resample about prime sample numbers - - `#3980 `__: Add check_finite flag to UnivariateSpline - - `#3996 `__: MAINT: stricter linalg argument checking - - `#4001 `__: BUG: numerical precision in dirichlet - - `#4012 `__: ENH: linalg: Add a function to compute the inverse of a Pascal... - - `#4021 `__: ENH: Cython api for lapack and blas - - `#4089 `__: Fixes for various PEP8 issues. - - `#4116 `__: MAINT: fitpack: trim down compiler warnings (unused labels, variables) - - `#4129 `__: ENH: stats: add a random_state property to distributions - - `#4135 `__: ENH: Add Wishart and inverse Wishart distributions - - `#4195 `__: improve the interpolate docs - - `#4200 `__: ENH: Add t-test from descriptive stats function. - - `#4202 `__: Dendrogram threshold color - - `#4205 `__: BLD: fix a number of Bento build warnings. - - `#4211 `__: add an ufunc for the inverse Box-Cox transfrom - - `#4212 `__: MRG:fix for gh-4208 - - `#4213 `__: ENH: specific warning if matlab file is empty - - `#4215 `__: Issue #4209: splprep documentation updated to reflect dimensional... - - `#4219 `__: DOC: silence several Sphinx warnings when building the docs - - `#4223 `__: MAINT: remove two redundant lines of code - - `#4226 `__: try forcing the numpy rebuild with relaxed strides - - `#4228 `__: BLD: some updates to Bento config files and docs. Closes gh-3978. - - `#4232 `__: wrong references in the docs - - `#4242 `__: DOC: change example sample spacing - - `#4245 `__: Arff fixes - - `#4246 `__: MAINT: C fixes - - `#4247 `__: MAINT: remove some unused code - - `#4249 `__: Add routines for updating QR decompositions - - `#4250 `__: MAINT: Some pyflakes-driven cleanup in linalg and sparse - - `#4252 `__: MAINT trim away >10 kLOC of generated C code - - `#4253 `__: TST: stop shadowing ellip* tests vs boost data - - `#4254 `__: MAINT: special: use NPY_PI, not M_PI - - `#4255 `__: DOC: INSTALL: use Py3-compatible print syntax, and don't mention... - - `#4256 `__: ENH: spatial: reimplement cdist_cosine using np.dot - - `#4258 `__: BUG: io.arff #4429 #2088 - - `#4261 `__: MAINT: signal: PEP8 and related style clean up. - - `#4262 `__: BUG: newton_krylov() was ignoring norm_tol argument, closes #4259 - - `#4263 `__: MAINT: clean up test noise and optimize tests for docstrings... - - `#4266 `__: MAINT: io: Give an informative error when attempting to read... - - `#4268 `__: MAINT: fftpack benchmark integer division vs true division - - `#4269 `__: MAINT: avoid shadowing the eigvals function - - `#4272 `__: BUG: sparse: Fix bench_sparse.py - - `#4276 `__: DOC: remove confusing parts of the documentation related to writing... - - `#4281 `__: Sparse matrix multiplication: only convert array if needed (with... - - `#4284 `__: BUG: integrate: odeint crashed when the integration time was... - - `#4286 `__: MRG: fix matlab output type of logical array - - `#4287 `__: DEP: deprecate stats.pdf_fromgamma. Closes gh-699. 
- - `#4291 `__: DOC: linalg: fix layout in cholesky_banded docstring - - `#4292 `__: BUG: allow empty dict as proxy for empty struct - - `#4293 `__: MAINT: != -> not_equal in hamming distance implementation - - `#4295 `__: Pole placement - - `#4296 `__: MAINT: some cleanups in tests of several modules - - `#4302 `__: ENH: Solve toeplitz linear systems - - `#4306 `__: Add benchmark for conjugate gradient solver. - - `#4307 `__: BLD: PEP 440 - - `#4310 `__: BUG: make stats.geom.logpmf(1,1) return 0.0 instead of nan - - `#4311 `__: TST: restore a test that uses slogdet now that we have dropped... - - `#4313 `__: Some minor fixes for stats.wishart addition. - - `#4315 `__: MAINT: drop numpy 1.5 compatibility code in sparse matrix tests - - `#4318 `__: ENH: Add random_state to multivariate distributions - - `#4319 `__: MAINT: fix hamming distance regression for exotic arrays, with... - - `#4320 `__: TST: a few changes like self.assertTrue(x == y, message) -> assert_equal(x,... - - `#4321 `__: TST: more changes like self.assertTrue(x == y, message) -> assert_equal(x,... - - `#4322 `__: TST: in test_signaltools, changes like self.assertTrue(x == y,... - - `#4323 `__: MAINT: clean up benchmarks so they can all be run as single files. - - `#4324 `__: Add more detailed committer guidelines, update MAINTAINERS.txt - - `#4326 `__: TST: use numpy.testing in test_hierarchy.py - - `#4329 `__: MAINT: stats: rename check_random_state test function - - `#4330 `__: Update distance tests - - `#4333 `__: MAINT: import comb, factorial from scipy.special, not scipy.misc - - `#4338 `__: TST: more conversions from nose to numpy.testing - - `#4339 `__: MAINT: remove the deprecated all_mat function from special_matrices.py - - `#4340 `__: add several features to frozen distributions - - `#4344 `__: BUG: Fix/test invalid lwork param in qr - - `#4345 `__: Fix test noise visible with Python 3.x - - `#4347 `__: Remove deprecated blas/lapack imports, rename lib to _lib - - `#4349 `__: DOC: add a nontrivial example to stats.binned_statistic. - - `#4350 `__: MAINT: remove optimize.anneal for 0.16.0 (was deprecated in 0.14.0). - - `#4351 `__: MAINT: fix usage of deprecated Numpy C API in optimize... - - `#4352 `__: MAINT: fix a number of special test failures - - `#4353 `__: implement cdf for betaprime distribution - - `#4357 `__: BUG: piecewise polynomial antiderivative - - `#4358 `__: BUG: integrate: fix handling of banded Jacobians in odeint, plus... - - `#4359 `__: MAINT: remove a code path taken for Python version < 2.5 - - `#4360 `__: MAINT: stats.mstats: Remove some unused variables (thanks, pyflakes). - - `#4362 `__: Removed erroneous reference to smoothing parameter #4072 - - `#4363 `__: MAINT: interpolate: clean up in fitpack.py - - `#4364 `__: MAINT: lib: don't export "partial" from decorator - - `#4365 `__: svdvals now returns a length-0 sequence of singular values given... - - `#4367 `__: DOC: slightly improve TeX rendering of wishart/invwishart docstring - - `#4373 `__: ENH: wrap gtsv and ptsv for solve_banded and solveh_banded. - - `#4374 `__: ENH: Enhancements to spatial.cKDTree - - `#4376 `__: BF: fix reading off-spec matlab logical sparse - - `#4377 `__: MAINT: integrate: Clean up some Fortran test code. 
- - `#4378 `__: MAINT: fix usage of deprecated Numpy C API in signal - - `#4380 `__: MAINT: scipy.optimize, removing further anneal references - - `#4381 `__: ENH: Make DCT and DST accept int and complex types like fft - - `#4392 `__: ENH: optimize: add DF-SANE nonlinear derivative-free solver - - `#4394 `__: Make reordering algorithms 64-bit clean - - `#4396 `__: BUG: bundle cblas.h in Accelerate ABI wrappers to enable compilation... - - `#4398 `__: FIX pdist bug where wminkowski's w.dtype != double - - `#4402 `__: BUG: fix stat.hypergeom argcheck - - `#4404 `__: MAINT: Fill in the full symmetric squareform in the C loop - - `#4405 `__: BUG: avoid X += X.T (refs #4401) - - `#4407 `__: improved accuracy of gompertz distribution for small x - - `#4414 `__: DOC:fix error in scipy.signal.cwt documentation. - - `#4415 `__: ENH: Improve accuracy of lomax for small x. - - `#4416 `__: DOC: correct a parameter name in docstring of SuperLU.solve.... - - `#4419 `__: Restore scipy.linalg.calc_lwork also in master - - `#4420 `__: fix a performance issue with a sparse solver - - `#4423 `__: ENH: improve rayleigh accuracy for large x. - - `#4424 `__: BUG: optimize.minimize: fix overflow issue with integer x0 input. - - `#4425 `__: ENH: Improve accuracy of truncexpon for small x - - `#4426 `__: ENH: improve rayleigh accuracy for large x. - - `#4427 `__: MAINT: optimize: cleanup of TNC code - - `#4429 `__: BLD: fix build failure with numpy 1.7.x and 1.8.x. - - `#4430 `__: BUG: fix a sparse.dok_matrix set/get copy-paste bug - - `#4433 `__: Update _minimize.py - - `#4435 `__: ENH: release GIL around batch distance computations - - `#4436 `__: Fixed incomplete documentation for spsolve - - `#4439 `__: MAINT: integrate: Some clean up in the tests. - - `#4440 `__: Fast permutation t-test - - `#4442 `__: DOC: optimize: fix wrong result in docstring - - `#4447 `__: DOC: signal: Some additional documentation to go along with the... - - `#4448 `__: DOC: tweak the docstring of lapack.linalg module - - `#4449 `__: fix a typo in the expit docstring - - `#4451 `__: ENH: vectorize distance loops with gcc - - `#4456 `__: MAINT: don't fail large data tests on MemoryError - - `#4461 `__: CI: use travis_retry to deal with network timeouts - - `#4462 `__: DOC: rationalize minimize() et al. documentation - - `#4470 `__: MAINT: sparse: inherit dok_matrix.toarray from spmatrix - - `#4473 `__: BUG: signal: Fix validation of the zi shape in sosfilt. - - `#4475 `__: BLD: setup.py: update min numpy version and support "setup.py... - - `#4481 `__: ENH: add a new linalg special matrix: the Helmert matrix - - `#4485 `__: MRG: some changes to allow reading bad mat files - - `#4490 `__: [ENH] linalg.hessenberg: use orghr - rebase - - `#4491 `__: ENH: linalg: Adding wrapper for potentially useful LAPACK function... - - `#4493 `__: BENCH: the solve_toeplitz benchmark used outdated syntax and... - - `#4494 `__: MAINT: stats: remove duplicated code - - `#4496 `__: References added for watershed_ift algorithm - - `#4499 `__: DOC: reshuffle stats distributions documentation - - `#4501 `__: Replace benchmark suite with airspeed velocity - - `#4502 `__: SLSQP should strictly satisfy bound constraints - - `#4503 `__: DOC: forward port 0.15.x release notes and update author name... 
- - `#4504 `__: ENH: option to avoid computing possibly unused svd matrix - - `#4505 `__: Rebase of PR 3303 (sparse matrix norms) - - `#4507 `__: MAINT: fix lobpcg performance regression - - `#4509 `__: DOC: sparse: replace dead link - - `#4511 `__: Fixed differential evolution bug - - `#4512 `__: Change to fully PEP440 compliant dev version numbers (always... - - `#4525 `__: made tiny style corrections (pep8) - - `#4533 `__: Add exponentially modified gaussian distribution (scipy.stats.expongauss) - - `#4534 `__: MAINT: benchmarks: make benchmark suite importable on all scipy... - - `#4535 `__: BUG: Changed zip() to list(zip()) so that it could work in Python... - - `#4536 `__: Follow up to pr 4348 (exponential window) - - `#4540 `__: ENH: spatial: Add procrustes analysis - - `#4541 `__: Bench fixes - - `#4542 `__: TST: NumpyVersion dev -> dev0 - - `#4543 `__: BUG: Overflow in savgol_coeffs - - `#4544 `__: pep8 fixes for stats - - `#4546 `__: MAINT: use reduction axis arguments in one-norm estimation - - `#4549 `__: ENH : Added group_delay to scipy.signal - - `#4553 `__: ENH: Significantly faster moment function - - `#4556 `__: DOC: document the changes of the sparse.linalg.svds (optional... - - `#4559 `__: DOC: stats: describe loc and scale parameters in the docstring... - - `#4563 `__: ENH: rewrite of stats.ppcc_plot - - `#4564 `__: Be more (or less) forgiving when user passes +-inf instead of... - - `#4566 `__: DEP: remove a bunch of deprecated function from scipy.stats,... - - `#4570 `__: MNT: Suppress LineSearchWarning's in scipy.optimize tests - - `#4572 `__: ENH: Extract inverse hessian information from L-BFGS-B - - `#4576 `__: ENH: Split signal.lti into subclasses, part of #2912 - - `#4578 `__: MNT: Reconcile docstrings and function signatures - - `#4581 `__: Fix build with Intel MKL on Linux - - `#4583 `__: DOC: optimize: remove references to unused disp kwarg - - `#4584 `__: ENH: scipy.signal - Tukey window - - `#4587 `__: Hermite asymptotic - - `#4593 `__: DOC - add example to RegularGridInterpolator - - `#4595 `__: DOC: Fix erroneous latex characters in tutorial/optimize. - - `#4600 `__: Add return codes to optimize.tnc docs - - `#4603 `__: ENH: Wrap LAPACK ``*lange`` functions for matrix norms - - `#4604 `__: scipy.stats: generalized normal distribution - - `#4609 `__: MAINT: interpolate: fix a few inconsistencies between docstrings... - - `#4610 `__: MAINT: make runtest.py --bench-compare use asv continuous and... - - `#4611 `__: DOC: stats: explain rice scaling; add a note to the tutorial... - - `#4614 `__: BUG: lfilter, the size of zi was not checked correctly for nd... - - `#4617 `__: MAINT: integrate: Clean the C code behind odeint. - - `#4618 `__: FIX: Raise error when window length != data length - - `#4619 `__: Issue #4550: `scipy.stats.mode` - UnboundLocalError on empty... - - `#4620 `__: Fixed a problem (#4590) with svds accepting wrong eigenvalue... - - `#4621 `__: Speed up special.ai_zeros/bi_zeros by 10x - - `#4623 `__: MAINT: some tweaks to spatial.procrustes (private file, html... - - `#4628 `__: Speed up signal.lfilter and add a convolution path for FIR filters - - `#4629 `__: Bug: integrate.nquad; resolve issue #4599 - - `#4631 `__: MAINT: integrate: Remove unused variables in a Fortran test function. - - `#4633 `__: MAINT: Fix convergence message for remez - - `#4635 `__: PEP8: indentation (so that pep8 bot does not complain) - - `#4637 `__: MAINT: generalize a sign function to do the right thing for complex... 
- - `#4639 `__: Amended typo in apple_sgemv_fix.c - - `#4642 `__: MAINT: use lapack for scipy.linalg.norm - - `#4643 `__: RBF default epsilon too large 2020 - - `#4646 `__: Added atleast_1d around poly in invres and invresz - - `#4647 `__: fix doc pdf build - - `#4648 `__: BUG: Fixes #4408: Vector-valued constraints in minimize() et... - - `#4649 `__: Vonmisesfix - - `#4650 `__: Signal example clean up in Tukey and place_poles - - `#4652 `__: DOC: Fix the error in convolve for same mode - - `#4653 `__: improve erf performance - - `#4655 `__: DEP: deprecate scipy.stats.histogram2 in favour of np.histogram2d - - `#4656 `__: DEP: deprecate scipy.stats.signaltonoise - - `#4660 `__: Avoid extra copy for sparse compressed [:, seq] and [seq, :]... - - `#4661 `__: Clean, rebase of #4478, adding ?gelsy and ?gelsd wrappers - - `#4662 `__: MAINT: Correct odeint messages - - `#4664 `__: Update _monotone.py - - `#4672 `__: fix behavior of scipy.linalg.block_diag for empty input - - `#4675 `__: Fix lsim - - `#4676 `__: Added missing colon to :math: directive in docstring. - - `#4679 `__: ENH: sparse randn - - `#4682 `__: ENH: scipy.signal - Addition of CSD, coherence; Enhancement of... - - `#4684 `__: BUG: various errors in weight calculations in orthogonal.py - - `#4685 `__: BUG: Fixes #4594: optimize.linprog IndexError when a callback... - - `#4686 `__: MAINT: cluster: Clean up duplicated exception raising code. - - `#4688 `__: Improve is_distance_dm exception message - - `#4692 `__: MAINT: stats: Simplify the calculation in tukeylambda._ppf - - `#4693 `__: ENH: added functionality to handle scalars in `stats._chk_asarray` - - `#4694 `__: Vectorization of Anderson-Darling computations. - - `#4696 `__: Fix singleton expansion in lfilter. - - `#4698 `__: MAINT: quiet warnings from cephes. - - `#4701 `__: add Bpoly.antiderivatives / integrals - - `#4703 `__: Add citation of published paper - - `#4706 `__: MAINT: special: avoid out-of-bounds access in specfun - - `#4707 `__: MAINT: fix issues with np.matrix as input to functions related... - - `#4709 `__: ENH: `scipy.stats` now returns namedtuples. - - `#4710 `__: scipy.io.idl: make reader more robust to missing variables in... - - `#4711 `__: Fix crash for unknown chunks at the end of file - - `#4712 `__: Reduce onenormest memory usage - - `#4713 `__: MAINT: interpolate: no need to pass dtype around if it can be... - - `#4714 `__: BENCH: Add benchmarks for stats module - - `#4715 `__: MAINT: polish signal.place_poles and signal/test_ltisys.py - - `#4716 `__: DEP: deprecate mstats.signaltonoise ... - - `#4717 `__: MAINT: basinhopping: fix error in tests, silence /0 warning,... - - `#4718 `__: ENH: stats: can specify f-shapes to fix in fitting by name - - `#4721 `__: Document that imresize converts the input to a PIL image - - `#4722 `__: MAINT: PyArray_BASE is not an lvalue unless the deprecated API... - - `#4725 `__: Fix gengamma _nump failure - - `#4728 `__: DOC: add poch to the list of scipy special function descriptions - - `#4735 `__: MAINT: stats: avoid (a spurious) division-by-zero in skew - - `#4738 `__: TST: silence runtime warnings for some corner cases in `stats`... - - `#4739 `__: BLD: try to build numpy instead of using the one on TravisCI - - `#4740 `__: DOC: Update some docstrings with 'versionadded'. - - `#4742 `__: BLD: make sure that relaxed strides checking is in effect on... 
- - `#4750 `__: DOC: special: TeX typesetting of rel_entr, kl_div and pseudo_huber - - `#4751 `__: BENCH: add sparse null slice benchmark - - `#4753 `__: BUG: Fixed compilation with recent Cython versions. - - `#4756 `__: BUG: Fixes #4733: optimize.brute finish option is not compatible... - - `#4758 `__: DOC: optimize.leastsq default maxfev clarification - - `#4759 `__: improved stats mle fit - - `#4760 `__: MAINT: count bfgs updates more carefully - - `#4762 `__: BUGS: Fixes #4746 and #4594: linprog returns solution violating... - - `#4763 `__: fix small linprog bugs - - `#4766 `__: BENCH: add signal.lsim benchmark - - `#4768 `__: fix python syntax errors in docstring examples - - `#4769 `__: Fixes #4726: test_cobyla.test_vector_constraints - - `#4770 `__: Mark FITPACK functions as thread safe. - - `#4771 `__: edited scipy/stats/stats.py to fix doctest for fisher_exact - - `#4773 `__: DOC: update 0.16.0 release notes. - - `#4775 `__: DOC: linalg: add funm_psd as a docstring example - - `#4778 `__: Use a dictionary for function name synonyms - - `#4780 `__: Include apparently-forgotten functions in docs - - `#4783 `__: Added many missing special functions to docs - - `#4784 `__: add an axis attribute to PPoly and friends - - `#4785 `__: Brief note about origin of Lena image - - `#4786 `__: DOC: reformat the Methods section of the KDE docstring - - `#4787 `__: Add rice cdf and ppf. - - `#4792 `__: CI: add a kludge for detecting test failures which try to disguise... - - `#4795 `__: Make refguide_check smarter about false positives - - `#4797 `__: BUG/TST: numpoints not updated for incremental Voronoi - - `#4799 `__: BUG: spatial: Fix a couple edge cases for the Mahalanobis metric... - - `#4801 `__: BUG: Fix TypeError in scipy.optimize._trust-region.py when disp=True. - - `#4803 `__: Issues with relaxed strides in QR updating routines - - `#4806 `__: MAINT: use an informed initial guess for cauchy fit - - `#4810 `__: PEP8ify codata.py - - `#4812 `__: BUG: Relaxed strides cleanup in decomp_update.pyx.in - - `#4820 `__: BLD: update Bento build for sgemv fix and install cython blas/lapack... - - `#4823 `__: ENH: scipy.signal - Addition of spectrogram function - - `#4827 `__: DOC: add csd and coherence to __init__.py - - `#4833 `__: BLD: fix issue in linalg ``*lange`` wrappers for g77 builds. - - `#4841 `__: TST: fix test failures in scipy.special with mingw32 due to test... - - `#4842 `__: DOC: update site.cfg.example. Mostly taken over from Numpy - - `#4845 `__: BUG: signal: Make spectrogram's return values order match the... 
- - `#4849 `__: DOC:Fix error in ode docstring example - - `#4856 `__: BUG: fix typo causing memleak Checksums ========= MD5 ~~~ 1c6faa58d12c7b642e64a44b57c311c3 scipy-0.16.0-win32-superpack-python2.7.exe 18b9b53af7216ab897495cc77068285b scipy-0.16.0-win32-superpack-python3.3.exe 9cef8bc882e21854791ec80140258bc9 scipy-0.16.0-win32-superpack-python3.4.exe eb95dda0f36cc3096673993a350cde77 scipy-0.16.0.tar.gz fe1425745ab68d9d4ccaf59e854f1c28 scipy-0.16.0.tar.xz 1764bd452a72698b968ad13e51e28053 scipy-0.16.0.zip SHA256 ~~~~~~ b752366fafa3fddb96352d7b3259f7b3e58fae9f45d5a98eb91bd80005d75dfc scipy-0.16.0-win32-superpack-python2.7.exe 369777d27da760d498a9312696235ab9e90359d9f4e02347669cbf56a42312a8 scipy-0.16.0-win32-superpack-python3.3.exe bcd480ce8e8289942e57e7868d07e9a35982bc30a79150006ad085ce4c06803e scipy-0.16.0-win32-superpack-python3.4.exe 92592f40097098f3fdbe7f5855d535b29bb16719c2bb59c728bce5e7a28790e0 scipy-0.16.0.tar.gz e27f5cfa985cb1253e15aaeddc3dcd512d6853b05e84454f7f43b53b35514071 scipy-0.16.0.tar.xz c9758971df994d238a4d0ff1d47ba5b02f1cb402d6e1925c921a452bc430a3d5 scipy-0.16.0.zip -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQEcBAEBAgAGBQJVspIaAAoJEO2+o3i/Gl69n3cIAJdoEGIq/8MTsywYL6k5zsqA aBK1Q9aB4qJcCLwM6ULKErxhY9lROzgljSvl22dCaD7YYYgD4Q03+BaXjIrHenbc +sX5CzBPoz+BFjh7tTnfU5a6pVhqjQbW17A0TF0j6jah29pFnM2Xdf3zgHc+3f/B U6JC698wDKROGlvKqWcwKcs2+EPBuu92gNa/rRCmMdnt9dIqVM8+otRNMgPVCZ+R SgfneSGjZ4vXuBK3zWgcP0+r8Ek0DkUuFhEAK3W8NhEFCqd1kHkdvN+RIl6pGfHZ OAHbzds6+VHgvQ3a4g2efJY3CD0LvtOgeS3R3NdmT3gCxkJtZpHAsczFhwKIWHM= =QZFz -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jslavin at cfa.harvard.edu Fri Jul 24 16:00:17 2015 From: jslavin at cfa.harvard.edu (Slavin, Jonathan) Date: Fri, 24 Jul 2015 16:00:17 -0400 Subject: [SciPy-User] ANN: Scipy 0.16.0 release Message-ID: ?I'm wondering why scipy, such a solid and widely used package, uses such a low version number.? If I didn't know better, I'd be reluctant to use a package with a version of 0.16. Numpy is at 1.6.2, and matplotlib is nearly at 2.0. Generally packages that are ready for general use are at least 1.0. It just seems incongruous. Regards, Jon On Fri, Jul 24, 2015 at 3:39 PM, wrote: > Date: Fri, 24 Jul 2015 21:39:54 +0200 > From: Ralf Gommers > Subject: [SciPy-User] ANN: Scipy 0.16.0 release > To: Discussion of Numerical Python , > SciPy > Developers List , SciPy Users List > , python-announce-list at python.org > Message-ID: > < > CABL7CQj5UoW2WfoSiUMM8V2R7b0WcJHUKLz5Of_EE-VbtKZOBA at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hi all, > > On behalf of the Scipy development team I'm pleased to announce the > availability of Scipy 0.16.0. This release contains some exciting new > features (see release notes below) and more than half a years' worth of > maintenance work. 93 people contributed to this release. > > This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.6.2 or > greater. Sources, binaries and release notes can be found at > https://github.com/scipy/scipy/releases/tag/v0.16.0 > > > Enjoy, > Ralf > -- ________________________________________________________ Jonathan D. Slavin Harvard-Smithsonian CfA jslavin at cfa.harvard.edu 60 Garden Street, MS 83 phone: (617) 496-7981 Cambridge, MA 02138-1516 cell: (781) 363-0035 USA ________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From ralf.gommers at gmail.com Fri Jul 24 16:48:34 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 24 Jul 2015 22:48:34 +0200
Subject: Re: [SciPy-User] ANN: Scipy 0.16.0 release

On Fri, Jul 24, 2015 at 10:00 PM, Slavin, Jonathan wrote:
> "I'm wondering why scipy, such a solid and widely used package, uses such a low version number." If I didn't know better, I'd be reluctant to use a package with a version of 0.16. Numpy is at 1.6.2, and matplotlib is nearly at 2.0. Generally packages that are ready for general use are at least 1.0. It just seems incongruous.

Good question. We have at least been thinking about what it means for Scipy to become 1.0: https://github.com/scipy/scipy/compare/master...rgommers:roadmap
Probably we should start connecting that to a date and descoping a few items from that list, so that we'll never reach 0.20.....

Ralf

From ndbecker2 at gmail.com Tue Jul 28 09:29:40 2015
From: ndbecker2 at gmail.com (Neal Becker)
Date: Tue, 28 Jul 2015 09:29:40 -0400
Subject: [SciPy-User] filtfilt + sos?

I was interested in using filtfilt with a high-order filter, so I thought why not try out the new SOS support? But I don't think filtfilt works with SOS.

From toddrjen at gmail.com Tue Jul 28 09:31:43 2015
From: toddrjen at gmail.com (Todd)
Date: Tue, 28 Jul 2015 15:31:43 +0200
Subject: [SciPy-User] Decimal dtype

Traditional base-2 floating-point numbers have a lot of well-known issues. The Python standard library has a Decimal module that provides base-10 floating-point numbers, which avoid some (although not all) of these issues.

Is there any possibility of numpy having one or more dtypes for base-10 floating-point numbers?

I understand fully if a lack of support from underlying libraries makes this infeasible at the present time. I haven't been able to find much good information on the issue, which leads me to suspect the situation is probably not good.
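For context (this illustration is not part of the original message), the kind of base-2 issue being referred to, using only the standard library:

# Context only: the classic rounding artefact of binary floats versus decimal.Decimal.
from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                    # False: 0.1 and 0.2 are rounded in binary
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True: exact base-10 arithmetic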
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Jul 28 09:34:45 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 28 Jul 2015 15:34:45 +0200 Subject: [SciPy-User] filtfilt + sos? In-Reply-To: References: Message-ID: On Tue, Jul 28, 2015 at 3:29 PM, Neal Becker wrote: > I was interested int using filtfilt with a high order filter, so I thought > why not try out the new sos? But I don't think filtfilt works with sos. > Nope, filtfilt only takes TF format. http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.sosfilt.html is the only filter that can use SOS at the moment. Unifying all filter formats and making all filtering functions understand all formats is progressing slowly, but there's a long way to go. Ralf > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From archibald at astron.nl Tue Jul 28 10:09:20 2015 From: archibald at astron.nl (Anne Archibald) Date: Tue, 28 Jul 2015 14:09:20 +0000 Subject: [SciPy-User] Decimal dtype In-Reply-To: References: Message-ID: Is there a (hardware or not) fixed-size decimal format? Would that even be useful? Numpy's arrays are most useful for working with fixed-size quantities of homogeneous type for which operations are fast and can be carried out without going through python. None of that would appear to be true for decimals, even if one used a C-level decimal library. But numpy arrays can also be used to contain arbitrary python objects, such as arbitrary-precision numbers, binary or decimal. They won't be all that much faster than lists, but they do make most of numpy's array operations available. In [6]: a = np.array([decimal.Decimal(n) for n in range(10)]) In [7]: a Out[7]: array([Decimal('0'), Decimal('1'), Decimal('2'), Decimal('3'), Decimal('4'), Decimal('5'), Decimal('6'), Decimal('7'), Decimal('8'), Decimal('9')], dtype=object) In [8]: a/decimal.Decimal(10) Out[8]: array([Decimal('0'), Decimal('0.1'), Decimal('0.2'), Decimal('0.3'), Decimal('0.4'), Decimal('0.5'), Decimal('0.6'), Decimal('0.7'), Decimal('0.8'), Decimal('0.9')], dtype=object) Anne On Tue, Jul 28, 2015 at 3:32 PM Todd wrote: > Traditional base-2 floating-point numbers have a lot of well-known > issues. The python standard library has a Decimal module that provides > base-10 floating-point numbers, which avoid some (although not all) of > these issues. > > Is there any possibility of numpy having one or more dtypes for base-10 > floating-point numbers? > > I understand fully if a lack of support from underlying libraries makes > this infeasible at the present time. I haven't been able to find much good > information on the issue, which leads me to suspect the situation is > probably not good. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From daoust.mj at gmail.com Tue Jul 28 10:16:13 2015 From: daoust.mj at gmail.com (Mark Daoust) Date: Tue, 28 Jul 2015 10:16:13 -0400 Subject: [SciPy-User] Decimal dtype In-Reply-To: References: Message-ID: > Is there a (hardware or not) fixed-size decimal format? Would that even be useful? what about something like dec64? 
http://dec64.com/ https://github.com/douglascrockford/DEC64 Mark Daoust -------------- next part -------------- An HTML attachment was scrubbed... URL: From toddrjen at gmail.com Tue Jul 28 10:20:07 2015 From: toddrjen at gmail.com (Todd) Date: Tue, 28 Jul 2015 16:20:07 +0200 Subject: [SciPy-User] Decimal dtype In-Reply-To: References: Message-ID: On Tue, Jul 28, 2015 at 4:09 PM, Anne Archibald wrote: > > On Tue, Jul 28, 2015 at 3:32 PM Todd wrote: > >> Traditional base-2 floating-point numbers have a lot of well-known >> issues. The python standard library has a Decimal module that provides >> base-10 floating-point numbers, which avoid some (although not all) of >> these issues. >> >> Is there any possibility of numpy having one or more dtypes for base-10 >> floating-point numbers? >> >> I understand fully if a lack of support from underlying libraries makes >> this infeasible at the present time. I haven't been able to find much good >> information on the issue, which leads me to suspect the situation is >> probably not good. >> > > Is there a (hardware or not) fixed-size decimal format? Would that even be > useful? > > IEEE 754-2008 defines 32bit, 64bit, and 128bit floating-point decimal numbers. https://en.wikipedia.org/wiki/Decimal_floating_point#IEEE_754-2008_encoding > Numpy's arrays are most useful for working with fixed-size quantities of > homogeneous type for which operations are fast and can be carried out > without going through python. None of that would appear to be true for > decimals, even if one used a C-level decimal library. > If it stuck with IEEE decimal floating point numbers then it would still be fixed-size homogeneous data. > But numpy arrays can also be used to contain arbitrary python objects, > such as arbitrary-precision numbers, binary or decimal. They won't be all > that much faster than lists, but they do make most of numpy's array > operations available. > Those operations aren't vectorized, which eliminates a lot of the advantage. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ndbecker2 at gmail.com Tue Jul 28 10:40:08 2015 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 28 Jul 2015 10:40:08 -0400 Subject: [SciPy-User] scipy-0.16.0 self-test result Message-ID: fedora 22 x86_64 built with: CFLAGS="-O3 -march=native" ATLAS=/usr/lib64/openblas FFTW=/usr/lib64 BLAS=/usr/lib64 LAPACK=/usr/lib64 python3 setup.py install --user [nbecker at nbecker2 ~]$ python -c 'import scipy; scipy.test("full");' Running unit tests for scipy NumPy version 1.9.2 NumPy is installed in /home/nbecker/.local/lib/python2.7/site-packages/numpy SciPy version 0.16.0 SciPy is installed in /home/nbecker/.local/lib/python2.7/site-packages/scipy Python version 2.7.10 (default, Jul 5 2015, 14:15:43) [GCC 5.1.1 20150618 (Red Hat 5.1.1-4)] nose version 1.3.7 ====================================================================== FAIL: test_qhull.TestUtilities.test_degenerate_barycentric_transforms ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nbecker/.local/lib/python2.7/site- packages/scipy/spatial/tests/test_qhull.py", line 294, in test_degenerate_barycentric_transforms assert_(bad_count < 20, bad_count) File "/home/nbecker/.local/lib/python2.7/site- packages/numpy/testing/utils.py", line 53, in assert_ raise AssertionError(smsg) AssertionError: 20 ====================================================================== FAIL: test_qhull.TestUtilities.test_more_barycentric_transforms ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/nbecker/.local/lib/python2.7/site- packages/scipy/spatial/tests/test_qhull.py", line 353, in test_more_barycentric_transforms unit_cube_tol=1e7*eps) File "/home/nbecker/.local/lib/python2.7/site- packages/scipy/spatial/tests/test_qhull.py", line 280, in _check_barycentric_transforms assert_(ok.all(), "%s %s" % (err_msg, np.where(~ok))) File "/home/nbecker/.local/lib/python2.7/site- packages/numpy/testing/utils.py", line 53, in assert_ raise AssertionError(smsg) AssertionError: ndim=4 (array([ 8071, 10512, 10513]),) ---------------------------------------------------------------------- Ran 19618 tests in 588.242s FAILED (KNOWNFAIL=149, SKIP=1195, failures=2) From ralf.gommers at gmail.com Tue Jul 28 10:44:49 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 28 Jul 2015 16:44:49 +0200 Subject: [SciPy-User] scipy-0.16.0 self-test result In-Reply-To: References: Message-ID: On Tue, Jul 28, 2015 at 4:40 PM, Neal Becker wrote: > fedora 22 x86_64 > > built with: > CFLAGS="-O3 -march=native" ATLAS=/usr/lib64/openblas FFTW=/usr/lib64 > BLAS=/usr/lib64 LAPACK=/usr/lib64 python3 setup.py install --user > > [nbecker at nbecker2 ~]$ python -c 'import scipy; scipy.test("full");' > Running unit tests for scipy > NumPy version 1.9.2 > NumPy is installed in > /home/nbecker/.local/lib/python2.7/site-packages/numpy > SciPy version 0.16.0 > SciPy is installed in > /home/nbecker/.local/lib/python2.7/site-packages/scipy > Python version 2.7.10 (default, Jul 5 2015, 14:15:43) [GCC 5.1.1 20150618 > (Red Hat 5.1.1-4)] > nose version 1.3.7 > > > ====================================================================== > FAIL: test_qhull.TestUtilities.test_degenerate_barycentric_transforms > 
---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in > runTest > self.test(*self.arg) > File "/home/nbecker/.local/lib/python2.7/site- > packages/scipy/spatial/tests/test_qhull.py", line 294, in > test_degenerate_barycentric_transforms > assert_(bad_count < 20, bad_count) > File "/home/nbecker/.local/lib/python2.7/site- > packages/numpy/testing/utils.py", line 53, in assert_ > raise AssertionError(smsg) > AssertionError: 20 > > ====================================================================== > FAIL: test_qhull.TestUtilities.test_more_barycentric_transforms > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in > runTest > self.test(*self.arg) > File "/home/nbecker/.local/lib/python2.7/site- > packages/scipy/spatial/tests/test_qhull.py", line 353, in > test_more_barycentric_transforms > unit_cube_tol=1e7*eps) > File "/home/nbecker/.local/lib/python2.7/site- > packages/scipy/spatial/tests/test_qhull.py", line 280, in > _check_barycentric_transforms > assert_(ok.all(), "%s %s" % (err_msg, np.where(~ok))) > File "/home/nbecker/.local/lib/python2.7/site- > packages/numpy/testing/utils.py", line 53, in assert_ > raise AssertionError(smsg) > AssertionError: ndim=4 (array([ 8071, 10512, 10513]),) > > ---------------------------------------------------------------------- > Ran 19618 tests in 588.242s > > FAILED (KNOWNFAIL=149, SKIP=1195, failures=2) > Hi Neal, this is a known issue probably related to the LAPACK flavor you're using (I'm seeing it on 32-bit Ubuntu with ATLAS as well): https://github.com/scipy/scipy/issues/2997 Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From archibald at astron.nl Tue Jul 28 10:45:22 2015 From: archibald at astron.nl (Anne Archibald) Date: Tue, 28 Jul 2015 14:45:22 +0000 Subject: [SciPy-User] Decimal dtype In-Reply-To: References: Message-ID: On Tue, Jul 28, 2015 at 4:20 PM Todd wrote: > On Tue, Jul 28, 2015 at 4:09 PM, Anne Archibald > wrote: > >> >> On Tue, Jul 28, 2015 at 3:32 PM Todd wrote: >> >>> Traditional base-2 floating-point numbers have a lot of well-known >>> issues. The python standard library has a Decimal module that provides >>> base-10 floating-point numbers, which avoid some (although not all) of >>> these issues. >>> >>> Is there any possibility of numpy having one or more dtypes for base-10 >>> floating-point numbers? >>> >>> I understand fully if a lack of support from underlying libraries makes >>> this infeasible at the present time. I haven't been able to find much good >>> information on the issue, which leads me to suspect the situation is >>> probably not good. >>> >> >> Is there a (hardware or not) fixed-size decimal format? Would that even >> be useful? >> >> > IEEE 754-2008 defines 32bit, 64bit, and 128bit floating-point decimal > numbers. > > https://en.wikipedia.org/wiki/Decimal_floating_point#IEEE_754-2008_encoding > Given a reasonably-efficient library for manipulating these, it might be useful to add them to numpy. Numpy's arrays are most useful for working with fixed-size quantities of >> homogeneous type for which operations are fast and can be carried out >> without going through python. None of that would appear to be true for >> decimals, even if one used a C-level decimal library. 
>> > > If it stuck with IEEE decimal floating point numbers then it would still > be fixed-size homogeneous data. > > >> But numpy arrays can also be used to contain arbitrary python objects, >> such as arbitrary-precision numbers, binary or decimal. They won't be all >> that much faster than lists, but they do make most of numpy's array >> operations available. >> > > Those operations aren't vectorized, which eliminates a lot of the > advantage. > Just to be clear: "vectorized" in this context means specifically, "the inner loops are in C". This is different from what numpy.vectorize does (every bottom-level operation goes through the python interpreter) or what parallel programmers mean (actual SIMD in which the operation is carried out in parallel). The disadvantage of going through python at the bottom level is probably rather modest for numbers implemented in software - for comparison, quad precision is about fifty times slower than long double calculation even without python overhead. Nevertheless, given a decent fixed-width decimal library it could certainly be done to store them in numpy arrays. This does not necessarily mean modifying numpy code (I looked into adding quad precision) - it is possible to add a dtype in an extension library. For example, there is a numpy quaternion library: https://github.com/martinling/numpy_quaternion and a numpy half-precision library: https://github.com/mwiebe/numpy_half Anne -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Jul 28 12:06:12 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 28 Jul 2015 18:06:12 +0200 Subject: [SciPy-User] How to fix Aborted SciPy version 0.16.0b2 test ? In-Reply-To: References: Message-ID: On Tue, Jul 21, 2015 at 9:39 PM, Sergio Rojas wrote: > Hello all, > > Runing on Ubuntu 14.04 LTS and under Python 3.5.0b3 and NumPy version > 1.9.2, the > SciPy (version 0.16.0b2) test is aborting at > > #------------ > test_interpolative.TestInterpolativeDecomposition.test_badcall ... ok > test_interpolative.TestInterpolativeDecomposition.test_id( 'numpy.float64'>,) ... *** Error in `python3': free(): invalid next size > (normal): 0x00000000039e1f60 *** > Aborted > #------------ > > would it be possible to fix it? > Hi Sergio, thanks for the report. A few questions: - did 0.15.1 work for you with the same install method - does the problem still occur with the final 0.16.0 release? - is the issue specific to Python 3.5 or not? Opening an issue on Github and attaching a gdb traceback (see http://scipy.org/bug-report.html) would be helpful. Cheers, Ralf > > Thanks in advance for your help, > > Sergio > PD. Some extra info is as follows: > > $ python3 Python 3.5.0b3 (default, Jul 21 2015, 12:33:54) > [GCC 4.8.4] on linux > Type "help", "copyright", "credits" or "license" for more information. 
> >>> import scipy > >>> scipy.show_config() > blas_mkl_info: > NOT AVAILABLE > atlas_3_10_threads_info: > libraries = ['tatlas', 'tatlas', 'tatlas'] > define_macros = [('ATLAS_INFO', '"\\"3.11.34\\""')] > include_dirs = ['/home/myProg/NumLibs64b/include'] > library_dirs = ['/home/myProg/NumLibs64b/lib'] > language = f77 > openblas_lapack_info: > NOT AVAILABLE > lapack_mkl_info: > NOT AVAILABLE > openblas_info: > NOT AVAILABLE > mkl_info: > NOT AVAILABLE > lapack_opt_info: > libraries = ['tatlas', 'tatlas', 'tatlas'] > define_macros = [('ATLAS_INFO', '"\\"3.11.34\\""')] > include_dirs = ['/home/myProg/NumLibs64b/include'] > library_dirs = ['/home/myProg/NumLibs64b/lib'] > language = f77 > atlas_3_10_blas_threads_info: > libraries = ['tatlas'] > define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', > '"\\"3.11.34\\""')] > include_dirs = ['/home/myProg/NumLibs64b/include'] > library_dirs = ['/home/myProg/NumLibs64b/lib'] > language = c > blas_opt_info: > libraries = ['tatlas'] > define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', > '"\\"3.11.34\\""')] > include_dirs = ['/home/myProg/NumLibs64b/include'] > library_dirs = ['/home/myProg/NumLibs64b/lib'] > language = c > >>> scipy.test('full', verbose=2) > Running unit tests for scipy > NumPy version 1.9.2 > NumPy is installed in > /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/numpy > SciPy version 0.16.0b2 > SciPy is installed in > /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/scipy > Python version 3.5.0b3 (default, Jul 21 2015, 12:33:54) [GCC 4.8.4] > nose version 1.3.7 > /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/scipy/_lib/decorator.py:205: > DeprecationWarning: inspect.getargspec() is deprecated, use > inspect.signature() instead > ... > ... > test_y_bad_size (test_fblas.TestZswap) ... ok > test_y_stride (test_fblas.TestZswap) ... ok > test_interpolative.TestInterpolativeDecomposition.test_badcall ... ok > test_interpolative.TestInterpolativeDecomposition.test_id( 'numpy.float64'>,) ... *** Error in `python3': free(): invalid next size > (normal): 0x00000000039e1f60 *** > Aborted > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue Jul 28 16:42:55 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 28 Jul 2015 13:42:55 -0700 Subject: [SciPy-User] Decimal dtype In-Reply-To: References: Message-ID: On Jul 28, 2015 7:12 AM, "Anne Archibald" wrote: > > Is there a (hardware or not) fixed-size decimal format? Would that even be useful? The newer 2008 version of IEEE-754 does include specifications for decimal32, decimal64, and decimal128 formats, and from the GCC docs it sounds like there is some effort underway to add these to ISO C: https://gcc.gnu.org/onlinedocs/gcc/Decimal-Float.html I don't think there'd be much appetite to add these to numpy core right now, between the relatively rare use, our lack of devs who know about them, and the inevitable compiler compatibility issues. But they could be supported via a third-party library that provides these dtypes. This would be possible right now; there are similar examples floating around for adding rational and quaternion dtypes to numpy as third party libraries. And then if this library proved to be solid and popular then it could potentially later become part of numpy core. 
Otherwise, yeah, object arrays are going to be the best bet for a quick solution... -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergio_r at mail.com Wed Jul 29 23:30:26 2015 From: sergio_r at mail.com (Sergio Rojas) Date: Thu, 30 Jul 2015 05:30:26 +0200 Subject: [SciPy-User] How to fix Aborted SciPy version 0.16.0b2 test ? References: Message-ID: An HTML attachment was scrubbed... URL: From sergio_r at mail.com Thu Jul 30 10:21:17 2015 From: sergio_r at mail.com (Sergio Rojas) Date: Thu, 30 Jul 2015 16:21:17 +0200 Subject: [SciPy-User] How to fix Aborted SciPy version 0.16.0b2 test ? Message-ID: An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Jul 31 01:55:51 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 31 Jul 2015 07:55:51 +0200 Subject: [SciPy-User] ANN: PyWavelets 0.3.0 release Message-ID: Dear all, On behalf of the PyWavelets development team I'm excited to announce the availability of PyWavelets 0.3.0. This is the first release of the package in 3 years. It is the result of a significant effort of a growing development team to modernize the package, to provide Python 3.x support and to make a start with providing new features as well as improved performance. A 0.4.0 release will follow shortly, and will contain more significant new features as well as changes/deprecations to streamline the API. This release requires Python 2.6, 2.7 or 3.3-3.5 and Numpy 1.6.2 or greater. Sources and release notes can be found on https://pypi.python.org/pypi/PyWavelets and https://github.com/PyWavelets/pywt/releases. Activity on the project is picking up quickly. If you're interested in wavelets in Python, you are welcome and invited to join us at https://github.com/PyWavelets/pywt Enjoy, Ralf ============================== PyWavelets 0.3.0 Release Notes ============================== PyWavelets 0.3.0 is the first release of the package in 3 years. It is the result of a significant effort of a growing development team to modernize the package, to provide Python 3.x support and to make a start with providing new features as well as improved performance. A 0.4.0 release will follow shortly, and will contain more significant new features as well as changes/deprecations to streamline the API. This release requires Python 2.6, 2.7 or 3.3-3.5 and NumPy 1.6.2 or greater. Highlights of this release include: - Support for Python 3.x (>=3.3) - Added a test suite (based on nose, coverage up to 61% so far) - Maintenance work: C style complying to the Numpy style guide, improved templating system, more complete docstrings, pep8/pyflakes compliance, and more. New features ============ Test suite ---------- The test suite can be run with ``nosetests pywt`` or with:: >>> import pywt >>> pywt.test() n-D Inverse Discrete Wavelet Transform -------------------------------------- The function ``pywt.idwtn``, which provides n-dimensional inverse DWT, has been added. It complements ``idwt``, ``idwt2`` and ``dwtn``. Thresholding ------------ The function `pywt.threshold` has been added. It unifies the four thresholding functions that are still provided in the ``pywt.thresholding`` namespace. Backwards incompatible changes ============================== None in this release. Other changes ============= Development has moved to `a new repo `_. Everyone with an interest in wavelets is welcome to contribute! Building wheels, building with ``python setup.py develop`` and many other standard ways to build and install PyWavelets are supported now. 
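A minimal usage sketch of the items mentioned above (illustrative only; the 'haar' wavelet and the ``mode`` keyword are assumptions based on standard PyWavelets usage, see the 0.3.0 docs for exact signatures):

>>> import numpy as np
>>> import pywt
>>> data = np.arange(16.0).reshape(4, 4)
>>> coeffs = pywt.dwtn(data, 'haar')                 # dict of subband coefficient arrays
>>> np.allclose(pywt.idwtn(coeffs, 'haar'), data)    # new n-D inverse DWT round-trips
True
>>> shrunk = pywt.threshold(np.linspace(-2, 2, 9), 0.5, mode='soft')  # soft-shrink towards zero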
Authors ======= * Ankit Agrawal + * François Boulogne + * Ralf Gommers + * David Menéndez Hurtado + * Gregory R. Lee + * David McInnis + * Helder Oliveira + * Filip Wasilewski * Kai Wohlfahrt + A total of 9 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergio_r at mail.com Fri Jul 31 09:21:07 2015 From: sergio_r at mail.com (Sergio Rojas) Date: Fri, 31 Jul 2015 15:21:07 +0200 Subject: [SciPy-User] How to fix Aborted SciPy version 0.16.0b2 test ? Message-ID: This is a follow-up to the question posed in the email subject. The issue is about fixing the aborted test: >---- > test_interpolative.TestInterpolativeDecomposition.test_id(<class 'numpy.float64'>,) ... *** Error in `/home/myProg/PythonGNU2710/Linux64b/bin/python': double free or corruption (!prev): 0x0000000002d0d520 *** > Aborted >---- After some extra work, the error seems to be associated with ATLAS 3.11.34, as it does not happen after changing to ATLAS 3.10.2. In summary, self-tests of SciPy version 0.16.0 using ATLAS 3.10.2 and NumPy version 1.9.2 under Python 3.4.3, running on Ubuntu 14.04 LTS, end with: ---------------------------------------------------------------------- Ran 19454 tests in 474.654s OK (KNOWNFAIL=129, SKIP=1319) A failure, however, is found if using SciPy version 0.16.0 under Python 3.5.0b3: FAIL: test_indeterminate_covariance (test_minpack.TestCurveFit) ---------------------------------------------------------------------- Ran 19454 tests in 457.759s FAILED (KNOWNFAIL=129, SKIP=1319, failures=1) Three failures are obtained if using SciPy version 0.16.0b2 under Python 3.4.3: FAIL: test_decomp_update.TestQRinsert_F.test_fat_p_row_sqr FAIL: test_decomp_update.TestQRinsert_F.test_fat_p_row_tall FAIL: test_decomp_update.TestQRinsert_f.test_fat_p_row_sqr ---------------------------------------------------------------------- Ran 19371 tests in 459.815s FAILED (KNOWNFAIL=129, SKIP=1319, failures=3) Regards, Sergio $ python3 Python 3.4.3 (default, Jul 29 2015, 16:07:32) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy >>> scipy.show_config() lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_3_10_blas_threads_info: include_dirs = ['/home/myProg/NumLibsAtlas102/include'] language = c library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', '"\\"3.10.2\\""')] libraries = ['tatlas'] openblas_lapack_info: NOT AVAILABLE openblas_info: NOT AVAILABLE atlas_3_10_threads_info: include_dirs = ['/home/myProg/NumLibsAtlas102/include'] language = f77 library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] define_macros = [('ATLAS_INFO', '"\\"3.10.2\\""')] libraries = ['tatlas', 'tatlas', 'tatlas'] blas_opt_info: include_dirs = ['/home/myProg/NumLibsAtlas102/include'] language = c library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', '"\\"3.10.2\\""')] libraries = ['tatlas'] lapack_opt_info: include_dirs = ['/home/myProg/NumLibsAtlas102/include'] language = f77 library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] define_macros = [('ATLAS_INFO', '"\\"3.10.2\\""')] libraries = ['tatlas', 'tatlas', 'tatlas'] mkl_info: NOT AVAILABLE >>> scipy.test('full', verbose=2) Running unit tests for scipy NumPy version 1.9.2 NumPy is installed in /home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-pack ages/numpy SciPy version 0.16.0 SciPy is installed in /home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-pack ages/scipy Python version 3.4.3 (default, Jul 29 2015, 16:07:32) [GCC 4.8.4] nose version 1.3.7 ... ... ... ---------------------------------------------------------------------- Ran 19454 tests in 474.654s OK (KNOWNFAIL=129, SKIP=1319) >>> $ Script started on Thu 30 Jul 2015 04:57:34 PM EDT $ python3 Python 3.4.3 (default, Jul 29 2015, 16:07:32) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.show_config() blas_mkl_info: NOT AVAILABLE openblas_info: NOT AVAILABLE blas_opt_info: libraries = ['tatlas'] language = c library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] include_dirs = ['/home/myProg/NumLibsAtlas102/include'] define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', '"\\"3.10.2\\""')] lapack_mkl_info: NOT AVAILABLE lapack_opt_info: libraries = ['tatlas', 'tatlas', 'tatlas'] language = f77 library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] include_dirs = ['/home/myProg/NumLibsAtlas102/include'] define_macros = [('ATLAS_INFO', '"\\"3.10.2\\""')] openblas_lapack_info: NOT AVAILABLE atlas_3_10_blas_threads_info: libraries = ['tatlas'] language = c library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] include_dirs = ['/home/myProg/NumLibsAtlas102/include'] define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', '"\\"3.10.2\\""')] atlas_3_10_threads_info: libraries = ['tatlas', 'tatlas', 'tatlas'] language = f77 library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] include_dirs = ['/home/myProg/NumLibsAtlas102/include'] define_macros = [('ATLAS_INFO', '"\\"3.10.2\\""')] mkl_info: NOT AVAILABLE >>> scipy.test('full', verbose=2) Running unit tests for scipy NumPy version 1.9.2 NumPy is installed in /home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-pack ages/numpy SciPy version 0.16.0b2 SciPy is installed in /home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-pack ages/scipy Python version 3.4.3 (default, Jul 29 2015, 16:07:32) [GCC 4.8.4] nose version 1.3.7 ... ... ... Test values of lambda outside the domains of the functions. ... 
ok ====================================================================== FAIL: test_decomp_update.TestQRinsert_F.test_fat_p_row_sqr ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/nose/case .py", line 198, in runTest self.test(*self.arg) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 755, in test_fat_p_row_sqr self.base_fat_p_row_xxx(5) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 747, in base_fat_p_row_xxx check_qr(q1, r1, a1, self.rtol, self.atol) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 32, in check_qr assert_unitary(q, rtol, atol, assert_sqr) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 21, in assert_unitary assert_allclose(aTa, np.eye(a.shape[1]), rtol=rtol, atol=atol) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/numpy/tes ting/utils.py", line 1297, in assert_allclose verbose=verbose, header=header) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/numpy/tes ting/utils.py", line 665, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.0001, atol=1.19209e-06 (mismatch 100.0%) x: array([[ 0.998973 +5.587935e-09j, 0.009882 +7.026103e-02j, -0.129242 -4.287320e-02j, -0.030052 -6.515425e-02j, 0.005686 -9.261262e-02j, 0.031090 +1.492383e-02j,... y: array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],... ====================================================================== FAIL: test_decomp_update.TestQRinsert_F.test_fat_p_row_tall ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/nose/case .py", line 198, in runTest self.test(*self.arg) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 759, in test_fat_p_row_tall self.base_fat_p_row_xxx(7) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 747, in base_fat_p_row_xxx check_qr(q1, r1, a1, self.rtol, self.atol) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 32, in check_qr assert_unitary(q, rtol, atol, assert_sqr) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 21, in assert_unitary assert_allclose(aTa, np.eye(a.shape[1]), rtol=rtol, atol=atol) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/numpy/tes ting/utils.py", line 1297, in assert_allclose verbose=verbose, header=header) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/numpy/tes ting/utils.py", line 665, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.0001, atol=1.19209e-06 (mismatch 100.0%) x: array([[ 1.006917 -6.984919e-09j, 0.009022 -8.255728e-02j, 0.178461 -3.312873e-02j, -0.047311 +1.095396e-01j, 0.059368 +8.496426e-02j, 0.031223 +1.194810e-02j,... 
y: array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,... ====================================================================== FAIL: test_decomp_update.TestQRinsert_f.test_fat_p_row_sqr ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/nose/case .py", line 198, in runTest self.test(*self.arg) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 755, in test_fat_p_row_sqr self.base_fat_p_row_xxx(5) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 747, in base_fat_p_row_xxx check_qr(q1, r1, a1, self.rtol, self.atol) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 32, in check_qr assert_unitary(q, rtol, atol, assert_sqr) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/scipy/lin alg/tests/test_decomp_update.py", line 21, in assert_unitary assert_allclose(aTa, np.eye(a.shape[1]), rtol=rtol, atol=atol) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/numpy/tes ting/utils.py", line 1297, in assert_allclose verbose=verbose, header=header) File "/home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/numpy/tes ting/utils.py", line 665, in assert_array_compare raise AssertionError(msg) AssertionError: Not equal to tolerance rtol=0.0001, atol=1.19209e-06 (mismatch 100.0%) x: array([[ 8.658818e-01, 1.325921e-01, 2.875357e-01, 4.060289e-02, 1.853821e-01, 2.545137e-02, -2.103027e-01, 2.341471e-02, 1.182279e-01, -1.466596e-01, -4.916818e-02, -1.404783e-01],... y: array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],... ---------------------------------------------------------------------- Ran 19371 tests in 459.815s FAILED (KNOWNFAIL=129, SKIP=1319, failures=3) >>> $ $ python3 Python 3.5.0b3 (default, Jul 21 2015, 12:33:54) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy >>> scipy.show_config() openblas_lapack_info: NOT AVAILABLE atlas_3_10_threads_info: language = f77 libraries = ['tatlas', 'tatlas', 'tatlas'] define_macros = [('ATLAS_INFO', '"\\"3.10.2\\""')] library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] include_dirs = ['/home/myProg/NumLibsAtlas102/include'] blas_opt_info: language = c libraries = ['tatlas'] define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', '"\\"3.10.2\\""')] library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] include_dirs = ['/home/myProg/NumLibsAtlas102/include'] mkl_info: NOT AVAILABLE lapack_opt_info: language = f77 libraries = ['tatlas', 'tatlas', 'tatlas'] define_macros = [('ATLAS_INFO', '"\\"3.10.2\\""')] library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] include_dirs = ['/home/myProg/NumLibsAtlas102/include'] lapack_mkl_info: NOT AVAILABLE openblas_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_3_10_blas_threads_info: language = c libraries = ['tatlas'] define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', '"\\"3.10.2\\""')] library_dirs = ['/home/myProg/NumLibsAtlas102/lib'] include_dirs = ['/home/myProg/NumLibsAtlas102/include'] >>> scipy.test('full', verbose=2) Running unit tests for scipy NumPy version 1.9.2 NumPy is installed in /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-pack ages/numpy SciPy version 0.16.0 SciPy is installed in /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-pack ages/scipy Python version 3.5.0b3 (default, Jul 21 2015, 12:33:54) [GCC 4.8.4] nose version 1.3.7 ... ... ... ====================================================================== FAIL: test_indeterminate_covariance (test_minpack.TestCurveFit) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/scipy/opt imize/tests/test_minpack.py", line 397, in test_indeterminate_covariance _assert_warns(OptimizeWarning, curve_fit, lambda x, a, b: a*x, xdata, ydata) File "/home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/numpy/tes ting/utils.py", line 1603, in assert_warns "%s( is %s)" % (func.__name__, warning_class, l[0])) AssertionError: First warning for curve_fit is not a ( is {message : DeprecationWarning('inspect.getargspec() i s deprecated, use inspect.signature() instead',), category : 'DeprecationWarning ', filename : '/home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/sc ipy/optimize/minpack.py', lineno : 546, line : None}) ---------------------------------------------------------------------- Ran 19454 tests in 457.759s FAILED (KNOWNFAIL=129, SKIP=1319, failures=1) >>> ? This reported issue, Ralf, apparently has to do with the atlas library. As shown in the following output, if instead using the mkl library (from Intel) this time a scipy failure is obtained: ? ? $ python3 Python 3.5.0b3 (default, Jul 21 2015, 12:33:54) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.show_config() blas_opt_info: ??? library_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/lib/intel64' ] ??? libraries = ['mkl_rt', 'pthread'] ??? define_macros = [('SCIPY_MKL_H', None)] ??? include_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/include'] lapack_opt_info: ??? library_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/lib/intel64' ] ??? libraries = ['mkl_lapack95_lp64', 'mkl_rt', 'pthread'] ??? define_macros = [('SCIPY_MKL_H', None)] ??? 
include_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/include'] lapack_mkl_info: ??? library_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/lib/intel64' ] ??? libraries = ['mkl_lapack95_lp64', 'mkl_rt', 'pthread'] ??? define_macros = [('SCIPY_MKL_H', None)] ??? include_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/include'] blas_mkl_info: ??? library_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/lib/intel64' ] ??? libraries = ['mkl_rt', 'pthread'] ??? define_macros = [('SCIPY_MKL_H', None)] ??? include_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/include'] openblas_lapack_info: ? NOT AVAILABLE mkl_info: ??? library_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/lib/intel64' ] ??? libraries = ['mkl_rt', 'pthread'] ??? define_macros = [('SCIPY_MKL_H', None)] ??? include_dirs = ['/home/myPROG/IntelC/composer_xe_2013.0.079/mkl/include'] >>> scipy.test('full', verbose=2) Running unit tests for scipy NumPy version 1.9.2 NumPy is installed in /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-pack ages/numpy SciPy version 0.16.0b2 SciPy is installed in /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-pack ages/scipy Python version 3.5.0b3 (default, Jul 21 2015, 12:33:54) [GCC 4.8.4] nose version 1.3.7 /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/scipy/_lib/decora tor.py:205: DeprecationWarning: inspect.getargspec() is deprecated, use inspect. signature() instead ... ... ... test_y_stride (test_fblas.TestZswap) ... ok test_interpolative.TestInterpolativeDecomposition.test_badcall ... ok test_interpolative.TestInterpolativeDecomposition.test_id(,) ... ok test_interpolative.TestInterpolativeDecomposition.test_id(,) ... ok test_interpolative.TestInterpolativeDecomposition.test_rand ... ok test_sing_val_update (test_lapack.TestDlasd4) ... ok ... ... ... Compare results with some values that were computed using mpmath. ... ok Test values of lambda outside the domains of the functions. ... ok ====================================================================== FAIL: test_indeterminate_covariance (test_minpack.TestCurveFit) ---------------------------------------------------------------------- Traceback (most recent call last): ? File "/home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/scipy/opt imize/tests/test_minpack.py", line 397, in test_indeterminate_covariance ??? _assert_warns(OptimizeWarning, curve_fit, lambda x, a, b: a*x, xdata, ydata) ? File "/home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/numpy/tes ting/utils.py", line 1603, in assert_warns ??? "%s( is %s)" % (func.__name__, warning_class, l[0])) AssertionError: First warning for curve_fit is not a ( is {message : DeprecationWarning('inspect.getargspec() i s deprecated, use inspect.signature() instead',), category : 'DeprecationWarning ', filename : '/home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/sc ipy/optimize/minpack.py', lineno : 546, line : None}) ---------------------------------------------------------------------- Ran 19371 tests in 441.458s FAILED (KNOWNFAIL=129, SKIP=1319, failures=1) >>> ? For some reason I am not getting messages via the mailing list. I am reading them at the web [ http://mail.scipy.org/pipermail/scipy-user/2015-July/036660.html ] ? > >Hi Sergio, thanks for the report. A few questions: >- did 0.15.1 work for you with the same install method >- does the problem still occur with the final 0.16.0 release? >- is the issue specific to Python 3.5 or not? 
> >Opening an issue on Github and attaching a gdb traceback (see >http://scipy.org/bug-report.html[http://scipy.org/bug-report.html]) would be helpful. >Cheers, >Ralf ? At this stage, Ralf, I have confirmed that the issue is NO specific to Python 3.5 or to?SciPy version 0.16.0b2 : ? $ python Python 2.7.10 (default, Jul 29 2015, 20:51:44) [GCC 4.8.4] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test('full', verbose=2) Running unit tests for scipy NumPy version 1.9.2 NumPy is installed in /home/myProg/PythonGNU2710/Linux64b/lib/python2.7/site-pac kages/numpy SciPy version 0.16.0b2 SciPy is installed in /home/myProg/PythonGNU2710/Linux64b/lib/python2.7/site-pac kages/scipy Python version 2.7.10 (default, Jul 29 2015, 20:51:44) [GCC 4.8.4] nose version 1.3.7 /home/myProg/PythonGNU2710/Linux64b/lib/python2.7/site-packages/numpy/lib/utils. py:95: DeprecationWarning: `scipy.weave` is deprecated, use `weave` instead! ? warnings.warn(depdoc, DeprecationWarning) test_hierarchy.TestCopheneticDistance.test_linkage_cophenet_tdist_Z ... ok ... ... ... test_y_bad_size (test_fblas.TestZswap) ... ok test_y_stride (test_fblas.TestZswap) ... ok test_interpolative.TestInterpolativeDecomposition.test_badcall ... ok test_interpolative.TestInterpolativeDecomposition.test_id( ,) ... *** Error in `/home/myProg/PythonGNU2710/Linux64b/bin/python': double fre e or corruption (!prev): 0x0000000002d0d520 *** Aborted $ $ python3 Python 3.4.3 (default, Jul 29 2015, 16:07:32) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test('full', verbose=2) Running unit tests for scipy NumPy version 1.9.2 NumPy is installed in /home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-pack ages/numpy SciPy version 0.13.3 SciPy is installed in /home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-pack ages/scipy Python version 3.4.3 (default, Jul 29 2015, 16:07:32) [GCC 4.8.4] nose version 1.3.7 /home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/numpy/lib/utils.p y:95: DeprecationWarning: `scipy.lib.blas` is deprecated, use `scipy.linalg.blas ` instead! ? warnings.warn(depdoc, DeprecationWarning) ... ... ... test_interpolative.TestInterpolativeDecomposition.test_badcall ... ok test_interpolative.TestInterpolativeDecomposition.test_id(,) ... /home/myProg/PythonGnu343/Linux64b/lib/python3.4/site-packages/numpy/cor e/fromnumeric.py:2507: VisibleDeprecationWarning: `rank` is deprecated; use the `ndim` attribute or function instead. To find the rank of a matrix see `numpy.li nalg.matrix_rank`. ? VisibleDeprecationWarning) *** Error in `python3': double free or corruption (!prev): 0x0000000001b1edb0 ** * Aborted $ ? ? Sent:?Tuesday, July 21, 2015 at 3:39 PM From:?"Sergio Rojas" To:?scipy-user at scipy.org Subject:?How to fix Aborted SciPy version 0.16.0b2 test ? Hello all, ? Runing on Ubuntu 14.04 LTS and under Python 3.5.0b3 and NumPy version 1.9.2, the SciPy (version 0.16.0b2) test is aborting at? ? #------------ test_interpolative.TestInterpolativeDecomposition.test_badcall ... ok test_interpolative.TestInterpolativeDecomposition.test_id(,) ... *** Error in `python3': free(): invalid next size (normal): 0x00000000039e1f60 *** Aborted #------------ ? would it be possible to fix it? ? Thanks in advance for your help, ? Sergio PD. Some extra info is as follows: ? 
$ python3 Python 3.5.0b3 (default, Jul 21 2015, 12:33:54) [GCC 4.8.4] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.show_config() blas_mkl_info: NOT AVAILABLE atlas_3_10_threads_info: libraries = ['tatlas', 'tatlas', 'tatlas'] define_macros = [('ATLAS_INFO', '"\\"3.11.34\\""')] include_dirs = ['/home/myProg/NumLibs64b/include'] library_dirs = ['/home/myProg/NumLibs64b/lib'] language = f77 openblas_lapack_info: NOT AVAILABLE lapack_mkl_info: NOT AVAILABLE openblas_info: NOT AVAILABLE mkl_info: NOT AVAILABLE lapack_opt_info: libraries = ['tatlas', 'tatlas', 'tatlas'] define_macros = [('ATLAS_INFO', '"\\"3.11.34\\""')] include_dirs = ['/home/myProg/NumLibs64b/include'] library_dirs = ['/home/myProg/NumLibs64b/lib'] language = f77 atlas_3_10_blas_threads_info: libraries = ['tatlas'] define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', '"\\"3.11.34\\""')] include_dirs = ['/home/myProg/NumLibs64b/include'] library_dirs = ['/home/myProg/NumLibs64b/lib'] language = c blas_opt_info: libraries = ['tatlas'] define_macros = [('HAVE_CBLAS', None), ('ATLAS_INFO', '"\\"3.11.34\\""')] include_dirs = ['/home/myProg/NumLibs64b/include'] library_dirs = ['/home/myProg/NumLibs64b/lib'] language = c >>> scipy.test('full', verbose=2) Running unit tests for scipy NumPy version 1.9.2 NumPy is installed in /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/numpy SciPy version 0.16.0b2 SciPy is installed in /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/scipy Python version 3.5.0b3 (default, Jul 21 2015, 12:33:54) [GCC 4.8.4] nose version 1.3.7 /home/myProg/PythonGnu350/Linux64b/lib/python3.5/site-packages/scipy/_lib/decorator.py:205: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead ... ... test_y_bad_size (test_fblas.TestZswap) ... ok test_y_stride (test_fblas.TestZswap) ... ok test_interpolative.TestInterpolativeDecomposition.test_badcall ... ok test_interpolative.TestInterpolativeDecomposition.test_id(<class 'numpy.float64'>,) ... *** Error in `python3': free(): invalid next size (normal): 0x00000000039e1f60 *** Aborted From davidmenhur at gmail.com Fri Jul 31 11:05:25 2015 From: davidmenhur at gmail.com (Daπid) Date: Fri, 31 Jul 2015 17:05:25 +0200 Subject: [SciPy-User] How to fix Aborted SciPy version 0.16.0b2 test ? In-Reply-To: References: Message-ID: On 31 July 2015 at 15:21, Sergio Rojas wrote: > After some extra work, the error seems to be associated with ATLAS > 3.11.34 > as it does not happen after changing to ATLAS 3.10.2 It must be noted that 3.11 is a developing branch, while the releases are 3.10, 3.12... So it is probably worth reporting the bug to ATLAS. -------------- next part -------------- An HTML attachment was scrubbed... URL: