From christoph at grothesque.org Fri Oct 2 11:06:11 2015
From: christoph at grothesque.org (Christoph Groth)
Date: Fri, 02 Oct 2015 17:06:11 +0200
Subject: [SciPy-Dev] vectorized scipy.integrate.quad
References: <87bncrg3uf.fsf@grothesque.org>
Message-ID: <87io6p48f0.fsf@grothesque.org>

Hoi Anne,

Thanks for sharing your code that combines asyncio and concurrent.futures.
It took me some time to update myself on async programming, hence the late
reply.

Anne Archibald wrote:

> Just to make sure I understand what you're asking for: You have a
> function to integrate that takes a really long time to evaluate.
> (...) but the current implementation does not express that
> concurrency and so parallel implementations are not possible.

Exactly!

> This is actually a problem that happens a lot: our languages have not
> traditionally had a way to express concurrency, so places where
> parallelism should be easy are hard to exploit.

Indeed. Let's hope that things will improve over time.

> I'll point out that there is actually a lot more concurrency in this
> algorithm than it seems: it's (I think) basically a recursive
> algorithm, where you evaluate the integral and its truncation error
> on an interval; if the truncation error is too high, you subdivide
> the integral and repeat the process on both sides. These two
> sub-evaluations are also concurrent - it doesn't matter which is done
> first.

Excellent point. There's more potential for concurrency in QAGS than what
I mentioned in my original posting: at some given point in time it can be
foreseeable that even if bisecting a given interval would eliminate all
the integration error for that interval, it will be necessary to split
other intervals in order to meet the requested error margin.

However, exposing this additional parallelism is more complicated than
what I proposed. Also, the first step of QAGS consists of evaluating the
integrand 21 times. So, unless we want to do speculative evaluation, or
provide user hints, the total speed-up will be limited due to Amdahl's law
because of that first step.

I believe that the QAGS algorithm cannot be expressed in a purely
recursive way (without a shared mutable state). QAGS performs global
optimization: the default implementation consists of one big do-loop that
in each iteration bisects the subinterval with the largest error estimate.

> I have been thinking about how to handle this sort of concurrency.
> The only easy way to express concurrency now is with a vectorized
> call: if you call sin(x) on a vector x, you are saying that you don't
> care what order the sines are computed in. So that's what you're
> asking for, and it wouldn't be too hard to get working in this
> situation (though re-porting QUADPACK is going to be a fairly major
> task).

Indeed, vectorized call-backs are quite limited as a way to expose
concurrency: the computation is still blocked until the call-back returns.
It might be useful to be able to dispatch a bunch of computations without
having to wait for all the results. I have also been thinking about how
this could be best implemented within a modern library.

> I have recently done some experimentation on a similar situation: I
> have a very expensive function to minimize, and gradient calculation
> is another example of concurrency. My solution uses either coroutines
> or threads to handle algorithm-level concurrency (like subdividing
> the interval, here) and concurrent.futures.ProcessPoolExecutor to
> handle function evaluation.
> There are surely other approaches as well, but I do think we need to
> think about how to express concurrency in a way that makes
> parallelization easy.

I find your code (in your other posting) extremely interesting, thanks for
sharing it. I thought that asynchronous programming was just about I/O,
but in fact expensive calculations performed in the background (like
evaluating the integrand) can be seen as a kind of I/O as well.

If the set of background calculations to be performed is easy to come up
with, a simple solution like concurrent.futures.Executor.map() is
adequate. But when, as in this case, the set of calculations to be done is
the result of a non-trivial recursive algorithm that depends on partial
results, it seems to be a great idea to use asynchronous programming to
fire off the background calculations.

> Making something as robust as QUADPACK is probably going to be hard.
> But the basic algorithm is not too complicated - unfortunately scipy
> only has Gaussian quadrature coefficients, not Clenshaw-Curtis - so
> for a basic adaptive implementation it would not be too difficult to
> write one in python in a way that makes the concurrency explicit.

The robustness of QUADPACK (or at least of QAGS) rests on a few simple
principles: we know from experience that recursive bisection of the
integration interval and performing Gaussian integration on each
subinterval often works very well. Any simple algorithm that relies on the
same principles should be similarly robust. Or am I missing something?

One could try to come up with an _exact_ workalike of QAGS (thus
inheriting its robustness) that exposes more parallelism. In order to use
more than 21 cores, one would have to allow speculative evaluation:
otherwise, for example, there is nothing to do for more than 21 cores
during the first step of the algorithm (and a similar situation may arise
at later times).

In any case, the result should be deterministic, and probably correspond
exactly to what QAGS would give.

The amount of parallelization (and thus speculation) could be
configurable. For integrations that require at least a few interval
bisections, it should not be too wasteful to allow parallel evaluation of
up to 3*21 points, for example. In the worst case (if no splitting needs
to be done), 2/3 of this will be wasted; that's not a catastrophe.

If something like your example is to be included in a library, I think it
should not hard-code usage of concurrent.futures or ProcessPoolExecutor.
After all, the user might want to use MPI, or SCOOP, or whatnot for the
parallelization. Do you see a way to generalize your code while keeping it
elegant? Perhaps the integration method could take not a function "f" but
a coroutine? That user-provided coroutine would then launch the
calculations in any appropriate way. (I'm not sure how to do something
like this within asyncio, though.)

One last point: when exposing parallelism, one should not forget to also
"expose vectorization". QAGS works in bunches of 21 evaluations and I
think this should be exposed to the user: the call-back coroutine could be
called in a vectorized way. This would limit the number of coroutines that
get fired off and thus make things more efficient. If the individual
integrand evaluations are truly expensive, the user can write the
coroutine that he provides such that the evaluations are done in parallel.

Christoph
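[A rough sketch of the "vectorized integrand coroutine" idea from the last
two paragraphs, not scipy API: the names eval_batch and expensive_integrand
are made up, the batch stands in for the 21 Gauss-Kronrod nodes, and the
Python 3.5 async/await syntax is assumed:]

    import asyncio
    from concurrent.futures import ProcessPoolExecutor

    def expensive_integrand(x):
        # stand-in for a slow f(x); must be picklable for a process pool
        return x * x

    async def eval_batch(xs, loop, executor):
        # The vectorized integrand coroutine: it receives a whole batch
        # of abscissas at once and is free to evaluate them in any
        # order, with any parallelization toolkit it likes.
        futures = [loop.run_in_executor(executor, expensive_integrand, x)
                   for x in xs]
        return await asyncio.gather(*futures)

    if __name__ == '__main__':
        loop = asyncio.get_event_loop()
        with ProcessPoolExecutor() as pool:
            batch = [i / 20.0 for i in range(21)]  # stand-in for GK21 nodes
            print(loop.run_until_complete(eval_batch(batch, loop, pool)))

[An integration routine written against such a callback never learns how
the batch was evaluated; serial, multiprocessing, or MPI evaluation all
hide behind the same coroutine.]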
From archibald at astron.nl Fri Oct 2 19:26:43 2015
From: archibald at astron.nl (Anne Archibald)
Date: Fri, 02 Oct 2015 23:26:43 +0000
Subject: [SciPy-Dev] vectorized scipy.integrate.quad
In-Reply-To: <87io6p48f0.fsf@grothesque.org>
References: <87bncrg3uf.fsf@grothesque.org> <87io6p48f0.fsf@grothesque.org>
Message-ID:

On Fri, Oct 2, 2015 at 5:07 PM Christoph Groth wrote:

> Anne Archibald wrote:
> > I'll point out that there is actually a lot more concurrency in this
> > algorithm than it seems: it's (I think) basically a recursive
> > algorithm, where you evaluate the integral and its truncation error
> > on an interval; if the truncation error is too high, you subdivide
> > the integral and repeat the process on both sides. These two
> > sub-evaluations are also concurrent - it doesn't matter which is
> > done first.
>
> Excellent point. There's more potential for concurrency in QAGS than
> what I mentioned in my original posting: at some given point in time
> it can be foreseeable that even if bisecting a given interval would
> eliminate all the integration error for that interval, it will be
> necessary to split other intervals in order to meet the requested
> error margin.
>
> However, exposing this additional parallelism is more complicated than
> what I proposed. Also, the first step of QAGS consists of evaluating
> the integrand 21 times. So, unless we want to do speculative
> evaluation, or provide user hints, the total speed-up will be limited
> due to Amdahl's law because of that first step.

Not exactly. If you have a stupendous number of cores, and the function to
be integrated takes the same time to calculate each time, then the total
time will be the recursion depth times the time per evaluation. But if you
have a modest number of cores, then yes, the first step takes one
evaluation time, but there may be thousands of evaluations needed, and
this can take many evaluation steps if there are not also thousands of
cores available. So that first step is indeed unfortunate, but it need not
completely limit the performance of the algorithm.

> I believe that the QAGS algorithm cannot be expressed in a purely
> recursive way (without a shared mutable state). QAGS performs global
> optimization: the default implementation consists of one big do-loop
> that in each iteration bisects the subinterval with the largest error
> estimate.

Indeed. I thought about that as I implemented mine, which uses local error
estimation. With local error estimation, all the branches are parallel;
with global error estimation, you have to alter the algorithm somewhat if
you want to achieve better parallelism. Non-determinism can creep in if
you're not careful.

For example: the algorithm could maintain a list of intervals with their
error estimates. The top n could be scheduled for subdivision, with n
chosen to suit the number of cores available. Ideally, the jobs would be
prioritized, so that the jobs for an interval tended to get scheduled
together, and the worst offenders got scheduled first. When the error
bound is met, you could cancel any outstanding jobs. In the most efficient
implementation, the precise value returned depends on the vagaries of the
scheduler; in any case the value returned will not be exactly that
returned from a purely sequential version of the algorithm.
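[A rough sketch of that "top n offenders" strategy, under simplifying
assumptions: a 7- versus 15-point fixed_quad pair stands in for QUADPACK's
shared-point Gauss-Kronrod rule, each round of bisections completes before
the next is scheduled (so no cancellation of in-flight jobs is shown), and
quad_topn/_gauss_pair are made-up names:]

    from concurrent.futures import ProcessPoolExecutor
    from scipy.integrate import fixed_quad

    def _gauss_pair(args):
        # Integral and crude error estimate for one interval, from a
        # low- and a high-order Gauss rule (QUADPACK instead compares
        # Gauss to Kronrod on shared points).
        f, a, b = args
        lo = fixed_quad(f, a, b, n=7)[0]
        hi = fixed_quad(f, a, b, n=15)[0]
        return a, b, hi, abs(hi - lo)

    def quad_topn(f, a, b, tol=1e-10, nsplit=4, maxiter=200):
        # f must be picklable (a module-level function, not a lambda)
        with ProcessPoolExecutor() as pool:
            intervals = list(pool.map(_gauss_pair, [(f, a, b)]))
            for _ in range(maxiter):
                value = sum(t[2] for t in intervals)
                error = sum(t[3] for t in intervals)
                if error < tol:
                    return value, error
                # bisect the nsplit worst offenders, evaluating all of
                # the new halves concurrently
                intervals.sort(key=lambda t: t[3], reverse=True)
                worst, intervals = intervals[:nsplit], intervals[nsplit:]
                jobs = []
                for a_, b_, _, _ in worst:
                    m = 0.5 * (a_ + b_)
                    jobs += [(f, a_, m), (f, m, b_)]
                intervals += list(pool.map(_gauss_pair, jobs))
            raise RuntimeError('requested tolerance not reached')

[For instance, quad_topn(math.sin, 0.0, math.pi) should return a value
near 2. Because each round completes before the next is scheduled, this
variant stays deterministic for a given nsplit; letting results arrive out
of order, as described above, is where scheduler-dependent answers creep
in.]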
> It might be useful to be able to dispatch a bunch of computations
> without having to wait for all the results. I have also been thinking
> about how this could be best implemented within a modern library.

The simplest approach, if you're working in python, is concurrent.futures.
You create a Future object for each computation you want to run, and you
query its value when you need it. If it turns out you never need it, you
can cancel the Future and it may not get run. This module exists in python
>= 3.2, and has a backport to python2. For the execution of the futures
you can use threads, processes, or you could implement your own Executor
(using MPI, say). It's fairly flexible, although there doesn't seem to be
any way to provide hints to the scheduler (do these jobs together, or do
those ones first). Also importantly, code that wants to use
concurrent.futures can accept an Executor as an argument so as to be
agnostic about implementations.

The additional hack where I used coroutines is a different kind of
concurrency; it's a way to squeeze extra concurrency out of recursive or
other more-complex algorithms. You can actually do the same using threads;
I have thread- and coroutine-based versions of the integrator, and they
look very similar. (Here the GIL is actually a blessing, since we don't
actually care that the threads run in parallel, and the GIL means we don't
have to worry about whether, say, setting array elements is atomic.)

> > Making something as robust as QUADPACK is probably going to be hard.
> > But the basic algorithm is not too complicated - unfortunately scipy
> > only has Gaussian quadrature coefficients, not Clenshaw-Curtis - so
> > for a basic adaptive implementation it would not be too difficult to
> > write one in python in a way that makes the concurrency explicit.
>
> The robustness of QUADPACK (or at least of QAGS) rests on a few simple
> principles: we know from experience that recursive bisection of the
> integration interval and performing Gaussian integration on each
> subinterval often works very well. Any simple algorithm that relies on
> the same principles should be similarly robust. Or am I missing
> something?

I say this partly from working with MINUIT versus scipy's L-BFGS
minimizers: there's a lot more to robustness than a good basic algorithm.
MINUIT has a lot of apparently-peculiar code in (for example) its
derivative calculator to manage step size, roundoff error, and termination
conditions. The whole code base has been hammered on by the particle
physicists for the last twenty years, and they have stubbed their toes on
a zillion funny numerical corner cases, adding checks to the code as they
went. Getting that kind of detail right is hard. I think integration is
generally a more well-behaved problem, but for example the error
estimation is obtained by subtracting two different estimates of the
integral over the interval. How do you manage round-off error on that
quantity as the two estimates become similar? What happens when the
Gauss-Kronrod evaluation points start to become very approximate because
the steps are not very many machine epsilons?
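[In code, the Future workflow described a few paragraphs up looks roughly
like this; a generic illustration, not tied to the integrator, and
`expensive` is a stand-in:]

    from concurrent.futures import ProcessPoolExecutor

    def expensive(x):
        # stand-in for a slow function evaluation
        return x * x

    if __name__ == '__main__':  # required for process pools on some platforms
        with ProcessPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(expensive, x) for x in range(8)]
            print(futures[0].result())  # block until this value is ready
            futures[-1].cancel()        # speculative work may be dropped
                                        # (only if it has not started yet)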
> One could try to come up with an _exact_ workalike of QAGS (thus
> inheriting its robustness) that exposes more parallelism. In order to
> use more than 21 cores, one would have to allow speculative
> evaluation: otherwise, for example, there is nothing to do for more
> than 21 cores during the first step of the algorithm (and a similar
> situation may arise at later times).
>
> In any case, the result should be deterministic, and probably
> correspond exactly to what QAGS would give.
>
> The amount of parallelization (and thus speculation) could be
> configurable. For integrations that require at least a few interval
> bisections, it should not be too wasteful to allow parallel
> evaluation of up to 3*21 points, for example. In the worst case (if
> no splitting needs to be done), 2/3 of this will be wasted; that's
> not a catastrophe.

If the goal is to agree with QAGS, the first place to start is obviously
the 21-fold parallelism. But some speculative parallelism is certainly a
possibility, and the user should get to choose how much. I think the way
to go is to take the list of intervals with error estimates, and in
addition to subdividing the worst offender, speculatively subdivide the
next few worst. You could also speculatively assume that the worst
offender will need to be subdivided more than once, but that complicates
the algorithm substantially.

> If something like your example is to be included in a library, I
> think it should not hard-code usage of concurrent.futures or
> ProcessPoolExecutor. After all, the user might want to use MPI, or
> SCOOP, or whatnot for the parallelization.

I think concurrent.futures is the way to go. If users want to use MPI or
whatever, it's not too complicated to write an Executor that uses the
underlying toolkit. I haven't written an MPI executor, but it wouldn't
take much to adapt emcee's MPI parallel map. Also, concurrent.futures is a
standard (in the python standard library) implementation-agnostic way to
express concurrency. So I think algorithms that can be expressed that way
probably should be.

> Do you see a way to generalize your code while keeping it elegant?
> Perhaps the integration method could take not a function "f" but a
> coroutine? That user-provided coroutine would then launch the
> calculations in any appropriate way. (I'm not sure how to do something
> like this within asyncio, though.)

As I said above, there are two layers of concurrency in the code I posted:
there is the business of submitting function evaluations to a pool of
workers, and there is the business of concurrently traversing all the
branches of the recursion. For an implementation with a global error
estimate, I don't think the concurrent recursion would be relevant; all
you'd need would be the worker-pool evaluation. Which is good, because
coroutines are new to python and not easily made to work on python2.

> One last point: when exposing parallelism, one should not forget to
> also "expose vectorization". QAGS works in bunches of 21 evaluations
> and I think this should be exposed to the user: the call-back
> coroutine could be called in a vectorized way. This would limit the
> number of coroutines that get fired off and thus make things more
> efficient. If the individual integrand evaluations are truly
> expensive, the user can write the coroutine that he provides such
> that the evaluations are done in parallel.

Well, this is a tricky thing. All the techniques I'm talking about are
fairly "heavy" - a lot of python-level work per function evaluation.
I'm thinking this way because I'm working with a function that takes about
six seconds to evaluate (on a single core; it can't use more than one). So
the overhead of looping in python, or even shipping arguments and results
through pickle, is unimportant. If you have a relatively fast integrand,
you're indeed going to want some way to run the inner 21 evaluations
without going through python. In fact you probably want QUADPACK to call
straight into compiled code without passing through python at all; there
are some schemes for doing this with cython. But I think you need
different implementations for such different performance criteria.

All that said, simply allowing the existing QUADPACK to call a vectorized
integrand 21 points at a time would provide a substantial speedup for a
range of important cases. Unfortunately it's probably going to require
some grotty FORTRAN work.

Anne

P.S. I just wrote up the integration code and another example of
parallelization (that illustrates numerical finickiness) and posted them
online:
http://lighthouseinthesky.blogspot.nl/2015/10/numerical-coroutines.html
http://lighthouseinthesky.blogspot.nl/2015/10/numerical-derivatives-in-parallel.html

-A

From charlesr.harris at gmail.com Sat Oct 3 15:23:07 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 3 Oct 2015 13:23:07 -0600
Subject: [SciPy-Dev] Numpy 1.10.0 coming Monday, 5 Oct.
Message-ID:

Hi All,

A heads up about the coming Numpy release. If you have discovered any
problems with rc1 or rc2, please report them.

Chuck

From tyler.je.reddy at gmail.com Mon Oct 5 06:39:56 2015
From: tyler.je.reddy at gmail.com (Tyler Reddy)
Date: Mon, 5 Oct 2015 11:39:56 +0100
Subject: [SciPy-Dev] Spherical Voronoi Diagram Code
Message-ID:

Hi,

As this new code / feature for calculating Voronoi diagrams on spherical
surfaces is now passing the full Travis-CI test suite
(https://github.com/scipy/scipy/pull/5232), I just thought I'd politely
ask for additional feedback / discussion. Some things we are aware of
and / or could use help with:

- it is unfortunate that Python / matplotlib does not provide a nice way
to plot spherical polygons--this is a pretty big deterrent to clarity in
the example docstring, as you can see if you compile the docs and check
the SphericalVoronoi 'Examples' section plot. mayavi might be able to do
this, but that is probably too heavy a dependency

- I noticed that our unit tests pass in the context of the full suite from
the root directory, but running 'nosetests' in the scipy.spatial testing
dir actually seems to be problematic--not sure if this is really an issue
but suggestions to fix that would help if needed

- apparently, if all generators are in a single hemisphere, this is
problematic--should it be? (question for computational geometers); what if
all generators were on the equator? any other problematic inputs?

- we decided that code for calculating the areas of the spherical polygons
(important for some applications) should probably be in a separate module
or at least a separate class / PR.

Thanks!!

Tyler

From sturla.molden at gmail.com Mon Oct 5 17:04:52 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 5 Oct 2015 21:04:52 +0000 (UTC)
Subject: [SciPy-Dev] Governance?
Message-ID: <1247897550465771473.510948sturla.molden-gmail.com@news.gmane.org>

I am not sure I even care about this issue, so I should perhaps not make
this noise on the mailing list. Anyhow...

Should SciPy adopt the same governance model as NumPy?

If so, I hope that community veto rights can be limited to the parts of
SciPy where a vetoer has actually contributed.

Sturla

From njs at pobox.com Mon Oct 5 17:08:10 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 5 Oct 2015 14:08:10 -0700
Subject: [SciPy-Dev] Governance?
In-Reply-To: <1247897550465771473.510948sturla.molden-gmail.com@news.gmane.org>
References: <1247897550465771473.510948sturla.molden-gmail.com@news.gmane.org>
Message-ID:

My guess is that this is a discussion you should start in a week or two
after Ralf gets back and has had a chance to catch up on things?

On Mon, Oct 5, 2015 at 2:04 PM, Sturla Molden wrote:
> I am not sure I even care about this issue, so I should perhaps not make
> this noise on the mailing list. Anyhow...
>
> Should SciPy adopt the same governance model as NumPy?
>
> If so, I hope that community veto rights can be limited to the parts of
> SciPy where a vetoer has actually contributed.
>
> Sturla

--
Nathaniel J. Smith -- http://vorpus.org

From sturla.molden at gmail.com Mon Oct 5 17:14:41 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 5 Oct 2015 21:14:41 +0000 (UTC)
Subject: [SciPy-Dev] Governance?
References: <1247897550465771473.510948sturla.molden-gmail.com@news.gmane.org>
Message-ID: <432457452465772381.722108sturla.molden-gmail.com@news.gmane.org>

Nathaniel Smith wrote:

> My guess is that this is a discussion you should start in a week or
> two after Ralf gets back and has had a chance to catch up on things?

Ok, just ignore this for now. When will he be back?

Sturla

From njs at pobox.com Mon Oct 5 17:19:39 2015
From: njs at pobox.com (Nathaniel Smith)
Date: Mon, 5 Oct 2015 14:19:39 -0700
Subject: [SciPy-Dev] Governance?
In-Reply-To: <432457452465772381.722108sturla.molden-gmail.com@news.gmane.org>
References: <1247897550465771473.510948sturla.molden-gmail.com@news.gmane.org>
 <432457452465772381.722108sturla.molden-gmail.com@news.gmane.org>
Message-ID:

On Mon, Oct 5, 2015 at 2:14 PM, Sturla Molden wrote:
> Nathaniel Smith wrote:
>> My guess is that this is a discussion you should start in a week or
>> two after Ralf gets back and has had a chance to catch up on things?
>
> Ok, just ignore this for now. When will he be back?

Oct. 9, apparently:
http://thread.gmane.org/gmane.comp.python.scientific.devel/19924

-n

--
Nathaniel J. Smith -- http://vorpus.org

From sturla.molden at gmail.com Mon Oct 5 21:46:03 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Tue, 06 Oct 2015 03:46:03 +0200
Subject: [SciPy-Dev] vectorized scipy.integrate.quad
In-Reply-To:
References: <87bncrg3uf.fsf@grothesque.org> <87io6p48f0.fsf@grothesque.org>
Message-ID:

On 03/10/15 01:26, Anne Archibald wrote:

> The additional hack where I used coroutines is a different kind of
> concurrency; it's a way to squeeze extra concurrency out of recursive
> or other more-complex algorithms. You can actually do the same using
> threads; I have thread- and coroutine-based versions of the
> integrator, and they look very similar.

Personally I find that I prefer to make pipelines with threads (or
processes) and queues, because coroutines make my brain overheat.

Sturla
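[For what it's worth, a minimal version of such a thread-and-queue
pipeline; a generic illustration, with the two lambda stages arbitrary:]

    import threading
    try:
        import queue               # Python 3
    except ImportError:
        import Queue as queue      # Python 2

    def stage(func, inbox, outbox):
        # apply func to everything flowing through, until the None sentinel
        while True:
            item = inbox.get()
            if item is None:
                outbox.put(None)   # propagate shutdown downstream
                return
            outbox.put(func(item))

    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    threading.Thread(target=stage, args=(lambda x: x + 1, q1, q2)).start()
    threading.Thread(target=stage, args=(lambda x: x * 2, q2, q3)).start()

    for x in range(5):
        q1.put(x)
    q1.put(None)                   # sentinel: shut the pipeline down

    results = []
    while True:
        r = q3.get()
        if r is None:
            break
        results.append(r)
    print(results)                 # [2, 4, 6, 8, 10]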
From charlesr.harris at gmail.com Tue Oct 6 00:52:50 2015
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 5 Oct 2015 22:52:50 -0600
Subject: [SciPy-Dev] Numpy 1.10.0 release
Message-ID:

Hi All,

It is my pleasure to release NumPy 1.10.0. Files may be found at
Sourceforge and pypi. This release is the result of 789 non-merge commits
made by 160 developers over a period of a year and supports Python
2.6 - 2.7 and 3.2 - 3.5.

NumPy 1.10.0 Release Notes
**************************

This release supports Python 2.6 - 2.7 and 3.2 - 3.5.

Highlights
==========

* numpy.distutils now supports parallel compilation via the --parallel/-j
  argument passed to setup.py build
* numpy.distutils now supports additional customization via site.cfg to
  control compilation parameters, i.e. runtime libraries, extra
  linking/compilation flags.
* Addition of *np.linalg.multi_dot*: compute the dot product of two or
  more arrays in a single function call, while automatically selecting the
  fastest evaluation order.
* The new function `np.stack` provides a general interface for joining a
  sequence of arrays along a new axis, complementing `np.concatenate` for
  joining along an existing axis.
* Addition of `nanprod` to the set of nanfunctions.
* Support for the '@' operator in Python 3.5.

Dropped Support:

* The _dotblas module has been removed. CBLAS Support is now in Multiarray.
* The testcalcs.py file has been removed.
* The polytemplate.py file has been removed.
* npy_PyFile_Dup and npy_PyFile_DupClose have been removed from
  npy_3kcompat.h.
* splitcmdline has been removed from numpy/distutils/exec_command.py.
* try_run and get_output have been removed from
  numpy/distutils/command/config.py
* The a._format attribute is no longer supported for array printing.
* Keywords ``skiprows`` and ``missing`` removed from np.genfromtxt.
* Keyword ``old_behavior`` removed from np.correlate.

Future Changes:

* In array comparisons like ``arr1 == arr2``, many corner cases involving
  strings or structured dtypes that used to return scalars now issue
  ``FutureWarning`` or ``DeprecationWarning``, and in the future will be
  changed to either perform elementwise comparisons or raise an error.
* The SafeEval class will be removed.
* The alterdot and restoredot functions will be removed.

See below for more details on these changes.

Compatibility notes
===================

numpy version string
~~~~~~~~~~~~~~~~~~~~
The numpy version string for development builds has been changed from
``x.y.z.dev-githash`` to ``x.y.z.dev0+githash`` (note the +) in order to
comply with PEP 440.

relaxed stride checking
~~~~~~~~~~~~~~~~~~~~~~~
NPY_RELAXED_STRIDE_CHECKING is now true by default.

Concatenation of 1d arrays along any but ``axis=0`` raises ``IndexError``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using axis != 0 has raised a DeprecationWarning since NumPy 1.7; it now
raises an error.

*np.ravel*, *np.diagonal* and *np.diag* now preserve subtypes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There was inconsistent behavior between *x.ravel()* and *np.ravel(x)*, as
well as between *x.diagonal()* and *np.diagonal(x)*, with the methods
preserving subtypes while the functions did not. This has been fixed and
the functions now behave like the methods, preserving subtypes except in
the case of matrices. Matrices are special cased for backward
compatibility and still return 1-D arrays as before. If you need to
preserve the matrix subtype, use the methods instead of the functions.
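For illustration, where ``MyArray`` is an arbitrary ndarray subclass::

    import numpy as np

    class MyArray(np.ndarray):
        pass

    a = np.arange(4).reshape(2, 2).view(MyArray)
    type(np.ravel(a))   # MyArray -- the function now preserves the subtype

    m = np.matrix([[1, 2], [3, 4]])
    np.ravel(m)         # array([1, 2, 3, 4]) -- matrices still give 1-D arrays
    m.ravel()           # matrix([[1, 2, 3, 4]]) -- the method keeps the subtype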
*rollaxis* and *swapaxes* always return a view
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously, a view was returned except when no change was made in the
order of the axes, in which case the input array was returned. A view is
now returned in all cases.

*nonzero* now returns base ndarrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously, an inconsistency existed between 1-D inputs (returning a base
ndarray) and higher dimensional ones (which preserved subclasses).
Behavior has been unified, and the return will now be a base ndarray.
Subclasses can still override this behavior by providing their own
*nonzero* method.

C API
~~~~~
The changes to *swapaxes* also apply to the *PyArray_SwapAxes* C function,
which now returns a view in all cases.

The changes to *nonzero* also apply to the *PyArray_Nonzero* C function,
which now returns a base ndarray in all cases.

The dtype structure (PyArray_Descr) has a new member at the end to cache
its hash value. This shouldn't affect any well-written applications.

The change to the concatenation function DeprecationWarning also affects
PyArray_ConcatenateArrays.

recarray field return types
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously the returned types for recarray fields accessed by attribute
and by index were inconsistent, and fields of string type were returned as
chararrays. Now, fields accessed by either attribute or indexing will
return an ndarray for fields of non-structured type, and a recarray for
fields of structured type. Notably, this affects recarrays containing
strings with whitespace, as trailing whitespace is trimmed from chararrays
but kept in ndarrays of string type. Also, the dtype.type of nested
structured fields is now inherited.

recarray views
~~~~~~~~~~~~~~
Viewing an ndarray as a recarray now automatically converts the dtype to
np.record. See new record array documentation. Additionally, viewing a
recarray with a non-structured dtype no longer converts the result's type
to ndarray - the result will remain a recarray.

'out' keyword argument of ufuncs now accepts tuples of arrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When using the 'out' keyword argument of a ufunc, a tuple of arrays, one
per ufunc output, can be provided. For ufuncs with a single output a
single array is also a valid 'out' keyword argument. Previously a single
array could be provided in the 'out' keyword argument, and it would be
used as the first output for ufuncs with multiple outputs. This usage is
deprecated, and will result in a `DeprecationWarning` now and an error in
the future.

byte-array indices now raises an IndexError
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Indexing an ndarray using a byte-string in Python 3 now raises an
IndexError instead of a ValueError.

Masked arrays containing objects with arrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For such (rare) masked arrays, getting a single masked item no longer
returns a corrupted masked array, but a fully masked version of the item.

Median warns and returns nan when invalid values are encountered
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Similar to mean, median and percentile now emit a RuntimeWarning and
return `NaN` in slices where a `NaN` is present. To compute the median or
percentile while ignoring invalid values, use the new `nanmedian` or
`nanpercentile` functions.
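For illustration::

    import numpy as np

    a = np.array([[1.0, np.nan, 3.0],
                  [4.0, 5.0, 6.0]])
    np.median(a, axis=1)      # array([nan, 5.]) plus a RuntimeWarning
    np.nanmedian(a, axis=1)   # array([2., 5.]) -- NaNs are ignored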
Functions available from numpy.ma.testutils have changed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All functions from numpy.testing were once available from
numpy.ma.testutils but not all of them were redefined to work with masked
arrays. Most of those functions have now been removed from
numpy.ma.testutils with a small subset retained in order to preserve
backward compatibility. In the long run this should help avoid mistaken
use of the wrong functions, but it may cause import problems for some.

New Features
============

Reading extra flags from site.cfg
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously customization of compilation of dependency libraries and numpy
itself was only accomplishable via code changes in the distutils package.
Now numpy.distutils reads in the following extra flags from each group of
the *site.cfg*:

* ``runtime_library_dirs/rpath``, sets runtime library directories to
  override ``LD_LIBRARY_PATH``
* ``extra_compile_args``, add extra flags to the compilation of sources
* ``extra_link_args``, add extra flags when linking libraries

This should, at least partially, complete user customization.

*np.cbrt* to compute cube root for real floats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*np.cbrt* wraps the C99 cube root function *cbrt*. Compared to
*np.power(x, 1./3.)* it is well defined for negative real floats and a bit
faster.

numpy.distutils now allows parallel compilation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By passing *--parallel=n* or *-j n* to *setup.py build* the compilation of
extensions is now performed in *n* parallel processes. The parallelization
is limited to files within one extension, so projects using Cython will
not profit because it builds extensions from single files.

*genfromtxt* has a new ``max_rows`` argument
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A ``max_rows`` argument has been added to *genfromtxt* to limit the number
of rows read in a single call. Using this functionality, it is possible to
read in multiple arrays stored in a single file by making repeated calls
to the function.

New function *np.broadcast_to* for invoking array broadcasting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*np.broadcast_to* manually broadcasts an array to a given shape according
to numpy's broadcasting rules. The functionality is similar to
broadcast_arrays, which in fact has been rewritten to use broadcast_to
internally, but only a single array is necessary.

New context manager *clear_and_catch_warnings* for testing warnings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When Python emits a warning, it records that this warning has been emitted
in the module that caused the warning, in a module attribute
``__warningregistry__``. Once this has happened, it is not possible to
emit the warning again, unless you clear the relevant entry in
``__warningregistry__``. This makes it hard and fragile to test warnings,
because if your test comes after another that has already caused the
warning, you will not be able to emit the warning or test it. The context
manager ``clear_and_catch_warnings`` clears warnings from the module
registry on entry and resets them on exit, meaning that warnings can be
re-raised.

*cov* has new ``fweights`` and ``aweights`` arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``fweights`` and ``aweights`` arguments add new functionality to
covariance calculations by applying two types of weighting to observation
vectors. An array of ``fweights`` indicates the number of repeats of each
observation vector, and an array of ``aweights`` provides their relative
importance or probability.
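For example, ``fweights`` of ``[1, 2, 1]`` makes the middle observation
count twice::

    import numpy as np

    x = np.array([[0.0, 1.0, 2.0]])
    np.cov(x, fweights=np.array([1, 2, 1]))   # 0.666..., identical to
    np.cov(np.array([[0.0, 1.0, 1.0, 2.0]]))  # the explicitly repeated data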
Support for the '@' operator in Python 3.5+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Python 3.5 adds support for a matrix multiplication operator '@' proposed
in PEP465. Preliminary support for that has been implemented, and an
equivalent function ``matmul`` has also been added for testing purposes
and use in earlier Python versions. The function is preliminary and the
order and number of its optional arguments can be expected to change.

New argument ``norm`` to fft functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The default normalization has the direct transforms unscaled and the
inverse transforms scaled by :math:`1/n`. It is possible to obtain unitary
transforms by setting the keyword argument ``norm`` to ``"ortho"``
(default is `None`) so that both direct and inverse transforms are scaled
by :math:`1/\sqrt{n}`.

Improvements
============

*np.digitize* using binary search
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*np.digitize* is now implemented in terms of *np.searchsorted*. This means
that a binary search is used to bin the values, which scales much better
for larger numbers of bins than the previous linear search. It also
removes the requirement for the input array to be 1-dimensional.

*np.poly* now casts integer inputs to float
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*np.poly* will now cast 1-dimensional input arrays of integer type to
double precision floating point, to prevent integer overflow when
computing the monic polynomial. It is still possible to obtain higher
precision results by passing in an array of object type, filled e.g. with
Python ints.

*np.interp* can now be used with periodic functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*np.interp* now has a new parameter *period* that supplies the period of
the input data *xp*. In that case, the input data is properly normalized
to the given period and one end point is added to each extremity of *xp*
in order to close the previous and the next period cycles, resulting in
the correct interpolation behavior.
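For illustration, with angular data in degrees::

    import numpy as np

    xp = np.array([0.0, 90.0, 180.0, 270.0])
    fp = np.array([0.0, 1.0, 0.0, -1.0])
    np.interp(450.0, xp, fp, period=360.0)   # 1.0  (450 wraps to 90)
    np.interp(-90.0, xp, fp, period=360.0)   # -1.0 (-90 wraps to 270)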
*np.pad* supports more input types for ``pad_width`` and ``constant_values``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``constant_values`` parameter now accepts NumPy arrays and float
values. NumPy arrays are supported as input for ``pad_width``, and an
exception is raised if its values are not of integral type.

*np.argmax* and *np.argmin* now support an ``out`` argument
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``out`` parameter was added to *np.argmax* and *np.argmin* for
consistency with *ndarray.argmax* and *ndarray.argmin*. The new parameter
behaves exactly as it does in those methods.

More system C99 complex functions detected and used
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All of the functions in ``complex.h`` are now detected. There are new
fallback implementations of the following functions:

* npy_ctan
* npy_cacos, npy_casin, npy_catan
* npy_ccosh, npy_csinh, npy_ctanh
* npy_cacosh, npy_casinh, npy_catanh

As a result of these improvements, there will be some small changes in
returned values, especially for corner cases.

*np.loadtxt* support for the strings produced by the ``float.hex`` method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The strings produced by ``float.hex`` look like
``0x1.921fb54442d18p+1``, so this is not the hex used to represent
unsigned integer types.

*np.isclose* properly handles minimal values of integer dtypes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to properly handle minimal values of integer types, *np.isclose*
will now cast to the float dtype during comparisons. This aligns its
behavior with what was provided by *np.allclose*.

*np.allclose* uses *np.isclose* internally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*np.allclose* now uses *np.isclose* internally and inherits the ability to
compare NaNs as equal by setting ``equal_nan=True``. Subclasses, such as
*np.ma.MaskedArray*, are also preserved now.

*np.genfromtxt* now handles large integers correctly
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*np.genfromtxt* now correctly handles integers larger than ``2**31-1`` on
32-bit systems and larger than ``2**63-1`` on 64-bit systems (it
previously crashed with an ``OverflowError`` in these cases). Integers
larger than ``2**63-1`` are converted to floating-point values.

*np.load*, *np.save* have pickle backward compatibility flags
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The functions *np.load* and *np.save* have additional keyword arguments
for controlling backward compatibility of pickled Python objects. This
enables Numpy on Python 3 to load npy files containing object arrays that
were generated on Python 2.

MaskedArray support for more complicated base classes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Built-in assumptions that the baseclass behaved like a plain array are
being removed. In particular, setting and getting elements and ranges will
respect baseclass overrides of ``__setitem__`` and ``__getitem__``, and
arithmetic will respect overrides of ``__add__``, ``__sub__``, etc.

Changes
=======

dotblas functionality moved to multiarray
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The cblas versions of dot, inner, and vdot have been integrated into the
multiarray module. In particular, vdot is now a multiarray function, which
it was not before.

stricter check of gufunc signature compliance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Inputs to generalized universal functions are now more strictly checked
against the function's signature: all core dimensions are now required to
be present in input arrays; core dimensions with the same label must have
the exact same size; and output core dimensions must be specified, either
by a same-label input core dimension or by a passed-in output array.

views returned from *np.einsum* are writeable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Views returned by *np.einsum* will now be writeable whenever the input
array is writeable.

*np.argmin* skips NaT values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*np.argmin* now skips NaT values in datetime64 and timedelta64 arrays,
making it consistent with *np.min*, *np.argmax* and *np.max*.

Deprecations
============

Array comparisons involving strings or structured dtypes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Normally, comparison operations on arrays perform elementwise comparisons
and return arrays of booleans. But in some corner cases, especially
involving strings or structured dtypes, NumPy has historically returned a
scalar instead.
For example::

    ### Current behaviour

    np.arange(2) == "foo"
    # -> False

    np.arange(2) < "foo"
    # -> True on Python 2, error on Python 3

    np.ones(2, dtype="i4,i4") == np.ones(2, dtype="i4,i4,i4")
    # -> False

Continuing work started in 1.9, in 1.10 these comparisons will now raise
``FutureWarning`` or ``DeprecationWarning``, and in the future they will
be modified to behave more consistently with other comparison operations,
e.g.::

    ### Future behaviour

    np.arange(2) == "foo"
    # -> array([False, False])

    np.arange(2) < "foo"
    # -> error, strings and numbers are not orderable

    np.ones(2, dtype="i4,i4") == np.ones(2, dtype="i4,i4,i4")
    # -> [False, False]

SafeEval
~~~~~~~~
The SafeEval class in numpy/lib/utils.py is deprecated and will be removed
in the next release.

alterdot, restoredot
~~~~~~~~~~~~~~~~~~~~
The alterdot and restoredot functions no longer do anything, and are
deprecated.

pkgload, PackageLoader
~~~~~~~~~~~~~~~~~~~~~~
These ways of loading packages are now deprecated.

bias, ddof arguments to corrcoef
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The values for the ``bias`` and ``ddof`` arguments to the ``corrcoef``
function canceled in the division implied by the correlation coefficient
and so had no effect on the returned values. We now deprecate these
arguments to ``corrcoef`` and the masked array version ``ma.corrcoef``.

Because we are deprecating the ``bias`` argument to ``ma.corrcoef``, we
also deprecate the use of the ``allow_masked`` argument as a positional
argument, as its position will change with the removal of ``bias``.
``allow_masked`` will in due course become a keyword-only argument.

dtype string representation changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Since 1.6, creating a dtype object from its string representation, e.g.
``'f4'``, would issue a deprecation warning if the size did not correspond
to an existing type, and default to creating a dtype of the default size
for the type. Starting with this release, this will now raise a
``TypeError``. The only exception is object dtypes, where both ``'O4'``
and ``'O8'`` will still issue a deprecation warning. This
platform-dependent representation will raise an error in the next release.

In preparation for this upcoming change, the string representation of an
object dtype, i.e. ``np.dtype(object).str``, no longer includes the item
size, i.e. will return ``'|O'`` instead of ``'|O4'`` or ``'|O8'`` as
before.

Authors
=======

This release contains work by the following people who contributed at
least one patch to this release. The names are in alphabetical order by
first name. Names followed by a "+" contributed a patch for the first
time.

Abdul Muneer+ Adam Williams+ Alan Briolat+ Alex Griffing Alex Willmer+
Alexander Belopolsky Alistair Muldal+ Allan Haldane+ Amir Sarabadani+
Andrea Bedini+ Andrew Dawson+ Andrew Nelson+ Antoine Pitrou+ Anton
Ovchinnikov+ Antony Lee+ Behzad Nouri+ Bertrand+ Blake Griffith Bob
Poekert+ Brian Kearns+ CJ Carey Carl Kleffner+ Chander G+ Charles Harris
Chris Hogan+ Chris Kerr Chris Lamb+ Chris Laumann+ Christian Brodbeck+
Christian Brueffer Christoph Gohlke Cimarron Mittelsteadt Daniel da Silva
Darsh P. Ranjan+ David Cournapeau David M Fobes+ David Powell+ Didrik
Pinte+ Dimas Abreu Dutra Dmitry Zagorny+ Eric Firing Eric Hunsberger+ Eric
Martin+ Eric Moore Eric O. LEBIGOT (EOL)+ Erik M. Bray Ernest N.
Mamikonyan+ Fei Liu+ François Magimel+ Gabor Kovacs+ Gabriel-p+ Garrett-R+
George Castillo+ Gerrit Holl+ Gert-Ludwig Ingold+ Glen Mabey+ Graham
Christensen+ Greg Thomsen+ Gregory R. Lee+
Helder Cesar+ Helder Oliveira+ Henning Dickten+ Ian Henriksen+ Jaime
Fernandez James Camel+ James Salter+ Jan Schlüter+ Jarl Haggerty+ Jay
Bourque Joel Nothman+ John Kirkham+ John Tyree+ Joris Van den Bossche+
Joseph Martinot-Lagarde Josh Warner (Mac) Juan Luis Cano Rodríguez Julian
Taylor Kreiswolke+ Lars Buitinck Leonardo Donelli+ Lev Abalkin Lev
Levitsky+ Malik Woods+ Maniteja Nandana+ Marshall Farrier+ Marten van
Kerkwijk Martin Spacek Martin Thoma+ Masud Rahman+ Matt Newville+ Mattheus
Ueckermann+ Matthew Brett Matthew Craig+ Michael Currie+ Michael
Droettboom Michele Vallisneri+ Mortada Mehyar+ Nate Jensen+ Nathaniel J.
Smith Nick Papior Andersen+ Nick Papior+ Nils Werner Oliver Eberle+
Patrick Peglar+ Paul Jacobson Pauli Virtanen Peter Iannucci+ Ralf Gommers
Richard Barnes+ Ritta Narita+ Robert Johansson+ Robert LU+ Robert
McGibbon+ Ryan Blakemore+ Ryan Nelson+ Sandro Tosi Saullo Giovani+
Sebastian Berg Sebastien Gouezel+ Simon Gibbons+ Simon Guillot+ Stefan
Eng+ Stefan Otte+ Stefan van der Walt Stephan Hoyer+ Stuart Berg+ Sturla
Molden+ Thomas A Caswell+ Thomas Robitaille Tim D. Smith+ Tom Krauss+ Tom
Poole+ Toon Verstraelen+ Ulrich Seidl Valentin Haenel Vraj Mohan+ Warren
Weckesser Wendell Smith+ Yaroslav Halchenko Yotam Doron Yousef Hamza+
Yuval Langer+ Yuxiang Wang+ Zbigniew Jędrzejewski-Szmek+ cel+ chebee7i+
empeeu+ endolith hannaro+ immerrr jmrosen155+ jnothman kanhua+ mbyt+ mlai+
styr+ tdihp+ wim glenn+ yolanda15+ Åsmund Hjulstad+

Enjoy,
Chuck

From irvin.probst at ensta-bretagne.fr Tue Oct 6 03:39:49 2015
From: irvin.probst at ensta-bretagne.fr (Irvin Probst)
Date: Tue, 6 Oct 2015 09:39:49 +0200
Subject: [SciPy-Dev] Bug in place_poles
Message-ID: <56137AC5.9070407@ensta-bretagne.fr>

Hi,

FYI there is a bug in place_poles: the return values rtol and nb_iter are
supposed to be np.nan when there is nothing to optimize and 0 when the
solution is unique, but the function does it the other way around.

Note that the doc contains an error too; it states:

    rtol will be NaN if the optimisation algorithms can not run, i.e when
    B.shape[1] == 1, or 0 when the solution is unique.

But one should read:

    rtol will be NaN if the optimisation algorithms can not run, or 0 when
    the solution is unique, i.e when B.shape[1] == 1.

I'll fix this ASAP; in the meantime please consider this bug as already
acknowledged if anyone reports it.

Regards.

--
Irvin

From davidmenhur at gmail.com Tue Oct 6 08:08:36 2015
From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=)
Date: Tue, 6 Oct 2015 14:08:36 +0200
Subject: [SciPy-Dev] [Numpy-discussion] Numpy 1.10.0 release
In-Reply-To:
References:
Message-ID:

I don't get any failures on Fedora 22. I have installed it with pip,
setting my CFLAGS to "-march=core-avx-i -O2 -pipe -mtune=native" and
linking against openblas.
With the new Numpy, Scipy full suite shows two errors, I am sorry I didn't
think of running that in the RC phase:

======================================================================
FAIL: test_weighting (test_stats.TestHistogram)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/david/.local/virtualenv/py27/lib/python2.7/site-packages/scipy/stats/tests/test_stats.py", line 892, in test_weighting
    decimal=2)
  File "/home/david/.local/virtualenv/py27/lib/python2.7/site-packages/numpy/testing/utils.py", line 886, in assert_array_almost_equal
    precision=decimal)
  File "/home/david/.local/virtualenv/py27/lib/python2.7/site-packages/numpy/testing/utils.py", line 708, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 2 decimals

(mismatch 40.0%)
 x: array([  4. ,   0. ,   4.5,  -0.9,   0. ,   0.3, 110.2,   0. ,
         0. ,  42. ])
 y: array([  4. ,   0. ,   4.5,  -0.9,   0.3,   0. ,   7. , 103.2,
         0. ,  42. ])

======================================================================
FAIL: test_nanmedian_all_axis (test_stats.TestNanFunc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/david/.local/virtualenv/py27/lib/python2.7/site-packages/scipy/stats/tests/test_stats.py", line 226, in test_nanmedian_all_axis
    assert_equal(len(w), 4)
  File "/home/david/.local/virtualenv/py27/lib/python2.7/site-packages/numpy/testing/utils.py", line 354, in assert_equal
    raise AssertionError(msg)
AssertionError:
Items are not equal:
 ACTUAL: 1
 DESIRED: 4

I am almost sure these errors weren't there before.

On 6 October 2015 at 13:53, Neal Becker wrote:

> 1 test failure:
>
> FAIL: test_blasdot.test_blasdot_used
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
>     self.test(*self.arg)
>   File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func
>     return f(*args, **kwargs)
>   File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/core/tests/test_blasdot.py", line 31, in test_blasdot_used
>     assert_(dot is _dotblas.dot)
>   File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 53, in assert_
>     raise AssertionError(smsg)
> AssertionError

From hherbol at gmail.com Tue Oct 6 18:15:13 2015
From: hherbol at gmail.com (Henry Herbol)
Date: Tue, 6 Oct 2015 18:15:13 -0400
Subject: [SciPy-Dev] Pull Request #5318
Message-ID:

To Whomever is Interested,

I have submitted a Pull Request for an optimization (minimization) method.
It involves an adaptation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS)
algorithm in which the line search step is skipped. This was done because
there are times in which optimization is required independent of the
target function, and there was no code in the current Scipy.optimize
methods that accommodated this. The method employed took advantage of the
BFGS(Hess) algorithm touched on by Sheppard et al. (ref here) and adjusts
the step size of alpha (by a user-defined beta parameter) instead of
maintaining a small alpha.

Henry Herbol

P.S. If this was not the way I was supposed to address the scipy-dev
mailing list as described here, I'm sorry for misusing the e-mail.

--
Contact E-mail: hherbol at gmail.com
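[The PR's actual code (and its beta adaptation rule) is not reproduced
here; as a rough sketch of the general idea, this uses a fixed step length
in place of a line search while keeping the standard BFGS inverse-Hessian
update:]

    import numpy as np

    def bfgs_fixed_step(grad, x0, alpha=0.1, tol=1e-8, maxiter=500):
        # Standard BFGS update of the inverse-Hessian approximation H,
        # but with a fixed step length alpha instead of a line search.
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        H = np.eye(x.size)
        for _ in range(maxiter):
            if np.linalg.norm(g) < tol:
                break
            s = -alpha * H.dot(g)      # damped quasi-Newton step
            x_new = x + s
            g_new = grad(x_new)
            y = g_new - g
            sy = s.dot(y)
            if sy > 1e-12:             # skip the update when the
                rho = 1.0 / sy         # curvature condition fails
                I = np.eye(x.size)
                H = (I - rho * np.outer(s, y)).dot(H).dot(
                    I - rho * np.outer(y, s)) + rho * np.outer(s, s)
            x, g = x_new, g_new
        return x

    # minimize f(v) = (v0 - 1)**2 + 2*(v1 + 2)**2 from its gradient alone
    xmin = bfgs_fixed_step(lambda v: np.array([2.0 * (v[0] - 1.0),
                                               4.0 * (v[1] + 2.0)]),
                           np.zeros(2))
    print(np.round(xmin, 5))           # [ 1. -2.]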
From superscript92 at yahoo.com Wed Oct 7 01:31:47 2015
From: superscript92 at yahoo.com (Jamie Tsao)
Date: Tue, 6 Oct 2015 22:31:47 -0700
Subject: [SciPy-Dev] Scipy.io.savemat optimization issue.
Message-ID: <1444195907.71757.YahooMailAndroidMobile@web122904.mail.ne1.yahoo.com>

I was attempting to add an optimization to help save some memory from
using savemat, along with my other commits attempting to fix a bug
concerning compression with zlib. The optimization uses an np.array's data
attribute, which in python 2.7 is a buffer. This is nice, because in the
case the array is Fortran contiguous (often so with 1D arrays, unless
created from .real or .imag of complex arrays), I can just pass the data
buffer to file.write() without using much memory. I.e. a real sparse
matrix will at most use 133% of the matrix's memory to save it to disk.
And even if it isn't Fortran contiguous, I would just do as originally
done: use tostring() to get the bytes in Fortran order.

But then comes python 3. Using python 3.4, I found that now np.array.data
is not a buffer but a memoryview. Unfortunately, the memoryview doesn't
have the same ability to grab the underlying bytes in the same manner, so
file.write() won't write it correctly. Furthermore, file.write() only
accepts strings, not bytes. Hence, I would have to do something like
str(bytes(arr.data)) to pass it and save to disk, which isn't as good as
calling tostring(). What should I do to get around this?

My failed pull request is here, where the last commit concerns this
problem: http://www.github.com/scipy/scipy/pull/5325

On a side note, is my approach to byte counting fine? I was hoping that it
will no longer need to seek around, which helps out a lot with
compression, but my current code does this even without compression.
Originally, I was afraid the running time would be twice as long (although
I claim not many will repeatedly use savemat to the point of it being a
bottleneck), but it turns out its runtime is nearly the same. Weird?

Lastly, I didn't add any test cases. The only bug I fixed was concerning
compressing down a >2GB (in my case, sparse) matrix, but I don't want test
cases to create massive matrices and use up tons of memory.

-Jamie Tsao

Sent from Yahoo Mail on Android
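[One way around this, sketched under the assumption that the target file
is opened in binary mode ('wb'): binary-mode file objects on Python 3
accept buffer-like arguments such as memoryviews directly, so no
str()/bytes() round trip is needed; write_fortran_raw is a hypothetical
helper, not scipy API:]

    import numpy as np

    def write_fortran_raw(fileobj, arr):
        # fileobj must be opened in *binary* mode ('wb')
        if arr.ndim <= 1 and arr.flags.c_contiguous:
            # zero-copy path: a contiguous 1-D buffer is already in
            # "Fortran" (= C) order, hand it straight to write()
            fileobj.write(memoryview(arr))
        else:
            # otherwise pay for one explicit copy in Fortran order
            fileobj.write(arr.tobytes(order='F'))

    a = np.arange(10, dtype=np.float64)
    with open('raw.bin', 'wb') as fh:
        write_fortran_raw(fh, a)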
From jtaylor.debian at googlemail.com Wed Oct 7 07:30:18 2015
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Wed, 7 Oct 2015 13:30:18 +0200
Subject: [SciPy-Dev] [Numpy-discussion] Numpy 1.10.0 release
In-Reply-To:
References:
Message-ID: <5615024A.2020403@googlemail.com>

On 10/06/2015 01:45 PM, Neal Becker wrote:
> Are extra_compile_args actually used in all compile steps?

extra_compile_args is not used by numpy; it's there to support some
third-party use case I never understood. As the typical site.cfg used by
numpy only contains binaries that are never compiled by numpy itself, it
should have no effect on anything.

> CFLAGS='-march=native -O3' python setup.py build
>
> Does seem to use my CFLAGS, as it always did on previous numpy versions.

Still seems to work for me, though the preferred variable is OPT=, as
CFLAGS will contain a bunch of other stuff related to building python
extensions themselves (e.g. -fno-strict-aliasing).

From jtaylor.debian at googlemail.com Wed Oct 7 07:40:46 2015
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Wed, 7 Oct 2015 13:40:46 +0200
Subject: [SciPy-Dev] [Numpy-discussion] Numpy 1.10.0 release
In-Reply-To:
References:
Message-ID: <561504BE.50500@googlemail.com>

On 10/06/2015 02:08 PM, Daπid wrote:
> I don't get any failures on Fedora 22. I have installed it with pip,
> setting my CFLAGS to "-march=core-avx-i -O2 -pipe -mtune=native" and
> linking against openblas.
>
> With the new Numpy, Scipy full suite shows two errors, I am sorry I
> didn't think of running that in the RC phase:
> ======================================================================
> FAIL: test_weighting (test_stats.TestHistogram)

This is a known issue, see scipy/scipy#5148. It can most likely be
ignored, as the scipy test is too sensitive to floating point rounding.

From david.pugh at maths.ox.ac.uk Wed Oct 7 09:49:16 2015
From: david.pugh at maths.ox.ac.uk (David Pugh)
Date: Wed, 7 Oct 2015 14:49:16 +0100
Subject: [SciPy-Dev] Boundary Value Problem solver for inclusion in SciPy
Message-ID:

All,

I am developing a pure-python (i.e., no external fortran/C dependencies)
two-point boundary value solver tentatively called pyCollocation and I am
writing to gauge the level of interest in eventually including the solver
in SciPy.

v/r,
Dr. David R. Pugh
Post-doctoral research fellow
INET Oxford
Mathematical Institute
Oxford University

On Wed, Oct 7, 2015 at 1:00 PM, <scipy-dev-request at scipy.org> wrote:

> Send SciPy-Dev mailing list submissions to
>         scipy-dev at scipy.org
>
> Today's Topics:
>
>    1. Re: [Numpy-discussion] Numpy 1.10.0 release (Daπid)
>    2. Pull Request #5318 (Henry Herbol)
>    3. Scipy.io.savemat optimization issue. (Jamie Tsao)
>    4. Re: [Numpy-discussion] Numpy 1.10.0 release (Julian Taylor)
>    5. Re: [Numpy-discussion] Numpy 1.10.0 release (Julian Taylor)
>
> [...]
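[For readers unfamiliar with the approach such a solver takes: collocation
expands the unknown solution in a basis and forces the ODE residual to
vanish at chosen points. A generic sketch of that idea follows; this is
not pyCollocation's API, and the Chebyshev basis and all names are
illustrative choices:]

    import numpy as np
    from numpy.polynomial import chebyshev as cheb
    from scipy.optimize import root

    # Two-point BVP:  y'' + y = 0 on [0, 1],  y(0) = 0,  y(1) = 1.
    # Exact solution: y(x) = sin(x) / sin(1).

    deg = 10
    t = cheb.chebpts1(deg - 1)   # interior collocation nodes in (-1, 1)

    def residual(c):
        # y(x) is the Chebyshev series c in t = 2*x - 1, so by the
        # chain rule y''(x) = 4 * (second t-derivative of the series).
        ode = 4.0 * cheb.chebval(t, cheb.chebder(c, 2)) + cheb.chebval(t, c)
        bcs = [cheb.chebval(-1.0, c),        # y(0) = 0
               cheb.chebval(1.0, c) - 1.0]   # y(1) = 1
        return np.concatenate([ode, bcs])    # (deg-1) + 2 = deg+1 equations

    sol = root(residual, np.zeros(deg + 1))  # deg+1 unknown coefficients
    x = np.linspace(0.0, 1.0, 5)
    y = cheb.chebval(2.0 * x - 1.0, sol.x)
    print(np.allclose(y, np.sin(x) / np.sin(1.0), atol=1e-6))   # True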
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From archibald at astron.nl Wed Oct 7 17:16:52 2015 From: archibald at astron.nl (Anne Archibald) Date: Wed, 07 Oct 2015 21:16:52 +0000 Subject: [SciPy-Dev] vectorized scipy.integrate.quad In-Reply-To: References: <87bncrg3uf.fsf@grothesque.org> <87io6p48f0.fsf@grothesque.org> Message-ID: On Tue, Oct 6, 2015 at 3:45 AM Sturla Molden wrote: > On 03/10/15 01:26, Anne Archibald wrote: > > > The additional hack where I used coroutines is a different kind of > > concurrency; it's a way to squeeze extra concurrency out of recursive or > > other more-complex algorithms. You can actually do the same using > > threads; I have thread- and coroutine-based versions of the integrator, > > and they look very similar. > > Personally I find that I prefer to make pipelines with threads (or > processes) and queues, because coroutines make my brain overheat. > Well, that's kind of the point - threads, processes, coroutines, even multiple inheritance and other "normal" flow-control procedures can be hopelessly confusing if used raw. We need abstractions that make concurrency mentally manageable but also allow reasonable parallelism. Fortunately the language designers have been thinking about this and have come up with some models for concurrency with less foot-shooting potential. Although concurrency by definition introduces non-determinism, some structures for concurrency can completely hide this from the programmer. Futures can be used this way, for example. Pipelines can be another way to express concurrency; taken to an extreme you can get dataflow languages like pd. A sufficiently smart pipeline implementation can prevent headaches with buffering and order of evaluation. Coroutines generally can be quite confusing, but python's asyncio implementation is essentially just cooperative multitasking: "async def" marks a function that can be suspended (and therefore needs to be run from a scheduler ("event loop")), and "await" marks a potential suspension point. Otherwise it looks just like threads. Anne -------------- next part -------------- An HTML attachment was scrubbed... URL: From gyromagnetic at gmail.com Wed Oct 7 19:02:09 2015 From: gyromagnetic at gmail.com (Gyro Funch) Date: Wed, 7 Oct 2015 17:02:09 -0600 Subject: [SciPy-Dev] Boundary Value Problem solver for inclusion in SciPy In-Reply-To: References: Message-ID: On 10/7/2015 7:49 AM, David Pugh wrote: > All, > > I am developing a pure-python (i.e., no external fortran/C > dependencies) two-point boundary value solver tentatively called > pyCollocation and I > am writing to gauge the level of interest in eventually including > the solver in SciPy. > > v/r, > > Dr. David R. Pugh Post-doctoral research fellow INET Oxford > Mathematical Institute Oxford University > I would certainly find this useful for my research and teaching. -gyro From pav at iki.fi Thu Oct 8 04:16:40 2015 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 8 Oct 2015 08:16:40 +0000 (UTC) Subject: [SciPy-Dev] Boundary Value Problem solver for inclusion in SciPy References: Message-ID: David Pugh maths.ox.ac.uk> writes: > I am developing a pure-python (i.e., no external > fortran/C dependencies) two-point boundary value > solver tentatively called pyCollocation and?I am > writing to gauge the level of interest in eventually > including the solver in SciPy. This would fall into the scope for Scipy. 
From gyromagnetic at gmail.com Wed Oct 7 19:02:09 2015 From: gyromagnetic at gmail.com (Gyro Funch) Date: Wed, 7 Oct 2015 17:02:09 -0600 Subject: Re: [SciPy-Dev] Boundary Value Problem solver for inclusion in SciPy In-Reply-To: References: Message-ID: On 10/7/2015 7:49 AM, David Pugh wrote: > All, > > I am developing a pure-python (i.e., no external fortran/C > dependencies) two-point boundary value solver tentatively called > pyCollocation and I > am writing to gauge the level of interest in eventually including > the solver in SciPy. > > v/r, > > Dr. David R. Pugh Post-doctoral research fellow INET Oxford > Mathematical Institute Oxford University > I would certainly find this useful for my research and teaching. -gyro

From pav at iki.fi Thu Oct 8 04:16:40 2015 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 8 Oct 2015 08:16:40 +0000 (UTC) Subject: [SciPy-Dev] Boundary Value Problem solver for inclusion in SciPy References: Message-ID: David Pugh maths.ox.ac.uk> writes: > I am developing a pure-python (i.e., no external > fortran/C dependencies) two-point boundary value > solver tentatively called pyCollocation and I am > writing to gauge the level of interest in eventually > including the solver in SciPy. This would fall into the scope for Scipy. Existing Python packages for BVPs have not been included so far largely due to licensing issues and due to the situation with free Fortran 90 compilers on Windows (this latter problem, however, will likely be solved in the near future). -- Pauli Virtanen

From drobert.pugh at gmail.com Thu Oct 8 08:26:51 2015 From: drobert.pugh at gmail.com (David Pugh) Date: Thu, 8 Oct 2015 13:26:51 +0100 Subject: [SciPy-Dev] Boundary Value Problem solver for inclusion in SciPy Message-ID: Excellent. The current design of my API separates solving a BVP into three parts:

1. specify your problem (similar to scikits.bvp_solver)
2. choose your basis functions (currently, any of the polynomial classes from NumPy are supported; in the very near future I plan to add B-splines, and in the medium term I would hope to add support for a shape-preserving option)
3. solve (solvers just wrap the underlying nonlinear equation solvers in SciPy)

I have a number of example notebooks and would welcome pointers to BVPs that would be useful for either examples or additional tests. D
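Point 2 above works because the NumPy polynomial classes sit behind a common interface (construction from coefficients, evaluation, deriv()), so a collocation solver can treat them as interchangeable bases. A quick illustration with made-up coefficients:

    import numpy as np

    for basis in (np.polynomial.Polynomial,
                  np.polynomial.Chebyshev,
                  np.polynomial.Legendre):
        p = basis([1.0, 2.0, 3.0], domain=[0, 1])
        print(basis.__name__, p(0.5), p.deriv()(0.5))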
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From charlesnwoods at gmail.com Fri Oct 9 10:56:35 2015 From: charlesnwoods at gmail.com (Nathan Woods) Date: Fri, 9 Oct 2015 08:56:35 -0600 Subject: Re: [SciPy-Dev] scipy.integrate.nquad full_output In-Reply-To: References: Message-ID: I've come up with a solution that uses h5py to handle the data, and raises an error if someone tries to use full_output without having h5py installed. I know that scipy frowns on external dependencies, but this really ends up being a problem due to the large amounts of data that are generated (> 3GB for a 4-d problem, just pickling a big mass of dict objects). Additionally, there's no way to get the data out of scipy as-is without hacking the scipy code itself, so whatever output mechanism is chosen has to be part of the scipy codebase. The alternative (as I see it) is to find some way to reduce the data to something meaningful. For something like neval, that's pretty easy. For something like elist, it's not obvious what the appropriate reduction would be. That inclines me toward the idea of outputting everything when asked, at the cost of an external dependency. Questions, comments, concerns? Nathan Woods On Wed, Oct 7, 2015 at 9:44 AM, Nathan Woods wrote: > Hey everyone, > > I need a design consult. I'm trying to enable the use of the quad > full_output option when using nquad, and I'm trying to come up with a good > way to manage all of the data that comes back from using something like > that. Any suggestions are welcome. > > https://github.com/scipy/scipy/issues/5330 > > Nathan Woods -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ralf.gommers at gmail.com Sat Oct 10 09:07:13 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 10 Oct 2015 15:07:13 +0200 Subject: [SciPy-Dev] plan for 0.16.1 release Message-ID: Hi all, Quite a few fixes have accumulated, so it's time for a 0.16.1 bugfix release. I'd like to aim to get this done within two weeks. At the moment what will go in is all open and closed PRs/issues under: https://github.com/scipy/scipy/milestones/0.16.1 https://github.com/scipy/scipy/labels/backport-candidate If you think other things should go in, please speak up or add the PR to the milestone or backport-candidate label. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL:

From pete.bunch at gmail.com Mon Oct 12 01:43:22 2015 From: pete.bunch at gmail.com (Pete Bunch) Date: Sun, 11 Oct 2015 22:43:22 -0700 Subject: [SciPy-Dev] Matrix normal distribution Message-ID: Hi scipy folks, I'd like to propose adding the matrix normal distribution to scipy.stats. https://github.com/scipy/scipy/pull/5296 It's useful for Bayesian learning of linear systems, especially in combination with the Wishart and inverse Wishart distributions which were recently added to stats. Any comments, questions, criticisms, suggestions? @drpeteb
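For context, the matrix normal MN(M, U, V) can be sampled by the standard construction X = M + A Z B, where A A' = U is the among-row covariance, B' B = V the among-column covariance, and Z has i.i.d. standard normal entries. A sketch of that construction (not necessarily how the PR implements it):

    import numpy as np

    def matrix_normal_sample(M, U, V):
        A = np.linalg.cholesky(U)        # A A' = U
        B = np.linalg.cholesky(V).T      # B' B = V
        Z = np.random.standard_normal(M.shape)
        return M + A.dot(Z).dot(B)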
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From rohit.jamuar at intel.com Mon Oct 12 11:35:22 2015 From: rohit.jamuar at intel.com (Jamuar, Rohit) Date: Mon, 12 Oct 2015 15:35:22 +0000 Subject: [SciPy-Dev] Scipy test-failures and absolute tolerance-levels Message-ID: <44AB4E37DBA7F647AC3E0AB2586710E2064A2348@ORSMSX102.amr.corp.intel.com> Hi, I'm one of the engineers from the Python Scripting team at Intel Corporation. One of the things that we've been trying to achieve is to improve the performance of numerical and scientific computation packages - numpy and scipy, for starters. While building SciPy (v0.16) with the Intel Compiler, we see failures that are caused by stringent absolute tolerance levels. It also seems that you ran into such problems with these tests earlier. These are some of the failing tests: 1. test_j_roots() (from test_orthogonal in scipy.special) - a. vgq(rf(0.5, -0.5), ef(0.5, -0.5), wf(0.5, -0.5), -1., 1., 25, atol=1e-13): if the tolerance is changed to 1e-12, the test passes. 2. test_js_roots() (from test_orthogonal in scipy.special) - a. vgq(rf(1, 0.5), ef(1, 0.5), wf(1, 0.5), 0., 1., 25, atol=1e-13): if the tolerance is changed to 1e-12, the test passes. b. vgq(rf(68.9, 2.25), ef(68.9, 2.25), wf(68.9, 2.25), 0., 1., 5, atol=2e-14): if the tolerance is changed to 2e-13, the test passes. I would really appreciate it if you could share your rationale(s) behind altering tolerance levels - does choosing a new tolerance level depend on some theoretically defined range, or is it chosen empirically? This information would help us better understand the ramifications of changing these values. Thanks, Rohit -------------- next part -------------- An HTML attachment was scrubbed... URL:
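For context, the identity these tests exercise is easy to probe directly: a Gauss-Jacobi rule for the weight (1-x)**a * (1+x)**b on [-1, 1] has weights summing to the integral of the weight function, 2**(a+b+1) * B(a+1, b+1), which is pi for a = 0.5, b = -0.5. A sketch (not the test code itself; j_roots is the scipy 0.16 spelling, renamed roots_jacobi in later releases):

    import numpy as np
    from scipy import special

    x, w = special.j_roots(25, 0.5, -0.5)
    print(abs(w.sum() - np.pi))   # a few ulps with a well-behaved libm

Deviations at the 1e-13 level in quantities like this typically trace back to rounding differences in the compiler's math library rather than to the algorithm itself.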
From charlesr.harris at gmail.com Mon Oct 12 12:18:46 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 12 Oct 2015 10:18:46 -0600 Subject: Re: [SciPy-Dev] Scipy test-failures and absolute tolerance-levels In-Reply-To: <44AB4E37DBA7F647AC3E0AB2586710E2064A2348@ORSMSX102.amr.corp.intel.com> References: <44AB4E37DBA7F647AC3E0AB2586710E2064A2348@ORSMSX102.amr.corp.intel.com> Message-ID: On Mon, Oct 12, 2015 at 9:35 AM, Jamuar, Rohit wrote: > Hi, > > I'm one of the engineers from the Python Scripting team at Intel Corporation. > [...] While building SciPy (v0.16) with the Intel Compiler, we see failures > that are caused by stringent absolute tolerance levels. [...] > > I would really appreciate it if you could share your rationale(s) behind > altering tolerance levels - does choosing a new tolerance level depend on > some theoretically defined range, or is it chosen empirically? This should be opened as an issue on the GitHub issue tracker. The tolerances are usually chosen to be reasonable, but not too loose. It might be worth trying to track down the reason for the discrepancy here, as the tolerances should be adequate to account for normal roundoff error. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL:

From charlesr.harris at gmail.com Mon Oct 12 12:27:08 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 12 Oct 2015 10:27:08 -0600 Subject: [SciPy-Dev] Numpy 1.10.1 released. Message-ID: Hi All, I'm pleased to announce the release of Numpy 1.10.1. This release fixes some build problems and serves to reset the release number on PyPI to something usable. As a note for future release managers, I had to upload these files from the command line, as using the file upload option at PyPI resulted in a failure to parse the version.

NumPy 1.10.1 Release Notes
**************************

This release deals with a few build problems that showed up in 1.10.0. Most users would not have seen these problems. The differences are:

* Compiling with msvc9 or msvc10 for 32 bit Windows now requires SSE2. This was the easiest fix for what looked to be some miscompiled code when SSE2 was not used. If you need to compile for 32 bit Windows systems without SSE2 support, mingw32 should still work.
* Make compiling with the VS2008 python2.7 SDK easier.
* Change Intel compiler options so that code will also be generated to support systems without SSE4.2.
* Some _config test functions needed an explicit integer return in order to avoid the openSUSE rpmlinter erring out.
* We ran into a problem with PyPI not allowing reuse of filenames and a resulting proliferation of *.*.*.postN releases. Not only were the names getting out of hand, some packages were unable to work with the postN suffix.

Numpy 1.10.1 supports Python 2.6 - 2.7 and 3.2 - 3.5.

Commits:

45a3d84 DEP: Remove warning for `full` when dtype is set.
0c1a5df BLD: import setuptools to allow compile with VS2008 python2.7 sdk
04211c6 BUG: mask nan to 1 in ordered compare
826716f DOC: Document the reason msvc requires SSE2 on 32 bit platforms.
49fa187 BLD: enable SSE2 for 32-bit msvc 9 and 10 compilers
dcbc4cc MAINT: remove Wreturn-type warnings from config checks
d6564cb BLD: do not build exclusively for SSE4.2 processors
15cb66f BLD: do not build exclusively for SSE4.2 processors
c38bc08 DOC: fix var. reference in percentile docstring
78497f4 DOC: Sync 1.10.0-notes.rst in 1.10.x branch with master.

Cheers, Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL:

From matthew.brett at gmail.com Mon Oct 12 13:15:00 2015 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 12 Oct 2015 10:15:00 -0700 Subject: Re: [SciPy-Dev] [Numpy-discussion] Numpy 1.10.1 released. In-Reply-To: References: Message-ID: Hi, On Mon, Oct 12, 2015 at 9:27 AM, Charles R Harris wrote: > Hi All, > > I'm pleased to announce the release of Numpy 1.10.1. This release fixes some > build problems and serves to reset the release number on PyPI to something > usable. As a note for future release managers, I had to upload these files > from the command line, as using the file upload option at PyPI resulted in a > failure to parse the version. > [...]
Thanks a lot for guiding this through. I uploaded the OSX wheels to PyPI via: https://github.com/MacPython/numpy-wheels Cheers, Matthew

From njs at pobox.com Wed Oct 14 12:38:45 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 14 Oct 2015 09:38:45 -0700 Subject: Re: [SciPy-Dev] [Numpy-discussion] Numpy 1.10.1 released. In-Reply-To: References: Message-ID: On Oct 14, 2015 9:15 AM, "Chris Barker" wrote: > > On Mon, Oct 12, 2015 at 9:27 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: >> >> * Compiling with msvc9 or msvc10 for 32 bit Windows now requires SSE2. >> This was the easiest fix for what looked to be some miscompiled code when >> SSE2 was not used. > > Note that there is discussion right now on python-dev about requiring SSE2 for the python.org build of Python 3.5 -- it does now, so it's fine for third-party packages to also require it. But there is some talk of removing that requirement -- still a lot of old machines around, I guess -- particularly at schools and the like. Note that the 1.10.1 release announcement is somewhat misleading -- apparently the affected builds have actually required SSE2 since numpy 1.8, and the change here just makes it even more required. I'm not sure if this is all 32 bit builds or only ones using msvc that have been needing SSE2 all along. The change in 1.10.1 only affects msvc, which is not what most people are using (IIUC Enthought Canopy uses msvc, but the pypi, gohlke, and Anaconda builds don't). I'm actually not sure if anyone even uses the 32 bit builds at all :-) > Ideally, any binary wheels on PyPI should be compatible with the python.org builds -- so not require SSE2, if the python.org builds don't. > > Though we had this discussion a while back -- and numpy could, and maybe should require more -- did we ever figure out a way to get a meaningful message to the user if they try to run an SSE2 build on a machine without SSE2? It's not that difficult in principle, just someone has to do it :-).
-n -------------- next part -------------- An HTML attachment was scrubbed... URL:

From cournape at gmail.com Wed Oct 14 16:34:35 2015 From: cournape at gmail.com (David Cournapeau) Date: Wed, 14 Oct 2015 21:34:35 +0100 Subject: Re: [SciPy-Dev] [Numpy-discussion] Numpy 1.10.1 released. In-Reply-To: References: Message-ID: On Wed, Oct 14, 2015 at 5:38 PM, Nathaniel Smith wrote: > [...] > I'm actually not sure if anyone even uses the 32 bit builds at all :-) I cannot divulge exact figures for downloads, but for us at Enthought, Windows 32 bits is in the same ballpark as OS X and Linux (64 bits) in terms of proportion, Windows 64 bits being significantly more popular. Linux 32 bits and OS X 32 bits have been in the 1% range each of our downloads for a while (we recently stopped support for both). David -------------- next part -------------- An HTML attachment was scrubbed... URL:
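On the "meaningful message" question above, the detection half really is simple; a Linux-only sketch (an illustration, not existing numpy code -- a real import-time guard would also need a cpuid-based path for Windows and OS X):

    def has_sse2():
        # the Linux kernel lists CPU features in /proc/cpuinfo; 'sse2'
        # appears among the 'flags' tokens if the processor supports it
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('flags'):
                    return 'sse2' in line.split()
        return False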
From ralf.gommers at gmail.com Thu Oct 15 13:57:15 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 15 Oct 2015 19:57:15 +0200 Subject: [SciPy-Dev] welcome CJ Carey to the Scipy dev team Message-ID: Hi all, On behalf of the Scipy developers I'd like to welcome CJ Carey (@perimosocordiae) as a member of the core development team. CJ has been contributing for about 2 years, mainly to scipy.sparse. I'm looking forward to his continued contributions! Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL:

From warren.weckesser at gmail.com Thu Oct 15 13:59:29 2015 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Thu, 15 Oct 2015 13:59:29 -0400 Subject: Re: [SciPy-Dev] welcome CJ Carey to the Scipy dev team In-Reply-To: References: Message-ID: On Thu, Oct 15, 2015 at 1:57 PM, Ralf Gommers wrote: > Hi all, > > On behalf of the Scipy developers I'd like to welcome CJ Carey (@perimosocordiae) > as a member of the core development team. CJ has been contributing for > about 2 years, mainly to scipy.sparse. I'm looking forward to his continued > contributions! CJ, thanks for all the great contributions, and keep up the good work! Warren -------------- next part -------------- An HTML attachment was scrubbed... URL:

From kitchi.srikrishna at gmail.com Fri Oct 16 00:21:06 2015 From: kitchi.srikrishna at gmail.com (Sri Krishna) Date: Fri, 16 Oct 2015 09:51:06 +0530 Subject: [SciPy-Dev] Limit number of iterations in sigma clipping Message-ID: Hi, The current sigma clipping function has no iters keyword. I feel it would be useful to include it in the function, with a default like `iters=None` so that if people do want to use the keyword they will be able to do so. With the default of `iters=None` it shouldn't break anyone's workflow and will correspond to the current default behaviour. If there is any interest in this, I can file a pull request soon. Thanks, Srikrishna Sekhar -------------- next part -------------- An HTML attachment was scrubbed... URL:

From prabhu at aero.iitb.ac.in Fri Oct 16 07:46:59 2015 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Fri, 16 Oct 2015 17:16:59 +0530 Subject: [SciPy-Dev] [ANN] SciPy India 2015: call for papers Message-ID: <5620E3B3.6010803@aero.iitb.ac.in> Hello, [Apologies for the cross-posting!] The CFP and registration for SciPy India 2015 (http://scipy.in) are open. SciPy India 2015 will be held at IIT Bombay from December 14th to December 16th, 2015. Please spread the word! SciPy India is an annual conference on using Python for research and education. The conference is currently in its seventh year.

Call for Papers
===============
We look forward to your submissions on the use of Python for scientific computing and education. This includes pedagogy, exploration, modeling and analysis from both applied and developmental perspectives. We welcome contributions from academia as well as industry. For details on the paper submission please see here: http://scipy.in/2015/cfp/

Important Dates
===============
- Call for proposals end: 24th November 2015
- List of accepted proposals will be published: 1st December 2015.

We look forward to seeing you at SciPy India. Regards, Prabhu Ramachandran and Jarrod Millman

From lkelley at cfa.harvard.edu Sat Oct 17 16:29:02 2015 From: lkelley at cfa.harvard.edu (Luke Zoltan Kelley) Date: Sat, 17 Oct 2015 16:29:02 -0400 Subject: [SciPy-Dev] Setting up a dev environment with conda Message-ID: <4E963723-4CB1-4B5E-8A7A-03336761718A@cfa.harvard.edu> I'm trying to play around with development for numpy. I've just switched over to using anaconda to make this easier, but I'm having trouble with the details.
I've installed anaconda, and made an environment for numpy development:

$ source activate numpy-py27
$ conda list -e
# platform: osx-64
cython=0.23.4=py27_0
nose=1.3.7=py27_0
openssl=1.0.1k=1
pip=7.1.2=py27_0
python=2.7.10=1
readline=6.2=2
setuptools=18.4=py27_0
sqlite=3.8.4.1=1
tk=8.5.18=0
wheel=0.26.0=py27_1
zlib=1.2.8=0

I've cloned my fork of the numpy/numpy git repo to a local directory, and built the module in-place, i.e.

$ git clone https://github.com//numpy.git
$ cd numpy
$ python setup.py build_ext --inplace

But when I try to install, I get the error:

$ python setup.py install
Running from numpy source directory.
Traceback (most recent call last):
  File "setup.py", line 264, in <module>
    setup_package()
  File "setup.py", line 248, in setup_package
    from numpy.distutils.core import setup
  File "/Users/lzkelley/Programs/public/numpy/numpy/distutils/__init__.py", line 21, in <module>
    from numpy.testing import Tester
  File "/Users/lzkelley/Programs/public/numpy/numpy/testing/__init__.py", line 14, in <module>
    from .utils import *
  File "/Users/lzkelley/Programs/public/numpy/numpy/testing/utils.py", line 17, in <module>
    from numpy.core import float32, empty, arange, array_repr, ndarray
  File "/Users/lzkelley/Programs/public/numpy/numpy/core/__init__.py", line 59, in <module>
    test = Tester().test
  File "/Users/lzkelley/Programs/public/numpy/numpy/testing/nosetester.py", line 180, in __init__
    if raise_warnings is None and '.dev0' in np.__version__:
AttributeError: 'module' object has no attribute '__version__'

I can circumvent this by adding the `numpy` directory to my `PYTHONPATH` --- but that also prevents me from isolating the development environment... what is the standard procedure here? Thanks! Luke -------------- next part -------------- An HTML attachment was scrubbed... URL:

From charlesnwoods at gmail.com Sat Oct 17 17:30:33 2015 From: charlesnwoods at gmail.com (Nathan Woods) Date: Sat, 17 Oct 2015 15:30:33 -0600 Subject: Re: [SciPy-Dev] Setting up a dev environment with conda In-Reply-To: <4E963723-4CB1-4B5E-8A7A-03336761718A@cfa.harvard.edu> References: <4E963723-4CB1-4B5E-8A7A-03336761718A@cfa.harvard.edu> Message-ID: <3A5E427F-600B-4248-B6C2-05D7295D3BCE@gmail.com> If you're trying to install anyway, why bother with the --inplace build? If you want an isolated development environment, try anaconda environments. They work pretty well. > On Oct 17, 2015, at 2:29 PM, Luke Zoltan Kelley wrote: > > I'm trying to play around with development for numpy. I've just switched over to using anaconda to make this easier, but I'm having trouble with the details. > [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From lkelley at cfa.harvard.edu Sat Oct 17 17:55:06 2015 From: lkelley at cfa.harvard.edu (Luke Zoltan Kelley) Date: Sat, 17 Oct 2015 17:55:06 -0400 Subject: [SciPy-Dev] Setting up a dev environment with conda Message-ID: <3A97BA32-9431-4923-A7F3-C9A24D950449@cfa.harvard.edu> Charles, that's exactly what I'm trying to do. How do I install from source, to a specific anaconda environment?

From andyfaff at gmail.com Sat Oct 17 17:57:45 2015 From: andyfaff at gmail.com (Andrew Nelson) Date: Sun, 18 Oct 2015 08:57:45 +1100 Subject: Re: [SciPy-Dev] Setting up a dev environment with conda In-Reply-To: <3A97BA32-9431-4923-A7F3-C9A24D950449@cfa.harvard.edu> References: <3A97BA32-9431-4923-A7F3-C9A24D950449@cfa.harvard.edu> Message-ID: It would get installed into whatever conda environment you had activated. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From lkelley at cfa.harvard.edu Sat Oct 17 17:59:49 2015 From: lkelley at cfa.harvard.edu (Luke Zoltan Kelley) Date: Sat, 17 Oct 2015 17:59:49 -0400 Subject: Re: [SciPy-Dev] Setting up a dev environment with conda In-Reply-To: References: <3A97BA32-9431-4923-A7F3-C9A24D950449@cfa.harvard.edu> Message-ID: <7A24EAB8-98E2-437F-85F1-6D573924FDEC@cfa.harvard.edu> When trying to do that is when I get the error I described in the OP, i.e. I get an error when trying to install. > On Oct 17, 2015, at 5:57 PM, Andrew Nelson wrote: > > It would get installed into whatever conda environment you had activated. -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From charlesnwoods at gmail.com Sat Oct 17 18:41:14 2015 From: charlesnwoods at gmail.com (Nathan Woods) Date: Sat, 17 Oct 2015 16:41:14 -0600 Subject: Re: [SciPy-Dev] Setting up a dev environment with conda In-Reply-To: References: <3A97BA32-9431-4923-A7F3-C9A24D950449@cfa.harvard.edu> Message-ID: Something like:

source activate environment
python setup.py install

I've never had trouble with just doing that, though I haven't built Numpy. Scipy works, though. > On Oct 17, 2015, at 3:57 PM, Andrew Nelson wrote: > > It would get installed into whatever conda environment you had activated. > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL:

From charlesnwoods at gmail.com Sat Oct 17 18:42:40 2015 From: charlesnwoods at gmail.com (Nathan Woods) Date: Sat, 17 Oct 2015 16:42:40 -0600 Subject: Re: [SciPy-Dev] Setting up a dev environment with conda In-Reply-To: <7A24EAB8-98E2-437F-85F1-6D573924FDEC@cfa.harvard.edu> References: <3A97BA32-9431-4923-A7F3-C9A24D950449@cfa.harvard.edu> <7A24EAB8-98E2-437F-85F1-6D573924FDEC@cfa.harvard.edu> Message-ID: <9DAD3224-EA17-4303-A7B1-3A130DE4F51D@gmail.com> My best guess is to nuke and re-clone Numpy, then do setup.py install without the in-place build. What you're doing seems like it should work, though, so I'm not sure what's going on. Nathan Woods > On Oct 17, 2015, at 3:59 PM, Luke Zoltan Kelley wrote: > >> When trying to do that is when I get the error I described in the OP, i.e. I get an error when trying to install. >> >> >>> On Oct 17, 2015, at 5:57 PM, Andrew Nelson > wrote: >>> >>> It would get installed into whatever conda environment you had activated.
>>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at scipy.org >>> https://mail.scipy.org/mailman/listinfo/scipy-dev >> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at scipy.org >> https://mail.scipy.org/mailman/listinfo/scipy-dev > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > https://mail.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sat Oct 17 19:15:01 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 17 Oct 2015 16:15:01 -0700 Subject: [SciPy-Dev] Setting up a dev environment with conda In-Reply-To: References: <3A97BA32-9431-4923-A7F3-C9A24D950449@cfa.harvard.edu> <7A24EAB8-98E2-437F-85F1-6D573924FDEC@cfa.harvard.edu> <9DAD3224-EA17-4303-A7B1-3A130DE4F51D@gmail.com> Message-ID: Hi Luke, For day-to-day development and testing of numpy, I don't bother with either inplace builds *or* installing it -- I just use the magical "runtests.py" script that you'll find in the root of your git checkout. E.g., to build and then test the (possibly modified) source in your current checkout, just do: ./runtests.py That's all. This builds into a hidden directory and then sets up the correct PYTHONPATH before running the tests etc. -- you don't have to worry about any of it, it's magic. There are also lots of options, see ./runtests.py --help. Try adding -j for multi-core builds, or you can specify arbitrary options to pass to nose, or you can run it under gdb (there's an example in --help), or if you just want an interactive shell to futz around in manually instead of running the test suite then try passing --ipython. BTW, numpy has its own mailing list at numpy-discussion at scipy.org (CC'ed), which is where numpy development discussions usually take place -- this list is more for scipy-the-package itself. There's lots of overlap in readership between the two lists, but numpy-discussion will probably give you quicker and more useful answers to questions like this in general :-) -n On Sat, Oct 17, 2015 at 4:03 PM, Luke Zoltan Kelley wrote: > Thanks Nathan, I'll try that. Both without the inplace build, I'll have to > rebuild and install everytime I want to test something, right? > > On Oct 17, 2015, at 6:42 PM, Nathan Woods wrote: > > My best guess is to nuke and reclone Numpy, then do setup.py install without > the inplace build. What you're doing seems like it should work, though, so > I'm not sure what's going on. > > Nathan Woods > > On Oct 17, 2015, at 3:59 PM, Luke Zoltan Kelley > wrote: > > When trying to do that is when I get the error I described in the OP. i.e. > I get an error when trying to install. > > > On Oct 17, 2015, at 5:57 PM, Andrew Nelson wrote: > > It would get installed into whatever conda environment you had activated. 
-- Nathaniel J. Smith -- http://vorpus.org

From ralf.gommers at gmail.com Sun Oct 18 05:35:19 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 18 Oct 2015 11:35:19 +0200 Subject: [SciPy-Dev] Limit number of iterations in sigma clipping In-Reply-To: References: Message-ID:

On Fri, Oct 16, 2015 at 6:21 AM, Sri Krishna wrote:
> Hi,
>
> The current sigma clipping function has no iters keyword.

For clarity: this is stats.sigmaclip

> I feel it would be useful to include it in the function, with a default like `iters=None`, so that if people do want to use the keyword they will be able to do so.
>
> With the default of `iters=None` it shouldn't break anyone's workflow, and it will correspond to the current default behaviour.
>
> If there is any interest in this, I can file a pull request soon.

Makes sense to me to add this.

I just saw that AstroPy has a much more extensive sigma clipping function. Replacing stats.sigmaclip with http://docs.astropy.org/en/latest/api/astropy.stats.sigma_clip.html would be the way to go I think. In a backwards-compatible fashion of course, so the `sigma` keyword from that function should be left out.

Ralf

From ralf.gommers at gmail.com Sun Oct 18 06:32:34 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 18 Oct 2015 12:32:34 +0200 Subject: [SciPy-Dev] deprecating stats.histogram? Message-ID:

Hi all,

In 0.16.0 we deprecated stats.histogram2 (in PR 4655) but somehow forgot to think about deprecating the 1-D version. At least I can't find a record on this list or on Github of such a discussion.

The difference between numpy.histogram and stats.histogram is very minor; basically stats.histogram uses the numpy version under the hood, with only a different default range. And it misses some features that have since been added to numpy.

Any objections to deprecating this function?

Ralf

From kitchi.srikrishna at gmail.com Sun Oct 18 07:18:55 2015 From: kitchi.srikrishna at gmail.com (Sri Krishna) Date: Sun, 18 Oct 2015 16:48:55 +0530 Subject: [SciPy-Dev] Limit number of iterations in sigma clipping In-Reply-To: References: Message-ID:

On 18 October 2015 at 15:05, Ralf Gommers wrote:
> Makes sense to me to add this.
>
> I just saw that AstroPy has a much more extensive sigma clipping function. Replacing stats.sigmaclip with http://docs.astropy.org/en/latest/api/astropy.stats.sigma_clip.html would be the way to go I think. In a backwards-compatible fashion of course, so the `sigma` keyword from that function should be left out.

Well, I am working on a pull request on the astropy sigma clipping routine to incorporate the scipy function - the scipy function can be up to 100x faster than the astropy function in benchmarks that I've run, and I'm not convinced that it is worthwhile losing that speed for features. So once the PR is merged, the astropy function will have the option to use the scipy function as well as the existing astropy function.

As far as I can tell, the speed improvements in the scipy.stats.sigmaclip function come about because we don't use masked arrays, and the sigma clipped data is discarded every iteration. We could potentially use another function instead of the mean (e.g. the median, or a user-supplied function analogous to astropy) without too much of a speed hit.

The only three features we need to port from the astropy function are the `cenfunc` and `varfunc` keywords, to define custom functions for calculating the mean/standard deviation, and the `axis` keyword. I'm not sure we need to use masked arrays, because of the previously mentioned performance implications.

So should I work on a pull request to adapt the astropy function into scipy in a backwards compatible manner?

Thanks, Krishna

From evgeny.burovskiy at gmail.com Sun Oct 18 07:21:52 2015 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Sun, 18 Oct 2015 12:21:52 +0100 Subject: [SciPy-Dev] deprecating stats.histogram? In-Reply-To: References: Message-ID:

I'm +1 for deprecating it.

On 18.10.2015 13:32, "Ralf Gommers" wrote:
> Hi all,
>
> In 0.16.0 we deprecated stats.histogram2 (in PR 4655) but somehow forgot to think about deprecating the 1-D version.
> (...)
> Any objections to deprecating this function?
>
> Ralf

From ralf.gommers at gmail.com Sun Oct 18 11:23:26 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 18 Oct 2015 17:23:26 +0200 Subject: [SciPy-Dev] Limit number of iterations in sigma clipping In-Reply-To: References: Message-ID:

On Sun, Oct 18, 2015 at 1:18 PM, Sri Krishna wrote:
> Well, I am working on a pull request on the astropy sigma clipping routine to incorporate the scipy function - the scipy function can be up to 100x faster than the astropy function in benchmarks that I've run, and I'm not convinced that it is worthwhile losing that speed for features. So once the PR is merged, the astropy function will have the option to use the scipy function as well as the existing astropy function.

Ouch, 100x is a lot.

> As far as I can tell, the speed improvements in the scipy.stats.sigmaclip function come about because we don't use masked arrays, and the sigma clipped data is discarded every iteration. We could potentially use another function instead of the mean (e.g. the median, or a user-supplied function analogous to astropy) without too much of a speed hit.
>
> The only three features we need to port from the astropy function are the `cenfunc` and `varfunc` keywords, to define custom functions for calculating the mean/standard deviation, and the `axis` keyword. I'm not sure we need to use masked arrays, because of the previously mentioned performance implications.
>
> So should I work on a pull request to adapt the astropy function into scipy in a backwards compatible manner?

All the keywords you mentioned make sense to add I think, so yes please.

And a bit of bikeshedding: the names used by AstroPy aren't optimal imho. Maybe:
- iters --> numiter
- cenfunc --> centerfunc

Ralf

From josef.pktd at gmail.com Sun Oct 18 11:42:59 2015 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 18 Oct 2015 11:42:59 -0400 Subject: [SciPy-Dev] Limit number of iterations in sigma clipping In-Reply-To: References: Message-ID:

On Sun, Oct 18, 2015 at 11:23 AM, Ralf Gommers wrote:
> All the keywords you mentioned make sense to add I think, so yes please.
>
> And a bit of bikeshedding: the names used by AstroPy aren't optimal imho. Maybe:
> - iters --> numiter

statsmodels uses maxiter in cases like this, because it can stop early.

> - cenfunc --> centerfunc

I think this should correspond to the robust ANOVA function. IIRC, names have power and are not bikesheds (default color choices ain't no bikesheds either).

Josef

From robert.pollak at jku.at Thu Oct 22 04:54:51 2015 From: robert.pollak at jku.at (Robert Pollak) Date: Thu, 22 Oct 2015 10:54:51 +0200 Subject: [SciPy-Dev] Scipy wiki is down again Message-ID: <5628A45B.8030804@jku.at>

Dear server administrator,

for several days now, the website http://wiki.scipy.org/ has only shown:
> Internal Server Error
>
> The server encountered an internal error or misconfiguration and was unable to complete your request.
>
> Please contact the server administrator at root at enthought.com to inform them of the time this error occurred, and the actions you performed just before this error.
>
> More information about this error may be available in the server error log.

Please make the wiki available again. I need its page "NumPy_for_Matlab_Users" for a workshop with my colleagues.

Best regards, Robert Pollak

From j.cossio.diaz at gmail.com Thu Oct 22 11:09:33 2015 From: j.cossio.diaz at gmail.com (Jorge Fernandez-de-Cossio-Diaz) Date: Thu, 22 Oct 2015 11:09:33 -0400 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: <5628A45B.8030804@jku.at> References: <5628A45B.8030804@jku.at> Message-ID:

I confirm this.
On Thu, Oct 22, 2015 at 4:54 AM, Robert Pollak wrote:
> Dear server administrator,
>
> for several days now, the website http://wiki.scipy.org/ has only shown an "Internal Server Error" page.
>
> Please make the wiki available again. I need its page "NumPy_for_Matlab_Users" for a workshop with my colleagues.

From njs at pobox.com Thu Oct 22 16:32:10 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 22 Oct 2015 13:32:10 -0700 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: <5628A45B.8030804@jku.at> References: <5628A45B.8030804@jku.at> Message-ID:

On Thu, Oct 22, 2015 at 1:54 AM, Robert Pollak wrote:
> Please make the wiki available again. I need its page "NumPy_for_Matlab_Users" for a workshop with my colleagues.

My understanding is that someone is working on this, but that it requires upgrading the wiki data from MoinMoin 1.5 (released in 2006) to the current version, and that this is proving to be a challenge, so it'll be up again as soon as possible.

My understanding is also that once the immediate fires are put out, there will be some discussion about how to arrange matters so that such issues don't arise so often and are handled in a more publicly transparent way...

-n

From robert.kern at gmail.com Thu Oct 22 17:06:47 2015 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Oct 2015 22:06:47 +0100 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: References: <5628A45B.8030804@jku.at> Message-ID:

On Thu, Oct 22, 2015 at 9:32 PM, Nathaniel Smith wrote:
> My understanding is that someone is working on this, but that it requires upgrading the wiki data from MoinMoin 1.5 (released in 2006) to the current version, and that this is proving to be a challenge, so it'll be up again as soon as possible.
>
> My understanding is also that once the immediate fires are put out, there will be some discussion about how to arrange matters so that such issues don't arise so often and are handled in a more publicly transparent way...

The long term solution is, and has always been, obvious. We have a static website. The effort to move the wiki's content to the static website stopped halfway through, leaving content that is valuable to some people still on the wiki. Anyone still interested in the remaining content needs to step up and move it to the static website.

Here is the dump. Go forth and convert!

https://www.dropbox.com/s/54r9ug6bzbsjxdb/scipy-wiki-dump.tbz2?dl=0

-- Robert Kern

From njs at pobox.com Thu Oct 22 17:15:18 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 22 Oct 2015 14:15:18 -0700 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: References: <5628A45B.8030804@jku.at> Message-ID:

On Thu, Oct 22, 2015 at 2:06 PM, Robert Kern wrote:
> The long term solution is, and has always been, obvious. We have a static website. The effort to move the wiki's content to the static website stopped halfway through, leaving content that is valuable to some people still on the wiki. Anyone still interested in the remaining content needs to step up and move it to the static website.

Yeah, the wiki was terribly out of date and probably needs to just die (is there anything there that's still relevant aside from the "for matlab users" page?), but there is a meta problem that is zero^Wepsilon communication between the people maintaining the infrastructure and the rest of us, so we stumble forward as best we can :-).

> Here is the dump. Go forth and convert!
>
> https://www.dropbox.com/s/54r9ug6bzbsjxdb/scipy-wiki-dump.tbz2?dl=0

It would be super helpful if you could also say a few words about what to do with these pages once they are downloaded. Are you suggesting they go into scipy's sphinx docs, or into some other static website build somewhere? I don't actually know where the source to scipy.org is stored...

-n

From robert.kern at gmail.com Thu Oct 22 17:17:28 2015 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Oct 2015 22:17:28 +0100 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: References: <5628A45B.8030804@jku.at> Message-ID:

On Thu, Oct 22, 2015 at 10:15 PM, Nathaniel Smith wrote:
> It would be super helpful if you could also say a few words about what to do with these pages once they are downloaded. Are you suggesting they go into scipy's sphinx docs, or into some other static website build somewhere? I don't actually know where the source to scipy.org is stored...

https://github.com/scipy/scipy.org

-- Robert Kern
From Sylvain.Gubian at pmi.com Fri Oct 23 09:10:38 2015 From: Sylvain.Gubian at pmi.com (Gubian, Sylvain) Date: Fri, 23 Oct 2015 13:10:38 +0000 Subject: [SciPy-Dev] optimize - add algorithm for global optimization: GenSA Message-ID:

Hi everyone,

We would like to propose a new method, GenSA, for global optimization, to be included in the optimize module.

GenSA is an implementation of the General Simulated Annealing algorithm (GSA, http://www.sciencedirect.com/science/article/pii/S0378437196002713). This approach generalizes CSA (Classical Simulated Annealing) and FSA (Fast Simulated Annealing) to search for the global minimum more efficiently. The algorithm is explained in more detail in this reference: http://journal.r-project.org/archive/2013-1/xiang-gubian-suomela-etal.pdf.

SciPy has in the past included a method based on simulated annealing, called anneal, which was deprecated in 0.14 (with advice to use basinhopping) and eventually removed in 0.16.

A previously published comparison of 18 optimization methods in the R language (http://www.jstatsoft.org/v60/i06/paper) shows that GenSA is, among the methods tested, one of the "most capable of consistently returning a solution near the global minimum of each test function". That paper however did not consider basinhopping, so we have performed some tests which tend to show that GenSA is more efficient than basinhopping for high dimension problems. The results were presented in a poster at PyCon UK 2015 (Coventry).

The code is ready and passes unit tests and PEP8. We hope it would be a useful addition to SciPy and would be happy to have your opinion.

Thanks,

Sylvain.

From jstevenson131 at gmail.com Fri Oct 23 10:06:17 2015 From: jstevenson131 at gmail.com (Jacob Stevenson) Date: Fri, 23 Oct 2015 14:06:17 +0000 Subject: [SciPy-Dev] optimize - add algorithm for global optimization: GenSA In-Reply-To: References: Message-ID:

In my opinion a robust implementation of a simulated annealing based optimizer would be welcome. There are cases when this would be preferable to basinhopping, e.g. when non-smooth or non-continuous functions make the local optimization step in basinhopping less effective.

I think the first step is to make a pull request (or send a link if you already did) where we can review the code and have discussions.

Best, Jake

On Fri, 23 Oct 2015 at 14:10 Gubian, Sylvain wrote:
> Hi everyone,
>
> We would like to propose a new method, GenSA, for global optimization, to be included in the optimize module.
> (...)
> The code is ready and passes unit tests and PEP8. We hope it would be a useful addition to SciPy and would be happy to have your opinion.
>
> Thanks,
>
> Sylvain.

From andyfaff at gmail.com Fri Oct 23 19:00:22 2015 From: andyfaff at gmail.com (Andrew Nelson) Date: Sat, 24 Oct 2015 10:00:22 +1100 Subject: [SciPy-Dev] optimize - add algorithm for global optimization: GenSA In-Reply-To: References: Message-ID:

I have a scipy PR for testing global optimisers. It has approx 120 functions, which is 2.5 times more than in the linked paper. I think any global optimiser to be added should give good performance against those benchmarks. The main criteria are the number of successes and the average number of function evaluations; time is of secondary importance.

Any optimisers added to scipy.optimize should conform to the generally standardised syntax (naming conventions, etc.) used by the module.

On 24 Oct 2015 1:06 am, "Jacob Stevenson" wrote:
> In my opinion a robust implementation of a simulated annealing based optimizer would be welcome. (...)
>
> I think the first step is to make a pull request (or send a link if you already did) where we can review the code and have discussions.
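To make the calling convention concrete, here is a minimal usage sketch modelled on the existing scipy.optimize.differential_evolution interface; the Rastrigin objective and all parameter choices below are illustrative assumptions only, and a GenSA entry point would presumably take the same shape:

    import numpy as np
    from scipy.optimize import differential_evolution

    def rastrigin(x):
        # Standard multimodal benchmark with many local minima;
        # the global minimum is 0 at the origin.
        x = np.asarray(x)
        return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

    # Global optimizers in scipy.optimize take the objective plus
    # bounds and return an OptimizeResult; a hypothetical gensa(...)
    # would presumably follow the same pattern.
    bounds = [(-5.12, 5.12)] * 2
    result = differential_evolution(rastrigin, bounds, seed=1)
    print(result.x, result.fun)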
From charlesr.harris at gmail.com Sat Oct 24 10:39:44 2015 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 24 Oct 2015 08:39:44 -0600 Subject: [SciPy-Dev] optimize - add algorithm for global optimization: GenSA In-Reply-To: References: Message-ID:

On Fri, Oct 23, 2015 at 7:10 AM, Gubian, Sylvain wrote:
> We would like to propose a new method, GenSA, for global optimization, to be included in the optimize module.
> (...)
> The code is ready and passes unit tests and PEP8. We hope it would be a useful addition to SciPy and would be happy to have your opinion.

I'm a bit concerned about license issues if any of the code comes from R. R has an incompatible license.

Chuck

From pav at iki.fi Sat Oct 24 12:10:06 2015 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 24 Oct 2015 16:10:06 +0000 (UTC) Subject: [SciPy-Dev] Scipy wiki is down again References: <5628A45B.8030804@jku.at> Message-ID:

Nathaniel Smith pobox.com> writes:
[clip]
> Yeah, the wiki was terribly out of date and probably needs to just die (is there anything there that's still relevant aside from the "for matlab users" page?), but there is a meta problem that is zero^Wepsilon communication between the people maintaining the infrastructure and the rest of us, so we stumble forward as best we can :-).

The wiki dump is available (and links to it have also been posted on this list years ago), and the static website is on github, so there should be no blockers to this.

> > Here is the dump. Go forth and convert!
> >
> > https://www.dropbox.com/s/54r9ug6bzbsjxdb/scipy-wiki-dump.tbz2?dl=0
>
> It would be super helpful if you could also say a few words about what to do with these pages once they are downloaded. Are you suggesting they go into scipy's sphinx docs, or into some other static website build somewhere? I don't actually know where the source to scipy.org is stored...

What to do with the old wiki content is up for discussion. I don't think anyone has a plan on which parts of it are still of value, which not, and whether a wiki would still be useful.

Previous discussions of this matter have, as a rule, just fizzled out, probably because there are more interesting things to do.

When the scipy.org site was set up three years ago (see previous posts on this list), I moved only the content having directly to do with the software.

The fate of the scipy cookbook has also been discussed before on this list, and its conversion to ipython notebooks was also done; see links in previous posts. However, the bitrot is great, and new tools for doing things in a better way have appeared, so much of the content is outdated.

The "numpy for matlab users" page was probably one of the valuable extra pages. I have no recollection of what else was there, but that should be visible in the dump.

From sturla.molden at gmail.com Sat Oct 24 12:16:36 2015 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 24 Oct 2015 16:16:36 +0000 (UTC) Subject: [SciPy-Dev] optimize - add algorithm for global optimization: GenSA References: Message-ID: <1581029041467395774.805089sturla.molden-gmail.com@news.gmane.org>

"Gubian, Sylvain" wrote:
> The code is ready and passes unit tests and PEP8. We hope it would be a useful addition to SciPy and would be happy to have your opinion.

anneal was thrown out because it was very weak, not because we dislike SA in general. Although it was not clear whether the weakness was due to the particular implementation or due to classical SA per se. If an implementation of SA can be shown to be strong, I personally see nothing wrong with including it.

Sturla

From sturla.molden at gmail.com Sat Oct 24 12:21:42 2015 From: sturla.molden at gmail.com (Sturla Molden) Date: Sat, 24 Oct 2015 16:21:42 +0000 (UTC) Subject: [SciPy-Dev] Add cut-plane optimization? Message-ID: <1824753632467396345.209356sturla.molden-gmail.com@news.gmane.org>

I am hearing good things about a new local optimization method called "cut-plane optimization". I think we should consider adding this method to scipy.optimize.

http://phys.org/news/2015-10-general-purpose-optimization-algorithm-order-of-magnitude-speedups.html

Sturla

From ralf.gommers at gmail.com Sat Oct 24 12:26:11 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 24 Oct 2015 18:26:11 +0200 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: References: <5628A45B.8030804@jku.at> Message-ID:

On Sat, Oct 24, 2015 at 6:10 PM, Pauli Virtanen wrote:
> What to do with the old wiki content is up for discussion. I don't think anyone has a plan on which parts of it are still of value, which not, and whether a wiki would still be useful.
> (...)
> The "numpy for matlab users" page was probably one of the valuable extra pages. I have no recollection of what else was there, but that should be visible in the dump.

The "numpy for matlab users" page is probably more useful to people than everything else on that wiki combined.

From ralf.gommers at gmail.com Sat Oct 24 12:27:36 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 24 Oct 2015 18:27:36 +0200 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: References: <5628A45B.8030804@jku.at> Message-ID:

On Sat, Oct 24, 2015 at 6:26 PM, Ralf Gommers wrote:
> On Sat, Oct 24, 2015 at 6:10 PM, Pauli Virtanen wrote:
> > > It would be super helpful if you could also say a few words about what to do with these pages once they are downloaded. Are you suggesting they go into scipy's sphinx docs, or into some other static website build somewhere? I don't actually know where the source to scipy.org is stored...

I don't think any content from the wiki belongs in the scipy.org repo.

> > The "numpy for matlab users" page was probably one of the valuable extra pages. I have no recollection of what else was there, but that should be visible in the dump.
>
> The "numpy for matlab users" page is probably more useful to people than everything else on that wiki combined.

How about converting that page to ReST and putting it in the scipy tutorial (http://docs.scipy.org/doc/scipy/reference/tutorial/index.html)?

Ralf

From ralf.gommers at gmail.com Sat Oct 24 12:33:37 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 24 Oct 2015 18:33:37 +0200 Subject: [SciPy-Dev] Add cut-plane optimization? In-Reply-To: <1824753632467396345.209356sturla.molden-gmail.com@news.gmane.org> References: <1824753632467396345.209356sturla.molden-gmail.com@news.gmane.org> Message-ID:

On Sat, Oct 24, 2015 at 6:21 PM, Sturla Molden wrote:
> I am hearing good things about a new local optimization method called "cut-plane optimization". I think we should consider adding this method to scipy.optimize.
>
> http://phys.org/news/2015-10-general-purpose-optimization-algorithm-order-of-magnitude-speedups.html

It seems to solve a fairly specific problem. Also, the paper is unreadable (for me). And not including any benchmarks in a paper like this, even though it's 111 pages long, is odd...

Ralf

From eric at ericmart.in Sat Oct 24 12:52:04 2015 From: eric at ericmart.in (Eric Martin) Date: Sat, 24 Oct 2015 11:52:04 -0500 Subject: [SciPy-Dev] Add cut-plane optimization? In-Reply-To: References: <1824753632467396345.209356sturla.molden-gmail.com@news.gmane.org> Message-ID:

Direct link to paper: http://arxiv.org/abs/1508.04874

I'm not an expert, but my understanding was that this is mostly a theoretical result, similar to finding a new lower bound on matrix multiplication. As far as I know, all BLAS implementations use the O(n^3) matrix multiplication algorithm even though an O(n^2.37) algorithm is known, because the O(n^3) algorithm maps much better onto hardware and has lower constant costs.

I don't know as much about it as I do matrix multiplication, but I believe linear programming algorithms are in a similar state, where the simplex algorithm has exponential worst case, interior point algorithms have polynomial worst case, but both methods are used in practice.

-Eric

On Sat, Oct 24, 2015 at 11:33 AM, Ralf Gommers wrote:
> It seems to solve a fairly specific problem. Also, the paper is unreadable (for me). And not including any benchmarks in a paper like this, even though it's 111 pages long, is odd...
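As a side note on the simplex remark above: scipy.optimize already ships a linear programming solver, linprog, which at the time of writing uses the simplex method by default. A minimal sketch, with a made-up toy problem for illustration:

    from scipy.optimize import linprog

    # Minimize -x0 - 2*x1 subject to x0 + x1 <= 4 and x0, x1 >= 0.
    # The default solver is simplex-based -- exponential in the worst
    # case, but fast in practice, as noted above.
    res = linprog(c=[-1, -2], A_ub=[[1, 1]], b_ub=[4],
                  bounds=[(0, None), (0, None)])
    print(res.x, res.fun)  # expected: x = [0, 4], objective = -8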
From pav at iki.fi Sat Oct 24 15:06:56 2015 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 24 Oct 2015 19:06:56 +0000 (UTC) Subject: [SciPy-Dev] Scipy wiki is down again References: <5628A45B.8030804@jku.at> Message-ID:

Ralf Gommers gmail.com> writes:
[clip]
> The "numpy for matlab users" page is probably more useful to people than everything else on that wiki combined.

FWIW, here's the content in a slightly friendlier format:

http://pav.iki.fi/tmp/to-be-removed/localhost_8080/AllPages.html

From ralf.gommers at gmail.com Sun Oct 25 06:25:26 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 25 Oct 2015 11:25:26 +0100 Subject: [SciPy-Dev] ANN: Scipy 0.16.1 release Message-ID:

Hi all,

I'm happy to announce the availability of the Scipy 0.16.1 release. This is a bugfix-only release; it contains no new features compared to 0.16.0.

The sources and binary installers can be found at:
- Source tarballs: at https://github.com/scipy/scipy/releases and on PyPI.
- OS X: there are wheels on PyPI, so simply install with pip.
- Windows: .exe installers can be found on https://github.com/scipy/scipy/releases

Cheers, Ralf

==========================
SciPy 0.16.1 Release Notes
==========================

SciPy 0.16.1 is a bug-fix release with no new features compared to 0.16.0.

Issues closed for 0.16.1
------------------------

- #5077: cKDTree not indexing properly for arrays with too many elements
- #5127: Regression in 0.16.0: solve_banded errors out in patsy test suite
- #5149: linalg tests apparently cause python to crash with numpy 1.10.0b1
- #5154: 0.16.0 fails to build on OS X; can't find Python.h
- #5173: failing stats.histogram test with numpy 1.10
- #5191: Scipy 0.16.x - TypeError: _asarray_validated() got an unexpected...
- #5195: tarballs missing documentation source
- #5363: FAIL: test_orthogonal.test_j_roots, test_orthogonal.test_js_roots

Pull requests for 0.16.1
------------------------

- #5088: BUG: fix logic error in cKDTree.sparse_distance_matrix
- #5089: BUG: Don't overwrite b in lfilter's FIR path
- #5128: BUG: solve_banded failed when solving 1x1 systems
- #5155: BLD: fix missing Python include for Homebrew builds.
- #5192: BUG: backport as_inexact kwarg to _asarray_validated
- #5203: BUG: fix uninitialized use in lartg 0.16 backport
- #5204: BUG: properly return error to fortran from ode_jacobian_function...
- #5207: TST: Fix TestCtypesQuad failure on Python 3.5 for Windows
- #5352: TST: sparse: silence warnings about boolean indexing
- #5355: MAINT: backports for 0.16.1 release
- #5356: REL: update Paver file to ensure sdist contents are OK for releases.
- #5382: 0.16.x backport: MAINT: work around a possible numpy ufunc loop...
- #5393: TST:special: bump tolerance levels for test_j_roots and test_js_roots
- #5417

From ralf.gommers at gmail.com Sun Oct 25 13:14:20 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 25 Oct 2015 18:14:20 +0100 Subject: [SciPy-Dev] plans for 0.17.0 release Message-ID:

Hi all,

We branched 0.16.x in early May, so it's time to start thinking about branching 0.17.x sometime soon. If anyone would be interested in being the release manager for 0.17.x, or co-managing it with me, that would be very welcome.

In order to make releasing look less like black magic I've worked on automating stuff a bit more (has landed in master) and on writing fairly detailed documentation: https://github.com/scipy/scipy/pull/5424.
Please have a look if you're interested.

Cheers, Ralf

From pav at iki.fi Sun Oct 25 13:54:02 2015 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 25 Oct 2015 17:54:02 +0000 (UTC) Subject: [SciPy-Dev] Retiring wiki.scipy.org? Message-ID:

Hi,

I've been looking through the stuff that was on wiki.scipy.org, and it appears it is time to retire it.

cf. http://pav.iki.fi/tmp/to-be-removed/localhost_8080/AllPages.html for the actual material that is there.

I have the impression that:

(1) Most of the material that is useful has been moved elsewhere.

The download instructions, topical software, etc. have been moved to scipy.org and/or the numpy/scipy documentation.

The Numpy/Scipy tutorials that were there have mostly been subsumed into the Numpy/Scipy documentation, or superseded by better resources elsewhere, e.g. scipy-lectures.org. "Numpy for Matlab users" was just added there too.

The Cookbook section was converted to IPython notebooks some years ago: https://github.com/pv/SciPy-CookBook However, several of these are not so relevant today.

(2) The relevance of what's left is not so clear.

The rest of the content seems fairly miscellaneous. Recipes for converting code from Numeric/numarray are probably not highly useful any more.

If you know of something valuable there that's not listed above, it would be useful to point it out.

Excluding edits to front/download/etc. pages that are nowadays on github/scipy/scipy.org, the wiki was edited 95 times in the last 5 years, and 2 times within the last 2 years.

(3) Would having a wiki be useful?

Based on the content there, it's not clear there is a need for a wiki nowadays --- people can just put stuff up on github/bitbucket/pypi and blog about it.

It might be useful to have a "cookbook" collection, but a github repository with ipython notebooks, or Scipy Central, sounds like a better solution for today than a wiki.

Thoughts?

In view of the above, I would remove links to wiki.scipy.org regardless: https://github.com/scipy/scipy.org/pull/121

-- Pauli Virtanen

From njs at pobox.com Sun Oct 25 14:40:54 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 25 Oct 2015 11:40:54 -0700 Subject: [SciPy-Dev] Retiring wiki.scipy.org? In-Reply-To: References: Message-ID:

On Oct 25, 2015 10:54 AM, "Pauli Virtanen" wrote:
> [...]
> If you know of something valuable there that's not listed above, it would be useful to point it out.

Might also make sense to toss one of the content dumps into a github repo (scipy/old-wiki.scipy.org or whatever) to serve as a long-term archive, just in case someone later realizes they want something from it after all.

-n

From ralf.gommers at gmail.com Sun Oct 25 17:26:37 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 25 Oct 2015 22:26:37 +0100 Subject: [SciPy-Dev] plans for 0.17.0 release In-Reply-To: References: Message-ID:

On Sun, Oct 25, 2015 at 6:14 PM, Ralf Gommers wrote:
> Hi all,
>
> We branched 0.16.x in early May, so it's time to start thinking about branching 0.17.x sometime soon. If anyone would be interested in being the release manager for 0.17.x, or co-managing it with me, that would be very welcome.
>
> In order to make releasing look less like black magic I've worked on automating stuff a bit more (has landed in master) and on writing fairly detailed documentation: https://github.com/scipy/scipy/pull/5424. Please have a look if you're interested.
That PR is getting a bit large. Here is a rendered version of just the release manager activities: https://github.com/rgommers/scipy/blob/devguide/doc/source/dev/releasing.rst

Ralf

From robert.pollak at jku.at Tue Oct 27 04:05:27 2015 From: robert.pollak at jku.at (Robert Pollak) Date: Tue, 27 Oct 2015 09:05:27 +0100 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: References: <5628A45B.8030804@jku.at> Message-ID: <562F3047.2070400@jku.at>

On 24.10.2015 21:06, Pauli Virtanen wrote:
> Ralf Gommers gmail.com> writes:
> [clip]
>> The "numpy for matlab users" page is probably more useful to people than everything else on that wiki combined.
>
> FWIW, here's the content in a slightly friendlier format:
>
> http://pav.iki.fi/tmp/to-be-removed/localhost_8080/AllPages.html

Thank you! I just wanted to suggest changing http://wiki.scipy.org/robots.txt such that archived versions like https://web.archive.org/web/20140926023421/http://wiki.scipy.org/NumPy_for_Matlab_Users can be displayed.

Robert P.

From robert.kern at gmail.com Tue Oct 27 05:01:33 2015 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Oct 2015 09:01:33 +0000 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: <562F3047.2070400@jku.at> References: <5628A45B.8030804@jku.at> <562F3047.2070400@jku.at> Message-ID:

On Tue, Oct 27, 2015 at 8:05 AM, Robert Pollak wrote:
> Thank you! I just wanted to suggest changing http://wiki.scipy.org/robots.txt such that archived versions like https://web.archive.org/web/20140926023421/http://wiki.scipy.org/NumPy_for_Matlab_Users can be displayed.

We will add redirects when the converted pages have permanent homes.

-- Robert Kern

From pav at iki.fi Tue Oct 27 06:03:55 2015 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 27 Oct 2015 10:03:55 +0000 (UTC) Subject: [SciPy-Dev] Scipy wiki is down again References: <5628A45B.8030804@jku.at> <562F3047.2070400@jku.at> Message-ID:

Robert Pollak jku.at> writes:
[clip]
> Thank you! I just wanted to suggest changing http://wiki.scipy.org/robots.txt such that archived versions like https://web.archive.org/web/20140926023421/http://wiki.scipy.org/NumPy_for_Matlab_Users can be displayed.

In the case of this specific page, you can refer to this instead:

https://github.com/numpy/numpy/blob/master/doc/source/user/numpy-for-matlab-users.rst

From robert.kern at gmail.com Tue Oct 27 06:19:20 2015 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 27 Oct 2015 10:19:20 +0000 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: References: <5628A45B.8030804@jku.at> <562F3047.2070400@jku.at> Message-ID:

On Tue, Oct 27, 2015 at 10:03 AM, Pauli Virtanen wrote:
> In the case of this specific page, you can refer to this instead:
>
> https://github.com/numpy/numpy/blob/master/doc/source/user/numpy-for-matlab-users.rst

How long until this propagates to the generated docs on docs.scipy.org?

-- Robert Kern

From pav at iki.fi Tue Oct 27 06:53:19 2015 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 27 Oct 2015 10:53:19 +0000 (UTC) Subject: [SciPy-Dev] Scipy wiki is down again References: <5628A45B.8030804@jku.at> <562F3047.2070400@jku.at> Message-ID:

Robert Kern gmail.com> writes:
[clip]
> How long until this propagates to the generated docs on docs.scipy.org?

There's no automation AFAIK for Numpy, so the answer is: in the 1.11.0 release, or when someone rebuilds and uploads the numpy dev docs (the last occurrence seems to be 18 Oct 2015).

Automating it via Travis-CI would be fairly easy, cf. https://github.com/scipy/scipy/pull/5383

From robert.pollak at jku.at Tue Oct 27 09:39:10 2015 From: robert.pollak at jku.at (Robert Pollak) Date: Tue, 27 Oct 2015 14:39:10 +0100 Subject: [SciPy-Dev] Scipy wiki is down again In-Reply-To: References: <5628A45B.8030804@jku.at> <562F3047.2070400@jku.at> Message-ID: <562F7E7E.2070607@jku.at>

On 2015-10-27 11:53, Pauli Virtanen wrote:
> There's no automation AFAIK for Numpy, so the answer is: in the 1.11.0 release, or when someone rebuilds and uploads the numpy dev docs (the last occurrence seems to be 18 Oct 2015).

It would help my workshop if someone could do this before 4 Nov :)

From philip.deboer at gmail.com Wed Oct 28 08:12:40 2015 From: philip.deboer at gmail.com (Philip DeBoer) Date: Wed, 28 Oct 2015 08:12:40 -0400 Subject: [SciPy-Dev] Random matrices Message-ID:

I would like to expose an interface for random matrix generation in scipy, in particular SO(N) and O(N) functionality. I wrote some code to generate random correlation matrices that could also be exposed.
These seem like basic multi-dimensional results that would be useful in a wide variety of situations.

There is a "random_rot" function generating SO(N) matrices in scipy/linalg/tests/test_decomp. It has a bug that has since been fixed in the MDP package, from whence it came, but was never fixed in scipy. A variation of this will generate O(N) matrices, and a little more work yields random correlation matrices.

In Oct 2008, when random_rot was added, it was suggested that linalg was a good place for these functions. Is that still the case? Or should they be somewhere in scipy.random? Do we need a new place for random matrices?

I have not contributed to scipy before. Let me know what the steps are and I will do my best to take care of this. This could benefit a wide variety of users, and it would be good to have widely used code.

Thanks, Philip

From robert.kern at gmail.com Wed Oct 28 09:07:23 2015 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 28 Oct 2015 13:07:23 +0000 Subject: [SciPy-Dev] Random matrices In-Reply-To: References: Message-ID:

On Wed, Oct 28, 2015 at 12:12 PM, Philip DeBoer wrote:
> I would like to expose an interface for random matrix generation in scipy, in particular SO(N) and O(N) functionality. I wrote some code to generate random correlation matrices that could also be exposed.
> (...)
> I have not contributed to scipy before. Let me know what the steps are and I will do my best to take care of this.

We have started to add multivariate distributions to scipy.stats, so I might recommend that location, even if you just have the random generation functions and not the PDFs (though these are often just constants in these cases).

https://github.com/scipy/scipy/blob/master/scipy/stats/_multivariate.py

-- Robert Kern
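For reference, a minimal sketch of the standard construction for sampling O(N) uniformly: QR of a Gaussian matrix, with the sign-of-diag(R) correction that fixes the kind of bias Philip mentions (cf. Mezzadri, "How to generate random matrices from the classical compact groups"). This is illustrative only, not necessarily the implementation being proposed:

    import numpy as np

    def random_orthogonal(dim, random_state=None):
        # Haar-distributed sample from O(dim).  A plain QR of a Gaussian
        # matrix is *not* uniform, because the QR factorization is only
        # unique up to the signs of diag(R); multiplying the columns of Q
        # by those signs removes the bias.
        rng = np.random.RandomState(random_state)
        z = rng.standard_normal((dim, dim))
        q, r = np.linalg.qr(z)
        return q * np.sign(np.diag(r))

    m = random_orthogonal(4, random_state=0)
    print(np.allclose(m.dot(m.T), np.eye(4)))  # True: m is orthogonal

For SO(N) one would additionally flip the sign of a single column whenever det(m) comes out as -1.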
URL: From ralf.gommers at gmail.com Sat Oct 31 07:49:42 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 31 Oct 2015 12:49:42 +0100 Subject: [SciPy-Dev] deleting scipy/speed repo Message-ID: Hi, Does anyone object to deleting https://github.com/scipy/speed? There's not more than 200 lines of code in there and no one uses that repo anymore. The code that's in there is Python/Cython/Fortran code to solve the Laplace problem. For people interested in that, this blog post from Travis is much nicer than the repo: http://technicaldiscovery.blogspot.nl/2011/06/speeding-up-python-numpy-cython-and.html The only thing the blog post doesn't have is the Fortran code, but it's linked in the comments and originally came from https://github.com/certik/laplace_test Conclusion: nothing to move, just delete. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Oct 31 08:02:58 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 31 Oct 2015 13:02:58 +0100 Subject: [SciPy-Dev] Retiring wiki.scipy.org? In-Reply-To: References: Message-ID: On Sun, Oct 25, 2015 at 6:54 PM, Pauli Virtanen wrote: > Hi, > > I've been looking through the stuff that was on wiki.scipy.org, > and it appears it would be the time to retire it. > > cf. http://pav.iki.fi/tmp/to-be-removed/localhost_8080/AllPages.html > for the actual material that is there. > > > I have the impression that: > > (1) Most of the material that is useful has been moved elsewhere. > > The download instructions, topical software, etc. are moved to > scipy.org and/or numpy/scipy documentation. > > The Numpy/Scipy tutorials that were there have mostly been subsumed > in the Numpy/Scipy documentation, or superceded by better resources > elsewhere eg. scipy-lectures.org. "Numpy for Matlab users" was > just added there too. > > The Cookbook section was converted to IPython notebooks > some years ago: https://github.com/pv/SciPy-CookBook > However, several of these are not so relevant today. > > (2) The relevance of what's left is not so clear. > > The rest of the content seems fairly miscellaneous. Recipes > from converting code from Numeric/numarray are probably > not highly useful any more. > > If you know something valuable there not listed above, it would > be useful to point it out. > > Excluding edits to front/download/etc pages that are nowadays > on github/scipy/scipy.org, the wiki was edited 95 times in the > last 5 years; and 2 times within the last 2 years. > Agreed, even in the converted material there's a lot that's not relevant anymore. What wasn't converted is even less relevant. There were some comments on the issue tracker about people looking for specific pages, so making a dump of everything available somewhere like Nathaniel suggested makes sense I think. (3) Would having a wiki be useful? > > Based on the content there it's not clear if there is a need > for a wiki nowadays --- people can just put stuff up on > github/bitbucket/pypi and blog about it. > > It might be useful to have a "cookbook" collection, > but a github repository with ipython notebooks, > or Scipy Central, sound like better solutions for today > than a wiki. > > > Thoughts? > I also had a look at visitor stats (see http://www.easycounter.com/report/scipy.org). Conclusion: scipy.org is a very active domain (global rank around 25000), but wiki.scipy.org only accounts for 1.65% of the total. That's still 3450 page views a day though. 
So putting redirects in for the most popular pages is important. +1 for no wiki anymore Ralf > > In view of the above, I would remove links to wiki.scipy.org > regardless, https://github.com/scipy/scipy.org/pull/121 > -------------- next part -------------- An HTML attachment was scrubbed... URL: