From charlesr.harris at gmail.com Wed Jan 1 13:05:01 2020 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 1 Jan 2020 11:05:01 -0700 Subject: [SciPy-User] NumPy 1.17.5 Released Message-ID: Hi All, On behalf of the NumPy team I am pleased to announce that NumPy 1.17.5 has been released. This release fixes bugs reported against the 1.17.4 release. The supported Python versions are 3.5-3.7. This is the last planned release that supports Python 3.5. Wheels for this release can be downloaded from PyPI , source archives and release notes are available from Github . Downstream developers building this release should use Cython >= 0.29.14 and, if using OpenBLAS, OpenBLAS >= v0.3.7. It is recommended that developers interested in the new random bit generators upgrade to the NumPy 1.18.x series, as it has updated documentation and many small improvements. *Highlights* - The ``np.testing.utils`` functions have been updated from 1.19.0-dev0. This improves the function documentation and error messages as well extending the ``assert_array_compare`` function to additional types. *Contributors* A total of 6 people contributed to this release. People with a "+" by their names contributed a patch for the first time. - Charles Harris - Eric Wieser - Ilhan Polat - Matti Picus - Michael Hudson-Doyle - Ralf Gommers *Pull requests merged* A total of 8 pull requests were merged for this release. - `#14593 `__: MAINT: backport Cython API cleanup to 1.17.x, remove docs - `#14937 `__: BUG: fix integer size confusion in handling array's ndmin argument - `#14939 `__: BUILD: remove SSE2 flag from numpy.random builds - `#14993 `__: MAINT: Added Python3.8 branch to dll lib discovery - `#15038 `__: BUG: Fix refcounting in ufunc object loops - `#15067 `__: BUG: Exceptions tracebacks are dropped - `#15175 `__: ENH: Backport improvements to testing functions. - `#15213 `__: REL: Prepare for the NumPy 1.17.5 release. Cheers, Charles Harris -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Jan 6 18:20:30 2020 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 6 Jan 2020 16:20:30 -0700 Subject: [SciPy-User] NumPy 1.18.1 released Message-ID: Hi All, On behalf of the NumPy team I am pleased to announce that NumPy 1.18.1 has been released. This release contains fixes for bugs reported against NumPy 1.18.0. Two bugs in particular that caused widespread problems downstream were: - The cython random extension test was not using a temporary directory for building, resulting in a permission violation. Fixed. - Numpy distutils was appending -std=c99 to all C compiler runs, leading to changed behavior and compile problems downstream. That flag is now only applied when building numpy C code. The Python versions supported in this release are 3.5-3.8. Downstream developers should use Cython >= 0.29.14 for Python 3.8 support and OpenBLAS >= 3.7 to avoid errors on the Skylake architecture. Wheels for this release can be downloaded from PyPI , source archives and release notes are available from Github . *Contributors* A total of 7 people contributed to this release. People with a "+" by their names contributed a patch for the first time. - Charles Harris - Matti Picus - Maxwell Aladago - Pauli Virtanen - Ralf Gommers - Tyler Reddy - Warren Weckesser *Pull requests merged* A total of 13 pull requests were merged for this release. - `#15158 `__: MAINT: Update pavement.py for towncrier. 
- `#15159 `__: DOC: add moved modules to 1.18 release note - `#15161 `__: MAINT, DOC: Minor backports and updates for 1.18.x - `#15176 `__: TST: Add assert_array_equal test for big integer arrays - `#15184 `__: BUG: use tmp dir and check version for cython test. - `#15220 `__: BUG: distutils: fix msvc+gfortran openblas handling corner case - `#15221 `__: BUG: remove -std=c99 for c++ compilation - `#15222 `__: MAINT: unskip test on win32 - `#15223 `__: TST: add BLAS ILP64 run in Travis & Azure - `#15245 `__: MAINT: only add --std=c99 where needed - `#15246 `__: BUG: lib: Fix handling of integer arrays by gradient. - `#15247 `__: MAINT: Do not use private Python function in testing - `#15250 `__: REL: Prepare for the NumPy 1.18.1 release. Cheers, Charles Harris -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.tollerud at gmail.com Wed Jan 8 20:33:31 2020 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Wed, 8 Jan 2020 15:33:31 -1000 Subject: [SciPy-User] ANN: Astropy v4.0 released Message-ID: Dear colleagues, We are very happy to announce the v4.0 release of the Astropy package, a core Python package for Astronomy: http://www.astropy.org Astropy is a community-driven Python package intended to contain much of the core functionality and common tools needed for astronomy and astrophysics. It is part of the Astropy Project, which aims to foster an ecosystem of interoperable astronomy packages for Python. New and improved major functionality in this release includes: * Support for Planck 2018 Cosmological Parameters * Improved Consistency of Physical Constants and Units * Scientific enhancements to the Galactocentric Frame * New ymdhms Time Format * New Context Manager for plotting time values * Dynamic and improved handling of leap second * Major Improvements in Compatibility of Quantity Objects with NumPy Functions * Multiple interface improvements to WCSAxes * Fitting of WCS to Pairs of Pixel/World Coordinates * Support for WCS Transformations between Pixel and Time Values * Improvements to Folding for Time Series * New Table Methods and significant performance improvements for Tables * Improved downloading and caching of remote files In addition, hundreds of smaller improvements and fixes have been made. An overview of the changes is provided at: http://docs.astropy.org/en/stable/whatsnew/4.0.html The Astropy v4.0.x series now replaces v2.0.x as the long term support release, and will be supported until the end of 2021. Also note that the Astropy 4.x series only supports Python 3. Python 2 users can continue to use the 2.x series but as of now it is no longer supported (as Python 2 itself is no longer supported). For assistance converting Python 2 code to Python 3, see the Python 3 for scientists conversion guide. 
Instructions for installing Astropy are provided on our website, and extensive documentation can be found at: http://docs.astropy.org If you make use of the Anaconda Python Distribution, you can update to Astropy v4.0 with: conda update astropy Whereas if you usually use pip, you can do: pip install astropy --upgrade Please report any issues, or request new features via our GitHub repository: https://github.com/astropy/astropy/issues Over 350 developers have contributed code to Astropy so far, and you can find out more about the team behind Astropy here: http://www.astropy.org/team.html If you use Astropy directly for your work, or as a dependency to another package, please remember to acknowledge it by citing the appropriate Astropy paper. For the most up-to-date suggestions, see the acknowledgement page, but as of this release the recommendation is: This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2018). Special thanks to the coordinator for this release: Brigitta Sipocz. We hope that you enjoy using Astropy as much as we enjoyed developing it! Erik Tollerud, Tom Robitaille, Kelle Cruz, and Tom Aldcroft on behalf of The Astropy Collaboration https://www.astropy.org/announcements/release-4.0.html
From jr at sun.ac.za Mon Jan 13 05:03:15 2020 From: jr at sun.ac.za (Johann Rohwer) Date: Mon, 13 Jan 2020 12:03:15 +0200 Subject: [SciPy-User] Building Python wheels with Fortran on Windows 10 under Python 3.8 Message-ID: An HTML attachment was scrubbed... URL:
From tomo.bbe at gmail.com Mon Jan 13 06:33:51 2020 From: tomo.bbe at gmail.com (James) Date: Mon, 13 Jan 2020 11:33:51 +0000 Subject: [SciPy-User] Building Python wheels with Fortran on Windows 10 under Python 3.8 In-Reply-To: References: Message-ID: Johann, Have you read the porting notes for 3.8? There's a change to how libraries on Windows are loaded. You will likely need to use the new `os.add_dll_directory()` function. https://docs.python.org/3/whatsnew/3.8.html#bpo-36085-whatsnew Regards, James On Mon, 13 Jan 2020 at 10:09, Johann Rohwer wrote: > Hi all > > First of all, my apologies if this is not the right place for this query, > if so, please direct me to a more appropriate forum. > > We have developed a Python package for computational systems biology that > wraps two Fortran extension modules: > https://github.com/PySCeS/pysces > > For the distribution of binary wheels under Windows I've been compiling > the Fortran extension modules with MinGW and using the MSVC++ compiler to > compile the Python extension, as implemented in numpy distutils and > explained in detail in this blog post: > https://pav.iki.fi/blog/2017-10-08/pywingfortran.html > > This has been working fine for previous versions of Python (3.6 and 3.7), > but under Python 3.8 the DLL won't load; I get the following error message > when trying to import the package: > > DLL load failed while importing pitcon: The specified module could not be > found. > INFO: Pitcon import failed: continuation not available > DLL load failed while importing nleq2: The specified module could not be > found. > INFO: NLEQ2 import failed: option not available > > This is using Python 3.8.1 downloaded from python.org under Windows 10 > 64-bit, also using 64-bit MinGW. > > When building the package under Python 3.8, the compile runs without error > and the DLL files get packaged in the .libs folder beneath the top-level > package folder, as for the other Python versions.
Only problem is that they > can't be loaded during import. > > Dependencies for the build (numpy, scipy, matplotlib) are installed using > pip into a clean install of the respective Python version: > matplotlib 3.1.2 > numpy 1.18.1 > scipy 1.4.1 > > Has anything changed in how this is handled under Python 3.8? I'm stymied > because it all works off exactly the same code base under Python 3.6 and > 3.7, and moreover the compile and wheel build completes without error, also > on Python 3.8. > > Thanks, > Johann > > > > The integrity and confidentiality of this e-mail is governed by these > terms / Die integriteit en vertroulikheid van hierdie e-pos word deur die > volgende bepalings gereël. http://www.sun.ac.za/emaildisclaimer > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From jr at sun.ac.za Mon Jan 13 07:03:11 2020 From: jr at sun.ac.za (Johann Rohwer) Date: Mon, 13 Jan 2020 14:03:11 +0200 Subject: [SciPy-User] Building Python wheels with Fortran on Windows 10 under Python 3.8 In-Reply-To: References: Message-ID: <9c447422-838c-76e2-39ba-aa521a302018@sun.ac.za> An HTML attachment was scrubbed... URL:
From mikofski at berkeley.edu Tue Jan 21 00:36:49 2020 From: mikofski at berkeley.edu (Dr. Mark Alexander Mikofski PhD) Date: Mon, 20 Jan 2020 21:36:49 -0800 Subject: [SciPy-User] [ANN] pvlib python v0.7.1: predicting power for solar energy Message-ID: pvlib has a new minor release, v0.7.1 Release Notes: https://pvlib-python.readthedocs.io/en/v0.7.1/whatsnew.html PyPI: https://pypi.org/project/pvlib/ Read the Docs: https://pvlib-python.readthedocs.io/en/latest/ GitHub: https://github.com/pvlib/pvlib-python New examples gallery: https://pvlib-python.readthedocs.io/en/stable/auto_examples/index.html -- Mark Mikofski, PhD (2005) *Fiat Lux* -------------- next part -------------- An HTML attachment was scrubbed... URL:
From pengyu.ut at gmail.com Tue Jan 28 22:09:14 2020 From: pengyu.ut at gmail.com (Peng Yu) Date: Tue, 28 Jan 2020 21:09:14 -0600 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? Message-ID: Suppose that I have a TSV file in the following format.
```
row_name	col_name	value
```
Is there an easy way to read it into a sparse matrix format in scipy? Thanks. I don't see such examples in the doc. https://docs.scipy.org/doc/scipy/reference/sparse.html -- Regards, Peng
From hturesson at gmail.com Tue Jan 28 22:36:13 2020 From: hturesson at gmail.com (Hjalmar Turesson) Date: Tue, 28 Jan 2020 22:36:13 -0500 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? In-Reply-To: References: Message-ID: Have you tried using Pandas? https://pandas.pydata.org/pandas-docs/stable/user_guide/sparse.html On Tue, Jan 28, 2020 at 10:09 PM Peng Yu wrote: > Suppose that I have a TSV file in the following format. > > ``` > row_name	col_name	value > ``` > > Is there an easy way to read it into a sparse matrix format in scipy? > Thanks. > > I don't see such examples in the doc. > > https://docs.scipy.org/doc/scipy/reference/sparse.html > > -- > Regards, > Peng > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
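A minimal sketch of the approach discussed in this thread (parse the three-column TSV with pandas and assemble a scipy.sparse COO matrix from the (row, col, value) triplets); the file name and column labels are assumptions for illustration, not taken from the original posts:

import pandas as pd
from scipy import sparse

# Read the three-column TSV; the file name and column labels are assumed.
df = pd.read_csv('data.tsv', sep='\t', header=None,
                 names=['row_name', 'col_name', 'value'])

# Map the string row/column labels to integer indices.
rows = df['row_name'].astype('category')
cols = df['col_name'].astype('category')

# Build the sparse matrix directly from the (value, (row, col)) triplets;
# only the listed entries are stored, so the zeros never enter memory.
mat = sparse.coo_matrix(
    (df['value'].to_numpy(),
     (rows.cat.codes.to_numpy(), cols.cat.codes.to_numpy())),
    shape=(len(rows.cat.categories), len(cols.cat.categories)),
)

X = mat.tocsr()   # CSR is what scikit-learn's .fit() usually expects

If the first two columns already contain integer indices, the astype('category') step can be dropped and the columns passed to coo_matrix directly.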
From pengyu.ut at gmail.com Tue Jan 28 22:44:34 2020 From: pengyu.ut at gmail.com (Peng Yu) Date: Tue, 28 Jan 2020 21:44:34 -0600 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? In-Reply-To: References: Message-ID: No. Which one to try? Just to be clear, I want to eventually use the sparse matrix with sklearn's .fit(). On 1/28/20, Hjalmar Turesson wrote: > Have you tried using Pandas? > > https://pandas.pydata.org/pandas-docs/stable/user_guide/sparse.html > > On Tue, Jan 28, 2020 at 10:09 PM Peng Yu wrote: > >> Suppose that I have a TSV file in the following format. >> >> ``` >> row_name	col_name	value >> ``` >> >> Is there an easy way to read it into a sparse matrix format in scipy? >> Thanks. >> >> I don't see such examples in the doc. >> >> https://docs.scipy.org/doc/scipy/reference/sparse.html >> >> -- >> Regards, >> Peng >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at python.org >> https://mail.python.org/mailman/listinfo/scipy-user >> > -- Regards, Peng
From guillaume at damcb.com Wed Jan 29 02:29:24 2020 From: guillaume at damcb.com (Guillaume Gay) Date: Wed, 29 Jan 2020 08:29:24 +0100 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? In-Reply-To: References: Message-ID: <3554fb84-d7eb-0659-e819-744990f3f888@damcb.com> You can either use pandas or numpy.read_csv. If the row_name and col_name columns contain the indices, you can then instantiate a scipy.sparse matrix with sparse.coo_matrix((val, (row, col))) https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_matrix.html#scipy-sparse-coo-matrix G. On 29/01/2020 04:44, Peng Yu wrote: > No. Which one to try? > > Just to be clear, I want to eventually use the sparse matrix with > sklearn's .fit(). > > On 1/28/20, Hjalmar Turesson wrote: >> Have you tried using Pandas? >> >> https://pandas.pydata.org/pandas-docs/stable/user_guide/sparse.html >> >> On Tue, Jan 28, 2020 at 10:09 PM Peng Yu wrote: >> >>> Suppose that I have a TSV file in the following format. >>> >>> ``` >>> row_name	col_name	value >>> ``` >>> >>> Is there an easy way to read it into a sparse matrix format in scipy? >>> Thanks. >>> >>> I don't see such examples in the doc. >>> >>> https://docs.scipy.org/doc/scipy/reference/sparse.html >>> >>> -- >>> Regards, >>> Peng >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at python.org >>> https://mail.python.org/mailman/listinfo/scipy-user >>> > -- Guillaume Gay, PhD Morphogénie Logiciels SAS http://morphogenie.fr 12 rue Camoin Jeune 13004 Marseille +336 51 95 94 00
From pengyu.ut at gmail.com Wed Jan 29 03:46:34 2020 From: pengyu.ut at gmail.com (Peng Yu) Date: Wed, 29 Jan 2020 02:46:34 -0600 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? In-Reply-To: <3554fb84-d7eb-0659-e819-744990f3f888@damcb.com> References: <3554fb84-d7eb-0659-e819-744990f3f888@damcb.com> Message-ID: But does pandas read_csv generate a dense matrix? (I don't find numpy read_csv. I only find numpy.loadtxt, which also only deals with dense matrices.) What is the purpose of reading into a dense matrix and then converting it to a sparse one? Isn't it better to directly read into a sparse matrix to save memory? Thanks. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.loadtxt.html > You can either use pandas or numpy.read_csv.
If the row_name and > col_name columns contain the indices, you can then instantiate a > scipy.sparse matrix with sparse.coo_matrix((val, (row, col))) > > https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_matrix.html#scipy-sparse-coo-matrix -- Regards, Peng
From takowl at gmail.com Wed Jan 29 04:19:12 2020 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 29 Jan 2020 09:19:12 +0000 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? In-Reply-To: References: <3554fb84-d7eb-0659-e819-744990f3f888@damcb.com> Message-ID: Reading the csv/tsv (either with pandas or numpy) doesn't create a matrix at all. It just gives you the data as it is in the file: values with associated coordinates. Then you would use something like scipy.sparse.coo_matrix() to convert that to a sparse matrix. On Wed, 29 Jan 2020 at 08:47, Peng Yu wrote: > But does pandas read_csv generate a dense matrix? (I don't find numpy > read_csv. I only find numpy.loadtxt, which also only deals with dense > matrices.) What is the purpose of reading into a dense matrix and then > converting it to a sparse one? Isn't it better to directly read into a sparse > matrix to save memory? Thanks. > > > https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html > > https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.loadtxt.html > > > You can either use pandas or numpy.read_csv. If the row_name and > > col_name columns contain the indices, you can then instantiate a > > scipy.sparse matrix with sparse.coo_matrix((val, (row, col))) > > > > > https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_matrix.html#scipy-sparse-coo-matrix > > -- > Regards, > Peng > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From pengyu.ut at gmail.com Wed Jan 29 04:33:45 2020 From: pengyu.ut at gmail.com (Peng Yu) Date: Wed, 29 Jan 2020 03:33:45 -0600 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? In-Reply-To: References: <3554fb84-d7eb-0659-e819-744990f3f888@damcb.com> Message-ID: > Reading the csv/tsv (either with pandas or numpy) doesn't create a matrix > at all. It just gives you the data as it is in the file: values with > associated coordinates. Then you would use something like > scipy.sparse.coo_matrix() to convert that to a sparse matrix. Where is it documented that pandas.read_csv doesn't generate the whole matrix? The return value is either of the two? """ DataFrame or TextParser A comma-separated values (csv) file is returned as two-dimensional data structure with labeled axes. """ Are you referring to "TextParser"? How do I control which one is returned? I don't see an option for it. Which function of numpy do you refer to specifically? numpy.loadtxt? It returns an ndarray, which would read a dense matrix into memory. -- Regards, Peng
From lingyihuu at gmail.com Wed Jan 29 04:44:27 2020 From: lingyihuu at gmail.com (Lingyi Hu) Date: Wed, 29 Jan 2020 17:44:27 +0800 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix?
In-Reply-To: References: <3554fb84-d7eb-0659-e819-744990f3f888@damcb.com> Message-ID: Hi Peng Yu, Seems like these links might be useful: https://stackoverflow.com/questions/1938894/csv-to-sparse-matrix-in-python https://gist.github.com/oddskool/27476a1e22df357de798 Should be easy to switch out csv for tsv parsing. Lingyi On Wed, Jan 29, 2020 at 5:34 PM Peng Yu wrote: > > Reading the csv/tsv (either with pandas or numpy) doesn't create a matrix > > at all. It just gives you the data as it is in the file: values with > > associated coordinates. Then you would use something like > > scipy.sparse.coo_matrix() to convert that to a sparse matrix. > > Where is it documented that pandas.read_csv doesn't generate the whole > matrix? The return value is either of the two? > > """ > DataFrame or TextParser > > A comma-separated values (csv) file is returned as two-dimensional > data structure with labeled axes. > """ > > Are you referring to "TextParser"? How do I control which one is returned? I > don't see an option for it. > > Which function of numpy do you refer to specifically? numpy.loadtxt? It > returns an ndarray, which would read a dense matrix into memory. > > -- > Regards, > Peng > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From takowl at gmail.com Wed Jan 29 04:56:45 2020 From: takowl at gmail.com (Thomas Kluyver) Date: Wed, 29 Jan 2020 09:56:45 +0000 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? In-Reply-To: References: <3554fb84-d7eb-0659-e819-744990f3f888@damcb.com> Message-ID: On Wed, 29 Jan 2020 at 09:34, Peng Yu wrote: > Where is it documented that pandas.read_csv doesn't generate the whole > matrix? The return value is either of the two? It returns a 2D data structure as in the rows and columns of your CSV file - so the shape will be (n_entries, 3). It doesn't try to interpret them as referring to entries in a matrix - you have to do that as a separate step. It's probably not exactly documented like this, because documentation doesn't usually say what a function *doesn't* do, unless it's a very common confusion. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL:
From lingyihuu at gmail.com Wed Jan 29 06:11:03 2020 From: lingyihuu at gmail.com (Lingyi Hu) Date: Wed, 29 Jan 2020 19:11:03 +0800 Subject: [SciPy-User] How to read row_name, col_name, value format TSV into a sparse matrix? In-Reply-To: References: <3554fb84-d7eb-0659-e819-744990f3f888@damcb.com> Message-ID: Thomas, Unless I'm misunderstanding, I think Peng Yu doesn't want to read the zeros (or empty values) from the tsv file into memory. I'm pretty sure pandas.read_csv reads your whole data into memory, zeros or not. There is no option to read it in a sparse format (only store the positions of the nonzero entries). So that doesn't solve the problem. I think you can also read it in chunks, call df.to_sparse to convert each chunk to a sparse matrix and concat them. I'm not sure if you've seen this: https://stackoverflow.com/questions/31888856/read-a-large-csv-into-a-sparse-pandas-dataframe-in-a-memory-efficient-way, but it might also offer some useful insights. On Wed, Jan 29, 2020 at 5:57 PM Thomas Kluyver wrote: > On Wed, 29 Jan 2020 at 09:34, Peng Yu wrote: >> Where is it documented that pandas.read_csv doesn't generate the whole >> matrix? The return value is either of the two? > > It returns a 2D data structure as in the rows and columns of your CSV file > - so the shape will be (n_entries, 3). It doesn't try to interpret them as > referring to entries in a matrix - you have to do that as a separate step. > > It's probably not exactly documented like this, because documentation > doesn't usually say what a function *doesn't* do, unless it's a very common > confusion. > > Thomas > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From melissawm at gmail.com Wed Jan 29 07:49:25 2020 From: melissawm at gmail.com (=?UTF-8?Q?Melissa_Mendon=C3=A7a?=) Date: Wed, 29 Jan 2020 09:49:25 -0300 Subject: [SciPy-User] SciPy 2020 Call for Submissions is open! In-Reply-To: References: Message-ID: Apologies for crossposting. ===== SciPy 2020, the 19th annual Scientific Computing with Python conference, will be held July 6-12, 2020 in Austin, Texas. The annual SciPy Conference brings together over 900 participants from industry, academia, and government to showcase their latest projects, learn from skilled users and developers, and collaborate on code development. The call for SciPy 2020 talks, posters, and tutorials is now open through February 11, 2020.
Talks and Posters (July 8-10, 2020)
In addition to the general track, this year will have the following specialized tracks, mini symposia, and sessions:
Tracks
- High Performance Python
- Machine Learning and Data Science
Mini Symposia
- Astronomy and Astrophysics
- Biology and Bioinformatics
- Earth, Ocean, Geo and Atmospheric Science
- Materials Science
Special Sessions
- Maintainers Track
- SciPy Tools Plenary Session
For additional details and instructions, please see the conference website.
Tutorials (July 6-7, 2020)
Tutorials should be focused on covering a well-defined topic in a hands-on manner. We are looking for awesome techniques or packages, helping new or advanced Python programmers develop better or faster scientific applications. We encourage submissions to be designed to allow at least 50% of the time for hands-on exercises even if this means the subject matter needs to be limited. Tutorials will be 4 hours in duration. In your tutorial application, you can indicate what prerequisite skills and knowledge will be needed for your tutorial, and the approximate expected level of knowledge of your students (i.e., beginner, intermediate, advanced). Instructors of accepted tutorials will receive a stipend. For examples of content and format, you can refer to tutorials from past SciPy tutorial sessions (SciPy 2018, SciPy 2019) and some accepted submissions. For additional details and instructions see the conference website. Submission page: https://easychair.org/conferences/?conf=scipy2020 Submission Deadline: February 11, 2020 - Melissa Weber Mendonça Diversity Committee Co-Chair -------------- next part -------------- An HTML attachment was scrubbed... URL:
From dufloth at gmail.com Thu Jan 30 23:28:10 2020 From: dufloth at gmail.com (Augusto Dufloth) Date: Fri, 31 Jan 2020 13:28:10 +0900 Subject: [SciPy-User] How to create a request to change a function? Message-ID: Hello, I have a request to change one of scipy's functions. I am trying to use odeint to integrate a set of equations of motion. My state equations rely on time-dependent inputs. And these inputs are numpy arrays.
When I do a flight path reconstruction, the inputs come from the inertial system of the aircraft and I plug them into the state equations, so I get the deterministic position of the aircraft and then I can compare it to the measured values. The issue is that I can't input an array. Searching stackoverflow I saw solutions using for-loops and assigning an arg for each time step. This is counterproductive and creates a huge calculation time for long arrays. I created my own C-dll to deal with this for-loop issue, but it's super counterproductive. Can we get an option where the input args are arrays of the same size as t, and, as the state is integrated, the index of the arg follows the states? Thanks for your help Augusto -- "The greatest challenge to any thinker is stating the problem in a way that will allow a solution." - Bertrand Russell -------------- next part -------------- An HTML attachment was scrubbed... URL:
From silva at lma.cnrs-mrs.fr Fri Jan 31 05:02:13 2020 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Fri, 31 Jan 2020 11:02:13 +0100 Subject: [SciPy-User] How to create a request to change a function? In-Reply-To: References: Message-ID: On Friday 31 January 2020, Augusto Dufloth wrote: > Hello, > > I have a request to change one of scipy's functions. > > I am trying to use odeint to integrate a set of equations of motion. > My state equations rely on time-dependent inputs. And these inputs > are numpy arrays. > > When I do a flight path reconstruction, the inputs come from the > inertial system of the aircraft and I plug them into the state > equations, so I get the deterministic position of the aircraft and > then I can compare it to the measured values. > > The issue is that I can't input an array. Searching stackoverflow I > saw solutions using for-loops and assigning an arg for each time > step. This is counterproductive and creates a huge calculation time > for long arrays. > > I created my own C-dll to deal with this for-loop issue, but it's > super counterproductive. > > Can we get an option where the input args are arrays of the same size as > t, and, as the state is integrated, the index of the arg follows the > states? Hi, As far as I understand, you want to use time-varying parameters (which includes right-hand side terms of ODEs) specified as numpy arrays (i.e., at discrete time values) while the ODEs are defined in the continuous-time domain. Specification of an extrapolation method is required for the well-posedness of the problem: the most trivial is the "sample-and-hold" process, which maintains values until the next sample step. One simple solution (maybe one that you already tested) is to use the ode class and its integrate method. Adapting the example given in the docs https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode

from scipy.integrate import ode

def f(t, y, arg1):
    return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]

def jac(t, y, arg1):
    return [[1j*arg1, 1], [0, -arg1*2*y[1]]]

r = ode(f, jac).set_integrator('zvode', method='bdf')

t = np.arange(0, 10, 1)
results = np.empty((len(t), 1 + len(y0)), dtype=float) + np.nan

r.set_initial_value([1.0j, 2.0], t[0])
for ind in range(len(t) - 1):
    r.set_f_params(param[ind]).set_jac_params(param[ind])
    tmp_result = r.integrate(t[ind+1])
    if not r.successful():
        break
    results[ind] = tmp_result

This introduces a reasonable overhead in performance while being quite readable. Best regards Fabrice -------------- next part -------------- An HTML attachment was scrubbed... URL:
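A self-contained variant of the sketch in Fabrice's message above, for readers who want to run it directly; the numpy import, the initial condition y0, and the placeholder param samples are assumptions added for illustration and are not part of the original message:

import numpy as np
from scipy.integrate import ode

def f(t, y, arg1):
    return [1j * arg1 * y[0] + y[1], -arg1 * y[1] ** 2]

def jac(t, y, arg1):
    return [[1j * arg1, 1], [0, -arg1 * 2 * y[1]]]

t = np.arange(0, 10, 1)                  # sample instants of the measured input
param = np.linspace(1.0, 2.0, len(t))    # placeholder time-varying parameter
y0 = [1.0j, 2.0]

r = ode(f, jac).set_integrator('zvode', method='bdf')
r.set_initial_value(y0, t[0])

# column 0 holds the time stamp, the remaining columns hold the state
results = np.full((len(t), 1 + len(y0)), np.nan, dtype=complex)
results[:, 0] = t
results[0, 1:] = y0
for ind in range(len(t) - 1):
    # sample-and-hold: the parameter is constant over [t[ind], t[ind+1]]
    r.set_f_params(param[ind]).set_jac_params(param[ind])
    results[ind + 1, 1:] = r.integrate(t[ind + 1])
    if not r.successful():
        break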
From silva at lma.cnrs-mrs.fr Fri Jan 31 10:16:52 2020 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Fri, 31 Jan 2020 16:16:52 +0100 Subject: [SciPy-User] How to create a request to change a function? In-Reply-To: References: Message-ID: <886c08fe7170d397bed4f067bb2e014db1fc771d.camel@lma.cnrs-mrs.fr> see https://pastebin.com/TKt12AEE for a readable version of the code sample On Friday 31 January 2020, Fabrice Silva wrote: > On Friday 31 January 2020, Augusto Dufloth wrote: > > Hello, > > > > I have a request to change one of scipy's functions. > > > > I am trying to use odeint to integrate a set of equations of motion. > > My state equations rely on time-dependent inputs. And these inputs > > are numpy arrays. > > > > When I do a flight path reconstruction, the inputs come from the > > inertial system of the aircraft and I plug them into the state > > equations, so I get the deterministic position of the aircraft and > > then I can compare it to the measured values. > > > > The issue is that I can't input an array. Searching stackoverflow I > > saw solutions using for-loops and assigning an arg for each time > > step. This is counterproductive and creates a huge calculation time > > for long arrays. > > > > I created my own C-dll to deal with this for-loop issue, but it's > > super counterproductive. > > > > Can we get an option where the input args are arrays of the same size as > > t, and, as the state is integrated, the index of the arg follows the > > states? > > Hi, As far as I understand, you want to use time-varying parameters > (which includes right-hand side terms of ODEs) specified as numpy > arrays (i.e., at discrete time values) while the ODEs are defined in the > continuous-time domain. Specification of an extrapolation method is > required for the well-posedness of the problem: the most trivial is the > "sample-and-hold" process, which maintains values until the next sample > step. > One simple solution (maybe one that you already tested) is to use the > ode class and its integrate method. Adapting the example given in the > docs > https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode
>
> from scipy.integrate import ode
>
> def f(t, y, arg1):
>     return [1j*arg1*y[0] + y[1], -arg1*y[1]**2]
>
> def jac(t, y, arg1):
>     return [[1j*arg1, 1], [0, -arg1*2*y[1]]]
>
> r = ode(f, jac).set_integrator('zvode', method='bdf')
>
> t = np.arange(0, 10, 1)
> results = np.empty((len(t), 1 + len(y0)), dtype=float) + np.nan
>
> r.set_initial_value([1.0j, 2.0], t[0])
> for ind in range(len(t) - 1):
>     r.set_f_params(param[ind]).set_jac_params(param[ind])
>     tmp_result = r.integrate(t[ind+1])
>     if not r.successful():
>         break
>     results[ind] = tmp_result
>
> This introduces a reasonable overhead in performance while being quite readable.
> Best regards
> Fabrice
-------------- next part -------------- An HTML attachment was scrubbed... URL:
Since you have to input as an argument a time array, it would be super useful to have the option to also input time-varying arguments of the same length as t. Checking stackoverflow you find that many people would benefit from this addition. Because of the lack of this feature, I had to create my own ODE as a C function. In a practical example the time to solve my system was reduced by 20 fold. As compared to the solution you provided (and I also tried in the past). I believe this could be even faster, but I lack the skills of code optimization. Kind regards On Sat, Feb 1, 2020 at 0:18 Fabrice Silva wrote: > see https://pastebin.com/TKt12AEE for a readable version of the code > sample > > Le vendredi 31 janvier 2020, Fabrice Silva a ?crit : > > Le vendredi 31 janvier 2020, Augusto Dufloth a ?crit : > > Hello, > > I have a request to change one of scipy?s function. > > I am trying to use odeint to integrate a set of equations of motion. My > state equations rely on time dependent inputs. And these inputs are numpy > arrays. > > When I do a flight path reconstruction, the inputs comes from the inertial > system of the aircraft and I plug it into the state equations, so I get the > deterministic position of the aircraft and than I can compare it to the > measured values. > > The issue is that I can?t input an array. Searching stackoverflow I saw > solutions using for-loops and assigning an arg for each time step. This is > counter productive and create a huge time calculation for long arrays. > > I created my own C-dll to deal with this for loop issue, but it?s super > counter productive. > > Can we get an option that the input args are arrays if same size of t, and > as the state is integrated, the index of the arg follows the states? > > > Hi, > As far as I understand, you want to use time-varying parameters (which > includes right-hand side terms of ODEs) specified as numpy array (i.e., at > discrete time values) while ODEs are defined in the continuous-time domain. > Specification of an extrapolation method is required for the well-posedness > of the problem: the most trivial is the "Sample-and-hold" process which > maintain values until next sample step. > > One simple solution (maybe one that you arleady tested) is to use the ode > class and its integrate method. > Adapting the example given in the docs > > https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode > > from scipy.integrate import ode > > > def f(t, y, arg1): > > return [1j*arg1*y[0] + y[1], -arg1*y[1]**2] > > def jac(t, y, arg1): > > return [[1j*arg1, 1], [0, -arg1*2*y[1]]] > > > r = ode(f, jac).set_integrator('zvode', method='bdf') > > > t = np.arange(0, 10, 1) > > results = np.empty((len(t), 1 + len(y0)), dtype=float) + np.nan > > > r.set_initial_value([1.0j, 2.0], t[0]) > > for ind in range(len(t) - 1): > > r.set_f_params(param[ind]).set_jac_params(param[ind]) > > tmp_result = r.integrate(t[ind+1]) > > if not r.successful(): > > break > > results[ind] = tmp_result > > > This introduced a reasonnable overhead in performance while begin quite > readable. > > Best regards > > Fabrice > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -- "The greatest challenge to any thinker is stating the problem in a way that will allow a solution." - Bertrand Russell -------------- next part -------------- An HTML attachment was scrubbed... URL: