From pierre.debuyl at kuleuven.be Thu Nov 2 15:38:24 2017
From: pierre.debuyl at kuleuven.be (Pierre de Buyl)
Date: Thu, 2 Nov 2017 20:38:24 +0100
Subject: [SciPy-User] Python @ FOSDEM 2018
Message-ID: <20171102193824.GA24760@pi-x230>

Dear SciPythonists and NumPythonists,

FOSDEM is a free event for software developers to meet, share ideas and collaborate. Every year, 6,500+ developers of free and open source software from all over the world gather at the event in Brussels.

For FOSDEM 2018, we will try the new concept of a virtual Python devroom: there is no dedicated Python room; instead, we promote the presence of Python in all devrooms. We hope to have at least one Python talk in every devroom (yes, even in the Perl, Ada, Go and Rust devrooms ;-) ).

How can you help highlight the Python community at Python-FOSDEM 2018? Propose your talk in the closest related devroom: https://fosdem.org/2018/news/2017-10-04-accepted-developer-rooms/

Not all devrooms are language-specific, and a number of topics come to mind for data and science participants:

- "Monitoring & Cloud devroom" https://lists.fosdem.org/pipermail/fosdem/2017-October/002631.html
- "HPC, Big Data, and Data Science" https://lists.fosdem.org/pipermail/fosdem/2017-October/002615.html
- "LLVM toolchain" https://lists.fosdem.org/pipermail/fosdem/2017-October/002624.html

Most calls for contributions end around the 24th of November. Send a copy of your proposal to python-devroom AT lists.fosdem DOT org. We will publish a dedicated schedule for Python on https://python-fosdem.org/ and at our stand. A dinner will also be organized; stay tuned.

We are waiting for your talk proposals.
The Python-FOSDEM committee

From stevenbocco at gmail.com Thu Nov 16 12:06:37 2017
From: stevenbocco at gmail.com (Steven Bocco)
Date: Thu, 16 Nov 2017 12:06:37 -0500
Subject: [SciPy-User] Announcing Theano 1.0.0
Message-ID:

Announcing Theano 1.0.0

This is a major release, with lots of new features, bug fixes, and some interface changes (deprecated or potentially misleading features were removed). Upgrading to Theano 1.0.0 is recommended for everyone, but you should first make sure that your code does not raise deprecation warnings with Theano 0.9*. Otherwise, results can change, or warnings may have been turned into errors. For those using the bleeding-edge version in the git repository, we encourage you to update to the rel-1.0.0 tag.

What's New

Highlights (since 0.9.0):

- Announced that MILA will stop developing Theano
- conda packages now available and updated in our own conda channel mila-udem. To install it: conda install -c mila-udem theano pygpu
- Support NumPy 1.13
- Support pygpu 0.7
- Moved Python 3.* minimum supported version from 3.3 to 3.4
- Added conda recipe
- Replaced deprecated package nose-parameterized with the up-to-date package parameterized in Theano's requirements
- Theano now internally uses sha256 instead of md5, to work on systems that forbid md5 for security reasons
- Removed old GPU backend theano.sandbox.cuda. New backend theano.gpuarray is now the official GPU backend
- Make sure MKL uses GNU OpenMP
- NB: Matrix dot product (gemm) with MKL from conda could return wrong results in some cases. We have reported the problem upstream, and we have a workaround that raises an error with information about how to fix it.
- Improved elemwise operations
  - Sped up elemwise ops based on SciPy
  - Fixed memory leaks related to elemwise ops on GPU
- Scan improvements
  - Sped up Theano scan compilation and gradient computation
  - Added meaningful message when inputs to scan are missing
- Sped up graph toposort algorithm
- Faster C compilation, by making heavy use of a new interface for op params
- Faster optimization step, with new optional destroy handler
- Documentation updated and more complete
  - Added documentation for RNNBlock
  - Updated conv documentation
- Support more debuggers for PdbBreakpoint
- Many bug fixes, crash fixes and warning improvements

Interface changes:

- Merged duplicated diagonal functions into two ops: ExtractDiag (extract a diagonal to a vector) and AllocDiag (set a vector as the diagonal of an empty array)
- Removed op ExtractDiag from theano.tensor.nlinalg; it now lives only in theano.tensor.basic
- Generalized AllocDiag for any non-scalar input
- Added new parameter target for MRG functions
- Renamed MultinomialWOReplacementFromUniform to ChoiceFromUniform
- Changed grad() method to L_op() in ops that need the outputs to compute gradients
- Removed or deprecated Theano flags:
  - cublas.lib
  - cuda.enabled
  - enable_initial_driver_test
  - gpuarray.sync
  - home
  - lib.cnmem
  - nvcc.* flags
  - pycuda.init

Convolution updates:

- Implemented separable convolutions for 2D and 3D
- Implemented grouped convolutions for 2D and 3D
- Added dilated causal convolutions for 2D
- Added unshared convolutions
- Implemented fractional bilinear upsampling
- Removed old conv3d interface
- Deprecated old conv2d interface

GPU:

- Added a meta-optimizer to select the fastest GPU implementations for convolutions
- Prevent GPU initialization when not required
- Added disk caching option for kernels
- Added method my_theano_function.sync_shared() to help synchronize GPU Theano functions
- Added useful stats for GPU in profile mode
- Added Cholesky op based on the cusolver backend
- Added GPU ops based on the magma library: SVD, matrix inverse, QR, Cholesky and eigh
- Added GpuCublasTriangularSolve
- Added atomic addition and exchange for long long values in GpuAdvancedIncSubtensor1_dev20
- Support log-gamma function for all non-complex types
- Support GPU SoftMax in both OpenCL and CUDA
- Support offset parameter k for GpuEye
- CrossentropyCategorical1Hot and its gradient are now lifted to GPU
- cuDNN:
  - Official support for v6.* and v7.*
  - Added spatial transformation operation based on cuDNN
  - Updated and improved caching system for runtime-chosen cuDNN convolution algorithms
  - Support cuDNN v7 tensor core operations for convolutions with runtime-timed algorithms
  - Better support and loading on Windows and Mac
  - Support cuDNN v6 dilated convolutions
  - Support cuDNN v6 reductions for contiguous inputs
  - Optimized SUM(x^2), SUM(ABS(X)) and MAX(ABS(X)) operations with cuDNN reductions
  - Added new Theano flags cuda.include_path, dnn.base_path and dnn.bin_path to help configure Theano when CUDA and cuDNN cannot be found automatically
  - Extended Theano flag dnn.enabled with new option no_check to help speed up cuDNN importation
  - Disallowed float16 precision for convolution gradients
  - Fixed memory alignment detection
  - Added profiling in C debug mode (with Theano flag cmodule.debug=True)
  - Added Python scripts to help test cuDNN convolutions
  - Automatic addition of cuDNN DLL path to PATH environment variable on Windows
- Updated float16 support
  - Added documentation for GPU float16 ops
  - Support float16 for GpuGemmBatch
  - Started to use float32 precision for computations that don't support float16 on GPU

New features:

- Implemented truncated normal distribution with Box-Muller transform
- Added L_op() overriding option for OpFromGraph
- Added NumPy C-API based fallback implementation for [sd]gemv_ and [sd]dot_
- Implemented topk and argtopk on CPU and GPU
- Implemented max() and min() functions for boolean and unsigned integer types
- Added tensor6() and tensor7() in the theano.tensor module
- Added boolean indexing for sub-tensors
- Added covariance matrix function theano.tensor.cov
- Added a wrapper for Baidu's CTC cost and gradient functions
- Added scalar and elemwise CPU ops for the modified Bessel functions of order 0 and 1 from scipy.special
- Added Scaled Exponential Linear Unit (SELU) activation
- Added sigmoid_binary_crossentropy function
- Added trigamma function
- Added unravel_index and ravel_multi_index functions on CPU
- Added modes half and full for Images2Neibs ops
- Implemented gradient for AbstractBatchNormTrainGrad
- Implemented gradient for the matrix pseudoinverse op
- Added new prop replace for ChoiceFromUniform op
- Added new prop on_error for CPU Cholesky op
- Added new Theano flag deterministic to help control how Theano optimizes certain ops that have deterministic versions. Currently used for subtensor ops only.
- Added new Theano flag cycle_detection to speed up the optimization step by reducing the time spent in inplace optimizations
- Added new Theano flag check_stack_trace to help check the stack trace during the optimization process
- Added new Theano flag cmodule.debug to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
- Added new Theano flag pickle_test_value to help disable pickling of test values

Others:

- Kept stack trace for optimizations in new GPU backend
- Added deprecation warning for the softmax and logsoftmax vector case
- Added a warning to announce that a C++ compiler will become mandatory in the next Theano release, 0.11
- Added R_op() for ZeroGrad
- Added description for rnnblock

Other more detailed changes:

- Fixed invalid casts and index overflows in theano.tensor.signal.pool
- Fixed gradient error for elemwise minimum and maximum when the compared values are the same
- Fixed gradient for ARange
- Removed ViewOp subclass during optimization
- Removed useless warning when profile is manually disabled
- Added tests for abstract conv
- Added options for disconnected_outputs to Rop
- Removed theano/compat/six.py
- Removed COp.get_op_params()
- Support lists of strings for Op.c_support_code(), to help avoid duplicating support code
- Macro names provided for array properties are now standardized in both CPU and GPU C code
- Moved all C code files into a separate folder c_code in every Theano module
- Many improvements for Travis CI tests (with better splitting for faster testing)
- Many improvements for Jenkins CI tests: daily testing on Mac and Windows in addition to Linux

Download and Install

You can download Theano from http://pypi.python.org/pypi/Theano

Installation instructions are available at http://deeplearning.net/software/theano/install.html

Description

Theano is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It is built on top of NumPy. Theano features:

- tight integration with NumPy: a similar interface to NumPy's; numpy.ndarrays are also used internally in Theano-compiled functions
- transparent use of a GPU: perform data-intensive computations much faster than on a CPU
- efficient symbolic differentiation: Theano can compute derivatives for functions of one or many inputs
- speed and stability optimizations: avoid nasty bugs when computing expressions such as log(1 + exp(x)) for large values of x
- dynamic C code generation: evaluate expressions faster
- extensive unit-testing and self-verification: includes tools for detecting and diagnosing bugs and/or potential problems

Theano has been powering large-scale computationally intensive scientific research since 2007, but it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).

Resources

About Theano: http://deeplearning.net/software/theano/
Theano-related projects: http://github.com/Theano/Theano/wiki/Related-projects
About NumPy: http://numpy.scipy.org/
About SciPy: http://www.scipy.org/
Machine Learning Tutorial with Theano on Deep Architectures: http://deeplearning.net/tutorial/

Acknowledgments

I would like to thank all contributors of Theano. Since release 0.9.0, many people have helped, notably (in alphabetical order): Aarni Koskela, Adam Becker, Adam Geitgey, Adrian Keet, Adrian Seyboldt, Aleksandar Botev, Alexander Matyasko, amrithasuresh, Andrei Costinescu, Anirudh Goyal, Anmol Sahoo, Arnaud Bergeron, Bogdan Budescu, Boris Fomitchev, Cesar Laurent, Chiheb Trabelsi, Chong Wu, dareneiri, Daren Eiri, Dzmitry Bahdanau, erakra, Faruk Ahmed, Florian Bordes, fo40225, Frederic Bastien, Gabe Schwartz, Ghislain Antony Vaillant, Gijs van Tulder, Holger Kohr, Jan Schlüter, Jayanth Koushik, Jeff Donahue, jhelie, João Victor Tozatti Risso, Joseph Paul Cohen, Juan Camilo Gamboa Higuera, Laurent Dinh, Lilian Besson, lrast, Lv Tao, Matt Graham, Michael Manukyan, Mohamed Ishmael Diwan Belghazi, Mohammed Affan, morrme, mrTsjolder, Murugesh Marvel, naitonium, NALEPA, Nan Jiang, Pascal Lamblin, Ramana Subramanyam, Rebecca N. Palmer, Reyhane Askari, Saizheng Zhang, Shawn Tan, Shubh Vachher, Simon Lefrancois, Sina Honari, Steven Bocco, Tegan Maharaj, Thomas George, Tim Cooijmans, Vikram, vipulraheja, wyjw, Xavier Bouthillier, xiaoqie, Yikang Shen, Zhouhan LIN, Zotov Yuriy.

Also, thank you to all NumPy and SciPy developers, as Theano builds on their strengths.

All questions/comments are always welcome on the Theano mailing lists ( http://deeplearning.net/software/theano/#community )

--
Steven Bocco

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From indiajoe at gmail.com Sun Nov 19 15:45:10 2017
From: indiajoe at gmail.com (Joe P Ninan)
Date: Mon, 20 Nov 2017 02:15:10 +0530
Subject: [SciPy-User] How to use scipy/fitpack Fortran subroutines from Fortran code
Message-ID:

Hi,
I want to use some of the Fortran subroutines in scipy/fitpack directly from a Fortran code (mainly the spline routines), which I later plan to convert to a Python-callable module using f2py.

What is the best way to access these Fortran routines from another Fortran code? I want to avoid copy-pasting the subroutines unnecessarily into my Fortran code, if I can use the libraries from the SciPy installation on a computer.
-cheers
joe

--
/---------------------------------------------------------------
"GNU/Linux: because a PC is a terrible thing to waste" - GNU Generation

************************************************
Joe Philip Ninan
Postdoctoral Researcher
525 Davey Lab, Dept. of Astronomy & Astrophysics
The Pennsylvania State University
University Park, PA-16802
------------------------------------------------------------
Website: https://indiajoe.gitlab.io/
My GnuPG Public Key: https://indiajoe.gitlab.io/files/JPN_public.key

From mail at brettgilio.com Sun Nov 19 17:24:19 2017
From: mail at brettgilio.com (Brett M. Gilio)
Date: Sun, 19 Nov 2017 16:24:19 -0600
Subject: [SciPy-User] How to use scipy/fitpack Fortran subroutines from Fortran code
In-Reply-To:
References:
Message-ID:

I would also like to know about this, thanks for the question Joe.

BMG

Brett M. Gilio
B.S. Biological Sciences
B.M. Music Composition
http://www.brettgilio.com/
"Sometimes the obvious is the enemy of the true." - G. Stolzenberg

On 11/19/2017 02:45 PM, Joe P Ninan wrote:
> Hi,
> I want to use some of the Fortran subroutines in scipy/fitpack
> directly from a Fortran code. (mainly the spline routines)
> Which I later plan to convert to a python callable module using f2py
>
> What is the best way to access these Fortran routines from another
> Fortran code?
> I want to avoid copy pasting the subroutines unnecessarily into my
> fortran code, if I can use the libraries from scipy installation on a
> computer.
> -cheers
> joe
>
> --
> /---------------------------------------------------------------
> "GNU/Linux: because a PC is a terrible thing to waste" - GNU Generation
>
> ************************************************
> Joe Philip Ninan
> Postdoctoral Researcher
> 525 Davey Lab,
> Dept. of Astronomy & Astrophysics
> The Pennsylvania State University
> University Park, PA-16802
> ------------------------------------------------------------
> Website: https://indiajoe.gitlab.io/
> My GnuPG Public Key: https://indiajoe.gitlab.io/files/JPN_public.key
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at python.org
> https://mail.python.org/mailman/listinfo/scipy-user

From njs at pobox.com Sun Nov 19 18:53:29 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 19 Nov 2017 15:53:29 -0800
Subject: [SciPy-User] How to use scipy/fitpack Fortran subroutines from Fortran code
In-Reply-To:
References:
Message-ID:

There is no good way, sorry.
Your best option is to copy the files into your project's source tree. Fortunately, they don't change very often...

On Nov 19, 2017 12:46 PM, "Joe P Ninan" wrote:
> Hi,
> I want to use some of the Fortran subroutines in scipy/fitpack directly
> from a Fortran code. (mainly the spline routines)
> Which I later plan to convert to a python callable module using f2py
>
> What is the best way to access these Fortran routines from another Fortran
> code?
> I want to avoid copy pasting the subroutines unnecessarily into my fortran
> code, if I can use the libraries from scipy installation on a computer.
> -cheers
> joe
>
> --
> /---------------------------------------------------------------
> "GNU/Linux: because a PC is a terrible thing to waste" - GNU Generation
>
> ************************************************
> Joe Philip Ninan
> Postdoctoral Researcher
> 525 Davey Lab,
> Dept. of Astronomy & Astrophysics
> The Pennsylvania State University
> University Park, PA-16802
> ------------------------------------------------------------
> Website: https://indiajoe.gitlab.io/
> My GnuPG Public Key: https://indiajoe.gitlab.io/files/JPN_public.key
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at python.org
> https://mail.python.org/mailman/listinfo/scipy-user
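For readers who mainly want the fitpack spline routines and can stay on the Python side, a minimal sketch of the usual route: SciPy already wraps these Fortran subroutines in scipy.interpolate, where splrep (a wrapper around fitpack's curfit) fits a B-spline representation and splev (wrapping fitpack's splev) evaluates it. This does not answer the direct Fortran-to-Fortran linking question above; copying the .f sources into your own tree, as suggested, remains the practical option there.

```python
# Sketch: using the fitpack spline routines through SciPy's wrappers.
# splrep fits a B-spline to (x, y); splev evaluates the fitted spline.
import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x)

# Cubic interpolating spline (s=0 forces interpolation through the data).
# tck is the (knots, coefficients, degree) triple used by fitpack.
tck = splrep(x, y, k=3, s=0)

# Evaluate the spline on a finer grid.
x_new = np.linspace(0.0, 2.0 * np.pi, 200)
y_new = splev(x_new, tck)
```

For the f2py path mentioned in the question, the fitpack .f sources bundled with SciPy (under scipy/interpolate/fitpack/ in the source tree, at the time of this thread) can be copied and fed to f2py directly, since they are plain Fortran 77 with no external dependencies.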