From contact at nicholasearl.me Wed Sep 26 12:44:23 2018
From: contact at nicholasearl.me (Nicholas Earl)
Date: Wed, 26 Sep 2018 12:44:23 -0400
Subject: [AstroPy] [ANN] Specutils v0.4 Release
Message-ID: <1537980263.3592996.1521589000.26755F88@webmail.messagingengine.com>

Dear colleagues,

The specutils developers are happy to announce the release of version 0.4 of the specutils package. You can install specutils with conda:

$ conda install -c conda-forge specutils

or with pip:

$ pip install specutils

Specutils is an Astropy coordinated package providing a toolbox to represent, analyze, and manipulate spectroscopic data. For more context on the ecosystem specutils is meant to integrate with, see APE13: https://github.com/astropy/astropy-APEs/blob/master/APE13.rst

This version of the package contains core data structures for spectra in Python, utilities for loading spectra in common formats, parsing of associated WCS, arithmetic, uncertainty handling, spectroscopic analysis and manipulation tools, integrated modeling and fitting, and a robust spectral region interface.

This latest release represents a complete refactor of the package from previous iterations and supports Python 3.5+. The API may therefore undergo some changes in the next few versions, based on your feedback. That said, this is the first version of specutils that we consider "science-grade" - that is, containing many of the basic tools required for spectroscopic analysis. We encourage you to try it out, leave comments in the GitHub issue tracker, or implement your own improvements via pull request!

If you do use specutils for analysis that is published, we ask that you cite specutils' Zenodo DOI - see https://doi.org/10.5281/zenodo.1421356 for BibTeX templates for all versions.

Code: https://github.com/astropy/specutils
Documentation: https://specutils.readthedocs.io
Installation: https://specutils.readthedocs.io/en/latest/installation.html

Cheers,
Nicholas Earl
Erik Tollerud
Steve Crawford
Adam Ginsburg
and the specutils contributors


From npross at roe.ac.uk Sat Sep 29 05:27:46 2018
From: npross at roe.ac.uk (Nicholas Ross)
Date: Sat, 29 Sep 2018 09:27:46 +0000
Subject: [AstroPy] Intelligent averaging of time series data
In-Reply-To:
References:
Message-ID: <3844A3EB-492A-463C-A4E1-79759201814B@roe.ac.uk>

Hi Astropy,

I have what seems like a very easy problem, but I haven't found an elegant solution yet.

I have the following (time-series) data:

t = [5.13, 5.27, 5.40, 5.46, 190.99, 191.13, 191.267, 368.70, 368.83, 368.90, 368.93]
y = [17.17, 17.18, 17.014, 17.104, 16.981, 16.96, 16.85, 17.27, 17.66, 17.76, 18.01]

i.e. groups of data in short (time) intervals, separated cleanly by long time gaps. I'm looking for a simple method that will intelligently average these together; sort of a 'Bayesian blocks' approach, but for non-histogram data.

A return of

t_prime = [5.315, 191.129, 368.84]
y_prime = [17.117, 16.930, 17.660]

is the first-order result I'd be after, but with the option to include weights/weighted data in more sophisticated analyses. The suggested approaches to this problem are a simple moving average, or perhaps a numpy convolution, but I'm looking for something a bit more elegant that can generalize to larger, similar, but not identical datasets.

Best,
Nic
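
A minimal sketch of the gap-based grouping described in the question above, assuming only numpy: a new group starts wherever consecutive times are separated by more than a chosen threshold, and each group is then (optionally weighted-)averaged. The group_average name and the 50-unit gap threshold are illustrative choices, not an existing library routine.

import numpy as np

def group_average(t, y, gap=50.0, weights=None):
    """Average y within groups of t separated by steps larger than `gap`."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    # A new group starts after every step in t larger than the threshold.
    breaks = np.where(np.diff(t) > gap)[0] + 1
    t_prime = np.array([g.mean() for g in np.split(t, breaks)])
    y_prime = np.array([np.average(g, weights=wg)
                        for g, wg in zip(np.split(y, breaks), np.split(w, breaks))])
    return t_prime, y_prime

t = [5.13, 5.27, 5.40, 5.46, 190.99, 191.13, 191.267, 368.70, 368.83, 368.90, 368.93]
y = [17.17, 17.18, 17.014, 17.104, 16.981, 16.96, 16.85, 17.27, 17.66, 17.76, 18.01]

t_prime, y_prime = group_average(t, y)
print(t_prime)  # per-group means of t: 5.315, 191.129, 368.84
print(y_prime)  # per-group means of y: 17.117, 16.930, 17.675

For this dataset any threshold between the intra-group spacing (~0.2) and the inter-group gaps (~180) yields the same three groups; choosing that threshold automatically, for example from the distribution of np.diff(t), is where a Bayesian-blocks-style approach would come in.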

From peter at newton.cx Sat Sep 29 15:04:46 2018
From: peter at newton.cx (Peter Williams)
Date: Sat, 29 Sep 2018 15:04:46 -0400
Subject: [AstroPy] Intelligent averaging of time series data
In-Reply-To: <3844A3EB-492A-463C-A4E1-79759201814B@roe.ac.uk>
References: <3844A3EB-492A-463C-A4E1-79759201814B@roe.ac.uk>
Message-ID: <5fc2e3421f181da0a40164c535658ecf26d435fa.camel@newton.cx>

Hi Nic,

I have some code that does this sort of thing in my "pwkit" package of astro-related Python utilities. The "documentation" is here, but it's sparse:

https://pwkit.readthedocs.io/en/latest/foundations/numerical/#convenience-functions-for-pandas-dataframe-objects

I should really put together some examples, but unfortunately haven't done so yet, and that's not something I expect to be able to spend time on in the near future. Given that, it probably is not enough of a turn-key solution for you, but I thought I'd at least mention its existence. FWIW, here's the source:

https://github.com/pkgw/pwkit/blob/master/pwkit/numutil.py#L219

Peter

On Sat, 2018-09-29 at 09:27 +0000, Nicholas Ross wrote:
> Hi Astropy,
>
> I have what seems like a very easy problem, but I haven't found an elegant solution yet.
>
> I have the following (time-series) data:
> t = [5.13, 5.27, 5.40, 5.46, 190.99, 191.13, 191.267, 368.70, 368.83, 368.90, 368.93]
> y = [17.17, 17.18, 17.014, 17.104, 16.981, 16.96, 16.85, 17.27, 17.66, 17.76, 18.01]
> i.e. groups of data in short (time) intervals, separated cleanly by long time gaps.
> I'm looking for a simple method that will intelligently average these together; sort of a 'Bayesian blocks' approach, but for non-histogram data.
>
> A return of
> t_prime = [5.315, 191.129, 368.84]
> y_prime = [17.117, 16.930, 17.660]
> is the first-order result I'd be after, but with the option to include weights/weighted data in more sophisticated analyses.
> The suggested approaches to this problem are a simple moving average, or perhaps a numpy convolution, but I'm looking for something a bit more elegant that can generalize to larger, similar, but not identical datasets.
>
> Best,
> Nic
>
> _______________________________________________
> AstroPy mailing list
> AstroPy at python.org
> https://mail.python.org/mailman/listinfo/astropy


From hpwarren at gmail.com Sun Sep 30 05:16:25 2018
From: hpwarren at gmail.com (Harry Warren)
Date: Sun, 30 Sep 2018 05:16:25 -0400
Subject: [AstroPy] Intelligent averaging of time series data
Message-ID:

This seems like a good application for clustering algorithms (see below). Of course, the nuanced part is determining the number of clusters.

Harry Warren

import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans

t = (5.13, 5.27, 5.40, 5.46, 190.99, 191.13, 191.267, 368.70, 368.83, 368.90, 368.93)
y = (17.17, 17.18, 17.014, 17.104, 16.981, 16.96, 16.85, 17.27, 17.66, 17.76, 18.01)

# Stack the two series into an (n_samples, 2) array for scikit-learn.
X = np.array(list(zip(t, y)))

# Fit k-means with three clusters and label each point; the cluster
# centers are the averaged (t, y) values for each group.
kmeans = KMeans(n_clusters=3)
y_km = kmeans.fit_predict(X)

print(kmeans.cluster_centers_)
print(y_km)

# Plot each cluster in its own color.
plt.plot(X[y_km == 0, 0], X[y_km == 0, 1], 'o', c='red')
plt.plot(X[y_km == 1, 0], X[y_km == 1, 1], 'o', c='green')
plt.plot(X[y_km == 2, 0], X[y_km == 2, 1], 'o', c='blue')
plt.show()
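
As a follow-up to the point above about determining the number of clusters, a minimal sketch that scans a small range of k and keeps the value with the highest silhouette score; the scanned range (2-6) is an arbitrary choice for this toy dataset, and only scikit-learn's KMeans and silhouette_score are assumed beyond numpy.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

t = [5.13, 5.27, 5.40, 5.46, 190.99, 191.13, 191.267, 368.70, 368.83, 368.90, 368.93]
y = [17.17, 17.18, 17.014, 17.104, 16.981, 16.96, 16.85, 17.27, 17.66, 17.76, 18.01]
X = np.column_stack([t, y])

# Try a few candidate cluster counts and keep the best-scoring one.
best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(best_k, round(best_score, 3))  # k = 3 scores highest for these data

For strongly separated groups like these, the silhouette score peaks sharply at the natural number of groups (k = 3 here); for noisier data, a density-based method such as sklearn.cluster.DBSCAN, which does not require the number of clusters up front, may be a better fit.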