[scikit-learn] GPR intervals and MCMC

federico vaggi vaggi.federico at gmail.com
Tue Nov 8 10:19:35 EST 2016


Hi,

If you want the full posterior distribution over the hyperparameter values,
there is a good example of how to do that with George + emcee, another GP
package for Python:

http://dan.iel.fm/george/current/user/hyper/
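
In case it helps, here is a rough sketch of the same idea using only
scikit-learn and emcee: the trick is to treat
GaussianProcessRegressor.log_marginal_likelihood as the log target density.
The toy data, the RBF + WhiteKernel kernel and the flat prior bounds below
are placeholders on my side, not part of the George example:

import numpy as np
import emcee
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy data -- replace with your own X / y.
rng = np.random.RandomState(0)
X = rng.rand(30, 1)
y = np.sin(3 * X[:, 0]) + 0.1 * rng.randn(30)

# Fit once so that kernel_ and log_marginal_likelihood are available.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)

def log_prob(theta):
    # theta lives in log-space (same convention as kernel_.theta);
    # flat prior on a wide box, purely as an illustration.
    if np.any(theta < -10) or np.any(theta > 10):
        return -np.inf
    return gpr.log_marginal_likelihood(theta)

ndim = gpr.kernel_.theta.shape[0]
nwalkers = 32
p0 = gpr.kernel_.theta + 1e-3 * rng.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 1000)

# Drop burn-in; each row is a posterior sample of the log-hyperparameters.
samples = sampler.chain[:, 200:, :].reshape(-1, ndim)

Averaging predictions over these samples (e.g. by refitting with
optimizer=None and the kernel fixed at each sampled theta via
clone_with_theta) then gives intervals that also reflect hyperparameter
uncertainty, rather than just the maximum-likelihood point estimate.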

On Tue, 8 Nov 2016 at 16:10 Quaglino Alessio <alessio.quaglino at usi.ch>
wrote:

> Hello,
>
> I am using scikit-learn 0.18 for doing GP regressions. I really like it
> and all works great, but I am having doubts concerning the confidence
> intervals computed by predict(X,return_std=True):
>
> - Are they true confidence intervals (i.e. for the mean / latent function),
> or are they in fact prediction intervals? I tried computing the prediction
> intervals using sample_y(X) and I get the same answer as that returned by
> predict(X, return_std=True).
>
> - My understanding is therefore that scikit-learn is not fully Bayesian,
> i.e. it does not compute probability distributions for the hyperparameters,
> but rather uses the values that maximize the marginal likelihood?
>
> - If I want the confidence interval, is my best option to use an external
> MCMC sampler such as PyMC?
>
> Thank you in advance!
>
> Regards,
> -------------------------------------------------
> Dr. Alessio Quaglino
> Postdoctoral Researcher
> Institute of Computational Science
> Università della Svizzera Italiana
>
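
One quick side note on the intervals question above: the fact that
sample_y(X) and predict(X, return_std=True) agree is expected, since both
describe the same Gaussian posterior at the query points. A minimal check
(the toy data and the RBF + WhiteKernel kernel are again just assumptions
on my side):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
X = rng.rand(30, 1)
y = np.sin(3 * X[:, 0]) + 0.1 * rng.randn(30)
X_test = np.linspace(0, 1, 50)[:, None]

gpr = GaussianProcessRegressor(kernel=1.0 * RBF() + WhiteKernel()).fit(X, y)

mean, std = gpr.predict(X_test, return_std=True)
draws = gpr.sample_y(X_test, n_samples=2000, random_state=0)  # (50, 2000)

# The Monte Carlo std of the draws matches return_std up to sampling noise.
print(np.max(np.abs(std - draws.std(axis=1))))

Whether that std covers the observation noise (a prediction interval) or
only the latent function (a confidence interval) depends on how the noise
enters the model: a WhiteKernel term contributes to the predictive variance
at the test points, whereas the alpha parameter of GaussianProcessRegressor
is only added to the diagonal of the training kernel matrix. The
hyperparameters themselves are indeed point estimates obtained by
maximizing the log-marginal likelihood, which is where the emcee sketch
above comes in if you want to integrate them out.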