[scikit-learn] meta-estimator for multiple MLPRegressor

Sebastian Raschka se.raschka at gmail.com
Sat Jan 7 13:27:21 EST 2017


Hi, Thomas,

the VotingClassifier can combine different models via majority voting among their predictions. Unfortunately, it refits the classifiers (after cloning them); I think we implemented it this way to keep it compatible with GridSearchCV and so forth. However, I have a version of the estimator that you can initialize with “refit=False” to avoid the refitting, if that helps: http://rasbt.github.io/mlxtend/user_guide/classifier/EnsembleVoteClassifier/#example-5-using-pre-fitted-classifiers
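
Roughly, the pre-fitted usage looks like this (a minimal sketch only; the concrete classifiers and the iris data are just placeholders, see the linked example for the exact details):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from mlxtend.classifier import EnsembleVoteClassifier

    X, y = load_iris(return_X_y=True)

    # fit the individual classifiers beforehand
    clf1 = LogisticRegression(random_state=1).fit(X, y)
    clf2 = GaussianNB().fit(X, y)

    # refit=False uses the classifiers as they are, instead of cloning
    # and refitting them inside the ensemble's fit()
    eclf = EnsembleVoteClassifier(clfs=[clf1, clf2], voting='soft', refit=False)
    eclf.fit(X, y)  # no refitting of clf1/clf2 happens here
    print(eclf.predict(X[:5]))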

Best,
Sebastian



> On Jan 7, 2017, at 11:15 AM, Thomas Evangelidis <tevang3 at gmail.com> wrote:
> 
> Greetings,
> 
> I have trained many MLPRegressors using different random_state values and estimated the R^2 using cross-validation. Now I want to combine the top 10% of them to get more accurate predictions. Is there a meta-estimator that can take a few precomputed MLPRegressors as input and give consensus predictions? Can the BaggingRegressor do this job using MLPRegressors as input?
> 
> Thanks in advance for any hint.
> Thomas
> 
> 
> -- 
> ======================================================================
> Thomas Evangelidis
> Research Specialist
> CEITEC - Central European Institute of Technology
> Masaryk University
> Kamenice 5/A35/1S081, 
> 62500 Brno, Czech Republic 
> 
> email: tevang at pharm.uoa.gr
>          	tevang3 at gmail.com
> 
> website: https://sites.google.com/site/thomasevangelidishomepage/
> 
> 
> _______________________________________________
> scikit-learn mailing list
> scikit-learn at python.org
> https://mail.python.org/mailman/listinfo/scikit-learn
