[scikit-learn] meta-estimator for multiple MLPRegressor

Jacob Schreiber jmschreiber91 at gmail.com
Sat Jan 7 18:04:08 EST 2017


If you have such a small number of observations (with a far larger
feature space), why do you think you can accurately train not just a
single MLP, but an ensemble of them, without overfitting dramatically?
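
To make the concern concrete, here is a minimal sketch (synthetic data,
with sizes invented to match what is described below) of how the gap
between training and leave-20%-out scores tends to show up in
scikit-learn:

    # Illustration only: the data are synthetic and the network size is
    # arbitrary; the point is the train/test gap at 35 samples x 600 features.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import ShuffleSplit, cross_validate

    rng = np.random.RandomState(0)
    X = rng.randn(35, 600)                # ~35 observations, 600 features
    y = X[:, 0] + 0.1 * rng.randn(35)     # signal in one feature plus noise

    # "leave 20% out", repeated over several random splits
    cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
    scores = cross_validate(
        MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
        X, y, cv=cv, scoring="r2", return_train_score=True,
    )
    print("mean train R^2: %.2f" % scores["train_score"].mean())
    print("mean test  R^2: %.2f" % scores["test_score"].mean())

Expect the training R^2 to sit near 1.0 while the held-out R^2 comes out
far lower and noisy across splits; that gap is the overfitting in
question.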

On Sat, Jan 7, 2017 at 2:26 PM, Thomas Evangelidis <tevang3 at gmail.com>
wrote:

> Regarding the evaluation, I use the leave-20%-out cross-validation
> method. I cannot leave more out because my data sets are very small,
> between 30 and 40 observations, each with 600 features. Is there a limit
> to the number of MLPRegressors I can combine by stacking, given my small
> data sets?
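
A note on the stacking question (my sketch, not from the thread):
scikit-learn's StackingRegressor was only added in version 0.22, well
after this exchange, but it shows the trade-off compactly; the choice of
three base MLPs below is arbitrary:

    # Sketch only: StackingRegressor postdates this thread, and the number
    # of base estimators here is made up for illustration.
    from sklearn.ensemble import StackingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.neural_network import MLPRegressor

    estimators = [
        ("mlp%d" % i,
         MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=i))
        for i in range(3)
    ]
    # The final Ridge sees one out-of-fold prediction per base model, so
    # adding base models mainly adds meta-features; the binding constraint
    # stays the 30-40 rows each base MLP must learn from.
    stack = StackingRegressor(estimators=estimators, final_estimator=Ridge(),
                              cv=5)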
>
> On Jan 7, 2017 23:04, "Joel Nothman" <joel.nothman at gmail.com> wrote:
>
>>
>>> There is no problem, in general, with overfitting, as long as your
>>> evaluation of an estimator's performance isn't biased towards the training
>>> set. We've not talked about evaluation.
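
Joel's point can be sketched as nested cross-validation (again my
illustration, reusing the synthetic X and y from the earlier snippet):
any tuning happens in an inner loop, and only the outer test folds
contribute to the reported score, so the estimate stays unbiased however
badly the inner models overfit:

    # Sketch of unbiased evaluation via nested CV (illustration only;
    # reuses the synthetic X, y defined above).
    from sklearn.model_selection import (GridSearchCV, ShuffleSplit,
                                         cross_val_score)
    from sklearn.neural_network import MLPRegressor

    inner = ShuffleSplit(n_splits=5, test_size=0.2, random_state=1)
    outer = ShuffleSplit(n_splits=10, test_size=0.2, random_state=2)

    search = GridSearchCV(
        MLPRegressor(max_iter=2000, random_state=0),
        param_grid={"hidden_layer_sizes": [(10,), (50,)],
                    "alpha": [1e-4, 1e-2]},
        cv=inner,
    )
    # Only the outer folds' scores are reported, so hyperparameter tuning
    # cannot leak training data into the final estimate.
    scores = cross_val_score(search, X, y, cv=outer, scoring="r2")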