[scikit-learn] Contribution

Uri Goren uri at goren4u.com
Mon Jul 10 13:32:42 EDT 2017


Hi,
I'd like to implement the Markov clustering algorithm.
Any objections?
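For concreteness, here is a minimal NumPy sketch of the MCL iteration (expansion, inflation, re-normalization). The function name, parameters, and defaults are illustrative only, not taken from any existing implementation:

```python
import numpy as np

def markov_cluster(adjacency, expansion=2, inflation=2.0, n_iter=20):
    # Column-normalize the adjacency matrix into a transition matrix.
    M = adjacency / adjacency.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        M = np.linalg.matrix_power(M, expansion)  # expansion: flow spreads out
        M = M ** inflation                        # inflation: strong flows win
        M = M / M.sum(axis=0, keepdims=True)      # re-normalize columns
    return M

# Two disconnected triangles, with self-loops as the MCL paper recommends.
A = np.zeros((6, 6))
for block in ([0, 1, 2], [3, 4, 5]):
    for i in block:
        for j in block:
            A[i, j] = 1.0

flow = markov_cluster(A)
```

Clusters are then read off from the converged flow matrix: no probability mass crosses between the two blocks, and rows that retain nonzero mass act as attractors.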


On Jul 10, 2017 7:10 PM, "federico vaggi" <vaggi.federico at gmail.com> wrote:

Hey Gurhan,

sklearn doesn't really separate optimizers from the models they
optimize at the API level (except in a few cases). To make the package
friendlier to newer users, each model ships with excellent optimizer
defaults, and only in a few cases does it make sense to tweak the
optimization routine (for example, the SAGA solver when doing logistic
regression on a very large dataset).
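As a minimal sketch, selecting SAGA is just a matter of the `solver` parameter (the data here is tiny and purely illustrative; SAGA's benefit only shows up at scale):

```python
from sklearn.linear_model import LogisticRegression

# Toy, linearly separable data -- SAGA's advantage appears on large datasets.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

clf = LogisticRegression(solver="saga", max_iter=1000).fit(X, y)
score = clf.score(X, y)  # mean accuracy on the training data
```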

There is a fantastic library called lightning where the optimization
routines are first-class citizens:
http://contrib.scikit-learn.org/lightning/ - you can take a look there.
Note, though, that lightning focuses on convex optimization, so most of
its algorithms have provable convergence rates.

Good luck!

On Mon, 10 Jul 2017 at 09:05 Jacob Schreiber <jmschreiber91 at gmail.com>
wrote:

> Howdy
>
> This question and the one right after it in the FAQ are probably relevant re:
> inclusion of new algorithms:
> http://scikit-learn.org/stable/faq.html#what-are-the-inclusion-criteria-for-new-algorithms.
> The gist is that we only include well-established algorithms, and there is
> no end to those. I think it is unlikely that a PR with a cutting-edge new
> algorithm will get merged, as the scope of scikit-learn isn't so much "the
> latest" as "the classics." You may also consider writing a
> scikit-learn-contrib package that implements what you're interested in, in
> scikit-learn format, but external to the project. We'd be more than happy
> to link to it. If the algorithm becomes a smashing success over time, we'd
> reconsider adding it to the main code base.
>
> As to your first question, you should check out how the current optimizers
> are written for the algorithm you're interested in. I don't think there's a
> plug-and-play way to drop in your own optimizer the way many deep learning
> packages allow, unfortunately. You'd probably have to modify the code
> directly to support your own.
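To illustrate the point above with a small check (a sketch, not official guidance): scikit-learn estimators validate their `solver` parameter against a fixed list, so an unknown optimizer name is rejected at fit time rather than plugged in.

```python
from sklearn.neural_network import MLPClassifier

X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y = [0, 1, 1, 0]

raised = False
try:
    # 'my_optimizer' is a made-up name; valid choices are 'lbfgs', 'sgd', 'adam'.
    MLPClassifier(solver="my_optimizer").fit(X, y)
except ValueError:
    raised = True  # the unknown solver is rejected, not dispatched
```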
>
> Let me know if you have any other questions.
>
> Jacob
>
> On Mon, Jul 10, 2017 at 7:58 AM, Gürhan Ceylan <grhanceylan at gmail.com>
> wrote:
>
>> Hi everyone,
>>
>> I am wondering how I can use external optimization algorithms with
>> scikit-learn, for instance with neural networks
>> <http://scikit-learn.org/stable/modules/neural_networks_supervised.html#algorithms>,
>> instead of the built-in algorithms (Stochastic Gradient Descent, Adam, or
>> L-BFGS).
>>
>> Furthermore, I want to introduce a new unconstrained optimization
>> algorithm to scikit-learn; an implementation of the algorithm and the
>> related paper can be found here <https://github.com/sibirbil/PMBSolve>.
>>
>> I couldn't find any explanation of this in the contributing guide
>> <http://scikit-learn.org/stable/developers/contributing.html>. Do you have
>> a defined procedure for this kind of contribution? If not, how should I
>> start such a proposal/contribution?
>>
>>
>> Kind regards,
>>
>> Gürhan C.
>>
>>
>> _______________________________________________
>> scikit-learn mailing list
>> scikit-learn at python.org
>> https://mail.python.org/mailman/listinfo/scikit-learn
>>
>>
