Speeding up the implementation of Stochastic Gradient Ascent in Python

Ned Batchelder ned at nedbatchelder.com
Wed Jan 17 14:28:02 EST 2018


On 1/17/18 9:29 AM, leutrim.kaleci at gmail.com wrote:
> Hello everyone,
>
> I am implementing a time-dependent recommender system that applies BPR (Bayesian Personalized Ranking), where Stochastic Gradient Ascent is used to learn the model parameters. One iteration involves randomly sampling the quadruple (userID, positive_item, negative_item, epoch_index) n times, where n is the total number of positive feedbacks (i.e. the number of ratings given across all items). Since my implementation takes too long to learn the parameters (it requires 100 iterations), I was wondering whether there is a way to improve my code and speed up the learning process.
>
> Please find the code of my implementation at the following link (the update function, updateFactors, is the one that learns the parameters, and I suspect it is the one that needs improving to speed up learning):
> https://codereview.stackexchange.com/questions/183707/speeding-up-the-implementation-of-stochastic-gradient-ascent-in-python
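
For readers unfamiliar with BPR, one stochastic gradient ascent step of the kind described above can be sketched roughly as follows. Everything here (the sizes, the positive_feedback structure, the factor matrices P and Q, the hyperparameters) is illustrative, not taken from the linked code:

```python
import random
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (hypothetical, not from the linked code)
n_users, n_items, k = 50, 100, 8

# positive_feedback[u] = set of items user u gave positive feedback on
positive_feedback = {u: set(rng.integers(0, n_items, size=5).tolist())
                     for u in range(n_users)}

P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
lr, reg = 0.05, 0.01                           # learning rate, L2 weight

def one_update():
    """One BPR gradient-ascent step on a randomly sampled (u, i, j)."""
    u = random.randrange(n_users)
    i = random.choice(tuple(positive_feedback[u]))  # sampled positive item
    j = random.randrange(n_items)
    while j in positive_feedback[u]:                # sampled negative item
        j = random.randrange(n_items)

    # Copy the old values so all three updates use the same gradients.
    pu, qi, qj = P[u].copy(), Q[i].copy(), Q[j].copy()
    x_uij = pu @ (qi - qj)
    sig = 1.0 / (1.0 + np.exp(x_uij))  # sigma(-x) = d/dx ln sigma(x)

    # Ascend the BPR objective ln sigma(x_uij) minus L2 regularization.
    P[u] += lr * (sig * (qi - qj) - reg * pu)
    Q[i] += lr * (sig * pu - reg * qi)
    Q[j] += lr * (-sig * pu - reg * qj)
```

(The question's quadruple additionally carries an epoch_index for the time-dependent part; this sketch omits it.)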

It looks like you have lists that you are performing "in" and "remove" 
operations on.  I don't know how long these lists are, but sets will be 
much faster in general: membership tests and removals are O(1) on a set 
versus O(n) on a list.  You'll have to replace random.choice(...) with 
random.choice(list(...)), since you can't use random.choice on a set.  
That conversion might negate the benefit, but it's worth a try.
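
To illustrate the suggestion, here is a minimal sketch (the name candidate_items is made up for the example):

```python
import random

# Hypothetical pool of candidate item IDs, kept as a set rather than a list
candidate_items = set(range(10_000))

# "in" and "remove" are O(1) on a set, versus O(n) on a list
assert 42 in candidate_items
candidate_items.remove(42)

# random.choice() needs a sequence, so convert the set first;
# tuple(...) works just as well as list(...) here
picked = random.choice(tuple(candidate_items))
```

If the same pool is sampled from many times between modifications, converting once and reusing the sequence avoids paying the conversion cost on every draw.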

--Ned.
