Optimization: Picking random keys from a dictionary and mutating values

Carl Banks pavlovevidence at gmail.com
Thu May 29 02:55:46 EDT 2008


On May 28, 7:41 pm, blaine <frik... at gmail.com> wrote:
> Hey everyone,
>   Just a friendly question about an efficient way to do this.

Friendly advance reminder: in many optimization problems, the
objective function is far more expensive to calculate than the
optimizing procedure.  Your time is usually better spent optimizing
the objective function, or tuning the optimizer so that it needs
fewer evaluations.

But what the hey.

> I have
> a graph with nodes and edges (networkx is an amazing library, check it
> out!).  I also have a lookup table with weights of each edge.  So:
>
> weights[(edge1, edge2)] = .12
> weights[(edge2, edge5)] = .53
> weights[(edge5, edge1)] = 1.23
> weights[(edge3, edge2)] = -2.34
> etc.
>
> I would like to take this weight table and subject it to evolutionary
> mutations.  So given a probability p (mutation rate), mutate edge
> weights by some random value.   So in effect, if p is .25, choose 25%
> random edges, and mutate them some amount.  So:
> 1.  What's a good way to get the keys? I'm using a loop with
> random.choice() at the moment.

That's probably the best way.
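
For what it's worth, random.sample does this kind of selection without
replacement in one call, which also sidesteps the duplicate-key issue
I mention below.  A minimal sketch (the weights table here is just a
placeholder in the shape of yours):

import random

# placeholder edge-weight table in the shape of the one from your post
weights = {("edge1", "edge2"): 0.12, ("edge2", "edge5"): 0.53,
           ("edge5", "edge1"): 1.23, ("edge3", "edge2"): -2.34}

num = int(0.25 * len(weights))               # mutate 25% of the edges
chosen = random.sample(weights.keys(), num)  # distinct keys, no repeats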

You might be able to squeeze a little more speed out of it by using a
partial random shuffle as opposed to removing the item from the list.
This will minimize the amount of copying.  Here's an example (without
error checking):

import random

def partial_random_shuffle(lst, nshuffled):
    # Shuffle only the first nshuffled positions: position i gets an
    # element drawn uniformly from lst[i:], so the front of the list
    # ends up as a random sample of the whole list.
    for i in xrange(nshuffled):
        j = random.randrange(i, len(lst))
        lst[i], lst[j] = lst[j], lst[i]   # swap in place

The first nshuffled items of lst upon return will be the selected
items.  Note: This might also be slower than removing items.  It
should scale better, though.
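
For example, to pick your 25% with it (continuing with the weights
dict and num from the sketch above):

keys = weights.keys()
partial_random_shuffle(keys, num)
selected = keys[:num]   # a random sample of num distinct keys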


> 2.  Any comments on how to get a 'fair' mutation on an existing edge
> value?

random.normalvariate is usually a good choice for selecting random
real-valued increments.

val = random.normalvariate(val, sigma)

where sigma, the standard deviation, is the amount of spread you want
in each mutation.
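
Combined with the selected keys from the shuffle above (the sigma of
0.1 is an arbitrary placeholder, not a recommendation):

sigma = 0.1
for key in selected:
    weights[key] = random.normalvariate(weights[key], sigma)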


> I currently am doing something like this, which seems like it leaves
> something to be desired.
>
> import random
> weights = generateweights() # generates table like the one above
> p = 0.25
> v = random.betavariate(2, 10)
> num = int(v*len(weights)) # How many weights should we mutate?
> keys = weights.keys()
> for i in xrange(num):
>   val = random.choice(keys) # Choose a single random key
> weights[val] = weights[val]*(random.random()*5-1) # Is this a 'good'
> way to 'mutate' the value?

If you don't remove the chosen key from keys, there's a chance you'll
mutate the same key twice, which I doubt is what you want.
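
Something like this keeps your betavariate sizing but avoids the
repeats (the sigma of 0.1 is again a placeholder you'd want to tune):

import random

weights = generateweights()  # your function, building a table as above
v = random.betavariate(2, 10)
num = int(v * len(weights))  # how many weights to mutate
for key in random.sample(weights.keys(), num):  # no key chosen twice
    weights[key] = random.normalvariate(weights[key], 0.1)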


> This is an evolutionary search, so mutate in this sense relates to a
> genetic algorithm, perhaps a gradient descent?

A gradient descent method is not an evolutionary search and involves
no randomness (unless noise is added to the objective, which is one
way to attack a function with unimportant small-scale features; in
that case normalvariate would be the thing to use).
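
If you did want to try that noisy-objective trick, it's a small
wrapper; a sketch, where objective is whatever function you're
optimizing and sigma is the noise level, both placeholders:

import random

def noisy(objective, sigma):
    # Wrap an objective so each evaluation gets Gaussian noise added.
    def wrapped(x):
        return objective(x) + random.normalvariate(0.0, sigma)
    return wrapped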



Carl Banks


