[SciPy-User] using multiple processors for particle filtering

Robin robince at gmail.com
Tue May 25 17:16:47 EDT 2010


On Tue, May 25, 2010 at 5:39 PM, Andy Fraser <afraser at lanl.gov> wrote:
> I am using a particle filter to estimate the trajectory of a camera
> based on a sequence of images taken by the camera.  The code is slow,
> but I have 8 processors in my desktop machine.  I'd like to use them
> to get results 8 times faster.  I've been looking at the following
> sections of http://docs.python.org/library: "16.6. multiprocessing"
> and "16.2. threading".  I've also read some discussion from 2006 on
> scipy-user at scipy.org about seeds for random numbers in threads.  I
> don't have any experience with multiprocessing and would appreciate
> advice.
>
> Here is a bit of code that I want to modify:
>
>        for i in xrange(len(self.particles)):
>            self.particles[i] = self.particles[i].random_fork()

If the updates are independent and don't have to be done sequentially
you can use the multiprocessing.Pool interface which I've found very
convenient for this sort of thing.
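In case you haven't used it before, here is a minimal, self-contained sketch of the Pool.map interface (the `square` function and pool size are just placeholders for illustration):

```python
import multiprocessing

def square(x):
    # Must be a module-level function: Pool workers can only
    # receive picklable callables, not lambdas or bound methods.
    return x * x

if __name__ == "__main__":
    pool = multiprocessing.Pool(4)           # 4 worker processes
    results = pool.map(square, range(8))     # results come back in order
    pool.close()
    pool.join()
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

`pool.map` blocks until all items are processed and returns the results in the same order as the input, which makes it a drop-in replacement for a plain loop over independent work items.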

One thing to watch: Pool workers are separate processes, so if
random_fork modified the particle in place inside a worker, the change
would not be visible back in the parent process; also, a bound method
like self.update_particle can't be pickled to send to the workers.
The cleanest pattern is a module-level function that takes a particle
and returns the updated one... then you could do something like

def update_particle(particle):
    nv = numpy.random.standard_normal((N,))
    return particle.random_fork(nv)

p = multiprocessing.Pool(8)
self.particles = p.map(update_particle, self.particles)

this will distribute the update_particle calls across the pool's
worker processes, using all your cores (provided the updates are
independent).

I'm not sure if numpy.random is safe to use like this - forked worker
processes can inherit identical random-number-generator state from the
parent and then all draw the same "random" noise - so that would need
checking, but I hope this helps a bit...
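On the RNG question above: one common workaround (a sketch, not something from the original post - `draw_noise` is a hypothetical stand-in for the perturbation inside random_fork) is to reseed numpy's generator once in each worker via the Pool's initializer argument, so forked workers don't share the parent's state:

```python
import multiprocessing
import numpy

def reseed():
    # Runs once in each worker process at startup.
    # Calling seed() with no argument pulls fresh entropy from the OS,
    # so every worker gets its own independent RNG stream.
    numpy.random.seed()

def draw_noise(n):
    # Hypothetical stand-in for the per-particle perturbation
    # that random_fork(nv) would apply.
    return numpy.random.standard_normal((n,))

if __name__ == "__main__":
    pool = multiprocessing.Pool(4, initializer=reseed)
    samples = pool.map(draw_noise, [3] * 8)
    pool.close()
    pool.join()
    # With per-worker reseeding the draws are (almost surely) distinct.
    print(samples)
```

Without the initializer, every forked worker starts from a copy of the parent's generator state, so two workers handed the same task could produce byte-identical noise vectors.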

cheers

Robin

> Each particle is a class instance that represents a possible camera
> state (position, orientation, and velocities).  particle.random_fork()
> is a method that moves the position and orientation based on current
> velocities and then uses numpy.random.standard_normal((N,)) to perturb
> the velocities.  I handle the correlation structure of the noise by
> matrices that are members of particle, and I do some of the
> calculations in c++.
>
> I would like to do something like:
>
>        for i in xrange(len(self.particles)):
>            nv = numpy.random.standard_normal((N,))
>            launch_on_any_available_processor(
>                self.particles[i] = self.particles[i].random_fork(nv)
>            )
>        wait_for_completions()
>
> But I don't see a command like "launch_on_any_available_processor".
> I would be grateful for any advice.
>
> --
> Andy Fraser                             ISR-2   (MS:B244)
> afraser at lanl.gov                        Los Alamos National Laboratory
> 505 665 9448                            Los Alamos, NM 87545
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>


