PEP 249 - DB API question

James Mills prologic at shortcircuit.net.au
Tue Nov 4 19:27:55 EST 2008


On Wed, Nov 5, 2008 at 6:13 AM, k3xji <sumerc at gmail.com> wrote:
>
>> Try spawning a new process to run your query
>> in. Use the multiprocessing library. Your main
>> application can then just poll the db/query processes
>> to see if they a) are finished and b) have a result.
>>
>> Your application server can also c) kill long-running
>> queries that are "deemed" to be taking "too long"
>> and may not finish (e.g. Cartesian joins).
>
> Just thinking out loud...
>
> A more backward-compatible way to do that would be to have a
> pool of threads running the queries, with the main thread
> polling to see if the child threads are taking too long to
> complete? But wouldn't that be a nightmare from a performance
> point of view? You have a good reason to suggest
> multiprocessing, right? At least I could implement my
> critical queries with this kind of design, as there are not
> many of them.

I hate threads :) To be perfectly honest, I would
use processes for performance reasons, and if it
were me, I would use my shiny new circuits [1]
library to trigger events when the queries are done.
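
Something along those lines, as a very rough and untested
sketch (run_query, the sqlite3 connection and the 30-second
timeout are just placeholders for illustration, not anything
actually posted in this thread):

import time
from multiprocessing import Process, Queue

def run_query(sql, results):
    # Hypothetical worker: each process opens its own DB-API
    # connection (connections generally can't be shared across
    # processes) and pushes the rows back through a Queue.
    import sqlite3                      # stand-in for your real driver
    conn = sqlite3.connect("app.db")
    try:
        cur = conn.cursor()
        cur.execute(sql)
        results.put(cur.fetchall())
    finally:
        conn.close()

def query_with_timeout(sql, timeout=30.0):
    results = Queue()
    worker = Process(target=run_query, args=(sql, results))
    worker.start()

    deadline = time.time() + timeout
    while worker.is_alive() and time.time() < deadline:
        time.sleep(0.1)                 # a) poll until finished

    if worker.is_alive():               # c) deemed "too long": kill it
        worker.terminate()
        worker.join()
        return None

    worker.join()
    return results.get() if not results.empty() else None   # b) the result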

--JamesMills

[1] http://trac.softcircuit.com.au/circuits/

-- 
--
-- "Problems are solved by method"


