Queue.get_nowait() sometimes bogusly returns Empty
Geoffrey Talvola
gtalvola at nameconnector.com
Wed Jul 17 09:25:46 EDT 2002
Tim Peters wrote:
> [Geoffrey Talvola]
> > ...
> > I understand what you're saying. Would it make a difference if I told
> > you that for this application, it's OK if the "get_nowait_reliable"
> > operation blocks for "a little while"? In other words, I don't mind if
> > it blocks while 17 other threads manipulate the queue. What I care
> > about is that once the queue is no longer being manipulated in other
> > threads, I either get an Empty exception or an object out of the queue.
> > For my purposes, if the queue was empty at any point during the
> > operation then raising the Empty exception is OK.
> >
> > My requirements are a bit fuzzier than the strict no-blocking
> > requirement of Queue.get_nowait(), so maybe it's possible to
> > implement this.
> >
> > In case you're wondering, this is in Webware's WebKit application
> > server, where there is a pool of N threads and a queue of
> > (non-thread-safe) servlet instances acting as a cache. We only want to
> > create servlet instances when all existing instances are in use. When
> > a given thread needs a servlet, it basically does:
> >
> >     try:
> >         instance = cache.get_nowait()
> >     except Queue.Empty:
> >         instance = new_instance()
> >     # use the instance...
> >     cache.put(instance)
> >
> > The problem is that every once in a while get_nowait() returns Empty
> > when the queue isn't actually empty, so the size of the cache grows
> > slowly and unboundedly over time.
>
> How do you know that the unbounded growth is *due* to get_nowait()
> returning Empty at odd times? If service requests follow the usual
> sorts of lumpy distributions, the code you showed above simply will
> create an unbounded number of instances over time, since it caters to
> peak simultaneous usage as if resources were unbounded. Fiddling the
> Queue implementation isn't going to cure that, if so.
I left out some of the other details of Webware's app server. Suffice it to
say that there are N threads that can potentially be executing the code
above, so you'd expect that the cache will never contain more than N items.
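
The invariant Geoff describes can be sketched as follows. This is a minimal illustration (not Webware's actual code): N worker threads each take an instance from the cache or create one, use it, and return it. If get_nowait() only raises Empty when the queue is truly empty -- which it does in modern Python, where the spurious-Empty behavior discussed in this thread no longer exists -- then at most N instances can ever be created, since a miss implies all existing instances are held by the other threads.

```python
import queue
import threading

N = 8                   # number of worker threads (illustrative)
cache = queue.Queue()   # pool of reusable "servlet" instances
created = []            # every instance ever made (list.append is
                        # atomic under the GIL, so this is thread-safe)

def new_instance():
    obj = object()
    created.append(obj)
    return obj

def worker():
    for _ in range(1000):
        try:
            instance = cache.get_nowait()
        except queue.Empty:
            instance = new_instance()
        # ... use the instance ...
        cache.put(instance)

threads = [threading.Thread(target=worker) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each thread holds at most one instance at a time, and a new one is
# built only on a genuine miss, so the pool is bounded by N.
assert len(created) <= N
```

With the old, unreliable get_nowait(), the final assertion could fail: a bogus Empty creates an extra instance even though one was sitting in the queue, which is exactly the slow unbounded growth Geoff reports.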
>
> > But I'd prefer for the cache to contain at most N instances, where N
> > is the number of threads.
>
> In that case, why not use a bounded Queue (the Queue constructor has
> an argument for this purpose) of maximum size N, and use a blocking
> get? That gives up the illusion that resources are unbounded, but the
> heart of your complaint is that writing code as if resources were
> unbounded leads to unbounded resource usage <wink>. If your threads
> are returning instances correctly to the queue, then the block will
> never wait long -- unless it's really the case that N instances aren't
> actually enough to service the threads.
But I don't want or need blocking -- the resource throttling is handled
elsewhere in the app server.
And I don't want to have to create all N instances on startup -- I'd
rather create instances only as needed, but re-use instances once they
have been created. That reduces startup time, memory consumption, etc.
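
That combination -- no instances at startup, lazy creation up to a hard cap of N, and reuse thereafter -- can be sketched with a counter guarded by a lock in front of the queue. The class and names here (ServletPool, factory) are illustrative, not Webware's API: a miss creates a new instance only while fewer than N exist; once the cap is reached, callers fall back to a blocking get and simply wait for a returned instance.

```python
import queue
import threading

class ServletPool:
    """Lazily-populated pool holding at most max_size instances."""

    def __init__(self, factory, max_size):
        self._factory = factory      # zero-argument callable building one instance
        self._max_size = max_size
        self._created = 0            # instances built so far, guarded by _lock
        self._lock = threading.Lock()
        self._queue = queue.Queue()  # idle instances available for reuse

    def get(self):
        # Fast path: reuse an idle instance if one is available.
        try:
            return self._queue.get_nowait()
        except queue.Empty:
            pass
        # Miss: build a new instance only while under the cap.
        with self._lock:
            if self._created < self._max_size:
                self._created += 1
                return self._factory()
        # All max_size instances exist and are in use; wait for one.
        return self._queue.get()

    def put(self, instance):
        self._queue.put(instance)
```

For example, with `pool = ServletPool(object, 2)`, two gets build two instances; after one is put back, a third get reuses it rather than creating a third, so `pool._created` stays at 2.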
- Geoff