Python for air traffic control?

Russ 18k11tm001 at sneakemail.com
Thu Jul 5 03:56:40 EDT 2001


Carlos Ribeiro <cribeiro at mail.inet.com.br> wrote in message news:<mailman.994199197.7049.python-list at python.org>...
> Many of the concerns raised on the group are related to memory: 
> allocation/deallocation and garbage collection, which can be a 
> plentiful source of bugs. I have some specific suggestions for dealing 
> with them, which may or may not be applicable in your case.
> 
> - You can avoid allocation/deallocation in some languages by using static 
> structures, or by preallocating as much memory as possible. For example, 
> you could preallocate a vector to hold all of the aircraft data. In this 
> case you would have a maximum limit hardcoded in the software, but that is 
> not as bad as it may seem, because the limit is *deterministic*. One of 
> the problems with relying on dynamic allocation is that you never know 
> when it is going to fail, because it depends on how much memory has been 
> allocated for other purposes. This technique can't be easily applied in 
> Python due to the nature of the language. It is still possible in some 
> particular cases, but not as extensively as in C/C++.

As I wrote in a recent post, I used this technique extensively in a
real-time C++ application I worked on a few years ago (an integrated
GPS/INS autoland system). We had an update rate of a few milliseconds,
so we couldn't afford much memory management after startup. It works
great as long as you have enough memory, and you usually do these
days. Our update rate for this ATC application is on the order of a
few seconds, but I will still be disappointed if I can't use the same
technique in Python. I just don't like the idea of the OS interrupting
at what could be an inopportune time.
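
Something like this rough sketch is what I have in mind for the Python
side: build a fixed pool of track objects at startup and only update them
in place afterwards. The field names and the MAX_TRACKS limit are made up,
and Python will of course still allocate small objects (new floats, etc.)
behind the scenes, so this limits rather than eliminates dynamic
allocation.

    # Preallocate a fixed, deterministic pool of track records at startup.
    MAX_TRACKS = 500            # hardcoded limit, made up for the example

    class Track:
        def __init__(self):
            self.in_use = False
            self.callsign = ""
            self.lat = 0.0
            self.lon = 0.0
            self.alt = 0.0

    # Every Track object is created once, before the real-time loop starts.
    track_pool = [Track() for i in range(MAX_TRACKS)]

    def acquire_track(callsign):
        """Grab a free slot; fails deterministically when the pool is full."""
        for t in track_pool:
            if not t.in_use:
                t.in_use = True
                t.callsign = callsign
                return t
        raise RuntimeError("track pool exhausted (MAX_TRACKS = %d)" % MAX_TRACKS)

    def release_track(t):
        """Return a slot to the pool; the object itself is never freed."""
        t.in_use = False

    def update_track(t, lat, lon, alt):
        # Overwrite fields in place instead of building new objects.
        t.lat = lat
        t.lon = lon
        t.alt = alt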

> - Don't rely on a long-running process for everything. Use multiple 
> short-running processes for batch-style tasks, and a lightweight dispatcher 
> to coordinate them. The batch-style tasks must be run in separate 
> interpreters. While the performance penalty may not be acceptable for many 
> real-time tasks, you have one specific advantage: every new instance of 
> the interpreter always starts with a fresh memory heap. This will allow 
> you to avoid fragmentation-related problems, which can happen on any 
> platform after a long enough run (not to mention Win98, where it *always* 
> happens after a short time).

Interesting idea. In actual operation, I'll bet we'll restart at least
once per day, if not more often.
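
Roughly what I picture for the dispatcher, just to make sure I understand
the idea: each batch job runs in its own freshly spawned interpreter, so it
gets a clean heap every time. The task script names below are made up, and
a real dispatcher would obviously need proper scheduling and error handling.

    import os
    import sys
    import time

    # Hypothetical batch-style jobs; each runs to completion and exits.
    BATCH_TASKS = [
        "archive_old_tracks.py",
        "generate_reports.py",
    ]

    def run_task(script):
        """Run one task in a separate Python interpreter; return its exit status."""
        return os.spawnl(os.P_WAIT, sys.executable, sys.executable, script)

    def dispatcher_loop(interval=300):
        """Kick off the batch tasks every `interval` seconds."""
        while 1:
            for script in BATCH_TASKS:
                status = run_task(script)
                if status != 0:
                    sys.stderr.write("task %s exited with status %d\n"
                                     % (script, status))
            time.sleep(interval)

That way the long-running real-time process only pays the cost of spawning,
and any heap fragmentation is confined to interpreters that exit after each
run.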

Thanks for your insights.

Russ


