NEWBIE: Extending Python with C

Bengt Richter bokr at oz.net
Mon Nov 11 13:57:12 EST 2002


On Sun, 10 Nov 2002 21:59:35 GMT, engsol at teleport.com wrote:

>On 10 Nov 2002 06:17:44 GMT, bokr at oz.net (Bengt Richter) wrote:
>
>>On Sat, 09 Nov 2002 20:15:31 GMT, engsol at teleport.com wrote:
>>
>>>On 9 Nov 2002 09:54:01 GMT, bokr at oz.net (Bengt Richter) wrote:
>>>
>>>>On Sat, 09 Nov 2002 03:48:35 GMT, engsol at teleport.com wrote:
>>>>
>
>>>One example of the legacy requirement is: we have a custom plug-in card which needs to be
>>PCI? All done via IO ports or has DMA? Uses interrupts? Time-critical parts?
>>
>>>loaded with data, then triggered to output the data stream to the system under test. The
>>>driver for the card works well, and it'd be nice to not have to rewrite it. The source can
>>How do you use "the driver" now? Are you saying "driver" just in the loose sense of any software
>>that lets you use some hardware in some way? Or do you really mean "driver" as in developed using
>>DDK or equivalent and put in C:\WINNT\system32\drivers\whizzomatcard.sys? Or a privileged app?
>>
>>>be recompiled as a DLL(?)  module...I think.  Or at the least as a library header
>>Why do you think you want to do that? (I'm not suggesting it's a bad idea, I'm just wondering
>>what leads you to think you may need to do that). How do you currently operate, and how do
>>you hope to operate, at the user interface level?
>>
>>>file..again..I think.
>>>
>>It's still hard (for me anyway) to tell what your situation is.
>>
>>Regards,
>>Bengt Richter
>
>
>LOL...this is a bit embarrassing....but here's more detail. Please keep the groans to a
>minimum...:)
>
>The test application is written in C. It's a DOS 6.22 application, compiled using Borland
>3.0. We've tried newer versions, but they won't compile our source as is. The source is
>3/4 million lines, including comments, blank lines, dead code, etc., and the executable is
>about 1 meg. It uses COM1, COM2 and COM3 to "talk" to the test fixture, plus it talks to a
>couple of in-house designed and built ISA cards referred to above. One card's output is a
>pair of BNC coax connectors which in turn are connected to the test fixture. The other
>custom ISA card is attached to the test fixture using a 37 conductor ribbon cable.
>
>The application provides a console-like user interface, command line style. The app has
>the facility to run 'scripts', loosely based on a C-like syntax, which is how we write our
>test scenarios to automate product software testing. The app also has "drivers" to run the
>cards, IO ports, user interface, etc.
>
It sounds to me like you have several kinds of analysis ahead, but the first thing is to decompose
the overall problem into pieces that don't have too much to do with each other.

E.g., critical interrupt service latency concerns are (or should be) separate from
worries about how to train new employees to edit test scripts.

I would guess you have to decide how long you need to keep the current thing going with legacy
OS and tools (you *do* have well-marked CDs with 100.00000% of the sources **and** tools and scripts
**and** OS stuff you need to recover if the building washes out to sea, right ;-).

Second and related, your hardware probably has an evolution path of its own. If it's about to
take a radical fork requiring much rework of the legacy software, you will be in a tough spot
deciding how much more to invest in the legacy stuff. If you can get away with one more patchwork
mod, you're probably going to get results faster by doing that than by taking on a lot of software
design and the various little learning curves that add up and mess with schedules. (Incidentally,
that economic dynamic is one of the fundamental causes of bloatware, IMO.)

It sounds like timing will be critical to how you build the next generation of your test system,
but if it's decomposed appropriately, the most critical aspects could be insulated from the general
operating environment. E.g., there are very critical hard real-time requirements in burning CDs,
but the hardware is designed with huge buffering to keep those requirements mostly away from PC software.
Perhaps running tests can be largely isolated like that, so that hard real time requirements don't
affect your ability to use Python in parts of the system that don't have any such hard RT requirements.
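
To make the buffering idea concrete in software terms: the usual shape is a single-producer/
single-consumer ring buffer, where the non-realtime side fills ahead at its leisure and the
time-critical side drains at its fixed rate. A toy C sketch (all names invented here, sizing
and interrupt-safety details hand-waved):

    #include <stdio.h>

    #define RING_SIZE 256                  /* capacity is RING_SIZE-1 bytes */
    static unsigned char ring[RING_SIZE];
    static volatile unsigned head;         /* advanced only by the slow side */
    static volatile unsigned tail;         /* advanced only by the RT side */

    static int ring_put(unsigned char b)   /* non-realtime producer */
    {
        unsigned next = (head + 1) % RING_SIZE;
        if (next == tail)
            return 0;                      /* full -- producer just waits */
        ring[head] = b;
        head = next;
        return 1;
    }

    static int ring_get(unsigned char *b)  /* time-critical consumer */
    {
        if (tail == head)
            return 0;                      /* empty -- that's an underrun */
        *b = ring[tail];
        tail = (tail + 1) % RING_SIZE;
        return 1;
    }

    int main(void)                         /* trivial smoke test */
    {
        unsigned char b;
        ring_put(0x55);
        while (ring_get(&b))
            printf("drained %02x\n", b);
        return 0;
    }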

To see what your options are going to be, I think you will have to get a very clear understanding
of what's what, timing-wise. You should be able to identify every parallel thread of activity in
your whole system, including hardware, software, and human activity, and be able to draw parallel
timelines across the page for all of them, mark events, and draw the triggering/dependency arrows
and crosshatched tolerance areas, like the chip guys do.

Obviously, you will have different things applying during overnight test runs, vs. whatever interactive
monitoring and control modes you may have, vs. data archiving vs. report generation vs. offline script
editing and verifying, etc. etc. The key is to identify all the elements affecting critical operation,
and get the irrelevancies sidelined.

If you have legacy custom hardware designed by people with separate money and scheduling concerns
from the software people, you may have legacy software that is doing stuff that really should have
been done in hardware, and to tighter timing constraints than necessary (e.g., someone forgot to latch
a status signal so it can be read a little longer after the interrupt? What determines how long a board
reset line is held?) Things like that should show up in a detailed analysis.

Once you know what your critical time constraints are, you can select an OS environment that can
handle them. BTW, a dumb polling loop in a DOS program on a dedicated machine is hard to beat, because
CPUs are so fast nowadays. The gray area where an interrupt context switch costs about the same as a full
polling loop has contracted to a very narrow time range. I'm guessing that a simple loop (or some state
machine version that chops the spaghetti in a little more organized fashion) is at the bottom of your DOS
test running environment, with interrupts doing minimal critical stuff and setting state that can be
polled.
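
In sketch form, the bottom of such an environment usually looks about like this (pure invention
name-wise, but it shows the ISR-sets-flag / loop-polls-flag division of labor):

    #include <stdio.h>

    /* Flags set by ISRs, polled and cleared by the main loop.  'volatile'
       keeps the compiler from caching them in registers across the loop. */
    static volatile int tick_pending;
    static volatile int uart_pending;

    static void service_tick(void) { /* step the timed output sequence */ }
    static void service_uart(void) { /* drain received serial bytes */ }

    int main(void)
    {
        long cycles = 1000000L;            /* stand-in for "until test done" */
        while (cycles-- > 0) {
            if (tick_pending) { tick_pending = 0; service_tick(); }
            if (uart_pending) { uart_pending = 0; service_uart(); }
            /* non-critical idle work goes here: screen, keyboard, logging */
        }
        return 0;
    }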

I can't believe that a large percentage of that 3/4 million lines of code is concerned with time-
critical stuff (unless it's huge data tables for sequencing out timed signal patterns or parsing incoming
ones, but data ought to be easy to port).

So what do the timing diagrams show about what parts are involved and what parts can be factored out?
And which parts cause the compilation problems in migrating to a newer compiler? Do you have
hard-coded assembly looking at data structures that change for newer support libraries? Is it easy
to identify where the problems are, or (shudder) are there such potential unsafe/unportable-design
traps all over?

Have you considered an embedded SBC-based controller pod to do the critical timing stuff,
communicating with it over FireWire or USB or fast Ethernet or whatever meets your bandwidth/QoS
requirements (which are ... ?!)? A pod with a well-supported PC interface would let you run human
interface stuff from laptops or PCs on the bench or whatever, and presumably let you use Python
a lot without too many worries (not saying zero ;-). This could be totally blue-sky or not, depending
on your situation, people, budgets, market, etc.

An interesting site re embedded linux products and software is

    http://www.linuxdevices.com/


>As background, the app began 10-12 years ago as a simple thing, then as time went on,
>became larger and larger as more capability was added. Concurrently, the products under
>test became faster, more complicated, etc. We're now to the point that maintenance is
>almost a full time job, we're out of memory (lots of page swapping/overlays), speed is an
I presume you are swapping/overlaying from files in a RAM disk file system in extended
memory? But, yuck anyhow.

>issue (we really need an RTOS....events of interest are sub-millisecond now), background
>sequencers are at their limits (threads would be nice), on and on. Plus it locks up a lot
How many parallel state sequences are you managing? (comes back to the timing diagrams).

>these days, which is not good when trying to run a two day test suite over the weekend.

>
>Soooo what is the new tool to look like? Which platform/OS is the "right" one? Which
Maybe it's not a matter of "*the* tool" and "*the* platform" -- i.e., maybe it's a few
connected systems each doing what they're best at.

>script language would provide the testers with what they need? Having to compile scripts,
>if it were C or C++, doesn't seem like a good idea, and not very newbie friendly. Perl,
>while powerful, is a bit obtuse. TCL is a bit limited, although named pipes are handy.
>Plus, "bread-boarding" quickie tests from the command line would be awkward at best. How
Since you are contemplating what language to use to express tests in, have you looked at
what standards there are? I.e., I think there are proprietary test languages, and you have
something going now that apparently does the job, and there are programming languages, Python
being the premier choice, and there may be something else. E.g., have you looked at SMIL?
It's used to specify the parallel timed activity required for multimedia using XML syntax,
which leverages a bunch of available tools and docs and programmer familiarity. Would your
current scripting language be easy to translate to/from other representations? For SMIL, see

    http://www.w3.org/AudioVideo

In a way, preparing and running real time tests would seem to have a lot of parallels with
preparing and running a multimedia show. There are disparate media and devices to control
in a tightly coordinated fashion, and there are offline editing activities that have no real
time demands at all, and others where glitches and dropouts are acceptable albeit annoying.

>best to implement a user GUI, and still maintain a command line interface?  We're in the
>exploration mode, as it were, to answer some of these questions.
>
I still get too much of a monolith sensation. I can't believe there aren't separate activities
best handled according to their separate natures and requirements.

>After a lot of reading and web searching, we discovered Python. I'd done some tcl/tk, and
>while it was fun, it didn't seem to be a suitable script language for inexperienced test
>people. But Python does. I've been writing routines in Python, just to get a feel for the
>difficulty in implementing some "bread and butter" functions. Guess what...I was amazed at
>how easy it was to perform rather complicated encoding schemes in Python...1-2 pages of
>code... as compared to doing it in C, which takes 3-6 pages of code. As another "test", I
>also wrote a Python routine to parse test log files (big, and lots of them, scattered in
>different directories...thanks, glob), and make a formatted CSV file. Then hand it off to
>Access for report generation. Piece of cake...gotta love them REs...:) That's why I'm a
>convert. Then I wondered about IO port access....not much mention of that in the Python
>docs...sockets, yes....direct control of the UARTs, no. But thanks to Mark Hammond and
>Andy Robinson's book re Win32 API...that looks do-able with no particular tricks.
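FWIW, underneath those wrappers it's the plain Win32 file API. A bare-bones C version, error
handling mostly trimmed and the 9600-8-N-1 settings just picked for illustration:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DCB dcb = {0};
        HANDLE h = CreateFile("COM1", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "can't open COM1 (error %lu)\n", GetLastError());
            return 1;
        }
        dcb.DCBlength = sizeof dcb;
        GetCommState(h, &dcb);             /* start from current settings */
        dcb.BaudRate = CBR_9600;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        SetCommState(h, &dcb);
        /* from here, ReadFile()/WriteFile() on h move serial data */
        CloseHandle(h);
        return 0;
    }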
>
>Then the thought occurs....will we be forced to write some things in C/C++? ISRs? RTOS
>interface? Driving the custom cards? Timers? I don't know, but it's possible there will be
>some tasks which will benefit from extending Python. If so, what's involved? How do you do
>that...exactly? This is why I posted the request for help/info to this great newsgroup. I
>need to figure out the process of extending Python...if it's needed.
>
>To end my rambling...the answer to the driver question above has answered itself, I think.
>Why would I want to port a 16-bit DOS module to NT/2K/Linux, as is? Thanks to the
<~OT>
OTOH, I was surprised how good the DOS emulation under windows is if your DOS program does
everything according to MS rules (i.e., you don't set interrupt vectors in low memory directly:
you use BIOS system calls, etc.) I ran across an old DOS program I had written in MASM 6.11
and it used BIOS interrupts for i/o, stole several interrupt vectors and processed interrupts
itself, accessed text mode video card memory directly to do a text quasi-GUI with pop-up
windows and navigation through various input slots etc., and also spawned another program
and captured results and *all* those ancient interfaces worked when I just double click started
it with windows explorer. I could hardly believe it. The only thing was the DOS console window was
a little squashed top to bottom compared to hardware text mode, but all the colors and popups with
their drop shadows and everything worked. The cursor blinked and moved properly. Everything.

I'm not saying your app would run as is under NT, but it would be interesting to see what
it would do with a hella fast processor and the right .pif parameters. I wouldn't waste a lot
of time on it tho ;-)

Of course for my app there was no tight timing to worry about, and what was going on behind
virtualized interrupts and video card memory access being translated to effects in an NT console
window must have involved a *lot* of cycles. But I was impressed, say what you will about MS.
Of course, I read that you could *boot* a foreign floppy on a virtual machine under OS/2 control
and have it execute in a virtual environment. I wonder if you could boot linux ;-)
</~OT>

>discussion on this helpful newsgroup..I don't think I do. But I still need to understand
>the process of adding Python modules, when to use DLLs if it makes sense, how to expose
>them to Python, how to use them, etc. I need the "process"...I can figure out the code, I
>hope...:)
>
There is good reading at

    http://www.python.org/doc/current/ext/ext.html

and

    http://www.python.org/doc/current/api/api.html

Or if you have MSVC++6.x and want to run an example real quick,
download the 2.2.2 sources and follow directions in

    D:\Python-2.2.2\PC\example_nt\readme.txt

(your path may vary).

For 2.2.2 they did a really nice job of setting up workspace and project files
for everything. (Attaboys to all involved!)

You can take a C/C++ program and link in the interpreter and have it do Python things under
C/C++ control, or you can run the interpreter and have it access C/C++ code in custom extension
modules, or you can mix the two, with each environment accessing things defined in the other.
You have a lot of options.
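
To give the flavor of the extension side, a complete (if useless) 2.2-style module is only about
this big -- module and function names are made up here, and the actual port access is stubbed out,
so treat it as a shape, not a recipe:

    #include <Python.h>

    /* portio.read_port(addr) -> int -- hypothetical wrapper; the real
       hardware access would go where the stub comment is. */
    static PyObject *
    portio_read_port(PyObject *self, PyObject *args)
    {
        int addr, value = 0;
        if (!PyArg_ParseTuple(args, "i", &addr))
            return NULL;                   /* exception is already set */
        /* value = your_driver_read(addr); */
        return Py_BuildValue("i", value);
    }

    static PyMethodDef portio_methods[] = {
        {"read_port", portio_read_port, METH_VARARGS,
         "read_port(addr) -> byte value (stubbed in this sketch)"},
        {NULL, NULL, 0, NULL}              /* sentinel */
    };

    /* Python looks for initportio() when you do 'import portio' */
    DL_EXPORT(void)
    initportio(void)
    {
        Py_InitModule("portio", portio_methods);
    }

Compile that to portio.pyd (or portio.so), put it where Python can find it, and
'import portio; portio.read_port(0x2f8)' works like any other module.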

But it sounds to me like your critical question is going to be timing, and factoring the
system to separate out the critical parts. BTW, if you want to do things with tight timing
on Windows, I think you will want to look into how multimedia is supported by the OS. I haven't
done any MM work to speak of, but maybe the game people can advise you.
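
For instance, the Win32 performance counter gets you sub-millisecond *measurement*, at least
(getting scheduled that precisely is another story). A quick sketch:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);  /* counter ticks per second */
        QueryPerformanceCounter(&t0);
        Sleep(1);                          /* stand-in for the event of interest */
        QueryPerformanceCounter(&t1);
        printf("elapsed: %.3f ms\n",
               (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart);
        return 0;
    }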

>(Now, if I can just find a good COM book per Alex's suggestions.....)
>
A lot to think about. Good luck ;-)

Regards,
Bengt Richter


