auto-debugging features?

Quinn Dunkan quinn at hork.ugcs.caltech.edu
Sun Apr 8 03:52:19 EDT 2001


On Sat, 7 Apr 2001 16:33:08 -0400, Steven D. Arnold <stevena at permanent.cc>
wrote:
>Hey folks,
>
>Throwing this out for consideration and criticism.
>
>I'd like a generic system-wide method of logging the activities of my
>programs for debugging purposes.  If I write and distribute a program, I'd
>like to be able to allow the user to set the debug level to a variety of
>different places, perhaps through an interface I provide, resulting in a log
>of the program's activity that can be parsed by any program that can use the
>debugging info protocol.  The log would probably be in human-readable text
>and would simply be parsed by a more sophisticated program intended to help
>with reviewing the log information.
>
>I'd like to get all this without cluttering up my code with print statements,
>or worse yet, `if' statements, a la:
>
>    if (g_debuglevel > 5):
>        print "[NOTICE/reports.cell.print_cell] Called function \"print_cell\" with params:"

So define a function.  I have a 'dprint(msg, prio=DEBUG)' function I use.  A
flick of a switch and it can log to the console, to a file, to the syslog,
email me ERROR-level messages, or whatever.

It would be easy to have it print the calling function or a traceback,
courtesy of the traceback module.
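
For instance (a sketch: traceback.extract_stack() ends with the current frame,
so if dprint() calls this helper, [-3] is whoever called dprint):

import traceback

def caller():
    # (filename, line number, function name, source text) of dprint's caller
    filename, lineno, funcname, text = traceback.extract_stack()[-3]
    return '%s:%d in %s' % (filename, lineno, funcname)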

My experience with "automatic" debugging info is that I know what's
interesting much better than the machine, so I put in explicit dprint()s at
important places.

>Something that would be cool in such a debugging info utility:
>
>	- Notification on entry to a function.
>	- On entry to a function, list the parameters and values for the params.

See sys.settrace and pdb.py.
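
A trace function that does both of those is only a few lines (a sketch;
sys.settrace installs it globally, and it gets called on every function entry):

import sys

def report_calls(frame, event, arg):
    if event == 'call':
        code = frame.f_code
        # the first co_argcount local names are the function's parameters
        names = code.co_varnames[:code.co_argcount]
        params = ', '.join(['%s=%r' % (n, frame.f_locals.get(n)) for n in names])
        sys.stderr.write('entering %s(%s)\n' % (code.co_name, params))
    return None    # returning None means: don't also trace lines inside the call

sys.settrace(report_calls)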

>	- When data is changed, note the before and after values.
>	- When an if statement is executed, list the expressions that were
>	compared and the values of each.

I'd put dprint()s in the state-changing methods of your interesting objects.
Logging every assignment will probably bury you in noise.  You might be able
to do this stuff with bytecodehacks, dunno.
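
For the before/after values on just the objects you care about, something like
this works (a sketch, using the dprint() above; Cell is made up):

class Cell:
    # report before/after values whenever an attribute is assigned
    def __setattr__(self, name, value):
        old = self.__dict__.get(name, '<unset>')
        dprint('%s.%s: %r -> %r' % (self.__class__.__name__, name, old, value))
        self.__dict__[name] = value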

>It 'feels' to me that the most general way to solve this problem would be to
>introduce a series of meta-magic functions in python that would trigger when
>important events occurred; but this would only happen if some kind of flag
>were turned on.  So, for example, a __func__ meta-magic method could be
>triggered whenever any method in a class were called.  Parameters to __func__
>would be the method name and the entire parameter list.  The coder of
>__func__ could determine exactly how the method would react to any function
>being called -- anything from no action, to writing a line to a log file, to
>sending a message to a network host.

You could do this with a proxy object.  Or just use sys.settrace.
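
The proxy version might look like this (a sketch; it only sees calls that go
through the proxy, and dprint() is the function from above):

class TraceProxy:
    # wrap any object; method calls made through the proxy get logged first
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr
        def logged(*args, **kw):
            dprint('%s.%s%r' % (self._target.__class__.__name__, name, args))
            return attr(*args, **kw)
        return logged

So instead of handing out the real object you hand out TraceProxy(real_object),
and everything it's asked to do ends up in the log.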

>True, __getattr__ would handle some aspects of this, but I would prefer to
>keep __getattr__ clean and oriented around a single purpose -- implementing
>the class of which it is a part.  This suite of meta-magic functions would
>hopefully cover every meta-action a program could take and would therefore
>allow for very sophisticated debugging applications.  If we had a __startup__
>meta-magic function, we could write a program that would attach, when the
>program ran, to a remote debugger.  Since each action the program could take
>would be represented by a meta-magic function, we could then remotely step
>through the program.  We could support exactly the level of debugging we
>desired and would not be locked into having either too much or too little
>information.  This would be a miracle for CGI debugging, for example.

We already have a debugger, so you're mostly there.  It can't attach to a
running program, but it should be possible to have your program debug itself
by writing a signal handler (or polling mechanism, or something) to install
the debugger's trace function. Then you just need someone to talk to, by
opening a pipe or socket or something.
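
The self-debugging half is short, at least on Unix (a sketch; pdb here still
talks over stdin/stdout, so the pipe or socket plumbing is the part you'd have
to write yourself):

import signal, pdb

def drop_into_debugger(sig, frame):
    # stop the program in pdb at whatever frame it was executing
    pdb.Pdb().set_trace(frame)

signal.signal(signal.SIGUSR1, drop_into_debugger)
# then, from another terminal:  kill -USR1 <pid>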

I haven't felt the need to do anything like this, but it shouldn't be too
hard.

I've occasionally thought a Python "core dump" might be useful: just wrap the
whole program in an exception handler that jots down the state of the system.
Just like a more detailed stack trace.  Good luck trying to figure out how to
automatically recreate that state, though :)  I don't think you could freeze a
running process for debugging, because once the fatal exception fires, it's
too late and you've jumped out of the stack.
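
The "jots down the state" part is easy enough (a sketch; it dumps each frame's
locals after the usual traceback):

import sys, traceback

def dumping_excepthook(exc_type, exc_value, tb):
    traceback.print_exception(exc_type, exc_value, tb)
    # walk the traceback and record each frame's local variables
    while tb is not None:
        frame = tb.tb_frame
        sys.stderr.write('locals in %s:\n' % frame.f_code.co_name)
        for name, value in frame.f_locals.items():
            sys.stderr.write('    %s = %r\n' % (name, value))
        tb = tb.tb_next

sys.excepthook = dumping_excepthook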

But I have yet to see an error where the automatic stack trace didn't give me
enough info.

I think your best bet is to not make your CGI scripts so buggy in the first
place :)  I find it easier to catch bugs by testing small chunks than by
running the whole thing and spending lots of time with a debugger when
something blows up.
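
By testing small chunks I mean things like this (print_cell here is made up,
standing in for whatever small piece you just wrote):

import unittest

def print_cell(value, width=8):
    # the kind of small function that's easy to test in isolation
    return str(value).rjust(width)

class PrintCellTest(unittest.TestCase):
    def test_pads_to_width(self):
        self.assertEqual(print_cell(42, 5), '   42')
    def test_long_values_are_not_truncated(self):
        self.assertEqual(print_cell('abcdef', 3), 'abcdef')

if __name__ == '__main__':
    unittest.main()

Run that every time you touch the function and there's a lot less to go looking
for when the whole CGI script blows up.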


