[ python-Bugs-1202533 ] a bunch of infinite C recursions

SourceForge.net noreply at sourceforge.net
Tue Nov 7 00:27:41 CET 2006


Bugs item #1202533, was opened at 2005-05-15 16:43
Message generated for change (Comment added) made by bcannon
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1202533&group_id=5470

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Armin Rigo (arigo)
Assigned to: Armin Rigo (arigo)
Summary: a bunch of infinite C recursions

Initial Comment:
There is a general way to cause unchecked infinite recursion at the C level, and I have no clue at the moment how it could be reasonably fixed.  The idea is to define special __xxx__ methods in such a way that no Python code is actually called before they invoke more special methods (e.g. themselves).

>>> import new, operator
>>> class A: pass
>>> A.__mul__ = new.instancemethod(operator.mul, None, A)
>>> A()*2
Segmentation fault

----------------------------------------------------------------------

>Comment By: Brett Cannon (bcannon)
Date: 2006-11-06 15:27

Message:
Logged In: YES 
user_id=357491

I was waiting on Armin to do a final review, but if you are
willing to sign off on it I can check it in.

And I agree that it isn't worth backporting to 2.5 because
of the hassle.

----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-11-05 16:41

Message:
Logged In: YES 
user_id=33168

What's the status of this patch?  It seems like it's ready
to check in.  Since this change isn't backwards compatible,
should we do anything to fix these in 2.5?  For example, bug
1590036.  We could add recursion checks one by one, but I'm
tempted to ignore 2.5.  It seems too little a gain to fix
2.5 and 2.6 in incompatible ways for such unlikely bugs as
these.

----------------------------------------------------------------------

Comment By: Brett Cannon (bcannon)
Date: 2006-06-30 17:46

Message:
Logged In: YES 
user_id=357491

OK, new patch, with the check in PyObject_Call() (and thus
the slot_tp_call() recursion check removed), the addition of
PyExc_RecursionErrorInst, which is an instance of
RuntimeError with a reasonable message already set, and
PyErr_NormalizeException() checking the recursion depth
directly and using PyExc_RecursionErrorInst when the limit
is hit.

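Roughly, the normalization side of that looks like the
following (a sketch only; the exact checked-in code may
differ in detail):

    /* Inside PyErr_NormalizeException(), before any recursive call:
       sketch of the fallback described above, not the actual patch. */
    PyThreadState *tstate = PyThreadState_GET();
    if (tstate->recursion_depth >= Py_GetRecursionLimit()) {
        /* Out of stack budget: substitute the prebuilt, already-
           normalized RuntimeError instance instead of recursing to
           create and normalize a fresh one. */
        Py_INCREF(PyExc_RuntimeError);
        Py_XDECREF(*exc);
        *exc = PyExc_RuntimeError;
        Py_INCREF(PyExc_RecursionErrorInst);
        Py_XDECREF(*val);
        *val = PyExc_RecursionErrorInst;
        return;
    }
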
Hopefully this does it.  =)

----------------------------------------------------------------------

Comment By: Brett Cannon (bcannon)
Date: 2006-06-30 10:06

Message:
Logged In: YES 
user_id=357491

Seems reasonable to me.  I will try to cook up a patch after
I am done trying to fix the xml_parsers.py crasher.

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo)
Date: 2006-06-30 00:14

Message:
Logged In: YES 
user_id=4771

Looks quite obscure, a hack to avoid the bad effects of the
previous hack of temporarily decreasing the recursion depth
(which no other piece of code does).  I'd be much more
confident that I'm looking at correct code if we used a more
explicit approach.  What about a prebuilt RuntimeError
instance with the message "maximum recursion depth
exceeded", similar to the prebuilt MemoryError instance?

Then at least PyObject_Call() could raise this instance
directly, with PyErr_SetObject().  We'd get an
already-normalized exception in this way, and remove any
need for tstate recursion_depth mangling.

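The point being that the raising side then collapses to
something like this (a sketch, using the name the patch
above adopted for the prebuilt instance):

    /* Raise the shared, pre-normalized instance directly; nothing
       new has to be created at the moment we are out of stack. */
    PyErr_SetObject(PyExc_RuntimeError, PyExc_RecursionErrorInst);
    return NULL;
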
----------------------------------------------------------------------

Comment By: Brett Cannon (bcannon)
Date: 2006-06-26 12:31

Message:
Logged In: YES 
user_id=357491

Yes, Armin, that is a rather evil piece of code.  =)  But I
have a possible simple solution.  =)

If you add a check after the PyExceptionClass_Check() in the
'if' statement that tstate->recursion_depth is greater than
0, you short-circuit the infinite recursion and still get a
sensible output.

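Schematically, the guard amounts to this (a paraphrase of
the proposal; the real 'if' in PyErr_NormalizeException()
has more conditions around it):

    /* Hypothetical paraphrase, not the actual source: only re-enter
       normalization while some recursion budget remains, so the chain
       of re-normalizations must eventually terminate. */
    PyThreadState *tstate = PyThreadState_GET();
    if (PyExceptionClass_Check(*exc) && tstate->recursion_depth > 0) {
        PyErr_NormalizeException(exc, val, tb);   /* recursive call */
    }
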
I have attached a patch with my proposed changes for
PyObject_Call() and PyErr_NormalizeException() and the
removal of the recursion check from slot_tp_call().  The
only issue I can see with this is if the recursion_depth
check turns out to be too small a number.  But I really
doubt that will be an issue, since one shouldn't have a
chain of tracebacks deeper than the current recursion depth.

Let me know what you think of the patch.

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo)
Date: 2006-06-25 02:33

Message:
Logged In: YES 
user_id=4771

Yes, it seems reasonable.  You should probably manipulate
tstate->recursion_depth directly instead of via the
macros, as e.g. ceval.c does.

This would fix most crashes here.  It would make the attached
test17.py segfault, though.  This test17 already segfaults a
debug build of Python, but not a release build because the
recursive PyErr_NormalizeException() is turned into a tail
call by gcc.  (What, a convoluted mind, mine?)

----------------------------------------------------------------------

Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 14:54

Message:
Logged In: YES 
user_id=357491

I just had an idea, Armin.  What if, at the recursive call
site in PyErr_NormalizeException(), we called
Py_LeaveRecursiveCall() before and Py_EnterRecursiveCall()
after?  That would keep the recursion limit the same when
the normalization was done, but still allow the check in
PyObject_Call()::

Py_LeaveRecursiveCall();
PyErr_NormalizeException(exc, val, tb);
Py_EnterRecursiveCall("");

Since it is an internal call I think it would be safe to
"play" with the recursion depth value like this.  What do
you think?

----------------------------------------------------------------------

Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 13:57

Message:
Logged In: YES 
user_id=357491

The rev. that Armin checked in was actually r47061.

----------------------------------------------------------------------

Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 13:53

Message:
Logged In: YES 
user_id=357491

I thought the check was in slot_tp_call and not
PyObject_Call.  So yeah, I basically forgot.  =)

The problem with allowing the segfault to stay is that it
destroys security in terms of protecting the interpreter,
which I am trying to deal with.  So leaving random ways to
crash the interpreter is currently a no-no for me.  I will
see if I can come up with another way to fix this issue.

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo)
Date: 2006-06-23 13:05

Message:
Logged In: YES 
user_id=4771

I'd have answered "good idea, go ahead", if it were not for
what I found out a few days ago, which is that:

* you yourself already checked a Py_EnterRecursiveCall() into
PyObject_Call() -- that was in r46806 (I guess you forgot)

* I got a case of Python hanging on me in an infinite busy
loop, which turned out to be caused by this (!)

So I reverted r46806 in r47601, added a test (see the log
for an explanation), and moved the Py_EnterRecursiveCall()
elsewhere, where it still catches the originally intended
case but will probably no longer catch the cases in the
present tracker.  Not sure what to do about it.  I'd suggest
being extra careful here; better some extremely obscure and
ad-hoc ways to provoke a segfault than busy-loop hangs in
previously working programs...

----------------------------------------------------------------------

Comment By: Brett Cannon (bcannon)
Date: 2006-06-23 12:44

Message:
Logged In: YES 
user_id=357491

Do you have any objection to using the
Py_EnterRecursiveCall() in PyObject_Call(), Armin, to at
least deal with the crashers it fixes?

----------------------------------------------------------------------

Comment By: Terry J. Reedy (tjreedy)
Date: 2005-09-01 13:39

Message:
Logged In: YES 
user_id=593130

Bug submission [ 1267884 ] crash recursive __getattr__
appears to be another example of this problem, so I closed it as 
a duplicate.  If that turns out to be wrong, it should be reopened.

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo)
Date: 2005-05-29 05:23

Message:
Logged In: YES 
user_id=4771

Adding a Py_EnterRecursiveCall() in PyObject_Call() seems to fix all the examples so far, with the exception of the "__get__=getattr" one, where we get a strange result instead of a RuntimeError (I suspect careless exception eating is taking place).

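For reference, the change amounts to wrapping the tp_call
dispatch (a from-memory sketch of abstract.c, not the exact
diff):

    #include <Python.h>

    /* Sketch: PyObject_Call() with the C recursion guard added. */
    PyObject *
    PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw)
    {
        ternaryfunc call = func->ob_type->tp_call;

        if (call != NULL) {
            PyObject *result;
            /* New: fail with a RuntimeError once the recursion limit
               is reached, instead of recursing blindly into C. */
            if (Py_EnterRecursiveCall(" while calling a Python object"))
                return NULL;
            result = (*call)(func, arg, kw);
            Py_LeaveRecursiveCall();
            return result;
        }
        PyErr_Format(PyExc_TypeError, "'%s' object is not callable",
                     func->ob_type->tp_name);
        return NULL;
    }
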
The main loop in ceval.c doesn't call PyObject_Call() very often: it usually dispatches directly itself for performance, which is exactly what we want here, as recursion from ceval.c is already protected by a Py_EnterRecursiveCall().  So this change has only a minor impact on performance.  Pystone, for example, issues only three PyObject_Call() invocations per loop, to call classes.  The impact is almost unmeasurable (< 0.4%).

Of course I'll think a bit more and search for examples that don't go through PyObject_Call()  :-)

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo)
Date: 2005-05-23 06:52

Message:
Logged In: YES 
user_id=4771

When I thought about the same problem for PyPy, I imagined
that it would be easy to use the call graph computed by
the type inferencer ("annotator").  We would find an
algorithm that figures out the minimal number of places
that need a Py_EnterRecursiveCall so that every cycle goes
through at least one of them.  For CPython it might be
possible to go down the same path if someone can find a C
code analyzer smart enough to provide the required
information -- a call graph including indirect calls through
function pointers.  Not sure it's sane, though.

----------------------------------------------------------------------

Comment By: Michael Hudson (mwh)
Date: 2005-05-23 06:16

Message:
Logged In: YES 
user_id=6656

I agree with Armin that this could easily be a never-ending story.  Perhaps
it would suffice to sprinkle Py_EnterRecursiveCall around as we find holes.

It might have to, because I can't really think of a better way of doing this.  
The only other approach I know is that of SBCL (a Common Lisp 
implementation): it mprotects a page at the end of the stack and installs a 
SIGSEGV handler (and uses sigaltstack) that knows how to abort the 
current lisp operation.  Somehow, I don't think we want to go down this 
line.

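For the curious, the mechanism looks roughly like this on
Linux (a minimal, hypothetical demo of the guard-page trick;
finding the real end of the C stack and unwinding safely are
the hard parts omitted here):

    #include <signal.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char altstack[64 * 1024];      /* handler runs on this */

    static void on_segv(int sig)
    {
        static const char msg[] = "guard page hit\n";
        (void)sig;
        write(2, msg, sizeof msg - 1);    /* async-signal-safe */
        _exit(1);                         /* a real runtime would
                                             unwind and continue */
    }

    int main(void)
    {
        stack_t ss;
        struct sigaction sa;
        long pagesz = sysconf(_SC_PAGESIZE);
        char *guard;

        memset(&ss, 0, sizeof ss);
        ss.ss_sp = altstack;              /* alternate stack, so the
                                             handler has room to run */
        ss.ss_size = sizeof altstack;
        sigaltstack(&ss, NULL);

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_segv;
        sa.sa_flags = SA_ONSTACK;
        sigaction(SIGSEGV, &sa, NULL);

        /* Stand-in for the mprotect()ed page at the stack limit. */
        guard = mmap(NULL, pagesz, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        guard[0] = 1;                     /* faults; the handler fires */
        return 0;
    }
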
Anybody have any other ideas?

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2005-05-23 06:06

Message:
Logged In: YES 
user_id=21627

It has been a long-time policy that you should not be able
to crash the Python interpreter even with malicious code. I
think this is a good policy, because it always provides
people with a traceback, which is much easier to analyse
than a core dump.

----------------------------------------------------------------------

Comment By: Josiah Carlson (josiahcarlson)
Date: 2005-05-23 00:41

Message:
Logged In: YES 
user_id=341410

I personally think that the CPython runtime should make a
best effort not to crash when running code that makes sense.
But when CPython is running on input that is nonsensical (in
each of the examples that Armin provides, no return value
could make sense), I think that as long as the behavior is
stated clearly, it is sufficient.

Certainly it would be nice if CPython did not crash in such
cases, but I don't know whether the performance penalty and
code maintenance outweigh the cases where users write bad
code.  Perhaps there could be a compile-time option, enabled
by default, depending on whether we want a safer or a faster
CPython.  Of course maintenance is still a chore, and it is
one additional set of calls that C extension writers may
need to be aware of (if their code can be recursively
called, and they want to participate in the infinite
recursion detection).

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo)
Date: 2005-05-20 14:46

Message:
Logged In: YES 
user_id=4771

Yes, but I'm concerned that we would need to add it in really, really many places, and probably forget some even then.  E.g. I just thought of:

    lst = [apply]
    lst.append(lst)
    apply(*lst)

It seems a bit hopeless, honestly...

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2005-05-20 14:22

Message:
Logged In: YES 
user_id=21627

Wouldn't adding Py_EnterRecursiveCall into many places solve
the problem?

----------------------------------------------------------------------

Comment By: Armin Rigo (arigo)
Date: 2005-05-19 08:05

Message:
Logged In: YES 
user_id=4771

This is not about the new module.  The same example can be written as:

  import types, operator
  class A: pass
  A.__mul__ = types.MethodType(operator.mul, None, A)

If this still looks essentially like an indirect way of using the new module, here is another example:

  class A(str): __get__ = getattr
  a = A('a')
  A.a = a
  a.a

Or, as I just found out, new-style classes are again vulnerable to the older example based on __call__, which was fixed for old-style classes:

  class A(object): pass
  A.__call__ = A()
  A()()

I'm not denying that these examples look convoluted :-)
My point here is that we can basically build a lot of examples based only on core (if not necessarily widely understood) language features.  It appears to go against the basic hope that CPython cannot be crashed as long as you don't use features explicitly marked as dangerous.

----------------------------------------------------------------------

Comment By: Terry J. Reedy (tjreedy)
Date: 2005-05-18 19:02

Message:
Logged In: YES 
user_id=593130

On Windows, this caused the interactive window to just
disappear, so I suspect something similar occurred.

New is a known-dangerous, use-at-your-own-risk,
implementation-specific module whose use, like bytecode
hacking, is outside the language proper.  Both bypass normal
object-creation syntax and its checks, and both can create
invalid objects.  A hold-your-hand implementation would not
give such access to internals.

Lib Ref 3.28 says "This module provides a low-level interface to 
the interpreter, so care must be exercised when using this 
module. It is possible to supply non-sensical arguments which 
crash the interpreter when the object is used."  Should more or 
different be said?  

If not, I suspect this should be closed as 'won't fix', as in 'won't 
remove the inherently dangerous new module'.

----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1202533&group_id=5470


More information about the Python-bugs-list mailing list