[Python-checkins] cpython (3.4): Regenerate pydoc_topics, fix markup errors, in preparation for 3.4.0 final.

larry.hastings python-checkins at python.org
Mon Mar 17 07:33:38 CET 2014


http://hg.python.org/cpython/rev/28a92fd64ba9
changeset:   89798:28a92fd64ba9
branch:      3.4
user:        Larry Hastings <larry at hastings.org>
date:        Sat Mar 15 22:29:19 2014 -0700
summary:
  Regenerate pydoc_topics, fix markup errors, in preparation for 3.4.0 final.

files:
  Doc/tools/sphinxext/susp-ignored.csv |  1 -
  Doc/whatsnew/3.4.rst                 |  2 +-
  Lib/pydoc_data/topics.py             |  8 ++++----
  3 files changed, 5 insertions(+), 6 deletions(-)
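For context on the regenerated file: Lib/pydoc_data/topics.py is simply a module holding a "topics" dict that maps topic names to the help text pydoc prints. A minimal sketch of inspecting it, for illustration only (not part of this changeset):

   # Peek at the regenerated help topics; the dict maps topic name -> help text.
   from pydoc_data.topics import topics

   print(sorted(topics)[:5])      # a few topic names, e.g. 'assert', 'assignment'
   print(topics['assert'][:80])   # beginning of the "assert" topic text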


diff --git a/Doc/tools/sphinxext/susp-ignored.csv b/Doc/tools/sphinxext/susp-ignored.csv
--- a/Doc/tools/sphinxext/susp-ignored.csv
+++ b/Doc/tools/sphinxext/susp-ignored.csv
@@ -282,4 +282,3 @@
 whatsnew/changelog,,::,": Fix FTP tests for IPv6, bind to ""::1"" instead of ""localhost""."
 whatsnew/changelog,,::,": Use ""127.0.0.1"" or ""::1"" instead of ""localhost"" as much as"
 whatsnew/changelog,,:password,user:password
-whatsnew/changelog,,:gz,w:gz
diff --git a/Doc/whatsnew/3.4.rst b/Doc/whatsnew/3.4.rst
--- a/Doc/whatsnew/3.4.rst
+++ b/Doc/whatsnew/3.4.rst
@@ -2284,7 +2284,7 @@
   name, they now set it to an empty list.  The previous behavior could cause
   the import system to do the wrong thing on submodule imports if there was
   also a directory with the same name as the frozen package.  The correct way
-  to determine if a module is a package or not is to use``hasattr(module,
+  to determine if a module is a package or not is to use ``hasattr(module,
   '__path__')`` (:issue:`18065`).
 
 * Frozen modules no longer define a ``__file__`` attribute. It's semantically
diff --git a/Lib/pydoc_data/topics.py b/Lib/pydoc_data/topics.py
--- a/Lib/pydoc_data/topics.py
+++ b/Lib/pydoc_data/topics.py
@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-# Autogenerated by Sphinx on Sun Mar  9 03:59:56 2014
+# Autogenerated by Sphinx on Sat Mar 15 22:21:45 2014
 topics = {'assert': '\nThe "assert" statement\n**********************\n\nAssert statements are a convenient way to insert debugging assertions\ninto a program:\n\n   assert_stmt ::= "assert" expression ["," expression]\n\nThe simple form, "assert expression", is equivalent to\n\n   if __debug__:\n      if not expression: raise AssertionError\n\nThe extended form, "assert expression1, expression2", is equivalent to\n\n   if __debug__:\n      if not expression1: raise AssertionError(expression2)\n\nThese equivalences assume that "__debug__" and "AssertionError" refer\nto the built-in variables with those names.  In the current\nimplementation, the built-in variable "__debug__" is "True" under\nnormal circumstances, "False" when optimization is requested (command\nline option -O).  The current code generator emits no code for an\nassert statement when optimization is requested at compile time.  Note\nthat it is unnecessary to include the source code for the expression\nthat failed in the error message; it will be displayed as part of the\nstack trace.\n\nAssignments to "__debug__" are illegal.  The value for the built-in\nvariable is determined when the interpreter starts.\n',
  'assignment': '\nAssignment statements\n*********************\n\nAssignment statements are used to (re)bind names to values and to\nmodify attributes or items of mutable objects:\n\n   assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)\n   target_list     ::= target ("," target)* [","]\n   target          ::= identifier\n              | "(" target_list ")"\n              | "[" target_list "]"\n              | attributeref\n              | subscription\n              | slicing\n              | "*" target\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn assignment statement evaluates the expression list (remember that\nthis can be a single expression or a comma-separated list, the latter\nyielding a tuple) and assigns the single resulting object to each of\nthe target lists, from left to right.\n\nAssignment is defined recursively depending on the form of the target\n(list). When a target is part of a mutable object (an attribute\nreference, subscription or slicing), the mutable object must\nultimately perform the assignment and decide about its validity, and\nmay raise an exception if the assignment is unacceptable.  The rules\nobserved by various types and the exceptions raised are given with the\ndefinition of the object types (see section *The standard type\nhierarchy*).\n\nAssignment of an object to a target list, optionally enclosed in\nparentheses or square brackets, is recursively defined as follows.\n\n* If the target list is a single target: The object is assigned to\n  that target.\n\n* If the target list is a comma-separated list of targets: The\n  object must be an iterable with the same number of items as there\n  are targets in the target list, and the items are assigned, from\n  left to right, to the corresponding targets.\n\n  * If the target list contains one target prefixed with an\n    asterisk, called a "starred" target: The object must be a sequence\n    with at least as many items as there are targets in the target\n    list, minus one.  The first items of the sequence are assigned,\n    from left to right, to the targets before the starred target.  The\n    final items of the sequence are assigned to the targets after the\n    starred target.  A list of the remaining items in the sequence is\n    then assigned to the starred target (the list can be empty).\n\n  * Else: The object must be a sequence with the same number of\n    items as there are targets in the target list, and the items are\n    assigned, from left to right, to the corresponding targets.\n\nAssignment of an object to a single target is recursively defined as\nfollows.\n\n* If the target is an identifier (name):\n\n  * If the name does not occur in a "global" or "nonlocal" statement\n    in the current code block: the name is bound to the object in the\n    current local namespace.\n\n  * Otherwise: the name is bound to the object in the global\n    namespace or the outer namespace determined by "nonlocal",\n    respectively.\n\n  The name is rebound if it was already bound.  
This may cause the\n  reference count for the object previously bound to the name to reach\n  zero, causing the object to be deallocated and its destructor (if it\n  has one) to be called.\n\n* If the target is a target list enclosed in parentheses or in\n  square brackets: The object must be an iterable with the same number\n  of items as there are targets in the target list, and its items are\n  assigned, from left to right, to the corresponding targets.\n\n* If the target is an attribute reference: The primary expression in\n  the reference is evaluated.  It should yield an object with\n  assignable attributes; if this is not the case, "TypeError" is\n  raised.  That object is then asked to assign the assigned object to\n  the given attribute; if it cannot perform the assignment, it raises\n  an exception (usually but not necessarily "AttributeError").\n\n  Note: If the object is a class instance and the attribute reference\n  occurs on both sides of the assignment operator, the RHS expression,\n  "a.x" can access either an instance attribute or (if no instance\n  attribute exists) a class attribute.  The LHS target "a.x" is always\n  set as an instance attribute, creating it if necessary.  Thus, the\n  two occurrences of "a.x" do not necessarily refer to the same\n  attribute: if the RHS expression refers to a class attribute, the\n  LHS creates a new instance attribute as the target of the\n  assignment:\n\n     class Cls:\n         x = 3             # class variable\n     inst = Cls()\n     inst.x = inst.x + 1   # writes inst.x as 4 leaving Cls.x as 3\n\n  This description does not necessarily apply to descriptor\n  attributes, such as properties created with "property()".\n\n* If the target is a subscription: The primary expression in the\n  reference is evaluated.  It should yield either a mutable sequence\n  object (such as a list) or a mapping object (such as a dictionary).\n  Next, the subscript expression is evaluated.\n\n  If the primary is a mutable sequence object (such as a list), the\n  subscript must yield an integer.  If it is negative, the sequence\'s\n  length is added to it.  The resulting value must be a nonnegative\n  integer less than the sequence\'s length, and the sequence is asked\n  to assign the assigned object to its item with that index.  If the\n  index is out of range, "IndexError" is raised (assignment to a\n  subscripted sequence cannot add new items to a list).\n\n  If the primary is a mapping object (such as a dictionary), the\n  subscript must have a type compatible with the mapping\'s key type,\n  and the mapping is then asked to create a key/datum pair which maps\n  the subscript to the assigned object.  This can either replace an\n  existing key/value pair with the same key value, or insert a new\n  key/value pair (if no key with the same value existed).\n\n  For user-defined objects, the "__setitem__()" method is called with\n  appropriate arguments.\n\n* If the target is a slicing: The primary expression in the\n  reference is evaluated.  It should yield a mutable sequence object\n  (such as a list).  The assigned object should be a sequence object\n  of the same type.  Next, the lower and upper bound expressions are\n  evaluated, insofar they are present; defaults are zero and the\n  sequence\'s length.  The bounds should evaluate to integers. If\n  either bound is negative, the sequence\'s length is added to it.  The\n  resulting bounds are clipped to lie between zero and the sequence\'s\n  length, inclusive.  
Finally, the sequence object is asked to replace\n  the slice with the items of the assigned sequence.  The length of\n  the slice may be different from the length of the assigned sequence,\n  thus changing the length of the target sequence, if the object\n  allows it.\n\n**CPython implementation detail:** In the current implementation, the\nsyntax for targets is taken to be the same as for expressions, and\ninvalid syntax is rejected during the code generation phase, causing\nless detailed error messages.\n\nWARNING: Although the definition of assignment implies that overlaps\nbetween the left-hand side and the right-hand side are \'safe\' (for\nexample "a, b = b, a" swaps two variables), overlaps *within* the\ncollection of assigned-to variables are not safe!  For instance, the\nfollowing program prints "[0, 2]":\n\n   x = [0, 1]\n   i = 0\n   i, x[i] = 1, 2\n   print(x)\n\nSee also: **PEP 3132** - Extended Iterable Unpacking\n\n     The specification for the "*target" feature.\n\n\nAugmented assignment statements\n===============================\n\nAugmented assignment is the combination, in a single statement, of a\nbinary operation and an assignment statement:\n\n   augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)\n   augtarget                 ::= identifier | attributeref | subscription | slicing\n   augop                     ::= "+=" | "-=" | "*=" | "/=" | "//=" | "%=" | "**="\n             | ">>=" | "<<=" | "&=" | "^=" | "|="\n\n(See section *Primaries* for the syntax definitions for the last three\nsymbols.)\n\nAn augmented assignment evaluates the target (which, unlike normal\nassignment statements, cannot be an unpacking) and the expression\nlist, performs the binary operation specific to the type of assignment\non the two operands, and assigns the result to the original target.\nThe target is only evaluated once.\n\nAn augmented assignment expression like "x += 1" can be rewritten as\n"x = x + 1" to achieve a similar, but not exactly equal effect. In the\naugmented version, "x" is only evaluated once. Also, when possible,\nthe actual operation is performed *in-place*, meaning that rather than\ncreating a new object and assigning that to the target, the old object\nis modified instead.\n\nWith the exception of assigning to tuples and multiple targets in a\nsingle statement, the assignment done by augmented assignment\nstatements is handled the same way as normal assignments. Similarly,\nwith the exception of the possible *in-place* behavior, the binary\noperation performed by augmented assignment is the same as the normal\nbinary operations.\n\nFor targets which are attribute references, the same *caveat about\nclass and instance attributes* applies as for regular assignments.\n',
  'atom-identifiers': '\nIdentifiers (Names)\n*******************\n\nAn identifier occurring as an atom is a name.  See section\n*Identifiers and keywords* for lexical definition and section *Naming\nand binding* for documentation of naming and binding.\n\nWhen the name is bound to an object, evaluation of the atom yields\nthat object. When a name is not bound, an attempt to evaluate it\nraises a "NameError" exception.\n\n**Private name mangling:** When an identifier that textually occurs in\na class definition begins with two or more underscore characters and\ndoes not end in two or more underscores, it is considered a *private\nname* of that class. Private names are transformed to a longer form\nbefore code is generated for them.  The transformation inserts the\nclass name, with leading underscores removed and a single underscore\ninserted, in front of the name.  For example, the identifier "__spam"\noccurring in a class named "Ham" will be transformed to "_Ham__spam".\nThis transformation is independent of the syntactical context in which\nthe identifier is used.  If the transformed name is extremely long\n(longer than 255 characters), implementation defined truncation may\nhappen. If the class name consists only of underscores, no\ntransformation is done.\n',
@@ -23,7 +23,7 @@
  'context-managers': '\nWith Statement Context Managers\n*******************************\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a "with" statement. The context manager\nhandles the entry into, and the exit from, the desired runtime context\nfor the execution of the block of code.  Context managers are normally\ninvoked using the "with" statement (described in section *The with\nstatement*), but can also be used by directly invoking their methods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n   Enter the runtime context related to this object. The "with"\n   statement will bind this method\'s return value to the target(s)\n   specified in the "as" clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n   Exit the runtime context related to this object. The parameters\n   describe the exception that caused the context to be exited. If the\n   context was exited without an exception, all three arguments will\n   be "None".\n\n   If an exception is supplied, and the method wishes to suppress the\n   exception (i.e., prevent it from being propagated), it should\n   return a true value. Otherwise, the exception will be processed\n   normally upon exit from this method.\n\n   Note that "__exit__()" methods should not reraise the passed-in\n   exception; this is the caller\'s responsibility.\n\nSee also: **PEP 0343** - The "with" statement\n\n     The specification, background, and examples for the Python "with"\n     statement.\n',
  'continue': '\nThe "continue" statement\n************************\n\n   continue_stmt ::= "continue"\n\n"continue" may only occur syntactically nested in a "for" or "while"\nloop, but not nested in a function or class definition or "finally"\nclause within that loop.  It continues with the next cycle of the\nnearest enclosing loop.\n\nWhen "continue" passes control out of a "try" statement with a\n"finally" clause, that "finally" clause is executed before really\nstarting the next loop cycle.\n',
  'conversions': '\nArithmetic conversions\n**********************\n\nWhen a description of an arithmetic operator below uses the phrase\n"the numeric arguments are converted to a common type," this means\nthat the operator implementation for built-in types works that way:\n\n* If either argument is a complex number, the other is converted to\n  complex;\n\n* otherwise, if either argument is a floating point number, the\n  other is converted to floating point;\n\n* otherwise, both must be integers and no conversion is necessary.\n\nSome additional rules apply for certain operators (e.g., a string left\nargument to the \'%\' operator).  Extensions must define their own\nconversion behavior.\n',
- 'customization': '\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n   Called to create a new instance of class *cls*.  "__new__()" is a\n   static method (special-cased so you need not declare it as such)\n   that takes the class of which an instance was requested as its\n   first argument.  The remaining arguments are those passed to the\n   object constructor expression (the call to the class).  The return\n   value of "__new__()" should be the new object instance (usually an\n   instance of *cls*).\n\n   Typical implementations create a new instance of the class by\n   invoking the superclass\'s "__new__()" method using\n   "super(currentclass, cls).__new__(cls[, ...])" with appropriate\n   arguments and then modifying the newly-created instance as\n   necessary before returning it.\n\n   If "__new__()" returns an instance of *cls*, then the new\n   instance\'s "__init__()" method will be invoked like\n   "__init__(self[, ...])", where *self* is the new instance and the\n   remaining arguments are the same as were passed to "__new__()".\n\n   If "__new__()" does not return an instance of *cls*, then the new\n   instance\'s "__init__()" method will not be invoked.\n\n   "__new__()" is intended mainly to allow subclasses of immutable\n   types (like int, str, or tuple) to customize instance creation.  It\n   is also commonly overridden in custom metaclasses in order to\n   customize class creation.\n\nobject.__init__(self[, ...])\n\n   Called when the instance is created.  The arguments are those\n   passed to the class constructor expression.  If a base class has an\n   "__init__()" method, the derived class\'s "__init__()" method, if\n   any, must explicitly call it to ensure proper initialization of the\n   base class part of the instance; for example:\n   "BaseClass.__init__(self, [args...])".  As a special constraint on\n   constructors, no value may be returned; doing so will cause a\n   "TypeError" to be raised at runtime.\n\nobject.__del__(self)\n\n   Called when the instance is about to be destroyed.  This is also\n   called a destructor.  If a base class has a "__del__()" method, the\n   derived class\'s "__del__()" method, if any, must explicitly call it\n   to ensure proper deletion of the base class part of the instance.\n   Note that it is possible (though not recommended!) for the\n   "__del__()" method to postpone destruction of the instance by\n   creating a new reference to it.  It may then be called at a later\n   time when this new reference is deleted.  It is not guaranteed that\n   "__del__()" methods are called for objects that still exist when\n   the interpreter exits.\n\n   Note: "del x" doesn\'t directly call "x.__del__()" --- the former\n     decrements the reference count for "x" by one, and the latter is\n     only called when "x"\'s reference count reaches zero.  Some common\n     situations that may prevent the reference count of an object from\n     going to zero include: circular references between objects (e.g.,\n     a doubly-linked list or a tree data structure with parent and\n     child pointers); a reference to the object on the stack frame of\n     a function that caught an exception (the traceback stored in\n     "sys.exc_info()[2]" keeps the stack frame alive); or a reference\n     to the object on the stack frame that raised an unhandled\n     exception in interactive mode (the traceback stored in\n     "sys.last_traceback" keeps the stack frame alive).  
The first\n     situation can only be remedied by explicitly breaking the cycles;\n     the latter two situations can be resolved by storing "None" in\n     "sys.last_traceback". Circular references which are garbage are\n     detected and cleaned up when the cyclic garbage collector is\n     enabled (it\'s on by default). Refer to the documentation for the\n     "gc" module for more information about this topic.\n\n   Warning: Due to the precarious circumstances under which\n     "__del__()" methods are invoked, exceptions that occur during\n     their execution are ignored, and a warning is printed to\n     "sys.stderr" instead. Also, when "__del__()" is invoked in\n     response to a module being deleted (e.g., when execution of the\n     program is done), other globals referenced by the "__del__()"\n     method may already have been deleted or in the process of being\n     torn down (e.g. the import machinery shutting down).  For this\n     reason, "__del__()" methods should do the absolute minimum needed\n     to maintain external invariants.  Starting with version 1.5,\n     Python guarantees that globals whose name begins with a single\n     underscore are deleted from their module before other globals are\n     deleted; if no other references to such globals exist, this may\n     help in assuring that imported modules are still available at the\n     time when the "__del__()" method is called.\n\nobject.__repr__(self)\n\n   Called by the "repr()" built-in function to compute the "official"\n   string representation of an object.  If at all possible, this\n   should look like a valid Python expression that could be used to\n   recreate an object with the same value (given an appropriate\n   environment).  If this is not possible, a string of the form\n   "<...some useful description...>" should be returned. The return\n   value must be a string object. If a class defines "__repr__()" but\n   not "__str__()", then "__repr__()" is also used when an "informal"\n   string representation of instances of that class is required.\n\n   This is typically used for debugging, so it is important that the\n   representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n   Called by "str(object)" and the built-in functions "format()" and\n   "print()" to compute the "informal" or nicely printable string\n   representation of an object.  The return value must be a *string*\n   object.\n\n   This method differs from "object.__repr__()" in that there is no\n   expectation that "__str__()" return a valid Python expression: a\n   more convenient or concise representation can be used.\n\n   The default implementation defined by the built-in type "object"\n   calls "object.__repr__()".\n\nobject.__bytes__(self)\n\n   Called by "bytes()" to compute a byte-string representation of an\n   object. This should return a "bytes" object.\n\nobject.__format__(self, format_spec)\n\n   Called by the "format()" built-in function (and by extension, the\n   "str.format()" method of class "str") to produce a "formatted"\n   string representation of an object. The "format_spec" argument is a\n   string that contains a description of the formatting options\n   desired. 
The interpretation of the "format_spec" argument is up to\n   the type implementing "__format__()", however most classes will\n   either delegate formatting to one of the built-in types, or use a\n   similar formatting option syntax.\n\n   See *Format Specification Mini-Language* for a description of the\n   standard formatting syntax.\n\n   The return value must be a string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n   These are the so-called "rich comparison" methods. The\n   correspondence between operator symbols and method names is as\n   follows: "x<y" calls "x.__lt__(y)", "x<=y" calls "x.__le__(y)",\n   "x==y" calls "x.__eq__(y)", "x!=y" calls "x.__ne__(y)", "x>y" calls\n   "x.__gt__(y)", and "x>=y" calls "x.__ge__(y)".\n\n   A rich comparison method may return the singleton "NotImplemented"\n   if it does not implement the operation for a given pair of\n   arguments. By convention, "False" and "True" are returned for a\n   successful comparison. However, these methods can return any value,\n   so if the comparison operator is used in a Boolean context (e.g.,\n   in the condition of an "if" statement), Python will call "bool()"\n   on the value to determine if the result is true or false.\n\n   There are no implied relationships among the comparison operators.\n   The truth of "x==y" does not imply that "x!=y" is false.\n   Accordingly, when defining "__eq__()", one should also define\n   "__ne__()" so that the operators will behave as expected.  See the\n   paragraph on "__hash__()" for some important notes on creating\n   *hashable* objects which support custom comparison operations and\n   are usable as dictionary keys.\n\n   There are no swapped-argument versions of these methods (to be used\n   when the left argument does not support the operation but the right\n   argument does); rather, "__lt__()" and "__gt__()" are each other\'s\n   reflection, "__le__()" and "__ge__()" are each other\'s reflection,\n   and "__eq__()" and "__ne__()" are their own reflection.\n\n   Arguments to rich comparison methods are never coerced.\n\n   To automatically generate ordering operations from a single root\n   operation, see "functools.total_ordering()".\n\nobject.__hash__(self)\n\n   Called by built-in function "hash()" and for operations on members\n   of hashed collections including "set", "frozenset", and "dict".\n   "__hash__()" should return an integer.  The only required property\n   is that objects which compare equal have the same hash value; it is\n   advised to somehow mix together (e.g. using exclusive or) the hash\n   values for the components of the object that also play a part in\n   comparison of objects.\n\n   Note: "hash()" truncates the value returned from an object\'s\n     custom "__hash__()" method to the size of a "Py_ssize_t".  This\n     is typically 8 bytes on 64-bit builds and 4 bytes on 32-bit\n     builds. If an object\'s   "__hash__()" must interoperate on builds\n     of different bit sizes, be sure to check the width on all\n     supported builds.  An easy way to do this is with "python -c\n     "import sys; print(sys.hash_info.width)""\n\n   If a class does not define an "__eq__()" method it should not\n   define a "__hash__()" operation either; if it defines "__eq__()"\n   but not "__hash__()", its instances will not be usable as items in\n   hashable collections.  
If a class defines mutable objects and\n   implements an "__eq__()" method, it should not implement\n   "__hash__()", since the implementation of hashable collections\n   requires that a key\'s hash value is immutable (if the object\'s hash\n   value changes, it will be in the wrong hash bucket).\n\n   User-defined classes have "__eq__()" and "__hash__()" methods by\n   default; with them, all objects compare unequal (except with\n   themselves) and "x.__hash__()" returns an appropriate value such\n   that "x == y" implies both that "x is y" and "hash(x) == hash(y)".\n\n   A class that overrides "__eq__()" and does not define "__hash__()"\n   will have its "__hash__()" implicitly set to "None".  When the\n   "__hash__()" method of a class is "None", instances of the class\n   will raise an appropriate "TypeError" when a program attempts to\n   retrieve their hash value, and will also be correctly identified as\n   unhashable when checking "isinstance(obj, collections.Hashable").\n\n   If a class that overrides "__eq__()" needs to retain the\n   implementation of "__hash__()" from a parent class, the interpreter\n   must be told this explicitly by setting "__hash__ =\n   <ParentClass>.__hash__".\n\n   If a class that does not override "__eq__()" wishes to suppress\n   hash support, it should include "__hash__ = None" in the class\n   definition. A class which defines its own "__hash__()" that\n   explicitly raises a "TypeError" would be incorrectly identified as\n   hashable by an "isinstance(obj, collections.Hashable)" call.\n\n   Note: By default, the "__hash__()" values of str, bytes and\n     datetime objects are "salted" with an unpredictable random value.\n     Although they remain constant within an individual Python\n     process, they are not predictable between repeated invocations of\n     Python.This is intended to provide protection against a denial-\n     of-service caused by carefully-chosen inputs that exploit the\n     worst case performance of a dict insertion, O(n^2) complexity.\n     See http://www.ocert.org/advisories/ocert-2011-003.html for\n     details.Changing hash values affects the iteration order of\n     dicts, sets and other mappings.  Python has never made guarantees\n     about this ordering (and it typically varies between 32-bit and\n     64-bit builds).See also "PYTHONHASHSEED".\n\n   Changed in version 3.3: Hash randomization is enabled by default.\n\nobject.__bool__(self)\n\n   Called to implement truth value testing and the built-in operation\n   "bool()"; should return "False" or "True".  When this method is not\n   defined, "__len__()" is called, if it is defined, and the object is\n   considered true if its result is nonzero.  If a class defines\n   neither "__len__()" nor "__bool__()", all its instances are\n   considered true.\n',
+ 'customization': '\nBasic customization\n*******************\n\nobject.__new__(cls[, ...])\n\n   Called to create a new instance of class *cls*.  "__new__()" is a\n   static method (special-cased so you need not declare it as such)\n   that takes the class of which an instance was requested as its\n   first argument.  The remaining arguments are those passed to the\n   object constructor expression (the call to the class).  The return\n   value of "__new__()" should be the new object instance (usually an\n   instance of *cls*).\n\n   Typical implementations create a new instance of the class by\n   invoking the superclass\'s "__new__()" method using\n   "super(currentclass, cls).__new__(cls[, ...])" with appropriate\n   arguments and then modifying the newly-created instance as\n   necessary before returning it.\n\n   If "__new__()" returns an instance of *cls*, then the new\n   instance\'s "__init__()" method will be invoked like\n   "__init__(self[, ...])", where *self* is the new instance and the\n   remaining arguments are the same as were passed to "__new__()".\n\n   If "__new__()" does not return an instance of *cls*, then the new\n   instance\'s "__init__()" method will not be invoked.\n\n   "__new__()" is intended mainly to allow subclasses of immutable\n   types (like int, str, or tuple) to customize instance creation.  It\n   is also commonly overridden in custom metaclasses in order to\n   customize class creation.\n\nobject.__init__(self[, ...])\n\n   Called when the instance is created.  The arguments are those\n   passed to the class constructor expression.  If a base class has an\n   "__init__()" method, the derived class\'s "__init__()" method, if\n   any, must explicitly call it to ensure proper initialization of the\n   base class part of the instance; for example:\n   "BaseClass.__init__(self, [args...])".  As a special constraint on\n   constructors, no value may be returned; doing so will cause a\n   "TypeError" to be raised at runtime.\n\nobject.__del__(self)\n\n   Called when the instance is about to be destroyed.  This is also\n   called a destructor.  If a base class has a "__del__()" method, the\n   derived class\'s "__del__()" method, if any, must explicitly call it\n   to ensure proper deletion of the base class part of the instance.\n   Note that it is possible (though not recommended!) for the\n   "__del__()" method to postpone destruction of the instance by\n   creating a new reference to it.  It may then be called at a later\n   time when this new reference is deleted.  It is not guaranteed that\n   "__del__()" methods are called for objects that still exist when\n   the interpreter exits.\n\n   Note: "del x" doesn\'t directly call "x.__del__()" --- the former\n     decrements the reference count for "x" by one, and the latter is\n     only called when "x"\'s reference count reaches zero.  Some common\n     situations that may prevent the reference count of an object from\n     going to zero include: circular references between objects (e.g.,\n     a doubly-linked list or a tree data structure with parent and\n     child pointers); a reference to the object on the stack frame of\n     a function that caught an exception (the traceback stored in\n     "sys.exc_info()[2]" keeps the stack frame alive); or a reference\n     to the object on the stack frame that raised an unhandled\n     exception in interactive mode (the traceback stored in\n     "sys.last_traceback" keeps the stack frame alive).  
The first\n     situation can only be remedied by explicitly breaking the cycles;\n     the latter two situations can be resolved by storing "None" in\n     "sys.last_traceback". Circular references which are garbage are\n     detected and cleaned up when the cyclic garbage collector is\n     enabled (it\'s on by default). Refer to the documentation for the\n     "gc" module for more information about this topic.\n\n   Warning: Due to the precarious circumstances under which\n     "__del__()" methods are invoked, exceptions that occur during\n     their execution are ignored, and a warning is printed to\n     "sys.stderr" instead. Also, when "__del__()" is invoked in\n     response to a module being deleted (e.g., when execution of the\n     program is done), other globals referenced by the "__del__()"\n     method may already have been deleted or in the process of being\n     torn down (e.g. the import machinery shutting down).  For this\n     reason, "__del__()" methods should do the absolute minimum needed\n     to maintain external invariants.  Starting with version 1.5,\n     Python guarantees that globals whose name begins with a single\n     underscore are deleted from their module before other globals are\n     deleted; if no other references to such globals exist, this may\n     help in assuring that imported modules are still available at the\n     time when the "__del__()" method is called.\n\nobject.__repr__(self)\n\n   Called by the "repr()" built-in function to compute the "official"\n   string representation of an object.  If at all possible, this\n   should look like a valid Python expression that could be used to\n   recreate an object with the same value (given an appropriate\n   environment).  If this is not possible, a string of the form\n   "<...some useful description...>" should be returned. The return\n   value must be a string object. If a class defines "__repr__()" but\n   not "__str__()", then "__repr__()" is also used when an "informal"\n   string representation of instances of that class is required.\n\n   This is typically used for debugging, so it is important that the\n   representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n   Called by "str(object)" and the built-in functions "format()" and\n   "print()" to compute the "informal" or nicely printable string\n   representation of an object.  The return value must be a *string*\n   object.\n\n   This method differs from "object.__repr__()" in that there is no\n   expectation that "__str__()" return a valid Python expression: a\n   more convenient or concise representation can be used.\n\n   The default implementation defined by the built-in type "object"\n   calls "object.__repr__()".\n\nobject.__bytes__(self)\n\n   Called by "bytes()" to compute a byte-string representation of an\n   object. This should return a "bytes" object.\n\nobject.__format__(self, format_spec)\n\n   Called by the "format()" built-in function (and by extension, the\n   "str.format()" method of class "str") to produce a "formatted"\n   string representation of an object. The "format_spec" argument is a\n   string that contains a description of the formatting options\n   desired. 
The interpretation of the "format_spec" argument is up to\n   the type implementing "__format__()", however most classes will\n   either delegate formatting to one of the built-in types, or use a\n   similar formatting option syntax.\n\n   See *Format Specification Mini-Language* for a description of the\n   standard formatting syntax.\n\n   The return value must be a string object.\n\n   Changed in version 3.4: The __format__ method of "object" itself\n   raises a "TypeError" if passed any non-empty string.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n   These are the so-called "rich comparison" methods. The\n   correspondence between operator symbols and method names is as\n   follows: "x<y" calls "x.__lt__(y)", "x<=y" calls "x.__le__(y)",\n   "x==y" calls "x.__eq__(y)", "x!=y" calls "x.__ne__(y)", "x>y" calls\n   "x.__gt__(y)", and "x>=y" calls "x.__ge__(y)".\n\n   A rich comparison method may return the singleton "NotImplemented"\n   if it does not implement the operation for a given pair of\n   arguments. By convention, "False" and "True" are returned for a\n   successful comparison. However, these methods can return any value,\n   so if the comparison operator is used in a Boolean context (e.g.,\n   in the condition of an "if" statement), Python will call "bool()"\n   on the value to determine if the result is true or false.\n\n   There are no implied relationships among the comparison operators.\n   The truth of "x==y" does not imply that "x!=y" is false.\n   Accordingly, when defining "__eq__()", one should also define\n   "__ne__()" so that the operators will behave as expected.  See the\n   paragraph on "__hash__()" for some important notes on creating\n   *hashable* objects which support custom comparison operations and\n   are usable as dictionary keys.\n\n   There are no swapped-argument versions of these methods (to be used\n   when the left argument does not support the operation but the right\n   argument does); rather, "__lt__()" and "__gt__()" are each other\'s\n   reflection, "__le__()" and "__ge__()" are each other\'s reflection,\n   and "__eq__()" and "__ne__()" are their own reflection.\n\n   Arguments to rich comparison methods are never coerced.\n\n   To automatically generate ordering operations from a single root\n   operation, see "functools.total_ordering()".\n\nobject.__hash__(self)\n\n   Called by built-in function "hash()" and for operations on members\n   of hashed collections including "set", "frozenset", and "dict".\n   "__hash__()" should return an integer.  The only required property\n   is that objects which compare equal have the same hash value; it is\n   advised to somehow mix together (e.g. using exclusive or) the hash\n   values for the components of the object that also play a part in\n   comparison of objects.\n\n   Note: "hash()" truncates the value returned from an object\'s\n     custom "__hash__()" method to the size of a "Py_ssize_t".  This\n     is typically 8 bytes on 64-bit builds and 4 bytes on 32-bit\n     builds. If an object\'s   "__hash__()" must interoperate on builds\n     of different bit sizes, be sure to check the width on all\n     supported builds.  
An easy way to do this is with "python -c\n     "import sys; print(sys.hash_info.width)""\n\n   If a class does not define an "__eq__()" method it should not\n   define a "__hash__()" operation either; if it defines "__eq__()"\n   but not "__hash__()", its instances will not be usable as items in\n   hashable collections.  If a class defines mutable objects and\n   implements an "__eq__()" method, it should not implement\n   "__hash__()", since the implementation of hashable collections\n   requires that a key\'s hash value is immutable (if the object\'s hash\n   value changes, it will be in the wrong hash bucket).\n\n   User-defined classes have "__eq__()" and "__hash__()" methods by\n   default; with them, all objects compare unequal (except with\n   themselves) and "x.__hash__()" returns an appropriate value such\n   that "x == y" implies both that "x is y" and "hash(x) == hash(y)".\n\n   A class that overrides "__eq__()" and does not define "__hash__()"\n   will have its "__hash__()" implicitly set to "None".  When the\n   "__hash__()" method of a class is "None", instances of the class\n   will raise an appropriate "TypeError" when a program attempts to\n   retrieve their hash value, and will also be correctly identified as\n   unhashable when checking "isinstance(obj, collections.Hashable").\n\n   If a class that overrides "__eq__()" needs to retain the\n   implementation of "__hash__()" from a parent class, the interpreter\n   must be told this explicitly by setting "__hash__ =\n   <ParentClass>.__hash__".\n\n   If a class that does not override "__eq__()" wishes to suppress\n   hash support, it should include "__hash__ = None" in the class\n   definition. A class which defines its own "__hash__()" that\n   explicitly raises a "TypeError" would be incorrectly identified as\n   hashable by an "isinstance(obj, collections.Hashable)" call.\n\n   Note: By default, the "__hash__()" values of str, bytes and\n     datetime objects are "salted" with an unpredictable random value.\n     Although they remain constant within an individual Python\n     process, they are not predictable between repeated invocations of\n     Python.This is intended to provide protection against a denial-\n     of-service caused by carefully-chosen inputs that exploit the\n     worst case performance of a dict insertion, O(n^2) complexity.\n     See http://www.ocert.org/advisories/ocert-2011-003.html for\n     details.Changing hash values affects the iteration order of\n     dicts, sets and other mappings.  Python has never made guarantees\n     about this ordering (and it typically varies between 32-bit and\n     64-bit builds).See also "PYTHONHASHSEED".\n\n   Changed in version 3.3: Hash randomization is enabled by default.\n\nobject.__bool__(self)\n\n   Called to implement truth value testing and the built-in operation\n   "bool()"; should return "False" or "True".  When this method is not\n   defined, "__len__()" is called, if it is defined, and the object is\n   considered true if its result is nonzero.  If a class defines\n   neither "__len__()" nor "__bool__()", all its instances are\n   considered true.\n',
  'debugger': '\n"pdb" --- The Python Debugger\n*****************************\n\nThe module "pdb" defines an interactive source code debugger for\nPython programs.  It supports setting (conditional) breakpoints and\nsingle stepping at the source line level, inspection of stack frames,\nsource code listing, and evaluation of arbitrary Python code in the\ncontext of any stack frame.  It also supports post-mortem debugging\nand can be called under program control.\n\nThe debugger is extensible -- it is actually defined as the class\n"Pdb". This is currently undocumented but easily understood by reading\nthe source.  The extension interface uses the modules "bdb" and "cmd".\n\nThe debugger\'s prompt is "(Pdb)". Typical usage to run a program under\ncontrol of the debugger is:\n\n   >>> import pdb\n   >>> import mymodule\n   >>> pdb.run(\'mymodule.test()\')\n   > <string>(0)?()\n   (Pdb) continue\n   > <string>(1)?()\n   (Pdb) continue\n   NameError: \'spam\'\n   > <string>(1)?()\n   (Pdb)\n\nChanged in version 3.3: Tab-completion via the "readline" module is\navailable for commands and command arguments, e.g. the current global\nand local names are offered as arguments of the "p" command.\n\n"pdb.py" can also be invoked as a script to debug other scripts.  For\nexample:\n\n   python3 -m pdb myscript.py\n\nWhen invoked as a script, pdb will automatically enter post-mortem\ndebugging if the program being debugged exits abnormally.  After post-\nmortem debugging (or after normal exit of the program), pdb will\nrestart the program.  Automatic restarting preserves pdb\'s state (such\nas breakpoints) and in most cases is more useful than quitting the\ndebugger upon program\'s exit.\n\nNew in version 3.2: "pdb.py" now accepts a "-c" option that executes\ncommands as if given in a ".pdbrc" file, see *Debugger Commands*.\n\nThe typical usage to break into the debugger from a running program is\nto insert\n\n   import pdb; pdb.set_trace()\n\nat the location you want to break into the debugger.  You can then\nstep through the code following this statement, and continue running\nwithout the debugger using the "continue" command.\n\nThe typical usage to inspect a crashed program is:\n\n   >>> import pdb\n   >>> import mymodule\n   >>> mymodule.test()\n   Traceback (most recent call last):\n     File "<stdin>", line 1, in ?\n     File "./mymodule.py", line 4, in test\n       test2()\n     File "./mymodule.py", line 3, in test2\n       print(spam)\n   NameError: spam\n   >>> pdb.pm()\n   > ./mymodule.py(3)test2()\n   -> print(spam)\n   (Pdb)\n\nThe module defines the following functions; each enters the debugger\nin a slightly different way:\n\npdb.run(statement, globals=None, locals=None)\n\n   Execute the *statement* (given as a string or a code object) under\n   debugger control.  The debugger prompt appears before any code is\n   executed; you can set breakpoints and type "continue", or you can\n   step through the statement using "step" or "next" (all these\n   commands are explained below).  The optional *globals* and *locals*\n   arguments specify the environment in which the code is executed; by\n   default the dictionary of the module "__main__" is used.  (See the\n   explanation of the built-in "exec()" or "eval()" functions.)\n\npdb.runeval(expression, globals=None, locals=None)\n\n   Evaluate the *expression* (given as a string or a code object)\n   under debugger control.  When "runeval()" returns, it returns the\n   value of the expression.  
Otherwise this function is similar to\n   "run()".\n\npdb.runcall(function, *args, **kwds)\n\n   Call the *function* (a function or method object, not a string)\n   with the given arguments.  When "runcall()" returns, it returns\n   whatever the function call returned.  The debugger prompt appears\n   as soon as the function is entered.\n\npdb.set_trace()\n\n   Enter the debugger at the calling stack frame.  This is useful to\n   hard-code a breakpoint at a given point in a program, even if the\n   code is not otherwise being debugged (e.g. when an assertion\n   fails).\n\npdb.post_mortem(traceback=None)\n\n   Enter post-mortem debugging of the given *traceback* object.  If no\n   *traceback* is given, it uses the one of the exception that is\n   currently being handled (an exception must be being handled if the\n   default is to be used).\n\npdb.pm()\n\n   Enter post-mortem debugging of the traceback found in\n   "sys.last_traceback".\n\nThe "run*" functions and "set_trace()" are aliases for instantiating\nthe "Pdb" class and calling the method of the same name.  If you want\nto access further features, you have to do this yourself:\n\nclass class pdb.Pdb(completekey=\'tab\', stdin=None, stdout=None, skip=None, nosigint=False)\n\n   "Pdb" is the debugger class.\n\n   The *completekey*, *stdin* and *stdout* arguments are passed to the\n   underlying "cmd.Cmd" class; see the description there.\n\n   The *skip* argument, if given, must be an iterable of glob-style\n   module name patterns.  The debugger will not step into frames that\n   originate in a module that matches one of these patterns. [1]\n\n   By default, Pdb sets a handler for the SIGINT signal (which is sent\n   when the user presses Ctrl-C on the console) when you give a\n   "continue" command. This allows you to break into the debugger\n   again by pressing Ctrl-C.  If you want Pdb not to touch the SIGINT\n   handler, set *nosigint* tot true.\n\n   Example call to enable tracing with *skip*:\n\n      import pdb; pdb.Pdb(skip=[\'django.*\']).set_trace()\n\n   New in version 3.1: The *skip* argument.\n\n   New in version 3.2: The *nosigint* argument.  Previously, a SIGINT\n   handler was never set by Pdb.\n\n   run(statement, globals=None, locals=None)\n   runeval(expression, globals=None, locals=None)\n   runcall(function, *args, **kwds)\n   set_trace()\n\n      See the documentation for the functions explained above.\n\n\nDebugger Commands\n=================\n\nThe commands recognized by the debugger are listed below.  Most\ncommands can be abbreviated to one or two letters as indicated; e.g.\n"h(elp)" means that either "h" or "help" can be used to enter the help\ncommand (but not "he" or "hel", nor "H" or "Help" or "HELP").\nArguments to commands must be separated by whitespace (spaces or\ntabs).  Optional arguments are enclosed in square brackets ("[]") in\nthe command syntax; the square brackets must not be typed.\nAlternatives in the command syntax are separated by a vertical bar\n("|").\n\nEntering a blank line repeats the last command entered.  Exception: if\nthe last command was a "list" command, the next 11 lines are listed.\n\nCommands that the debugger doesn\'t recognize are assumed to be Python\nstatements and are executed in the context of the program being\ndebugged.  Python statements can also be prefixed with an exclamation\npoint ("!").  
This is a powerful way to inspect the program being\ndebugged; it is even possible to change a variable or call a function.\nWhen an exception occurs in such a statement, the exception name is\nprinted but the debugger\'s state is not changed.\n\nThe debugger supports *aliases*.  Aliases can have parameters which\nallows one a certain level of adaptability to the context under\nexamination.\n\nMultiple commands may be entered on a single line, separated by ";;".\n(A single ";" is not used as it is the separator for multiple commands\nin a line that is passed to the Python parser.)  No intelligence is\napplied to separating the commands; the input is split at the first\n";;" pair, even if it is in the middle of a quoted string.\n\nIf a file ".pdbrc" exists in the user\'s home directory or in the\ncurrent directory, it is read in and executed as if it had been typed\nat the debugger prompt.  This is particularly useful for aliases.  If\nboth files exist, the one in the home directory is read first and\naliases defined there can be overridden by the local file.\n\nChanged in version 3.2: ".pdbrc" can now contain commands that\ncontinue debugging, such as "continue" or "next".  Previously, these\ncommands had no effect.\n\nh(elp) [command]\n\n   Without argument, print the list of available commands.  With a\n   *command* as argument, print help about that command.  "help pdb"\n   displays the full documentation (the docstring of the "pdb"\n   module).  Since the *command* argument must be an identifier, "help\n   exec" must be entered to get help on the "!" command.\n\nw(here)\n\n   Print a stack trace, with the most recent frame at the bottom.  An\n   arrow indicates the current frame, which determines the context of\n   most commands.\n\nd(own) [count]\n\n   Move the current frame *count* (default one) levels down in the\n   stack trace (to a newer frame).\n\nu(p) [count]\n\n   Move the current frame *count* (default one) levels up in the stack\n   trace (to an older frame).\n\nb(reak) [([filename:]lineno | function) [, condition]]\n\n   With a *lineno* argument, set a break there in the current file.\n   With a *function* argument, set a break at the first executable\n   statement within that function.  The line number may be prefixed\n   with a filename and a colon, to specify a breakpoint in another\n   file (probably one that hasn\'t been loaded yet).  The file is\n   searched on "sys.path".  Note that each breakpoint is assigned a\n   number to which all the other breakpoint commands refer.\n\n   If a second argument is present, it is an expression which must\n   evaluate to true before the breakpoint is honored.\n\n   Without argument, list all breaks, including for each breakpoint,\n   the number of times that breakpoint has been hit, the current\n   ignore count, and the associated condition if any.\n\ntbreak [([filename:]lineno | function) [, condition]]\n\n   Temporary breakpoint, which is removed automatically when it is\n   first hit. The arguments are the same as for "break".\n\ncl(ear) [filename:lineno | bpnumber [bpnumber ...]]\n\n   With a *filename:lineno* argument, clear all the breakpoints at\n   this line. With a space separated list of breakpoint numbers, clear\n   those breakpoints. Without argument, clear all breaks (but first\n   ask confirmation).\n\ndisable [bpnumber [bpnumber ...]]\n\n   Disable the breakpoints given as a space separated list of\n   breakpoint numbers.  
Disabling a breakpoint means it cannot cause\n   the program to stop execution, but unlike clearing a breakpoint, it\n   remains in the list of breakpoints and can be (re-)enabled.\n\nenable [bpnumber [bpnumber ...]]\n\n   Enable the breakpoints specified.\n\nignore bpnumber [count]\n\n   Set the ignore count for the given breakpoint number.  If count is\n   omitted, the ignore count is set to 0.  A breakpoint becomes active\n   when the ignore count is zero.  When non-zero, the count is\n   decremented each time the breakpoint is reached and the breakpoint\n   is not disabled and any associated condition evaluates to true.\n\ncondition bpnumber [condition]\n\n   Set a new *condition* for the breakpoint, an expression which must\n   evaluate to true before the breakpoint is honored.  If *condition*\n   is absent, any existing condition is removed; i.e., the breakpoint\n   is made unconditional.\n\ncommands [bpnumber]\n\n   Specify a list of commands for breakpoint number *bpnumber*.  The\n   commands themselves appear on the following lines.  Type a line\n   containing just "end" to terminate the commands. An example:\n\n      (Pdb) commands 1\n      (com) p some_variable\n      (com) end\n      (Pdb)\n\n   To remove all commands from a breakpoint, type commands and follow\n   it immediately with "end"; that is, give no commands.\n\n   With no *bpnumber* argument, commands refers to the last breakpoint\n   set.\n\n   You can use breakpoint commands to start your program up again.\n   Simply use the continue command, or step, or any other command that\n   resumes execution.\n\n   Specifying any command resuming execution (currently continue,\n   step, next, return, jump, quit and their abbreviations) terminates\n   the command list (as if that command was immediately followed by\n   end). This is because any time you resume execution (even with a\n   simple next or step), you may encounter another breakpoint--which\n   could have its own command list, leading to ambiguities about which\n   list to execute.\n\n   If you use the \'silent\' command in the command list, the usual\n   message about stopping at a breakpoint is not printed.  This may be\n   desirable for breakpoints that are to print a specific message and\n   then continue.  If none of the other commands print anything, you\n   see no sign that the breakpoint was reached.\n\ns(tep)\n\n   Execute the current line, stop at the first possible occasion\n   (either in a function that is called or on the next line in the\n   current function).\n\nn(ext)\n\n   Continue execution until the next line in the current function is\n   reached or it returns.  (The difference between "next" and "step"\n   is that "step" stops inside a called function, while "next"\n   executes called functions at (nearly) full speed, only stopping at\n   the next line in the current function.)\n\nunt(il) [lineno]\n\n   Without argument, continue execution until the line with a number\n   greater than the current one is reached.\n\n   With a line number, continue execution until a line with a number\n   greater or equal to that is reached.  In both cases, also stop when\n   the current frame returns.\n\n   Changed in version 3.2: Allow giving an explicit line number.\n\nr(eturn)\n\n   Continue execution until the current function returns.\n\nc(ont(inue))\n\n   Continue execution, only stop when a breakpoint is encountered.\n\nj(ump) lineno\n\n   Set the next line that will be executed.  Only available in the\n   bottom-most frame.  
This lets you jump back and execute code again,\n   or jump forward to skip code that you don\'t want to run.\n\n   It should be noted that not all jumps are allowed -- for instance\n   it is not possible to jump into the middle of a "for" loop or out\n   of a "finally" clause.\n\nl(ist) [first[, last]]\n\n   List source code for the current file.  Without arguments, list 11\n   lines around the current line or continue the previous listing.\n   With "." as argument, list 11 lines around the current line.  With\n   one argument, list 11 lines around at that line.  With two\n   arguments, list the given range; if the second argument is less\n   than the first, it is interpreted as a count.\n\n   The current line in the current frame is indicated by "->".  If an\n   exception is being debugged, the line where the exception was\n   originally raised or propagated is indicated by ">>", if it differs\n   from the current line.\n\n   New in version 3.2: The ">>" marker.\n\nll | longlist\n\n   List all source code for the current function or frame.\n   Interesting lines are marked as for "list".\n\n   New in version 3.2.\n\na(rgs)\n\n   Print the argument list of the current function.\n\np expression\n\n   Evaluate the *expression* in the current context and print its\n   value.\n\n   Note: "print()" can also be used, but is not a debugger command\n     --- this executes the Python "print()" function.\n\npp expression\n\n   Like the "p" command, except the value of the expression is pretty-\n   printed using the "pprint" module.\n\nwhatis expression\n\n   Print the type of the *expression*.\n\nsource expression\n\n   Try to get source code for the given object and display it.\n\n   New in version 3.2.\n\ndisplay [expression]\n\n   Display the value of the expression if it changed, each time\n   execution stops in the current frame.\n\n   Without expression, list all display expressions for the current\n   frame.\n\n   New in version 3.2.\n\nundisplay [expression]\n\n   Do not display the expression any more in the current frame.\n   Without expression, clear all display expressions for the current\n   frame.\n\n   New in version 3.2.\n\ninteract\n\n   Start an interative interpreter (using the "code" module) whose\n   global namespace contains all the (global and local) names found in\n   the current scope.\n\n   New in version 3.2.\n\nalias [name [command]]\n\n   Create an alias called *name* that executes *command*.  The command\n   must *not* be enclosed in quotes.  Replaceable parameters can be\n   indicated by "%1", "%2", and so on, while "%*" is replaced by all\n   the parameters. If no command is given, the current alias for\n   *name* is shown. If no arguments are given, all aliases are listed.\n\n   Aliases may be nested and can contain anything that can be legally\n   typed at the pdb prompt.  Note that internal pdb commands *can* be\n   overridden by aliases.  Such a command is then hidden until the\n   alias is removed.  Aliasing is recursively applied to the first\n   word of the command line; all other words in the line are left\n   alone.\n\n   As an example, here are two useful aliases (especially when placed\n   in the ".pdbrc" file):\n\n      # Print instance variables (usage "pi classInst")\n      alias pi for k in %1.__dict__.keys(): print("%1.",k,"=",%1.__dict__[k])\n      # Print instance variables in self\n      alias ps pi self\n\nunalias name\n\n   Delete the specified alias.\n\n! 
statement\n\n   Execute the (one-line) *statement* in the context of the current\n   stack frame. The exclamation point can be omitted unless the first\n   word of the statement resembles a debugger command.  To set a\n   global variable, you can prefix the assignment command with a\n   "global" statement on the same line, e.g.:\n\n      (Pdb) global list_options; list_options = [\'-l\']\n      (Pdb)\n\nrun [args ...]\nrestart [args ...]\n\n   Restart the debugged Python program.  If an argument is supplied,\n   it is split with "shlex" and the result is used as the new\n   "sys.argv". History, breakpoints, actions and debugger options are\n   preserved. "restart" is an alias for "run".\n\nq(uit)\n\n   Quit from the debugger.  The program being executed is aborted.\n\n-[ Footnotes ]-\n\n[1] Whether a frame is considered to originate in a certain module\n    is determined by the "__name__" in the frame globals.\n',
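# A hypothetical sketch (not part of the pdb documentation above) of how the
# commands described there are typically reached from a script; buggy() is an
# invented example function.
import pdb

def buggy(items):
    total = 0
    for item in items:
        pdb.set_trace()   # enters the (Pdb) prompt; try "p total", "w(here)",
                          # "n(ext)", "unt(il)" or "c(ont(inue))"
        total += item
    return total

if __name__ == "__main__":
    print(buggy([1, 2, 3]))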
  'del': '\nThe "del" statement\n*******************\n\n   del_stmt ::= "del" target_list\n\nDeletion is recursively defined very similarly to the way assignment is\ndefined. Rather than spelling it out in full detail, here are some\nhints.\n\nDeletion of a target list recursively deletes each target, from left\nto right.\n\nDeletion of a name removes the binding of that name from the local or\nglobal namespace, depending on whether the name occurs in a "global"\nstatement in the same code block.  If the name is unbound, a\n"NameError" exception will be raised.\n\nDeletion of attribute references, subscriptions and slicings is passed\nto the primary object involved; deletion of a slicing is in general\nequivalent to assignment of an empty slice of the right type (but even\nthis is determined by the sliced object).\n\nChanged in version 3.2: Previously it was illegal to delete a name\nfrom the local namespace if it occurs as a free variable in a nested\nblock.\n',
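# Illustrative examples of the "del" forms described above (subscription,
# slicing, and plain name); output shown in comments.
data = {"a": 1, "b": 2}
items = [0, 1, 2, 3, 4]
x = 42

del data["a"]       # deletion of a subscription is passed to the dict
del items[1:3]      # deleting a slicing ~ assigning an empty slice
del x               # unbinds the name; using x afterwards raises NameError

print(data)         # {'b': 2}
print(items)        # [0, 3, 4]
try:
    x
except NameError as exc:
    print(exc)      # name 'x' is not defined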
  'dict': '\nDictionary displays\n*******************\n\nA dictionary display is a possibly empty series of key/datum pairs\nenclosed in curly braces:\n\n   dict_display       ::= "{" [key_datum_list | dict_comprehension] "}"\n   key_datum_list     ::= key_datum ("," key_datum)* [","]\n   key_datum          ::= expression ":" expression\n   dict_comprehension ::= expression ":" expression comp_for\n\nA dictionary display yields a new dictionary object.\n\nIf a comma-separated sequence of key/datum pairs is given, they are\nevaluated from left to right to define the entries of the dictionary:\neach key object is used as a key into the dictionary to store the\ncorresponding datum.  This means that you can specify the same key\nmultiple times in the key/datum list, and the final dictionary\'s value\nfor that key will be the last one given.\n\nA dict comprehension, in contrast to list and set comprehensions,\nneeds two expressions separated with a colon followed by the usual\n"for" and "if" clauses. When the comprehension is run, the resulting\nkey and value elements are inserted in the new dictionary in the order\nthey are produced.\n\nRestrictions on the types of the key values are listed earlier in\nsection *The standard type hierarchy*.  (To summarize, the key type\nshould be *hashable*, which excludes all mutable objects.)  Clashes\nbetween duplicate keys are not detected; the last datum (textually\nrightmost in the display) stored for a given key value prevails.\n',
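# Small illustrations of the dictionary displays and comprehensions described
# above; output shown in comments.
d = {"x": 1, "x": 2}                      # duplicate key: the last datum wins
print(d)                                  # {'x': 2}

squares = {n: n * n for n in range(5)}    # dict comprehension
print(squares)                            # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}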
@@ -49,7 +49,7 @@
  'naming': '\nNaming and binding\n******************\n\n*Names* refer to objects.  Names are introduced by name binding\noperations. Each occurrence of a name in the program text refers to\nthe *binding* of that name established in the innermost function block\ncontaining the use.\n\nA *block* is a piece of Python program text that is executed as a\nunit. The following are blocks: a module, a function body, and a class\ndefinition. Each command typed interactively is a block.  A script\nfile (a file given as standard input to the interpreter or specified\non the interpreter command line the first argument) is a code block.\nA script command (a command specified on the interpreter command line\nwith the \'**-c**\' option) is a code block.  The string argument passed\nto the built-in functions "eval()" and "exec()" is a code block.\n\nA code block is executed in an *execution frame*.  A frame contains\nsome administrative information (used for debugging) and determines\nwhere and how execution continues after the code block\'s execution has\ncompleted.\n\nA *scope* defines the visibility of a name within a block.  If a local\nvariable is defined in a block, its scope includes that block.  If the\ndefinition occurs in a function block, the scope extends to any blocks\ncontained within the defining one, unless a contained block introduces\na different binding for the name.  The scope of names defined in a\nclass block is limited to the class block; it does not extend to the\ncode blocks of methods -- this includes comprehensions and generator\nexpressions since they are implemented using a function scope.  This\nmeans that the following will fail:\n\n   class A:\n       a = 42\n       b = list(a + i for i in range(10))\n\nWhen a name is used in a code block, it is resolved using the nearest\nenclosing scope.  The set of all such scopes visible to a code block\nis called the block\'s *environment*.\n\nIf a name is bound in a block, it is a local variable of that block,\nunless declared as "nonlocal".  If a name is bound at the module\nlevel, it is a global variable.  (The variables of the module code\nblock are local and global.)  If a variable is used in a code block\nbut not defined there, it is a *free variable*.\n\nWhen a name is not found at all, a "NameError" exception is raised.\nIf the name refers to a local variable that has not been bound, a\n"UnboundLocalError" exception is raised.  "UnboundLocalError" is a\nsubclass of "NameError".\n\nThe following constructs bind names: formal parameters to functions,\n"import" statements, class and function definitions (these bind the\nclass or function name in the defining block), and targets that are\nidentifiers if occurring in an assignment, "for" loop header, or after\n"as" in a "with" statement or "except" clause. The "import" statement\nof the form "from ... import *" binds all names defined in the\nimported module, except those beginning with an underscore.  This form\nmay only be used at the module level.\n\nA target occurring in a "del" statement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name).\n\nEach assignment or import statement occurs within a block defined by a\nclass or function definition or at the module level (the top-level\ncode block).\n\nIf a name binding operation occurs anywhere within a code block, all\nuses of the name within the block are treated as references to the\ncurrent block.  This can lead to errors when a name is used within a\nblock before it is bound.  
This rule is subtle.  Python lacks\ndeclarations and allows name binding operations to occur anywhere\nwithin a code block.  The local variables of a code block can be\ndetermined by scanning the entire text of the block for name binding\noperations.\n\nIf the "global" statement occurs within a block, all uses of the name\nspecified in the statement refer to the binding of that name in the\ntop-level namespace.  Names are resolved in the top-level namespace by\nsearching the global namespace, i.e. the namespace of the module\ncontaining the code block, and the builtins namespace, the namespace\nof the module "builtins".  The global namespace is searched first.  If\nthe name is not found there, the builtins namespace is searched.  The\nglobal statement must precede all uses of the name.\n\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name "__builtins__" in its global\nnamespace; this should be a dictionary or a module (in the latter case\nthe module\'s dictionary is used).  By default, when in the "__main__"\nmodule, "__builtins__" is the built-in module "builtins"; when in any\nother module, "__builtins__" is an alias for the dictionary of the\n"builtins" module itself.  "__builtins__" can be set to a user-created\ndictionary to create a weak form of restricted execution.\n\n**CPython implementation detail:** Users should not touch\n"__builtins__"; it is strictly an implementation detail.  Users\nwanting to override values in the builtins namespace should "import"\nthe "builtins" module and modify its attributes appropriately.\n\nThe namespace for a module is automatically created the first time a\nmodule is imported.  The main module for a script is always called\n"__main__".\n\nThe "global" statement has the same scope as a name binding operation\nin the same block.  If the nearest enclosing scope for a free variable\ncontains a global statement, the free variable is treated as a global.\n\nA class definition is an executable statement that may use and define\nnames. These references follow the normal rules for name resolution.\nThe namespace of the class definition becomes the attribute dictionary\nof the class.  Names defined at the class scope are not visible in\nmethods.\n\n\nInteraction with dynamic features\n=================================\n\nThere are several cases where Python statements are illegal when used\nin conjunction with nested scopes that contain free variables.\n\nIf a variable is referenced in an enclosing scope, it is illegal to\ndelete the name.  An error will be reported at compile time.\n\nIf the wild card form of import --- "import *" --- is used in a\nfunction and the function contains or is a nested block with free\nvariables, the compiler will raise a "SyntaxError".\n\nThe "eval()" and "exec()" functions do not have access to the full\nenvironment for resolving names.  Names may be resolved in the local\nand global namespaces of the caller.  Free variables are not resolved\nin the nearest enclosing namespace, but in the global namespace.  [1]\nThe "exec()" and "eval()" functions have optional arguments to\noverride the global and local namespace.  If only one namespace is\nspecified, it is used for both.\n',
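# Hedged illustrations of two rules above: a name bound anywhere in a block is
# local throughout that block, and class scope is not visible to the function
# scope of a comprehension or generator expression.
def shadowing():
    print(value)     # 'value' is assigned below, so it is local here and
    value = 10       # this lookup raises UnboundLocalError, not NameError

try:
    shadowing()
except UnboundLocalError as exc:
    print(exc)

try:
    class A:
        a = 42
        b = list(a + i for i in range(10))   # 'a' is not visible inside the
                                             # generator's function scope
except NameError as exc:
    print(exc)       # name 'a' is not defined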
  'nonlocal': '\nThe "nonlocal" statement\n************************\n\n   nonlocal_stmt ::= "nonlocal" identifier ("," identifier)*\n\nThe "nonlocal" statement causes the listed identifiers to refer to\npreviously bound variables in the nearest enclosing scope.  This is\nimportant because the default behavior for binding is to search the\nlocal namespace first.  The statement allows encapsulated code to\nrebind variables outside of the local scope besides the global\n(module) scope.\n\nNames listed in a "nonlocal" statement, unlike those listed in a\n"global" statement, must refer to pre-existing bindings in an\nenclosing scope (the scope in which a new binding should be created\ncannot be determined unambiguously).\n\nNames listed in a "nonlocal" statement must not collide with pre-\nexisting bindings in the local scope.\n\nSee also: **PEP 3104** - Access to Names in Outer Scopes\n\n     The specification for the "nonlocal" statement.\n',
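# A minimal illustration of the "nonlocal" statement described above: the
# inner function rebinds a pre-existing binding in the enclosing scope.
def make_counter():
    count = 0                    # the pre-existing binding
    def increment():
        nonlocal count           # rebind the enclosing 'count', not a local
        count += 1
        return count
    return increment

counter = make_counter()
print(counter(), counter(), counter())   # 1 2 3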
  'numbers': '\nNumeric literals\n****************\n\nThere are three types of numeric literals: integers, floating point\nnumbers, and imaginary numbers.  There are no complex literals\n(complex numbers can be formed by adding a real number and an\nimaginary number).\n\nNote that numeric literals do not include a sign; a phrase like "-1"\nis actually an expression composed of the unary operator \'"-"\' and the\nliteral "1".\n',
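# A small check (using the ast module, which is not mentioned above) that "-1"
# is parsed as the unary "-" operator applied to the literal "1".
import ast

tree = ast.parse("-1", mode="eval")
print(isinstance(tree.body, ast.UnaryOp))   # True: an expression, not a literal
print(isinstance(tree.body.op, ast.USub))   # True: unary minus applied to 1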
- 'numeric-types': '\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__truediv__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n   These methods are called to implement the binary arithmetic\n   operations ("+", "-", "*", "/", "//", "%", "divmod()", "pow()",\n   "**", "<<", ">>", "&", "^", "|").  For instance, to evaluate the\n   expression "x + y", where *x* is an instance of a class that has an\n   "__add__()" method, "x.__add__(y)" is called.  The "__divmod__()"\n   method should be the equivalent to using "__floordiv__()" and\n   "__mod__()"; it should not be related to "__truediv__()".  Note\n   that "__pow__()" should be defined to accept an optional third\n   argument if the ternary version of the built-in "pow()" function is\n   to be supported.\n\n   If one of those methods does not support the operation with the\n   supplied arguments, it should return "NotImplemented".\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n   These methods are called to implement the binary arithmetic\n   operations ("+", "-", "*", "/", "//", "%", "divmod()", "pow()",\n   "**", "<<", ">>", "&", "^", "|") with reflected (swapped) operands.\n   These functions are only called if the left operand does not\n   support the corresponding operation and the operands are of\n   different types. [2]  For instance, to evaluate the expression "x -\n   y", where *y* is an instance of a class that has an "__rsub__()"\n   method, "y.__rsub__(x)" is called if "x.__sub__(y)" returns\n   *NotImplemented*.\n\n   Note that ternary "pow()" will not try calling "__rpow__()" (the\n   coercion rules would become too complicated).\n\n   Note: If the right operand\'s type is a subclass of the left\n     operand\'s type and that subclass provides the reflected method\n     for the operation, this method will be called before the left\n     operand\'s non-reflected method.  This behavior allows subclasses\n     to override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n   These methods are called to implement the augmented arithmetic\n   assignments ("+=", "-=", "*=", "/=", "//=", "%=", "**=", "<<=",\n   ">>=", "&=", "^=", "|=").  
These methods should attempt to do the\n   operation in-place (modifying *self*) and return the result (which\n   could be, but does not have to be, *self*).  If a specific method\n   is not defined, the augmented assignment falls back to the normal\n   methods.  For instance, to execute the statement "x += y", where\n   *x* is an instance of a class that has an "__iadd__()" method,\n   "x.__iadd__(y)" is called.  If *x* is an instance of a class that\n   does not define a "__iadd__()" method, "x.__add__(y)" and\n   "y.__radd__(x)" are considered, as with the evaluation of "x + y".\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n   Called to implement the unary arithmetic operations ("-", "+",\n   "abs()" and "~").\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__float__(self)\nobject.__round__(self[, n])\n\n   Called to implement the built-in functions "complex()", "int()",\n   "float()" and "round()".  Should return a value of the appropriate\n   type.\n\nobject.__index__(self)\n\n   Called to implement "operator.index()", and whenever Python needs\n   to losslessly convert the numeric object to an integer object (such\n   as in slicing, or in the built-in "bin()", "hex()" and "oct()"\n   functions). Presence of this method indicates that the numeric\n   object is an integer type.  Must return an integer.\n\n   Note: When "__index__()" is defined, "__int__()" should also be\n     defined, and both shuld return the same value, in order to have a\n     coherent integer type class.\n',
+ 'numeric-types': '\nEmulating numeric types\n***********************\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__truediv__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n   These methods are called to implement the binary arithmetic\n   operations ("+", "-", "*", "/", "//", "%", "divmod()", "pow()",\n   "**", "<<", ">>", "&", "^", "|").  For instance, to evaluate the\n   expression "x + y", where *x* is an instance of a class that has an\n   "__add__()" method, "x.__add__(y)" is called.  The "__divmod__()"\n   method should be the equivalent to using "__floordiv__()" and\n   "__mod__()"; it should not be related to "__truediv__()".  Note\n   that "__pow__()" should be defined to accept an optional third\n   argument if the ternary version of the built-in "pow()" function is\n   to be supported.\n\n   If one of those methods does not support the operation with the\n   supplied arguments, it should return "NotImplemented".\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n   These methods are called to implement the binary arithmetic\n   operations ("+", "-", "*", "/", "//", "%", "divmod()", "pow()",\n   "**", "<<", ">>", "&", "^", "|") with reflected (swapped) operands.\n   These functions are only called if the left operand does not\n   support the corresponding operation and the operands are of\n   different types. [2]  For instance, to evaluate the expression "x -\n   y", where *y* is an instance of a class that has an "__rsub__()"\n   method, "y.__rsub__(x)" is called if "x.__sub__(y)" returns\n   *NotImplemented*.\n\n   Note that ternary "pow()" will not try calling "__rpow__()" (the\n   coercion rules would become too complicated).\n\n   Note: If the right operand\'s type is a subclass of the left\n     operand\'s type and that subclass provides the reflected method\n     for the operation, this method will be called before the left\n     operand\'s non-reflected method.  This behavior allows subclasses\n     to override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n   These methods are called to implement the augmented arithmetic\n   assignments ("+=", "-=", "*=", "/=", "//=", "%=", "**=", "<<=",\n   ">>=", "&=", "^=", "|=").  
These methods should attempt to do the\n   operation in-place (modifying *self*) and return the result (which\n   could be, but does not have to be, *self*).  If a specific method\n   is not defined, the augmented assignment falls back to the normal\n   methods.  For instance, if *x* is an instance of a class with an\n   "__iadd__()" method, "x += y" is equivalent to "x = x.__iadd__(y)".\n   Otherwise, "x.__add__(y)" and "y.__radd__(x)" are considered, as\n   with the evaluation of "x + y". In certain situations, augmented\n   assignment can result in unexpected errors (see *Why does\n   a_tuple[i] += [\'item\'] raise an exception when the addition\n   works?*), but this behavior is in fact part of the data model.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n   Called to implement the unary arithmetic operations ("-", "+",\n   "abs()" and "~").\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__float__(self)\nobject.__round__(self[, n])\n\n   Called to implement the built-in functions "complex()", "int()",\n   "float()" and "round()".  Should return a value of the appropriate\n   type.\n\nobject.__index__(self)\n\n   Called to implement "operator.index()", and whenever Python needs\n   to losslessly convert the numeric object to an integer object (such\n   as in slicing, or in the built-in "bin()", "hex()" and "oct()"\n   functions). Presence of this method indicates that the numeric\n   object is an integer type.  Must return an integer.\n\n   Note: When "__index__()" is defined, "__int__()" should also be\n     defined, and both should return the same value, in order to have a\n     coherent integer type class.\n',
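# A sketch of the binary / reflected / in-place protocol described above; the
# Money class is invented for illustration and is not part of the data model.
class Money:
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):
        if isinstance(other, Money):
            return Money(self.cents + other.cents)
        return NotImplemented            # let Python try other.__radd__()

    def __radd__(self, other):
        if other == 0:                   # makes sum() over Money values work
            return Money(self.cents)
        return NotImplemented

    def __iadd__(self, other):
        if isinstance(other, Money):
            self.cents += other.cents    # in-place; return the result (self)
            return self
        return NotImplemented

a = Money(100)
a += Money(50)                           # calls a.__iadd__(Money(50))
print(a.cents)                           # 150
print(sum([Money(1), Money(2)]).cents)   # 3, via __radd__() on the initial 0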
  'objects': '\nObjects, values and types\n*************************\n\n*Objects* are Python\'s abstraction for data.  All data in a Python\nprogram is represented by objects or by relations between objects. (In\na sense, and in conformance to Von Neumann\'s model of a "stored\nprogram computer," code is also represented by objects.)\n\nEvery object has an identity, a type and a value.  An object\'s\n*identity* never changes once it has been created; you may think of it\nas the object\'s address in memory.  The \'"is"\' operator compares the\nidentity of two objects; the "id()" function returns an integer\nrepresenting its identity.\n\n**CPython implementation detail:** For CPython, "id(x)" is the memory\naddress where "x" is stored.\n\nAn object\'s type determines the operations that the object supports\n(e.g., "does it have a length?") and also defines the possible values\nfor objects of that type.  The "type()" function returns an object\'s\ntype (which is an object itself).  Like its identity, an object\'s\n*type* is also unchangeable. [1]\n\nThe *value* of some objects can change.  Objects whose value can\nchange are said to be *mutable*; objects whose value is unchangeable\nonce they are created are called *immutable*. (The value of an\nimmutable container object that contains a reference to a mutable\nobject can change when the latter\'s value is changed; however the\ncontainer is still considered immutable, because the collection of\nobjects it contains cannot be changed.  So, immutability is not\nstrictly the same as having an unchangeable value, it is more subtle.)\nAn object\'s mutability is determined by its type; for instance,\nnumbers, strings and tuples are immutable, while dictionaries and\nlists are mutable.\n\nObjects are never explicitly destroyed; however, when they become\nunreachable they may be garbage-collected.  An implementation is\nallowed to postpone garbage collection or omit it altogether --- it is\na matter of implementation quality how garbage collection is\nimplemented, as long as no objects are collected that are still\nreachable.\n\n**CPython implementation detail:** CPython currently uses a reference-\ncounting scheme with (optional) delayed detection of cyclically linked\ngarbage, which collects most objects as soon as they become\nunreachable, but is not guaranteed to collect garbage containing\ncircular references.  See the documentation of the "gc" module for\ninformation on controlling the collection of cyclic garbage. Other\nimplementations act differently and CPython may change. Do not depend\non immediate finalization of objects when they become unreachable (ex:\nalways close files).\n\nNote that the use of the implementation\'s tracing or debugging\nfacilities may keep objects alive that would normally be collectable.\nAlso note that catching an exception with a \'"try"..."except"\'\nstatement may keep objects alive.\n\nSome objects contain references to "external" resources such as open\nfiles or windows.  It is understood that these resources are freed\nwhen the object is garbage-collected, but since garbage collection is\nnot guaranteed to happen, such objects also provide an explicit way to\nrelease the external resource, usually a "close()" method. Programs\nare strongly recommended to explicitly close such objects.  The\n\'"try"..."finally"\' statement and the \'"with"\' statement provide\nconvenient ways to do this.\n\nSome objects contain references to other objects; these are called\n*containers*. 
Examples of containers are tuples, lists and\ndictionaries.  The references are part of a container\'s value.  In\nmost cases, when we talk about the value of a container, we imply the\nvalues, not the identities of the contained objects; however, when we\ntalk about the mutability of a container, only the identities of the\nimmediately contained objects are implied.  So, if an immutable\ncontainer (like a tuple) contains a reference to a mutable object, its\nvalue changes if that mutable object is changed.\n\nTypes affect almost all aspects of object behavior.  Even the\nimportance of object identity is affected in some sense: for immutable\ntypes, operations that compute new values may actually return a\nreference to any existing object with the same type and value, while\nfor mutable objects this is not allowed.  E.g., after "a = 1; b = 1",\n"a" and "b" may or may not refer to the same object with the value\none, depending on the implementation, but after "c = []; d = []", "c"\nand "d" are guaranteed to refer to two different, unique, newly\ncreated empty lists. (Note that "c = d = []" assigns the same object\nto both "c" and "d".)\n',
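# Illustrations of identity and mutability as described above; whether small
# integers are shared is an implementation detail, so that result may vary.
a = 1
b = 1
print(a is b)           # may be True (CPython caches small ints), not guaranteed

c = []
d = []
print(c is d)           # False: two distinct, newly created lists

t = ([],)               # an immutable container holding a mutable object
before = id(t)
t[0].append(1)          # the contained list changes; the tuple object does not
print(t, id(t) == before)    # ([1],) True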
  'operator-summary': '\nOperator precedence\n*******************\n\nThe following table summarizes the operator precedences in Python,\nfrom lowest precedence (least binding) to highest precedence (most\nbinding).  Operators in the same box have the same precedence.  Unless\nthe syntax is explicitly given, operators are binary.  Operators in\nthe same box group left to right (except for comparisons, including\ntests, which all have the same precedence and chain from left to right\n--- see section *Comparisons* --- and exponentiation, which groups\nfrom right to left).\n\n+-------------------------------------------------+---------------------------------------+\n| Operator                                        | Description                           |\n+=================================================+=======================================+\n| "lambda"                                        | Lambda expression                     |\n+-------------------------------------------------+---------------------------------------+\n| "if" -- "else"                                  | Conditional expression                |\n+-------------------------------------------------+---------------------------------------+\n| "or"                                            | Boolean OR                            |\n+-------------------------------------------------+---------------------------------------+\n| "and"                                           | Boolean AND                           |\n+-------------------------------------------------+---------------------------------------+\n| "not" "x"                                       | Boolean NOT                           |\n+-------------------------------------------------+---------------------------------------+\n| "in", "not in", "is", "is not", "<", "<=", ">", | Comparisons, including membership     |\n| ">=", "!=", "=="                                | tests and identity tests              |\n+-------------------------------------------------+---------------------------------------+\n| "|"                                             | Bitwise OR                            |\n+-------------------------------------------------+---------------------------------------+\n| "^"                                             | Bitwise XOR                           |\n+-------------------------------------------------+---------------------------------------+\n| "&"                                             | Bitwise AND                           |\n+-------------------------------------------------+---------------------------------------+\n| "<<", ">>"                                      | Shifts                                |\n+-------------------------------------------------+---------------------------------------+\n| "+", "-"                                        | Addition and subtraction              |\n+-------------------------------------------------+---------------------------------------+\n| "*", "/", "//", "%"                             | Multiplication, division, remainder   |\n+-------------------------------------------------+---------------------------------------+\n| "+x", "-x", "~x"                                | Positive, negative, bitwise NOT       |\n+-------------------------------------------------+---------------------------------------+\n| "**"                                            | Exponentiation [6]                    
|\n+-------------------------------------------------+---------------------------------------+\n| "x[index]", "x[index:index]",                   | Subscription, slicing, call,          |\n| "x(arguments...)", "x.attribute"                | attribute reference                   |\n+-------------------------------------------------+---------------------------------------+\n| "(expressions...)", "[expressions...]", "{key:  | Binding or tuple display, list        |\n| value...}", "{expressions...}"                  | display, dictionary display, set      |\n+-------------------------------------------------+---------------------------------------+\n\n-[ Footnotes ]-\n\n[1] While "abs(x%y) < abs(y)" is true mathematically, for floats\n    it may not be true numerically due to roundoff.  For example, and\n    assuming a platform on which a Python float is an IEEE 754 double-\n    precision number, in order that "-1e-100 % 1e100" have the same\n    sign as "1e100", the computed result is "-1e-100 + 1e100", which\n    is numerically exactly equal to "1e100".  The function\n    "math.fmod()" returns a result whose sign matches the sign of the\n    first argument instead, and so returns "-1e-100" in this case.\n    Which approach is more appropriate depends on the application.\n\n[2] If x is very close to an exact integer multiple of y, it\'s\n    possible for "x//y" to be one larger than "(x-x%y)//y" due to\n    rounding.  In such cases, Python returns the latter result, in\n    order to preserve that "divmod(x,y)[0] * y + x % y" be very close\n    to "x".\n\n[3] While comparisons between strings make sense at the byte\n    level, they may be counter-intuitive to users.  For example, the\n    strings ""\\u00C7"" and ""\\u0327\\u0043"" compare differently, even\n    though they both represent the same unicode character (LATIN\n    CAPITAL LETTER C WITH CEDILLA).  To compare strings in a human\n    recognizable way, compare using "unicodedata.normalize()".\n\n[4] Due to automatic garbage-collection, free lists, and the\n    dynamic nature of descriptors, you may notice seemingly unusual\n    behaviour in certain uses of the "is" operator, like those\n    involving comparisons between instance methods, or constants.\n    Check their documentation for more info.\n\n[5] The "%" operator is also used for string formatting; the same\n    precedence applies.\n\n[6] The power operator "**" binds less tightly than an arithmetic\n    or bitwise unary operator on its right, that is, "2**-1" is "0.5".\n',
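# A few spot checks of the precedence rules and footnotes tabulated above.
import math

print(2 ** -1)                       # 0.5: "**" binds less tightly than the
                                     # unary "-" on its right
print(-1 ** 2)                       # -1: parsed as -(1 ** 2)
print(not 1 == 2)                    # True: comparison binds tighter than "not"
print(1 + 2 * 3)                     # 7: "*" before "+"
print(-1e-100 % 1e100)               # 1e+100: "%" follows the sign of the divisor
print(math.fmod(-1e-100, 1e100))     # -1e-100: fmod follows the first argument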
  'pass': '\nThe "pass" statement\n********************\n\n   pass_stmt ::= "pass"\n\n"pass" is a null operation --- when it is executed, nothing happens.\nIt is useful as a placeholder when a statement is required\nsyntactically, but no code needs to be executed, for example:\n\n   def f(arg): pass    # a function that does nothing (yet)\n\n   class C: pass       # a class with no methods (yet)\n',
@@ -60,7 +60,7 @@
  'shifting': '\nShifting operations\n*******************\n\nThe shifting operations have lower priority than the arithmetic\noperations:\n\n   shift_expr ::= a_expr | shift_expr ( "<<" | ">>" ) a_expr\n\nThese operators accept integers as arguments.  They shift the first\nargument to the left or right by the number of bits given by the\nsecond argument.\n\nA right shift by *n* bits is defined as floor division by "pow(2,n)".\nA left shift by *n* bits is defined as multiplication with "pow(2,n)".\n\nNote: In the current implementation, the right-hand operand is\n  required to be at most "sys.maxsize".  If the right-hand operand is\n  larger than "sys.maxsize" an "OverflowError" exception is raised.\n',
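# The shift definitions above, checked directly; output shown in comments.
x, n = 13, 3
print(x << n == x * 2 ** n)    # True: left shift is multiplication by 2**n
print(x >> n == x // 2 ** n)   # True: right shift is floor division by 2**n
print(-7 >> 1)                 # -4: floor division, not truncation toward zero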
  'slicings': '\nSlicings\n********\n\nA slicing selects a range of items in a sequence object (e.g., a\nstring, tuple or list).  Slicings may be used as expressions or as\ntargets in assignment or "del" statements.  The syntax for a slicing:\n\n   slicing      ::= primary "[" slice_list "]"\n   slice_list   ::= slice_item ("," slice_item)* [","]\n   slice_item   ::= expression | proper_slice\n   proper_slice ::= [lower_bound] ":" [upper_bound] [ ":" [stride] ]\n   lower_bound  ::= expression\n   upper_bound  ::= expression\n   stride       ::= expression\n\nThere is ambiguity in the formal syntax here: anything that looks like\nan expression list also looks like a slice list, so any subscription\ncan be interpreted as a slicing.  Rather than further complicating the\nsyntax, this is disambiguated by defining that in this case the\ninterpretation as a subscription takes priority over the\ninterpretation as a slicing (this is the case if the slice list\ncontains no proper slice).\n\nThe semantics for a slicing are as follows.  The primary must evaluate\nto a mapping object, and it is indexed (using the same "__getitem__()"\nmethod as normal subscription) with a key that is constructed from the\nslice list, as follows.  If the slice list contains at least one\ncomma, the key is a tuple containing the conversion of the slice\nitems; otherwise, the conversion of the lone slice item is the key.\nThe conversion of a slice item that is an expression is that\nexpression.  The conversion of a proper slice is a slice object (see\nsection *The standard type hierarchy*) whose "start", "stop" and\n"step" attributes are the values of the expressions given as lower\nbound, upper bound and stride, respectively, substituting "None" for\nmissing expressions.\n',
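# How a slicing is converted to a key, per the text above; the Probe class is
# invented purely to show what __getitem__() receives.
class Probe:
    def __getitem__(self, key):
        return key                  # echo the key built from the slice list

p = Probe()
print(p[1:10:2])         # slice(1, 10, 2)
print(p[1:10:2, "x"])    # (slice(1, 10, 2), 'x'): a tuple key when a comma appears
print(p[3])              # 3: a plain expression is an ordinary subscription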
  'specialattrs': '\nSpecial Attributes\n******************\n\nThe implementation adds a few special read-only attributes to several\nobject types, where they are relevant.  Some of these are not reported\nby the "dir()" built-in function.\n\nobject.__dict__\n\n   A dictionary or other mapping object used to store an object\'s\n   (writable) attributes.\n\ninstance.__class__\n\n   The class to which a class instance belongs.\n\nclass.__bases__\n\n   The tuple of base classes of a class object.\n\nclass.__name__\n\n   The name of the class or type.\n\nclass.__qualname__\n\n   The *qualified name* of the class or type.\n\n   New in version 3.3.\n\nclass.__mro__\n\n   This attribute is a tuple of classes that are considered when\n   looking for base classes during method resolution.\n\nclass.mro()\n\n   This method can be overridden by a metaclass to customize the\n   method resolution order for its instances.  It is called at class\n   instantiation, and its result is stored in "__mro__".\n\nclass.__subclasses__()\n\n   Each class keeps a list of weak references to its immediate\n   subclasses.  This method returns a list of all those references\n   still alive. Example:\n\n      >>> int.__subclasses__()\n      [<class \'bool\'>]\n\n-[ Footnotes ]-\n\n[1] Additional information on these special methods may be found\n    in the Python Reference Manual (*Basic customization*).\n\n[2] As a consequence, the list "[1, 2]" is considered equal to\n    "[1.0, 2.0]", and similarly for tuples.\n\n[3] They must have since the parser can\'t tell the type of the\n    operands.\n\n[4] Cased characters are those with general category property\n    being one of "Lu" (Letter, uppercase), "Ll" (Letter, lowercase),\n    or "Lt" (Letter, titlecase).\n\n[5] To format only a tuple you should therefore provide a\n    singleton tuple whose only element is the tuple to be formatted.\n',
- 'specialnames': '\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators.  For instance, if a class defines\na method named "__getitem__()", and "x" is an instance of this class,\nthen "x[i]" is roughly equivalent to "type(x).__getitem__(x, i)".\nExcept where mentioned, attempts to execute an operation raise an\nexception when no appropriate method is defined (typically\n"AttributeError" or "TypeError").\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled.  For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense.  (One example of this is the\n"NodeList" interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n   Called to create a new instance of class *cls*.  "__new__()" is a\n   static method (special-cased so you need not declare it as such)\n   that takes the class of which an instance was requested as its\n   first argument.  The remaining arguments are those passed to the\n   object constructor expression (the call to the class).  The return\n   value of "__new__()" should be the new object instance (usually an\n   instance of *cls*).\n\n   Typical implementations create a new instance of the class by\n   invoking the superclass\'s "__new__()" method using\n   "super(currentclass, cls).__new__(cls[, ...])" with appropriate\n   arguments and then modifying the newly-created instance as\n   necessary before returning it.\n\n   If "__new__()" returns an instance of *cls*, then the new\n   instance\'s "__init__()" method will be invoked like\n   "__init__(self[, ...])", where *self* is the new instance and the\n   remaining arguments are the same as were passed to "__new__()".\n\n   If "__new__()" does not return an instance of *cls*, then the new\n   instance\'s "__init__()" method will not be invoked.\n\n   "__new__()" is intended mainly to allow subclasses of immutable\n   types (like int, str, or tuple) to customize instance creation.  It\n   is also commonly overridden in custom metaclasses in order to\n   customize class creation.\n\nobject.__init__(self[, ...])\n\n   Called when the instance is created.  The arguments are those\n   passed to the class constructor expression.  If a base class has an\n   "__init__()" method, the derived class\'s "__init__()" method, if\n   any, must explicitly call it to ensure proper initialization of the\n   base class part of the instance; for example:\n   "BaseClass.__init__(self, [args...])".  As a special constraint on\n   constructors, no value may be returned; doing so will cause a\n   "TypeError" to be raised at runtime.\n\nobject.__del__(self)\n\n   Called when the instance is about to be destroyed.  This is also\n   called a destructor.  If a base class has a "__del__()" method, the\n   derived class\'s "__del__()" method, if any, must explicitly call it\n   to ensure proper deletion of the base class part of the instance.\n   Note that it is possible (though not recommended!) 
for the\n   "__del__()" method to postpone destruction of the instance by\n   creating a new reference to it.  It may then be called at a later\n   time when this new reference is deleted.  It is not guaranteed that\n   "__del__()" methods are called for objects that still exist when\n   the interpreter exits.\n\n   Note: "del x" doesn\'t directly call "x.__del__()" --- the former\n     decrements the reference count for "x" by one, and the latter is\n     only called when "x"\'s reference count reaches zero.  Some common\n     situations that may prevent the reference count of an object from\n     going to zero include: circular references between objects (e.g.,\n     a doubly-linked list or a tree data structure with parent and\n     child pointers); a reference to the object on the stack frame of\n     a function that caught an exception (the traceback stored in\n     "sys.exc_info()[2]" keeps the stack frame alive); or a reference\n     to the object on the stack frame that raised an unhandled\n     exception in interactive mode (the traceback stored in\n     "sys.last_traceback" keeps the stack frame alive).  The first\n     situation can only be remedied by explicitly breaking the cycles;\n     the latter two situations can be resolved by storing "None" in\n     "sys.last_traceback". Circular references which are garbage are\n     detected and cleaned up when the cyclic garbage collector is\n     enabled (it\'s on by default). Refer to the documentation for the\n     "gc" module for more information about this topic.\n\n   Warning: Due to the precarious circumstances under which\n     "__del__()" methods are invoked, exceptions that occur during\n     their execution are ignored, and a warning is printed to\n     "sys.stderr" instead. Also, when "__del__()" is invoked in\n     response to a module being deleted (e.g., when execution of the\n     program is done), other globals referenced by the "__del__()"\n     method may already have been deleted or in the process of being\n     torn down (e.g. the import machinery shutting down).  For this\n     reason, "__del__()" methods should do the absolute minimum needed\n     to maintain external invariants.  Starting with version 1.5,\n     Python guarantees that globals whose name begins with a single\n     underscore are deleted from their module before other globals are\n     deleted; if no other references to such globals exist, this may\n     help in assuring that imported modules are still available at the\n     time when the "__del__()" method is called.\n\nobject.__repr__(self)\n\n   Called by the "repr()" built-in function to compute the "official"\n   string representation of an object.  If at all possible, this\n   should look like a valid Python expression that could be used to\n   recreate an object with the same value (given an appropriate\n   environment).  If this is not possible, a string of the form\n   "<...some useful description...>" should be returned. The return\n   value must be a string object. If a class defines "__repr__()" but\n   not "__str__()", then "__repr__()" is also used when an "informal"\n   string representation of instances of that class is required.\n\n   This is typically used for debugging, so it is important that the\n   representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n   Called by "str(object)" and the built-in functions "format()" and\n   "print()" to compute the "informal" or nicely printable string\n   representation of an object.  
The return value must be a *string*\n   object.\n\n   This method differs from "object.__repr__()" in that there is no\n   expectation that "__str__()" return a valid Python expression: a\n   more convenient or concise representation can be used.\n\n   The default implementation defined by the built-in type "object"\n   calls "object.__repr__()".\n\nobject.__bytes__(self)\n\n   Called by "bytes()" to compute a byte-string representation of an\n   object. This should return a "bytes" object.\n\nobject.__format__(self, format_spec)\n\n   Called by the "format()" built-in function (and by extension, the\n   "str.format()" method of class "str") to produce a "formatted"\n   string representation of an object. The "format_spec" argument is a\n   string that contains a description of the formatting options\n   desired. The interpretation of the "format_spec" argument is up to\n   the type implementing "__format__()", however most classes will\n   either delegate formatting to one of the built-in types, or use a\n   similar formatting option syntax.\n\n   See *Format Specification Mini-Language* for a description of the\n   standard formatting syntax.\n\n   The return value must be a string object.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n   These are the so-called "rich comparison" methods. The\n   correspondence between operator symbols and method names is as\n   follows: "x<y" calls "x.__lt__(y)", "x<=y" calls "x.__le__(y)",\n   "x==y" calls "x.__eq__(y)", "x!=y" calls "x.__ne__(y)", "x>y" calls\n   "x.__gt__(y)", and "x>=y" calls "x.__ge__(y)".\n\n   A rich comparison method may return the singleton "NotImplemented"\n   if it does not implement the operation for a given pair of\n   arguments. By convention, "False" and "True" are returned for a\n   successful comparison. However, these methods can return any value,\n   so if the comparison operator is used in a Boolean context (e.g.,\n   in the condition of an "if" statement), Python will call "bool()"\n   on the value to determine if the result is true or false.\n\n   There are no implied relationships among the comparison operators.\n   The truth of "x==y" does not imply that "x!=y" is false.\n   Accordingly, when defining "__eq__()", one should also define\n   "__ne__()" so that the operators will behave as expected.  See the\n   paragraph on "__hash__()" for some important notes on creating\n   *hashable* objects which support custom comparison operations and\n   are usable as dictionary keys.\n\n   There are no swapped-argument versions of these methods (to be used\n   when the left argument does not support the operation but the right\n   argument does); rather, "__lt__()" and "__gt__()" are each other\'s\n   reflection, "__le__()" and "__ge__()" are each other\'s reflection,\n   and "__eq__()" and "__ne__()" are their own reflection.\n\n   Arguments to rich comparison methods are never coerced.\n\n   To automatically generate ordering operations from a single root\n   operation, see "functools.total_ordering()".\n\nobject.__hash__(self)\n\n   Called by built-in function "hash()" and for operations on members\n   of hashed collections including "set", "frozenset", and "dict".\n   "__hash__()" should return an integer.  The only required property\n   is that objects which compare equal have the same hash value; it is\n   advised to somehow mix together (e.g. 
using exclusive or) the hash\n   values for the components of the object that also play a part in\n   comparison of objects.\n\n   Note: "hash()" truncates the value returned from an object\'s\n     custom "__hash__()" method to the size of a "Py_ssize_t".  This\n     is typically 8 bytes on 64-bit builds and 4 bytes on 32-bit\n     builds. If an object\'s   "__hash__()" must interoperate on builds\n     of different bit sizes, be sure to check the width on all\n     supported builds.  An easy way to do this is with "python -c\n     "import sys; print(sys.hash_info.width)""\n\n   If a class does not define an "__eq__()" method it should not\n   define a "__hash__()" operation either; if it defines "__eq__()"\n   but not "__hash__()", its instances will not be usable as items in\n   hashable collections.  If a class defines mutable objects and\n   implements an "__eq__()" method, it should not implement\n   "__hash__()", since the implementation of hashable collections\n   requires that a key\'s hash value is immutable (if the object\'s hash\n   value changes, it will be in the wrong hash bucket).\n\n   User-defined classes have "__eq__()" and "__hash__()" methods by\n   default; with them, all objects compare unequal (except with\n   themselves) and "x.__hash__()" returns an appropriate value such\n   that "x == y" implies both that "x is y" and "hash(x) == hash(y)".\n\n   A class that overrides "__eq__()" and does not define "__hash__()"\n   will have its "__hash__()" implicitly set to "None".  When the\n   "__hash__()" method of a class is "None", instances of the class\n   will raise an appropriate "TypeError" when a program attempts to\n   retrieve their hash value, and will also be correctly identified as\n   unhashable when checking "isinstance(obj, collections.Hashable").\n\n   If a class that overrides "__eq__()" needs to retain the\n   implementation of "__hash__()" from a parent class, the interpreter\n   must be told this explicitly by setting "__hash__ =\n   <ParentClass>.__hash__".\n\n   If a class that does not override "__eq__()" wishes to suppress\n   hash support, it should include "__hash__ = None" in the class\n   definition. A class which defines its own "__hash__()" that\n   explicitly raises a "TypeError" would be incorrectly identified as\n   hashable by an "isinstance(obj, collections.Hashable)" call.\n\n   Note: By default, the "__hash__()" values of str, bytes and\n     datetime objects are "salted" with an unpredictable random value.\n     Although they remain constant within an individual Python\n     process, they are not predictable between repeated invocations of\n     Python.This is intended to provide protection against a denial-\n     of-service caused by carefully-chosen inputs that exploit the\n     worst case performance of a dict insertion, O(n^2) complexity.\n     See http://www.ocert.org/advisories/ocert-2011-003.html for\n     details.Changing hash values affects the iteration order of\n     dicts, sets and other mappings.  Python has never made guarantees\n     about this ordering (and it typically varies between 32-bit and\n     64-bit builds).See also "PYTHONHASHSEED".\n\n   Changed in version 3.3: Hash randomization is enabled by default.\n\nobject.__bool__(self)\n\n   Called to implement truth value testing and the built-in operation\n   "bool()"; should return "False" or "True".  When this method is not\n   defined, "__len__()" is called, if it is defined, and the object is\n   considered true if its result is nonzero.  
If a class defines\n   neither "__len__()" nor "__bool__()", all its instances are\n   considered true.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of "x.name") for\nclass instances.\n\nobject.__getattr__(self, name)\n\n   Called when an attribute lookup has not found the attribute in the\n   usual places (i.e. it is not an instance attribute nor is it found\n   in the class tree for "self").  "name" is the attribute name. This\n   method should return the (computed) attribute value or raise an\n   "AttributeError" exception.\n\n   Note that if the attribute is found through the normal mechanism,\n   "__getattr__()" is not called.  (This is an intentional asymmetry\n   between "__getattr__()" and "__setattr__()".) This is done both for\n   efficiency reasons and because otherwise "__getattr__()" would have\n   no way to access other attributes of the instance.  Note that at\n   least for instance variables, you can fake total control by not\n   inserting any values in the instance attribute dictionary (but\n   instead inserting them in another object).  See the\n   "__getattribute__()" method below for a way to actually get total\n   control over attribute access.\n\nobject.__getattribute__(self, name)\n\n   Called unconditionally to implement attribute accesses for\n   instances of the class. If the class also defines "__getattr__()",\n   the latter will not be called unless "__getattribute__()" either\n   calls it explicitly or raises an "AttributeError". This method\n   should return the (computed) attribute value or raise an\n   "AttributeError" exception. In order to avoid infinite recursion in\n   this method, its implementation should always call the base class\n   method with the same name to access any attributes it needs, for\n   example, "object.__getattribute__(self, name)".\n\n   Note: This method may still be bypassed when looking up special\n     methods as the result of implicit invocation via language syntax\n     or built-in functions. See *Special method lookup*.\n\nobject.__setattr__(self, name, value)\n\n   Called when an attribute assignment is attempted.  This is called\n   instead of the normal mechanism (i.e. store the value in the\n   instance dictionary). *name* is the attribute name, *value* is the\n   value to be assigned to it.\n\n   If "__setattr__()" wants to assign to an instance attribute, it\n   should call the base class method with the same name, for example,\n   "object.__setattr__(self, name, value)".\n\nobject.__delattr__(self, name)\n\n   Like "__setattr__()" but for attribute deletion instead of\n   assignment.  This should only be implemented if "del obj.name" is\n   meaningful for the object.\n\nobject.__dir__(self)\n\n   Called when "dir()" is called on the object. A sequence must be\n   returned. "dir()" converts the returned sequence to a list and\n   sorts it.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents).  
In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' "__dict__".\n\nobject.__get__(self, instance, owner)\n\n   Called to get the attribute of the owner class (class attribute\n   access) or of an instance of that class (instance attribute\n   access). *owner* is always the owner class, while *instance* is the\n   instance that the attribute was accessed through, or "None" when\n   the attribute is accessed through the *owner*.  This method should\n   return the (computed) attribute value or raise an "AttributeError"\n   exception.\n\nobject.__set__(self, instance, value)\n\n   Called to set the attribute on an instance *instance* of the owner\n   class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n   Called to delete the attribute on an instance *instance* of the\n   owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol:  "__get__()", "__set__()", and\n"__delete__()". If any of those methods are defined for an object, it\nis said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, "a.x" has a\nlookup chain starting with "a.__dict__[\'x\']", then\n"type(a).__dict__[\'x\']", and continuing through the base classes of\n"type(a)" excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead.  Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called.\n\nThe starting point for descriptor invocation is a binding, "a.x". How\nthe arguments are assembled depends on "a":\n\nDirect Call\n   The simplest and least common call is when user code directly\n   invokes a descriptor method:    "x.__get__(a)".\n\nInstance Binding\n   If binding to an object instance, "a.x" is transformed into the\n   call: "type(a).__dict__[\'x\'].__get__(a, type(a))".\n\nClass Binding\n   If binding to a class, "A.x" is transformed into the call:\n   "A.__dict__[\'x\'].__get__(None, A)".\n\nSuper Binding\n   If "a" is an instance of "super", then the binding "super(B,\n   obj).m()" searches "obj.__class__.__mro__" for the base class "A"\n   immediately preceding "B" and then invokes the descriptor with the\n   call: "A.__dict__[\'m\'].__get__(obj, obj.__class__)".\n\nFor instance bindings, the precedence of descriptor invocation depends\non which descriptor methods are defined.  A descriptor can define\nany combination of "__get__()", "__set__()" and "__delete__()".  If it\ndoes not define "__get__()", then accessing the attribute will return\nthe descriptor object itself unless there is a value in the object\'s\ninstance dictionary.  If the descriptor defines "__set__()" and/or\n"__delete__()", it is a data descriptor; if it defines neither, it is\na non-data descriptor.  Normally, data descriptors define both\n"__get__()" and "__set__()", while non-data descriptors have just the\n"__get__()" method.  Data descriptors with "__set__()" and "__get__()"\ndefined always override a redefinition in an instance dictionary.  
In\ncontrast, non-data descriptors can be overridden by instances.\n\nPython methods (including "staticmethod()" and "classmethod()") are\nimplemented as non-data descriptors.  Accordingly, instances can\nredefine and override methods.  This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe "property()" function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of classes have a dictionary for attribute\nstorage.  This wastes space for objects having very few instance\nvariables.  The space consumption can become acute when creating large\nnumbers of instances.\n\nThe default can be overridden by defining *__slots__* in a class\ndefinition. The *__slots__* declaration takes a sequence of instance\nvariables and reserves just enough space in each instance to hold a\nvalue for each variable.  Space is saved because *__dict__* is not\ncreated for each instance.\n\nobject.__slots__\n\n   This class variable can be assigned a string, iterable, or sequence\n   of strings with variable names used by instances.  If defined in a\n   class, *__slots__* reserves space for the declared variables and\n   prevents the automatic creation of *__dict__* and *__weakref__* for\n   each instance.\n\n\nNotes on using *__slots__*\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n  attribute of that class will always be accessible, so a *__slots__*\n  definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n  variables not listed in the *__slots__* definition.  Attempts to\n  assign to an unlisted variable name raises "AttributeError". If\n  dynamic assignment of new variables is desired, then add\n  "\'__dict__\'" to the sequence of strings in the *__slots__*\n  declaration.\n\n* Without a *__weakref__* variable for each instance, classes\n  defining *__slots__* do not support weak references to its\n  instances. If weak reference support is needed, then add\n  "\'__weakref__\'" to the sequence of strings in the *__slots__*\n  declaration.\n\n* *__slots__* are implemented at the class level by creating\n  descriptors (*Implementing Descriptors*) for each variable name.  As\n  a result, class attributes cannot be used to set default values for\n  instance variables defined by *__slots__*; otherwise, the class\n  attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n  where it is defined.  As a result, subclasses will have a *__dict__*\n  unless they also define *__slots__* (which must only contain names\n  of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the\n  instance variable defined by the base class slot is inaccessible\n  (except by retrieving its descriptor directly from the base class).\n  This renders the meaning of the program undefined.  In the future, a\n  check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n  "variable-length" built-in types such as "int", "bytes" and "tuple".\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings\n  may also be used; however, in the future, special meaning may be\n  assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n  *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, classes are constructed using "type()". The class body is\nexecuted in a new namespace and the class name is bound locally to the\nresult of "type(name, bases, namespace)".\n\nThe class creation process can be customised by passing the\n"metaclass" keyword argument in the class definition line, or by\ninheriting from an existing class that included such an argument. In\nthe following example, both "MyClass" and "MySubclass" are instances\nof "Meta":\n\n   class Meta(type):\n       pass\n\n   class MyClass(metaclass=Meta):\n       pass\n\n   class MySubclass(MyClass):\n       pass\n\nAny other keyword arguments that are specified in the class definition\nare passed through to all metaclass operations described below.\n\nWhen a class definition is executed, the following steps occur:\n\n* the appropriate metaclass is determined\n\n* the class namespace is prepared\n\n* the class body is executed\n\n* the class object is created\n\n\nDetermining the appropriate metaclass\n-------------------------------------\n\nThe appropriate metaclass for a class definition is determined as\nfollows:\n\n* if no bases and no explicit metaclass are given, then "type()" is\n  used\n\n* if an explicit metaclass is given and it is *not* an instance of\n  "type()", then it is used directly as the metaclass\n\n* if an instance of "type()" is given as the explicit metaclass, or\n  bases are defined, then the most derived metaclass is used\n\nThe most derived metaclass is selected from the explicitly specified\nmetaclass (if any) and the metaclasses (i.e. "type(cls)") of all\nspecified base classes. The most derived metaclass is one which is a\nsubtype of *all* of these candidate metaclasses. If none of the\ncandidate metaclasses meets that criterion, then the class definition\nwill fail with "TypeError".\n\n\nPreparing the class namespace\n-----------------------------\n\nOnce the appropriate metaclass has been identified, then the class\nnamespace is prepared. If the metaclass has a "__prepare__" attribute,\nit is called as "namespace = metaclass.__prepare__(name, bases,\n**kwds)" (where the additional keyword arguments, if any, come from\nthe class definition).\n\nIf the metaclass has no "__prepare__" attribute, then the class\nnamespace is initialised as an empty "dict()" instance.\n\nSee also: **PEP 3115** - Metaclasses in Python 3000\n\n     Introduced the "__prepare__" namespace hook\n\n\nExecuting the class body\n------------------------\n\nThe class body is executed (approximately) as "exec(body, globals(),\nnamespace)". The key difference from a normal call to "exec()" is that\nlexical scoping allows the class body (including any methods) to\nreference names from the current and outer scopes when the class\ndefinition occurs inside a function.\n\nHowever, even when the class definition occurs inside the function,\nmethods defined inside the class still cannot see names defined at the\nclass scope. 
Class variables must be accessed through the first\nparameter of instance or class methods, and cannot be accessed at all\nfrom static methods.\n\n\nCreating the class object\n-------------------------\n\nOnce the class namespace has been populated by executing the class\nbody, the class object is created by calling "metaclass(name, bases,\nnamespace, **kwds)" (the additional keywords passed here are the same\nas those passed to "__prepare__").\n\nThis class object is the one that will be referenced by the zero-\nargument form of "super()". "__class__" is an implicit closure\nreference created by the compiler if any methods in a class body refer\nto either "__class__" or "super". This allows the zero argument form\nof "super()" to correctly identify the class being defined based on\nlexical scoping, while the class or instance that was used to make the\ncurrent call is identified based on the first argument passed to the\nmethod.\n\nAfter the class object is created, it is passed to the class\ndecorators included in the class definition (if any) and the resulting\nobject is bound in the local namespace as the defined class.\n\nSee also: **PEP 3135** - New super\n\n     Describes the implicit "__class__" closure reference\n\n\nMetaclass example\n-----------------\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored include logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\nHere is an example of a metaclass that uses an\n"collections.OrderedDict" to remember the order that class members\nwere defined:\n\n   class OrderedClass(type):\n\n        @classmethod\n        def __prepare__(metacls, name, bases, **kwds):\n           return collections.OrderedDict()\n\n        def __new__(cls, name, bases, namespace, **kwds):\n           result = type.__new__(cls, name, bases, dict(namespace))\n           result.members = tuple(namespace)\n           return result\n\n   class A(metaclass=OrderedClass):\n       def one(self): pass\n       def two(self): pass\n       def three(self): pass\n       def four(self): pass\n\n   >>> A.members\n   (\'__module__\', \'one\', \'two\', \'three\', \'four\')\n\nWhen the class definition for *A* gets executed, the process begins\nwith calling the metaclass\'s "__prepare__()" method which returns an\nempty "collections.OrderedDict".  That mapping records the methods and\nattributes of *A* as they are defined within the body of the class\nstatement. Once those definitions are executed, the ordered dictionary\nis fully populated and the metaclass\'s "__new__()" method gets\ninvoked.  That method builds the new type and it saves the ordered\ndictionary keys in an attribute called "members".\n\n\nCustomizing instance and subclass checks\n========================================\n\nThe following methods are used to override the default behavior of the\n"isinstance()" and "issubclass()" built-in functions.\n\nIn particular, the metaclass "abc.ABCMeta" implements these methods in\norder to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n   Return true if *instance* should be considered a (direct or\n   indirect) instance of *class*. 
If defined, called to implement\n   "isinstance(instance, class)".\n\nclass.__subclasscheck__(self, subclass)\n\n   Return true if *subclass* should be considered a (direct or\n   indirect) subclass of *class*.  If defined, called to implement\n   "issubclass(subclass, class)".\n\nNote that these methods are looked up on the type (metaclass) of a\nclass.  They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also: **PEP 3119** - Introducing Abstract Base Classes\n\n     Includes the specification for customizing "isinstance()" and\n     "issubclass()" behavior through "__instancecheck__()" and\n     "__subclasscheck__()", with motivation for this functionality in\n     the context of adding Abstract Base Classes (see the "abc"\n     module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n   Called when the instance is "called" as a function; if this method\n   is defined, "x(arg1, arg2, ...)" is a shorthand for\n   "x.__call__(arg1, arg2, ...)".\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well.  The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which "0 <= k < N" where\n*N* is the length of the sequence, or slice objects, which define a\nrange of items.  It is also recommended that mappings provide the\nmethods "keys()", "values()", "items()", "get()", "clear()",\n"setdefault()", "pop()", "popitem()", "copy()", and "update()"\nbehaving similar to those for Python\'s standard dictionary objects.\nThe "collections" module provides a "MutableMapping" abstract base\nclass to help create those methods from a base set of "__getitem__()",\n"__setitem__()", "__delitem__()", and "keys()". Mutable sequences\nshould provide methods "append()", "count()", "index()", "extend()",\n"insert()", "pop()", "remove()", "reverse()" and "sort()", like Python\nstandard list objects.  Finally, sequence types should implement\naddition (meaning concatenation) and multiplication (meaning\nrepetition) by defining the methods "__add__()", "__radd__()",\n"__iadd__()", "__mul__()", "__rmul__()" and "__imul__()" described\nbelow; they should not define other numerical operators.  It is\nrecommended that both mappings and sequences implement the\n"__contains__()" method to allow efficient use of the "in" operator;\nfor mappings, "in" should search the mapping\'s keys; for sequences, it\nshould search through the values.  It is further recommended that both\nmappings and sequences implement the "__iter__()" method to allow\nefficient iteration through the container; for mappings, "__iter__()"\nshould be the same as "keys()"; for sequences, it should iterate\nthrough the values.\n\nobject.__len__(self)\n\n   Called to implement the built-in function "len()".  Should return\n   the length of the object, an integer ">=" 0.  Also, an object that\n   doesn\'t define a "__bool__()" method and whose "__len__()" method\n   returns zero is considered to be false in a Boolean context.\n\nobject.__length_hint__(self)\n\n   Called to implement "operator.length_hint()". 
Should return an\n   estimated length for the object (which may be greater or less than\n   the actual length). The length must be an integer ">=" 0. This\n   method is purely an optimization and is never required for\n   correctness.\n\n   New in version 3.4.\n\nNote: Slicing is done exclusively with the following three methods.\n  A call like\n\n     a[1:2] = b\n\n  is translated to\n\n     a[slice(1, 2, None)] = b\n\n  and so forth.  Missing slice items are always filled in with "None".\n\nobject.__getitem__(self, key)\n\n   Called to implement evaluation of "self[key]". For sequence types,\n   the accepted keys should be integers and slice objects.  Note that\n   the special interpretation of negative indexes (if the class wishes\n   to emulate a sequence type) is up to the "__getitem__()" method. If\n   *key* is of an inappropriate type, "TypeError" may be raised; if of\n   a value outside the set of indexes for the sequence (after any\n   special interpretation of negative values), "IndexError" should be\n   raised. For mapping types, if *key* is missing (not in the\n   container), "KeyError" should be raised.\n\n   Note: "for" loops expect that an "IndexError" will be raised for\n     illegal indexes to allow proper detection of the end of the\n     sequence.\n\nobject.__setitem__(self, key, value)\n\n   Called to implement assignment to "self[key]".  Same note as for\n   "__getitem__()".  This should only be implemented for mappings if\n   the objects support changes to the values for keys, or if new keys\n   can be added, or for sequences if elements can be replaced.  The\n   same exceptions should be raised for improper *key* values as for\n   the "__getitem__()" method.\n\nobject.__delitem__(self, key)\n\n   Called to implement deletion of "self[key]".  Same note as for\n   "__getitem__()".  This should only be implemented for mappings if\n   the objects support removal of keys, or for sequences if elements\n   can be removed from the sequence.  The same exceptions should be\n   raised for improper *key* values as for the "__getitem__()" method.\n\nobject.__iter__(self)\n\n   This method is called when an iterator is required for a container.\n   This method should return a new iterator object that can iterate\n   over all the objects in the container.  For mappings, it should\n   iterate over the keys of the container, and should also be made\n   available as the method "keys()".\n\n   Iterator objects also need to implement this method; they are\n   required to return themselves.  For more information on iterator\n   objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n   Called (if present) by the "reversed()" built-in to implement\n   reverse iteration.  It should return a new iterator object that\n   iterates over all the objects in the container in reverse order.\n\n   If the "__reversed__()" method is not provided, the "reversed()"\n   built-in will fall back to using the sequence protocol ("__len__()"\n   and "__getitem__()").  Objects that support the sequence protocol\n   should only provide "__reversed__()" if they can provide an\n   implementation that is more efficient than the one provided by\n   "reversed()".\n\nThe membership test operators ("in" and "not in") are normally\nimplemented as an iteration through a sequence.  
However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n   Called to implement membership test operators.  Should return true\n   if *item* is in *self*, false otherwise.  For mapping objects, this\n   should consider the keys of the mapping rather than the values or\n   the key-item pairs.\n\n   For objects that don\'t define "__contains__()", the membership test\n   first tries iteration via "__iter__()", then the old sequence\n   iteration protocol via "__getitem__()", see *this section in the\n   language reference*.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__truediv__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n   These methods are called to implement the binary arithmetic\n   operations ("+", "-", "*", "/", "//", "%", "divmod()", "pow()",\n   "**", "<<", ">>", "&", "^", "|").  For instance, to evaluate the\n   expression "x + y", where *x* is an instance of a class that has an\n   "__add__()" method, "x.__add__(y)" is called.  The "__divmod__()"\n   method should be the equivalent to using "__floordiv__()" and\n   "__mod__()"; it should not be related to "__truediv__()".  Note\n   that "__pow__()" should be defined to accept an optional third\n   argument if the ternary version of the built-in "pow()" function is\n   to be supported.\n\n   If one of those methods does not support the operation with the\n   supplied arguments, it should return "NotImplemented".\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n   These methods are called to implement the binary arithmetic\n   operations ("+", "-", "*", "/", "//", "%", "divmod()", "pow()",\n   "**", "<<", ">>", "&", "^", "|") with reflected (swapped) operands.\n   These functions are only called if the left operand does not\n   support the corresponding operation and the operands are of\n   different types. [2]  For instance, to evaluate the expression "x -\n   y", where *y* is an instance of a class that has an "__rsub__()"\n   method, "y.__rsub__(x)" is called if "x.__sub__(y)" returns\n   *NotImplemented*.\n\n   Note that ternary "pow()" will not try calling "__rpow__()" (the\n   coercion rules would become too complicated).\n\n   Note: If the right operand\'s type is a subclass of the left\n     operand\'s type and that subclass provides the reflected method\n     for the operation, this method will be called before the left\n     operand\'s non-reflected method.  
This behavior allows subclasses\n     to override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n   These methods are called to implement the augmented arithmetic\n   assignments ("+=", "-=", "*=", "/=", "//=", "%=", "**=", "<<=",\n   ">>=", "&=", "^=", "|=").  These methods should attempt to do the\n   operation in-place (modifying *self*) and return the result (which\n   could be, but does not have to be, *self*).  If a specific method\n   is not defined, the augmented assignment falls back to the normal\n   methods.  For instance, to execute the statement "x += y", where\n   *x* is an instance of a class that has an "__iadd__()" method,\n   "x.__iadd__(y)" is called.  If *x* is an instance of a class that\n   does not define a "__iadd__()" method, "x.__add__(y)" and\n   "y.__radd__(x)" are considered, as with the evaluation of "x + y".\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n   Called to implement the unary arithmetic operations ("-", "+",\n   "abs()" and "~").\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__float__(self)\nobject.__round__(self[, n])\n\n   Called to implement the built-in functions "complex()", "int()",\n   "float()" and "round()".  Should return a value of the appropriate\n   type.\n\nobject.__index__(self)\n\n   Called to implement "operator.index()", and whenever Python needs\n   to losslessly convert the numeric object to an integer object (such\n   as in slicing, or in the built-in "bin()", "hex()" and "oct()"\n   functions). Presence of this method indicates that the numeric\n   object is an integer type.  Must return an integer.\n\n   Note: When "__index__()" is defined, "__int__()" should also be\n     defined, and both should return the same value, in order to have a\n     coherent integer type class.\n\n\nWith Statement Context Managers\n===============================\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a "with" statement. The context manager\nhandles the entry into, and the exit from, the desired runtime context\nfor the execution of the block of code.  Context managers are normally\ninvoked using the "with" statement (described in section *The with\nstatement*), but can also be used by directly invoking their methods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n   Enter the runtime context related to this object. The "with"\n   statement will bind this method\'s return value to the target(s)\n   specified in the "as" clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n   Exit the runtime context related to this object. The parameters\n   describe the exception that caused the context to be exited. 
If the\n   context was exited without an exception, all three arguments will\n   be "None".\n\n   If an exception is supplied, and the method wishes to suppress the\n   exception (i.e., prevent it from being propagated), it should\n   return a true value. Otherwise, the exception will be processed\n   normally upon exit from this method.\n\n   Note that "__exit__()" methods should not reraise the passed-in\n   exception; this is the caller\'s responsibility.\n\nSee also: **PEP 0343** - The "with" statement\n\n     The specification, background, and examples for the Python "with"\n     statement.\n\n\nSpecial method lookup\n=====================\n\nFor custom classes, implicit invocations of special methods are only\nguaranteed to work correctly if defined on an object\'s type, not in\nthe object\'s instance dictionary.  That behaviour is the reason why\nthe following code raises an exception:\n\n   >>> class C:\n   ...     pass\n   ...\n   >>> c = C()\n   >>> c.__len__ = lambda: 5\n   >>> len(c)\n   Traceback (most recent call last):\n     File "<stdin>", line 1, in <module>\n   TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as "__hash__()" and "__repr__()" that are implemented by\nall objects, including type objects. If the implicit lookup of these\nmethods used the conventional lookup process, they would fail when\ninvoked on the type object itself:\n\n   >>> 1 .__hash__() == hash(1)\n   True\n   >>> int.__hash__() == hash(int)\n   Traceback (most recent call last):\n     File "<stdin>", line 1, in <module>\n   TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n   >>> type(1).__hash__(1) == hash(1)\n   True\n   >>> type(int).__hash__(int) == hash(int)\n   True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe "__getattribute__()" method even of the object\'s metaclass:\n\n   >>> class Meta(type):\n   ...    def __getattribute__(*args):\n   ...       print("Metaclass getattribute invoked")\n   ...       return type.__getattribute__(*args)\n   ...\n   >>> class C(object, metaclass=Meta):\n   ...     def __len__(self):\n   ...         return 10\n   ...     def __getattribute__(*args):\n   ...         print("Class getattribute invoked")\n   ...         return object.__getattribute__(*args)\n   ...\n   >>> c = C()\n   >>> c.__len__()                 # Explicit lookup via instance\n   Class getattribute invoked\n   10\n   >>> type(c).__len__(c)          # Explicit lookup via type\n   Metaclass getattribute invoked\n   10\n   >>> len(c)                      # Implicit lookup\n   10\n\nBypassing the "__getattribute__()" machinery in this fashion provides\nsignificant scope for speed optimisations within the interpreter, at\nthe cost of some flexibility in the handling of special methods (the\nspecial method *must* be set on the class object itself in order to be\nconsistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type,\n    under certain controlled conditions. 
It generally isn\'t a good\n    idea though, since it can lead to some very strange behaviour if\n    it is handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n    reflected method (such as "__add__()") fails the operation is not\n    supported, which is why the reflected method is not called.\n',
+ 'specialnames': '\nSpecial method names\n********************\n\nA class can implement certain operations that are invoked by special\nsyntax (such as arithmetic operations or subscripting and slicing) by\ndefining methods with special names. This is Python\'s approach to\n*operator overloading*, allowing classes to define their own behavior\nwith respect to language operators.  For instance, if a class defines\na method named "__getitem__()", and "x" is an instance of this class,\nthen "x[i]" is roughly equivalent to "type(x).__getitem__(x, i)".\nExcept where mentioned, attempts to execute an operation raise an\nexception when no appropriate method is defined (typically\n"AttributeError" or "TypeError").\n\nWhen implementing a class that emulates any built-in type, it is\nimportant that the emulation only be implemented to the degree that it\nmakes sense for the object being modelled.  For example, some\nsequences may work well with retrieval of individual elements, but\nextracting a slice may not make sense.  (One example of this is the\n"NodeList" interface in the W3C\'s Document Object Model.)\n\n\nBasic customization\n===================\n\nobject.__new__(cls[, ...])\n\n   Called to create a new instance of class *cls*.  "__new__()" is a\n   static method (special-cased so you need not declare it as such)\n   that takes the class of which an instance was requested as its\n   first argument.  The remaining arguments are those passed to the\n   object constructor expression (the call to the class).  The return\n   value of "__new__()" should be the new object instance (usually an\n   instance of *cls*).\n\n   Typical implementations create a new instance of the class by\n   invoking the superclass\'s "__new__()" method using\n   "super(currentclass, cls).__new__(cls[, ...])" with appropriate\n   arguments and then modifying the newly-created instance as\n   necessary before returning it.\n\n   If "__new__()" returns an instance of *cls*, then the new\n   instance\'s "__init__()" method will be invoked like\n   "__init__(self[, ...])", where *self* is the new instance and the\n   remaining arguments are the same as were passed to "__new__()".\n\n   If "__new__()" does not return an instance of *cls*, then the new\n   instance\'s "__init__()" method will not be invoked.\n\n   "__new__()" is intended mainly to allow subclasses of immutable\n   types (like int, str, or tuple) to customize instance creation.  It\n   is also commonly overridden in custom metaclasses in order to\n   customize class creation.\n\nobject.__init__(self[, ...])\n\n   Called when the instance is created.  The arguments are those\n   passed to the class constructor expression.  If a base class has an\n   "__init__()" method, the derived class\'s "__init__()" method, if\n   any, must explicitly call it to ensure proper initialization of the\n   base class part of the instance; for example:\n   "BaseClass.__init__(self, [args...])".  As a special constraint on\n   constructors, no value may be returned; doing so will cause a\n   "TypeError" to be raised at runtime.\n\nobject.__del__(self)\n\n   Called when the instance is about to be destroyed.  This is also\n   called a destructor.  If a base class has a "__del__()" method, the\n   derived class\'s "__del__()" method, if any, must explicitly call it\n   to ensure proper deletion of the base class part of the instance.\n   Note that it is possible (though not recommended!) 
for the\n   "__del__()" method to postpone destruction of the instance by\n   creating a new reference to it.  It may then be called at a later\n   time when this new reference is deleted.  It is not guaranteed that\n   "__del__()" methods are called for objects that still exist when\n   the interpreter exits.\n\n   Note: "del x" doesn\'t directly call "x.__del__()" --- the former\n     decrements the reference count for "x" by one, and the latter is\n     only called when "x"\'s reference count reaches zero.  Some common\n     situations that may prevent the reference count of an object from\n     going to zero include: circular references between objects (e.g.,\n     a doubly-linked list or a tree data structure with parent and\n     child pointers); a reference to the object on the stack frame of\n     a function that caught an exception (the traceback stored in\n     "sys.exc_info()[2]" keeps the stack frame alive); or a reference\n     to the object on the stack frame that raised an unhandled\n     exception in interactive mode (the traceback stored in\n     "sys.last_traceback" keeps the stack frame alive).  The first\n     situation can only be remedied by explicitly breaking the cycles;\n     the latter two situations can be resolved by storing "None" in\n     "sys.last_traceback". Circular references which are garbage are\n     detected and cleaned up when the cyclic garbage collector is\n     enabled (it\'s on by default). Refer to the documentation for the\n     "gc" module for more information about this topic.\n\n   Warning: Due to the precarious circumstances under which\n     "__del__()" methods are invoked, exceptions that occur during\n     their execution are ignored, and a warning is printed to\n     "sys.stderr" instead. Also, when "__del__()" is invoked in\n     response to a module being deleted (e.g., when execution of the\n     program is done), other globals referenced by the "__del__()"\n     method may already have been deleted or in the process of being\n     torn down (e.g. the import machinery shutting down).  For this\n     reason, "__del__()" methods should do the absolute minimum needed\n     to maintain external invariants.  Starting with version 1.5,\n     Python guarantees that globals whose name begins with a single\n     underscore are deleted from their module before other globals are\n     deleted; if no other references to such globals exist, this may\n     help in assuring that imported modules are still available at the\n     time when the "__del__()" method is called.\n\nobject.__repr__(self)\n\n   Called by the "repr()" built-in function to compute the "official"\n   string representation of an object.  If at all possible, this\n   should look like a valid Python expression that could be used to\n   recreate an object with the same value (given an appropriate\n   environment).  If this is not possible, a string of the form\n   "<...some useful description...>" should be returned. The return\n   value must be a string object. If a class defines "__repr__()" but\n   not "__str__()", then "__repr__()" is also used when an "informal"\n   string representation of instances of that class is required.\n\n   This is typically used for debugging, so it is important that the\n   representation is information-rich and unambiguous.\n\nobject.__str__(self)\n\n   Called by "str(object)" and the built-in functions "format()" and\n   "print()" to compute the "informal" or nicely printable string\n   representation of an object.  
The return value must be a *string*\n   object.\n\n   This method differs from "object.__repr__()" in that there is no\n   expectation that "__str__()" return a valid Python expression: a\n   more convenient or concise representation can be used.\n\n   The default implementation defined by the built-in type "object"\n   calls "object.__repr__()".\n\nobject.__bytes__(self)\n\n   Called by "bytes()" to compute a byte-string representation of an\n   object. This should return a "bytes" object.\n\nobject.__format__(self, format_spec)\n\n   Called by the "format()" built-in function (and by extension, the\n   "str.format()" method of class "str") to produce a "formatted"\n   string representation of an object. The "format_spec" argument is a\n   string that contains a description of the formatting options\n   desired. The interpretation of the "format_spec" argument is up to\n   the type implementing "__format__()", however most classes will\n   either delegate formatting to one of the built-in types, or use a\n   similar formatting option syntax.\n\n   See *Format Specification Mini-Language* for a description of the\n   standard formatting syntax.\n\n   The return value must be a string object.\n\n   Changed in version 3.4: The __format__ method of "object" itself\n   raises a "TypeError" if passed any non-empty string.\n\nobject.__lt__(self, other)\nobject.__le__(self, other)\nobject.__eq__(self, other)\nobject.__ne__(self, other)\nobject.__gt__(self, other)\nobject.__ge__(self, other)\n\n   These are the so-called "rich comparison" methods. The\n   correspondence between operator symbols and method names is as\n   follows: "x<y" calls "x.__lt__(y)", "x<=y" calls "x.__le__(y)",\n   "x==y" calls "x.__eq__(y)", "x!=y" calls "x.__ne__(y)", "x>y" calls\n   "x.__gt__(y)", and "x>=y" calls "x.__ge__(y)".\n\n   A rich comparison method may return the singleton "NotImplemented"\n   if it does not implement the operation for a given pair of\n   arguments. By convention, "False" and "True" are returned for a\n   successful comparison. However, these methods can return any value,\n   so if the comparison operator is used in a Boolean context (e.g.,\n   in the condition of an "if" statement), Python will call "bool()"\n   on the value to determine if the result is true or false.\n\n   There are no implied relationships among the comparison operators.\n   The truth of "x==y" does not imply that "x!=y" is false.\n   Accordingly, when defining "__eq__()", one should also define\n   "__ne__()" so that the operators will behave as expected.  See the\n   paragraph on "__hash__()" for some important notes on creating\n   *hashable* objects which support custom comparison operations and\n   are usable as dictionary keys.\n\n   There are no swapped-argument versions of these methods (to be used\n   when the left argument does not support the operation but the right\n   argument does); rather, "__lt__()" and "__gt__()" are each other\'s\n   reflection, "__le__()" and "__ge__()" are each other\'s reflection,\n   and "__eq__()" and "__ne__()" are their own reflection.\n\n   Arguments to rich comparison methods are never coerced.\n\n   To automatically generate ordering operations from a single root\n   operation, see "functools.total_ordering()".\n\nobject.__hash__(self)\n\n   Called by built-in function "hash()" and for operations on members\n   of hashed collections including "set", "frozenset", and "dict".\n   "__hash__()" should return an integer.  
The only required property\n   is that objects which compare equal have the same hash value; it is\n   advised to somehow mix together (e.g. using exclusive or) the hash\n   values for the components of the object that also play a part in\n   comparison of objects.\n\n   Note: "hash()" truncates the value returned from an object\'s\n     custom "__hash__()" method to the size of a "Py_ssize_t".  This\n     is typically 8 bytes on 64-bit builds and 4 bytes on 32-bit\n     builds. If an object\'s "__hash__()" must interoperate on builds\n     of different bit sizes, be sure to check the width on all\n     supported builds.  An easy way to do this is with "python -c\n     "import sys; print(sys.hash_info.width)""\n\n   If a class does not define an "__eq__()" method it should not\n   define a "__hash__()" operation either; if it defines "__eq__()"\n   but not "__hash__()", its instances will not be usable as items in\n   hashable collections.  If a class defines mutable objects and\n   implements an "__eq__()" method, it should not implement\n   "__hash__()", since the implementation of hashable collections\n   requires that a key\'s hash value is immutable (if the object\'s hash\n   value changes, it will be in the wrong hash bucket).\n\n   User-defined classes have "__eq__()" and "__hash__()" methods by\n   default; with them, all objects compare unequal (except with\n   themselves) and "x.__hash__()" returns an appropriate value such\n   that "x == y" implies both that "x is y" and "hash(x) == hash(y)".\n\n   A class that overrides "__eq__()" and does not define "__hash__()"\n   will have its "__hash__()" implicitly set to "None".  When the\n   "__hash__()" method of a class is "None", instances of the class\n   will raise an appropriate "TypeError" when a program attempts to\n   retrieve their hash value, and will also be correctly identified as\n   unhashable when checking "isinstance(obj, collections.Hashable)".\n\n   If a class that overrides "__eq__()" needs to retain the\n   implementation of "__hash__()" from a parent class, the interpreter\n   must be told this explicitly by setting "__hash__ =\n   <ParentClass>.__hash__".\n\n   If a class that does not override "__eq__()" wishes to suppress\n   hash support, it should include "__hash__ = None" in the class\n   definition. A class which defines its own "__hash__()" that\n   explicitly raises a "TypeError" would be incorrectly identified as\n   hashable by an "isinstance(obj, collections.Hashable)" call.\n\n   Note: By default, the "__hash__()" values of str, bytes and\n     datetime objects are "salted" with an unpredictable random value.\n     Although they remain constant within an individual Python\n     process, they are not predictable between repeated invocations of\n     Python. This is intended to provide protection against a denial-\n     of-service caused by carefully-chosen inputs that exploit the\n     worst case performance of a dict insertion, O(n^2) complexity.\n     See http://www.ocert.org/advisories/ocert-2011-003.html for\n     details. Changing hash values affects the iteration order of\n     dicts, sets and other mappings.  Python has never made guarantees\n     about this ordering (and it typically varies between 32-bit and\n     64-bit builds). See also "PYTHONHASHSEED".\n\n   Changed in version 3.3: Hash randomization is enabled by default.\n\nobject.__bool__(self)\n\n   Called to implement truth value testing and the built-in operation\n   "bool()"; should return "False" or "True".  
When this method is not\n   defined, "__len__()" is called, if it is defined, and the object is\n   considered true if its result is nonzero.  If a class defines\n   neither "__len__()" nor "__bool__()", all its instances are\n   considered true.\n\n\nCustomizing attribute access\n============================\n\nThe following methods can be defined to customize the meaning of\nattribute access (use of, assignment to, or deletion of "x.name") for\nclass instances.\n\nobject.__getattr__(self, name)\n\n   Called when an attribute lookup has not found the attribute in the\n   usual places (i.e. it is not an instance attribute nor is it found\n   in the class tree for "self").  "name" is the attribute name. This\n   method should return the (computed) attribute value or raise an\n   "AttributeError" exception.\n\n   Note that if the attribute is found through the normal mechanism,\n   "__getattr__()" is not called.  (This is an intentional asymmetry\n   between "__getattr__()" and "__setattr__()".) This is done both for\n   efficiency reasons and because otherwise "__getattr__()" would have\n   no way to access other attributes of the instance.  Note that at\n   least for instance variables, you can fake total control by not\n   inserting any values in the instance attribute dictionary (but\n   instead inserting them in another object).  See the\n   "__getattribute__()" method below for a way to actually get total\n   control over attribute access.\n\nobject.__getattribute__(self, name)\n\n   Called unconditionally to implement attribute accesses for\n   instances of the class. If the class also defines "__getattr__()",\n   the latter will not be called unless "__getattribute__()" either\n   calls it explicitly or raises an "AttributeError". This method\n   should return the (computed) attribute value or raise an\n   "AttributeError" exception. In order to avoid infinite recursion in\n   this method, its implementation should always call the base class\n   method with the same name to access any attributes it needs, for\n   example, "object.__getattribute__(self, name)".\n\n   Note: This method may still be bypassed when looking up special\n     methods as the result of implicit invocation via language syntax\n     or built-in functions. See *Special method lookup*.\n\nobject.__setattr__(self, name, value)\n\n   Called when an attribute assignment is attempted.  This is called\n   instead of the normal mechanism (i.e. store the value in the\n   instance dictionary). *name* is the attribute name, *value* is the\n   value to be assigned to it.\n\n   If "__setattr__()" wants to assign to an instance attribute, it\n   should call the base class method with the same name, for example,\n   "object.__setattr__(self, name, value)".\n\nobject.__delattr__(self, name)\n\n   Like "__setattr__()" but for attribute deletion instead of\n   assignment.  This should only be implemented if "del obj.name" is\n   meaningful for the object.\n\nobject.__dir__(self)\n\n   Called when "dir()" is called on the object. A sequence must be\n   returned. "dir()" converts the returned sequence to a list and\n   sorts it.\n\n\nImplementing Descriptors\n------------------------\n\nThe following methods only apply when an instance of the class\ncontaining the method (a so-called *descriptor* class) appears in an\n*owner* class (the descriptor must be in either the owner\'s class\ndictionary or in the class dictionary for one of its parents).  
In the\nexamples below, "the attribute" refers to the attribute whose name is\nthe key of the property in the owner class\' "__dict__".\n\nobject.__get__(self, instance, owner)\n\n   Called to get the attribute of the owner class (class attribute\n   access) or of an instance of that class (instance attribute\n   access). *owner* is always the owner class, while *instance* is the\n   instance that the attribute was accessed through, or "None" when\n   the attribute is accessed through the *owner*.  This method should\n   return the (computed) attribute value or raise an "AttributeError"\n   exception.\n\nobject.__set__(self, instance, value)\n\n   Called to set the attribute on an instance *instance* of the owner\n   class to a new value, *value*.\n\nobject.__delete__(self, instance)\n\n   Called to delete the attribute on an instance *instance* of the\n   owner class.\n\n\nInvoking Descriptors\n--------------------\n\nIn general, a descriptor is an object attribute with "binding\nbehavior", one whose attribute access has been overridden by methods\nin the descriptor protocol:  "__get__()", "__set__()", and\n"__delete__()". If any of those methods are defined for an object, it\nis said to be a descriptor.\n\nThe default behavior for attribute access is to get, set, or delete\nthe attribute from an object\'s dictionary. For instance, "a.x" has a\nlookup chain starting with "a.__dict__[\'x\']", then\n"type(a).__dict__[\'x\']", and continuing through the base classes of\n"type(a)" excluding metaclasses.\n\nHowever, if the looked-up value is an object defining one of the\ndescriptor methods, then Python may override the default behavior and\ninvoke the descriptor method instead.  Where this occurs in the\nprecedence chain depends on which descriptor methods were defined and\nhow they were called.\n\nThe starting point for descriptor invocation is a binding, "a.x". How\nthe arguments are assembled depends on "a":\n\nDirect Call\n   The simplest and least common call is when user code directly\n   invokes a descriptor method:    "x.__get__(a)".\n\nInstance Binding\n   If binding to an object instance, "a.x" is transformed into the\n   call: "type(a).__dict__[\'x\'].__get__(a, type(a))".\n\nClass Binding\n   If binding to a class, "A.x" is transformed into the call:\n   "A.__dict__[\'x\'].__get__(None, A)".\n\nSuper Binding\n   If "a" is an instance of "super", then the binding "super(B,\n   obj).m()" searches "obj.__class__.__mro__" for the base class "A"\n   immediately preceding "B" and then invokes the descriptor with the\n   call: "A.__dict__[\'m\'].__get__(obj, obj.__class__)".\n\nFor instance bindings, the precedence of descriptor invocation depends\non which descriptor methods are defined.  A descriptor can define\nany combination of "__get__()", "__set__()" and "__delete__()".  If it\ndoes not define "__get__()", then accessing the attribute will return\nthe descriptor object itself unless there is a value in the object\'s\ninstance dictionary.  If the descriptor defines "__set__()" and/or\n"__delete__()", it is a data descriptor; if it defines neither, it is\na non-data descriptor.  Normally, data descriptors define both\n"__get__()" and "__set__()", while non-data descriptors have just the\n"__get__()" method.  Data descriptors with "__set__()" and "__get__()"\ndefined always override a redefinition in an instance dictionary.  
In\ncontrast, non-data descriptors can be overridden by instances.\n\nPython methods (including "staticmethod()" and "classmethod()") are\nimplemented as non-data descriptors.  Accordingly, instances can\nredefine and override methods.  This allows individual instances to\nacquire behaviors that differ from other instances of the same class.\n\nThe "property()" function is implemented as a data descriptor.\nAccordingly, instances cannot override the behavior of a property.\n\n\n__slots__\n---------\n\nBy default, instances of classes have a dictionary for attribute\nstorage.  This wastes space for objects having very few instance\nvariables.  The space consumption can become acute when creating large\nnumbers of instances.\n\nThe default can be overridden by defining *__slots__* in a class\ndefinition. The *__slots__* declaration takes a sequence of instance\nvariables and reserves just enough space in each instance to hold a\nvalue for each variable.  Space is saved because *__dict__* is not\ncreated for each instance.\n\nobject.__slots__\n\n   This class variable can be assigned a string, iterable, or sequence\n   of strings with variable names used by instances.  If defined in a\n   class, *__slots__* reserves space for the declared variables and\n   prevents the automatic creation of *__dict__* and *__weakref__* for\n   each instance.\n\n\nNotes on using *__slots__*\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n* When inheriting from a class without *__slots__*, the *__dict__*\n  attribute of that class will always be accessible, so a *__slots__*\n  definition in the subclass is meaningless.\n\n* Without a *__dict__* variable, instances cannot be assigned new\n  variables not listed in the *__slots__* definition.  Attempts to\n  assign to an unlisted variable name raises "AttributeError". If\n  dynamic assignment of new variables is desired, then add\n  "\'__dict__\'" to the sequence of strings in the *__slots__*\n  declaration.\n\n* Without a *__weakref__* variable for each instance, classes\n  defining *__slots__* do not support weak references to its\n  instances. If weak reference support is needed, then add\n  "\'__weakref__\'" to the sequence of strings in the *__slots__*\n  declaration.\n\n* *__slots__* are implemented at the class level by creating\n  descriptors (*Implementing Descriptors*) for each variable name.  As\n  a result, class attributes cannot be used to set default values for\n  instance variables defined by *__slots__*; otherwise, the class\n  attribute would overwrite the descriptor assignment.\n\n* The action of a *__slots__* declaration is limited to the class\n  where it is defined.  As a result, subclasses will have a *__dict__*\n  unless they also define *__slots__* (which must only contain names\n  of any *additional* slots).\n\n* If a class defines a slot also defined in a base class, the\n  instance variable defined by the base class slot is inaccessible\n  (except by retrieving its descriptor directly from the base class).\n  This renders the meaning of the program undefined.  In the future, a\n  check may be added to prevent this.\n\n* Nonempty *__slots__* does not work for classes derived from\n  "variable-length" built-in types such as "int", "bytes" and "tuple".\n\n* Any non-string iterable may be assigned to *__slots__*. 
Mappings\n  may also be used; however, in the future, special meaning may be\n  assigned to the values corresponding to each key.\n\n* *__class__* assignment works only if both classes have the same\n  *__slots__*.\n\n\nCustomizing class creation\n==========================\n\nBy default, classes are constructed using "type()". The class body is\nexecuted in a new namespace and the class name is bound locally to the\nresult of "type(name, bases, namespace)".\n\nThe class creation process can be customised by passing the\n"metaclass" keyword argument in the class definition line, or by\ninheriting from an existing class that included such an argument. In\nthe following example, both "MyClass" and "MySubclass" are instances\nof "Meta":\n\n   class Meta(type):\n       pass\n\n   class MyClass(metaclass=Meta):\n       pass\n\n   class MySubclass(MyClass):\n       pass\n\nAny other keyword arguments that are specified in the class definition\nare passed through to all metaclass operations described below.\n\nWhen a class definition is executed, the following steps occur:\n\n* the appropriate metaclass is determined\n\n* the class namespace is prepared\n\n* the class body is executed\n\n* the class object is created\n\n\nDetermining the appropriate metaclass\n-------------------------------------\n\nThe appropriate metaclass for a class definition is determined as\nfollows:\n\n* if no bases and no explicit metaclass are given, then "type()" is\n  used\n\n* if an explicit metaclass is given and it is *not* an instance of\n  "type()", then it is used directly as the metaclass\n\n* if an instance of "type()" is given as the explicit metaclass, or\n  bases are defined, then the most derived metaclass is used\n\nThe most derived metaclass is selected from the explicitly specified\nmetaclass (if any) and the metaclasses (i.e. "type(cls)") of all\nspecified base classes. The most derived metaclass is one which is a\nsubtype of *all* of these candidate metaclasses. If none of the\ncandidate metaclasses meets that criterion, then the class definition\nwill fail with "TypeError".\n\n\nPreparing the class namespace\n-----------------------------\n\nOnce the appropriate metaclass has been identified, then the class\nnamespace is prepared. If the metaclass has a "__prepare__" attribute,\nit is called as "namespace = metaclass.__prepare__(name, bases,\n**kwds)" (where the additional keyword arguments, if any, come from\nthe class definition).\n\nIf the metaclass has no "__prepare__" attribute, then the class\nnamespace is initialised as an empty "dict()" instance.\n\nSee also: **PEP 3115** - Metaclasses in Python 3000\n\n     Introduced the "__prepare__" namespace hook\n\n\nExecuting the class body\n------------------------\n\nThe class body is executed (approximately) as "exec(body, globals(),\nnamespace)". The key difference from a normal call to "exec()" is that\nlexical scoping allows the class body (including any methods) to\nreference names from the current and outer scopes when the class\ndefinition occurs inside a function.\n\nHowever, even when the class definition occurs inside the function,\nmethods defined inside the class still cannot see names defined at the\nclass scope. 
Class variables must be accessed through the first\nparameter of instance or class methods, and cannot be accessed at all\nfrom static methods.\n\n\nCreating the class object\n-------------------------\n\nOnce the class namespace has been populated by executing the class\nbody, the class object is created by calling "metaclass(name, bases,\nnamespace, **kwds)" (the additional keywords passed here are the same\nas those passed to "__prepare__").\n\nThis class object is the one that will be referenced by the zero-\nargument form of "super()". "__class__" is an implicit closure\nreference created by the compiler if any methods in a class body refer\nto either "__class__" or "super". This allows the zero argument form\nof "super()" to correctly identify the class being defined based on\nlexical scoping, while the class or instance that was used to make the\ncurrent call is identified based on the first argument passed to the\nmethod.\n\nAfter the class object is created, it is passed to the class\ndecorators included in the class definition (if any) and the resulting\nobject is bound in the local namespace as the defined class.\n\nSee also: **PEP 3135** - New super\n\n     Describes the implicit "__class__" closure reference\n\n\nMetaclass example\n-----------------\n\nThe potential uses for metaclasses are boundless. Some ideas that have\nbeen explored include logging, interface checking, automatic\ndelegation, automatic property creation, proxies, frameworks, and\nautomatic resource locking/synchronization.\n\nHere is an example of a metaclass that uses an\n"collections.OrderedDict" to remember the order that class members\nwere defined:\n\n   class OrderedClass(type):\n\n        @classmethod\n        def __prepare__(metacls, name, bases, **kwds):\n           return collections.OrderedDict()\n\n        def __new__(cls, name, bases, namespace, **kwds):\n           result = type.__new__(cls, name, bases, dict(namespace))\n           result.members = tuple(namespace)\n           return result\n\n   class A(metaclass=OrderedClass):\n       def one(self): pass\n       def two(self): pass\n       def three(self): pass\n       def four(self): pass\n\n   >>> A.members\n   (\'__module__\', \'one\', \'two\', \'three\', \'four\')\n\nWhen the class definition for *A* gets executed, the process begins\nwith calling the metaclass\'s "__prepare__()" method which returns an\nempty "collections.OrderedDict".  That mapping records the methods and\nattributes of *A* as they are defined within the body of the class\nstatement. Once those definitions are executed, the ordered dictionary\nis fully populated and the metaclass\'s "__new__()" method gets\ninvoked.  That method builds the new type and it saves the ordered\ndictionary keys in an attribute called "members".\n\n\nCustomizing instance and subclass checks\n========================================\n\nThe following methods are used to override the default behavior of the\n"isinstance()" and "issubclass()" built-in functions.\n\nIn particular, the metaclass "abc.ABCMeta" implements these methods in\norder to allow the addition of Abstract Base Classes (ABCs) as\n"virtual base classes" to any class or type (including built-in\ntypes), including other ABCs.\n\nclass.__instancecheck__(self, instance)\n\n   Return true if *instance* should be considered a (direct or\n   indirect) instance of *class*. 
If defined, called to implement\n   "isinstance(instance, class)".\n\nclass.__subclasscheck__(self, subclass)\n\n   Return true if *subclass* should be considered a (direct or\n   indirect) subclass of *class*.  If defined, called to implement\n   "issubclass(subclass, class)".\n\nNote that these methods are looked up on the type (metaclass) of a\nclass.  They cannot be defined as class methods in the actual class.\nThis is consistent with the lookup of special methods that are called\non instances, only in this case the instance is itself a class.\n\nSee also: **PEP 3119** - Introducing Abstract Base Classes\n\n     Includes the specification for customizing "isinstance()" and\n     "issubclass()" behavior through "__instancecheck__()" and\n     "__subclasscheck__()", with motivation for this functionality in\n     the context of adding Abstract Base Classes (see the "abc"\n     module) to the language.\n\n\nEmulating callable objects\n==========================\n\nobject.__call__(self[, args...])\n\n   Called when the instance is "called" as a function; if this method\n   is defined, "x(arg1, arg2, ...)" is a shorthand for\n   "x.__call__(arg1, arg2, ...)".\n\n\nEmulating container types\n=========================\n\nThe following methods can be defined to implement container objects.\nContainers usually are sequences (such as lists or tuples) or mappings\n(like dictionaries), but can represent other containers as well.  The\nfirst set of methods is used either to emulate a sequence or to\nemulate a mapping; the difference is that for a sequence, the\nallowable keys should be the integers *k* for which "0 <= k < N" where\n*N* is the length of the sequence, or slice objects, which define a\nrange of items.  It is also recommended that mappings provide the\nmethods "keys()", "values()", "items()", "get()", "clear()",\n"setdefault()", "pop()", "popitem()", "copy()", and "update()"\nbehaving similar to those for Python\'s standard dictionary objects.\nThe "collections" module provides a "MutableMapping" abstract base\nclass to help create those methods from a base set of "__getitem__()",\n"__setitem__()", "__delitem__()", and "keys()". Mutable sequences\nshould provide methods "append()", "count()", "index()", "extend()",\n"insert()", "pop()", "remove()", "reverse()" and "sort()", like Python\nstandard list objects.  Finally, sequence types should implement\naddition (meaning concatenation) and multiplication (meaning\nrepetition) by defining the methods "__add__()", "__radd__()",\n"__iadd__()", "__mul__()", "__rmul__()" and "__imul__()" described\nbelow; they should not define other numerical operators.  It is\nrecommended that both mappings and sequences implement the\n"__contains__()" method to allow efficient use of the "in" operator;\nfor mappings, "in" should search the mapping\'s keys; for sequences, it\nshould search through the values.  It is further recommended that both\nmappings and sequences implement the "__iter__()" method to allow\nefficient iteration through the container; for mappings, "__iter__()"\nshould be the same as "keys()"; for sequences, it should iterate\nthrough the values.\n\nobject.__len__(self)\n\n   Called to implement the built-in function "len()".  Should return\n   the length of the object, an integer ">=" 0.  Also, an object that\n   doesn\'t define a "__bool__()" method and whose "__len__()" method\n   returns zero is considered to be false in a Boolean context.\n\nobject.__length_hint__(self)\n\n   Called to implement "operator.length_hint()". 
Should return an\n   estimated length for the object (which may be greater or less than\n   the actual length). The length must be an integer ">=" 0. This\n   method is purely an optimization and is never required for\n   correctness.\n\n   New in version 3.4.\n\nNote: Slicing is done exclusively with the following three methods.\n  A call like\n\n     a[1:2] = b\n\n  is translated to\n\n     a[slice(1, 2, None)] = b\n\n  and so forth.  Missing slice items are always filled in with "None".\n\nobject.__getitem__(self, key)\n\n   Called to implement evaluation of "self[key]". For sequence types,\n   the accepted keys should be integers and slice objects.  Note that\n   the special interpretation of negative indexes (if the class wishes\n   to emulate a sequence type) is up to the "__getitem__()" method. If\n   *key* is of an inappropriate type, "TypeError" may be raised; if of\n   a value outside the set of indexes for the sequence (after any\n   special interpretation of negative values), "IndexError" should be\n   raised. For mapping types, if *key* is missing (not in the\n   container), "KeyError" should be raised.\n\n   Note: "for" loops expect that an "IndexError" will be raised for\n     illegal indexes to allow proper detection of the end of the\n     sequence.\n\nobject.__setitem__(self, key, value)\n\n   Called to implement assignment to "self[key]".  Same note as for\n   "__getitem__()".  This should only be implemented for mappings if\n   the objects support changes to the values for keys, or if new keys\n   can be added, or for sequences if elements can be replaced.  The\n   same exceptions should be raised for improper *key* values as for\n   the "__getitem__()" method.\n\nobject.__delitem__(self, key)\n\n   Called to implement deletion of "self[key]".  Same note as for\n   "__getitem__()".  This should only be implemented for mappings if\n   the objects support removal of keys, or for sequences if elements\n   can be removed from the sequence.  The same exceptions should be\n   raised for improper *key* values as for the "__getitem__()" method.\n\nobject.__iter__(self)\n\n   This method is called when an iterator is required for a container.\n   This method should return a new iterator object that can iterate\n   over all the objects in the container.  For mappings, it should\n   iterate over the keys of the container, and should also be made\n   available as the method "keys()".\n\n   Iterator objects also need to implement this method; they are\n   required to return themselves.  For more information on iterator\n   objects, see *Iterator Types*.\n\nobject.__reversed__(self)\n\n   Called (if present) by the "reversed()" built-in to implement\n   reverse iteration.  It should return a new iterator object that\n   iterates over all the objects in the container in reverse order.\n\n   If the "__reversed__()" method is not provided, the "reversed()"\n   built-in will fall back to using the sequence protocol ("__len__()"\n   and "__getitem__()").  Objects that support the sequence protocol\n   should only provide "__reversed__()" if they can provide an\n   implementation that is more efficient than the one provided by\n   "reversed()".\n\nThe membership test operators ("in" and "not in") are normally\nimplemented as an iteration through a sequence.  
However, container\nobjects can supply the following special method with a more efficient\nimplementation, which also does not require the object be a sequence.\n\nobject.__contains__(self, item)\n\n   Called to implement membership test operators.  Should return true\n   if *item* is in *self*, false otherwise.  For mapping objects, this\n   should consider the keys of the mapping rather than the values or\n   the key-item pairs.\n\n   For objects that don\'t define "__contains__()", the membership test\n   first tries iteration via "__iter__()", then the old sequence\n   iteration protocol via "__getitem__()", see *this section in the\n   language reference*.\n\n\nEmulating numeric types\n=======================\n\nThe following methods can be defined to emulate numeric objects.\nMethods corresponding to operations that are not supported by the\nparticular kind of number implemented (e.g., bitwise operations for\nnon-integral numbers) should be left undefined.\n\nobject.__add__(self, other)\nobject.__sub__(self, other)\nobject.__mul__(self, other)\nobject.__truediv__(self, other)\nobject.__floordiv__(self, other)\nobject.__mod__(self, other)\nobject.__divmod__(self, other)\nobject.__pow__(self, other[, modulo])\nobject.__lshift__(self, other)\nobject.__rshift__(self, other)\nobject.__and__(self, other)\nobject.__xor__(self, other)\nobject.__or__(self, other)\n\n   These methods are called to implement the binary arithmetic\n   operations ("+", "-", "*", "/", "//", "%", "divmod()", "pow()",\n   "**", "<<", ">>", "&", "^", "|").  For instance, to evaluate the\n   expression "x + y", where *x* is an instance of a class that has an\n   "__add__()" method, "x.__add__(y)" is called.  The "__divmod__()"\n   method should be the equivalent to using "__floordiv__()" and\n   "__mod__()"; it should not be related to "__truediv__()".  Note\n   that "__pow__()" should be defined to accept an optional third\n   argument if the ternary version of the built-in "pow()" function is\n   to be supported.\n\n   If one of those methods does not support the operation with the\n   supplied arguments, it should return "NotImplemented".\n\nobject.__radd__(self, other)\nobject.__rsub__(self, other)\nobject.__rmul__(self, other)\nobject.__rtruediv__(self, other)\nobject.__rfloordiv__(self, other)\nobject.__rmod__(self, other)\nobject.__rdivmod__(self, other)\nobject.__rpow__(self, other)\nobject.__rlshift__(self, other)\nobject.__rrshift__(self, other)\nobject.__rand__(self, other)\nobject.__rxor__(self, other)\nobject.__ror__(self, other)\n\n   These methods are called to implement the binary arithmetic\n   operations ("+", "-", "*", "/", "//", "%", "divmod()", "pow()",\n   "**", "<<", ">>", "&", "^", "|") with reflected (swapped) operands.\n   These functions are only called if the left operand does not\n   support the corresponding operation and the operands are of\n   different types. [2]  For instance, to evaluate the expression "x -\n   y", where *y* is an instance of a class that has an "__rsub__()"\n   method, "y.__rsub__(x)" is called if "x.__sub__(y)" returns\n   *NotImplemented*.\n\n   Note that ternary "pow()" will not try calling "__rpow__()" (the\n   coercion rules would become too complicated).\n\n   Note: If the right operand\'s type is a subclass of the left\n     operand\'s type and that subclass provides the reflected method\n     for the operation, this method will be called before the left\n     operand\'s non-reflected method.  
This behavior allows subclasses\n     to override their ancestors\' operations.\n\nobject.__iadd__(self, other)\nobject.__isub__(self, other)\nobject.__imul__(self, other)\nobject.__itruediv__(self, other)\nobject.__ifloordiv__(self, other)\nobject.__imod__(self, other)\nobject.__ipow__(self, other[, modulo])\nobject.__ilshift__(self, other)\nobject.__irshift__(self, other)\nobject.__iand__(self, other)\nobject.__ixor__(self, other)\nobject.__ior__(self, other)\n\n   These methods are called to implement the augmented arithmetic\n   assignments ("+=", "-=", "*=", "/=", "//=", "%=", "**=", "<<=",\n   ">>=", "&=", "^=", "|=").  These methods should attempt to do the\n   operation in-place (modifying *self*) and return the result (which\n   could be, but does not have to be, *self*).  If a specific method\n   is not defined, the augmented assignment falls back to the normal\n   methods.  For instance, if *x* is an instance of a class with an\n   "__iadd__()" method, "x += y" is equivalent to "x = x.__iadd__(y)"\n   . Otherwise, "x.__add__(y)" and "y.__radd__(x)" are considered, as\n   with the evaluation of "x + y". In certain situations, augmented\n   assignment can result in unexpected errors (see *Why does\n   a_tuple[i] += [\'item\'] raise an exception when the addition\n   works?*), but this behavior is in fact part of the data model.\n\nobject.__neg__(self)\nobject.__pos__(self)\nobject.__abs__(self)\nobject.__invert__(self)\n\n   Called to implement the unary arithmetic operations ("-", "+",\n   "abs()" and "~").\n\nobject.__complex__(self)\nobject.__int__(self)\nobject.__float__(self)\nobject.__round__(self[, n])\n\n   Called to implement the built-in functions "complex()", "int()",\n   "float()" and "round()".  Should return a value of the appropriate\n   type.\n\nobject.__index__(self)\n\n   Called to implement "operator.index()", and whenever Python needs\n   to losslessly convert the numeric object to an integer object (such\n   as in slicing, or in the built-in "bin()", "hex()" and "oct()"\n   functions). Presence of this method indicates that the numeric\n   object is an integer type.  Must return an integer.\n\n   Note: When "__index__()" is defined, "__int__()" should also be\n     defined, and both shuld return the same value, in order to have a\n     coherent integer type class.\n\n\nWith Statement Context Managers\n===============================\n\nA *context manager* is an object that defines the runtime context to\nbe established when executing a "with" statement. The context manager\nhandles the entry into, and the exit from, the desired runtime context\nfor the execution of the block of code.  Context managers are normally\ninvoked using the "with" statement (described in section *The with\nstatement*), but can also be used by directly invoking their methods.\n\nTypical uses of context managers include saving and restoring various\nkinds of global state, locking and unlocking resources, closing opened\nfiles, etc.\n\nFor more information on context managers, see *Context Manager Types*.\n\nobject.__enter__(self)\n\n   Enter the runtime context related to this object. The "with"\n   statement will bind this method\'s return value to the target(s)\n   specified in the "as" clause of the statement, if any.\n\nobject.__exit__(self, exc_type, exc_value, traceback)\n\n   Exit the runtime context related to this object. The parameters\n   describe the exception that caused the context to be exited. 
If the\n   context was exited without an exception, all three arguments will\n   be "None".\n\n   If an exception is supplied, and the method wishes to suppress the\n   exception (i.e., prevent it from being propagated), it should\n   return a true value. Otherwise, the exception will be processed\n   normally upon exit from this method.\n\n   Note that "__exit__()" methods should not reraise the passed-in\n   exception; this is the caller\'s responsibility.\n\nSee also: **PEP 0343** - The "with" statement\n\n     The specification, background, and examples for the Python "with"\n     statement.\n\n\nSpecial method lookup\n=====================\n\nFor custom classes, implicit invocations of special methods are only\nguaranteed to work correctly if defined on an object\'s type, not in\nthe object\'s instance dictionary.  That behaviour is the reason why\nthe following code raises an exception:\n\n   >>> class C:\n   ...     pass\n   ...\n   >>> c = C()\n   >>> c.__len__ = lambda: 5\n   >>> len(c)\n   Traceback (most recent call last):\n     File "<stdin>", line 1, in <module>\n   TypeError: object of type \'C\' has no len()\n\nThe rationale behind this behaviour lies with a number of special\nmethods such as "__hash__()" and "__repr__()" that are implemented by\nall objects, including type objects. If the implicit lookup of these\nmethods used the conventional lookup process, they would fail when\ninvoked on the type object itself:\n\n   >>> 1 .__hash__() == hash(1)\n   True\n   >>> int.__hash__() == hash(int)\n   Traceback (most recent call last):\n     File "<stdin>", line 1, in <module>\n   TypeError: descriptor \'__hash__\' of \'int\' object needs an argument\n\nIncorrectly attempting to invoke an unbound method of a class in this\nway is sometimes referred to as \'metaclass confusion\', and is avoided\nby bypassing the instance when looking up special methods:\n\n   >>> type(1).__hash__(1) == hash(1)\n   True\n   >>> type(int).__hash__(int) == hash(int)\n   True\n\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses\nthe "__getattribute__()" method even of the object\'s metaclass:\n\n   >>> class Meta(type):\n   ...    def __getattribute__(*args):\n   ...       print("Metaclass getattribute invoked")\n   ...       return type.__getattribute__(*args)\n   ...\n   >>> class C(object, metaclass=Meta):\n   ...     def __len__(self):\n   ...         return 10\n   ...     def __getattribute__(*args):\n   ...         print("Class getattribute invoked")\n   ...         return object.__getattribute__(*args)\n   ...\n   >>> c = C()\n   >>> c.__len__()                 # Explicit lookup via instance\n   Class getattribute invoked\n   10\n   >>> type(c).__len__(c)          # Explicit lookup via type\n   Metaclass getattribute invoked\n   10\n   >>> len(c)                      # Implicit lookup\n   10\n\nBypassing the "__getattribute__()" machinery in this fashion provides\nsignificant scope for speed optimisations within the interpreter, at\nthe cost of some flexibility in the handling of special methods (the\nspecial method *must* be set on the class object itself in order to be\nconsistently invoked by the interpreter).\n\n-[ Footnotes ]-\n\n[1] It *is* possible in some cases to change an object\'s type,\n    under certain controlled conditions. 
It generally isn\'t a good\n    idea though, since it can lead to some very strange behaviour if\n    it is handled incorrectly.\n\n[2] For operands of the same type, it is assumed that if the non-\n    reflected method (such as "__add__()") fails the operation is not\n    supported, which is why the reflected method is not called.\n',
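The data-model machinery documented above lends itself to a few short sketches. First, a minimal illustration of *__slots__* (the class "Point" and its attributes are invented for the example, not taken from the reference text):

   class Point:
       __slots__ = ('x', 'y')        # no per-instance __dict__ is created

       def __init__(self, x, y):
           self.x = x
           self.y = y

   p = Point(1, 2)
   p.x = 10                          # 'x' is listed in __slots__, so this works
   # p.z = 3 would raise AttributeError, because 'z' is not listed in __slots__

Next, a minimal sequence-like container, again purely illustrative, showing the container protocol ("__len__()", "__getitem__()", "__iter__()", "__contains__()"). Negative indexes and raising "IndexError" for out-of-range values are the class's own responsibility:

   class Squares:
       """Read-only sequence of the first n square numbers (illustrative)."""

       def __init__(self, n):
           self._n = n

       def __len__(self):
           return self._n

       def __getitem__(self, index):
           if isinstance(index, slice):
               return [self[i] for i in range(*index.indices(self._n))]
           if index < 0:                 # emulate negative indexing
               index += self._n
           if not 0 <= index < self._n:
               raise IndexError(index)   # also terminates old-style "for" iteration
           return index * index

       def __iter__(self):
           return (i * i for i in range(self._n))

       def __contains__(self, item):
           return any(item == value for value in self)

Finally, a minimal context manager showing the "__enter__()"/"__exit__()" pairing; the timing behaviour is an assumption made for the sketch, not something the "with" statement requires:

   import time

   class timed_block:
       def __enter__(self):
           self._start = time.perf_counter()
           return self                   # bound to the "as" target, if any

       def __exit__(self, exc_type, exc_value, traceback):
           print('took %.3f s' % (time.perf_counter() - self._start))
           return False                  # never suppress the exception

   with timed_block():
       sum(range(10**6))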
  'string-methods': '\nString Methods\n**************\n\nStrings implement all of the *common* sequence operations, along with\nthe additional methods described below.\n\nStrings also support two styles of string formatting, one providing a\nlarge degree of flexibility and customization (see "str.format()",\n*Format String Syntax* and *String Formatting*) and the other based on\nC "printf" style formatting that handles a narrower range of types and\nis slightly harder to use correctly, but is often faster for the cases\nit can handle (*printf-style String Formatting*).\n\nThe *Text Processing Services* section of the standard library covers\na number of other modules that provide various text related utilities\n(including regular expression support in the "re" module).\n\nstr.capitalize()\n\n   Return a copy of the string with its first character capitalized\n   and the rest lowercased.\n\nstr.casefold()\n\n   Return a casefolded copy of the string. Casefolded strings may be\n   used for caseless matching.\n\n   Casefolding is similar to lowercasing but more aggressive because\n   it is intended to remove all case distinctions in a string. For\n   example, the German lowercase letter "\'\xc3\x9f\'" is equivalent to ""ss"".\n   Since it is already lowercase, "lower()" would do nothing to "\'\xc3\x9f\'";\n   "casefold()" converts it to ""ss"".\n\n   The casefolding algorithm is described in section 3.13 of the\n   Unicode Standard.\n\n   New in version 3.3.\n\nstr.center(width[, fillchar])\n\n   Return centered in a string of length *width*. Padding is done\n   using the specified *fillchar* (default is a space).\n\nstr.count(sub[, start[, end]])\n\n   Return the number of non-overlapping occurrences of substring *sub*\n   in the range [*start*, *end*].  Optional arguments *start* and\n   *end* are interpreted as in slice notation.\n\nstr.encode(encoding="utf-8", errors="strict")\n\n   Return an encoded version of the string as a bytes object. Default\n   encoding is "\'utf-8\'". *errors* may be given to set a different\n   error handling scheme. The default for *errors* is "\'strict\'",\n   meaning that encoding errors raise a "UnicodeError". Other possible\n   values are "\'ignore\'", "\'replace\'", "\'xmlcharrefreplace\'",\n   "\'backslashreplace\'" and any other name registered via\n   "codecs.register_error()", see section *Codec Base Classes*. For a\n   list of possible encodings, see section *Standard Encodings*.\n\n   Changed in version 3.1: Support for keyword arguments added.\n\nstr.endswith(suffix[, start[, end]])\n\n   Return "True" if the string ends with the specified *suffix*,\n   otherwise return "False".  *suffix* can also be a tuple of suffixes\n   to look for.  With optional *start*, test beginning at that\n   position.  With optional *end*, stop comparing at that position.\n\nstr.expandtabs(tabsize=8)\n\n   Return a copy of the string where all tab characters are replaced\n   by one or more spaces, depending on the current column and the\n   given tab size.  Tab positions occur every *tabsize* characters\n   (default is 8, giving tab positions at columns 0, 8, 16 and so on).\n   To expand the string, the current column is set to zero and the\n   string is examined character by character.  If the character is a\n   tab ("\\t"), one or more space characters are inserted in the result\n   until the current column is equal to the next tab position. (The\n   tab character itself is not copied.)  
If the character is a newline\n   ("\\n") or return ("\\r"), it is copied and the current column is\n   reset to zero.  Any other character is copied unchanged and the\n   current column is incremented by one regardless of how the\n   character is represented when printed.\n\n   >>> \'01\\t012\\t0123\\t01234\'.expandtabs()\n   \'01      012     0123    01234\'\n   >>> \'01\\t012\\t0123\\t01234\'.expandtabs(4)\n   \'01  012 0123    01234\'\n\nstr.find(sub[, start[, end]])\n\n   Return the lowest index in the string where substring *sub* is\n   found, such that *sub* is contained in the slice "s[start:end]".\n   Optional arguments *start* and *end* are interpreted as in slice\n   notation.  Return "-1" if *sub* is not found.\n\n   Note: The "find()" method should be used only if you need to know\n     the position of *sub*.  To check if *sub* is a substring or not,\n     use the "in" operator:\n\n        >>> \'Py\' in \'Python\'\n        True\n\nstr.format(*args, **kwargs)\n\n   Perform a string formatting operation.  The string on which this\n   method is called can contain literal text or replacement fields\n   delimited by braces "{}".  Each replacement field contains either\n   the numeric index of a positional argument, or the name of a\n   keyword argument.  Returns a copy of the string where each\n   replacement field is replaced with the string value of the\n   corresponding argument.\n\n   >>> "The sum of 1 + 2 is {0}".format(1+2)\n   \'The sum of 1 + 2 is 3\'\n\n   See *Format String Syntax* for a description of the various\n   formatting options that can be specified in format strings.\n\nstr.format_map(mapping)\n\n   Similar to "str.format(**mapping)", except that "mapping" is used\n   directly and not copied to a "dict".  This is useful if for example\n   "mapping" is a dict subclass:\n\n   >>> class Default(dict):\n   ...     def __missing__(self, key):\n   ...         return key\n   ...\n   >>> \'{name} was born in {country}\'.format_map(Default(name=\'Guido\'))\n   \'Guido was born in country\'\n\n   New in version 3.2.\n\nstr.index(sub[, start[, end]])\n\n   Like "find()", but raise "ValueError" when the substring is not\n   found.\n\nstr.isalnum()\n\n   Return true if all characters in the string are alphanumeric and\n   there is at least one character, false otherwise.  A character "c"\n   is alphanumeric if one of the following returns "True":\n   "c.isalpha()", "c.isdecimal()", "c.isdigit()", or "c.isnumeric()".\n\nstr.isalpha()\n\n   Return true if all characters in the string are alphabetic and\n   there is at least one character, false otherwise.  Alphabetic\n   characters are those characters defined in the Unicode character\n   database as "Letter", i.e., those with general category property\n   being one of "Lm", "Lt", "Lu", "Ll", or "Lo".  Note that this is\n   different from the "Alphabetic" property defined in the Unicode\n   Standard.\n\nstr.isdecimal()\n\n   Return true if all characters in the string are decimal characters\n   and there is at least one character, false otherwise. Decimal\n   characters are those from general category "Nd". This category\n   includes digit characters, and all characters that can be used to\n   form decimal-radix numbers, e.g. U+0660, ARABIC-INDIC DIGIT ZERO.\n\nstr.isdigit()\n\n   Return true if all characters in the string are digits and there is\n   at least one character, false otherwise.  Digits include decimal\n   characters and digits that need special handling, such as the\n   compatibility superscript digits.  
Formally, a digit is a character\n   that has the property value Numeric_Type=Digit or\n   Numeric_Type=Decimal.\n\nstr.isidentifier()\n\n   Return true if the string is a valid identifier according to the\n   language definition, section *Identifiers and keywords*.\n\n   Use "keyword.iskeyword()" to test for reserved identifiers such as\n   "def" and "class".\n\nstr.islower()\n\n   Return true if all cased characters [4] in the string are lowercase\n   and there is at least one cased character, false otherwise.\n\nstr.isnumeric()\n\n   Return true if all characters in the string are numeric characters,\n   and there is at least one character, false otherwise. Numeric\n   characters include digit characters, and all characters that have\n   the Unicode numeric value property, e.g. U+2155, VULGAR FRACTION\n   ONE FIFTH.  Formally, numeric characters are those with the\n   property value Numeric_Type=Digit, Numeric_Type=Decimal or\n   Numeric_Type=Numeric.\n\nstr.isprintable()\n\n   Return true if all characters in the string are printable or the\n   string is empty, false otherwise.  Nonprintable characters are\n   those characters defined in the Unicode character database as\n   "Other" or "Separator", excepting the ASCII space (0x20) which is\n   considered printable.  (Note that printable characters in this\n   context are those which should not be escaped when "repr()" is\n   invoked on a string.  It has no bearing on the handling of strings\n   written to "sys.stdout" or "sys.stderr".)\n\nstr.isspace()\n\n   Return true if there are only whitespace characters in the string\n   and there is at least one character, false otherwise.  Whitespace\n   characters  are those characters defined in the Unicode character\n   database as "Other" or "Separator" and those with bidirectional\n   property being one of "WS", "B", or "S".\n\nstr.istitle()\n\n   Return true if the string is a titlecased string and there is at\n   least one character, for example uppercase characters may only\n   follow uncased characters and lowercase characters only cased ones.\n   Return false otherwise.\n\nstr.isupper()\n\n   Return true if all cased characters [4] in the string are uppercase\n   and there is at least one cased character, false otherwise.\n\nstr.join(iterable)\n\n   Return a string which is the concatenation of the strings in the\n   *iterable* *iterable*.  A "TypeError" will be raised if there are\n   any non-string values in *iterable*, including "bytes" objects.\n   The separator between elements is the string providing this method.\n\nstr.ljust(width[, fillchar])\n\n   Return the string left justified in a string of length *width*.\n   Padding is done using the specified *fillchar* (default is a\n   space).  The original string is returned if *width* is less than or\n   equal to "len(s)".\n\nstr.lower()\n\n   Return a copy of the string with all the cased characters [4]\n   converted to lowercase.\n\n   The lowercasing algorithm used is described in section 3.13 of the\n   Unicode Standard.\n\nstr.lstrip([chars])\n\n   Return a copy of the string with leading characters removed.  The\n   *chars* argument is a string specifying the set of characters to be\n   removed.  If omitted or "None", the *chars* argument defaults to\n   removing whitespace.  
The *chars* argument is not a prefix; rather,\n   all combinations of its values are stripped:\n\n   >>> \'   spacious   \'.lstrip()\n   \'spacious   \'\n   >>> \'www.example.com\'.lstrip(\'cmowz.\')\n   \'example.com\'\n\nstatic str.maketrans(x[, y[, z]])\n\n   This static method returns a translation table usable for\n   "str.translate()".\n\n   If there is only one argument, it must be a dictionary mapping\n   Unicode ordinals (integers) or characters (strings of length 1) to\n   Unicode ordinals, strings (of arbitrary lengths) or None.\n   Character keys will then be converted to ordinals.\n\n   If there are two arguments, they must be strings of equal length,\n   and in the resulting dictionary, each character in x will be mapped\n   to the character at the same position in y.  If there is a third\n   argument, it must be a string, whose characters will be mapped to\n   None in the result.\n\nstr.partition(sep)\n\n   Split the string at the first occurrence of *sep*, and return a\n   3-tuple containing the part before the separator, the separator\n   itself, and the part after the separator.  If the separator is not\n   found, return a 3-tuple containing the string itself, followed by\n   two empty strings.\n\nstr.replace(old, new[, count])\n\n   Return a copy of the string with all occurrences of substring *old*\n   replaced by *new*.  If the optional argument *count* is given, only\n   the first *count* occurrences are replaced.\n\nstr.rfind(sub[, start[, end]])\n\n   Return the highest index in the string where substring *sub* is\n   found, such that *sub* is contained within "s[start:end]".\n   Optional arguments *start* and *end* are interpreted as in slice\n   notation.  Return "-1" on failure.\n\nstr.rindex(sub[, start[, end]])\n\n   Like "rfind()" but raises "ValueError" when the substring *sub* is\n   not found.\n\nstr.rjust(width[, fillchar])\n\n   Return the string right justified in a string of length *width*.\n   Padding is done using the specified *fillchar* (default is a\n   space). The original string is returned if *width* is less than or\n   equal to "len(s)".\n\nstr.rpartition(sep)\n\n   Split the string at the last occurrence of *sep*, and return a\n   3-tuple containing the part before the separator, the separator\n   itself, and the part after the separator.  If the separator is not\n   found, return a 3-tuple containing two empty strings, followed by\n   the string itself.\n\nstr.rsplit(sep=None, maxsplit=-1)\n\n   Return a list of the words in the string, using *sep* as the\n   delimiter string. If *maxsplit* is given, at most *maxsplit* splits\n   are done, the *rightmost* ones.  If *sep* is not specified or\n   "None", any whitespace string is a separator.  Except for splitting\n   from the right, "rsplit()" behaves like "split()" which is\n   described in detail below.\n\nstr.rstrip([chars])\n\n   Return a copy of the string with trailing characters removed.  The\n   *chars* argument is a string specifying the set of characters to be\n   removed.  If omitted or "None", the *chars* argument defaults to\n   removing whitespace.  The *chars* argument is not a suffix; rather,\n   all combinations of its values are stripped:\n\n   >>> \'   spacious   \'.rstrip()\n   \'   spacious\'\n   >>> \'mississippi\'.rstrip(\'ipz\')\n   \'mississ\'\n\nstr.split(sep=None, maxsplit=-1)\n\n   Return a list of the words in the string, using *sep* as the\n   delimiter string.  
If *maxsplit* is given, at most *maxsplit*\n   splits are done (thus, the list will have at most "maxsplit+1"\n   elements).  If *maxsplit* is not specified or "-1", then there is\n   no limit on the number of splits (all possible splits are made).\n\n   If *sep* is given, consecutive delimiters are not grouped together\n   and are deemed to delimit empty strings (for example,\n   "\'1,,2\'.split(\',\')" returns "[\'1\', \'\', \'2\']").  The *sep* argument\n   may consist of multiple characters (for example,\n   "\'1<>2<>3\'.split(\'<>\')" returns "[\'1\', \'2\', \'3\']"). Splitting an\n   empty string with a specified separator returns "[\'\']".\n\n   If *sep* is not specified or is "None", a different splitting\n   algorithm is applied: runs of consecutive whitespace are regarded\n   as a single separator, and the result will contain no empty strings\n   at the start or end if the string has leading or trailing\n   whitespace.  Consequently, splitting an empty string or a string\n   consisting of just whitespace with a "None" separator returns "[]".\n\n   For example, "\' 1  2   3  \'.split()" returns "[\'1\', \'2\', \'3\']", and\n   "\'  1  2   3  \'.split(None, 1)" returns "[\'1\', \'2   3  \']".\n\nstr.splitlines([keepends])\n\n   Return a list of the lines in the string, breaking at line\n   boundaries. This method uses the *universal newlines* approach to\n   splitting lines. Line breaks are not included in the resulting list\n   unless *keepends* is given and true.\n\n   For example, "\'ab c\\n\\nde fg\\rkl\\r\\n\'.splitlines()" returns "[\'ab\n   c\', \'\', \'de fg\', \'kl\']", while the same call with\n   "splitlines(True)" returns "[\'ab c\\n\', \'\\n\', \'de fg\\r\', \'kl\\r\\n\']".\n\n   Unlike "split()" when a delimiter string *sep* is given, this\n   method returns an empty list for the empty string, and a terminal\n   line break does not result in an extra line.\n\nstr.startswith(prefix[, start[, end]])\n\n   Return "True" if string starts with the *prefix*, otherwise return\n   "False". *prefix* can also be a tuple of prefixes to look for.\n   With optional *start*, test string beginning at that position.\n   With optional *end*, stop comparing string at that position.\n\nstr.strip([chars])\n\n   Return a copy of the string with the leading and trailing\n   characters removed. The *chars* argument is a string specifying the\n   set of characters to be removed. If omitted or "None", the *chars*\n   argument defaults to removing whitespace. The *chars* argument is\n   not a prefix or suffix; rather, all combinations of its values are\n   stripped:\n\n   >>> \'   spacious   \'.strip()\n   \'spacious\'\n   >>> \'www.example.com\'.strip(\'cmowz.\')\n   \'example\'\n\nstr.swapcase()\n\n   Return a copy of the string with uppercase characters converted to\n   lowercase and vice versa. Note that it is not necessarily true that\n   "s.swapcase().swapcase() == s".\n\nstr.title()\n\n   Return a titlecased version of the string where words start with an\n   uppercase character and the remaining characters are lowercase.\n\n   The algorithm uses a simple language-independent definition of a\n   word as groups of consecutive letters.  
The definition works in\n   many contexts but it means that apostrophes in contractions and\n   possessives form word boundaries, which may not be the desired\n   result:\n\n      >>> "they\'re bill\'s friends from the UK".title()\n      "They\'Re Bill\'S Friends From The Uk"\n\n   A workaround for apostrophes can be constructed using regular\n   expressions:\n\n      >>> import re\n      >>> def titlecase(s):\n      ...     return re.sub(r"[A-Za-z]+(\'[A-Za-z]+)?",\n      ...                   lambda mo: mo.group(0)[0].upper() +\n      ...                              mo.group(0)[1:].lower(),\n      ...                   s)\n      ...\n      >>> titlecase("they\'re bill\'s friends.")\n      "They\'re Bill\'s Friends."\n\nstr.translate(map)\n\n   Return a copy of the *s* where all characters have been mapped\n   through the *map* which must be a dictionary of Unicode ordinals\n   (integers) to Unicode ordinals, strings or "None".  Unmapped\n   characters are left untouched. Characters mapped to "None" are\n   deleted.\n\n   You can use "str.maketrans()" to create a translation map from\n   character-to-character mappings in different formats.\n\n   Note: An even more flexible approach is to create a custom\n     character mapping codec using the "codecs" module (see\n     "encodings.cp1251" for an example).\n\nstr.upper()\n\n   Return a copy of the string with all the cased characters [4]\n   converted to uppercase.  Note that "str.upper().isupper()" might be\n   "False" if "s" contains uncased characters or if the Unicode\n   category of the resulting character(s) is not "Lu" (Letter,\n   uppercase), but e.g. "Lt" (Letter, titlecase).\n\n   The uppercasing algorithm used is described in section 3.13 of the\n   Unicode Standard.\n\nstr.zfill(width)\n\n   Return the numeric string left filled with zeros in a string of\n   length *width*.  A sign prefix is handled correctly.  The original\n   string is returned if *width* is less than or equal to "len(s)".\n',
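An illustrative interpreter session tying together several of the methods above: "str.maketrans()" with "str.translate()", "str.partition()", and "str.split()"/"str.rsplit()" with a *maxsplit* limit. The sample strings are invented for the example:

   >>> table = str.maketrans('ae', '43', ' ')   # 'a'->'4', 'e'->'3', delete spaces
   >>> 'a banana reader'.translate(table)
   '4b4n4n4r34d3r'
   >>> 'python.org:443'.partition(':')
   ('python.org', ':', '443')
   >>> 'usr/local/bin'.split('/', 1)
   ['usr', 'local/bin']
   >>> 'usr/local/bin'.rsplit('/', 1)
   ['usr/local', 'bin']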
  'strings': '\nString and Bytes literals\n*************************\n\nString literals are described by the following lexical definitions:\n\n   stringliteral   ::= [stringprefix](shortstring | longstring)\n   stringprefix    ::= "r" | "u" | "R" | "U"\n   shortstring     ::= "\'" shortstringitem* "\'" | \'"\' shortstringitem* \'"\'\n   longstring      ::= "\'\'\'" longstringitem* "\'\'\'" | \'"""\' longstringitem* \'"""\'\n   shortstringitem ::= shortstringchar | stringescapeseq\n   longstringitem  ::= longstringchar | stringescapeseq\n   shortstringchar ::= <any source character except "\\" or newline or the quote>\n   longstringchar  ::= <any source character except "\\">\n   stringescapeseq ::= "\\" <any source character>\n\n   bytesliteral   ::= bytesprefix(shortbytes | longbytes)\n   bytesprefix    ::= "b" | "B" | "br" | "Br" | "bR" | "BR" | "rb" | "rB" | "Rb" | "RB"\n   shortbytes     ::= "\'" shortbytesitem* "\'" | \'"\' shortbytesitem* \'"\'\n   longbytes      ::= "\'\'\'" longbytesitem* "\'\'\'" | \'"""\' longbytesitem* \'"""\'\n   shortbytesitem ::= shortbyteschar | bytesescapeseq\n   longbytesitem  ::= longbyteschar | bytesescapeseq\n   shortbyteschar ::= <any ASCII character except "\\" or newline or the quote>\n   longbyteschar  ::= <any ASCII character except "\\">\n   bytesescapeseq ::= "\\" <any ASCII character>\n\nOne syntactic restriction not indicated by these productions is that\nwhitespace is not allowed between the "stringprefix" or "bytesprefix"\nand the rest of the literal. The source character set is defined by\nthe encoding declaration; it is UTF-8 if no encoding declaration is\ngiven in the source file; see section *Encoding declarations*.\n\nIn plain English: Both types of literals can be enclosed in matching\nsingle quotes ("\'") or double quotes (""").  They can also be enclosed\nin matching groups of three single or double quotes (these are\ngenerally referred to as *triple-quoted strings*).  The backslash\n("\\") character is used to escape characters that otherwise have a\nspecial meaning, such as newline, backslash itself, or the quote\ncharacter.\n\nBytes literals are always prefixed with "\'b\'" or "\'B\'"; they produce\nan instance of the "bytes" type instead of the "str" type.  They may\nonly contain ASCII characters; bytes with a numeric value of 128 or\ngreater must be expressed with escapes.\n\nAs of Python 3.3 it is possible again to prefix unicode strings with a\n"u" prefix to simplify maintenance of dual 2.x and 3.x codebases.\n\nBoth string and bytes literals may optionally be prefixed with a\nletter "\'r\'" or "\'R\'"; such strings are called *raw strings* and treat\nbackslashes as literal characters.  As a result, in string literals,\n"\'\\U\'" and "\'\\u\'" escapes in raw strings are not treated specially.\nGiven that Python 2.x\'s raw unicode literals behave differently than\nPython 3.x\'s the "\'ur\'" syntax is not supported.\n\n   New in version 3.3: The "\'rb\'" prefix of raw bytes literals has\n   been added as a synonym of "\'br\'".\n\n   New in version 3.3: Support for the unicode legacy literal\n   ("u\'value\'") was reintroduced to simplify the maintenance of dual\n   Python 2.x and 3.x codebases. See **PEP 414** for more information.\n\nIn triple-quoted strings, unescaped newlines and quotes are allowed\n(and are retained), except that three unescaped quotes in a row\nterminate the string.  (A "quote" is the character used to open the\nstring, i.e. 
either "\'" or """.)\n\nUnless an "\'r\'" or "\'R\'" prefix is present, escape sequences in\nstrings are interpreted according to rules similar to those used by\nStandard C.  The recognized escape sequences are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence   | Meaning                           | Notes   |\n+===================+===================================+=========+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n+-------------------+-----------------------------------+---------+\n| "\\ooo"            | Character with octal value *ooo*  | (1,3)   |\n+-------------------+-----------------------------------+---------+\n| "\\xhh"            | Character with hex value *hh*     | (2,3)   |\n+-------------------+-----------------------------------+---------+\n\nEscape sequences only recognized in string literals are:\n\n+-------------------+-----------------------------------+---------+\n| Escape Sequence   | Meaning                           | Notes   |\n+===================+===================================+=========+\n| "\\N{name}"        | Character named *name* in the     | (4)     |\n+-------------------+-----------------------------------+---------+\n| "\\uxxxx"          | Character with 16-bit hex value   | (5)     |\n+-------------------+-----------------------------------+---------+\n| "\\Uxxxxxxxx"      | Character with 32-bit hex value   | (6)     |\n+-------------------+-----------------------------------+---------+\n\nNotes:\n\n1. As in Standard C, up to three octal digits are accepted.\n\n2. Unlike in Standard C, exactly two hex digits are required.\n\n3. In a bytes literal, hexadecimal and octal escapes denote the\n   byte with the given value. In a string literal, these escapes\n   denote a Unicode character with the given value.\n\n4. Changed in version 3.3: Support for name aliases [1] has been\n   added.\n\n5. Individual code units which form parts of a surrogate pair can\n   be encoded using this escape sequence.  Exactly four hex digits are\n   required.\n\n6. Any Unicode character can be encoded this way.  Exactly eight\n   hex digits are required.\n\nUnlike Standard C, all unrecognized escape sequences are left in the\nstring unchanged, i.e., *the backslash is left in the string*.  (This\nbehavior is useful when debugging: if an escape sequence is mistyped,\nthe resulting output is more easily recognized as broken.)  
It is also\nimportant to note that the escape sequences only recognized in string\nliterals fall into the category of unrecognized escapes for bytes\nliterals.\n\nEven in a raw string, string quotes can be escaped with a backslash,\nbut the backslash remains in the string; for example, "r"\\""" is a\nvalid string literal consisting of two characters: a backslash and a\ndouble quote; "r"\\"" is not a valid string literal (even a raw string\ncannot end in an odd number of backslashes).  Specifically, *a raw\nstring cannot end in a single backslash* (since the backslash would\nescape the following quote character).  Note also that a single\nbackslash followed by a newline is interpreted as those two characters\nas part of the string, *not* as a line continuation.\n',
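A short illustrative session showing the rules above: raw strings keep the backslash, unrecognized escapes are left in the string unchanged, and string-only escapes such as "\N{...}" are not special in bytes literals (the specific literals are chosen only for the example):

   >>> len('\n'), len(r'\n')        # recognized escape vs. raw string
   (1, 2)
   >>> '\x41', '\101'               # hex and octal escapes for 'A'
   ('A', 'A')
   >>> '\d'                         # unrecognized escape: the backslash is kept
   '\\d'
   >>> b'\N{BULLET}'                # "\N{...}" is only special in str literals
   b'\\N{BULLET}'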
  'subscriptions': '\nSubscriptions\n*************\n\nA subscription selects an item of a sequence (string, tuple or list)\nor mapping (dictionary) object:\n\n   subscription ::= primary "[" expression_list "]"\n\nThe primary must evaluate to an object that supports subscription,\ne.g. a list or dictionary.  User-defined objects can support\nsubscription by defining a "__getitem__()" method.\n\nFor built-in objects, there are two types of objects that support\nsubscription:\n\nIf the primary is a mapping, the expression list must evaluate to an\nobject whose value is one of the keys of the mapping, and the\nsubscription selects the value in the mapping that corresponds to that\nkey.  (The expression list is a tuple except if it has exactly one\nitem.)\n\nIf the primary is a sequence, the expression (list) must evaluate to\nan integer or a slice (as discussed in the following section).\n\nThe formal syntax makes no special provision for negative indices in\nsequences; however, built-in sequences all provide a "__getitem__()"\nmethod that interprets negative indices by adding the length of the\nsequence to the index (so that "x[-1]" selects the last item of "x").\nThe resulting value must be a nonnegative integer less than the number\nof items in the sequence, and the subscription selects the item whose\nindex is that value (counting from zero). Since the support for\nnegative indices and slicing occurs in the object\'s "__getitem__()"\nmethod, subclasses overriding this method will need to explicitly add\nthat support.\n\nA string\'s items are characters.  A character is not a separate data\ntype but a string of exactly one character.\n',
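Two small illustrative examples of subscription (the dictionary contents and the class "Register" are invented for the sketch). The first shows that a comma-separated expression list used as a key forms a tuple; the second shows a user-defined class supporting subscription, including negative indexes, which it must handle itself, by defining "__getitem__()":

   >>> d = {}
   >>> d[1, 2] = 'north-east'       # the key is the tuple (1, 2)
   >>> d[1, 2]
   'north-east'

   class Register:
       def __init__(self, values):
           self._values = list(values)

       def __getitem__(self, index):
           # Built-in sequences add len(self) to negative indexes themselves;
           # a user-defined class has to do so explicitly.
           if isinstance(index, int) and index < 0:
               index += len(self._values)
           return self._values[index]

   >>> Register('abc')[0], Register('abc')[-1]
   ('a', 'c')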

-- 
Repository URL: http://hg.python.org/cpython

